Friday 23 October 2020, 15:00 to 16:30 SGT Singapore time (UTC+8)

This event will be held online.

The use and implications of technology in language testing have been a focus of research, debate, and discussion over the last three decades. The 1990s and early 2000s saw a move to computer-based testing, in which the focus of validation and validity evidence was often on demonstrating that computer-mediated or computer-delivered assessments could accurately measure the same constructs as "traditional" face-to-face and paper-based methods. The first two decades of the 21st century saw an increasingly strong presence of automated scoring in language testing for complex, constructed responses in speaking and writing tests, and the use of AI to enhance the capability of automated scoring models. The focus, however, was still often on how closely these scoring models matched human-mediated alternatives.

These changes in the technological landscape of language testing have, of course, occurred against the background of larger changes, including the rapid growth of international mobility in both education and employment, the use of English as a global lingua franca, and the accompanying increase in the use of large-scale, standardized assessments (particularly of English) as important gate-keepers and decision-making tools in these processes. This year (2020) has seen all of these influences come crashing together under the impact of the global Covid-19 pandemic. The pandemic has rapidly accelerated changes already in progress in the use of technology in teaching, learning, and assessment, but it has also dramatically impacted, and changed the course of, the contexts within which these developments have taken place, for example the global mobility paradigm as a driver of developments in large-scale testing.
Importantly, the use of technology has now assumed a much larger role in defining the construct of communication itself in the target language use (TLU) domains our tests hope to replicate, and these changes promise to persist after the immediate impact of global lockdowns passes. This symposium brings together a number of first-hand case studies on research into, and validation of, technology in language assessment, each with a direct connection to the dramatic changes of the first half of 2020. What have we learnt? What are the implications not just for the short term but for the longer-term validity of our approaches to test development? What has changed, and will continue to change, in our understanding of communication in whatever TLU domain we are targeting? And how do we validate our approaches to assessment in this dynamic context?

Format: main presentations (15 minutes each); discussant (15 minutes); open discussion (15 minutes)

Recent research into and application of technology in language assessment: validity in a dynamically changing assessment context

Moderator: Guoxing Yu (University of Bristol)



Working titles of papers (TBC)

Benjamin Kremmel & Daniel Isbell

Innsbruck University / University of Hawaii

Survey of remote invigilation options for large-scale standardized language tests

Trevor Breakspear

British Council

Framing a validation agenda for the development of an automated rating application for speaking tests

Mina Patel

British Council

From research to application: developing a video-conferencing system for testing speaking

Vahid Aryadoust

NIE, Singapore

Beyond the traditional: A review of four cutting-edge technologies for test validity research

Guoxing Yu

University of Bristol

Discussant (validation from the perspective of a changing construct and the place of technology)