RETHINKING ASSESSMENT: INNOVATION IN A CRISIS
The hiatus in face-to-face testing in most parts of the world has had a massive impact on education and learning. Alternative, interim solutions have had to be found for end-of-course and school evaluations, for formative assessments that would have been conducted in classrooms, and for admissions score requirements at institutions around the world. These moves have raised questions about, amongst other things, the validity and reliability of online tests, the fairness of alternative means of evaluating learners’ achievements, and the security of remote proctoring. How have teachers, learners, testers and stakeholders adapted? What are the opportunities and challenges, the risks and benefits? Will language assessment be affected in the long term, or are these innovations merely stop-gap measures? This sub-theme specifically invites presentations of work-in-progress research, practical applications, and innovative solutions to assessing language proficiency in these challenging times.
ALTERNATIVE APPROACHES TO ASSESSMENT IN LEARNING SYSTEMS
High-stakes tests have traditionally served as milestones in the educational careers of learners from a young age, shaping how they are taught as well as what and how they learn. As education systems adapt to produce the skills needed in a rapidly changing world, formative, continuous and other alternative methods of assessing learners’ language skills are being considered or introduced. For this strand, we invite proposals that present research or theoretical models focused on alternative approaches to testing.
CONNECTING ASSESSMENT AND REAL-LIFE LANGUAGE USE
The primary purpose of most language tests is to evaluate a learner’s ability to function in a particular target-language use situation. However, reflecting real-life communication contexts in a test environment is challenging, and test designers are limited by practical and other constraints. For this strand, we invite presentations of research into, or theoretical perspectives on, the relationship between assessment and the real-life use of language, and how this relationship might be strengthened.
ENGLISH OWNERSHIP, IDENTITY AND CONSEQUENCES FOR ASSESSMENT
It has long been recognised that English has a multifaceted, shared ownership, most often manifest in its variety of forms and the myriad speakers who leverage and shape the language to express their own experience and identity to a global audience. Given the intrinsic link between language, power and identity, the growth of Asian economies and the rising influence of Asian cultures have brought an increasing assertion of Asian Englishes as bona fide forms of the language. While the recognition of diversity within so widely used a language is to be celebrated, this multifariousness poses a particular conundrum for language assessment, where proficiency is frequently measured against a standard. This strand invites papers that explore the connection between Englishes and language testing or that present examples of how the diversity of English can be reflected in language assessment.
THE IMPACT AND CONSEQUENCES OF TECHNOLOGY
The future is here: computer-delivered tests, automated assessment of language proficiency and personalised learning apps are commonplace. The increasingly widespread use of technology for learning, teaching and testing provides rich data for exploring the effects of this trend, and rapid improvements in AI offer an opportunity to experiment with new iterations of testing tools. In this strand, we invite papers that focus on the impact and consequences of new technology for language assessment.
A TEACHING PERSPECTIVE: QUALITY ASSESSMENT AND ITS IMPACT IN THE CLASSROOM
At the centre of education and assessment lies the learner. While discussions at the theoretical and policy levels are important, consequences are felt in the classroom, and that is where real change emerges. In this sub-theme, we invite teachers and practitioners to present on the practical implications of testing for teaching. In particular, action research and in-practice examples of overcoming the challenges of test washback, or of leveraging it positively, are welcomed.