
Technology, values, ethics and consequences: From innovation to impact in language assessment

The Covid-19 pandemic has had a dramatic impact on language assessment practices worldwide. Throughout 2020, language teachers switched to online delivery and testing organisations rushed to provide digital solutions to practical problems raised by lockdowns and social distancing measures (see Isbell & Kremmel, 2020). The pandemic, however, only accelerated a process that has been developing for many years: the gradual shift of language assessment practices to a digital space. There are numerous potential advantages of this shift online: cheaper and more accessible language tests for candidates, more consistent and reliable scoring systems, and quicker turnaround of score reports. However, this move towards greater integration of digital technology in language assessment brings with it the prospect of a more rapid diffusion of unintended consequences and creates new challenges with respect to ethical test use. In this talk, I will explore these issues, focusing particularly on the challenges involved in capturing the complex constructs of lingua franca communication, and the increasing salience of security technologies in language test delivery. I will argue that ethical language assessment will require a dedicated orientation towards responsible, multi-disciplinary innovation and that the rapid pace of change should be accompanied by an urgent research agenda focused on understanding the nature and scope of test impact in the digital age. 

Validity: An integrated arguments approach

In this plenary, we focus on validity, a perennial topic of discussion and controversy in language assessment and in educational assessment more broadly. In doing so, we examine the need to consider other critical sources of quality evidence in addition to psychometric evidence, which continues to prevail, especially in the US.

We call attention to a number of key issues in our determination of what might constitute best practice in test development and validation. The first of these is principled design and the evidence accumulated from the various departments/groups involved in the test design and development process. This calls for meaningful operational definitions of the target language as represented in models of language progression or communication. In addition, we argue for a functioning underlying measurement model, supported, but not driven, by sound psychometric principles. The second issue is that of impact by design, which places consequences at the top of the evidence chain in guiding all testing efforts and quality documentation. We argue that validity evidence should move on from its current focus on a narrow expert audience and attend to consequences as they relate to all key stakeholders (i.e. at the individual, aggregate/group, and larger educational/organisational/societal levels). This requires consideration at all stages of the development process, from clearly defined theories of change and action (what do I want to happen, and how can I make it happen?) to targeted communications designed to engage all key stakeholder groups.

We conclude by arguing that this integrated approach can yield a convincing validity argument. 

Metaphors we test by: Communicative concepts that connect language assessment with contexts of use

Exactly forty years ago, an influential volume in the field of applied linguistics explored the role of metaphor in language and mind (Lakoff and Johnson, 1980). The authors explained how metaphors (e.g. orientational, structural) allow us to use what we know about our physical and social experience to help understand countless other subjects, in turn shaping our perceptions and actions. Since 1980, the theory of metaphor has developed within the cognitive sciences to become central to contemporary understanding of how we think and express our thoughts in language.

The field of language testing and assessment (LTA) has grown considerably over this same 40-year period and has espoused various metaphors that promote thinking and understanding about assessment. Mislevy (2012), for example, highlights several key metaphors, e.g. ‘assessment as feedback loop’ and ‘assessment as evidentiary argument’, each offering a set of concepts, relationships, processes and actors for thinking about real-world situations, especially those involving assessment policy, practice and reform. 

This presentation will consider how metaphor helps to foster shared understanding within and across our LTA field, mitigating divides and supporting transitions in a rapidly changing context. We shall examine some of the most useful metaphors for communicating LTA theory and practice to stakeholders. We shall also explore how certain metaphors commonly used today may be less helpful for various reasons, and we shall consider some alternatives that may offer us fresh insights and understanding.  As the 2020 pandemic requires us to rethink how we do language testing and assessment for the future, so we may also need to reassess some of the ways we currently think about and understand aspects of assessment.


References:
Lakoff, G. and Johnson, M. (1980/2003) Metaphors We Live By. Chicago: University of Chicago Press Ltd.
Mislevy, R. (2012) Four Metaphors We Need To Understand Assessment. Paper prepared for the Gordon Commission on the Future of Assessment in K-12 Education. http://www.gordoncommission.org

Validity in classroom-based formative assessment

Formative assessment has been gaining currency in language education across the policy, practice, and research domains. However, issues concerning the qualities of formative assessment, especially its validity, have not received sufficient attention. This presentation aims to outline the major validity issues in classroom-based formative assessment. I will first trace the development of the validity concept, introducing the facets of validity and the unitary conception of construct validity. Next, I will present my definition and operationalisation framework of formative assessment. I will argue that validity, as an educational measurement concept, applies to formative assessment as well. I will use illustrative examples to show that invalid interpretations and uses of classroom-based formative assessment results do not achieve formative effects. It is hoped that this conceptualisation and operationalisation of formative assessment, and the discussion of its validity, will be useful for researchers and teachers alike.

Integrated language assessment in a digital age: Some fundamental considerations in task design and validation

As a field of research inquiry and a method of assessing language proficiency, integrated assessment tasks have become increasingly popular among major test providers, facilitated and enhanced by the use of technology in test delivery. In this talk, we will review (a) current practices of integrated assessment in high-stakes language tests; (b) the debates on the premises, promises, problems and compromises of using integrated tasks in different assessment contexts; (c) the multidimensionality and complexity of the construct(s) of integrated assessment tasks; and (d) the opportunities and challenges in researching and validating such tasks in a digital age.

We will ask several fundamental questions to critique current research and practice in integrated assessment, such as: What counts as an integrated assessment task? What are the differential impacts of features of the source input (e.g., visuals, audio, texts, graphs, paper-based or computer-delivered material) on task performance? What are the cognitive processes and core skill(s) involved in completing integrated assessment tasks? To what extent should the assessment criteria for integrated tasks differ from those for independent tasks? What roles can technology play in designing and validating integrated assessment tasks? A systematic approach and concerted effort to address these fundamental questions will help us develop the theoretical framework(s) of integrated language assessment in a digital age.

New directions in language and assessment: Implications of India's new education policy for foundational learning

India has just launched its National Education Policy (NEP 2020). The policy outlines several fundamental structural shifts in the education system and also recommends changes in the language of instruction and in assessment practices. This talk will focus on the implications of these three strands of the new policy (structural shifts, language of instruction, and assessment) for the acquisition of foundational literacy and numeracy skills, offering a view from the ground. What opportunities does this moment bring, and what constraints does it pose, in enabling children to learn well?