ND2020: Plenary Sessions Abstracts
Validity: An integrated arguments approach
In this plenary, we focus on a perennial topic of discussion and controversy in language assessment and broader educational assessment: validity. We examine the need to consider other critical sources of quality evidence in addition to the psychometric evidence that continues to prevail, especially in the US.
We call attention to a number of key issues in determining what might constitute best practice in test development and validation. The first of these is principled design and the evidence accumulated from the various departments/groups involved in the test design and development process. This calls for meaningful operational definitions of the target language as represented in models of language progression or communication. In addition, we argue for a functioning underlying measurement model, supported, but not driven, by sound psychometric principles.

The second issue is that of impact by design, which places consequences at the top of the evidence chain in guiding all testing efforts and quality documentation. We argue that validity evidence should move on from its current focus on a narrow expert audience and attend to consequences as they relate to all key stakeholders (i.e. at the individual, aggregate/group, and larger educational/organisational/societal levels). This requires consideration at all stages of the development process, from clearly defined theories of change and action (what do I want to happen, and how can I make it happen?) to targeted communications designed to engage all key stakeholder groups.
We conclude by arguing that this integrated approach can yield a convincing validity argument.
Metaphors we test by: Communicative concepts that connect language assessment with contexts of use
Exactly forty years ago, an influential volume in the field of applied linguistics explored the role of metaphor in language and mind (Lakoff and Johnson, 1980). The authors explained how metaphors (e.g. orientational, structural, etc.) allow us to use what we know about our physical and social experience to help understand countless other subjects, in turn shaping our perceptions and actions. Since 1980, theory of metaphor has developed within the cognitive sciences to become central to contemporary understanding of how we think and express our thoughts in language.
The field of language testing and assessment (LTA) has grown considerably over this same 40-year period and has espoused various metaphors that promote thinking and understanding about assessment. Mislevy (2012), for example, highlights several key metaphors, e.g. ‘assessment as feedback loop’ and ‘assessment as evidentiary argument’, each offering a set of concepts, relationships, processes and actors for thinking about real-world situations, especially those involving assessment policy, practice and reform.
This presentation will consider how metaphor helps to foster shared understanding within and across our LTA field, mitigating divides and supporting transitions in a rapidly changing context. We shall examine some of the most useful metaphors for communicating LTA theory and practice to stakeholders. We shall also explore how certain metaphors commonly used today may be less helpful for various reasons, and we shall consider some alternatives that may offer us fresh insights and understanding. As the 2020 pandemic requires us to rethink how we do language testing and assessment for the future, so we may also need to reassess some of the ways we currently think about and understand aspects of assessment.
References:
Lakoff, G. and Johnson, M. (1980/2003) Metaphors We Live By. Chicago: University of Chicago Press Ltd.
Mislevy, R. (2012) Four Metaphors We Need To Understand Assessment. Paper prepared for the Gordon Commission on the Future of Assessment in K-12 Education. http://www.gordoncommission.org
Validity in classroom-based formative assessment
Formative assessment has been gaining currency in language education across the policy, practice, and research domains. Issues concerning the qualities of formative assessment, however, especially its validity, have not received enough attention. This presentation aims to outline major issues of validity in classroom-based formative assessment. I will first trace the development of the validity concept, introducing the facets of validity and the unitary conception of construct validity. Next, I will present my definition and operationalisation framework of formative assessment. I will argue that validity, as an educational measurement concept, applies to formative assessment as well, and I will use illustrative examples to show that invalid interpretations and uses of classroom-based formative assessment results fail to achieve formative effects. It is hoped that this conceptualisation and operationalisation of formative assessment, and the discussion of its validity, will be useful to researchers and teachers alike.