“Test providers should carefully research the types of tasks students need to perform and replicate such tasks as accurately as possible.”
English-language classrooms around the world all look and function differently, writes Spiros Papageorgiou of ETS. But one common thread among them is that teachers of English consistently look for reliable information from language tests to assess whether their students have achieved specific learning goals.
Students who wish to pursue a degree in an English-speaking country are typically asked to submit scores from a language test to demonstrate that they can meet the language demands of their university classes.
By Spiros Papageorgiou
Developers of these language tests should therefore establish a rigorous program of research to support the tests' intended uses and to provide evidence substantiating claims about what test takers know or can do based on their scores.
It is critical for English-language assessment providers to organise and execute a program of research that drives all stages of test development, score interpretation and score use. Such a research program can be based on an argument-based approach to test validation, and it can be thought of as a series of questions that every test provider should be able to answer with concrete evidence:
- Is test content representative of authentic language use?
- Does the test measure the language ability of interest?
- Are test scores consistent across forms and dates?
- Are the test scores a meaningful indicator of language ability?
- Do scores predict performance when completing authentic language tasks?
- Are scores useful for decision making and does their use bring positive consequences?
Test providers should collect empirical evidence to support any claim they make about the use of a test and the interpretation of its scores. For a language proficiency test, this evidence should cover all aspects of test development and score interpretation and use, from the rationale behind the test design to all aspects of psychometric quality, as well as the relationship between test performance and authentic use of language.
The use of English-language proficiency tests for university admission of international students illustrates the importance of collecting evidence to support all aspects of test development and score use.
Assessing English-language skills for academic purposes is a complex endeavour because English-language proficiency is only one key element in university admission, alongside other factors such as subject-related knowledge and non-cognitive attributes like motivation and persistence.
International students who are unprepared to cope with the English-language demands of instruction will very likely face difficulties in succeeding academically, no matter how well they know their subject area or how motivated they are.
To help university staff make admissions decisions about students’ academic language proficiency, a language test should contain tasks that reflect the real-life tasks students need to complete when using English on campus.
In other words, an academic English test should evaluate students’ ability to comprehend academic content, both oral and written, and perform the speaking and writing tasks typically expected in an academic context (such as synthesising information and presenting arguments).
Test providers should carefully research the types of tasks students need to perform on campus and replicate such tasks as accurately as possible in a test.
Language proficiency tests that do not contain tasks representative of real-life academic language, and instead only evaluate rudimentary English-language skills through inauthentic tasks, are unlikely to yield scores that offer useful information for admissions decisions.
Inappropriate test design is also likely to encourage ineffective learning practices, such as memorising lists of words, giving students no motivation to learn English as it is used in the academic context.
Because of the consequences of decisions that are made based on information from language test scores, it is important that these scores can be trusted. Providers of such tests have the responsibility to establish a comprehensive and multifaceted research program supporting all claims about test design and score use.
Language test providers should also be transparent about the results of such research, so that institutions can make informed admissions decisions to set their students on a path for academic success.
Spiros Papageorgiou is a managing senior research scientist in Research & Development at ETS. He holds a PhD from Lancaster University and has been researching the assessment of English as a foreign language (EFL) for over a decade.
This article first appeared at https://blog.thepienews.com/2020/06/ets-english-testing/ and is reprinted with permission.