Policy Forum | Education

Reframing rankings in educational assessments


Science  23 Apr 2021:
Vol. 372, Issue 6540, pp. 338-340
DOI: 10.1126/science.abd3300

Summary

Educational large-scale assessments (LSAs), such as the Programme for International Student Assessment (PISA), with their widely publicized country rankings, are central sources of information used in policy interventions in education. Their main aim is to assess competencies needed to meet real-life challenges (1). Current reporting practices, however, confound differences in test-taking behavior (such as working speed and item nonresponse) with differences in competencies (ability). Furthermore, they do so in a different way for different examinees, threatening the fairness of comparisons, such as country rankings (2, 3). We argue that test-taking behavior is not a nuisance factor that may confound measurement, but an aspect that provides important information on how examinees approach tasks, which is relevant for real-life outcomes. Disentangling and reporting all of these factors as part of a portfolio of performance could result in fairer comparisons across groups and also allow for a better understanding and valid assessment of competencies, as well as for more tailored interventions, targeting possible causes of low performance.
