Policy Forum | Education

Testing international education assessments


Science  06 Apr 2018:
Vol. 360, Issue 6384, pp. 38-40
DOI: 10.1126/science.aar4952


Summary

News stories on international large-scale education assessments (ILSAs) tend to highlight the performance of the media outlet's home country in comparison with the highest-scoring nations (in recent years, typically located in East Asia). Low (or declining) rankings can be so alarming that policy-makers leap to remedies—often ill-founded—on the basis of what they conclude is the "secret sauce" behind the top performers' scores. As statisticians studying the methods and policy uses of ILSAs (1), we believe the obsession with rankings—and the inevitable attempts to mimic specific features of the top-performing systems—not only misleads but also diverts attention from more constructive uses of ILSA data. We highlight below the perils of drawing strong policy inferences from such highly aggregated data, illustrate the benefits of conducting more nuanced analyses of ILSA data both within and across countries, and offer concrete suggestions for improving future ILSAs.