Evaluating Computer Scoring

Science  29 Nov 2013:
Vol. 342, Issue 6162, pp. 1021
DOI: 10.1126/science.342.6162.1021-b

The preferred way to evaluate science students' argumentation and communication skills is through written essays and oral interviews. Because these assessments are time-consuming for teachers to score, automated scoring systems are being developed. Beggrow et al. tested 104 undergraduate students who had received varying amounts of instruction in biological evolution, using three types of assessment: an oral interview with two researchers; a written, open-response assessment scored by both a human and a computer; and a multiple-choice test scored by a computer. Regression and correlation analyses showed that the multiple-choice results correlated most weakly with the interview results, whereas the computer-scored written assessment correlated most strongly with them. This suggests that multiple-choice tests are not the best way to evaluate students' argumentation and communication skills and should be replaced with computer-scored short-answer essays.
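The study's central comparison rests on how strongly each assessment's scores correlate with the oral-interview scores. A minimal sketch of such a comparison using the Pearson correlation coefficient is below; the scores are hypothetical, illustrative values, not the study's data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five students (NOT the study's data):
interview       = [3.0, 4.5, 2.0, 5.0, 3.5]  # oral interview ratings
written_scored  = [3.2, 4.4, 2.1, 4.8, 3.6]  # computer-scored written assessment
multiple_choice = [4.0, 3.0, 3.5, 4.5, 2.5]  # multiple-choice test scores

r_written = pearson_r(interview, written_scored)
r_mc = pearson_r(interview, multiple_choice)
# In this illustration the written assessment tracks the interview far more
# closely than the multiple-choice test, mirroring the reported pattern.
print(r_written > r_mc)
```

In the actual study, a higher correlation with the interview is taken as evidence that an assessment better captures the same argumentation skills the interview measures.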

J. Sci. Educ. Technol. 10.1007/s10956-013-9461-9 (2013).
