In Depth | Computer Science

Has artificial intelligence become alchemy?

Science  04 May 2018:
Vol. 360, Issue 6388, pp. 478
DOI: 10.1126/science.360.6388.478

Summary

Ali Rahimi, an artificial intelligence (AI) researcher at Google in San Francisco, California, has charged that machine learning algorithms, in which computers learn through trial and error, have become a form of "alchemy." Researchers, he says, do not know why some algorithms work and others don't, nor do they have rigorous criteria for choosing one AI architecture over another. Now, in a paper presented on 30 April at the International Conference on Learning Representations in Vancouver, Canada, Rahimi and his collaborators document examples of what they see as the alchemy problem and offer prescriptions for bolstering AI's rigor. The issue is distinct from AI's reproducibility problem, in which researchers cannot replicate each other's results because of inconsistent experimental and publication practices. It also differs from the "black box" or "interpretability" problem in machine learning: the difficulty of explaining how a particular AI has come to its conclusions.