Perspective: Computer Science

Mastering board games


Science  07 Dec 2018:
Vol. 362, Issue 6419, pp. 1118
DOI: 10.1126/science.aav1175



From the earliest days of the computer era, games have been considered important vehicles for research in artificial intelligence (AI) (1). Game environments simplify many aspects of real-world problems yet retain sufficient complexity to challenge humans and machines alike. Most programs for playing classic board games have been largely human-engineered (2, 3). Sophisticated search methods, complex evaluation functions, and a variety of game-specific tricks have allowed programs to surpass the best human players. More recently, a learning approach achieved superhuman performance in the hardest of the classic games, Go (4), but it was specific to that game and took advantage of human-derived, game-specific knowledge. Subsequent work (5) removed the need for human knowledge, and additional algorithmic enhancements delivered further performance improvements. On page 1140 of this issue, Silver et al. (6) show that a generalization of this approach is effective across a variety of games. Their AlphaZero system learned to play three challenging games (chess, shogi, and Go) at the highest levels of play yet seen.
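To make the "search methods plus evaluation functions" recipe of the human-engineered programs concrete, the sketch below shows minimax search with alpha-beta pruning over a toy game tree. This is an illustrative example, not code from the article: the tree, the scores, and the `evaluate` and `children` callbacks are all hypothetical stand-ins for a real engine's move generator and hand-tuned evaluation.

```python
# Illustrative sketch only: minimax with alpha-beta pruning, the
# classic search backbone of human-engineered game programs. The
# game tree and leaf scores below are made up for demonstration.

def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    """Return the minimax value of `node`, pruning branches that
    cannot change the final result."""
    kids = children(node)
    if depth == 0 or not kids:
        # At the search horizon (or a terminal node), fall back on
        # the static evaluation function.
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this line
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, evaluate, children))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: we will avoid this line
        return value

# Toy two-ply game tree; leaves carry hypothetical evaluation scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: scores.get(n, 0),
                 lambda n: tree.get(n, []))
print(best)  # minimax value of the root
```

In this toy tree the minimizing opponent holds branch "a" to 3 and branch "b" to 2, so the maximizer's best value is 3; note that leaf "b2" is never evaluated, because the cutoff recognizes that branch "b" cannot beat 3. A learning system like AlphaZero replaces the hand-crafted `evaluate` with a neural network trained by self-play.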