American Go E-Journal

AlphaGo, KataGo, and the future of AI

Saturday June 6, 2020

Visualization of ownership predictions by KataGo

“There’s something magical about the game of go,” writes Branton DeMoss in a recent blog post. “For thousands of years, it has captured the imagination of those who want to learn what it is to learn, to think about what thinking means. With the recent advent of strong, open source go AI that can beat top professionals, it’s worth tracing the history of the game, why it remained so difficult to beat humans for so long, and what the future of go may hold.”

DeMoss explores the evolution of computer go, and then discusses how AlphaGo differs from the open-source KataGo. “KataGo attempts to predict a greater number of game outcomes than just value,” says DeMoss. “In particular, KataGo also predicts final territory control, final score difference, and from each board state the opponent’s next move. As a result of these improvements, KataGo massively outperforms Leela Zero and Facebook’s ELF bot in learning efficiency. KataGo achieves a factor of fifty improvement in training efficiency vs. ELF.”
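The auxiliary predictions DeMoss describes can be pictured as multiple output heads sharing one network trunk. Below is a minimal illustrative sketch in Python with random weights standing in for a trained network; the names (`predict`, `W_trunk`, etc.) and the tiny linear "heads" are assumptions for illustration, not KataGo's actual architecture or API.

```python
import numpy as np

BOARD, FEATS = 19, 64
rng = np.random.default_rng(0)

# Random weights stand in for trained parameters (illustrative only).
W_trunk = rng.standard_normal((BOARD * BOARD, FEATS)) * 0.05
W_policy = rng.standard_normal((FEATS, BOARD * BOARD)) * 0.05
W_value = rng.standard_normal(FEATS) * 0.05
W_own = rng.standard_normal((FEATS, BOARD * BOARD)) * 0.05
W_score = rng.standard_normal(FEATS) * 0.05

def predict(board):
    """One shared trunk feeds several prediction heads, as in KataGo's design."""
    f = np.tanh(board.reshape(-1) @ W_trunk)  # shared features
    logits = f @ W_policy
    policy = np.exp(logits - logits.max())
    policy /= policy.sum()
    return {
        "policy": policy,                                      # next-move distribution
        "value": float(np.tanh(f @ W_value)),                  # game-outcome estimate in (-1, 1)
        "ownership": np.tanh(f @ W_own).reshape(BOARD, BOARD), # per-point territory control
        "score": float(f @ W_score),                           # expected final score difference
    }

out = predict(rng.standard_normal((BOARD, BOARD)))
```

Because every head is trained against the same trunk features, the extra targets (ownership, score) give the network richer training signal per game, which is the intuition behind the efficiency gains DeMoss cites.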

The creator of KataGo, David J. Wu, answers some of DeMoss’s questions at the end of the article. “I think the AlphaZero-style training loop using MCTS (Monte Carlo Tree Search) is not the last word on [things like] this,” says Wu. “Blind spots are just the most visible of the flaws, but there are some technical and theoretical details you can dig into that start to make it clear that there are some practical problems with how exploration and move discovery work in this loop, some basic theoretical flaws involving mismatches between the neural net’s training distribution and usage, and also some fundamental ‘missing’ capabilities in current bots in terms of using search effectively.” The full blog post can be read here. -Story by Paul Barchilon. Image from “Accelerating Self-Play Learning in Go,” by David J. Wu.
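The exploration behavior Wu critiques comes from how AlphaZero-style MCTS selects moves: each child is scored by its observed value plus an exploration bonus weighted by the network's prior. A minimal sketch of that selection rule (the PUCT formula), with illustrative function and parameter names:

```python
import math

def puct_select(priors, visit_counts, mean_values, c_puct=1.5):
    """Pick the child maximizing the PUCT score used in AlphaZero-style MCTS.

    score(a) = Q(a) + c_puct * P(a) * sqrt(sum_b N(b)) / (1 + N(a))

    Q is the mean value from search, P the network's prior, N the visit count.
    """
    total_visits = sum(visit_counts)
    best_move, best_score = None, -math.inf
    for move, (p, n, q) in enumerate(zip(priors, visit_counts, mean_values)):
        exploration = c_puct * p * math.sqrt(total_visits) / (1 + n)
        score = q + exploration
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# An unvisited move with a decent prior beats a heavily visited mediocre one,
# so moves the net assigns low prior probability are rarely explored -- one
# source of the "blind spots" Wu mentions.
move = puct_select(priors=[0.6, 0.3, 0.1],
                   visit_counts=[10, 0, 0],
                   mean_values=[0.1, 0.0, 0.0])
```

Since the exploration term scales with the prior P(a), a move the network never proposes gets almost no search budget, which is one concrete way the training distribution and search can fail to correct each other.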