Software players

Go poses a daunting challenge to computer programmers. While the strongest computer chess programs can defeat the best human players (for example, the Deep Fritz program, running on a laptop, beat reigning world champion Vladimir Kramnik in 2006 without losing a single game), the best Go programs only manage to reach an intermediate amateur level. On the small 9×9 board, the computer fares better, and some programs have reached a strong amateur level. Human players generally achieve an intermediate amateur level by studying and playing regularly for a few years. Many in the field of artificial intelligence consider Go to require more elements that mimic human thought than chess does.

A finished beginner's game on a 13×13 board. Go software can reach stronger levels on a smaller board size.

The difficulty computer programs have in playing Go well is attributed to many qualities of the game, including:

  • The number of spaces on the board is much larger (361 points versus 64 squares on a chess board, more than five times as many), and on most turns there are many more possible moves in Go than in chess. Throughout most of the game, the number of legal moves stays at around 150 to 250 per turn, and rarely drops below 50 (in chess, the average number of moves is 37). Because an exhaustive computer program for Go must calculate and compare every possible legal move in each ply (player turn), its ability to work out favorable lines of play is sharply reduced when there are a large number of possible moves. Most computer game algorithms, such as those for chess, compute several moves in advance. Given an average of 200 available moves through most of the game, for a computer to calculate its next move by exhaustively anticipating the next four moves of each possible play (two of its own and two of its opponent's), it would have to consider more than 320 billion (3.2 × 10^11) possible combinations. Exhaustively calculating the next eight moves would require considering 512 quintillion (5.12 × 10^20) combinations. As of June 2008, the most powerful supercomputer in the world, IBM's "Roadrunner" distributed cluster, could sustain 1.02 petaflops. Even given an exceedingly low estimate of 10 floating-point operations to assess the value of one play of a stone, Roadrunner would need roughly 1,400 hours, nearly two months, to assess all possible combinations of the next eight moves before making a single play (a short calculation reproducing these figures appears after this list).
  • Unlike in chess and Reversi, the placement of a single stone in the initial phase can affect the play of the game hundreds of moves later. For a computer to have a real advantage over a human, it would have to predict this long-range influence, and, as the calculation above shows, attempting to exhaustively analyze the next hundred moves to predict what a stone's placement will do is completely unworkable.
  • In capture-based games (such as chess), a position can often be evaluated relatively easily, for example by calculating who has a material advantage or more active pieces. In Go, there is often no easy way to evaluate a position. The number of stones on the board (material advantage) is only a weak indicator of the strength of a position, and a territorial advantage (more empty points surrounded) for one player might be compensated by the opponent's strong positions and influence all over the board (a minimal illustration of this evaluation problem follows the list).
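
As a rough check on the arithmetic in the first point, the following Python sketch reproduces the figures given above. The 200-move branching factor, the 10-operation evaluation cost, and Roadrunner's 1.02-petaflop rate all come from the text; the constant and function names are our own illustrative choices.

    # Back-of-the-envelope cost of exhaustive look-ahead in Go, using the
    # 200-move branching factor assumed in the text above.
    BRANCHING_FACTOR = 200       # average legal moves per turn
    OPS_PER_EVAL = 10            # deliberately low cost to assess one position
    ROADRUNNER_FLOPS = 1.02e15   # sustained rate of IBM Roadrunner, June 2008

    def combinations(moves_ahead):
        # The program's own next move plus `moves_ahead` further plies,
        # each with roughly 200 choices, multiply together.
        return BRANCHING_FACTOR ** (moves_ahead + 1)

    print(f"4 moves ahead: {combinations(4):.2e} combinations")  # 3.20e+11
    print(f"8 moves ahead: {combinations(8):.2e} combinations")  # 5.12e+20

    seconds = combinations(8) * OPS_PER_EVAL / ROADRUNNER_FLOPS
    print(f"8-move look-ahead: {seconds / 3600:,.0f} hours "
          f"({seconds / 86400:.0f} days)")    # about 1,394 hours, 58 days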

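To make the evaluation problem in the last point concrete, here is a minimal, hypothetical Python sketch of the kind of material-count heuristic that serves chess programs well. Applied to Go it is only a weak signal, as the text notes, since territory and influence dominate; the board encoding and function name are illustrative assumptions, not taken from any actual Go program.

    # A naive "material" evaluation for Go: count the stones of each colour.
    # The board encoding ('B' black, 'W' white, '.' empty) is illustrative.
    def naive_material_eval(board):
        black = sum(row.count('B') for row in board)
        white = sum(row.count('W') for row in board)
        return black - white  # positive favours black

    # A player can lead on stones yet be hopelessly behind on territory and
    # influence, so a static count like this cannot anchor a Go search the
    # way material counting anchors a chess search.
    example = ['.B.',
               'BW.',
               '..W']
    print(naive_material_eval(example))  # 2 - 2 = 0: says nothing useful
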
The first instance of a professional losing to a computer program on a 19×19 board came in August 2008. In an exhibition game during the US Go Congress, Kim Myeong-Wan, an 8-dan professional, lost to the MoGo program while giving it a 9-stone handicap. Since then, he has defeated both MoGo and the Many Faces of Go program, each time giving a 7-stone handicap.
