In a feat that seemed impossible just a few years ago, a computer has beaten a Go champion. Computer scientists at Google’s DeepMind division in the UK achieved this milestone, with their artificial intelligence (AI) defeating a human professional.
Ancient game, new players
Go is an ancient game, invented in China over 2,500 years ago. Deceptively simple in appearance, Go is actually an incredibly complex game. It is played by two players, one with black stones and the other with white, and the goal is to surround more territory than the opponent. There is a great amount of theory and strategy involved, and the total number of possible games of Go is estimated at 10^761, compared, for example, to the estimated 10^120 possible in chess. In other words, Go is vastly more complex than chess.
“The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves,” the DeepMind researchers write in their study.
For even more perspective, the total number of atoms in the observable universe is estimated at around 10^80. Needless to say, trying to crack Go with a computer the way chess was cracked is not going to work.
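To see where numbers like these come from, the back-of-the-envelope arithmetic is easy to reproduce: a game with roughly b legal moves per turn, lasting d turns, allows on the order of b^d distinct games. The short Python sketch below uses commonly quoted ballpark figures (around 250 moves per turn over roughly 150 turns for Go, around 35 over roughly 80 turns for chess); these are illustrative assumptions rather than figures from the DeepMind paper, and more generous assumptions about game length push the Go estimate toward the 10^761 quoted above.

import math

# Rough game-tree size: with b legal moves per turn over d turns,
# there are on the order of b**d distinct games. We report the
# base-10 exponent, i.e. x such that b**d is about 10**x.
def game_count_exponent(branching_factor: float, game_length: int) -> float:
    return game_length * math.log10(branching_factor)

# Ballpark assumptions: Go ~250 moves per turn over ~150 turns,
# chess ~35 moves per turn over ~80 turns.
print(f"Go:    about 10^{game_count_exponent(250, 150):.0f} games")
print(f"Chess: about 10^{game_count_exponent(35, 80):.0f} games")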
“Traditional AI methods – which construct a search tree over all possible positions – don’t have a chance in Go,” writes DeepMind founder Demis Hassabis in a Google blog post. “So when we set out to crack Go, we took a different approach.”
Their approach was to build a system that combines deep neural networks with an advanced tree search. Their system, called AlphaGo, learned from some 30 million moves in games played by human experts, to the point where it could predict the human expert’s next move 57 percent of the time, beating the previous record of 44 percent.
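To make the idea of a move-predicting neural network concrete, here is a heavily simplified sketch in Python (using PyTorch). It is not DeepMind’s code: the tiny convolutional network, the two input feature planes and the random stand-in training data are all assumptions for illustration. The real policy network was much deeper and was trained on the 30 million expert positions mentioned above, which is how it reached the 57 percent prediction accuracy.

import torch
import torch.nn as nn

BOARD = 19  # standard Go board size

class PolicyNet(nn.Module):
    """Maps a board position to scores over the 19 x 19 = 361 points."""
    def __init__(self, channels: int = 32):
        super().__init__()
        # Two input planes: own stones and opponent stones (real systems use many more).
        self.body = nn.Sequential(
            nn.Conv2d(2, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=1),
        )

    def forward(self, boards: torch.Tensor) -> torch.Tensor:
        return self.body(boards).flatten(1)  # one logit per board point

net = PolicyNet()
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in data: random "positions" and random "expert moves".
positions = torch.randn(64, 2, BOARD, BOARD)
expert_moves = torch.randint(0, BOARD * BOARD, (64,))

for _ in range(10):                                    # a few supervised learning steps
    loss = loss_fn(net(positions), expert_moves)       # imitate the expert's choice
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# Move-prediction accuracy on this toy batch (analogue of the 57 percent figure).
accuracy = (net(positions).argmax(dim=1) == expert_moves).float().mean().item()
print(f"toy prediction accuracy: {accuracy:.2f}")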
The next step was to have AlphaGo play against itself, refining its strategies through trial and error over thousands upon thousands of games (a rough sketch of this self-play loop follows below). After that, they had it play against other Go AIs, and it pretty much smashed the competition. But the first real test came against the reigning three-time European Go champion, Fan Hui. This is where it gets interesting.
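The self-play stage can be sketched just as loosely. The toy “game” below has nothing to do with Go, and the update rule is a bare-bones nudge toward winning choices rather than the reinforcement learning DeepMind actually used, but it shows the shape of the loop: the program plays copies of itself over and over and shifts probability toward the moves that led to wins.

import random

# Toy "policy": a preference weight for each of three possible moves.
weights = [1.0, 1.0, 1.0]

def choose_move(w):
    """Sample a move index with probability proportional to its weight."""
    return random.choices(range(len(w)), weights=w, k=1)[0]

def self_play_game():
    """Two copies of the same policy each pick a move; the higher move wins."""
    a, b = choose_move(weights), choose_move(weights)
    return max(a, b)

for _ in range(5000):                # thousands of games against itself
    winning_move = self_play_game()
    weights[winning_move] += 0.01    # reinforce whatever led to a win

print("learned move preferences:", [round(w, 2) for w in weights])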
Computer vs Human
In 2014, less than two years ago, Wired ran an engaging article about Go, calling it “the mysterious game that computers can’t win”. They estimated that it would take another 10 years before computers could beat humans at this strategy game, and at the time it seemed like a reasonable claim. Computers were already beating humans at checkers in 1994, and in 1997 IBM’s Deep Blue famously beat world chess champion Garry Kasparov, but Go stood strong. Until now.
The computer won all five games, not losing a single one. It played at least as well as its opponent, and it didn’t make any mistakes.
“The problem is humans sometimes make very big mistakes, because we are human. Sometimes we are tired, sometimes we so want to win the game, we have this pressure,” Fan told Elizabeth Gibney at Nature, describing the match. “The programme is not like this. It’s very strong and stable, it seems like a wall. For me this is a big difference. I know AlphaGo is a computer, but if no one told me, maybe I would think the player was a little strange, but a very strong player, a real person.”
Now, there’s only one challenge left for the Go AI – to play against South Korea’s Lee Sedol, considered the best Go player in the world. Whether or not it defeats him, AlphaGo has made its point: Go is still a game of finite possibilities, and computers will overcome humans at it sooner rather than later. But for its creators, what matters is not so much the competitive achievements as the way in which AlphaGo learned and became so good at the game.
“We are thrilled to have mastered Go and thus achieved one of the grand challenges of AI,” writes Hassabis. “However, the most significant aspect of all this for us is that AlphaGo isn’t just an ‘expert’ system built with hand-crafted rules; instead it uses general machine learning techniques to figure out for itself how to win at Go. While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems.”
Journal Reference: Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature (2016).