In 2016, long before ChatGPT and DALL-E flooded the internet with AI-generated content, Google’s DeepMind division unveiled a program that left the board game world speechless. Called AlphaGo, it shocked observers by beating Lee Sedol, one of the strongest players in the history of Go, an ancient board game that is orders of magnitude more complex than chess.
A follow-up version known as Master then won 60 straight online games against top professionals, and in 2017 DeepMind released AlphaGo Zero, which went even further: it taught itself the game from scratch through pure self-play, without studying the strategies of human masters as initial input.
So it seems yet another game has been conquered through sheer computing power, just as computer programs have held the upper hand in chess ever since IBM’s Deep Blue defeated Garry Kasparov in 1997.
But not all human players are ready to throw in the towel.
AI has grown strong, but humans are adapting too
If you haven’t heard of Go, you’re not alone. Played mostly in China, Korea, and Japan, the 2,500-year-old board game is contested on a grid of 19 by 19 lines, creating 361 intersections. The two players take turns placing black and white stones, called “goishi,” on the intersections, aiming to surround and capture their opponent’s stones while claiming as much territory as possible for themselves.
As the game progresses, players must navigate complex strategies, weighing the value of each move and anticipating their opponent’s responses. One of the most fascinating aspects of Go is the concept of “joseki”: standard sequences of moves, refined over centuries of play, whose outcome is considered roughly even for both black and white. These joseki give players a framework for developing their own strategies and responding to their opponent’s moves.
But perhaps the most awe-inspiring aspect of Go is its sheer complexity. With far more legal board positions (roughly 10^170) than there are atoms in the observable universe (on the order of 10^80), Go offers a near-infinite realm of possibilities for exploration and discovery.
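To get a feel for that scale, here is a quick back-of-the-envelope sketch in Python. It only counts raw board configurations (each intersection empty, black, or white), which overstates the number of legal positions, but the comparison with the atom estimate holds either way; the 10^80 atom figure is itself just a rough estimate.

```python
import math

# Each of the 361 intersections can be empty, black, or white,
# so an upper bound on the number of board configurations is 3^361.
log10_positions = 361 * math.log10(3)   # ≈ 172
log10_atoms = 80                        # rough estimate for the observable universe

print(f"3^361 is about 10^{log10_positions:.0f}")
print(f"That is roughly 10^{log10_positions - log10_atoms:.0f} times the estimated number of atoms")
```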
In the aftermath of this string of humiliating defeats, many wondered whether there was any room left for human players at the top of Go. Among them was Minkyu Shin at the City University of Hong Kong, who led a new study that used the same kind of superhuman Go engines — the programs that routinely crush human players — to measure the quality of human moves.
This also means that such systems can essentially function as Go coaches. Case in point: in February 2023, an amateur Go player decisively defeated KataGo, one of the highest-ranked AI systems for Go, winning 14 of 15 games by exploiting a weakness discovered by a second computer. The player in question, the American Kellin Pelrine, used the same strategy to beat Leela Zero, another top Go AI.
These episodes showed that AI systems can be thrown off by gameplay they never encountered during training, at which point they can play surprisingly poorly. And Shin’s research shows this isn’t some isolated case.
The researchers gathered a vast dataset of 5.8 million move decisions made by professional players between 1950 and 2021. Using a superhuman Go engine, they computed a “decision quality index” (DQI) for each move, which compares the win probability after the move the player actually made with the win probability after the move the engine itself would have chosen. They also labeled a move as “novel” if it had never before been played in combination with the moves that preceded it.
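In rough terms, the analysis boils down to scoring each move against an engine’s judgment and checking the historical record for the sequence that led up to it. The sketch below illustrates the idea; the function names, data layout, and toy numbers are hypothetical stand-ins, not the study’s actual code.

```python
from typing import Sequence

def dqi(win_prob_actual: float, win_prob_best: float) -> float:
    """Decision quality of a move: the win probability after the move the
    player actually made minus the win probability after the engine's
    preferred move. Zero means the player matched the engine's top choice."""
    return win_prob_actual - win_prob_best

def is_novel(moves_so_far: Sequence[str], history: set[tuple[str, ...]]) -> bool:
    """A move counts as 'novel' if the sequence of moves up to and including
    it has never appeared in the historical game record."""
    return tuple(moves_so_far) not in history

# Hypothetical usage: score one recorded move.
history = {("Q16", "D4", "Q3"), ("Q16", "D4", "D16")}   # toy record of past openings
game_so_far = ["Q16", "D4", "C3"]                        # evaluating the move "C3"
print(is_novel(game_so_far, history))                    # True: this sequence is unseen
print(dqi(win_prob_actual=0.48, win_prob_best=0.52))     # -0.04: slightly worse than the engine's pick
```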
The analysis revealed that human players began making significantly better and more novel moves after the introduction of superhuman AI in 2016. Before that year, the quality of play improved only slowly, with the median annual DQI oscillating between roughly -0.2 and 0.2. After the advent of superhuman AI, however, the DQI spiked upward, with median values above 0.7 from 2018 to 2021. In 2015, only 63 percent of games featured novel strategies; by 2018, that figure had risen to 88 percent.
The findings, published in the Proceedings of the National Academy of Sciences, suggest that the presence of superhuman AI has had a positive impact on human play, driving players to come up with more original and effective moves. This outcome was far from obvious: some experts had predicted that the rise of AI in Go would discourage human players and stifle creativity.
In other words, ‘superhuman’ AI has pushed humans to become much more creative at Go, and we can only wonder if the same can be said about other fields which are currently being disrupted by such technologies.
By analyzing massive amounts of data, AI can identify moves that human players might not have considered, opening up new avenues for creative play. When a human player sees an AI making a bold move, they may be inspired to be just as bold in their next match. Like any competition, the presence of a formidable opponent can motivate players to raise their game and push beyond their limits. It is possible that the superhuman AI has played this role in the world of Go, inspiring human players to come up with ever-more-impressive moves.
Moving away from Go, could AI be pushing creativity elsewhere too? Many people are anxious that AI chatbots and similar systems could take over their jobs, for instance. As AI becomes increasingly advanced, will it become harder for humans to compete on a level playing field? Or will human intelligence continue to evolve in tandem with AI, producing new and exciting forms of collaboration and creativity? If these new findings are any indication, there is still hope. AI has made humans better Go players, and perhaps it will turn us into better people overall. Time will tell.