When the recent World Chess Championship match took place, it didn’t decide the best player on Earth — only the best human player on Earth. For years, computers have been beating us at chess and the recent Artificial Intelligence (AI) developments have only solidified their dominance. Go, the ancient strategy game which is immensely more complex than chess, was also mastered by an AI.
Now, AIs have their eyes set on our favorite strategy games — and they’re doing an excellent job.
Google’s DeepMind has the ambition to solve some of the world’s most challenging problems, but along the way, researchers are training its systems on board games and computer games alike.
In addition to obvious differences in gameplay, there’s another fundamental difference between games like chess and StarCraft II: vision. In chess, you have full information about what’s happening on the board, whereas in StarCraft you only see your units and a small area around them; the rest is hidden by the “fog of war”. This kind of uncertainty has proven difficult for AIs to handle, and StarCraft has therefore emerged as a “grand challenge” for AI research, being one of the most difficult games to master.
After months of training, DeepMind released AlphaStar, the cousin of AlphaZero and AlphaGo, which played chess and Go respectively. AlphaStar was initially trained on raw game data through supervised learning, then refined through reinforcement learning. In other words, it first learned from the best humans. By contrast, the most recent version of AlphaZero skipped human examples entirely, learning by playing countless games against itself. It got better with each iteration and developed its own unique style, which led to spectacular games. However, this was not possible in StarCraft.
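As a rough illustration of that two-step recipe (imitate human replays first, then keep improving by playing and reinforcing what works), here is a minimal, purely hypothetical sketch. The toy tabular policy, the action names, and the `play_game` callback are assumptions for illustration, not anything from DeepMind’s actual system.

```python
# Purely illustrative sketch of "learn from humans first, then reinforce":
# a toy tabular policy, not DeepMind's actual AlphaStar training code.
import random
from collections import defaultdict

ACTIONS = ["expand", "rush", "tech_up", "defend"]

def make_policy():
    # Per-state action preferences; unseen states start out uniform.
    return defaultdict(lambda: {a: 1.0 for a in ACTIONS})

def sample_action(policy, state):
    prefs = policy[state]
    total = sum(prefs.values())
    return random.choices(list(prefs), weights=[v / total for v in prefs.values()])[0]

def imitate(policy, human_replays, lr=0.5):
    # Supervised phase: nudge the policy toward whatever the human did.
    for state, human_action in human_replays:
        policy[state][human_action] += lr

def reinforce(policy, play_game, episodes=1000, lr=0.1):
    # Reinforcement phase: play full games and strengthen the actions
    # taken in games that were won (weaken them in games that were lost).
    for _ in range(episodes):
        trajectory, won = play_game(policy)  # list of (state, action) pairs, plus outcome
        for state, action in trajectory:
            policy[state][action] = max(0.01, policy[state][action] + (lr if won else -lr))
```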
Even so, AlphaStar’s success was remarkable. After its training, it faced two professional players and defeated them convincingly.
“In a series of test matches held on 19 December, AlphaStar decisively beat Team Liquid’s Grzegorz “MaNa” Komincz, one of the world’s strongest professional StarCraft players, 5-0, following a successful benchmark match against his team-mate Dario “TLO” Wünsch. The matches took place under professional match conditions on a competitive ladder map and without any game restrictions,” DeepMind writes.
However, this happened when AlphaStar was given free rein over what it was allowed to do. It shone in the “micro” aspects of the game, controlling its units with stunning accuracy and precision and making correct decisions in a split second, which is ultimately what you’d expect from an AI. In the grand scheme of things, MaNa played well strategically, but he simply couldn’t overpower his opponent.
Things changed substantially, however, when AlphaStar was made a bit more “human”.
In an additional game streamed on Twitch, AlphaStar was hobbled in a few ways: it could only “see” by moving the focus of the in-game camera, and it wasn’t allowed to make more clicks than a human could. Most commentators agreed these restrictions were “fair”. Although the AI still played very well, MaNa ultimately managed to defeat it, scoring mankind’s only win so far.
However, to be fair, some of this result might be owed to the element of surprise. AlphaStar was very familiar with the style of human play, whereas the humans weren’t really sure what to expect. The situation is very similar to what happened in Dota 2, a game which shares many similarities with StarCraft. When humans first played against the algorithm, they were defeated handily and surprised by the strategies the AI employed. When they returned knowing what to expect, they did a much better job and were able to beat the AI.
Another aspect worth mentioning is that although AlphaStar faced professional opponents, they weren’t the best of the best. So, given a fair playing field, mankind probably still keeps the crown, but only barely.
StarCraft is a rock-paper-scissors kind of game with no single ideal strategy: everything is strong against something and weak against something else. The DeepMind researchers created a league where AIs duked it out against one another, akin to human matchmaking play. New competitors were dynamically added to the league by branching from existing competitors.
Estimate of the Match Making Rating (MMR), an approximate measure of a player’s skill, for competitors in the AlphaStar league throughout training, compared to Blizzard’s online leagues. Image credits: DeepMind.

All the competitors developed new strategies and learned from one another, taking advantage of StarCraft’s huge strategic potential. For instance, the first iterations attempted “cheesy” and very risky strategies, such as a quick rush with Photon Cannons or Dark Templars. These strategies were discarded as the AI progressed, leading it to employ other, more complex strategies focused on economic domination.
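To get a feel for how such a league might operate, here is a minimal sketch under purely assumed mechanics: a toy Elo-style rating standing in for MMR, and a single made-up “aggression” parameter in place of a real StarCraft agent.

```python
# Hypothetical sketch of a self-play league: agents play each other,
# ratings are updated Elo-style, and new competitors are periodically
# branched off existing ones. Not DeepMind's actual implementation.
import copy
import random

class Agent:
    def __init__(self, params=None):
        self.params = params if params is not None else {"aggression": random.random()}
        self.rating = 1500.0  # rough stand-in for MMR

def play_match(a, b):
    # Toy outcome model; a real league would play full StarCraft II games here.
    pa = a.params["aggression"]
    pb = b.params["aggression"]
    return random.random() < pa / (pa + pb + 1e-9)  # True if `a` wins

def update_elo(winner, loser, k=32):
    expected = 1.0 / (1.0 + 10 ** ((loser.rating - winner.rating) / 400))
    winner.rating += k * (1 - expected)
    loser.rating -= k * (1 - expected)

def branch(agent):
    # New competitor: a copy of an existing one with slightly mutated parameters.
    child = Agent(copy.deepcopy(agent.params))
    child.params["aggression"] = max(0.0, child.params["aggression"] + random.uniform(-0.1, 0.1))
    return child

league = [Agent() for _ in range(4)]
for step in range(1, 5001):
    a, b = random.sample(league, 2)
    update_elo(a, b) if play_match(a, b) else update_elo(b, a)
    if step % 500 == 0:  # every so often, branch the current leader into the league
        league.append(branch(max(league, key=lambda ag: ag.rating)))
```

The point of branching from existing competitors, rather than starting each new agent from scratch, is to preserve what the league has already learned while still injecting fresh variety into it.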
This was clearly visible in the type of units they developed.
Of course, beating humans at StarCraft can be a goal in and of itself, but DeepMind has something much loftier in mind: it wants to use StarCraft as a stepping stone towards addressing complex real-life issues such as climate change and language understanding.
“While StarCraft is just a game, albeit a complex one, we think that the techniques behind AlphaStar could be useful in solving other problems. For example, its neural network architecture is capable of modelling very long sequences of likely actions – with games often lasting up to an hour with tens of thousands of moves – based on imperfect information. Each frame of StarCraft is used as one step of input, with the neural network predicting the expected sequence of actions for the rest of the game after every frame. The fundamental problem of making complex predictions over very long sequences of data appears in many real world challenges, such as weather prediction, climate modelling, language understanding and more. We’re very excited about the potential to make significant advances in these domains using learnings and developments from the AlphaStar project.”
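In code terms, the idea in the quote is roughly an agent loop that consumes one (incomplete) observation per frame and emits a predicted action at every step, conditioned on everything seen so far. The tiny stand-in “model” below is an assumption for illustration only; AlphaStar uses a large neural network.

```python
# Minimal sketch of per-frame sequence prediction under imperfect information.
# `get_frame` and `predict_action` are hypothetical placeholders.
def predict_action(history):
    # A real system would run a neural network over the encoded history;
    # here a trivial rule stands in for it.
    return "expand" if len(history) < 100 else "attack"

def run_episode(get_frame, max_steps=20_000):
    history, actions = [], []
    for _ in range(max_steps):
        frame = get_frame()       # only what the camera and fog of war reveal
        if frame is None:         # game over
            break
        history.append(frame)     # each frame is one step of input
        actions.append(predict_action(history))  # predict an action at every step
    return actions
```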
The DeepMind team also said that StarCraft highlights many of the problems AIs have traditionally struggled with, and that overcoming them could pave the way for solving concrete issues in AI design.
However, a key problem remains unsolved: when an AI is pushed outside of its “comfort zone”, it collapses. This makes these algorithms surprisingly brittle, a kind of “glass cannon” built for specific problems. It’s important to develop robust algorithms capable of adapting to different types of situations, and it’s exactly here that playing StarCraft can make a substantial difference.
“Achieving the highest levels of StarCraft play represents a major breakthrough in one of the most complex video games ever created. We believe that these advances, alongside other recent progress in projects such as AlphaZero and AlphaFold, represent a step forward in our mission to create intelligent systems that will one day help us unlock novel solutions to some of the world’s most important and fundamental scientific problems.”
A full technical description of this work is being prepared for publication in a peer-reviewed journal, DeepMind concludes.