

Now computers are also beating us at Starcraft

Training computers to beat humans at war-like strategy games. What could go wrong?

Mihai Andrei
January 25, 2019 @ 12:41 pm


When the recent World Chess Championship match took place, it didn’t decide the best player on Earth — only the best human player on Earth. For years, computers have been beating us at chess and the recent Artificial Intelligence (AI) developments have only solidified their dominance. Go, the ancient strategy game which is immensely more complex than chess, was also mastered by an AI.

Now, AIs have their eyes set on our favorite strategy games — and they’re doing an excellent job.

Image via DeepMind.

Google’s DeepMind has the ambition to solve some of the world’s most challenging problems, but along the way, researchers are training it on board games and computer games alike.

In addition to obvious differences in gameplay, there’s another fundamental difference between games like chess and StarCraft II: vision. In chess, you have full information on what’s happening on the board, whereas in StarCraft, you only see your units and a small area around them — the rest is hidden by the “fog of war”. This kind of uncertainty has long been difficult for AIs to handle. StarCraft has therefore emerged as a “grand challenge” for AI research, being one of the most difficult games to master.

After months of training, DeepMind released AlphaStar — the cousin of AlphaZero and AlphaGo, which played chess and Go respectively. AlphaStar was first trained directly from raw game data by supervised learning, then refined with reinforcement learning. In other words, it started by learning from the best humans. In contrast, the most recent version of AlphaZero skipped human examples entirely, learning through self-play and playing countless games against itself. It got better after each iteration and developed its own unique style, which led to spectacular games. However, starting from scratch in this way was not feasible in StarCraft.
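To make the two training stages concrete, here is a deliberately toy sketch — not DeepMind’s actual architecture. It uses a single softmax policy over four made-up actions: first an imitation (cross-entropy) step nudges the policy toward a human-chosen action, then a REINFORCE-style step scales the same gradient by a self-play reward. All names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 4                      # toy action space (e.g. build, attack, scout, expand)
logits = np.zeros(N_ACTIONS)      # a one-state "policy", for illustration only
LR = 0.5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def supervised_step(human_action):
    """Imitation: push the policy toward the action a human player chose."""
    global logits
    grad = softmax(logits)
    grad[human_action] -= 1.0      # gradient of cross-entropy w.r.t. the logits
    logits -= LR * grad

def reinforce_step(action, reward):
    """Reinforcement: scale the same gradient by the game's outcome (REINFORCE)."""
    global logits
    grad = softmax(logits)
    grad[action] -= 1.0
    logits -= LR * reward * grad   # positive reward reinforces, negative discourages

# Stage 1: imitate human replays in which action 2 is the common choice.
for _ in range(20):
    supervised_step(human_action=2)

# Stage 2: refine by self-play; pretend action 2 keeps winning (+1), others lose (-1).
for _ in range(20):
    a = rng.choice(N_ACTIONS, p=softmax(logits))
    reinforce_step(a, reward=1.0 if a == 2 else -1.0)

print(softmax(logits).argmax())    # the policy now favours action 2
```

The real system replaces this single vector of logits with a deep neural network conditioned on game observations, but the shape of the two stages — imitate first, then refine by playing — is the same.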

Even so, AlphaStar’s success was remarkable. After training, it faced two professional players and defeated them convincingly.

“In a series of test matches held on 19 December, AlphaStar decisively beat Team Liquid’s Grzegorz “MaNa” Komincz, one of the world’s strongest professional StarCraft players, 5-0, following a successful benchmark match against his team-mate Dario “TLO” Wünsch. The matches took place under professional match conditions on a competitive ladder map and without any game restrictions,” DeepMind writes.

However, this happened when AlphaStar was given free rein over what it was allowed to do. It shone in the “micro” aspects, controlling its units with stunning accuracy and precision and making correct decisions in a split second — which is ultimately what you’d expect from an AI. In the grand scheme of things, MaNa did great strategically, but he just couldn’t overpower his opponent.

Things changed substantially, however, when AlphaStar was made a bit more “human”.

A visualization of the AlphaStar agent during game two of the match against MaNa. This shows the game from the agent’s point of view: the raw observation input to the neural network, the neural network’s internal activations, some of the considered actions the agent can take such as where to click and what to build, and the predicted outcome. Image credits: DeepMind.

In an additional game streamed on Twitch, AlphaStar was hobbled in some ways (like only being allowed to “see” by moving the focus of the in-game camera and not being allowed to make more clicks than a human would), which most commentators agreed was “fair.” Although it still did very well, MaNa ultimately managed to defeat the AI, scoring mankind’s only win so far.

However, to be fair, some of this result might be owed to the element of surprise. AlphaStar was very familiar with the style of human play, whereas humans weren’t really sure what to expect. This type of issue seems very similar to what happened in Dota 2, a game which shares many similarities with Starcraft. When humans first played against the algorithm, they were defeated handily and were surprised by the strategies employed by the AI. When they returned knowing what to expect, they did a much better job and were able to beat the AI.

Another aspect worth mentioning is that although AlphaStar faced professional opponents, they weren’t the best of the best — so given a fair playground, mankind still probably keeps the crown — but only barely.

StarCraft is a rock-paper-scissors kind of game with no single ideal strategy: everything is strong against something and weak against something else. The DeepMind researchers created a league where AI agents duked it out against one another, akin to human matchmaking play. New competitors were dynamically added to the league by branching from existing competitors.

Estimate of the Match Making Rating (MMR) — an approximate measure of a player’s skill — for competitors in the AlphaStar league throughout training, compared to Blizzard’s online leagues. Image credits: DeepMind.

All the competitors developed new strategies and learned from one another, taking advantage of StarCraft’s huge strategic potential. For instance, the first iterations attempted “cheesy” and very risky strategies, such as a quick rush with Photon Cannons or Dark Templars. These strategies were discarded as the AI progressed, leading it to employ other, more complex strategies focused on economic domination.
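The MMR tracked in the chart is a matchmaking rating in the spirit of the Elo system. As a rough illustration of how such a rating moves — this is the generic Elo formula, not DeepMind’s actual rating code — each match shifts points from the loser to the winner, with the size of the shift depending on how surprising the result was:

```python
K = 32  # update step size, as in classic Elo

def expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, a_won):
    """Return new ratings after one match (a_won: 1.0 win, 0.5 draw, 0.0 loss)."""
    e_a = expected_score(r_a, r_b)
    r_a_new = r_a + K * (a_won - e_a)
    r_b_new = r_b + K * ((1.0 - a_won) - (1.0 - e_a))
    return r_a_new, r_b_new

# Two equally rated agents: the winner gains exactly what the loser gives up.
a, b = update(1500, 1500, a_won=1.0)
print(a, b)  # 1516.0 1484.0
```

An upset against a much stronger opponent moves far more points than a win over an equal, which is why a league of constantly branching agents can be ranked on one common scale.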

This was clearly visible in the types of units they built.

As training progressed, AlphaStar built different units and chose varying tech trees. Image credits: DeepMind.

Of course, beating humans at StarCraft can be a goal in and of itself, but DeepMind has something much loftier in mind. They want to use StarCraft as a stepping stone toward addressing complex real-life issues such as climate change and language understanding.

“While StarCraft is just a game, albeit a complex one, we think that the techniques behind AlphaStar could be useful in solving other problems. For example, its neural network architecture is capable of modelling very long sequences of likely actions – with games often lasting up to an hour with tens of thousands of moves – based on imperfect information. Each frame of StarCraft is used as one step of input, with the neural network predicting the expected sequence of actions for the rest of the game after every frame. The fundamental problem of making complex predictions over very long sequences of data appears in many real world challenges, such as weather prediction, climate modelling, language understanding and more. We’re very excited about the potential to make significant advances in these domains using learnings and developments from the AlphaStar project.”

The DeepMind team also said that StarCraft emphasizes many of the problems AIs have traditionally struggled with, and overcoming them could pave the way for solving concrete issues in AI design.

However, a key problem is still unsolved: when an AI is pushed outside of its “comfort zone”, it collapses. This makes these algorithms surprisingly brittle — a kind of “glass cannon” for solving specific issues. It’s important to develop robust algorithms capable of adapting to different types of situations, and it’s exactly here that playing StarCraft can make a substantial difference.

“Achieving the highest levels of StarCraft play represents a major breakthrough in one of the most complex video games ever created. We believe that these advances, alongside other recent progress in projects such as AlphaZero and AlphaFold, represent a step forward in our mission to create intelligent systems that will one day help us unlock novel solutions to some of the world’s most important and fundamental scientific problems.”

A full technical description of this work is being prepared for publication in a peer-reviewed journal, DeepMind concludes.
