

Google's MuZero chess AI reached superhuman performance without even knowing the rules

This gives it a surprisingly human-like intuition.

Mihai Andrei
October 8, 2021 @ 10:06 pm


Artificial Intelligence is becoming more and more intelligent — and more and more human-like.

Image credits: DeepMind

A lot of things have changed in modern chess compared to the past, but the most important change is the hegemony of computers. Take Magnus Carlsen, the uncontested world chess champion of the past decade: he can't really claim to be the best chess player, only the best human player.

Chess algorithms long ago surpassed the human ability to play the game, for a very simple reason: they can memorize and calculate far better than we can. But when AIs started entering the scene, chess engines were also in for a revolution.

Traditionally, chess algorithms were trained in a very straightforward way: they were taught the rules of the game, fed a huge database of games, taught how to calculate, and off they went. But Google's AlphaZero, for instance, takes a very different approach.

AlphaZero has become, arguably, the best chess-playing entity in the world without studying a single human game. Instead, it was only taught the rules and allowed to play against itself over and over. Intriguingly, this not only enabled it to achieve remarkable prowess, but also to develop a style of its own. Unlike traditional algorithms, which play a very concrete, grinding type of game, AlphaZero tends to play in a very conceptual and creative way (though the word 'creative' will surely annoy some readers). For instance, AlphaZero will often sacrifice a piece with no immediate reward in sight; it doesn't necessarily calculate all the outcomes. Instead of playing moves that it can fully calculate to be better, which is what most algorithms do, AlphaZero plays moves that seem better.

It’s a surprisingly human way to approach the game, although many of AlphaZero’s moves seem distinctly inhuman.

Now, Google’s researchers have taken things to the next level with MuZero.

Unlike AlphaZero, MuZero wasn't even told the rules of chess. It wasn't allowed to make any illegal moves, but it was allowed to ponder them. This lets the algorithm think in a more human way, considering threats and possibilities even when they are not apparent or possible at a given moment. For instance, the risk of losing an exposed piece might linger in the back of a human player's mind even when the piece is not actually under attack.

Researchers say that this also allows MuZero to develop an internal intuition regarding the rules of the game.
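To get a feel for the idea, here is a minimal, hypothetical sketch; this is not DeepMind's actual method, which uses deep neural networks and Monte Carlo tree search. The toy environment and all names below are invented for illustration. An agent is never shown the rules of a simple game; it learns a transition model purely from experience, and then plans using only that learned model:

```python
import random

# Hidden "rules" of a toy game: states 0..10, actions -1/+1,
# reward 1.0 for reaching state 10. The agent never consults this
# function while planning -- only through sampled experience.
def env_step(state, action):
    next_state = max(0, min(10, state + action))
    reward = 1.0 if next_state == 10 else 0.0
    return next_state, reward

# 1. Gather experience from random play (a crude self-play analogue)
#    and record it as a learned dynamics model.
random.seed(0)
model = {}  # (state, action) -> (next_state, reward), learned from data
for _ in range(2000):
    s = random.randint(0, 10)
    a = random.choice([-1, 1])
    model[(s, a)] = env_step(s, a)

# 2. Plan with the learned model ONLY -- the planner has no access
#    to env_step, much as MuZero plans inside its own learned model.
def plan(state, depth=12, discount=0.9):
    """Depth-limited lookahead over the learned model.
    Returns (estimated value, best action)."""
    if depth == 0:
        return 0.0, None
    best_value, best_action = float("-inf"), None
    for a in (-1, 1):
        if (state, a) not in model:
            continue  # unexplored transition: the model is silent here
        next_s, r = model[(state, a)]
        future, _ = plan(next_s, depth - 1, discount)
        if r + discount * future > best_value:
            best_value, best_action = r + discount * future, a
    return best_value, best_action

value, action = plan(5)
print(action)  # the planner steers toward the rewarding state
```

The point of the sketch is that the "rules" exist only implicitly, as patterns in the agent's recorded experience, yet planning over that internal model is enough to find good moves.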

The Elo evaluation of MuZero throughout training in chess, shogi, Go, and Atari. Image Credit: DeepMind

This led to remarkably good performance. Although the details the researchers presented are sparse, they claim that MuZero matched AlphaZero's performance. But it gets even better.

Researchers didn't only train the engine on chess; they also trained it on Go, shogi, and 57 Atari games commonly used in this sort of study.

The most impressive results came from Go, a game that is unfathomably more complex than chess. MuZero slightly exceeded the performance of AlphaZero despite using less overall computation, which seems to indicate that MuZero has a deeper understanding of the game and the positions it was playing. Similar performances were reported in the Atari games, where MuZero outperformed state-of-the-art engines in 42 out of 57 games.

Of course, there is much more to this than just chess, Go, or Pac-Man. There are very concrete lessons that can be applied to artificial intelligence in practical settings.

“Many of the breakthroughs in artificial intelligence have been based on either high-performance planning or model-free reinforcement learning,” wrote the researchers. “In this paper we have introduced a method that combines the benefits of both approaches. Our algorithm, MuZero, has both matched the superhuman performance of high-performance planning algorithms in their favored domains — logically complex board games such as chess and Go — and outperformed state-of-the-art model-free [reinforcement learning] algorithms in their favored domains — visually complex Atari games.”

The study can be read as a preprint on arXiv.
