

Google's MuZero chess AI reached superhuman performance without even knowing the rules

This gives it a surprisingly human-like intuition.

Mihai Andrei
October 8, 2021 @ 10:06 pm


Artificial Intelligence is becoming more and more intelligent — and more and more human-like.

Image credits: DeepMind

A lot of things have changed in modern chess compared to the past, but the most important change is the hegemony of computers. Take Magnus Carlsen, the uncontested world chess champion for the past decade: he can't really claim to be the best chess player, only the best human player.

Chess algorithms have long surpassed the human ability to play the game, for a very simple reason: they can memorize and calculate far better than we can. But when AIs started entering the scene, chess algorithms were also in for a revolution.

Traditionally, chess algorithms were trained in a very straightforward way: they were taught the rules of the game, fed a huge database of games, taught how to calculate, and off they went. But Google's AlphaZero, for instance, takes a very different approach.

AlphaZero has become, arguably, the best chess-playing entity in the world without studying a single human game. Instead, it was taught only the rules of the game and allowed to play against itself over and over. Intriguingly, this not only enabled it to achieve remarkable prowess, but also to develop a style of its own. Unlike traditional algorithms, which play a very concrete, grinding type of game, AlphaZero tends to play in a very conceptual and creative way (though the word 'creative' will surely annoy some readers). For instance, AlphaZero will often sacrifice a piece with no immediate reward in sight; it doesn't necessarily calculate all the outcomes. Instead of playing moves that it can fully calculate to be better, which is what most algorithms do, AlphaZero plays moves that seem better.
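The self-play idea can be sketched with a toy example. Below, a simple tabular agent learns a trivial take-away game (remove 1 or 2 stones from a pile; whoever takes the last stone wins) purely by playing games against itself, with no example games and no hand-coded strategy, only the win/loss signal. This is a minimal illustrative sketch, not AlphaZero itself, which combines deep neural networks with Monte Carlo tree search; all names here are made up for the example.

```python
import random

random.seed(0)

PILE = 7          # starting pile size for the toy game
ACTIONS = (1, 2)  # a move removes 1 or 2 stones; taking the last stone wins
ALPHA = 0.1       # learning rate for the value updates

# Q[(stones_left, action)] -> estimated return for the player about to move
Q = {}

def choose(stones, eps):
    """Epsilon-greedy move selection for the player facing `stones` stones."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))

def self_play_episode(eps=0.3):
    """Play one full game of the agent against itself; record every move."""
    stones, player, history = PILE, 0, []
    while stones > 0:
        action = choose(stones, eps)
        history.append((stones, action, player))
        stones -= action
        player = 1 - player
    return history, 1 - player  # the player who took the last stone wins

# Self-play training loop: the only learning signal is who won each game.
for _ in range(20000):
    history, winner = self_play_episode()
    for stones, action, player in history:
        ret = 1.0 if player == winner else -1.0
        key = (stones, action)
        Q[key] = Q.get(key, 0.0) + ALPHA * (ret - Q.get(key, 0.0))

def best(stones):
    """Greedy move after training."""
    return max((a for a in ACTIONS if a <= stones),
               key=lambda a: Q.get((stones, a), 0.0))
```

After training, the agent should rediscover the game's optimal strategy of always leaving the opponent a multiple of 3 stones, despite never being told that rule of thumb.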

It’s a surprisingly human way to approach the game, although many of AlphaZero’s moves seem distinctly inhuman.

Now, Google’s researchers have taken things to the next level with MuZero.

Unlike AlphaZero, MuZero wasn't even told the rules of chess. It wasn't allowed to make illegal moves, but it was allowed to ponder them. This lets the algorithm think in a more human way, considering threats and possibilities even when they aren't apparent or possible at a given moment. For instance, the threat of losing an exposed piece might linger in the back of a human player's mind even when that piece isn't actually under attack.

Researchers say that this also allows MuZero to develop an internal intuition regarding the rules of the game.
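In rough terms, MuZero replaces the game simulator with learned functions and plans entirely inside that learned model. The sketch below follows the paper's notation (a representation function h, a dynamics function g, and a prediction function f) but stubs the three functions with trivial arithmetic so the lookahead actually runs; it is illustrative pseudocode-as-Python, not DeepMind's implementation, and the toy "environment" is invented for the example.

```python
def h(observation):
    """Representation: encode a raw observation into a latent state."""
    return float(observation)

def g(state, action):
    """Dynamics: predict (reward, next latent state) -- no game rules used."""
    next_state = state + (1 if action == "right" else -1)
    reward = 1.0 if next_state == 3 else 0.0
    return reward, next_state

def f(state):
    """Prediction: value estimate for a latent state (policy head omitted)."""
    return -abs(3 - state)  # latent states closer to the goal score higher

def plan(observation, depth=2, actions=("left", "right")):
    """Exhaustive lookahead carried out entirely inside the learned model."""
    def search(state, d):
        if d == 0:
            return f(state)
        return max(r + search(s, d - 1)
                   for r, s in (g(state, a) for a in actions))
    root = h(observation)
    best_action, best_score = None, float("-inf")
    for a in actions:
        r, s = g(root, a)
        score = r + search(s, depth - 1)
        if score > best_score:
            best_action, best_score = a, score
    return best_action
```

Note that `plan` never consults the real environment or its rules: every lookahead step goes through the learned dynamics function, which is what lets a MuZero-style agent "ponder" moves without being told whether they are legal.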

The Elo evaluation of MuZero throughout training in chess, shogi, Go, and Atari. Image Credit: DeepMind

This led to remarkably good performances. Although the details that researchers presented are sparse, they claim that MuZero achieved the same performance as AlphaZero. But it gets even better.

Researchers didn't train the engine only in chess; they also trained it in Go, shogi, and 57 Atari games commonly used in this sort of study.

The most impressive results came from Go, a game that is unfathomably more complex than chess. MuZero slightly exceeded the performance of AlphaZero despite using less overall computation, which seems to indicate that MuZero developed a deeper understanding of the game and the positions it was playing. Similar performances were reported in the Atari games, where MuZero outperformed the previous state of the art in 42 of the 57 games.

Of course, there is much more to this than just chess, Go, or Pac-Man. There are very concrete lessons that can be applied to artificial intelligence in practical settings.

“Many of the breakthroughs in artificial intelligence have been based on either high-performance planning [or model-free reinforcement learning],” wrote the researchers. “In this paper we have introduced a method that combines the benefits of both approaches. Our algorithm, MuZero, has both matched the superhuman performance of high-performance planning algorithms in their favored domains — logically complex board games such as chess and Go — and outperformed state-of-the-art model-free [reinforcement learning] algorithms in their favored domains — visually complex Atari games.”

The study can be read as a preprint on arXiv.

