

New chess AI achieves top human performance without even looking for the best move

Without any explicit search, the engine is able to reach Grandmaster performance levels.

Mihai Andrei
February 21, 2024 @ 12:40 am


Former chess world champion José Raúl Capablanca once famously said, “I see only one move ahead, but it is always the correct one.” Capablanca, widely hailed as one of the greatest chess players in history, was definitely onto something. Bravado aside, it turns out you can play good chess without actually calculating multiple moves ahead. At least, if you’re an AI.

Image caption: This image was (of course) generated by AI.

For decades, the gold standard in computer chess was epitomized by engines like IBM’s Deep Blue, which famously bested world champion Garry Kasparov in 1997. These systems relied on brute-force calculations, evaluating millions of potential moves and their outcomes with the help of vast databases and complex algorithms.

It wasn’t long before computers became way, way better than humans at chess; nowadays, it’s not even a competition anymore. In fact, computers are so much stronger that top human players trust the machines’ judgment over their own, training and preparing based on computer evaluations and recommendations.

Then AIs came along and showed that there’s a different way to do things. You don’t need brute-force calculation and raw power. Granted, those still help, but you can also train a chess engine on existing games to develop a sort of intuition. By that we mean that the chess algorithm (or “engine”) “likes” certain moves even without calculating their outcomes all the way to the end.

Researchers didn’t stop there: they had AI chess engines learn chess without even being taught strategy or being offered any data from human games. The AIs were just left to play a bajillion games against each other and learn what they could from that. Even with this approach, the AIs managed to reach superhuman performance in chess.

But this is different.

A departure from other models

Traditional chess engines rely on deeply analyzing future possible moves (searching) and evaluating them with complex heuristics to decide on the best course of action. The new algorithm, presented by Google DeepMind, has taken a different approach. It learned from a vast dataset of historical chess games annotated by the Stockfish 16 engine, one of the strongest chess programs in the world.

The core of DeepMind’s innovation is a transformer model, a type of neural network that has revolutionized fields like natural language processing. This model, with its 270 million parameters, was trained on a dataset of 10 million chess games. Remarkably, the AI reached Grandmaster-level performance. Chess players earn titles based on their rating, and Grandmaster is the highest; specifically, the AI achieved an Elo rating of 2895 in blitz chess (games played with a fast time control).
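The training targets in the paper are not raw Stockfish scores but discretized win probabilities: each annotated position becomes a classification label, which is what makes a transformer a natural fit. Here is a minimal, illustrative sketch of that preprocessing step; the logistic conversion constant and the bin count below are assumptions for illustration, not the paper's exact values.

```python
def win_probability(centipawns: float) -> float:
    """Map an engine evaluation (in centipawns, from the side to move's
    perspective) to a win probability in [0, 1]. The logistic constant 400
    is a common chess convention, assumed here for illustration."""
    return 1.0 / (1.0 + 10.0 ** (-centipawns / 400.0))


def value_bin(centipawns: float, num_bins: int = 128) -> int:
    """Discretize the win probability into one of `num_bins` classes,
    turning regression on engine scores into a classification problem."""
    p = win_probability(centipawns)
    return min(int(p * num_bins), num_bins - 1)


# An even position maps to the middle bin; decisive evals hit the extremes.
print(value_bin(0))       # middle of the range
print(value_bin(10000))   # near-certain win
print(value_bin(-10000))  # near-certain loss
```

Framing the task as classification over bins lets the network output a probability distribution over outcomes rather than a single brittle number, which is one reason this setup scales well.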

However, it’s essential to clarify that while the AI employs no explicit search algorithm during decision-making, the neural network itself inherently performs a form of implicit calculation: it evaluates the board’s state to predict the best move. This process is fundamentally different from the explicit, step-by-step lookahead of traditional chess engines, but it still involves computation, in the sense of processing and interpreting the input position to arrive at a decision.
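The distinction is easiest to see on a toy game. The sketch below is an illustrative analogy (using the game of Nim, not chess, and not DeepMind's code): explicit search recurses all the way to the end of the game, while a learned evaluation answers in a single call because the pattern the search would discover has been baked into the function.

```python
# Nim: players alternately take 1-3 stones; whoever takes the last stone wins.

def negamax(stones: int) -> int:
    """Explicit search: recursively look ahead to the end of the game.
    Returns +1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        return -1  # the previous player took the last stone and already won
    return max(-negamax(stones - take) for take in (1, 2, 3) if take <= stones)


def learned_value(stones: int) -> int:
    """Amortized evaluation: the pattern the search above would eventually
    discover (multiples of 4 are lost for the player to move), answered in
    one evaluation with no lookahead at all."""
    return -1 if stones % 4 == 0 else 1


# Both agree on every position, but one searches and the other just "knows".
assert all(negamax(n) == learned_value(n) for n in range(1, 15))
```

DeepMind's network plays the role of `learned_value` here, except the "pattern" is 270 million learned parameters rather than a one-line rule.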

Why this matters

This is important for two reasons. The first is chess-specific: it shows that, contrary to popular belief, you don’t need to calculate many moves ahead to be a good player. Yep, Capablanca was right.

The second is perhaps more impactful because it shows a new way in which AI is capable of making decisions.

The success of DeepMind’s model is a testament to the power of scale in machine learning. The research highlights that the model’s ability to play chess at such a high level is contingent on the sheer volume of data it was trained on and the complexity of its neural network architecture. This finding aligns with a broader trend in AI research, where larger datasets and more sophisticated models have led to breakthroughs in various domains, from language understanding to image recognition.

In essence, DeepMind’s research is a vivid illustration of AI’s potential to learn and excel in complex domains, relying not on hard-coded rules and strategies, but on the ability to discern patterns and make predictions based on vast amounts of data.

This isn’t really about chess. This is more about showing that despite its huge popularity, we’ve still only scratched the surface of what AI can actually do. This is exciting and (if we’re being honest) a bit frightening as well.

The study was published on the preprint server arXiv.

