

A human just defeated an AI in Go. Here's why that matters

Go is arguably the most complex game devised by mankind.

Mihai Andrei
February 24, 2023 @ 3:22 pm


In 2016, the news was that AI beat humans at Go. Fast forward seven years, and the news is that humans beat AI at Go. But it’s not like we got much smarter between tries — we simply learned to exploit its bugs.

A game of Go. Simple in essence, but extremely complex in practice. Image credits: Elena Popova.

Go is so mind-bendingly complex that it makes chess seem like tic-tac-toe. Go is played on a 19 by 19 board (compared to just 8 by 8 for chess), and a typical game of around 150 moves can unfold in around 10^360 different ways, or 1 followed by 360 zeroes, a number that's simply unfathomable. For comparison, it's estimated that there are some 10^82 atoms in the observable universe.
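That back-of-the-envelope figure is easy to check. The sketch below assumes, as is commonly cited, an average of roughly 250 legal moves available at each turn of a game; the 150-move game length comes from the article itself:

```python
import math

# Rough estimate of Go's game-tree size.
moves_per_game = 150    # typical game length (from the article)
avg_legal_moves = 250   # assumed average branching factor per turn

# Number of possible games ~ branching_factor ^ game_length.
# Work in log10 to avoid a 360-digit integer.
exponent = moves_per_game * math.log10(avg_legal_moves)
print(f"roughly 10^{exponent:.0f} possible games")
```

Running this gives an exponent of about 360, matching the "1 followed by 360 zeroes" figure above.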

Calculating everything in the game of Go is simply not possible, so players often rely on intuition and pattern recognition, which is why Go was long thought to be unconquerable by AI. But in 2016, DeepMind's AlphaGo turned all that on its head. Despite staunch resistance from mankind's champion, the AI triumphed, and machines have only pulled further ahead of humanity since.

The best Go player in the world is currently KataGo, a machine-learning algorithm that taught itself how to play, surpassing even previous AI iterations.

KataGo is a monster: it simply wipes the floor with all opponents. But researchers have been looking for potential flaws and weaknesses in KataGo. Recently, a team of researchers published a preprint describing how they trained their own AI opponents, aimed specifically at KataGo. The goal wasn't to produce better players, but to trick the AI.

“Notably, our adversaries do not win by learning to play Go better than KataGo – in fact, our adversaries are easily beaten by human amateurs,” the team wrote in their paper. “Instead, our adversaries win by tricking KataGo into making serious blunders.”

This is where Kellin Pelrine steps in. Pelrine is a good player, but still an amateur: specifically, he's one level below the top amateur ranking. As one of the study's authors, he was well aware of KataGo's vulnerabilities, so he decided to try his own hand.

Apparently, it was surprisingly easy to defeat the AI by exploiting its weakness: Pelrine managed to beat KataGo in 14 out of 15 games. For comparison, KataGo beat AlphaGo 100 times out of 100, and AlphaGo beat mankind's best player 4-1.

But as is so often the case, this isn't really about the game itself; it's about what it means for the future of artificial intelligence. The main takeaway is that performance doesn't always translate into robustness. This failure of the Go-playing algorithm is a bit like a self-driving car crashing into a tree because the bark had a specific color. In other words, even when something seems to be performing extremely well, there could be fringe situations where it behaves badly. This is less of a problem in Go, and more of a problem when AI steps into the real world, so this is an important cautionary tale.

Crucially, Pelrine's tactic would have been easily spotted by a human. He slowly built a loop of stones to encircle one of the AI's groups, while making moves in the corners of the board to distract it. It's not completely trivial, says Pelrine, but it's not very difficult either.

Artificial systems, however, don’t have the ability to react to situations they’re not prepared for. They don’t have “common sense”. In fact, this is why game-playing AIs are so important: they teach us about how these algorithms behave — not just in terms of opportunities and performance, but also in terms of what can go wrong.

It’s common to find flaws and exploits in AI systems. Ironically, this is also done with the aid of computers, but this field is extremely important and often overlooked. More and more, we’re seeing AIs being deployed into the world with little verification. Maybe, just maybe, we should learn from this type of event and pay more attention to how we deploy such systems in real life.
