
Watch out, Messi: artificial intelligence has finally learned to play football

An artificial intelligence worked through decades' worth of soccer matches in just a few weeks, learning how to play the game.

Fermin Koop
September 11, 2022 @ 4:02 pm

DeepMind, Google’s artificial intelligence division, taught AI-controlled humanoids to work as a team and play football, turning them from flailing tots into proficient players. Researchers ran a computer simulation through an athletic curriculum, giving the AI control over humanoids with realistic body masses and movements.

It’s not the first time DeepMind has tried its hand at games. The AI previously mastered chess and Go, a feat that researchers once thought was nigh impossible. The group then moved on to other games, including Atari titles and StarCraft II. Now, the system seems ready to take on a “real” game.

The researchers at Google trained physically simulated AI agents to play two-on-two games, part of an experiment to advance coordination between AI systems and open new pathways toward building artificial general intelligence (AGI) that operates at a level similar to humans. They described how they pulled it off in a paper in the journal Science Robotics.

“Our agents acquired skills including agile locomotion, passing, and division of labor as demonstrated by a range of statistics,” DeepMind researchers wrote in a blog post. “The players exhibit both agile high-frequency motor control and long-term decision-making that involved anticipation of teammates’ behaviors, leading to coordinated team play.”

From motor control to embodied intelligence

This isn’t the first time an AI has mastered a game just by looking at it. A similar feat was accomplished with simple computer games, but watching a computer game being played and watching a “real” game played by humans are two different things.

The researchers first fed the system motion-capture videos of humans playing football (soccer), training the humanoids to run naturally by imitating them. The humanoids then practiced dribbling and shooting the ball through machine learning that rewarded the AI for staying close to the ball. These two phases amounted to about 1.5 years of simulated training time, which the AI raced through in 24 hours.
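The reward-shaping idea behind that second phase can be sketched in a few lines. This is a minimal illustration, not DeepMind's actual reward function: it simply pays the agent more the closer it stands to the ball.

```python
import math

def proximity_reward(agent_pos, ball_pos, scale=1.0):
    """Dense shaping reward: higher when the agent is closer to the ball.

    agent_pos, ball_pos: (x, y) coordinates on the pitch.
    The exponential keeps the reward in (0, scale] and decays smoothly
    with distance, giving the learner a gradient to follow even when
    it never touches the ball.
    """
    dx = agent_pos[0] - ball_pos[0]
    dy = agent_pos[1] - ball_pos[1]
    distance = math.hypot(dx, dy)
    return scale * math.exp(-distance)

# An agent standing on the ball gets the maximum reward.
print(proximity_reward((3.0, 4.0), (3.0, 4.0)))  # 1.0
```

A dense signal like this is what lets random flailing gradually turn into ball control; a sparse "goal scored" reward alone would almost never fire early in training.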

More complex behaviors beyond movement and ball control began to emerge after further simulations. The researchers challenged the humanoids to score goals in two-on-two games. They learned teamwork skills, such as anticipating where to receive a pass, in about 20 to 30 simulated years, equivalent to two to three weeks in the real world.
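As a sanity check, the figures quoted above imply a similar compression of simulated time into wall-clock time for both phases, roughly 500-fold (back-of-the-envelope arithmetic, not a number from the paper):

```python
HOURS_PER_YEAR = 365 * 24
WEEKS_PER_YEAR = 52

# Phases 1-2: ~1.5 simulated years compressed into 24 real hours.
imitation_speedup = 1.5 * HOURS_PER_YEAR / 24

# Teamwork phase: ~25 simulated years (midpoint of 20-30)
# in ~2.5 real weeks (midpoint of 2-3).
teamwork_speedup = (25 * WEEKS_PER_YEAR) / 2.5

print(f"Imitation phase: ~{imitation_speedup:.0f}x real time")  # ~548x
print(f"Teamwork phase: ~{teamwork_speedup:.0f}x real time")    # ~520x
```

Both ratios land near 500x, which suggests the two reported timescales come from the same simulation throughput.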

“We optimized teams of agents to play simulated football via reinforcement learning, constraining the solution space to that of plausible movements learned using human motion capture data,” the study reads. “The result is a team of coordinated humanoid football players that exhibit complex behavior at different scales, quantified by a range of analysis and statistics.”
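One common way to encourage the coordination the quote describes, and a simplified stand-in for the study's actual reward design, is to share a single reward signal across teammates: every agent on the scoring side receives the same positive reward, every opponent the same penalty.

```python
def team_rewards(scoring_team, teams):
    """Assign a shared team reward after a goal.

    scoring_team: name of the team that just scored.
    teams: mapping of team name -> list of agent ids.

    Each agent on the scoring team gets +1.0, each opponent -1.0.
    Because teammates share one signal, an agent is rewarded for
    actions (like passing) that help a teammate score, which is what
    nudges individually trained agents toward cooperative play.
    """
    rewards = {}
    for team, agents in teams.items():
        value = 1.0 if team == scoring_team else -1.0
        for agent in agents:
            rewards[agent] = value
    return rewards

teams = {"blue": ["blue_1", "blue_2"], "red": ["red_1", "red_2"]}
print(team_rewards("blue", teams))
```

The team names and +1/-1 values here are illustrative assumptions; the point is only that the reward is tied to the team's outcome rather than to any one agent's actions.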

The humanoids were trained under simplified rules: fouls were allowed, a wall-like boundary surrounded the pitch, and elements such as goal kicks and throw-ins were omitted. These simplifications, together with the long learning times, mean the simulation won’t immediately lead to soccer-playing robots. The researchers are now teaching real robots how to push a ball toward a target.

We won’t see these AIs mastering FIFA or other sports video games any time soon, but it’s an important achievement that could have long-term ramifications, not just for computer games but well beyond them.

For years, engineers have been trying to create robots capable of playing soccer, sparking competition between groups to build the best robot players. That rivalry led to the creation of RoboCup, which features both real-world and simulated leagues. Now, researchers have taken another step, teaching agents to play football without being taught the rules.

While watching them play football is kind of cool, that’s not really the end goal of the experiment. It’s all part of research on “embodied intelligence”: the idea that a general artificial intelligence may one day need to move around the world in some sort of physical form, and that the nature of that form might shape the way it behaves.

Think of it this way: if you can teach an AI system to perform, virtually, the tasks that humans do, that knowledge could be used to build real-life systems that do the same. We could essentially teach robots to do the things humans do, and the potential for that is huge.

“The results shown in this study constitute only a small step toward human-level motor intelligence in artificial systems,” the study reads. “Although the scenario in the present article was more challenging than many simulated environments considered in the community, it was lacking in complexity along important dimensions compared to real-world scenarios.”

The study was published in Science Robotics.
