DeepMind, Google’s artificial intelligence division, taught AI humanoids how to work as a team and play football together, turning them from flailing tots into proficient players. Researchers put AI-controlled humanoids with realistic body masses and movements through an athletic curriculum inside a computer simulation.
It’s not the first time DeepMind has tried its hand at games. The AI previously mastered chess and Go, the latter a feat researchers once thought nigh impossible. The group then focused on other games, such as Atari titles and StarCraft II. Now, the system seems ready to take on a “real” game.
The researchers at Google trained physically simulated AI agents to play two-versus-two games, part of an effort to advance coordination between AI systems and open new pathways toward artificial general intelligence (AGI) that operates at a level similar to humans. They described how they pulled it off in a paper published in the journal Science Robotics.
“Our agents acquired skills including agile locomotion, passing, and division of labor as demonstrated by a range of statistics,” DeepMind researchers wrote in a blog post. “The players exhibit both agile high-frequency motor control and long-term decision-making that involved anticipation of teammates’ behaviors, leading to coordinated team play.”
From motor control to embodied intelligence
This isn’t the first time AI has mastered a game just by looking at it. A similar feat was accomplished with simple computer games, but watching a computer game being played and watching a “real”, physical game being played by humans are two very different things.
The researchers first fed the system motion-capture clips of humans playing football (soccer), training the humanoids to run naturally by imitating them. The humanoids then practiced dribbling and shooting the ball through reinforcement learning that rewarded the AI for staying close to the ball. These two phases amounted to roughly 1.5 years of simulated training time, which the AI raced through in about 24 hours.
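To make that second phase a little more concrete, here is a minimal Python sketch of the kind of shaped reward that “rewards the AI for staying close to the ball”. The function name, weights, and simplified 2D state are hypothetical illustrations, not the paper’s actual reward terms, which are considerably more involved.

```python
import numpy as np

# Hypothetical sketch of a shaped reward for a dribble/shoot drill.
# State is simplified to 2D positions; the real setup is far richer.

def drill_reward(player_pos, ball_pos, ball_vel, goal_pos,
                 w_close=1.0, w_ball_to_goal=1.0):
    """Reward the agent for staying near the ball and for moving
    the ball toward the goal (a common reward-shaping pattern)."""
    # Dense term: negative player-to-ball distance, so the agent
    # earns more reward as it closes the gap to the ball.
    closeness = -np.linalg.norm(player_pos - ball_pos)

    # Dense term: component of the ball's velocity aimed at the goal.
    to_goal = goal_pos - ball_pos
    to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-8)
    ball_progress = float(np.dot(ball_vel, to_goal))

    return w_close * closeness + w_ball_to_goal * ball_progress

# Example: player 3 m from the ball, ball rolling toward the goal.
r = drill_reward(player_pos=np.array([0.0, 0.0]),
                 ball_pos=np.array([3.0, 0.0]),
                 ball_vel=np.array([1.0, 0.0]),
                 goal_pos=np.array([50.0, 0.0]))
print(f"shaped reward: {r:.3f}")
```

Dense shaping terms like these give the learner a useful signal long before it ever scores a goal, which is why they are a common stepping stone toward sparse objectives like goals scored.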
However, more complex behaviors beyond movement and ball control emerged with further training. The researchers challenged the humanoids to score goals in two-on-two games. They learned teamwork skills, such as anticipating where to receive a pass, over about 20 to 30 simulated years, equivalent to two to three weeks in the real world.
“We optimized teams of agents to play simulated football via reinforcement learning, constraining the solution space to that of plausible movements learned using human motion capture data,” the study reads. “The result is a team of coordinated humanoid football players that exhibit complex behavior at different scales, quantified by a range of analysis and statistics.”
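The “constraining the solution space” part can be pictured as a regularizer: the task policy is penalized for drifting away from a prior over human-like movements distilled from motion capture. The sketch below shows one standard way to express that as a KL penalty in a policy-gradient loss; the names, distributions, and coefficient are assumptions for illustration rather than the paper’s exact architecture.

```python
import torch

# Illustrative sketch: a policy-gradient loss with a KL penalty that
# keeps the task policy close to a motion prior learned from mocap.
# This is a standard pattern, not DeepMind's exact training objective.

def regularized_policy_loss(task_log_prob, advantage,
                            policy_dist, prior_dist, beta=0.1):
    """Policy-gradient surrogate plus a KL penalty toward the prior."""
    # Standard surrogate: raise the log-probability of actions in
    # proportion to their (detached) advantage estimates.
    pg_loss = -(task_log_prob * advantage.detach()).mean()

    # KL(policy || prior): penalizes actions the mocap-derived prior
    # considers implausible, keeping movement human-like.
    kl = torch.distributions.kl_divergence(policy_dist, prior_dist).mean()

    return pg_loss + beta * kl

# Toy usage with Gaussian action distributions over a 4-D action space.
policy = torch.distributions.Normal(torch.zeros(4), torch.ones(4))
prior = torch.distributions.Normal(torch.full((4,), 0.5), torch.ones(4))
actions = policy.sample()
loss = regularized_policy_loss(policy.log_prob(actions).sum(),
                               advantage=torch.tensor(1.0),
                               policy_dist=policy, prior_dist=prior)
print(loss.item())
```

The design intuition is that the prior rules out physically absurd but reward-maximizing contortions, so reinforcement learning explores only within the space of plausible human movement.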
The humanoids were trained under simplified rules that allowed fouls, placed a wall-like boundary around the pitch, and omitted elements such as goal kicks and throw-ins. This, together with the long learning times, means the simulation won’t immediately lead to soccer-playing robots. The researchers are now teaching the robots how to push a ball toward a target.
While we won’t see AIs mastering FIFA or other sports video games just yet, this is an important achievement that could have long-term ramifications, not just for computer games but well beyond them.
For years, engineers have been trying to create robots capable of playing soccer, and groups have competed to see who can build the best robot players. That competition led to the creation of the RoboCup, which has several leagues, both real-world and simulated. Now, researchers have taken another step, teaching robots to play football without ever being taught the rules.
While watching them play football is kind of cool, that’s not really the end goal of the experiment. It’s all part of research into “embodied intelligence”: the idea that a general artificial intelligence may one day need to move around the world in some sort of physical form, and that the nature of that form will shape how it behaves.
Think of it this way: if you can teach an AI system to perform, virtually, the tasks that humans do, that knowledge could be used to build real-life systems that do the same things. We could essentially teach robots to do what humans do, and the potential for that is huge.
“The results shown in this study constitute only a small step toward human-level motor intelligence in artificial systems,” the study reads. “Although the scenario in the present article was more challenging than many simulated environments considered in the community, it was lacking in complexity along important dimensions compared to real-world scenarios.”
The study is available in the journal Science Robotics.