A team of researchers from Columbia University has demonstrated a method that allows a robot to learn a model of its own body. This self-modeling process enabled the robot to decide which movements were best suited to different circumstances and, in effect, to think about its next move.
Every change in our body posture or position is commanded by our nervous system, in particular the motor cortex. The human brain knows how the different body parts can move and can therefore plan and coordinate each action before it happens. This is possible because the brain maintains maps and models of the entire body.
These maps allow the brain to guide the movement of our different body parts, give us well-coordinated motion, and even keep us from injuring ourselves when obstacles appear in our path. Could we do the same thing for robots? Boyuan Chen, the lead author of a new study and an assistant professor at Duke University, believes so.
“We humans clearly have a notion of self. Somewhere inside our brain, we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”
Just as human body movements are guided by multiple brain maps, Chen and his team have demonstrated that a robot can also develop a kinematic model of itself.
A kinematic model is a mathematical description of a robot's dimensions, its movement capabilities and limitations, its depth of field, and the workspace it can cover at any given time. Robot operators use it to control a machine's actions. After self-modeling, however, a robot can control itself, because it becomes aware of how different motor commands trigger different body movements.
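To make the idea of a kinematic model concrete, here is a minimal sketch, assuming a hypothetical two-joint planar arm rather than the WidowX 200 itself: the forward-kinematics function below predicts where the end effector ends up for a given pair of joint angles. The link lengths and angles are illustrative assumptions, not the real robot's specifications.

```python
import math

# Illustrative link lengths (metres) for a hypothetical two-joint planar arm.
# These are NOT the WidowX 200's actual dimensions.
L1 = 0.20
L2 = 0.15

def forward_kinematics(theta1, theta2):
    """Return the (x, y) position of the end effector for two joint angles (radians).

    A kinematic model like this lets an operator (or the robot itself)
    predict where a given motor command will place a body part.
    """
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

# Example: shoulder at 45 degrees, elbow bent 90 degrees.
print(forward_kinematics(math.radians(45), math.radians(90)))
```

A classical controller relies on such hand-written equations; the point of the study is that the robot can learn an equivalent model from observation instead.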
How did the scientists enable the robot to model itself?
Scientists cannot see the brain maps formed inside a person's mind, or what a person is thinking at any given moment; at least, we don't have the technology yet. Similarly, if a robot imagines something, a scientist can't see it simply by peeking into the robot's neural network. The researchers describe a robot's brain as a "black box", so to find out whether a robot can model itself, they designed an interesting experiment.
Describing the experiment in an interview with ZME Science, Hod Lipson, one of the authors of the study and the director of Columbia University's Creative Machines Lab, explained:
“You can imagine yourself, every human can imagine where they are in space but we don’t know exactly how this works. Nobody can look into the brain even of a mouse and say here is how the mouse sees itself.”
So during their study, the researchers placed a robot arm called WidowX 200 in a room surrounded by five cameras. The live feed from all five cameras was connected to the robot's neural network, so the robot could see itself through them. As WidowX performed different kinds of body movements in front of the cameras, it observed how its different body parts behaved in response to different motor commands.
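As a rough sketch of this kind of self-observation learning, one could train a small network to predict where a body part will land for a given motor command and compare that prediction against what the cameras report. The architecture, input and output sizes, and training data below are assumptions for illustration, not the network used in the study.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 6 joint commands in, a 3-D end-effector position out.
# The study's self-model predicts full body occupancy; this is a simplified stand-in.
self_model = nn.Sequential(
    nn.Linear(6, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3),
)
optimizer = torch.optim.Adam(self_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(joint_commands, observed_positions):
    """One gradient step: nudge the self-model so its prediction of where the
    body ends up matches what the cameras actually observed."""
    pred = self_model(joint_commands)
    loss = loss_fn(pred, observed_positions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# During hours of self-observation, each motion supplies one
# (motor command, observed outcome) pair for training.
commands = torch.randn(32, 6)       # placeholder batch of motor commands
observations = torch.randn(32, 3)   # placeholder camera-derived positions
print(train_step(commands, observations))
```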
After three hours, the robot stopped moving. Its deep neural network had collected all the information it needed to model the robot's entire body. The researchers then ran a second experiment to test whether the robot had successfully modeled itself: they assigned it a complex task that involved touching a red sphere while avoiding a large obstacle in its path.
Moreover, the robot had to touch the sphere with a particular body part, the end effector. To complete the task, WidowX needed to propose and follow a safe trajectory that would let it reach the sphere without a collision. Surprisingly, the robot did so without any human help, and for the first time Boyuan Chen and his team showed that a robot can learn to model itself.
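To illustrate how a learned self-model can support this kind of planning, here is a hedged sketch of a crude sampling planner: it asks a stand-in self-model where each candidate pose would put the end effector, discards poses that the model predicts would hit the obstacle, and keeps the pose that gets closest to the target. The scene geometry, the toy analytic "self-model", and the sampling strategy are all assumptions for illustration, not the authors' method.

```python
import math
import random

# Hypothetical scene: a target sphere to touch and a spherical obstacle to avoid.
TARGET = (0.30, 0.10, 0.20)      # centre of the red sphere, metres (illustrative)
OBSTACLE = (0.15, 0.05, 0.15)    # obstacle centre (illustrative)
OBSTACLE_RADIUS = 0.08

def predicted_end_effector(joint_angles):
    """Stand-in for the learned self-model: maps joint angles to a predicted
    end-effector position. In the study this prediction would come from the
    trained neural network, not an analytic formula."""
    x = 0.2 * math.cos(joint_angles[0]) + 0.15 * math.cos(joint_angles[0] + joint_angles[1])
    y = 0.2 * math.sin(joint_angles[0]) + 0.15 * math.sin(joint_angles[0] + joint_angles[1])
    z = 0.1 + 0.1 * math.sin(joint_angles[2])
    return (x, y, z)

def plan(num_samples=20000):
    """Rejection-sampling planner: keep the sampled pose whose predicted
    end effector is closest to the target while staying clear of the obstacle."""
    best, best_dist = None, float("inf")
    for _ in range(num_samples):
        angles = [random.uniform(-math.pi, math.pi) for _ in range(3)]
        pos = predicted_end_effector(angles)
        if math.dist(pos, OBSTACLE) < OBSTACLE_RADIUS:
            continue  # this pose would collide, according to the self-model
        d = math.dist(pos, TARGET)
        if d < best_dist:
            best, best_dist = angles, d
    return best, best_dist

print(plan())
```

The key design point is that all collision checks run against the robot's own prediction of where its body will be, which is exactly what a learned self-model provides.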
Self-modeling robots can advance the field of artificial intelligence
The WidowX robot arm is not exactly an advanced machine; it can only perform a limited number of actions and movements. We generally look forward to a future run by robots and machines far more complex than WidowX. When asked whether any robot could learn to model itself using the same approach, Professor Lipson told ZME Science:
“We did it with a very simple cheap robot (WidowX 200) that we can just buy on Amazon but this should work on other things. Now the question is how complex a robot can be and will this still work? This work for a six-degree robot, will this work for a driverless car? Will this work for 18 motors, a spider robot? And that’s what we gonna do next, we gonna try to push this to see how far it can go.”
Many recent AI-based innovations, such as drones, driverless cars, and humanoids like Sophia, perform multiple functions at the same time. If these machines learn to imagine themselves and others, including humans, this could lead to a robot revolution. The researchers believe that the ability to model themselves and others would allow robots to program, repair, and function on their own, without human supervision.
“We rely on factory robots, we rely on drones, we rely more and more on these robots, and we can’t babysit all these robots all the time. We can’t always model them or program them, it’s a lot of work. We want the robots to model themselves and we are also interested in working on how robots can model other robots. So they can help each other, keep taking care of themselves, adapt, and be much more resilient and I think it’s gonna be important,” said Professor Lipson.
The study is published in the journal Science Robotics.