MIT researchers have put together C-LEARN, a system that should allow anyone to teach a robot new tasks without having to code.
Quasi-intelligent robots are already a part of our lives, and someday soon, their full-fledged robotic offspring will be too. But until (or rather, unless) they reach a level of intelligence where we can teach them verbally, as you would a child, instructing a robot will require you to know how to code. Since coding is complicated (more complicated than just doing the dishes yourself, anyway), it's unlikely that regular people will have much use for robots.
Unless, of course, we could de-code the process of instructing robots. Which is exactly what roboticists at MIT have done. Called C-LEARN, the system should make the task of instructing your robot as easy as teaching a child. Which is a bit of good-news-bad-news, depending on how you feel about the rise of the machines: good, because we can now have robot friends without learning to code, and bad, because technically the bots can use the system to teach one another.
How to train your bot
So as I've said, there are two ways you can go about it. The first is to program the robot, which requires coding expertise and takes a lot of time. The other is to show the bot what you want it to do, either by tugging on its limbs or moving digital representations of them around, or by doing the task yourself and having it imitate you. For us muggles, the latter is the way to go, but it takes a lot of work to teach a machine even simple movements, and even then it can only repeat those movements, not adapt them.
C-LEARN is meant to strike a middle road and address the shortcomings of both methods by arming robots with a knowledge base of simple steps that they can intelligently apply when learning a new task. A human user first helps build up this base by working with the robot. The paper describes how the researchers taught Optimus, a two-armed robot, by using software to simulate the motion of its limbs.
The researchers demonstrated motions such as grasping the top of a cylinder or the side of a block, in different positions, repeating each motion seven times from each position. The motions varied slightly each time, so the robot could look for underlying patterns in them and integrate those patterns into its data bank. If, for example, the simulated gripper always ended up parallel to the object, the robot would note that this alignment is important to the task and constrain its future motions to maintain it.
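To make that idea concrete, here is a minimal sketch of how such a constraint might be inferred from repeated demonstrations. The function name, data layout, and tolerance are assumptions for illustration, not the researchers' actual implementation; the point is simply that a geometric relation which holds nearly unchanged across all seven repetitions gets promoted to a constraint.

```python
import numpy as np

# Illustrative sketch only: a relation that survives every noisy
# demonstration is treated as intentional and stored as a constraint.

def infer_parallel_constraint(demos, tol_deg=5.0):
    """demos: list of (gripper_axis, object_axis) unit vectors, one per demo.
    Returns True if the gripper stayed parallel to the object every time."""
    angles = []
    for gripper_axis, object_axis in demos:
        cosine = abs(np.dot(gripper_axis, object_axis))  # abs() ignores flips
        angles.append(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))
    # A near-zero angle across all repetitions suggests the parallelism
    # was deliberate, so we keep it as a rule for future motions.
    return max(angles) < tol_deg

# Seven slightly perturbed demonstrations of a "parallel" grasp
demos = [(np.array([0.0, 0.0, 1.0]),
          np.array([np.sin(e), 0.0, np.cos(e)]))
         for e in np.radians([0.5, 1.2, 0.8, 2.0, 1.5, 0.3, 1.1])]
print(infer_parallel_constraint(demos))  # True -> store "keep parallel" rule
```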
By this point, the robot is very similar to a young child "that just knows how to reach for something and grasp it," according to lead author Claudia Pérez-D'Arpino. But starting from this database, the robot can learn new, complex tasks from a single demonstration. All you have to do is show it what you want done, then approve or correct its attempt.
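A rough sketch of that one-shot step, again with invented names and data rather than the paper's method: each keyframe of the single demonstration is matched to the closest primitive in the knowledge base, and the resulting plan is handed to the operator for approval or correction.

```python
import numpy as np

def plan_from_demo(keyframes, primitives):
    """keyframes: gripper poses sampled from the single demonstration.
    primitives: name -> (template_pose, constraints) from the knowledge base."""
    plan = []
    for pose in keyframes:
        # Match each demonstrated pose to the nearest known primitive
        name = min(primitives,
                   key=lambda n: np.linalg.norm(primitives[n][0] - pose))
        plan.append((name, primitives[name][1]))
    return plan  # a (primitive, constraints) sequence the user approves or fixes

# Hypothetical knowledge base and a two-keyframe demonstration
primitives = {
    "grasp_top":  (np.array([0.0, 0.0, 0.3]), ["gripper parallel to object"]),
    "grasp_side": (np.array([0.2, 0.0, 0.1]), ["approach perpendicular to face"]),
}
demo = [np.array([0.01, 0.0, 0.29]), np.array([0.19, 0.02, 0.11])]
print(plan_from_demo(demo, primitives))
# [('grasp_top', [...]), ('grasp_side', [...])]
```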
Does it work?
To test the system, the researchers taught Optimus four multistep tasks — to pick up a bottle and place it in a bucket, to grab and lift a horizontal tray using both hands, to open a box with one hand and use the other to press a button inside it, and finally to grasp a handled cube with one hand and pull a rod out of it with the other. Optimus was shown how to perform each task once, made 10 attempts at each, and succeeded 37 out of 40 times. Which is pretty good.
The team then went a step further and transferred Optimus's knowledge base, including its understanding of the four tasks, to a simulation of Atlas, Boston Dynamics' famously bullied bot. Atlas managed to complete all four tasks using the data. When the researchers corrupted the data banks by deleting some of the information (such as the constraint to keep the gripper parallel to the object), Atlas failed to perform the tasks. A system like this would let us transfer the motion models built up by one bot over thousands of hours of training and experience to any other robot, anywhere in the world, almost instantly.
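Since the learned constraints are just data, the transfer itself can be as simple as serializing one robot's knowledge base and loading it on another. A toy sketch (the file name and schema are invented here) that also mimics the corrupted-data-bank experiment:

```python
import json

# Optimus's learned rules, stored as plain data
knowledge_base = {"grasp_top": {"constraints": ["gripper parallel to object"]}}

with open("optimus_kb.json", "w") as f:
    json.dump(knowledge_base, f)

with open("optimus_kb.json") as f:
    transferred = json.load(f)   # "Atlas" loads Optimus's experience

# Deleting a learned constraint mimics the corruption experiment:
transferred["grasp_top"]["constraints"].clear()
# ...a planner relying on the parallelism rule would now fail the grasp.
```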
Pérez-D'Arpino is now testing whether having Optimus interact with people for the first time can refine its movement models. Afterward, the team wants to make the robots more flexible in how they apply the rules in their data banks, so that they can adjust their learned behaviors to whatever situation they're faced with.
The goal is to make robots that can perform complex, dangerous, or just plain boring tasks with high precision. Applications could include bomb disposal, disaster relief, high-precision manufacturing, and helping sick people with housework.
The findings will be presented later this month at the IEEE International Conference on Robotics and Automation in Singapore.
You can read the full paper “C-LEARN: Learning Geometric Constraints from Demonstrations for Multi-Step Manipulation in Shared Autonomy” here.