Robotic chefs have made headlines many times in recent years. Hype aside, cooking is a highly challenging task for a robot: many companies have built prototype robot chefs, but none is commercially available, and all lag well behind human cooks in skill.
However, a newly created robot chef could be a game-changer for robot cooking.
A team of researchers at Cambridge University have trained a robot to watch and learn from cooking videos and then recreate the dish. They trained their robot with a cookbook of eight simple salad recipes. After watching a video of a human cooking one of the salad dishes, the robot could identify which recipe was being prepared and then make it itself.
The videos also helped the robot to expand its cookbook. At the end of the experiment, the robot had come up with a ninth salad recipe on its own. The results, the researchers argue, show how video content can be a valuable source of data for automated food production, and could accelerate the deployment of robot chefs.
“We wanted to see whether we could train a robot chef to learn in the same incremental way that humans can – by identifying the ingredients and how they go together in the dish,” Grzegorz Sochacki, a researcher at Cambridge University’s Department of Engineering and the paper’s first author, said in a media statement.
Time to cook
For their study, Sochacki and his team devised eight salad recipes and filmed themselves preparing them. They then used a publicly available neural network to train their robot chef. This network had been pre-trained to recognize a range of objects, including the fruits and vegetables used in the salad recipes.
The robot analyzed each frame of the video and identified the different objects and features, such as a knife and the ingredients, as well as the human demonstrator’s arms, hands, and face.
The recipes and videos were then converted into vectors, numerical representations that the robot could compare mathematically.
By analyzing the ingredients and observing the actions of the human chef, the robot deduced the recipes being prepared. Out of the 16 videos it observed, the robot identified the correct recipe in 93% of the cases, even though it detected only 83% of the chef’s actions. Moreover, the robot exhibited the ability to discern minor variations within a recipe, recognizing them as variations rather than distinct recipes.
“It’s amazing how much nuance the robot was able to detect,” said Sochacki. “These recipes aren’t complex – they’re essentially chopped fruits and vegetables, but it was really effective at recognising, for example, that two chopped apples and two chopped carrots is the same recipe as three chopped apples and three chopped carrots.”
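The matching behavior Sochacki describes can be sketched in a few lines. The snippet below is a toy illustration, not the team's actual pipeline: it assumes a small hypothetical ingredient vocabulary and uses cosine similarity, which treats proportional ingredient counts (two apples and two carrots versus three of each) as the same recipe.

```python
import math

# Hypothetical ingredient vocabulary; the real system detects many more objects.
INGREDIENTS = ["apple", "carrot", "banana", "orange"]

def to_vector(counts):
    """Map {ingredient: count} to a fixed-order count vector."""
    return [counts.get(name, 0) for name in INGREDIENTS]

def cosine(u, v):
    """Cosine similarity: 1.0 for proportional vectors, 0.0 for disjoint ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def identify(observed, cookbook):
    """Return the name of the cookbook recipe most similar to the observation."""
    return max(cookbook,
               key=lambda name: cosine(to_vector(observed),
                                       to_vector(cookbook[name])))

cookbook = {
    "apple-carrot salad": {"apple": 2, "carrot": 2},
    "banana-orange salad": {"banana": 1, "orange": 2},
}

# Three apples and three carrots is proportional to two of each,
# so it matches the existing recipe rather than counting as a new one.
print(identify({"apple": 3, "carrot": 3}, cookbook))  # apple-carrot salad
```

Because cosine similarity ignores overall scale, scaling every ingredient up or down leaves the match unchanged, which mirrors the variation-tolerance the researchers observed.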
This is a pretty big deal. But, like other demonstrations before it, this robochef has many limitations.
The videos used to train the robot aren’t like the videos made by social media influencers, with fast cuts and visual effects. The robot would struggle to identify a carrot if the human demonstrator’s hand was wrapped around it, the researchers explained; to be recognized, the carrot had to be held up so the robot could see it fully.
However, as robot chefs get better and faster at identifying ingredients in food videos, the researchers said they might one day be able to use websites such as YouTube to learn a whole range of recipes. In the meantime, it looks like we’ll have to continue cooking our own food without the help of a robot cook in our kitchen.
The study was published in the journal IEEE Access.