Would you visit a restaurant where the food is made by a robot? I would give it a go, and so would many others: about 77% of consumers said they were willing to try food created with the help of a robot or AI, according to a 2023 report. However, there's one caveat these surveys don't seem to mention: cost.
Several robo-chefs, some quite capable, have been unveiled over the years. While the way they handle complex dishes is impressive, their high cost, running upwards of $350,000, has curbed enthusiasm. The restaurant industry is notorious for low margins and fierce competition, so robot-cooked dishes are simply too expensive to serve right now.
But that may soon change: researchers at Stanford University have shown that sophisticated robots don't require exorbitant budgets.
The Stanford engineers devised a sophisticated wheeled bot that can cook a three-course Cantonese meal. It costs only $32,000 to build as a prototype and is powered by artificial intelligence (AI). The robot cooks shrimp, cleans up after itself, and calls an elevator for delivery, all without human supervision.
Named Mobile ALOHA (an acronym for "a low-cost open-source hardware system for bimanual teleoperation"), the robot is built with off-the-shelf and 3D-printed parts to keep costs low. Mobile ALOHA mastered seven different tasks that demanded a mix of mobility and dexterity, from rinsing pans to offering high fives, showcasing the robot's impressive versatility.
Robot systems that perform complex manipulation tasks have traditionally been confined to stationary tabletop activities. And despite using advanced AI architectures such as transformers and diffusion models, they often fall short in mobility and dexterity, key attributes for performing practical, everyday tasks.
In contrast, Mobile ALOHA is designed to coordinate its arms and its wheeled base seamlessly. In the videos, you can see it crack open eggs, add soy sauce, stir a pan, chop onions, and even plate the three dishes. It's all quite remarkable to watch.
The robot's training involved a blend of direct operation and observation. For instance, to learn to cook shrimp, the robot was remotely operated through the task 20 times, each time with slight variations, which let it learn different ways of accomplishing the same goal. Once the human operators showed it the ropes, the robot could find its way around the kitchen by itself.
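In machine-learning terms, this is imitation learning: the robot logs observation-action pairs while a human teleoperates it, then trains a policy to reproduce those actions on its own. The sketch below shows a minimal behavior-cloning loop of that kind; the network, data shapes, and hyperparameters are illustrative assumptions rather than the team's actual code, which relies on a more sophisticated transformer-based policy.

```python
# Minimal behavior-cloning sketch (illustrative; not the Mobile ALOHA codebase).
# Assumes demonstrations were logged as (observation, action) pairs during teleoperation.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM, ACT_DIM = 64, 14   # hypothetical: flattened sensor features in, joint targets out

# Stand-in for 20 teleoperated demos of roughly 500 timesteps each.
observations = torch.randn(20 * 500, OBS_DIM)
actions = torch.randn(20 * 500, ACT_DIM)
loader = DataLoader(TensorDataset(observations, actions), batch_size=256, shuffle=True)

# A small MLP policy stands in for the real transformer-based policy.
policy = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(), nn.Linear(256, ACT_DIM))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

for epoch in range(10):
    for obs, act in loader:
        loss = nn.functional.mse_loss(policy(obs), act)  # imitate the operator's action
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```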
Mobile ALOHA was also fed training data from previous teleoperated robot exercises that don't necessarily involve cooking, including routine tasks one might encounter in a kitchen, such as tearing off a paper towel. This combination of new and old data, known as "co-training," helped the robot acquire the skills to be useful in a restaurant without the thousands, if not millions, of training examples conventional AI systems typically need.
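Conceptually, co-training just means that each training batch is drawn partly from the small set of new demonstrations and partly from the larger pool of older ones, so the scarce new data isn't drowned out. The sketch below shows one way to do that with a weighted sampler; the dataset sizes and the roughly 50/50 mix are assumptions for illustration, not the published recipe.

```python
# Co-training sketch: every batch mixes new mobile-manipulation demos with older
# tabletop demonstrations. Sizes, shapes, and the 50/50 weighting are assumptions.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, WeightedRandomSampler

OBS_DIM, ACT_DIM = 64, 14
new_demos = TensorDataset(torch.randn(2_000, OBS_DIM), torch.randn(2_000, ACT_DIM))      # small new-task set
prior_demos = TensorDataset(torch.randn(50_000, OBS_DIM), torch.randn(50_000, ACT_DIM))  # large prior pool

combined = ConcatDataset([new_demos, prior_demos])

# Weight samples so that, on average, half of each batch comes from each dataset,
# even though the prior pool is 25x larger than the new one.
weights = torch.cat([
    torch.full((len(new_demos),), 0.5 / len(new_demos)),
    torch.full((len(prior_demos),), 0.5 / len(prior_demos)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
loader = DataLoader(combined, batch_size=256, sampler=sampler)

obs, act = next(iter(loader))  # a mixed batch, usable with the same imitation loss as before
```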
Despite its advancements, Mobile ALOHA isn’t without its limitations. Its size and design restrict its use in confined spaces. The next steps for the researchers include refining these aspects and enhancing the robot’s autonomous learning abilities.
Moreover, the Stanford team plans to train Mobile ALOHA on more complex tasks, such as sorting and folding crumpled laundry. The researchers note that handling laundry has been particularly challenging for robots because of its unpredictable shapes and textures, but they expect their training technique to enable robots to take on tasks once deemed impossible.