Let’s face it: even the most cutting-edge robots developed today are stumbling goofs. So, in an effort to make robots less blundering, a team at MIT has lent human reflexes to one of their lab’s creations: a robot called Hermes. When the bot is about to stumble over an obstacle, a human operator – strapped into a rig of actuators and motors synced to the bot’s movements – is alerted. In a split second, Hermes is back on his feet thanks to the innate reflexes of good ol’ humans.
To be fair, Hermes isn’t that impressive when you consider he isn’t autonomous. Rather, Hermes is a sort of puppet – an avatar for the human operator. When the human says jump, the robot jumps. When the human picks up a hammer, the robot picks up a hammer. When the human uses said hammer to punch through a wall, the robot follows suit. Yes, MIT can be quite fun.
“For humanoid robots and legged robots in general, keeping balance is critical to being able to carry out any task,” said PhD student Albert Wang. “We’ve decided to tackle this head-on by feeding the balance sensation back to the human via forces on his waist, and that way we can take advantage of the natural reflexes and learning capability of the human to be able to keep the robot balanced.”
First, the team had to contrive a way to measure Hermes’ balance and weight distribution. So, on each leg they fitted load sensors that measure how much pressure the robot is exerting on the ground. Based on these forces, the MIT engineers could calculate the momentary center of pressure – the point where the robot was concentrating its weight at any instant. Past a critical threshold, the robot was in danger of falling. This is where the balance-feedback interface comes in. Basically, it’s an exoskeleton packed with metal bars and motors that the operator wears around the waist. When Hermes is about to fall, a signal is sent to the exoskeleton, which pushes the person back and forth, mimicking the robot’s weight distribution.
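The idea of deriving a center of pressure from per-foot load sensors can be sketched in a few lines. This is a simplified illustration, not MIT’s actual code – the sensor positions and force readings below are invented for the example:

```python
# Illustrative sketch: estimating a center of pressure (CoP) from ground
# load sensors. The CoP is simply the force-weighted average of the
# sensor positions on the ground plane.

def center_of_pressure(sensors):
    """sensors: list of (x, y, force) tuples, where (x, y) is a sensor's
    position in meters and force is the vertical load it measures."""
    total = sum(f for _, _, f in sensors)
    if total == 0:
        raise ValueError("no ground contact")
    x = sum(px * f for px, _, f in sensors) / total
    y = sum(py * f for _, py, f in sensors) / total
    return x, y

# Hypothetical readings: more load on the front sensors than the heels,
# so the CoP shifts forward (toward +x).
sensors = [(0.1, 0.05, 300.0), (0.1, -0.05, 300.0),    # toes
           (-0.1, 0.05, 100.0), (-0.1, -0.05, 100.0)]  # heels
print(center_of_pressure(sensors))
```

If the CoP drifts too close to the edge of the robot’s footprint, it is about to tip – which is exactly the condition the interface relays to the operator.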
“The interface works by pushing harder on the operator as the robot’s center of pressure approaches the edge of the support polygon,” Wang explains. “If the robot is leaning too far forward, the interface will push the operator in the opposite direction, to convey that the robot is in danger of falling.”
To test the interface, Wang performed a simple experiment: he hit Hermes with a hammer while PhD student Joao Ramos, acting as the operator, was unaware of when the strike would fall. As Wang struck the robot, the platform exerted a similar jolt on Ramos, who reflexively shifted his weight to regain his balance, causing the robot to also catch itself, according to MIT News. Another fun test involved Hermes punching through drywall.
“These experiments show the versatility of the human operator. In one test, the robot unexpectedly got its arm stuck in the wall. But, because the human was in the loop, the operator could arrive at a creative solution which was translated directly to the robot,” Wang says. “Our next goal is to try more complex coordinated movements such as swinging an axe or opening a spring-loaded door. These actions are difficult for many robots. If the robot stands stiff while pushing on a door, it tends to tip over. You have to lean your body weight into it and catch yourself as the door swings open. Because it’s so natural to humans, you can have the human do it.”
Hermes and his future cousins could prove handy in disaster relief situations where it’s just too dangerous to send humans and useless to send clumsy autonomous bots. Given time and enough punches in the gut, Hermes could grow to have its own reflexes.
“This interface likely won’t even distract a person,” said Jonathan Hurst, associate professor of mechanical, industrial, and manufacturing engineering at Oregon State University, who was not involved in the research. “It’s normal to keep your balance while focusing on a task. But perhaps more important than just a way to control a robot in the absence of knowing how to do it autonomously is being able to observe and collect data from the robot. Given hours of data recording the details of human strategies for balance and pose adjustment, I’d be willing to bet they will discover some relatively simple approaches for autonomous strategies.”