Personal navigation has come a long way since asking a stranger for directions was one of your best bets for reaching an unknown destination. With GPS and the world wide web at our fingertips, finding your way around has never been easier, even when traveling to a foreign country. The visually impaired, however, have not been able to enjoy the benefits of this remarkable technology, with many restricted to walking canes whose design and functionality have changed little over the years.
Researchers at Stanford University want to flip this paradigm on its head. Taking cues from the same obstacle-detecting technology that allows autonomous cars to travel on busy roads without human input, the researchers have devised a high-tech yet affordable walking cane that similarly helps the visually impaired to navigate their environment.
The augmented cane features a number of sensors and is largely made from off-the-shelf parts, and its navigation software is based on open-source code. In fact, the study comes with a parts list and soldering instructions, so anyone can assemble their own version of the augmented cane, perhaps for a friend or relative who would find one useful.
“We wanted something more user-friendly than just a white cane with sensors,” says Patrick Slade, a graduate research assistant in the Stanford Intelligent Systems Laboratory. “Something that can not only tell you there’s an object in your way, but tell you what that object is and then help you navigate around it.”
This isn’t the first smart walking cane, but it’s probably the most versatile and affordable design to date. According to the study’s authors, other canes with similar functionality can weigh up to 50 pounds (22 kg) and cost at least $6,000. In contrast, the Stanford design weighs just 3 pounds (1.3 kg) and its parts cost $400.
The parts include a LIDAR sensor, a 3D laser-scanning technology originally developed in the early 1960s to detect submarines from aircraft. It works by emitting a train of laser pulses that strike surfaces and obstacles in their path. By measuring the time it takes each pulse to reflect back to its source, the cane gathers real-time information about stationary or moving obstacles directly in front of it.
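The time-of-flight arithmetic behind this is simple. Here is a minimal sketch in Python; the speed of light is a physical fact, but the function name and example values are illustrative rather than taken from the Stanford design:

```python
# A minimal sketch of time-of-flight ranging, the principle behind lidar.
# The function name and example values are illustrative, not from the
# Stanford design.

SPEED_OF_LIGHT_M_S = 299_792_458.0  # speed of light in a vacuum (m/s)

def distance_from_echo(round_trip_time_s: float) -> float:
    """Distance to an obstacle, given a laser pulse's round-trip time.

    The pulse travels out to the obstacle and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# An echo arriving 13.3 nanoseconds after emission corresponds to an
# obstacle roughly two meters away:
print(distance_from_echo(13.3e-9))  # ~1.99 (meters)
```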
A motorized, omnidirectional wheel attached to the tip of the cane stays in constant contact with the ground, giving the user live physical feedback.
Other sensors include GPS, accelerometers, magnetometers, and gyroscopes — the kind of hardware found in a smartphone — that monitor and track the cane’s geographic location, speed, and direction.
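To give a flavor of how these sensors complement one another, here is a minimal complementary-filter sketch in Python. It shows the general technique of blending a gyroscope's smooth but drifting heading with a magnetometer's noisy but drift-free one; this is an assumption about the standard approach, not the paper's actual sensor-fusion code:

```python
import math

# A generic complementary filter: the gyroscope gives a smooth heading
# that drifts over time, while the magnetometer gives a noisy heading
# that does not drift. Blending the two yields a stable estimate.
# Illustrative only; not the Stanford system's actual filter.

def fuse_heading(prev_heading_rad: float,
                 gyro_rate_rad_s: float,
                 mag_heading_rad: float,
                 dt_s: float,
                 alpha: float = 0.98) -> float:
    """Blend a gyro-integrated heading with a magnetometer heading."""
    # Integrate the gyroscope's angular rate to propagate the heading.
    gyro_heading = prev_heading_rad + gyro_rate_rad_s * dt_s
    # Shortest angular difference between the two estimates,
    # wrapped into the range [-pi, pi].
    error = math.atan2(math.sin(mag_heading_rad - gyro_heading),
                       math.cos(mag_heading_rad - gyro_heading))
    # Trust the gyro mostly (alpha), but pull slowly toward the
    # magnetometer to cancel long-term drift.
    return gyro_heading + (1.0 - alpha) * error

# Example: heading was 0 rad, gyro reports 0.1 rad/s over 0.01 s,
# and the magnetometer reads 0.05 rad.
print(fuse_heading(0.0, 0.1, 0.05, 0.01))
```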
All of these sensors feed real-time information to an AI that controls robotic actuators in the cane to automatically steer the user towards an objective while navigating obstacles. For instance, the visually impaired user may set their destination to a convenience store or a local coffee shop. On the way there, the cane gently tugs and nudges, either left or right, so the user can move around obstacles.
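The actual planner driving the motorized wheel is more sophisticated than a news story can capture, but the core decision can be sketched in a few lines of hypothetical Python: steer toward the goal when the path is clear, and veer away from a detected obstacle when it comes within a safety margin. All names, thresholds, and gains below are illustrative assumptions:

```python
from typing import Optional

# A hypothetical sketch of the steering decision, not the actual
# Stanford controller: head toward the goal bearing when the path is
# clear, and veer away from an obstacle inside the safety margin.

def steering_nudge(goal_bearing_rad: float,
                   obstacle_bearing_rad: Optional[float],
                   obstacle_distance_m: Optional[float],
                   safe_distance_m: float = 1.0) -> float:
    """Return a steering command in radians relative to the current
    heading: positive nudges the user right, negative nudges left."""
    clear = (obstacle_bearing_rad is None
             or obstacle_distance_m is None
             or obstacle_distance_m > safe_distance_m)
    if clear:
        # No obstacle nearby: steer straight toward the destination.
        return goal_bearing_rad
    # Veer to whichever side the obstacle is NOT on.
    avoid_direction = -1.0 if obstacle_bearing_rad >= 0.0 else 1.0
    # Nudge harder the closer the obstacle is to the user.
    urgency = (safe_distance_m - obstacle_distance_m) / safe_distance_m
    return goal_bearing_rad + avoid_direction * urgency * 0.5

# Example: goal dead ahead (0 rad), obstacle slightly to the left
# (-0.2 rad) at 0.5 m: the command nudges the user to the right.
print(steering_nudge(0.0, -0.2, 0.5))  # 0.25 (rad, rightward)
```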
The cane was tested in the field by both visually impaired and blindfolded sighted volunteers who had to use the augmented cane to navigate through hallways and traverse outdoor waypoints.
“We want the humans to be in control but provide them with the right level of gentle guidance to get them where they want to go as safely and efficiently as possible,” says Mykel Kochenderfer, an associate professor of aeronautics and astronautics and an expert in aircraft collision-avoidance systems.
Compared to the conventional white cane, the augmented cane allowed volunteers with impaired vision to walk about 20 percent faster. Blindfolded sighted volunteers walked nearly 35 percent faster with the augmented cane than with the white cane.
But impressive as these results are, they could be even better. The researchers caution that the augmented cane is still very much a work in progress, and they would like to run more safety tests and experiments before it is ready for commercial release to the public.
The augmented cane was described in the journal Science Robotics.