It’s been fairly easy for some to adopt a remote working model during the pandemic, but manufacturing and warehouse workers have had it rougher — some tasks just need people to be physically present in the workplace.
But now, one team is working on a solution for the traditional factory floor that could allow more workers to do their jobs from home.
Columbia Engineering announced that researchers have won a grant to develop a project titled “FMRG: Adaptable and Scalable Robot Teleoperation for Human-in-the-Loop Assembly.” The project combines machine perception, human-computer interaction, human-robot interaction, and machine learning.
The researchers have developed a “physical-scene-understanding algorithm” that converts camera images of a robot workspace into a virtual 3D scene representation.
Handling 3D models
The system analyzes the robot worksite and converts it into a physical scene representation in which each object is represented by a 3D model that mimics its shape, size, and physical attributes. A human operator then specifies the assembly goal by manipulating these virtual 3D models.
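Neither the announcement nor the award abstract includes code, so as a rough illustration only, here is a minimal Python sketch of what such an editable scene representation could look like. Every class, field, and method name here is hypothetical, not taken from the Columbia system:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a virtual 3D scene, loosely modeled on the project
# description; none of these names come from the actual system.

@dataclass
class SceneObject:
    """One object in the robot workspace, reconstructed from camera images."""
    name: str
    mesh_file: str       # 3D model approximating the object's shape
    size_m: tuple        # bounding-box dimensions in meters (x, y, z)
    mass_kg: float       # estimated physical attribute
    pose: tuple          # position (x, y, z) plus an orientation quaternion

@dataclass
class VirtualScene:
    """Editable 3D representation that the human operator manipulates."""
    objects: list = field(default_factory=list)

    def set_goal_pose(self, name: str, pose: tuple) -> None:
        # The operator specifies the assembly goal by moving virtual models;
        # in this sketch, that is just overwriting an object's target pose.
        for obj in self.objects:
            if obj.name == name:
                obj.pose = pose
                return
        raise KeyError(f"no object named {name!r} in the scene")
```

In a scheme like this, the operator works with familiar virtual objects rather than low-level robot commands, which is what could make the interface approachable for workers who are not trained roboticists.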
Given the task goals and the robot configuration, a reinforcement learning algorithm infers a planning policy. The algorithm can also estimate its own probability of success and use that estimate to decide when to request human assistance; otherwise, it carries out the work automatically.
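The announcement doesn’t spell out how that handoff works, but gating autonomy on an estimated success probability is straightforward to sketch. In this hypothetical Python fragment, the policy is assumed to return both an action and a confidence score, and the system defers to a remote operator whenever the score falls below a threshold; the function names and the 0.8 cutoff are illustrative assumptions, not details from the project:

```python
CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff; a real system would tune this

def run_assembly_step(policy, observation, request_human_help):
    """Execute one step, deferring to a human when the policy is unsure.

    `policy` is assumed to map an observation to (action, success_probability);
    `request_human_help` stands in for the teleoperation interface.
    """
    action, p_success = policy(observation)
    if p_success < CONFIDENCE_THRESHOLD:
        # Low estimated probability of success: hand control to the operator.
        action = request_human_help(observation)
    return action

# Example with a stub policy that is unsure, so the human is consulted:
action = run_assembly_step(
    policy=lambda obs: ("pick_up_gear", 0.55),
    observation={"camera": "frame_0"},
    request_human_help=lambda obs: "operator_guided_grasp",
)
print(action)  # -> operator_guided_grasp
```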
The project is led by Shuran Song, an assistant professor of computer science at Columbia University. She said she is pleased that the system they envision will allow workers who are not trained roboticists to operate the robots:
“I am excited to see how this research could eventually provide greater job access to workers regardless of their geographical location or physical ability.”
Automation for the future
The team received $3.7 million in funding from the National Science Foundation (NSF). The NSF stated that the award period starts on January 1, with an estimated end date of December 31, 2025. The NSF award abstract describes the positive impact such an effort could have on businesses and workers:
“The research will benefit both the manufacturing industry and the workforce by increasing access to manufacturing employment and improving working conditions and safety. By combining human-in-the-loop design with machine learning, this research can broaden the adoption of automation in manufacturing to new tasks. Beyond manufacturing, the research will also lower the entry barrier to using robotic systems for a wide range of real-world applications, such as assistive and service robots.”
The abstract notes that the team is collaborating with NYDesigns and LaGuardia Community College “to translate research results to industrial partners and develop training programs to educate and prepare the future manufacturing workforce.”
Song is directing the vision-based perception and machine learning designs for the physical-scene-understanding algorithms. Steven Feiner, a professor of computer science at Columbia University, is working on the 3D and VR user interface. Matei Ciocarlie, an associate professor of mechanical engineering at Columbia University, is building the robot learning and control algorithms. Before joining the faculty, Ciocarlie was a scientist at Willow Garage and at Google, and he contributed to the development of the open-source Robot Operating System.
A takeaway: news about robots often provokes anxious remarks about automation costing humans their jobs. Here is a project that, once complete, has the potential to use robotics to complement human capabilities instead.
Nancy Cohen is a contributing author.