Researchers Demonstrate Articulated Robot Motion for SLAM Using Small Depth Camera

Before a robot arm can reach into a tight space or pick up a delicate object, the robot needs to know precisely where its hand is. Researchers at Carnegie Mellon University's Robotics Institute have shown that a camera attached to the robot's hand can rapidly create a 3-D model of its environment and also locate the hand within that 3-D world.

A camera attached to a robot's hand rapidly creates a 3-D model of its environment while locating the hand within that model. (credit: Personal Robotics Lab)

Doing so with imprecise cameras and wobbly arms in real time is tough, but the CMU team found they could improve the accuracy of the map by incorporating the arm itself as a sensor, using the angles of its joints to better determine the pose of the camera. This would be important for a number of applications, including inspection tasks, said Matthew Klingensmith, a Ph.D. student in robotics.

The researchers will present their findings on May 17 at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden. Siddhartha Srinivasa, associate professor of robotics, and Michael Kaess, assistant research professor of robotics, joined Klingensmith in the study.

Placing a camera or other sensor in the hand of a robot has become feasible as sensors have grown smaller and more power-efficient, Srinivasa said. That's important, he explained, because robots "usually have heads that consist of a stick with a camera on it." They can't bend over like a person could to get a better view of a work space.

But an eye in the hand isn't much good if the robot can't see its hand and doesn't know where its hand is relative to objects in its environment. It's a problem shared with mobile robots that must operate in an unknown environment. A popular solution for mobile robots is called simultaneous localization and mapping, or SLAM, in which the robot pieces together input from sensors such as cameras, laser radars and wheel odometry to create a 3-D map of the new environment and to figure out where the robot is within that 3-D world.
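
To make the idea concrete, the toy sketch below (illustrative only, not the researchers' code) shows the two coupled estimates that SLAM maintains: the robot's own pose, updated here from odometry, and a map built by projecting sensor observations through that pose. A real SLAM system would also correct the pose estimate against the map; this sketch omits that step for brevity, and all names and data are made up.

```python
# Minimal illustration of the SLAM loop: track the robot's pose while
# accumulating sensor observations into a shared map. Hypothetical data.
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D transform for a planar pose (x, y, heading)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

pose = se2(0.0, 0.0, 0.0)  # robot's believed pose in the world frame
world_map = []             # accumulated map points in the world frame

# Fake odometry: each step the robot moves 0.5 m forward and turns 0.1 rad.
for dx, dy, dtheta in [(0.5, 0.0, 0.1)] * 5:
    # 1. Localization: compose the incremental motion onto the pose estimate.
    pose = pose @ se2(dx, dy, dtheta)

    # 2. Mapping: transform sensor-frame observations into the world frame
    #    using the current pose estimate, and add them to the map.
    scan = np.array([[1.0, -0.2, 1.0],
                     [1.0,  0.0, 1.0],
                     [1.0,  0.2, 1.0]])          # homogeneous sensor-frame points
    world_map.extend((pose @ scan.T).T[:, :2])   # drop the homogeneous 1

print(f"Pose estimate:\n{pose}\nMap now holds {len(world_map)} points")
```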

"There are several algorithms available to build these detailed worlds, but they require accurate sensors and a ridiculous amount of computation," Srinivasa said.

Those algorithms often assume that little is known about the pose of the sensors, as might be the case if the camera was handheld, Klingensmith said. But if the camera is mounted on a robot arm, he added, the geometry of the arm will constrain how it can move.

"Automatically tracking the joint angles enables the system to produce a high-quality map even if the camera is moving very fast or if some of the sensor data is missing or misleading," Klingensmith said.

The researchers demonstrated their Articulated Robot Motion for SLAM (ARM-SLAM) using a small depth camera attached to a lightweight manipulator arm, the Kinova Mico. Using it to build a 3-D model of a bookshelf, they found that it produced reconstructions equivalent to, or better than, those of other mapping techniques.

"We still have much to do to improve this approach, but we believe it has huge potential for robot manipulation," Srinivasa said. Toyota, the U.S. Office of Naval Research and the National Science Foundation supported this research.
