Researchers at the Personal Robotics Laboratory at Cornell University are teaching robots to find their way in new surroundings and to manipulate objects. They presented their work at the 2011 Robotics: Science and Systems Conference, held at the University of Southern California.
Ashutosh Saxena, an assistant professor of computer science, led the research team, which created a system that lets a robot identify the objects in a room after scanning its environment. The robot carries a 3-D camera that captures images of the room; the images are stitched together to form a single 3-D view of the entire room. That 3-D image is then divided into segments based on the distances and discontinuities between objects.
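The article does not give the team's segmentation algorithm, but the idea of splitting a scan wherever the depth jumps can be illustrated with a minimal region-growing sketch in Python. The distance threshold and the greedy flood-fill below are assumptions for illustration, not the actual method:

```python
import numpy as np

def segment_point_cloud(points, dist_thresh=0.05):
    """Greedily group 3-D points into segments: a point closer than
    dist_thresh to a segment member joins that segment; a larger gap
    (a discontinuity) starts a new segment.

    points: (N, 3) array of x, y, z coordinates from the stitched scan.
    Returns a list of index arrays, one per segment.
    """
    unvisited = set(range(len(points)))
    segments = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = [seed], [seed]
        while queue:
            i = queue.pop()
            rest = np.fromiter(unvisited, dtype=int)
            if rest.size == 0:
                break
            # Points within the threshold of point i join its segment.
            dists = np.linalg.norm(points[rest] - points[i], axis=1)
            close = rest[dists < dist_thresh]
            for j in close:
                unvisited.discard(int(j))
                queue.append(int(j))
                members.append(int(j))
        segments.append(np.array(members))
    return segments
```

A real system would also use cues such as surface normals and color, but even this distance-only rule captures why a plate and the table beneath it end up in different segments.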
The team labeled most of the objects in 28 home scenes and 24 office scenes, and provided the labeled scenes to the robot for training. The computer analyzed characteristics such as texture, color and neighboring objects to learn the features shared by objects with the same label. When the robot scans a new environment, it tries to match the objects in each segment against the objects stored in its memory. Context helps: in one experiment, the robot first found a monitor and then used that information to identify a nearby keyboard. Overall, it identified 88% of the objects in office scenes and 83% in home scenes.
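As a rough illustration of how a neighboring object can sharpen a visual guess, here is a Python sketch of two-pass contextual labeling. The label set, the CONTEXT bonus weights, and the input dictionaries are hypothetical; the actual system learns such relationships from the labeled scenes rather than using hand-set bonuses:

```python
# Hypothetical co-occurrence bonuses: CONTEXT[(a, b)] boosts label a
# when a neighboring segment currently looks like b.
LABELS = ["monitor", "keyboard", "table", "wall"]
CONTEXT = {
    ("keyboard", "monitor"): 2.0,  # keyboards usually sit near monitors
    ("monitor", "keyboard"): 2.0,
    ("keyboard", "table"): 0.5,
}

def label_segments(visual_scores, neighbors):
    """Two-pass contextual labeling.

    visual_scores: dict segment_id -> {label: score} from a classifier
                   trained on color/texture features.
    neighbors:     dict segment_id -> list of adjacent segment ids.
    """
    # Pass 1: best guess from visual evidence alone.
    guess = {s: max(sc, key=sc.get) for s, sc in visual_scores.items()}
    # Pass 2: add context bonuses based on the neighbors' guesses.
    final = {}
    for s, sc in visual_scores.items():
        adjusted = dict(sc)
        for label in adjusted:
            for nb in neighbors.get(s, []):
                adjusted[label] += CONTEXT.get((label, guess[nb]), 0.0)
        final[s] = max(adjusted, key=adjusted.get)
    return final

scores = {
    0: {"monitor": 0.9, "keyboard": 0.1, "table": 0.0, "wall": 0.0},
    1: {"monitor": 0.2, "keyboard": 0.3, "table": 0.4, "wall": 0.1},
}
print(label_segments(scores, {0: [1], 1: [0]}))
# {0: 'monitor', 1: 'keyboard'} -- segment 1 flips from 'table' to
# 'keyboard' once the adjacent monitor is taken into account.
```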
Robots have been programmed to observe an array of cups, identify their handles and then grasp them correctly. Placing objects is harder, however, because of the many choices involved. The research team trained the robot to generalize its placing strategies so it can put objects down in new locations. The robot scans its surroundings with the camera and tests candidate locations and orientations for placement: on a table it lays a plate flat, but in a dishwasher it stands the plate upright. In the experiments, the robot placed objects correctly 98% of the time, and 95% of the time in new environments.
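To illustrate the idea of testing candidate placements, the following Python sketch scores every (surface, pose) pair and keeps the best. The stability and fit terms and the surface descriptions are invented placeholders; the real system learns its scoring function from examples of good and bad placements rather than using these hand-written rules:

```python
import itertools

def stability(pose, surface):
    """Placeholder: higher when the object rests on flat support."""
    return 1.0 if surface.get("flat") and pose == "flat" else 0.0

def fit(pose, surface):
    """Placeholder: higher when the pose matches a slotted receptacle."""
    return 1.0 if surface.get("slotted") and pose == "upright" else 0.0

def best_placement(surfaces, poses=("flat", "upright")):
    """Test every candidate (surface, pose) pair and keep the best."""
    best, best_score = None, float("-inf")
    for surface, pose in itertools.product(surfaces, poses):
        score = stability(pose, surface) + fit(pose, surface)
        if score > best_score:
            best, best_score = (surface["name"], pose), score
    return best

# A plate lands flat on the table but upright in the dishwasher rack.
table = {"name": "table", "flat": True}
rack = {"name": "dishwasher rack", "flat": False, "slotted": True}
print(best_placement([table]))  # ('table', 'flat')
print(best_placement([rack]))   # ('dishwasher rack', 'upright')
```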