Nov. 1, 2018
Conventional robots have their drawbacks: they can be cumbersome and costly, and they typically perform only a single type of task.
Modular robots, built from interchangeable parts, or modules, are far more flexible. If a part breaks, it can be removed and replaced. Components can be rearranged as needed, or, better yet, the robots can figure out how to reconfigure themselves based on the tasks they are assigned and the environments they navigate.
Now, a Cornell-led research team has created modular robots that can perceive their surroundings, make decisions and autonomously assume different shapes to perform a variety of tasks, an accomplishment that brings the dream of adaptive, multipurpose robots a step closer to reality.
“This is the first time modular robots have been demonstrated with autonomous reconfiguration and behavior that is perception-driven,” said Hadas Kress-Gazit, associate professor in the Sibley School of Mechanical and Aerospace Engineering and principal investigator on the project. “We are creating a modular system that is able to do different tasks autonomously. By changing the high-level task, it totally changes its behavior.”
The findings were reported in Science Robotics on Oct. 31, 2018.
The robots are built from wheeled, cube-shaped modules that can detach and reattach to form new shapes with different capabilities. The modules, developed by researchers at the University of Pennsylvania, have magnets that attach them to one another, as well as Wi-Fi to communicate with a centralized system.
The interchangeable modules are joined by a sensor module, equipped with multiple cameras and a small computer, that gathers and processes data about the robot's surroundings. The robot's software comprises a high-level planner, which directs its actions and reconfiguration, and perception algorithms for mapping, navigating and classifying the environment.
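To make that division of labor concrete, here is a minimal sketch in Python of how a perception-driven sense/plan/act loop like this might be organized. Every name and data value in it (EnvironmentInfo, perceive, plan, the example terrain) is a hypothetical stand-in for illustration, not the actual software described in the paper.

    from dataclasses import dataclass

    @dataclass
    class EnvironmentInfo:
        """What the sensor module reports after mapping and classifying."""
        terrain: str          # e.g. "flat", "stairs", "narrow_passage"
        target_visible: bool

    def perceive() -> EnvironmentInfo:
        # Stand-in for the camera-based mapping/classification pipeline.
        return EnvironmentInfo(terrain="narrow_passage", target_visible=True)

    def plan(task: str, env: EnvironmentInfo) -> tuple[str, str]:
        """Choose a (configuration, behavior) pair suited to the task and terrain."""
        if task == "retrieve object" and env.terrain == "narrow_passage":
            return ("Proboscis", "pickUp")
        return ("Car", "drive")

    def control_loop(task: str) -> None:
        env = perceive()                    # sense the surroundings
        config, behavior = plan(task, env)  # decide on a shape and an action
        print(f"Reconfigure into {config}, then execute {behavior}")

    control_loop("retrieve object")

The point of the sketch is the separation the article describes: perception produces a model of the environment, and the high-level planner, not a human operator, chooses both the robot's shape and its behavior.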
In a previous study, the researchers developed an open-source online tool for creating, simulating and testing designs for robot configurations and behaviors. They populated the resulting design library by hosting competitions that invited students to invent and test different shapes.
The library currently includes 57 possible robot configurations, such as Proboscis (with a long arm in front), Scorpion (modules arranged in perpendicular lines, with a horizontal row in front) and Snake (modules in a single line), and 97 behaviors, such as highReach, pickUp, drive and drop. When given a task, the robot's high-level planner searches the library for shapes and behaviors that satisfy the current needs.
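As a toy illustration of that lookup, the snippet below matches a required behavior against a few library entries. The entries and the matching rule are assumptions made for illustration; the actual planner is considerably more sophisticated.

    # A few entries from a hypothetical library: configuration -> supported behaviors.
    LIBRARY = {
        "Car":       {"drive", "drop"},
        "Proboscis": {"pickUp", "highReach"},
        "Snake":     {"drive"},
        "Scorpion":  {"pickUp", "drive"},
    }

    def find_configurations(required_behavior: str) -> list[str]:
        """Return every configuration whose behavior set covers the requirement."""
        return [config for config, behaviors in LIBRARY.items()
                if required_behavior in behaviors]

    print(find_configurations("pickUp"))  # ['Proboscis', 'Scorpion']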
Although other modular robot systems have successfully performed specific tasks in controlled environments, these robots are the first to demonstrate fully autonomous behavior and reconfiguration based on both the task and an unfamiliar environment, Kress-Gazit said.
“I want to tell the robot what it should be doing, what its goals are, but not how it should be doing it,” she said. “I don’t actually prescribe, ‘Move to the left, change your shape.’ All these decisions are made autonomously by the robot.”
The researchers demonstrated the system in three experiments. In the first, a robot was instructed to find, retrieve and deliver all pink and green objects to a designated zone marked by a blue square on the wall. The robot explored in its “Car” configuration, reshaped itself into “Proboscis” to retrieve a pink object from a narrow passage, and then returned to its car shape to deliver its haul.
In the second experiment, the robot was instructed to place a circuit board in a mailbox, marked with pink tape, at the top of a set of stairs. In the third, it was charged with placing a postage stamp high on the box: essentially the same task, but one that requires different behaviors in different environments.
The researchers found the low-level hardware and software to be the most prone to error; in the second experiment, for example, the robot needed 24 attempts to succeed, with the stairs posing a particular challenge. Once such challenges are resolved, these robots could be used for jobs that require maneuvering through changing terrain, such as cleanup after an earthquake or another natural disaster, where a robot might need to enter cracks and crevices in buildings, Kress-Gazit said.
“Modular robots in general are just fascinating systems, because you’re not restricted by one shape, so there’s a lot of flexibility,” she said. “The hardware is still in research stages, but if we had commercial modular robots they would be very useful for anything where the environment changes significantly and the robot should adapt to its environment as well.”
The paper was co-authored with Mark Campbell, the John A. Mellowes ’60 Professor of Mechanical Engineering; mechanical engineering doctoral students Jonathan Daudelin and Gangyuan Jing; and Professor Mark Yim and doctoral student Tarik Tosun of the University of Pennsylvania.
The National Science Foundation funded the study.
Video: An Integrated System for Perception-Driven Autonomy with Modular Robots (Video credit: Cornell University)