Oct 12, 2017
How many robots does it take to screw in a lightbulb? Just one, if it is equipped with a new robotic gripper developed by engineers at the University of California San Diego.
The engineers have designed and built a gripper that can pick up and manipulate objects without needing to see them and without prior training. The gripper is unique because it combines three capabilities: it can twist objects, sense them, and build models of the objects it is manipulating. This allows the gripper to work in low-light, low-visibility conditions, for example.
The engineering team, headed by Michael T. Tolley, a roboticist at the Jacobs School of Engineering at UC San Diego, presented the gripper at the International Conference on Intelligent Robots and Systems (IROS), held September 24 to 28 in Vancouver, Canada.
Researchers tested the gripper on an industrial Fetch Robotics robot and demonstrated that it can pick up, manipulate, and model a wide range of objects, from screwdrivers to lightbulbs.
"We designed the device to mimic what happens when you reach into your pocket and feel for your keys."
Michael T. Tolley, Roboticist, The Jacobs School of Engineering, UC San Diego
The gripper has three fingers. Each finger contains three soft, flexible pneumatic chambers that move when air pressure is applied. This gives the gripper more than one degree of freedom, so it can actually manipulate the objects it is holding. Thanks to this design, the gripper can, for instance, screw in lightbulbs, turn screwdrivers, and hold pieces of paper.
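To make the actuation idea concrete, here is a minimal Python sketch of per-finger pneumatic control, assuming three independently addressable chambers per finger. The pressure values, chamber indexing, and the valve-driver interface are hypothetical illustrations, not the team's actual control software.

```python
class SoftFinger:
    """One soft gripper finger with three pneumatic chambers."""

    NUM_CHAMBERS = 3

    def __init__(self, valves):
        self.valves = valves  # hypothetical valve-driver interface

    def set_pressures(self, kpa):
        """Apply a pressure (kPa) to each chamber. Unequal pressures
        bend or twist the finger, giving it multiple degrees of freedom."""
        assert len(kpa) == self.NUM_CHAMBERS
        for chamber, pressure in enumerate(kpa):
            self.valves.set_pressure(chamber, pressure)

    def curl(self, pressure=40.0):
        # Equal pressure in all chambers: the finger curls inward to grip.
        self.set_pressures([pressure] * self.NUM_CHAMBERS)

    def twist(self, pressure=40.0):
        # Unequal pressures: the finger deflects laterally, producing the
        # kind of motion used to turn a screwdriver or screw in a lightbulb.
        self.set_pressures([pressure, pressure * 0.5, 0.0])
```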
Additionally, each finger is covered with a smart sensing skin. The skin is made of silicone rubber with embedded sensors built from conducting carbon nanotubes. The sheets of rubber are then rolled, sealed, and slipped onto the flexible fingers, covering them like skin.
The conductivity of the nanotubes changes as the fingers flex, allowing the sensing skin to detect when the fingers move and when they come into contact with an object. The data the sensors produce are transmitted to a control board, which combines the information to build a 3D model of the object the gripper is manipulating. The process is similar to a CT scan, where 2D image slices add up to a 3D picture.
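The following sketch illustrates that slice-by-slice idea under simplifying assumptions: at each grasp pose, resistance readings from the skin are mapped to estimated contact points, and the accumulated points form a rough 3D model. The linear sensor model, the geometry constants, and the read_resistances() interface are all assumptions for illustration, not the published pipeline.

```python
import math

def contact_points(resistances, finger_angle, baseline=1.0, gain=0.01):
    """Convert per-sensor resistance changes into 3D contact estimates."""
    points = []
    for i, r in enumerate(resistances):
        deflection = (r - baseline) * gain  # assumed linear sensor response
        if deflection > 0:                  # sensor i is pressed against the object
            radius = 0.05 - deflection      # hypothetical finger geometry (m)
            z = i * 0.01                    # assumed sensor spacing along the finger (m)
            points.append((radius * math.cos(finger_angle),
                           radius * math.sin(finger_angle),
                           z))
    return points

def build_model(skin, poses):
    """Accumulate contact points over many grasp poses, the way
    2D CT slices stack up into a 3D picture."""
    cloud = []
    for angle in poses:
        cloud.extend(contact_points(skin.read_resistances(angle), angle))
    return cloud
```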
Tolley attributed the breakthroughs to the team's diverse expertise and experience in soft robotics and manufacturing.
Future steps include adding machine learning and artificial intelligence to the data processing so that the gripper can recognize the objects it is manipulating, rather than just modeling them. Researchers are also exploring 3D printing for the gripper's fingers to make them more durable.
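As a toy illustration of what that recognition step could look like (an assumption on our part, not the team's method), one could reduce each contact-point cloud to a few shape features and match it against previously modeled objects. The feature choices and example labels below are hypothetical.

```python
def features(cloud):
    """Summarize a contact-point cloud as (height, mean radius)."""
    zs = [p[2] for p in cloud]
    radii = [(p[0] ** 2 + p[1] ** 2) ** 0.5 for p in cloud]
    return (max(zs) - min(zs), sum(radii) / len(radii))

def recognize(cloud, examples):
    """Return the label of the closest known object.

    examples: {label: feature tuple}, e.g. {"lightbulb": (0.11, 0.03),
    "screwdriver": (0.20, 0.01)}, built from previously modeled objects.
    """
    h, r = features(cloud)
    return min(examples,
               key=lambda label: (examples[label][0] - h) ** 2
                               + (examples[label][1] - r) ** 2)
```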
This work was supported by the Office of Naval Research (grant N000141712062), the UC San Diego Frontiers of Innovation Scholars Program (FISP), and the National Science Foundation Graduate Research Fellowship (grant DGE-1144086).
Custom Soft Robotic Gripper Sensor Skins for Haptic Object Visualization