By Kalwinder Kaur, Aug 23, 2012
Led by Professor Mark Campbell and Assistant Professor Hadas Kress-Gazit, students at Cornell's Autonomous Systems Lab in Rhodes Hall are working to revolutionize what robots can do on their own. Several student researchers demonstrated their latest contributions in a project demo at the end of last semester.
Using Linear Temporal Logic Mission Planning (LTLMoP), an advanced software toolkit, Robert Villalba '15 formulated high-level commands for a spiderlike robot that can navigate varied terrain without being reprogrammed for each task. Instead of following step-by-step instructions, the robot responds to its environment based on broad specifications.
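The article does not show LTLMoP's actual input format, so the Python sketch below only illustrates the underlying idea: a controller that reacts to sensed conditions according to a declarative rule table rather than a fixed script. The region, sensor, and action names are hypothetical, not taken from the demo.

```python
# Minimal sketch of specification-driven reactive control, in the spirit of
# LTLMoP (not its actual API). Each rule pairs an environment condition with
# the action the robot should take whenever that condition holds; the robot
# is never given a fixed step-by-step program.

RULES = [
    # (condition on the sensed environment, action) -- all names hypothetical
    (lambda env: env["terrain"] == "rubble", "switch_to_high_step_gait"),
    (lambda env: env["obstacle_ahead"],      "turn_left"),
    (lambda env: True,                       "walk_forward"),  # default case
]

def controller_step(env: dict) -> str:
    """Return the first action whose condition matches the sensed environment."""
    for condition, action in RULES:
        if condition(env):
            return action
    raise RuntimeError("specification is incomplete for this environment")

# The same controller handles different terrains with no reprogramming.
print(controller_step({"terrain": "rubble", "obstacle_ahead": False}))
print(controller_step({"terrain": "flat",   "obstacle_ahead": True}))
```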
Ahmed Elsamadisi '14 and his team demonstrated a robotic system that can give a prerecorded campus tour. With a small amount of human supervision, the system works like a tour guide.
To demonstrate the robot, a rolling Segway platform was deployed, with QR code-like tags placed along the walls of Rhodes Hall to give the "vision" system reference points for orienting the robot. Preloaded with specifications, the robot can traverse hallways and turn corners without assistance, playing audio recordings as it moves. It can also modify its behavior when it encounters obstacles.
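The article does not describe the tags' exact format. A common way to implement this kind of tag-based orientation is with fiducial markers such as ArUco, supported by OpenCV; the sketch below detects markers in a camera frame and looks up each detected ID in a hypothetical map of known tag positions along the hallway.

```python
# Sketch of tag-based localization, assuming OpenCV (4.7+) ArUco fiducials
# stand in for the demo's QR code-like wall tags; the tag format and the
# positions below are assumptions for illustration only.
import cv2

# Hypothetical map: tag ID -> known position along the hallway, in meters.
TAG_POSITIONS = {0: (0.0, 0.0), 1: (5.0, 0.0), 2: (10.0, 2.5)}

DETECTOR = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters(),
)

def visible_tag_positions(frame):
    """Return the known positions of all wall tags visible in this frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = DETECTOR.detectMarkers(gray)
    if ids is None:
        return []  # no tags in view; fall back on dead reckoning
    return [TAG_POSITIONS[int(i)] for i in ids.flatten()
            if int(i) in TAG_POSITIONS]
```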
Annie Dai '12 demonstrated a universal robotic "gripper": a balloon filled with coffee grounds that conforms around an object and then, when air is pumped out, jams rigid to grip it. Mounted on a mobile platform, it monitored the "bedrooms" of a mock apartment and performed tasks such as identifying trash, picking it up, and placing it in receptacles. Like the spiderlike robot and the Segway, this robot is directed by simple English commands and responds efficiently to its environment.
Focused on advancing search-and-rescue missions, another team designed a software interface with an interactive map of an area (the engineering quad). Using a touchpad, the user feeds instructions to the robot, which ultimately sends information about what it finds back to the human.
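The article does not detail the interface's message format; one plausible shape for this kind of tasking loop is sketched below, under the assumption that each touch on the map becomes a waypoint task and each report flows back the same way. All field names are hypothetical.

```python
# Sketch of a map-tasking round trip for a search-and-rescue interface;
# the structure is an assumption, not the team's actual design.
from dataclasses import dataclass

@dataclass
class WaypointTask:
    x: float          # map coordinates of the touched point, in meters
    y: float
    instruction: str  # e.g. "search here"

@dataclass
class RobotReport:
    task: WaypointTask
    finding: str      # what the robot observed at the waypoint

# Example round trip: the user taps the map, the robot later reports back.
task = WaypointTask(x=12.5, y=40.0, instruction="search here")
report = RobotReport(task=task, finding="no person detected")
print(report)
```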
Furthermore, the students loaded the interface into a pair of video goggles that mirror the computer screen, giving the user a heads-up display suited to search-and-rescue environments.