
Deceptive Robot Lures Predator Robot

Drawing on the deceptive behaviors of squirrels and birds, researchers at the Georgia Institute of Technology have developed robots that can deceive one another. The research, funded by the Office of Naval Research and led by Professor Ronald Arkin, suggests applications the military could adopt in the future. The work is highlighted in the November/December 2012 issue of IEEE Intelligent Systems.

Deceptive Robots

Reviewing biological research, Arkin and his team learned that squirrels gather acorns and store them in specific locations. The animal then patrols the hidden caches, routinely returning to check on them. When another squirrel shows up hoping to raid the hiding spots, the hoarding squirrel changes its behavior: instead of checking on the true locations, it visits empty cache sites, trying to deceive the would-be raider.

Arkin and his Ph.D. student Jaeeun Shim implemented the same strategy in a robotic model and demonstration. The deceptive behavior worked: the deceiving robot lured the “predator” robot to the false locations, delaying the discovery of the protected resources.
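The strategy amounts to a simple mode switch in the guard robot's patrol routine. Below is a minimal sketch of that idea in Python; the cache coordinates, decoy sites, and detection flag are hypothetical placeholders, not the authors' actual implementation.

```python
import random

# Illustrative sketch of the squirrel-inspired patrol strategy; all
# locations and names here are assumptions, not the published system.

TRUE_CACHES = [(1, 2), (4, 7), (9, 3)]   # sites actually holding resources
DECOY_CACHES = [(2, 9), (6, 1), (8, 8)]  # empty sites used for deception

def next_patrol_target(predator_detected: bool) -> tuple:
    """Pick the next site to visit.

    Normal mode: check on a real cache.
    Deceptive mode: visit an empty decoy site to mislead an observer.
    """
    if predator_detected:
        return random.choice(DECOY_CACHES)
    return random.choice(TRUE_CACHES)

# The guard robot shifts its route once a competitor appears.
print(next_patrol_target(predator_detected=False))  # a true cache
print(next_patrol_target(predator_detected=True))   # a decoy site
```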

“This application could be used by robots guarding ammunition or supplies on the battlefield,” said Arkin, a Regents Professor in Georgia Tech’s School of Interactive Computing. “If an enemy were present, the robot could change its patrolling strategies to deceive humans or another intelligent machine, buying time until reinforcements are able to arrive.”


Arkin and his student Justin Davis have also created a simulation and demo based on birds that might bluff their way to safety. In Israel, Arabian babblers in danger of being attacked will sometimes join other birds and harass their predator. This mobbing process causes such a commotion that the predator will eventually give up the attack and leave.

Arkin's team investigated whether a simulated babbler is more likely to survive if it feigns strength it does not actually have. The team’s simulations, based on biological models of dishonesty and the handicap principle, show that deception is the best strategy when the addition of deceitful agents pushes the group to the minimum size required to frustrate the predator into fleeing. The reward a few deceitful agents gain, Arkin says, sometimes outweighs their risk of being caught.
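That trade-off can be captured in a toy model of a single mobbing encounter. The sketch below is a hedged illustration only: the mob-size threshold, the catch risk, and the function name are illustrative assumptions, not the parameters of the published model.

```python
import random

# Toy model of the mobbing trade-off; the threshold and risk values
# are assumptions chosen for illustration, not the authors' figures.

MOB_THRESHOLD = 5   # mob size at which the predator gives up and flees
CATCH_RISK = 0.2    # chance a bluffing (weak) bird is singled out

def mob_outcome(strong_birds: int, bluffing_birds: int) -> str:
    """Return the result of one mobbing encounter."""
    mob_size = strong_birds + bluffing_birds
    if mob_size >= MOB_THRESHOLD:
        # Deceit pays off when bluffers push the mob past the size
        # needed to frustrate the predator into fleeing.
        return "predator flees"
    if bluffing_birds and random.random() < CATCH_RISK:
        return "a bluffer is caught"
    return "mob too small; predator attacks"

random.seed(0)
print(mob_outcome(strong_birds=4, bluffing_birds=0))  # too small: attack
print(mob_outcome(strong_birds=4, bluffing_birds=1))  # bluff tips the balance
```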

“In military operations, a robot that is threatened might feign the ability to combat adversaries without actually being able to effectively protect itself,” said Arkin. “Being honest about the robot’s abilities risks capture or destruction. Deception, if used at the right time in the right way, could possibly eliminate or minimize the threat.”

From the Trojan Horse to D-Day, deception has always played a role in wartime; there is an entire Army field manual on its use and value on the battlefield. But Arkin is the first to admit that robot deception, particularly toward humans, raises serious ethical questions.

“When these research ideas and results leak outside the military domain, significant ethical concerns can arise,” said Arkin. “We strongly encourage further discussion regarding the pursuit and application of research on deception for robots and intelligent machines.”

This isn’t the first time Arkin has worked in this area. In 2010, he and Georgia Tech Research Institute research engineer Alan Wagner studied how robots could use deceptive behavior to hide from humans or other intelligent machines.
