Jul 19 2017
A concept called Empowerment has been developed by scientists at the University of Hertfordshire in the UK to help robots serve and protect humans while keeping themselves safe.
Robots are finding increasing application in workplaces and homes, a trend that looks set to continue. Many of these robots will have to interact with humans in unpredictable situations. Self-driving cars, for instance, must keep their occupants safe while also protecting the vehicle from damage, and robots caring for the elderly will need to adapt to difficult situations and respond to their owners' needs.
Thinkers such as Stephen Hawking have recently warned about the possible dangers of artificial intelligence, prompting public debate. "Public opinion seems to swing between enthusiasm for progress and downplaying any risks, to outright fear," says Daniel Polani, a scientist involved in the research, which was published recently in Frontiers in Robotics and AI.
The notion of "intelligent" machines running out of control and turning on their human creators is, however, not new. The science fiction writer Isaac Asimov proposed his three laws of robotics in 1942 to govern how robots should interact with humans. In simple terms, the laws state that a robot should neither harm a human nor allow a human to come to harm, that it should obey orders from humans, and that it should protect its own existence, as long as doing so causes no harm to a human.
The laws are meant to have a positive effect, but they can be misinterpreted, particularly when robots fail to grasp ambiguous and nuanced human language. Asimov's stories are in fact filled with examples of tragic consequences that follow from robots misinterpreting the spirit of the laws.
One problem is that the concept of "harm" is complex, context-specific, and hard to explain clearly to a robot. If a robot cannot understand "harm," how can it avoid causing it?
We realized that we could use different perspectives to create 'good' robot behavior, broadly in keeping with Asimov's laws.
Christoph Salge, a scientist involved in the study
The team's concept is called Empowerment. Instead of trying to make a machine grapple with difficult ethical questions, it has robots constantly seek to keep their own options open.
Empowerment means being in a state where you have the greatest potential influence on the world you can perceive. So, for a simple robot, this might be getting safely back to its power station, and not getting stuck, which would limit its options for movement. For a more futuristic, human-like robot this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives.
Christoph Salge, a scientist involved in the study
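In the researchers' information-theoretic formulation, Empowerment is the capacity of the channel from an agent's actions to its future sensor states; in a deterministic world this reduces to counting how many distinct states the agent can reach. The following is a minimal sketch of that deterministic special case, using a toy grid world of my own invention (the grid size, blocked cell, and action set are illustrative assumptions, not the team's model):

```python
import math
from itertools import product

# Toy 5x5 grid world with one blocked cell and deterministic moves.
# n-step Empowerment (deterministic case) = log2 of the number of
# distinct states reachable via n-step action sequences.
GRID = 5
BLOCKED = {(2, 2)}
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0), "stay": (0, 0)}

def step(state, action):
    """Apply one action; bumping into a wall or obstacle leaves the agent in place."""
    x, y = state
    dx, dy = ACTIONS[action]
    nx, ny = x + dx, y + dy
    if 0 <= nx < GRID and 0 <= ny < GRID and (nx, ny) not in BLOCKED:
        return (nx, ny)
    return state

def empowerment(state, n):
    """log2 of the number of distinct end states over all n-step action sequences."""
    reachable = set()
    for seq in product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))

# An agent in the open has more options than one wedged in a corner,
# so its Empowerment is higher.
print(empowerment((2, 3), 2))  # open cell: more reachable states
print(empowerment((0, 0), 2))  # corner: fewer reachable states
```

This captures the intuition in the quote above: getting stuck (or cornered) shrinks the set of reachable states and therefore the agent's Empowerment. The full formulation handles stochastic dynamics via channel capacity rather than simple counting.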
The team coded the Empowerment concept mathematically so that it can be used by a robot. The researchers first developed the concept in 2005; in a recent key development, they expanded it so that the robot also tries to maintain a human's Empowerment. "We wanted the robot to see the world through the eyes of the human with which it interacts," explains Polani. "Keeping the human safe consists of the robot acting to increase the human's own Empowerment."
In a dangerous situation, the robot would try to keep the human alive and free from injury. We don't want to be oppressively protected by robots to minimize any chance of harm, we want to live in a world where robots maintain our Empowerment.
Christoph Salge, a scientist involved in the study
This altruistic Empowerment concept could power robots that adhere to the spirit of Asimov's three laws, from robot butlers to self-driving cars. "Ultimately, I think that Empowerment might form an important part of the overall ethical behavior of robots," says Salge.