
Robots Programmed with Appropriate Human Values Can Make Wise Decisions for Humans

It should come as no surprise that the final lecture in the series 'The Emergence of Intelligent Machines: Challenges and Opportunities' dealt with philosophical questions about moral responsibility.

The series deals with what life will be like when artificial intelligences and robots become common. Humans could be allowing machines to make life-and-death decisions, so it is essential to consider what values humans program into these robots.

On May 1, Joseph Halpern, Professor of Computer Science, concluded the series with 'Moral Responsibility, Blameworthiness and Intention: In Search of Formal Definitions,' a talk that dwelt more on philosophy than on robotics.

It all starts with the much-discussed 'Trolley Problem.' A runaway trolley is careening downhill toward a switch controlled by a bystander. If the trolley is sent to the left, five people on the track will be killed; if it is sent to the right, just one person will be killed. In a variation, there is only one track, and a large man sits on a bridge above it. If he is pushed, he will fall in front of the trolley and derail it, saving the five people farther ahead. Analyzing these situations raises questions about blame and intention: the person at the switch had no intention of killing the lone man, but wanted to save the five.

Such questions keep emerging as technology advances. Proposed laws would require self-driving cars to be programmed to choose property damage over injuring people. Another proposal says a car should avoid running into a group of pedestrians even if doing so kills the passenger. In surveys, many people thought this was a good idea, but several added that they would not buy such a car.
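The priority rule described in such proposals can be sketched in a few lines of code. Below is a minimal, hypothetical illustration in Python; the outcome categories, their severity ranking, and the action names are assumptions made for this example, not any real vehicle's or legislature's specification.

```python
# A minimal, hypothetical sketch of the kind of priority rule described
# above. The outcome names and severity ranking are illustrative
# assumptions, not any real vehicle's logic.

from enum import IntEnum


class Outcome(IntEnum):
    """Candidate outcomes, ordered from least to most severe."""
    PROPERTY_DAMAGE = 1     # damage to objects only
    PASSENGER_INJURY = 2    # harm to the car's own occupant
    PEDESTRIAN_INJURY = 3   # harm to people outside the car


def choose_action(options: dict[str, Outcome]) -> str:
    """Pick the action whose predicted outcome is least severe."""
    return min(options, key=options.get)


# Example: swerving damages a fence, braking hard injures the passenger,
# and staying on course would hit a group of pedestrians.
actions = {
    "swerve": Outcome.PROPERTY_DAMAGE,
    "brake": Outcome.PASSENGER_INJURY,
    "continue": Outcome.PEDESTRIAN_INJURY,
}
print(choose_action(actions))  # -> "swerve"
```

Even a toy ranking like this makes the survey result concrete: a passenger buying the car knows that, by design, their own injury outranks only property damage.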

Japan's aging population has prompted proposals to let robots help care for the elderly. Asimov's second law says a robot must obey human commands. What happens if a person asks a robot to help him commit suicide?

Halpern concluded that there is no settled answer here, and that it is essential to reach a consensus about the sort of autonomy given to machines.

Don’t leave it up to the experts.

Joseph Halpern, Professor of Computer Science

Although open to the public, the lecture series was part of a course, CS 4732, 'Ethical and Social Issues in AI.'

Halpern and Bart Selman, Professor of Computer Science and co-creator of the course and lecture series, are co-principal investigators for the Center for Human-Compatible Artificial Intelligence, a nationwide research effort to ensure that the artificial intelligence systems of the future act in a manner aligned with human values.
