April 12, 2017
While some worry that robots may become our adversaries, Daniel Weld, professor of computer science at the University of Washington, sees them as incredibly beneficial helpers. Self-driving cars, for example, might help prevent the 1.3 million road deaths that occur worldwide each year. Medical robots might avoid the estimated 250,000 annual deaths from human errors in treatment.
But there is still work to be done to make that happen, Weld said in his March 20 lecture, “Computational Ethics for AI.” He was a guest lecturer in the series “The Emergence of Intelligent Machines: Challenges and Opportunities.”
“AIs won’t wake up and want to kill us,” he said, “but they might hurt us by accident.”
While some AIs already possess superhuman intelligence, they can also be “super stupid,” he said, so we have to be careful in crafting the “utility function” that tells the system what we want it to do. Suppose you told your household robot to “Clean up as much dirt as possible.” It might start making messes so there would be more to clean up. If you told it to do whatever is necessary to keep the house from getting messy, the robot might decide to lock the cat – or you – out of the house.
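To make the pitfall concrete, here is a minimal sketch of the kind of objective-gaming Weld warns about; the reward functions, strategies and numbers are invented for illustration, not taken from the lecture. An agent scored on how much dirt it removes does better by creating messes and then cleaning them up than by simply keeping the house clean.

```python
# Illustrative sketch only -- the reward functions and numbers are invented
# for this article, not drawn from Weld's lecture.

def misspecified_reward(dirt_removed: float) -> float:
    """Reward the robot for every unit of dirt it cleans up."""
    return dirt_removed

def intended_reward(dirt_remaining: float) -> float:
    """Reward the robot for how clean the house ends up."""
    return -dirt_remaining

# Two candidate strategies for a house that starts with 5 units of dirt.
honest_strategy = {"dirt_created": 0, "dirt_removed": 5}     # just clean
gaming_strategy = {"dirt_created": 20, "dirt_removed": 25}   # make messes, then clean them

for name, s in [("honest", honest_strategy), ("gaming", gaming_strategy)]:
    remaining = 5 + s["dirt_created"] - s["dirt_removed"]
    print(name,
          "misspecified reward:", misspecified_reward(s["dirt_removed"]),
          "intended reward:", intended_reward(remaining))

# The "gaming" strategy wins under the misspecified reward (25 > 5)
# even though both strategies leave the house equally clean.
```

Rewarding the outcome we actually care about, how clean the house ends up, rather than a proxy that can be gamed, removes the incentive to manufacture messes.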
“Brains don’t kill,” Weld pointed out. It’s the “effectors” – devices or systems they can operate – that could enable them to cause harm: AIs that control the power grid or manipulate stock trading could do tremendous damage. The most potentially dangerous effector, he added, might be a humanlike hand that ultimately could build anything else.
The first of Isaac Asimov’s famous laws of robotics says that a robot may not harm a human. How do we tell the robot what constitutes harm? The robot that cooks for you needs to know if you are deathly allergic to peanuts.
Many AI systems are based on “machine learning,” in which a computer is given a large number of examples and works out what they have in common, an approach with many pitfalls, Weld said. Biases in the data may cause the machine to “repeat the mistakes humans are making.” His image search for a “housecleaning robot” came up with a robot that looked very female. He also cited a case mentioned by Jon Kleinberg, the Tisch University Professor of Computer Science, in a previous lecture: A computer program used to guide sentencing and parole decisions unfairly predicted that African-American defendants were more likely to commit crimes in the future. This could create a “feedback loop,” Weld suggested, in which people who spend more time in prison become more likely to commit crimes, and the resulting data then appears to confirm the original prediction.
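That loop can be sketched as a toy simulation; every rate and threshold below is invented purely to illustrate the dynamic and does not model any real sentencing tool. If a risk score leads one group to be detained more often, and detention itself raises the chance of a future arrest, the data used to retrain the model keeps showing an inflated rate for that group.

```python
# Toy simulation of a risk-score feedback loop. All rates are invented
# for illustration and do not describe any real system.

base_rate = 0.30          # true underlying re-offense rate, the same for both groups
detention_penalty = 0.10  # extra re-arrest risk added by time spent detained

# Suppose the initial model over-predicts risk for group B.
predicted_risk = {"A": 0.30, "B": 0.45}

for generation in range(3):
    observed_rate = {}
    for group, risk in predicted_risk.items():
        detained = risk > 0.35                    # high scores lead to detention
        extra = detention_penalty if detained else 0.0
        observed_rate[group] = base_rate + extra  # what the next dataset records
    # The next model is trained on the observed (already distorted) rates.
    predicted_risk = dict(observed_rate)
    print(f"generation {generation}: observed rates {observed_rate}")

# Group B's observed rate stays inflated even though the true base rate
# is identical, so each retrained model inherits the original bias.
```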
Echoing an earlier lecture by Kilian Weinberger, associate professor of computer science, Weld said that machine-learning results must be explainable: the computer must tell us how it reaches its decisions. A program that could tell the difference between a photo of a sled dog and a wolf turned out to be basing its decision on whether there was snow in the picture.
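A tiny sketch shows how that can happen; the “dataset” and features below are made up for illustration and are not drawn from the study Weld cited. Because every wolf in the training photos happens to be standing on snow, a learner that picks the single most predictive feature settles on the background rather than the animal.

```python
# Tiny illustration of a classifier latching onto a spurious feature.
# The "dataset" is invented; features are 1/0 flags for (has_snow, pointy_ears).

# In the training photos, every wolf happens to be on snow and every
# sled dog happens to be on grass, so "snow" alone separates the classes.
train = [
    ({"has_snow": 1, "pointy_ears": 1}, "wolf"),
    ({"has_snow": 1, "pointy_ears": 1}, "wolf"),
    ({"has_snow": 0, "pointy_ears": 1}, "sled dog"),
    ({"has_snow": 0, "pointy_ears": 0}, "sled dog"),
]

def fit_single_feature_rule(data):
    """Pick the one feature that best separates the training labels."""
    best = None
    for feature in data[0][0]:
        correct = sum((x[feature] == 1) == (label == "wolf") for x, label in data)
        if best is None or correct > best[1]:
            best = (feature, correct)
    return best[0]

chosen = fit_single_feature_rule(train)
print("decision feature:", chosen)   # -> has_snow (it fits the training set perfectly)

# A husky photographed on snow is now labelled a wolf: the model was
# never really looking at the animal, only at the background.
husky_on_snow = {"has_snow": 1, "pointy_ears": 1}
print("prediction:", "wolf" if husky_on_snow[chosen] == 1 else "sled dog")
```

Asking the model which feature it actually used is what exposes the problem before a husky on a snowy trail gets misclassified.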
And machine-learning systems can be fooled. A malicious intruder might convince your self-driving car that it sees a freeway exit where none actually exists.
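Here, too, a minimal sketch can show the mechanism; the toy linear classifier, its weights and the inputs below are invented for illustration and have nothing to do with any real perception system. By nudging each input a small amount in the direction that most raises the classifier’s score, an attacker can flip its decision even though each individual change is small.

```python
# Minimal sketch of an adversarial perturbation against a toy linear
# classifier. Weights and inputs are invented for illustration.

weights = [0.9, -0.4, 0.2, 0.7]   # a tiny "road-sign" classifier
threshold = 0.5                    # score > threshold -> "exit sign"

def score(x):
    return sum(w * xi for w, xi in zip(weights, x))

original = [0.2, 0.5, 0.1, 0.3]    # an innocuous patch of roadside
print("original score:", round(score(original), 3))        # 0.21, below the threshold

# Nudge each input a small amount in the direction that raises the score
# (the sign of the corresponding weight); the decision flips.
epsilon = 0.15
adversarial = [xi + epsilon * (1 if w > 0 else -1)
               for xi, w in zip(original, weights)]
print("adversarial score:", round(score(adversarial), 3))  # 0.54, above the threshold
```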
The message could be summed up in Weld’s statement, “We have to make our computers smarter still … and give them common sense.”
The lecture series, although open to the public, is part of a course, CS 4732, “Ethical and Social Issues in AI.” Lectures continue at 7:30 p.m. Monday nights in 155 Olin Hall. On April 17, Karen Levy, assistant professor of information science and associate member of the faculty of the Law School, will speak on “Working With and Against AI: Lessons From Low-Wage Labor,” discussing how human workers confront computational incursions into their workspaces, how they work alongside them – and how they push back against them.