Philosopher and computer scientist Peter Asaro ’94 wants world leaders and ordinary citizens to consider the dangers of programming war machines to decide who lives or dies.
A philosophy and computer science double major at IWU, Asaro (below) takes a multidisciplinary approach to his teaching, research, creative and advocacy work.
When he was in fourth grade, Peter Asaro ’94 got the assignment of making a Valentine’s Day mailbox. But unlike most 9-year-olds, he didn’t dig around his mother’s closet for a shoebox. Instead, he cannibalized circuit boards, a clock-radio speaker and a remote-controlled tank to build a robotic mailbox that visited his classmates’ desks.
The project was a harbinger of what would become a defining interest in Asaro’s life: how technology and people interact and influence one another.
Asaro is a philosopher of science, technology and media. He is also a talented practitioner of technology, having done innovative research in virtual reality, human-computer interaction, artificial intelligence, machine learning and robot vision at the National Center for Supercomputing Applications and the Beckman Institute for Advanced Science and Technology. For Wolfram Research, he helped design the natural language interface for Wolfram|Alpha, which is also used by Apple’s Siri and Microsoft’s Bing to answer math queries.
His multiple talents made him a good fit at Illinois Wesleyan, where he double majored in philosophy and computer science.
Asaro traces his interests in those disciplines to a question raised by one of his IWU philosophy professors more than 20 years ago: Can machines think?
“I’ve never really got off that question,” he says. “It started me on this path.”
He again combined computer science and philosophy at the University of Illinois at Urbana–Champaign, earning master’s degrees in both before completing his Ph.D. in the history, philosophy and sociology of science.
Asaro is now a visiting fellow at Princeton University’s Center for Information Technology Policy and continues as an assistant professor at The New School in New York City, where he served as director of the School of Media Studies’ graduate program. Widely published in international peer-reviewed journals and edited volumes, he is at work on a book on the social and ethical issues raised by advanced robotics.
As a filmmaker, he wrote and directed a feature-length documentary, Love Machine, examining the surprisingly complex relationships between technology and the human experience of love.
Asaro is now filming a sort of sequel to Love Machine, a documentary about autonomous “killer robots” designed and authorized to target and kill on their own without human intervention. Its title: War Machine.
He knows it will take more than a documentary to get policymakers and the general public to take the subject seriously. Asaro also realizes the common perception of “killer robots” derives from fiction like the Terminator films, a scenario that could happen but is still a long way off.
But replace the image of a machine that looks like Arnold Schwarzenegger with one that resembles a drone, and the reality of lethal autonomous robots comes into much sharper focus. “In fact, there are already combat drones capable of autonomous flight, and the autonomous targeting of weapons is in the testing and development stage,” says Asaro, “so it’s better to carefully consider the implications now, while we still have a choice.”
In 2009, he co-founded the International Committee for Robot Arms Control, an NGO committed to the peaceful use of robotics and the regulation of robotic weapons. He is also a leading expert for the Campaign to Stop Killer Robots, which launched publicly in London two summers ago.
As an influential thinker on the legal and ethical implications of lethal autonomous weapons, he has been interviewed by major media outlets and has presented his analyses at two United Nations conferences in Geneva. He also travels the world to encourage political leaders and ordinary citizens to consider and publicly articulate their nations’ policies on fully autonomous weapons.
This spring, Asaro spoke with IWU Magazine by phone from his Brooklyn, N.Y., apartment.
When could we see killer robots on the battlefield?
Asaro (above) addresses an informal meeting of experts at the United Nations in Geneva.
Part of that depends on how you define it. What we have been working with as a definition at the United Nations is autonomous targeting and firing of weapons. If the system automatically picks targets on its own and has the authority to fire a weapon without human supervision, that is an autonomous weapon. Under that description, certain anti-missile defense systems have that capability, though they are not really targeting people — or they are not supposed to be targeting people; they make mistakes sometimes.
There is also some next-generation drone technology that has fully automated capabilities. The U.S. has been developing the X-47B, an aircraft carrier-based drone that carries bombs and missiles and can take off and fly autonomously. That system has been flight-tested and will be ready in five to 10 years. Our real concern is an algorithm, rather than a human, determining what to target.
You have a responsibility as a military commander to make a decision on what to target, and that is a complicated decision. If you are attacking an ammunition depot next to a school, you have to make a calculation: what’s the potential impact of killing students in the school versus the military value of that ammunition depot? A robot doesn’t really understand the situation. It’ll just say this pattern matches that pattern, so it’s a target.
We’re a long way from developing systems that can do that responsibly and accurately. Even if we could, there’s a more fundamental moral question about whether we really want to delegate the authority to actively kill people to machines.
Which nations are the closest to developing these weapons?
The U.S. has been at the forefront for a long time and has certainly spent the most money on it. China has kept pace, especially in drone technology. Israel and U.S. allies throughout NATO are also at the forefront of drone technology.
How is morality compromised when machines replace human soldiers?
You have a large moral burden to confirm that the killing you’re doing is morally and legally justified, and I don’t think machines can do that. They’re just not capable of moral reasoning. They can perform a calculation, and they may do what we would want a human to do in a situation, but that doesn’t make them moral actors.
Machines aren’t really accountable. We can say they didn’t work the way they were supposed to work … but you’re not going to try a robot for a crime. We’re not going to say it’s a moral failing of a robot because it didn’t do what it was supposed to do. Maybe it was a technical mistake or programming error, or maybe it should never have been deployed.
You have discussed the threat of these autonomous weapons as an important human rights issue. How does it rise to that level?
Because it really changes our concept of human dignity. If we think about slavery or torture, those things are bad because the person who is enslaved or tortured suffers a bad consequence, but they’re also bad for everybody. The fact that slavery exists, or torture exists, harms everyone as a human because it diminishes what it means to be human.
There’s a real question about whether allowing machines to take human lives, independent of human control and supervision, would diminish what it means to be human or the value of a human life.
If we think about the use of robotic police forces to suppress peaceful uprisings or just for policing, do we really want machines roaming around using lethal force on civilian populations? That very clearly becomes a human rights question. We allow police to use lethal force in self-defense when they’re doing their jobs, but robots can’t be killed. By that logic, the robot shouldn’t be allowed to harm people, maybe to restrain them, but not [to use] lethal force. The use of lethal force in the military falls under a different set of laws, but human rights still exist in war, so it is more complicated.
But is it possible robots could be superior to humans in moral judgment or in their ability to follow the law?
There are a variety of tasks that can be turned into calculations that computers and robots might perform better than humans, but moral and legal decisions are not reducible to calculations. When you decide to be a virtuous person, you make a certain moral choice or choose one value over another. Through reasoning you ask: Who do I want to be? That’s part of your effort to construct an identity. Robots don’t do that. They just follow the program that’s been given to them. They don’t have the ability to step back and think: Is this the robot I want to be, or do I want to find another path?
What about the argument that robots could save human lives by being deployed in combat?
We already design systems that can be remote-controlled, like robots for deactivating roadside bombs. But there are still people who are responsible and have meaningful control over the weapons. We don’t really need to automate that in order to keep troops safe.
The groups you work with have encouraged the United Nations to take a stand on this issue. How successful have you been?
Asaro and others, including Nobel Peace Laureate Jody Williams, launched the international Campaign to Stop Killer Robots in London with events to inform activists, the media and parliamentarians.
We’re still at the advisory level, trying to bring diplomats from 120 countries up to speed on how this technology works and what the issues are, and we’re hoping they’ll move forward toward a treaty. Their next official meeting will be in November, and we hope at that point they will consider whether to move these meetings to treaty-level negotiations. What we will focus on is the concept of meaningful human control and a requirement that any weapon system that’s developed should retain some form of meaningful human control.
Do you share the concern raised by scientists such as Stephen Hawking that computers with artificial intelligence could soon be robust enough to pose a threat to humanity?
Computers can already do calculations and process certain types of data much better than we can. But what does it mean to say that they are smarter than us? Google can look up information much faster than we can, but it doesn’t know how to cook an egg.
Knowledge is a very practical thing. We will be able to build really smart, capable machines. Whether they become self-aware, that’s difficult to define — and if we can’t define it, I don’t know how we’re going to engineer it. And it seems so complicated; I don’t think we’re going to do it by accident.
What I think would be dangerous is for us to delegate responsibility for various human activities to machines that are not really capable of performing them. In those cases they will make mistakes, or we will simply move the goalposts and change our expectations.
Artificial intelligence is creeping into our lives, but we’re accepting it quickly. Are we being naïve when it comes to our relationship with devices like our smartphones and self-driving cars?
Smartphones are interesting in how fast and how broadly they were accepted. It took about a decade for cell phones to be accepted, but only three to four years for smartphones. They track you and create all this data about you. As you install apps, you’re giving them permission to know who all your contacts are and to read your texts. People have been very willing to trade a lot of privacy for a relatively small amount of functionality and convenience.
Self-driving cars also raise a lot of interesting questions. Whatever you program has some kind of consequence. You should always try to minimize harm, but there may be some situations where it’s not clear what the minimal harm is. Is it better to run over an old person than a young person? Are you trying to protect the occupants more than the people outside the vehicle?
I think the algorithms implemented in the first generation are going to be based mostly on physics, and they’ll try to avoid obstacles. If a car can’t avoid an obstacle, it’s not going to be processing that obstruction as a person or a deer or a light pole.
Was there a specific point, earlier in your life, where you had an epiphany about potential moral problems with some of the computer technology you were busy researching?
Philosophy Professor Larry Colter, who died in 2012, inspired Asaro and many other students.
The first epiphany was in a seminar I took as a philosophy major with [IWU Philosophy Professor] Larry Colter. It was called “Can Machines Think?” That put me on the path of wanting to learn more. I took all the computer science classes that were offered, and I couldn’t get enough of it. I took an artificial intelligence class with [IWU Computer Science Professor] Susan Anderson-Freed.
I got a great education at Wesleyan, and I don’t think I’d be doing what I’m doing if I hadn’t done that. I really kept that pattern going as I went to grad school, trying to mix together computer science and philosophy.
A lot of my work since graduate school has been thinking about this question of social values and ethics and how they relate to technology. As we build technologies, we build values into them, and that question was always in the background. It’s not so much the explicit danger, but the fact that we should recognize that every technology we use has these values built into it.
If all your work could answer one question, what would it be?
How we can improve society with better technology. We’re at a place where we have an enormous amount of technological innovation taking place, but it’s sort of disconnected from a lot of the more humanist, liberal arts sensibilities about what we really want technology to do.
We’ve gotten really good at answering questions about how to get technology to deliver some specific capability, but we haven’t gotten to the point as a society of understanding what it is we want technologies to do for us. That’s part of what I’m trying to do.
I can see it in the killer robots. Sure, we can design these robots to kill people very efficiently, but is this really what we want to do? Or do we want to build technologies that protect civilians and ensure that human rights and human dignity are respected? Isn’t that a better goal to start with?
Of all your roles — computer scientist, advocate, author, filmmaker — which do you personally find the most rewarding?
Teaching is always the most rewarding. Getting to work with the students and seeing how they develop and what they produce is always quite amazing.
I have students who are now working on arms control at the United Nations for their home countries, who are screening their independent films at major film festivals and who are developing the next generation of media technologies. It is always exciting to think that you helped them get to a place where they can make a difference.