Feb 12, 2019
According to a recent study, cooperation among individuals can be increased through the use of autonomous machines.
Scientists from the U.S. Combat Capabilities Development Command's Army Research Laboratory, the Army's Institute for Creative Technologies (ICT), and Northeastern University (NU) worked together on a study that was recently reported in the Proceedings of the National Academy of Sciences.
The research team, headed by Dr Celso de Melo of ARL in association with Dr Jonathan Gratch from ICT and Dr Stacy Marsella from NU, carried out a study in which 1,225 volunteers took part in computerized experiments involving a social dilemma with autonomous vehicles.
Autonomous machines that act on people's behalf, such as robots, drones, and autonomous vehicles, are quickly becoming a reality and are expected to play an increasingly important role on the battlefield of the future. People are more likely to make unselfish decisions that favor the collective interest when asked to program autonomous machines ahead of time, rather than making the decision in real time on a moment-to-moment basis.
Dr Celso de Melo, U.S. Army Research Laboratory
According to de Melo, despite the promise of greater efficiency, it remains unclear whether this paradigm shift will change how people decide when their self-interest is pitted against the collective interest.
“For instance, should a reconnaissance drone prioritize intelligence gathering that is relevant to the squad’s immediate needs or to the platoon’s overall mission?” asked de Melo. “Should a search-and-rescue robot prioritize local civilians or focus on mission-critical assets?”
Our research in PNAS starts to examine how these transformations might alter human organizations and relationships. Our expectation, based on some prior work on human intermediaries, was that AI representatives might make people more selfish and show less concern for others.
Dr Jonathan Gratch, Institute for Creative Technologies
The study's results indicate that volunteers programmed their autonomous vehicles to behave more cooperatively than they would have if driving the vehicles themselves. The evidence suggests the reason is that programming a machine in advance makes selfish short-term rewards less salient, leading people to give more weight to broader societal objectives.
“We were surprised by these findings,” stated Gratch. “By thinking about one’s choices in advance, people actually show more regard for cooperation and fairness. It is as if by being forced to carefully consider their decisions, people placed more weight on prosocial goals. When making decisions moment-to-moment, in contrast, they become more driven by self-interest.”
The results also showed that the effect occurs in an abstract version of the social dilemma, which, according to the researchers, indicates that it generalizes beyond the autonomous-vehicle domain.
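To make concrete what such a social dilemma looks like, here is a minimal sketch in Python of a generic public-goods-style game. The payoff numbers, group size, and the payoff function are assumptions invented for this sketch; they are not the study's actual task or parameters.

```python
# Illustrative sketch of a generic social dilemma (hypothetical payoffs; not
# the study's actual task). Cooperating costs the individual 1 unit but adds
# 2 units to a pool shared equally by a 4-person group, so defecting is always
# the better short-term choice for one individual, yet universal defection
# leaves everyone worse off than universal cooperation.

GROUP_SIZE = 4
POOL_MULTIPLIER = 2.0
COOPERATION_COST = 1.0

def payoff(cooperates: bool, others_cooperating: int) -> float:
    """One person's payoff given their own choice and the rest of the group's."""
    cooperators = others_cooperating + (1 if cooperates else 0)
    shared_benefit = POOL_MULTIPLIER * cooperators / GROUP_SIZE
    return shared_benefit - (COOPERATION_COST if cooperates else 0.0)

# The short-term temptation that is salient in moment-to-moment decisions:
# against three cooperators, defecting (1.5) beats cooperating (1.0).
assert payoff(False, others_cooperating=3) > payoff(True, others_cooperating=3)

# The collective interest served by committing to cooperation in advance:
# each member of an all-cooperator group (1.0) beats an all-defector group (0.0).
assert payoff(True, others_cooperating=3) > payoff(False, others_cooperating=0)
```

The structural point is that defection wins each individual comparison while mutual cooperation wins in aggregate; the study's finding, loosely put, is that programming choices in advance shifts attention from the first comparison toward the second.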
The decision of how to program autonomous machines, in practice, is likely to be distributed across multiple stakeholders with competing interests, including government, manufacturers and controllers. In moral dilemmas, for instance, research indicates that people would prefer other people's autonomous vehicles to maximize preservation of life (even if that meant sacrificing the driver), whereas they would prefer their own vehicle to maximize preservation of the driver's life.
Dr Celso de Melo, U.S. Army Research Laboratory
As these issues are debated, the researchers concluded, it is important to understand that in the potentially more common case of social dilemmas, in which individual interest is pitted against collective interest, autonomous machines can shape how those dilemmas are resolved, and these stakeholders therefore have an opportunity to promote a more cooperative society.