An interdisciplinary team of researchers devised a framework for developing algorithms that more successfully incorporate moral principles into artificial intelligence (AI) decision-making programs. The project was especially focused on AI-human interaction technologies, such as “carebots” or virtual assistants used in healthcare settings.
“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, older adults, and other people who require health monitoring or physical assistance. In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments.”
Veljko Dubljević, Study Corresponding Author and Associate Professor, Science, Technology & Society program, North Carolina State University
Dubljević added, “For example, let’s say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first. How does the carebot decide which patient is assisted first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?”
“Previous efforts to incorporate ethical decision-making into AI programs have been limited in scope and focused on utilitarian reasoning, which neglects the complexity of human moral decision-making. Our work addresses this and, while I used carebots as an example, is applicable to a wide range of human-AI teaming technologies,” Dubljević continued.
Utilitarian decision-making focuses on outcomes and consequences. But humans weigh two other factors when making moral judgments.
The first is the intent behind a given action and the character of the agent performing it. In other words, who or what is carrying out the action, and is it benign or malicious? The second factor is the deed itself: people often regard certain acts, such as lying, as inherently wrong.
The researchers created a mathematical formula and a connected set of decision trees that can be built into AI programs to handle the intricacies of moral decision-making.
These tools are grounded in the Agent, Deed, and Consequence (ADC) Model, which Dubljević and colleagues developed to reflect how people make difficult ethical decisions in the real world.
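The paper’s actual formula and decision trees are not reproduced here, but a minimal, hypothetical sketch can illustrate what an ADC-style evaluation might look like in code. The class MoralSituation, the function adc_judgment, and the weights and thresholds below are illustrative assumptions for this article, not the published model.

```python
from dataclasses import dataclass

# Illustrative sketch of an ADC-style evaluation.
# The scoring ranges, weights, and thresholds are hypothetical;
# the published model defines its own formula and decision trees.

@dataclass
class MoralSituation:
    agent_score: float        # intent/character of the actor: -1 (malicious) .. +1 (benign)
    deed_score: float         # the act itself: -1 (e.g., lying) .. +1 (e.g., helping)
    consequence_score: float  # expected outcome: -1 (harmful) .. +1 (beneficial)

def adc_judgment(s: MoralSituation,
                 w_agent: float = 1.0,
                 w_deed: float = 1.0,
                 w_consequence: float = 1.0) -> str:
    """Combine the three ADC factors into a coarse moral verdict."""
    total = (w_agent * s.agent_score
             + w_deed * s.deed_score
             + w_consequence * s.consequence_score)
    if total > 0.5:
        return "acceptable"
    if total < -0.5:
        return "unacceptable"
    return "needs human review"

# Example: a benign carebot (agent) treating an unconscious patient without
# explicit consent (a questionable deed) to prevent serious harm (good consequence).
example = MoralSituation(agent_score=1.0, deed_score=-0.4, consequence_score=0.9)
print(adc_judgment(example))  # -> "acceptable" under these hypothetical weights
```

The point of such a structure is that the agent and the deed enter the judgment alongside the consequences, rather than consequences alone deciding the outcome, which is the limitation of purely utilitarian approaches noted above.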
Dubljević further stated, “Our goal here was to translate the ADC Model into a format that makes it viable to incorporate into AI programming. We are not just saying that this ethical framework would work well for AI, we are presenting it in language that is accessible in a computer science context.”
He concluded, “With the rise of AI and robotics technologies, society needs such collaborative efforts between ethicists and engineers. Our future depends on it.”
The research is freely available online in the journal AI and Ethics. The study was co-authored by Chang Nam, a professor in NC State’s Edward P. Fitts Department of Industrial and Systems Engineering; Michael Pflanzer; Zachary Traylor; and Joseph Lyons of the Air Force Research Laboratory.
The National Institute for Occupational Safety and Health and the National Science Foundation, under grant number 2043612, provided funding for the study.
Journal Reference
Pflanzer, M., et al. (2022) Ethics in human–AI teaming: principles and perspectives. AI and Ethics. doi:10.1007/s43681-022-00214-z.