A University of Central Florida (UCF) professor and 26 other scientists have published a new study. The study identifies the challenges that humanity must overcome to ensure that artificial intelligence (AI) is safe, reliable, trustworthy, and compatible with human values.
The study was published in the International Journal of Human-Computer Interaction.
Ozlem Garibay, an assistant professor in UCF’s Department of Industrial Engineering and Management Systems, was the study’s lead researcher.
According to Garibay, the coming widespread integration of artificial intelligence could significantly affect human life in ways that are not yet fully understood. Garibay works on AI applications in materials and drug design and discovery, and on how AI affects social systems.
The six challenges identified by Garibay and the research group are listed below.
- Challenge 1, Human Well-Being: AI must be able to discover opportunities for implementation that benefit human well-being. It should also be designed to support the user's well-being during interaction with AI.
- Challenge 2, Responsible: Responsible AI refers to the idea of prioritizing human and societal well-being throughout the AI lifecycle. This ensures that the potential benefits of AI are leveraged in a way that aligns with human values and priorities, while also reducing the risk of ethical breaches or unintended consequences.
- Challenge 3, Privacy: The collection, use, and dissemination of data in AI systems must be considered carefully to protect individuals' privacy and prevent harmful use against individuals or groups.
- Challenge 4, Design: Human-centered design principles for AI systems should use a framework that can inform practitioners. This framework would distinguish between AI with extremely low risk, AI requiring no special measures, AI with extremely high risk, and AI that should not be allowed.
- Challenge 5, Governance and Oversight: A governance framework is needed that considers the entire AI lifecycle, from conception through development to deployment.
- Challenge 6, Human-AI Interaction: To foster an equitable and ethical relationship between humans and AI systems, interactions must be grounded in the fundamental principle of respecting human cognitive capacities. In particular, humans should retain full control over, and responsibility for, the behavior and outcomes of AI systems.
The study, conducted over more than 20 months, incorporates the views of 26 international experts with diverse backgrounds in AI technology.
These challenges call for the creation of human-centered artificial intelligence technologies that prioritize ethics, fairness, and the enhancement of human well-being. They urge the adoption of a human-centered approach that includes responsible design, privacy protection, adherence to human-centered design principles, appropriate governance and oversight, and respectful interaction with human cognitive capacities.
Ozlem Garibay, Assistant Professor, Department of Industrial Engineering and Management Systems, University of Central Florida
Garibay says that, overall, these challenges are a call to action for the scientific community to develop and implement artificial intelligence technologies that prioritize and benefit humanity.
The team of 26 experts includes National Academy of Engineering members and scientists from Asia, Europe, and North America with broad experience across industry, academia, and government. The group also has diverse educational backgrounds, in areas ranging from computer science and engineering to psychology and medicine.
Their work will also be featured as a chapter in the book Human-Computer Interaction: Foundations, Methods, Technologies, and Applications.
Five UCF faculty members co-authored the study:
- Gavriel Salvendy is a university distinguished professor in UCF’s College of Engineering and Computer Science and the founding president of the Academy of Science, Engineering and Medicine of Florida.
- Waldemar Karwowski is a professor and chair of the Department of Industrial Engineering and Management Systems and executive director of the Institute for Advanced Systems Engineering at the University of Central Florida.
- Steve Fiore is director of the Cognitive Sciences Laboratory and a professor with UCF's cognitive sciences program in the Department of Philosophy and the Institute for Simulation & Training.
- Ivan Garibay is an associate professor in industrial engineering and management systems and director of the UCF Artificial Intelligence and Big Data Initiative.
- Joe Kider is an associate professor at the IST School of Modeling, Simulation, and Training and a co-director of the SENSEable Design Laboratory.
Garibay received a doctorate in computer science from UCF and joined UCF's Department of Industrial Engineering and Management Systems, part of the College of Engineering and Computer Science, in 2020.
Journal Reference
Garibay, O. O., et al. (2023). Six Human-Centered Artificial Intelligence Grand Challenges. International Journal of Human–Computer Interaction. https://doi.org/10.1080/10447318.2022.2153320