
Human-Like AI Technology Based on Deep Neural Networks Approach

Developing human-like artificial intelligence technology requires more than simply mimicking human behavior. To be reliable, such technology must also be capable of processing information, or “thinking”, in a way similar to humans.


Image Credit: University of Glasgow.

Research led by the School of Psychology and Neuroscience at the University of Glasgow used 3D modeling to study how Deep Neural Networks, a part of the broader family of machine learning, process information, and to visualize how closely that processing matches human information processing.

The study was published in the journal Patterns.

The scientists believe that the new approach will open opportunities to create more reliable AI technology that processes information as humans do and makes errors we can understand and predict.

One of the difficulties in AI development is to better understand how machines actually process information, and whether that processing matches human information processing, so that accuracy can be ensured.

Deep Neural Networks are often considered the best existing model of human decision-making behavior, matching or even surpassing human performance in some tasks. However, even seemingly simple visual discrimination tasks can reveal clear inconsistencies and errors in AI models when compared with humans.

Presently, Deep Neural Network technology is employed in applications such as face recognition. Although the technology is largely successful in this area, scientists still do not fully understand how these networks process information, and therefore when errors are likely to occur.

In this new study, the researchers addressed these challenges by modeling the visual stimuli fed to the Deep Neural Network and transforming them in multiple ways, which allowed them to demonstrate whether humans and AI models arrive at similar recognition by processing similar information.

When building AI models that behave ‘like’ humans, for instance to recognise a person’s face whenever they see it as a human would do, we have to make sure that the AI model uses the same information from the face as another human would do to recognise it. If the AI doesn’t do this, we could have the illusion that the system works just like humans do, but then find it gets things wrong in some new or untested circumstances.

Philippe Schyns, Study Senior Author, Professor and Head, Institute of Neuroscience and Technology, University of Glasgow

The researchers used a set of 3D modeled faces and asked human participants to rate the similarity of these randomly generated faces to four familiar identities. They then compared this data with the ratings the Deep Neural Networks gave for the same faces, to verify whether humans and the AI made the same decisions, and whether those decisions were based on the same information.
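As a rough illustration of that comparison step, the Python sketch below (not the authors' code) shows how agreement between human and network similarity ratings for one familiar identity could be quantified. The rating_agreement function and the placeholder data are assumptions for illustration only; in the study, the ratings came from participants and from a trained face-identification network judging the same randomly generated 3D faces.

import numpy as np

def rating_agreement(human_ratings: np.ndarray, dnn_ratings: np.ndarray) -> float:
    # Pearson correlation between human and network similarity ratings
    # for the same set of randomly generated faces and one familiar identity.
    return float(np.corrcoef(human_ratings, dnn_ratings)[0, 1])

# Placeholder data standing in for real human judgements and for the
# ratings a trained network would give the same faces.
rng = np.random.default_rng(0)
human = rng.uniform(0.0, 1.0, size=200)  # hypothetical human similarity ratings
dnn = rng.uniform(0.0, 1.0, size=200)    # hypothetical network similarity ratings
print(f"Agreement for this identity: {rating_agreement(human, dnn):.2f}")

A high correlation would indicate that the network's similarity judgements track the human ones, although, as the study emphasizes, agreement in behavior alone does not guarantee that the same facial information was used.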

Significantly, this method allowed the researchers to visualize the results as the 3D faces that drive the behavior of humans and of the networks. For example, a network that correctly classified 2,000 identities was driven by a heavily caricatured face, showing that it identified faces by processing very different facial information than humans do.

The researchers anticipate that this work will pave the way for more reliable AI technology that behaves more like humans and makes fewer unpredictable errors.

The study was financially supported by the Wellcome Trust and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation.

Journal Reference:

Daube, C., et al. (2021) Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity. Patterns. doi.org/10.1016/j.patter.2021.100348.

