Success with an AI that “thinks” like humans

Creating human-like AI is about more than mimicking human behavior – the technology must also be able to process information, or “think”, like humans before it can be fully relied upon.

The University of Glasgow’s School of Psychology and Neuroscience used 3D modeling to analyze how deep neural networks – part of the broader machine learning family – process information, and to visualize how their information processing compares with that of humans.

It is hoped that this new work will pave the way for creating more reliable AI technology that processes information like humans and makes mistakes that we can understand and predict.

One of the challenges AI development still faces is understanding how machines “think”, and whether that processing aligns with how humans process information. Deep neural networks are often portrayed as the best currently available model of human decision-making behavior, meeting or even exceeding human performance on some tasks. Yet even deceptively simple visual discrimination tasks can reveal significant inconsistencies and errors in AI models compared with humans.

Currently, deep neural network technology is used in applications such as facial recognition, and while it has been very successful in these areas, scientists still do not fully understand how these networks process information, and therefore why errors can occur.

In this new study, the research team addressed this problem by modeling the visual stimuli given to the deep neural networks and transforming them in various ways, allowing them to demonstrate whether the networks recognized an identity by processing the same information as humans.
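The article does not include the team’s actual code, but the underlying question – which parts of a stimulus does a network actually use? – can be illustrated with a generic perturbation probe: occlude regions of the input and measure how the classifier’s identity score changes. The following Python sketch is a minimal, self-contained illustration of that idea; the `net_score` function is a hypothetical stand-in for a trained face-identity network, not one of the models examined in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained face-identity network: scores how
# strongly an image matches one known identity. A fixed random "template"
# plays the role of the network's internal representation of that identity.
TEMPLATE = rng.normal(size=(32, 32))

def net_score(image: np.ndarray) -> float:
    """Identity evidence: inner product between the image and the template."""
    return float(np.sum(image * TEMPLATE))

# A stimulus "shown" to the network (random stand-in for a face image).
stimulus = TEMPLATE + rng.normal(scale=0.5, size=(32, 32))
baseline = net_score(stimulus)

# Occlusion probe: mask each 8x8 patch in turn and record how much the
# identity score drops. Large drops mark regions the model relies on.
importance = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        occluded = stimulus.copy()
        occluded[i * 8:(i + 1) * 8, j * 8:(j + 1) * 8] = 0.0
        importance[i, j] = baseline - net_score(occluded)

print("Patch importance (higher = more used by the model):")
print(np.round(importance, 2))
```

Comparing such importance maps between networks and human observers is one generic way to ask whether both rely on the same information; the Glasgow team asked the same question with far richer generative 3D face models.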

Professor Philippe Schyns, lead researcher on the study and head of the Institute of Neuroscience and Technology at the University of Glasgow, said: “When developing AI models that ‘behave like’ humans – for instance, recognizing a person’s face whenever they see it, as a human would – we need to make sure that the AI model uses the same information from the face that another human would use to recognize it. If the AI doesn’t do this, we could have the illusion that the system works the same way humans do, but then find that it goes wrong under new or untested circumstances.”

The researchers used a series of modifiable 3D faces and asked people to rate the similarity of these randomly generated faces to four familiar identities. They then used that information to test whether the deep neural networks gave the same ratings for the same reasons – testing not just whether humans and AI made the same decisions, but also whether those decisions were based on the same information. Importantly, the researchers could use their approach to visualize the results as the 3D faces that drive the behavior of both people and networks. For example, a network that correctly classified 2,000 identities was driven by a heavily caricatured face, showing that it processed facial information very differently from humans.
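To make the logic of this comparison concrete, here is a toy reverse-correlation sketch in Python. It illustrates the general approach only, not the study’s pipeline: random parameter vectors stand in for the generated 3D faces, and two simulated observers – a “human” and a “network” – rate each face’s similarity to a target identity using internal templates that only partly overlap. Regressing ratings onto the face parameters recovers each observer’s template, so agreement in decisions can be compared against agreement in the information used.

```python
import numpy as np

rng = np.random.default_rng(1)
n_faces, n_params = 2000, 50

# Random parameter vectors stand in for randomly generated 3D faces.
faces = rng.normal(size=(n_faces, n_params))

# Each observer rates similarity to a target identity via an internal
# template (hypothetical: the "human" and the "network" weight partly
# different facial information).
human_template = np.zeros(n_params)
human_template[:10] = 1.0           # "human" relies on the first 10 features
net_template = np.zeros(n_params)
net_template[5:20] = 1.0            # "network" relies on partly different ones

def rate(stimuli, template, noise=1.0):
    """Noisy similarity ratings: projection onto the observer's template."""
    return stimuli @ template + rng.normal(scale=noise, size=len(stimuli))

human_ratings = rate(faces, human_template)
net_ratings = rate(faces, net_template)

# Reverse correlation: least-squares regression of ratings on the face
# parameters recovers the information each observer actually used.
est_human, *_ = np.linalg.lstsq(faces, human_ratings, rcond=None)
est_net, *_ = np.linalg.lstsq(faces, net_ratings, rcond=None)

# Decisions can agree even when the underlying information differs:
r_decisions = np.corrcoef(human_ratings, net_ratings)[0, 1]
r_templates = np.corrcoef(est_human, est_net)[0, 1]
print(f"agreement of ratings:   r = {r_decisions:.2f}")
print(f"agreement of templates: r = {r_templates:.2f}")
```

A high rating correlation combined with a low template correlation would be exactly the failure mode the study warns about: a network that appears to behave like a human while relying on different facial information.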


The study is titled “Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity.”

