Developing AI that thinks exactly like humans: New research

Creating human-like AI requires more than just replicating human behavior; the technology must also be able to analyze information, or “think”, like humans if it is to be completely trusted. New research led by the University of Glasgow’s School of Psychology and Neuroscience and published in the journal Patterns uses 3D modelling to analyze the way Deep Neural Networks, part of the broader family of machine learning, process information, visualizing how their information processing matches that of humans.

This new effort is intended to pave the way for the development of more trustworthy AI technology that processes information like humans and makes errors that we can understand and predict.

One of the remaining problems in AI research is how to better understand machine “thinking” and whether it corresponds to the way humans process information, in order to ensure accuracy. Deep Neural Networks are often presented as the current best model of human decision-making behavior, capable of matching or even exceeding human performance on some tasks. Yet even seemingly simple visual discrimination tasks can reveal clear discrepancies and errors between AI models and humans.

Deep Neural Network technology is already used in applications such as facial recognition, and while it is highly effective in these areas, scientists still do not fully understand how these networks process information, and as a consequence errors may emerge.

In this new study, the research team addressed this issue by modelling the visual stimuli shown to the Deep Neural Network and manipulating them in different ways, allowing them to determine whether the network recognized the images by processing information comparable to that used by humans.
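
To make the general logic concrete, here is a minimal sketch of perturbation-based probing of a network. It is not the study’s actual procedure: the torchvision ResNet-50, the placeholder file face.jpg, and the patch-occlusion perturbation are all assumptions chosen for illustration. The idea it demonstrates is the one described above: change the stimulus in controlled ways and measure which changes alter the network’s response.

```python
# Minimal sketch of perturbation-based probing of a DNN, assuming a
# torchvision ResNet-50 as the network, "face.jpg" as a placeholder stimulus,
# and patch occlusion as the perturbation. This illustrates the general logic
# only; it is not the study's actual 3D-modelling procedure.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    base_probs = torch.softmax(model(img), dim=1)
    target = base_probs.argmax(dim=1)  # the network's original top class

# Slide an occluding patch over the image; a large drop in the probability of
# the original top class marks a region the network relies on to recognize it.
patch = 56
importance = torch.zeros(224 // patch, 224 // patch)
for i, y in enumerate(range(0, 224, patch)):
    for j, x in enumerate(range(0, 224, patch)):
        occluded = img.clone()
        occluded[:, :, y:y + patch, x:x + patch] = 0.0  # zero = mean after normalization
        with torch.no_grad():
            probs = torch.softmax(model(occluded), dim=1)
        importance[i, j] = (base_probs - probs)[0, target].item()

print(importance)  # higher values = regions the model depends on more
```

In the study itself, the manipulations came from 3D modelling of the stimuli rather than simple occlusion, but the underlying logic is the same: perturb the input, observe which changes matter to the network, and compare that pattern of sensitivity with human observers’.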

The researchers believe that their findings will pave the way for more reliable AI technology that behaves more like humans and makes fewer unpredictable errors.
