
Artificial Intelligence is Learning to ‘Think’ More Like Humans, New Research Suggests

Artificial intelligence (AI) isn’t just performing tasks with high accuracy; for the first time, new research suggests that it is “thinking” very much like humans do.

Work on AI models has long focused on the scale of tasks and on accuracy, but a group of researchers is looking more closely at how AI makes decisions. By developing a decision process that more closely resembles the human mind’s, they hope to mitigate the technology’s troubling tendency toward “hallucinations.”

Bridging the AI-Human Gap

A significant gap between human reasoning and AI is that AI uses the same amount of computation for simple information as it does for complex and uncertain information. Humans, faced with uncertain or unpredictable input, think differently than they do when faced with routine input.

Now RTNet, named for its similarity to human response times and developed by researchers at the Georgia Institute of Technology, represents the latest attempt to follow the human lead in stochastic decision-making. According to a new study, it “exhibits the critical signatures” of processes currently associated only with our own brains.

The State of Artificial Neural Networks

In a standard neural network, every artificial neuron in one layer is connected to every neuron in the next. Each neuron produces a small output that it passes along through the network to the other nodes. In a convolutional neural network, some neurons are not connected to every node in the neighboring layer; instead, they abstract information from a small local region of the input or compare a value to the values around it. For example, they may contrast a pixel with the pixels surrounding it.
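To make the distinction concrete, here is a minimal sketch in PyTorch (the library, layer sizes, and kernel size are illustrative choices, not details from the study): the fully connected layer takes every pixel into account for every output, while the convolutional layer looks only at a small neighborhood around each pixel.

```python
# Minimal illustration of fully connected vs. convolutional layers (illustrative sizes).
import torch
import torch.nn as nn

x = torch.randn(1, 1, 28, 28)  # one grayscale image, e.g. an MNIST digit

# Fully connected: every input pixel contributes to every output neuron.
fc = nn.Linear(28 * 28, 128)
fc_out = fc(x.flatten(start_dim=1))   # shape: (1, 128)

# Convolutional: each output value sees only a 3x3 neighborhood,
# in effect comparing a pixel with the pixels surrounding it.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
conv_out = conv(x)                    # shape: (1, 8, 28, 28)

print(fc_out.shape, conv_out.shape)
```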

Convolutional networks are beneficial for helping computers understand images by identifying shapes and patterns. Still, while they can mimic and even exceed human vision in specific ways, the actual decision-making process lacks human nuance.

The RTNet Going Forward

RTNet was developed to enhance convolutional networks by adding traditional cognitive models from neuroscience. It features eight layers: five convolutional and three fully connected. The result combines AI’s image-processing ability with a human being’s dynamic, stochastic reasoning.
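For readers who want to picture that layout, the sketch below shows an eight-layer PyTorch network of the same general shape. The channel counts and layer sizes are illustrative assumptions rather than the values used in the paper, and RTNet differs further in that its weights are Bayesian (sampled) rather than fixed.

```python
# A rough sketch of an eight-layer stack of the kind described above:
# five convolutional layers followed by three fully connected layers.
# Sizes are illustrative, not taken from the paper.
import torch.nn as nn

class EightLayerCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(                      # five convolutional layers
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(                    # three fully connected layers
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```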

RTNet processes each image multiple times, each pass using weights sampled from a Bayesian neural network. This mimics the way neurons in the human brain fire noisy, variable responses as it compares what it is looking at to objects held in memory. Evidence from each pass accumulates until it exceeds a threshold, at which point a single output is selected. This “noisy accumulation,” as the study refers to it, is designed to reflect the cognitive function of the human mind. RTNet was tested on MNIST, a handwritten-digit dataset used in many machine learning experiments, with visual noise added to the images to make them more challenging to read.
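The mechanics are easier to see in a toy sketch. The code below is not the study’s implementation; it stands in for the Bayesian network with a noisy probability vector over the ten digits, but it shows the accumulate-until-threshold loop and how a confidence value can be read off from the evidence behind the chosen answer.

```python
# Toy sketch of "noisy accumulation": each pass yields a noisy probability
# vector over ten digits, and evidence is summed until one class crosses a threshold.
import numpy as np

rng = np.random.default_rng(0)

def noisy_pass(true_digit: int, noise: float) -> np.ndarray:
    """Stand-in for one forward pass with sampled weights: a noisy softmax over 10 digits."""
    logits = rng.normal(0.0, noise, size=10)
    logits[true_digit] += 1.0                       # weak signal toward the true digit
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def decide(true_digit: int, threshold: float = 5.0, noise: float = 2.0):
    evidence = np.zeros(10)
    n_samples = 0
    while evidence.max() < threshold:               # keep sampling until a class wins
        evidence += noisy_pass(true_digit, noise)
        n_samples += 1
    choice = int(evidence.argmax())
    confidence = evidence[choice] / evidence.sum()  # share of evidence behind the choice
    return choice, n_samples, confidence

print(decide(true_digit=3))
```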

During development, the team did something novel: they didn’t just check whether the model correctly identified the digit in each image; they also compared its behavior to that of 60 human participants performing the same task over 960 times each. The resulting dataset is one of the largest ever assembled on human responses to MNIST. Study author Farshad Rafiei noted, “Generally speaking, we don’t have enough human data in existing computer science literature.”

RTNet Doesn’t Just Tell, It Predicts

Truly replicating how humans make decisions relies not just on being correct but on understanding the quirks of how humans draw conclusions. A major element in developing an evaluative framework for RTNet was what is known as the speed-accuracy tradeoff, or SAT: simply put, the less time we spend on a problem, the less likely we are to arrive at the correct answer. SAT shaped the three criteria on which human and machine responses were measured: speed, accuracy, and, perhaps most distinctively, confidence.
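A hypothetical simulation makes the tradeoff concrete. The snippet below uses a generic drift-diffusion-style model, a standard cognitive-science abstraction rather than the study’s code: raising the evidence threshold makes decisions slower but more accurate.

```python
# Toy speed-accuracy tradeoff: a higher evidence threshold means more
# samples (slower decisions) but fewer errors. Generic illustration only.
import numpy as np

rng = np.random.default_rng(1)

def simulate(threshold: float, drift: float = 0.1, noise: float = 1.0, trials: int = 5000):
    correct, steps = 0, 0
    for _ in range(trials):
        evidence, t = 0.0, 0
        while abs(evidence) < threshold:
            evidence += drift + rng.normal(0.0, noise)  # small pull toward the right answer
            t += 1
        correct += evidence > 0                         # positive boundary = correct answer
        steps += t
    return correct / trials, steps / trials

for th in (1.0, 3.0, 6.0):
    acc, rt = simulate(th)
    print(f"threshold={th}: accuracy={acc:.2f}, mean steps={rt:.1f}")
```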

Confidence is an element of human decision-making that has been difficult to recreate in AI. AI models often deflect a question they can’t answer, or answer it incorrectly, rather than admitting what they don’t know. For example, when asked who the Democratic nominee for the 2024 election was shortly after Joe Biden exited the race, many AI chatbots reportedly gave the wrong answer or refused to answer at all when they were uncertain.

RTNet’s answers were only sometimes correct, but its pattern of responses mimicked human results quite closely. The same stimulus sometimes yielded different answers, and, in general, the more time spent decoding an image, the more accurate the guess turned out to be. One of the study’s most critical findings was that the network could assign each decision a confidence rating that accurately reflected how likely that decision was to be correct, and that closely resembled the relationship between confidence and accuracy seen in humans.

The Humanlike Future of AI

RTNet represents a significant step forward in machine learning, as it features all the major elements of human decision-making. It outperformed other models in tests, thanks in part to its accumulation of evidence. Looking forward, the team suggests developing RTNet further to get even closer to the human brain. While its “evidence accumulation system can be thought of as a recurrent network,” there is still room to add more recurrent processing to RTNet, with the aim of improving its prediction of human behavior and allowing it to extrapolate beyond past instances to solve more complicated problems in the future.

The team’s paper, “The neural network RTNet exhibits the signatures of human perceptual decision-making,” appears in the latest issue of the journal Nature Human Behaviour.

Ryan Whalen is a writer based in New York. He has served in the Army National Guard and holds a BA in History and a Master of Library and Information Science with a certificate in Data Science. He is currently finishing an MA in Public History and working with the Harbor Defense Museum at Fort Hamilton, Brooklyn.