(Image credit: Unsplash)

Forget AI—Scientists Have Developed a Neuromorphic Computer That ‘Thinks’ Like a Human Brain

Scientists from the University of Texas at Dallas have created a neuromorphic computer that functions similarly to a human brain.

The novel computer works by reinforcing pathways between synthetic neurons when they are stimulated, just like neurons in the brain reinforce commonly used pathways to learn and process new information.

The researchers behind the prototype said their brain-inspired processor can learn faster and use less energy than emerging AI systems, offering system designers a powerful and efficient new tool that could lead to reduced reliance on energy-intensive data centers and “bring AI inference and learning to mobile devices.”

A probe station is used to test small neuromorphic devices in Dr. Joseph Friedman’s lab. Image credit: University of Texas at Dallas.

Conventional computers are designed with data stored on one medium, such as a hard drive or RAM, and processing occurring on a separate processor. According to Dr. Joseph S. Friedman, associate professor of electrical and computer engineering at the University of Texas at Dallas, this separation prevents AI from making inferences as efficiently as the human brain does naturally. Dr. Friedman and his team note that the conventional design also requires large amounts of “labeled” data and “an enormous number of complex training computations.”

“The costs of these training computations can be hundreds of millions of dollars,” the researchers explained in a statement announcing the completion of their prototype neuromorphic computer.

Conversely, neuromorphic computers integrate processing and memory storage in a single location, much like the human brain, where networks of neurons and synapses store and process information in the same place. This occurs when synapses, the connections between individual neurons, are strengthened or weakened depending on their activity patterns. The research team says this integration “allows the brain to adapt continuously as it learns.”

Dr. Joseph S. Friedman and his colleagues at The University of Texas at Dallas created a computer prototype that learns patterns and makes predictions using fewer training computations than conventional artificial intelligence systems. Image credit: The University of Texas at Dallas.

Hoping to design a computer using the neuron-synapse connections of the human brain as a template, Friedman and colleagues drew on a principle first proposed by psychologist Dr. Donald Hebb. Known as Hebb’s Law, the idea holds that neurons that fire together, wire together.

“The principle that we use for a computer to learn on its own is that if one artificial neuron causes another artificial neuron to fire, the synapse connecting them becomes more conductive,” Friedman explained.
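Friedman's description of that rule can be sketched in a few lines of code. This is a generic illustration of Hebbian strengthening, not the team's actual hardware or published algorithm; the function name, learning rate, and conductance ceiling are all assumptions chosen for clarity.

```python
def hebbian_step(pre_fired, post_fired, conductance, rate=0.1, max_g=1.0):
    """Toy Hebbian rule: when the presynaptic and postsynaptic neurons
    fire together, the synapse connecting them becomes more conductive
    (up to a ceiling). Otherwise the conductance is left unchanged."""
    if pre_fired and post_fired:
        conductance = min(max_g, conductance + rate)
    return conductance

# Repeated coordinated firing strengthens the pathway step by step.
g = 0.2
for _ in range(5):
    g = hebbian_step(pre_fired=True, post_fired=True, conductance=g)
print(round(g, 2))  # → 0.7
```

In a real neuromorphic chip this update happens in the physical device itself rather than in software, which is what removes the separation between memory and processing.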

Working in the university’s NeuroSpinCompute Laboratory with researchers from Everspin Technologies Inc. and Texas Instruments, Friedman and colleagues employed devices called magnetic tunnel junctions (MTJs) to serve as the connections between neurons in their neuromorphic computer. According to Friedman, these nanoscale devices consist of two layers of magnetic material separated by a non-magnetic insulating layer. If the magnetizations of the outer layers are aligned, electrons can tunnel through the insulating barrier more easily; if the magnetizations are aligned in opposite directions, tunneling becomes more difficult.

According to the authors, they integrated MTJs into their prototype neuromorphic computer as network relays “to mimic the way the brain processes and learns patterns.”

“As signals pass through MTJs in a coordinated manner, their connections adjust to strengthen certain pathways, much as synaptic connections in the brain are reinforced during learning,” they explained.

The team said this “binary switching” design makes neuromorphic computers a reliable information storage medium, “resolving a challenge that has long impeded alternative neuromorphic approaches.”
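The binary switching the team describes can be modeled as a synapse with exactly two stable states. The class below is a hypothetical toy model, not the published device physics: the conductance values and method names are assumptions, but the behavior matches the description above, in which a junction sits in either a low-resistance (parallel) or high-resistance (antiparallel) state and coordinated activity switches it to the strengthened state.

```python
G_PARALLEL = 1.0       # aligned magnetic layers: easy electron tunneling
G_ANTIPARALLEL = 0.1   # opposed magnetic layers: tunneling suppressed

class MTJSynapse:
    """Toy binary synapse: only two stable conductance states, so the
    stored value cannot drift the way analog synapse devices can."""

    def __init__(self):
        self.parallel = False  # start in the high-resistance state

    @property
    def conductance(self):
        return G_PARALLEL if self.parallel else G_ANTIPARALLEL

    def hebbian_update(self, pre_fired, post_fired):
        # Coordinated firing switches the junction into its
        # low-resistance (strengthened) state.
        if pre_fired and post_fired:
            self.parallel = True

s = MTJSynapse()
s.hebbian_update(pre_fired=True, post_fired=True)
print(s.conductance)  # → 1.0 after correlated activity
```

Because each junction snaps to one of two well-defined states instead of holding an intermediate analog value, readout is unambiguous, which is the reliability advantage the team highlights.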

Next, the University of Texas at Dallas team hopes to scale its proof-of-concept neuromorphic computer to larger sizes. If they can create a commercially viable system, Friedman said, its lower energy needs and reduced training times compared with conventional AI systems mean neuromorphic computers could “power smart devices without huge energy costs.”

“Our work shows a potential new path for building brain-inspired computers that can learn on their own,” he concluded.

The study “Neuromorphic Hebbian learning with magnetic tunnel junction synapses” was published in Communications Engineering.

Christopher Plain is a Science Fiction and Fantasy novelist and Head Science Writer at The Debrief. Follow and connect with him on X, learn about his books at plainfiction.com, or email him directly at christopher@thedebrief.org.