Super-Turing

New “Super-Turing” AI Chip Mimics the Human Brain to Learn in Real Time — Using Just Nanowatts of Power

In a lab-simulated flight through a digital forest besieged by shifting winds, a drone steers itself with remarkable skill—ducking obstacles, changing course in real time, and reaching its target with human-like agility. 

The system directing the drone isn’t powered by a massive server or trained on petabytes of data. Instead, it’s being guided by a tiny, energy-sipping brain-like chip — a new type of artificial intelligence that learns as it acts, much like the human mind.

Developed by a team of engineers from UCLA, Texas A&M, and several collaborating institutions, the device represents a revolutionary “Super-Turing” AI model, which breaks free from the constraints of conventional computing by mimicking the brain’s ability to adapt on the fly. 

At its core lies a circuit of “synaptic resistors” — or “synstors” — made with ferroelectric hafnium zirconium oxide (HfZrO), which enables the system to modify its own connections in real time.

The research, published in Science Advances, shows that this Super-Turing architecture can outperform traditional artificial neural networks (ANNs) in adaptability and energy efficiency while consuming just 158 nanowatts — a power level seven orders of magnitude lower than that of typical AI systems.

“Traditional AI models rely heavily on backpropagation — a method used to adjust neural networks during training,” co-author and assistant professor of electrical and computer engineering at Texas A&M, Dr. Suin Yi, said in a press release. “While effective, backpropagation is not biologically plausible and is computationally intensive.” 

“What we did in that paper is troubleshoot the biological implausibility present in prevailing machine learning algorithms. Our team explores mechanisms like Hebbian learning and spike-timing-dependent plasticity — processes that help neurons strengthen connections in a way that mimics how real brains learn.”

From Turing to Super-Turing

Most AI systems today, including those powering self-driving cars and large language models, operate on the Turing model of computation. They execute fixed algorithms trained beforehand — sometimes over weeks — and once deployed, they can’t change course unless retrained from scratch. That makes it difficult for AI systems to function in unfamiliar environments, and the approach is painfully power-hungry.

In essence, current AI is like a student who learns everything before the exam and is forbidden to learn anything new during the test. In contrast, the human brain continuously learns and adapts in real time.

“These data centers are consuming power in gigawatts, whereas our brain consumes 20 watts,” Dr. Yi explained. “Data centers that are consuming this energy are not sustainable with current computing methods. So while AI’s abilities are remarkable, the hardware and power generation needed to sustain them remain a problem.”

That’s where Super-Turing computing comes in. Inspired by how synapses in the brain adjust as we learn, researchers developed a synstor circuit, which uses a form of spike timing-dependent plasticity (STDP) — a biologically plausible learning rule — to update its internal parameters as it processes input. Unlike memristors or phase-change memories that require separate phases for learning and inference, these synstors can do both simultaneously.
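The spike-timing rule mentioned above can be illustrated with a toy calculation. In pair-based STDP, a connection strengthens when the pre-synaptic spike arrives just before the post-synaptic one, and weakens when the order is reversed. The sketch below is purely illustrative — the amplitudes and time constant are assumed values, and the paper’s synstors implement an analog version of this rule in device physics, not software:

```python
import math

# Toy pair-based STDP rule. A_PLUS, A_MINUS, and TAU are assumed
# illustration values, not parameters from the paper.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # decay time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms).

    Pre-before-post (dt > 0) strengthens the connection;
    post-before-pre weakens it.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)   # potentiation
    return -A_MINUS * math.exp(dt / TAU)      # depression

# A pre-spike 5 ms before a post-spike increases the weight...
assert stdp_dw(t_pre=0.0, t_post=5.0) > 0
# ...while the reverse ordering decreases it.
assert stdp_dw(t_pre=5.0, t_post=0.0) < 0
```

Because each weight update depends only on locally observed spike times, the rule can run continuously during inference — which is what lets the synstor circuit learn and act at the same time.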

The system hinges on a material marvel at the hardware level: a heterojunction composed of a WO₂.₈ layer, a thin film of ferroelectric Hf₀.₅Zr₀.₅O₂, and a silicon substrate. This stack allows for fine-tuning conductance values — akin to adjusting the strength of a synapse — with exquisite precision, repeatability, and durability.

Over 1.6 × 10¹¹ switching cycles were achieved without degradation, and conductance could be tuned across 1000 analog levels with learning accuracy down to 36 picosiemens. These updates occurred with voltage pulses as low as ±3 V and within 10 nanoseconds, making the system power-efficient and remarkably fast.
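Those two figures together imply the device’s usable conductance window. Assuming the 1,000 analog levels are uniformly spaced at the reported 36-picosiemens resolution — a spacing assumption the article itself does not state — the full range works out to roughly 36 nanosiemens:

```python
# Back-of-the-envelope: implied conductance range of one synstor,
# assuming uniformly spaced levels (an assumption, not a paper claim).
levels = 1000        # analog conductance levels (from the article)
step_ps = 36         # resolution in picosiemens (from the article)

range_ns = levels * step_ps / 1000   # convert pS to nS
print(range_ns)                      # 36.0 nS
```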

This capability allows the synstor circuit to operate in what the researchers call “Super-Turing mode” — continuously updating its internal weights in response to environmental feedback while performing inference. If the environment changes — say, unexpected turbulence or a new obstacle appears — the circuit adapts on the fly without requiring a pause or external retraining.

Outperforming AI and Humans

To test this novel Super-Turing technology, researchers pitted their synstor-guided drone against two competitors in a simulated mountainous landscape: one controlled by a traditional computer-based ANN and another operated by humans unfamiliar with the drone system.

The results were striking. The synstor circuit guided the drone to its destination faster than the humans, with an average learning time of just 4.4 seconds, compared to 6.6 seconds for people. 

Meanwhile, the ANN took more than 35 hours to achieve similar competence — and even then, it consistently failed when conditions changed. In forested, high-wind environments, the synstor and humans succeeded in navigating without collisions, while the ANN crashed every time.

However, the biggest win came in energy efficiency. The entire synstor Super-Turing system consumed only 158 nanowatts, compared to 6.3 watts used by the conventional AI running on a high-end desktop computer — a difference of roughly 40 million times.
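The “roughly 40 million” figure follows directly from the two reported power draws:

```python
# Sanity check of the reported energy gap, using the article's numbers.
synstor_power = 158e-9   # 158 nanowatts (synstor system)
ann_power = 6.3          # 6.3 watts (ANN on a desktop computer)

ratio = ann_power / synstor_power
print(f"{ratio:.2e}")    # ~3.99e7, i.e. roughly 40 million times
```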

Toward Super-Turing Brain-Like Machines

The implications of this work could ripple across industries. From autonomous drones and robotic prosthetics to smart wearables and space exploration, systems that need to react in real-time to unpredictable environments—without draining a battery dry—would benefit enormously from Super-Turing AI.

Moreover, the synstor’s architecture lends itself to scalability. While the current prototype features a relatively modest 8×8 crossbar, the team believes the technology could be scaled to circuits with millions of synstors using existing nanofabrication techniques.

The breakthrough could be the foundation for a new class of brain-like computers, where systems don’t just execute pre-learned tasks but can continue learning, adapting, and improving, all in real time and with little power.

Not Just Faster, But Smarter

While debates continue over AI’s ability to match human intelligence, this study quietly shifts the goalposts. Instead of brute-forcing intelligence by simply scaling up models and datasets—as today’s generative AI typically does—this new Super-Turing approach aims to achieve greater intelligence more efficiently, doing far more with less.

The fact that a circuit with no prior training could outperform a pre-trained neural net in real-world conditions suggests that true intelligence isn’t about storing more data but about adapting when the data runs out.

“Modern AI like ChatGPT is awesome, but it’s too expensive,” Dr. Yi said. “Super-Turing AI could reshape how AI is built and used, ensuring that as it continues to advance, it does so in a way that benefits both people and the planet.”

Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan.  Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com