AI Superintelligence Alert: Expert Warns of Uncontrollable Risks, Calling It a Potential ‘Existential Catastrophe’

A recent study by Dr. Roman V. Yampolskiy, an AI safety expert and associate professor at the University of Louisville, casts a long shadow over the future of artificial intelligence (AI), warning that AI superintelligence would be inherently uncontrollable.

In his latest book, AI: Unexplainable, Unpredictable, Uncontrollable, Dr. Yampolskiy says that, based on an extensive review of the latest scientific literature, there is no evidence that AI can be safely controlled. Challenging the foundations of AI advancement and the direction of future technologies, he warns, “Without proof that AI can be controlled, it should not be developed.”

“We are facing an almost guaranteed event with potential to cause an existential catastrophe,” Dr. Yampolskiy said in a statement issued by publisher Taylor & Francis. “No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”

Dr. Yampolskiy, an expert in AI safety, has highlighted the dangers of uncontrollable AI for over a decade, emphasizing the existential threat it could present to humanity. In a 2018 paper, Dr. Yampolskiy and co-author Michaël Trazzi said “Achilles heels” or “artificial stupidity” should be introduced to prevent AI systems from becoming dangerous. For example, an AI should be prevented from accessing and modifying its own source code.

Last summer, in an article for Nautilus, Dr. Yampolskiy and public policy attorney Tam Hunt described building AI superintelligence as being “riskier than Russian roulette.” 

“Once AI is able to improve itself, it will quickly become much smarter than us on almost every aspect of intelligence, then a thousand times smarter, then a million, then a billion … What does it mean to be a billion times more intelligent than a human?” Dr. Yampolskiy and Hunt wrote. “We would quickly become like ants at its feet. Imagining humans can control superintelligent AI is a little like imagining that an ant can control the outcome of an NFL football game being played around it.” 

In his latest book, Dr. Yampolskiy delves into the myriad ways AI could dramatically reshape society, often veering away from human benefit. The core of his argument is that without incontrovertible proof of controllability, the development of AI should be approached with extreme caution if not halted altogether.

Despite the widespread recognition of AI’s transformative potential, Dr. Yampolskiy points out that the AI “control problem,” also known as AI’s “hard problem,” remains a nebulous, under-researched issue. 

“Why do so many researchers assume that AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof,” Dr. Yampolskiy states, emphasizing the gravity and immediacy of the challenge at hand. “Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable.” 

One of the most alarming aspects highlighted in Dr. Yampolskiy’s research is the inherent uncontrollability of AI superintelligence. 

AI superintelligence refers to a theoretical scenario where an AI system’s intelligence surpasses that of even the brightest human minds. 

Some researchers doubt that technology can ever surpass human intelligence, arguing that AI will always lack certain human cognitive abilities, including true consciousness.

However, other scientists, including Dr. Yampolskiy, believe that the advancement of AI superintelligence “is an almost guaranteed event” following the development of artificial general intelligence. 

Dr. Yampolskiy says systems with AI superintelligence will be able to evolve their ability to learn, adapt, and act semi-autonomously. This, he argues, would decrease our capacity to control or fully understand the AI system’s actions, creating a paradox in which the advancement of AI autonomy corresponds with a decrease in human safety and control.

After a “comprehensive literature review,” Dr. Yampolskiy concludes that superintelligent AI systems “can never be fully controllable.” Thus, AI superintelligence will always present a degree of risk, regardless of any benefit it may provide.

Dr. Yampolskiy points out several obstacles to creating “safe” AI, including the infinite potential decisions and failures a system with AI superintelligence can make, resulting in endless and unpredictable safety issues. 

Another concern is that AI superintelligence may not be able to articulate the reasoning behind its decisions, compounded by human limitations in grasping the advanced concepts it utilizes. Dr. Yampolskiy emphasizes that, at the very least, AI systems must be capable of detailing their decision-making processes to guarantee they are free from bias.

“If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers,” Dr. Yampolskiy explained.

Concerns over AI bias took center stage recently when it was revealed that Google’s AI-powered image generator and chatbot, Gemini, had difficulty producing images depicting white people.

Scores of people on social media posted pictures demonstrating that when asked to depict traditionally white historical figures, such as “America’s founding fathers,” Gemini would instead generate images exclusively featuring people of color. In one example, when prompted to visualize a 1943 German soldier, the AI chatbot created images of a black man and an Asian woman dressed in Nazi Waffen SS uniforms.

Google has since taken down Gemini’s image generator feature. 

“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” Google said in a statement. “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

According to Dr. Yampolskiy, the recent Gemini debacle is a relatively harmless and mild preview of what can go wrong with AI left unchecked. More alarmingly, he argues, it is fundamentally impossible to truly control systems with AI superintelligence.

“Less intelligent agents (people) can’t permanently control more intelligent agents (ASIs). This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs, it is because no such design is possible, it doesn’t exist,” Dr. Yampolskiy argued. “Superintelligence is not rebelling, it is uncontrollable to begin with.” 

“Humanity is facing a choice, do we become like babies, taken care of but not in control, or do we reject having a helpful guardian but remain in charge and free.”

Dr. Yampolskiy says there are some ways to minimize the risks. These include making AI systems modifiable, with ‘undo’ options, and limiting them to behavior that is transparent and understandable in human terms.

Additionally, he argues that “nothing should be taken off the table” when it comes to limiting or partially banning the development of certain types of AI technology that have the potential to be uncontrollable.

Dr. Yampolskiy’s work has garnered support from notable figures in the tech world, including Elon Musk. A vocal critic of unrestricted AI development, Musk was among the more than 33,000 signatories of an open letter last year calling for an immediate pause on “the training of AI systems more powerful than GPT-4.”

Despite the ominous impact AI could have on humanity, Dr. Yampolskiy says the concerns raised by his latest research should serve as a catalyst for increased AI safety and security research. 

“We may not ever get to 100% safe AI, but we can make AI safer in proportion to our efforts, which is a lot better than doing nothing,” Dr. Yampolskiy urged. “We need to use this opportunity wisely.”

Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan.  Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com