AI

Despite Dire Warnings, New Research Reveals AI Poses No Existential Threat to Humanity

In a significant development for the ongoing debate over the future of artificial intelligence (AI), a new study led by researchers from the University of Bath and the Technical University of Darmstadt has debunked the notion that AI, specifically large language models (LLMs) like ChatGPT, poses an existential threat to humanity. 

The findings, published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), challenge the prevailing fears that advanced AI technologies could evolve beyond human control, leading to unintended and potentially catastrophic consequences.

“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning,” Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and study co-author, said in a press release. 

“This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.” 

The study’s central focus was on the “emergent abilities” concept in LLMs. These abilities refer to the models’ capacity to perform tasks they were not explicitly trained for, such as answering questions about social situations. 

Dr. Madabushi acknowledged that the “concerns over the existential threat posed by LLMs are not restricted to non-experts and have been expressed by some of the top AI researchers across the world.” 

Dr. Roman V. Yampolskiy, an associate professor at the University of Louisville, is one of the leading AI safety experts who has expressed grave concerns about the current trend towards the development of uncontrollable AI superintelligence. 

In his latest book, AI: Unexplainable, Unpredictable, Uncontrollable, Dr. Yampolskiy says there is no scientific evidence that AI can be safely controlled, cautioning that “without proof that AI can be controlled, it should not be developed.” In a more dire warning, Dr. Yampolskiy described AI as the “most important problem humanity has ever faced,” adding that “we are facing an almost guaranteed event with the potential to cause an existential catastrophe.” 

Fears over advanced AI and LLM development extend beyond the computer science community and have permeated virtually every sector of science. 

In a paper published in June 2024, Dr. Michael Garrett, a professor of radio astronomy at Leiden University in the Netherlands and the director of the Jodrell Bank Centre for Astrophysics, proposed a controversial theory that artificial superintelligence (ASI) could explain why we have yet to detect any signs of technologically advanced extraterrestrials. 

According to Dr. Garrett, the rapid development of AI and ASI may act as a “Great Filter,” drastically reducing the lifespan of technological civilizations to a mere 200 years. While Dr. Garrett’s theory is focused on explaining the universe’s “Great Silence,” the underlying implications offer a gloomy prognosis that humanity may not survive the next two centuries. 

Despite these chilling forecasts, this recent study demonstrated that the abilities of AI and LLMs are not the result of the models independently developing complex reasoning or problem-solving skills. 

Instead, they are a product of “in-context learning” (ICL), where the model leverages its training on vast datasets to generate responses based on examples provided during interactions.
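
To make the idea concrete, the sketch below is a minimal, hypothetical illustration of in-context (few-shot) prompting in Python; the example task, reviews, and labels are placeholders and are not taken from the study. The point is that the task is specified entirely through worked examples placed in the prompt, and the model simply completes the pattern; its weights never change.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a prompt from labelled examples followed by an unlabelled query."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

# Hypothetical worked examples that define the task inside the prompt itself.
examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]

prompt = build_few_shot_prompt(examples, "A forgettable, tedious film.")
print(prompt)
# An LLM asked to complete this prompt would be expected to continue with
# "negative" by matching the pattern set by the in-prompt examples. That
# pattern completion, rather than autonomously acquired reasoning, is what
# the researchers mean by in-context learning.
```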

Significantly, Dr. Madabushi and his colleagues emphasized that while LLMs can generate sophisticated language and follow detailed prompts, they lack the ability to learn or reason independently. 

These findings are crucial because they directly counter the narrative and prevailing fears that AI could unexpectedly acquire dangerous capabilities, such as planning or autonomous decision-making.

The study’s results are a timely intervention in the heated discussions surrounding AI safety. These discussions have often been dominated by worst-case scenarios, where AI systems might surpass human intelligence and operate beyond our control. Events like the AI Safety Summit held at Bletchley Park have highlighted these fears, drawing attention from policymakers, tech leaders, and researchers alike.

However, the research team from the University of Bath and the Technical University of Darmstadt says its findings show that most of these fears are unfounded. 

Rather than steamrolling towards an AI-driven apocalypse, recent experiments revealed that LLMs’ supposed “emergent abilities” are not indicators of the models developing new skills autonomously. Instead, these abilities are merely the result of sophisticated pattern recognition and language processing capabilities that can be easily directed and controlled.

“The ability to follow instructions does not imply having reasoning abilities, and more importantly, it does not imply the possibility of latent, potentially-dangerous abilities,” researchers wrote. “Additionally, these observations imply that our findings hold true for any model which exhibits a propensity for hallucination or requires prompt engineering, including those with greater complexity, regardless of scale or number of modalities, such as GPT-4.” 

Researchers say concerns over AI developing unforeseen and potentially hazardous abilities have diverted attention from more immediate and genuine risks associated with AI, such as its misuse to spread misinformation or facilitate fraud.

Dr. Madabushi argues that the focus should shift from existential fears to practical concerns about how AI can be responsibly developed and used. “The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies and also diverts attention from the genuine issues that require our focus,” he explained. 

The findings of this study have significant implications for the future development and regulation of AI technologies. If LLMs are inherently controllable and predictable, as the research suggests, then the calls for stringent regulations based on fears of an AI-driven apocalypse may be premature. 

Instead, researchers urge that the current emphasis should be on ensuring AI is used ethically and that its deployment is closely monitored to prevent misuse.

For policymakers, tech companies, and researchers, the task ahead is to strike a balance between fostering innovation in AI and addressing the legitimate concerns that arise from its use. This includes developing frameworks for the ethical use of AI and investing in research that focuses on genuine, tangible risks rather than speculative existential threats. 

This perspective is shared by computer scientist and study co-author Dr. Iryna Gurevych, who acknowledged that while AI poses certain risks, these are not related to the technology’s ability to independently develop complex reasoning or decision-making skills. 

“Our results do not mean that AI is not a threat at all,” Dr. Gurevych added. “Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all.” 

“Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.” 

Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com