
Navigating Humanity’s Greatest Challenge Yet: Experts Debate the Existential Risks of AI

In recent years, the rapid proliferation of artificial intelligence (AI) has made the technology a beacon of innovation, promising to reshape the world with unparalleled efficiency and knowledge. 

Yet, beneath the surface of these technological advancements, a myriad of questions and concerns lurk, casting shadows over AI’s glowing promise. 

Scientists, experts, and the general public are beginning to question the trajectory of AI technology and its implications for the future of humanity. At the heart of the debate is whether AI represents an existential threat to humanity.

A recent event hosted by the American nonprofit global policy think tank, the RAND Corporation, brought together a diverse panel of five experts to delve into the existential risks posed by AI. 

Experts were divided on what they considered to be the most significant threats AI poses to humanity’s future, indicating that AI security is a complex and nuanced issue.

“The risk I’m concerned about isn’t a sudden, immediate event,” Benjamin Boudreaux, a policy researcher who studies the intersection of ethics, emerging technology, and security, said. “It’s a set of incremental harms that worsen over time. Like climate change, AI might be a slow-moving catastrophe, one that diminishes the institutions and agency we need to live meaningful lives.” 

Dr. Jonathan Welburn, a RAND senior researcher and professor of policy analysis, noted that advances in AI parallel past periods of technological upheaval. 

However, unlike the advent of electricity, the printing press, or the internet, Dr. Welburn said his most significant concern with AI lies in its potential to amplify existing societal inequities and introduce new forms of bias, potentially undermining social and economic mobility through ingrained racial and gender prejudices. 

“The world in 2023 already had high levels of inequality,” Dr. Welburn said. “And so, building from that foundation, where there’s already a high level of concentration of wealth and power—that’s where the potential worst-case scenario is for me.” 

Dr. Jeff Alstott, director of the RAND Center for Technology and Security Policy and a senior information scientist, painted a particularly sobering picture of future challenges. He shared his most profound concern, noting that the prospect of AI being weaponized by bad actors “keeps me up at night.”

“Bioweapons [happen] to be one example where, historically, the barriers have been information and knowledge. You don’t need much in the way of specialized matériel or expensive sets of equipment any longer in order to achieve devastating effects with the launching of pandemics,” Dr. Alstott explained. “AI could close the knowledge gap. And bio is just one example. The same story repeats with AI and chemical weapons, nuclear weapons, cyber weapons.” 

During the panel discussion, the experts’ primary concern wasn’t the technology itself. Instead, their worries centered on the potential for humans to misuse AI for harmful purposes.

“To me, AI is gas on the fire,” Dr. Nidhi Kalra, a senior information scientist at RAND, explained. “I’m less concerned with the risk of AI than with the fires themselves—the literal fires of climate change and potential nuclear war and the figurative fires of rising income inequality and racial animus.” 

Citing threats ranging from AI-induced mistrust to the undermining of democracy, RAND policy researcher Dr. Edward Geist put his concerns more directly: “AI threatens to be an amplifier for human stupidity.” 

Following a comprehensive analysis of recent scientific studies, Dr. Roman V. Yampolskiy, an AI safety expert and associate professor at the University of Louisville, identified an additional existential threat posed by AI. According to Dr. Yampolskiy, there is no evidence that AI superintelligence can be safely controlled, cautioning, “Without proof that AI can be controlled, it should not be developed.” 

“We are facing an almost guaranteed event with potential to cause an existential catastrophe,” Dr. Yampolskiy warned. “No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”

In a recent paper, Dr. Atoosa Kasirzadeh, an assistant professor at the University of Edinburgh who focuses on the ethics, safety, and philosophy of AI, further explored the existential risks posed by AI. 

According to Dr. Kasirzadeh, the conventional discourse on existential risks posed by AI typically focuses on “decisive” threats or abrupt, dire events caused by advanced AI systems that “lead to human extinction or irreversibly cripple human civilization to a point beyond recovery.” 

Dr. Kasirzadeh explained that AI development also carries “accumulative” risks, which she likened to a “boiling frog scenario”: smaller AI-related harms build up over time, steadily weakening societal resilience, until a critical event triggers an irreversible collapse.

Echoing Dr. Boudreaux’s sentiments, Dr. Kasirzadeh concluded her paper by saying, “There is no inherent reason to consider that the accumulative hypothesis is any less likely than the decisive view. The need to further substantiate the accumulative hypothesis is apparent.”

Dr. Yampolskiy and Dr. Kasirzadeh did not participate in the recent RAND panel discussion on AI existential risks. However, their latest research findings introduce additional complexity to the ongoing debate.

The experts at RAND had differing opinions on whether AI poses a direct existential threat to humanity’s future.

Dr. Welburn and Dr. Kalra both believed that AI does not currently represent an irreversible threat, pointing out that humanity has a long history of overcoming significant challenges.

“We are an incredibly resilient species, looking back over millions of years,” Dr. Kalra said. “I think that’s not to be taken lightly.” 

Conversely, Dr. Boudreaux and Dr. Alstott argued that AI does pose a threat to humanity’s future, noting that the extinction of the human race is not the only catastrophic impact AI could have on societies. 

“One way that could happen is that humans die,” Dr. Boudreaux explained. “But the other way that can happen is if we no longer engage in meaningful human activity, if we no longer have embodied experience, if we’re no longer connected to our fellow humans. That, I think, is the existential risk of AI.” 

Dr. Geist said he was uncertain of just how significant AI’s risks are, likening its advancement to the development of nuclear weapons. 

“The current scientific understanding is that the nuclear weapons that exist today could probably not be used in a way that would result in human extinction,” Dr. Geist pointed out. “That scenario would require more and bigger bombs. That said, a nuclear arsenal that could plausibly cause human extinction via fallout and other effects does appear to be something a country could build if it really wanted to.” 

Panelists agreed that the path forward is fraught with uncertainty, though that uncertainty does not inherently mean AI will doom humanity to extinction.

All five experts agreed that independent, high-quality research will play a crucial role in assessing AI’s short- and long-term risks and shaping public policy accordingly.

Addressing AI’s existential risks will require a multifaceted approach that emphasizes transparency, oversight, and inclusive policymaking. As the experts suggested, ensuring that AI’s integration into our lives enhances rather than diminishes our humanity is paramount. 

Experts underscored that this must involve rigorous research and policy interventions, and that it must foster communities resilient to the broad spectrum of crises we could face in a future filled with AI. 

“Researchers have a special responsibility to look at the harms and the risks. This isn’t just looking at the technology itself but also at the context—looking into how AI is being integrated into criminal justice and education and employment systems, so we can see the interaction between AI and human well-being,” Dr. Boudreaux said. “But I don’t think there’s a technical fix alone. We need to have a much broader view of how we build resilient communities that can deal with societal challenges.” 

Tim McMillan is a retired law enforcement executive, investigative reporter, and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community, and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com