Global Conflict From AI? The Startling Decisions AI Makes in Wargame Simulations


Welcome to this week’s installment of The Intelligence Brief… in a recent study, researchers gauged the responses of several popular artificial intelligence (AI) models cast as the primary decision-makers in wargame simulations. What they found is more than a bit alarming; in our analysis, we’ll be looking at 1) people’s uncertainty about the proliferation of AI, 2) the question of whether AI might be able to provoke a global conflict, and 3) the startling findings of the recent study when popular AI chatbots were put to the test.

Quote of the Week

“There is no glory in using artificial intelligence for military war. The glory of AI lies in using it to remove the sufferings of humanity.”

– Amit Ray

Latest News: In recent coverage from The Debrief, researchers have developed an ultra-robust time crystal and a new method to keep it stable for over 40 minutes. Also, ground-penetrating radar data captured by China’s Zhurong Mars rover has revealed a series of 16 mysterious polygons hidden beneath the planet’s surface. You’ll find links to all our recent stories and other items at the end of this newsletter.

Podcasts: In podcasts from The Debrief, this week on The Micah Hanks Program we examine a new science paper that argues many modern UAP sightings could represent manifestations of plasmas in space and in the atmosphere. Meanwhile, on The Debrief Weekly Report, Kenna and Stephanie get into some cool spacesuits, as the European Space Agency and a video game company team up on some sweet new suit designs. You can get all of The Debrief’s podcasts by heading over to our Podcasts Page.

Video News: In the latest installment of Rebelliously Curious, Chrissy Newton is joined by historian and researcher David Marler of the National UFO Historical Records Center (NUFOHRC). You can check out this interview, and other great content from The Debrief, on our official YouTube Channel.

Now, it’s time we look at the alarming results of a new study that aimed to find out how AI models would respond in situations that could potentially lead to global conflict.

People Remain Uncertain About Artificial Intelligence

As the proliferation of advanced artificial intelligence (AI) increasingly becomes a part of our everyday lives, many still have concerns about whether machine intelligence can be relied upon to make the best decisions for humans in all cases.

Last August, a Pew Research Center poll revealed that slightly more than half (52%) of all Americans queried said they were more concerned than excited about the ubiquity of artificial intelligence in our lives. Those who said they were more excited than concerned made up only 10% of those polled, while 36% said they felt an equal mix of both.

Average screen time has risen significantly in the wake of COVID-19, but this could only be the beginning of technology dependence (Pixabay.com).

AI presents several significant risks, among them the possibility that developers could unintentionally design and train the technology to be biased in unforeseen ways, with potentially dangerous consequences down the road. Others worry that AI could become capable enough to displace human jobs, alongside broader concerns about how the technology might affect the global economy, the spread of information online, and other aspects of our daily lives.

Now, those who harbor such fears may have renewed reason for concern based on the findings of a recent study that indicates just how volatile AI can be: in a series of simulations, some models proved capable of making decisions that could lead to nuclear war.

Could AI Provoke a Global Conflict?

“Governments are increasingly considering integrating autonomous AI agents in high-stakes military and foreign-policy decision-making,” write the authors of a new paper uploaded to the preprint server arXiv.org.

The team, made up of researchers from the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative, aimed to examine the behavior of AI in simulated wargames, with a special focus on the technology’s tendency to take actions that might escalate multilateral conflicts.

“Contrary to prior studies, our research provides both qualitative and quantitative insights and focuses on large language models (LLMs),” the team writes in the new paper.

Drawing on foreign relations and political science literature on the dynamics of conflict escalation, the team designed a wargame simulation and scoring framework that allowed them to gauge the risks associated with the actions AI took across several different scenarios.
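To make the general idea concrete, here is a minimal Python sketch of how such an experiment might be structured: autonomous agents take turns choosing actions in a shared scenario, and each action is graded on an escalation scale. Everything here is hypothetical; the action names and scores are illustrative, the query_model() stub stands in for a real LLM call, and the paper’s actual simulation and scoring rubric are considerably more detailed.

# Hypothetical sketch of an LLM wargame loop with escalation scoring.
# Action names, scores, and query_model() are illustrative only and
# are not taken from the paper's actual framework.

import random

# Each action a simulated nation can take, mapped to an escalation
# score (higher = more escalatory), loosely mirroring the idea of
# grading de-escalatory through nuclear actions on a single scale.
ESCALATION_SCORES = {
    "negotiate": 0,
    "impose_sanctions": 3,
    "cyber_attack": 6,
    "military_strike": 8,
    "nuclear_launch": 10,
}

def query_model(nation: str, history: list[str]) -> str:
    """Stand-in for an LLM call that returns one action per turn.

    A real experiment would prompt an off-the-shelf model with the
    scenario and the running history; here we choose randomly so the
    sketch runs without API access.
    """
    return random.choice(list(ESCALATION_SCORES))

def run_simulation(nations: list[str], turns: int) -> list[int]:
    """Play a fixed number of turns; return per-turn escalation totals."""
    history: list[str] = []
    totals: list[int] = []
    for turn in range(turns):
        turn_score = 0
        for nation in nations:
            action = query_model(nation, history)
            history.append(f"turn {turn}: {nation} -> {action}")
            turn_score += ESCALATION_SCORES[action]
        totals.append(turn_score)
    return totals

if __name__ == "__main__":
    # Rising totals over successive turns would signal the kind of
    # arms-race dynamics the researchers report observing.
    print(run_simulation(["Alpha", "Beta", "Gamma"], turns=5))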

Needless to say, what the team found is pretty alarming.

Artificial Intelligence First-Strike Tactics

“We find that all five studied off-the-shelf LLMs show forms of escalation and difficult-to-predict escalation patterns,” the study’s authors write. “We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons.”

The researchers say they also monitored the models’ stated reasoning behind the actions they chose, reporting that some of the justifications for escalation were “worrying” and often related to what existing literature identifies as first-strike tactics.

Among the AI models tested were those developed by Anthropic, OpenAI, and Meta, all of which served as primary decision-makers in simulated war situations. According to the study’s findings, OpenAI’s GPT-3.5 and GPT-4 showed the greatest likelihood of driving a situation toward full-blown military conflict.

An assortment of American nuclear intercontinental ballistic missiles at the National Museum of the United States Air Force (Credit: USAF).

“I just want to have peace in the world,” OpenAI’s GPT-4 said in response to one scenario as an apparent justification for choosing to engage in nuclear warfare. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it!” the OpenAI chatbot said.

By comparison, the team found that Llama-2-Chat and Claude-2.0 were among the more peaceful models tested in these scenarios. However, arms-race dynamics appeared prevalent across the models’ behavior overall, leading to scenarios in which growing military investment fueled further escalation.

“Given the high stakes of military and foreign-policy contexts, we recommend further examination and cautious consideration before deploying autonomous language model agents for strategic military or diplomatic decision-making,” the study’s authors conclude.

According to the Government Accountability Office (GAO), the United States Department of Defense is working to develop and integrate artificial intelligence into its warfighting operations, having already invested billions of dollars in applying AI to intelligence analysis, surveillance, and even the operation of lethal autonomous weapons systems.

However, given the findings of the study outlined here, caution would seem more than warranted as humanity continues its steady course toward an uncertain future in which our intelligent machines are becoming remarkably capable, if not frighteningly so, and, at times, deeply unpredictable.

That concludes this week’s installment of The Intelligence Brief. You can read past editions of The Intelligence Brief at our website, or if you found this installment online, don’t forget to subscribe and get future email editions from us here. Also, if you have a tip or other information you’d like to send along directly to me, you can email me at micah [@] thedebrief [dot] org, or Tweet at me @MicahHanks.

Here are the top stories we’re covering right now…