AI swarms
(Image Credit: M. Nohassi/Unsplash)

“Malicious AI Swarms” Could Hijack Democracy—And May Even Go Unnoticed, Experts Say

A new breed of AI-controlled personas could pose an unprecedented threat to democratic societies, experts have recently warned. 

Unlike traditional cyberattacks, these systems operate more insidiously, infiltrating online communities and shaping narratives. 

The warning was presented in a recent policy forum in Science, which argues that swarms of AI personas can mimic human behavior so convincingly that they can influence conversations, sway opinions, and even tilt elections. Unlike the clumsy, easily detected botnets of the past, these new AI agents can coordinate in real time, respond to feedback, and propel narratives across thousands of online accounts and conversations.

How AI-Controlled Personas Work

Multi-agent systems built on AI models allow a single operator to disseminate thousands of AI “voices” that appear authentic, localized, and convincingly human. Such systems can run millions of micro-tests to refine and sharpen their messaging. The result, experts warn, would be a form of manufactured public opinion that seems grassroots-driven but is in fact entirely AI-generated.

This capability goes beyond classical propaganda as we know it. AI-controlled personas can analyze responses, adjust their tone, and coordinate activity across networks. Swarms could thus be used to amplify hate speech and polarization, suppress dissenting viewpoints, and steer online discussions in directions favorable to specific political or personal interests.

Such technology, in other words, could potentially have the power to create an illusion that a large number of people agree with something, even when they don’t.

Early Warning Signs

As of now, full-scale AI swarms remain a theoretical problem, but early signs of such capabilities are raising concern internationally. AI-generated deepfakes, and even entirely fabricated news outlets, have already influenced recent electoral debates in the United States and in other countries, such as Taiwan, Indonesia, and India, according to Dr. Kevin Leyton-Brown, a computer scientist at the University of British Columbia.

Fundamentally, AI swarms could tilt the balance of power in democracies, says Dr. Leyton-Brown. “We shouldn’t imagine that society will remain unchanged as these systems emerge,” he says. “A likely result is decreased trust of unknown voices on social media, which could empower celebrities and make it harder for grassroots messages to break through.”

Leyton-Brown and others warn that such swarms could shape voter perceptions before anyone realizes they are being manipulated. Researchers caution that the next major election could become a focal point, serving as a proving ground for these technologies.

In a Substack article, “AI bot swarms threaten to undermine democracy,” Dr. Leyton-Brown and his co-author recently explained the dangers such capabilities represent. 

“No democracy can guarantee perfect truth, but democratic deliberation depends on something more fragile: the independence of voices,” they write. “The ‘wisdom of crowds’ works only if the crowd is made of distinct individuals. When one operator can speak through thousands of masks, that independence collapses.”

“We face the rise of synthetic consensus: swarms seeding narratives across disparate niches and amplifying them to create the illusion of grassroots agreement,” the authors write.

By adaptively mimicking human social dynamics, the authors of the recent Science Policy Forum argue, AI swarms could pose a significant threat to democratic societies if left unmitigated.

The authors recommend “interventions at multiple leverage points,” with a focus on “pragmatic mechanisms over voluntary compliance,” alongside broader efforts to develop safer AI systems.

“How malicious AI swarms can threaten democracy” appeared in Science on January 22, 2026.

Chrissy Newton is a PR professional and the founder of VOCAB Communications. She currently appears on The Discovery Channel and Max and hosts the Rebelliously Curious podcast, which can be found on YouTube and on all audio podcast streaming platforms. Follow her on X: @ChrissyNewton, Instagram: @BeingChrissyNewton, and chrissynewton.com. To contact Chrissy with a story, please email chrissy @ thedebrief.org.