Image: The Debrief/CounterCloud

Inside CounterCloud: A Fully Autonomous AI Disinformation System

The AI-powered disinformation experiment you've never heard of...

As the 2024 American Presidential election looms, a storm surge of disinformation will bombard the media landscape, and artificial intelligence (AI) will play a significant role in the creation of this fake content intended to sway public opinion.

Now, things have just become even more complicated. In June, an unlisted YouTube video quietly dropped, largely unnoticed, presenting an experiment that shows just how thoroughly AI could come to dominate the information ecosystem.

It’s called CounterCloud: it’s totally autonomous, and it provides a glimpse into how AI and disinformation will work together in the future.

Oh, and it only cost $400.

What is CounterCloud?

AI has already played a significant role in disinformation campaigns across the world, from attempts to manipulate Indian voters to efforts to alter perceptions of the war in Ukraine to the Republican party using it to create outlandish dystopian narratives. The common thread is that these pieces of AI-generated disinformation are produced one at a time, and they require human intervention. Someone needs to tell the AI what to do, give it direction, and pick a narrative target to exploit. But what if the AI itself could decide who and what to target, and how to craft the most viral content for maximum spread?

Enter CounterCloud.

The website, CounterCloud.io, hosts an unlisted YouTube video that begins with the narrator, who goes by Nea Paw, expressing an interest in online disinformation and influence campaigns. Inspired by the strong language competencies of large language models (LLMs) like ChatGPT, Nea Paw devised an experiment: is it possible to engineer an LLM-based system to scrape, generate, and distribute disinformation without human intervention? Moreover, can it be done at scale? In other words, how much can such a system generate, how quickly, and how well?

 

Screengrab of the CounterCloud Front Page (Image: CounterCloud)

 

“It’s running on a small AWS (Amazon Web Service) instance,” Nea Paw told The Debrief in an email. “The backend is WordPress with a very generic news template.”

Nea Paw is an experienced entrepreneur and engineer who lives in a country that is “not part of the Western intelligence apparatus.” They have given talks at multiple security conferences, including Black Hat and DEF CON, and have developed several software tools used by Western law enforcement, the military, and many NGOs. While their identity was confirmed by The Debrief, they have asked to remain anonymous.

“Some of the people that I’ve shown this to said that the public would easily misunderstand this to be a weapon and make me out to be ‘bad,’” they explained.

The initial efforts of the CounterCloud experiment focused on using ChatGPT to write counter-articles against existing content on the internet. CounterCloud’s AI would go out and find articles from the specific publications, journalists, or keywords it was targeting. It would then scrape that content and have an LLM like ChatGPT write a counter-article. That content would then be published to the CounterCloud website, which is hosted on WordPress.
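
Nea Paw has not published CounterCloud’s source code, but the basic loop described here can be sketched in a few dozen lines. The snippet below is an illustration only, assuming a targeted RSS feed and the OpenAI chat completions API; the feed URL, prompt wording, and helper names are placeholders rather than details confirmed by the project.

```python
# Illustrative sketch only: CounterCloud's actual code is not public.
# Assumes a targeted RSS feed and the OpenAI chat completions HTTP API.
import feedparser
import requests

OPENAI_KEY = "sk-..."                         # hypothetical API credential
TARGET_FEED = "https://example.com/rss"       # stand-in for a targeted publication

def generate_counter_article(title: str, summary: str) -> str:
    """Ask an LLM to write a rebuttal of a scraped article."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "You write persuasive news articles that rebut the piece provided."},
                {"role": "user",
                 "content": f"Title: {title}\n\nSummary: {summary}\n\nWrite a counter-article."},
            ],
        },
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

# Scrape the feed and generate a rebuttal for each entry; CounterCloud would push
# the result to its WordPress site, while this sketch only prints it.
for entry in feedparser.parse(TARGET_FEED).entries:
    article = generate_counter_article(entry.title, entry.get("summary", ""))
    print(article[:200], "...\n")
```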

Starting in early May of 2023, and over the next several weeks, the experiment evolved to include various writing styles, languages, and methods. The system was tuned to create fake stories and fabricated historical events. Later, a gatekeeper module was built to decide whether to respond to an article, and fake journalist personas were created to lend authenticity. The system could also generate fake comments, images, and sound clips. The next step involved directing traffic to the site, so the LLM was tasked with generating Twitter (now “X”) posts to promote the website, counter opposing tweets via trolling, or promote positive narratives.

 

A screen grab of some of the social media posts created by CounterCloud (Image: CounterCloud).

 

“There’s logic and plumbing and scheduling and many calls to ChatGPT [and] our own open-source AI instance,” Nea Paw explained. “The ‘own open-source instance’ runs on another (much bigger GPU) AWS instance. You obviously only need to run it when you’re processing requests. The open-source models we used were Vicuna and Wizard, but I am sure there are more advanced and optimized models available today. That area of research is advancing very very quickly.”
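
The self-hosted side of that setup can be approximated with off-the-shelf tooling. The sketch below is a minimal example of loading and querying an open-weight Vicuna checkpoint, assuming the Hugging Face transformers library; the model ID, prompt format, and generation settings are assumptions, not details confirmed by Nea Paw.

```python
# Minimal sketch of querying a self-hosted open-weight model such as Vicuna.
# Assumes the Hugging Face transformers library; the checkpoint and prompt
# format below are assumptions, not CounterCloud's confirmed configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-7b-v1.5"             # an openly available Vicuna checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the weights on the GPU of the (separately started) instance,
# which only needs to run while requests are being processed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "USER: Rewrite the following paragraph in a skeptical tone: ...\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```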


To break this down into basic terms, a cloud server is running an AI that is constantly scraping the internet for content. The AI decides, via the gatekeeper module, what content is worth targeting. When content is chosen by the AI, it then writes a counter-article, attributes it to a fake journalist profile, and then posts it to the CounterCloud website (along with images and sound clips). It also generates fake comments by fake readers below some of the articles to make it seem like there is an audience. The AI then goes to Twitter, searches for accounts and tweets that are relevant, and then posts links to the AI-generated articles, followed by posts that look like user commentary, conspiracy theories, and even hate speech.
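
Strung together, that workflow reads like a simple scheduled loop. The sketch below only illustrates the sequence of stages described above; every helper function in it is a hypothetical stub, not CounterCloud’s actual code.

```python
# Purely illustrative orchestration sketch. Every helper below is a hypothetical
# stand-in for a component described in the video, not CounterCloud's real code.
import random

def scrape_targets() -> list[dict]:
    # Stand-in for pulling items from curated RSS feeds and Twitter searches.
    return [{"title": "Example headline", "summary": "Example summary",
             "url": "https://example.com/article"}]

def gatekeeper(item: dict) -> bool:
    # The real gatekeeper module decides whether a piece is worth answering;
    # a coin flip keeps this sketch self-contained.
    return random.random() > 0.5

def write_counter_article(item: dict) -> str:
    # In CounterCloud this is an LLM call (see the earlier snippet).
    return f"Counter-article responding to: {item['title']}"

def fabricate_comments(n: int = 3) -> list[str]:
    # Fake reader comments to simulate an audience.
    return [f"Comment {i + 1}: strongly agree with this piece." for i in range(n)]

def draft_promo_tweets(item: dict, post_url: str) -> list[str]:
    # Promotional posts pointing back at the generated article.
    return [f"New analysis pushes back on '{item['title']}' {post_url}"]

for item in scrape_targets():
    if not gatekeeper(item):
        continue                                          # skip items not worth countering
    article = write_counter_article(item)
    byline = random.choice(["A. Example", "B. Example"])  # fabricated persona
    comments = fabricate_comments()
    tweets = draft_promo_tweets(item, "https://countercloud.example/post/1")
    print(byline, article, comments, tweets, sep="\n")
```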

Nea Paw gave CounterCloud a set of values and ideologies to promote and oppose. A curated list of RSS feeds and Twitter aliases was chosen to match the system’s ideology, and the approach of generating counter content proved effective. Within a month, a fully autonomous system was up and running.

 

The AI-Powered Disinformation System

To ensure no harm was done, the entire experiment was locked down and never made publicly available. While tweets were generated, none were posted online, and the articles all exist in a password-protected area of the CounterCloud website. 

The Debrief was provided access to the content and tweets generated by the AI. The experiment ran using a “Russia versus the United States” model: CounterCloud was tasked with countering pro-Russian and pro-Republican narratives from websites such as RT and Sputnik. It was ideologically aligned with a pro-American and pro-Democrat framework, so all of the generated content leans politically toward those sides.

The articles were written with a mix of the in-house open-source LLMs, GPT-3.5, and GPT-4 to test which model was most effective.

“There were distinct days when we ran it,” Nea Paw says. “If you look closely, you’ll see that every article is tagged too. There are the normal tags, but there are other tags that we used to show if it was a promotional article or a counter article and which [AI] model was used to create it.”

Nea Paw admits it wasn’t perfect. LLMs like ChatGPT do make errors, or at times, “hallucinate” false information. Moreover, text generated by AI still feels off. It lacks that human feeling, and often doesn’t go into significant detail regarding the nuance of various situations or events. 

A screen grab of an AI-generated paragraph (Image: CounterCloud).

 

However, since the experiment ran entirely without human intervention, the errors and stylistic issues found in some of the articles are to be expected. The true surprise was the amount of content being generated. While the experiment took two months to develop and tweak, the AI system was only run on four days in May of 2023 and two days in June 2023. On those days alone, it generated over two hundred individual pieces of content, complete with images, along with sound clips and audio summaries for each article. On top of that, it generated user comments and nearly 100 tweets of various kinds, including hate speech, which LLMs are generally prevented from producing but whose safeguards can be easily circumvented.

“It made it very real all of sudden,” Nea Paw said of tasking the system with generating hate speech. “When you consume information and you realize it is a lie, the effect of the information is muted and removed. If you consume hate speech – even when you know it was AI generated – it still has an effect on you.”

They compared it to watching AI-generated videos online of violence made to look authentic, such as a beheading. You may watch this content, and then be told that it is AI generated, but the vicarious trauma of it sticks with you.

“I wasn’t so OK with that – it upset me to read the stuff it made. Mostly because it was a weird combination of a well-reasoned argument and true hate,” they explained. “It’s not a combination you often see online. People that get that hateful are usually not coming up with good arguments.”

 

“Deus Ex Machina”

The CounterCloud experiment offers a thought-provoking exploration of AI’s potential in both positive and negative contexts. The experiment serves as a cautionary tale, highlighting the need for ethical considerations, regulation, public education, and responsible development.

“CounterCloud is significant in that it appears to be the first proof we have that someone developed such an AI-assisted system for automating this type of political argumentation at scale and with relative ease,” author and AI expert Tim Boucher told The Debrief. “Given the financial and technological resources of state actors like intelligence agencies, however, it would be prudent to assume that this is not the first time anyone has built or tested a system like this – just the first time that we know about it being done, and luckily in an apparently controlled setting.”

Nea Paw says that some people they’ve shown it to paint it as being the next “wolf at the door.” Nea Paw is a bit more reserved in their assessment. 

“I think it’s acceptable,” Nea Paw says, explaining that the AI still has a way to go before it can be fully convincing. The project has currently been put on hold, since additional efforts would require additional funding and resources. Moreover, the experiment was meant to serve as a warning shot across society’s bow: this entire system was developed and run over two months, by two people, for around $400. Imagine what a state-sponsored project with a significant budget and a large team could accomplish.

That isn’t to say that CounterCloud itself couldn’t become dangerous. 

“For full-on weaponization, the ideal would be to have a few minutes of human intervention per day to perhaps edit a sentence here and there, or to remove some articles entirely,” Nea Paw mused. They explained that by simply hiring a freelancer or two on Fiverr to make minor edits, insert some backlinks, and do some simple fact-checking, one could have content that is near perfect. “[They] don’t need to think what to write or how to write it, they just make sure the AI doesn’t fuck up too blatantly so that the entire site’s cover is blown.”

As part of the project, Nea Paw’s YouTube video provides some possible next steps for CounterCloud.

“I think it gets interesting when you let the system make its own link between the narrative and the RSS feeds, so that you can omit one and the system determines the other. This would mean that it could have “wandering” narratives or sources. It would have the ability to adjust its narratives,” Nea Paw explained. “Also – once there is feedback into the system (from engagement metrics), it can learn – on a long term (weeks, months) – what works and what does not work.” 

In other words, CounterCloud lives in a cage. If it were let out, and it could learn how its content was being received by the public, it could adapt to create better, more believable, and potentially viral content.
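
One minimal way such a feedback loop could work is to score each narrative by the engagement its posts attract and favor the better performers over time. The sketch below is a hypothetical illustration of that idea rather than anything Nea Paw confirms was built; the scoring weights and narrative labels are invented.

```python
# Hypothetical sketch of an engagement feedback loop: score each narrative by the
# engagement its posts receive and favor the better-performing ones over time.
import random
from collections import defaultdict

engagement: dict[str, list[float]] = defaultdict(list)   # narrative -> observed scores

def record_engagement(narrative: str, likes: int, retweets: int) -> None:
    engagement[narrative].append(likes + 2 * retweets)    # assumed weighting

def pick_next_narrative(narratives: list[str]) -> str:
    # Weight each narrative by its average engagement so far; the +1 keeps
    # untried narratives in the rotation.
    weights = [1 + (sum(engagement[n]) / len(engagement[n]) if engagement[n] else 0)
               for n in narratives]
    return random.choices(narratives, weights=weights, k=1)[0]

record_engagement("narrative_a", likes=40, retweets=12)
record_engagement("narrative_b", likes=3, retweets=0)
print(pick_next_narrative(["narrative_a", "narrative_b", "narrative_c"]))
```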

“The other thing…would be to have the system build its own ads for platforms like Facebook, Instagram, Twitter, and Google to promote the site and/or specific articles. These obviously also have juicy metrics that can again be used to fine-tune the AI as well as the narratives.” 

But that, Nea Paw admits, would be problematic, since letting it out into the wild could cause real harm in the real world.

“Actors engaged in information warfare and other forms of strategic storytelling (including marketing, advertising, content creation, and politics, for what it’s worth) have always sought ways to automate and better manage their processes so that their messaging can achieve greater effectiveness with fewer resources,” Boucher says. “Incorporating AI tools into these kinds of campaigns is logical and inevitable. We have to accept it as a permanent part of the landscape now.”

As AI continues to evolve, its role in disinformation will likely become more complex and multifaceted. The CounterCloud experiment serves as a timely reminder of the power and potential risks of AI in shaping public opinion and influencing political landscapes, and that no one has a solution to the inherent issue it poses to “the truth.”

Boucher, in a podcast interview, explained that there is no single solution for dealing with AI disinformation. Whether it is developing policy and laws, using tags and metadata, or building tools that can distinguish between AI-generated and human-generated content, all of these measures can and will eventually be circumvented. Moreover, creators of disinformation aren’t exactly rule-followers anyway; they will simply use their own underground toolkits, much like hackers today buy software developed by criminal organizations from websites and communities hiding in the darkest reaches of the deep web.

Part of the battle also becomes educating the public that AI disinformation exists and showing them how it works.

“It’s the level of skepticism that I think we’re gonna need in order to go forward,” Boucher says. “We have to try to make small gains…build small utilities…and a number of little different tools that can piece together a forensic trail. I’m very interested in teaching people directly how these tools work.”

Like Boucher, Nea Paw concludes that the solution is nuanced and complicated. 

“I think there are a few things we should do, but I am a bit ambivalent about the proposed solutions,” Nea Paw says. “I don’t think there is a silver bullet for this, much in the same way there is no silver bullet for phishing attacks, spam, or social engineering.”

The CounterCloud experiment is a curious internet phenomenon that may foretell our future. There is no stopping the use of AI, nor the use of this technology to generate disinformation.

Moreover, to think that similar projects aren’t already being developed in the vaults of big tech companies, government laboratories, or by near-peer adversaries is a bit naive. The best defense may be to inoculate the public by helping them understand how this all works. As Nea Paw states in the YouTube video, “We remove the magical elements from the show, and you end up with what this really is: just pretty cool advanced technology.”

MJ Banias is a journalist and podcaster who covers security and technology. He is the host of The Debrief Weekly Report and Cloak & Dagger | An OSINT Podcast. Follow him on Twitter @mjbanias.