Fake Twitter accounts (Credit: RUB, Marquard)

Fake Social Media Accounts with AI-Generated Images Linked to Spread of Propaganda and Conspiracies, New Research Shows

New research reveals the prevalence of fake social media accounts that use AI-generated images and their role in spreading misinformation online.

An analysis of millions of social media profiles on X (formerly Twitter) has revealed that thousands of accounts using AI-generated profile images of realistic-looking people were likely fake. The researchers made this determination based on factors including each account's creation date, its follower count, the number of accounts it follows, the content it posts, and its overall behavior.

The researchers behind the study, which took place in 2023 while X was still known as Twitter, say they started with a pool of around 15 million accounts. After eliminating those without profile pictures, the team found that 0.052 percent of the accounts used an AI-generated image of a person as their avatar. While that is a small fraction of the total, it amounts to thousands of accounts, with 7,723 confirmed AI-generated images.
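As a quick sanity check, the reported share and the absolute count are consistent with one another. This is a back-of-the-envelope sketch; the denominator of roughly 15 million screened accounts is an assumption taken from the figures above, not an exact number from the paper:

```python
# Back-of-the-envelope check: do the reported figures agree?
# Assumed inputs from the text above: ~15 million accounts screened
# and 7,723 confirmed AI-generated avatars.
total_accounts = 15_000_000
ai_avatar_accounts = 7_723

share_percent = ai_avatar_accounts / total_accounts * 100
print(f"{share_percent:.3f}%")  # close to the reported 0.052 percent
```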

“That may not sound like much, but such images feature prominently on Twitter,” said lead author Jonas Ricker from Ruhr University Bochum, Germany. “Our analyses also indicate that many of the accounts are fake profiles that spread, for example, political propaganda and conspiracy theories.”

Along with spreading disinformation, the accounts with AI images showed several other signs of being fake. For example, they averaged fewer followers and followed fewer accounts than accounts without AI-generated profile images. The team also found a telltale pattern in the creation dates of these accounts.

“We also noticed that more than half of the accounts with fake images were first created in 2023; in some cases, hundreds of accounts were set up in a matter of hours – a clear indication that they weren’t real users,” Ricker explained.
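The creation-date signal Ricker describes can be illustrated with a simple sliding-window count over account-creation timestamps. This is a hypothetical sketch rather than the study's actual method; the one-hour window and the threshold of 100 accounts are illustrative assumptions:

```python
from datetime import datetime, timedelta
from bisect import bisect_right

def burst_windows(timestamps, window=timedelta(hours=1), threshold=100):
    """Flag any point where at least `threshold` accounts were created
    within a single `window` (a simple sliding-window count)."""
    ts = sorted(timestamps)
    bursts = []
    for i, start in enumerate(ts):
        # Index of the first timestamp outside [start, start + window]
        j = bisect_right(ts, start + window)
        if j - i >= threshold:
            bursts.append((start, j - i))
    return bursts

# Synthetic example: 150 signups twenty seconds apart (a burst)
# mixed with one signup per day (organic background).
base = datetime(2023, 3, 1, 12, 0)
burst = [base + timedelta(seconds=20 * k) for k in range(150)]
normal = [base + timedelta(days=d) for d in range(1, 30)]
print(bool(burst_windows(burst + normal)))  # True: a signup burst was detected
```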

After hand-checking a sample of accounts to verify their initial findings, the research team waited nine months. When they then compared the accounts with AI-generated profile images against similar accounts with authentic photos, they found that over half of those with AI images had been blocked by Twitter. According to Ricker, the company blocking these accounts is “yet another indication that these are accounts that have acted in bad faith.”

An analysis of the content shared by these accounts also revealed some overall themes. According to the researchers, the fake Twitter accounts regularly shared conspiracy theories involving President-Elect Donald Trump, the COVID-19 pandemic and vaccinations, the ongoing war in Ukraine, and stories about lotteries and finance, including cryptocurrencies. While these themes alone do not prove the accounts were fake, the researchers note that their combination with the AI-generated profile images and the other observed patterns strengthens that conclusion.

“We can only speculate what’s going on there,” said Ricker. “But it’s fair to assume that some accounts were created to spread targeted disinformation and political propaganda.”

For this study, the researchers limited their search to profile images generated by the StyleGAN 2 model. The website thispersondoesnotexist.com made that AI image generator famous by creating images of fake people that look like real ones.

“We assume that this site is often used to produce AI-generated profile pictures,” said Ricker. “This is because such AI-generated images are more difficult to trace than when someone uses a real image of a stranger as their avatar.”

In future studies, the team expects to expand their search with more recent AI models, which they hope will return an even higher percentage of potentially fake Twitter accounts. However, they believe that whatever tool the account creators are using likely works on the same principle.

“The current iteration of AI allows us to create deceptively real-looking images that can be leveraged on social media to create accounts that appear to be real,” Ricker explained.

The study “AI-Generated Faces in the Real World: A Large-Scale Case Study of Twitter Profile Images” was presented at the 27th International Symposium on Research in Attacks, Intrusions and Defenses (RAID).

Christopher Plain is a Science Fiction and Fantasy novelist and Head Science Writer at The Debrief. Follow and connect with him on X, learn about his books at plainfiction.com, or email him directly at christopher@thedebrief.org.