GPT-3
Image: Unsplash/Debrief

GPT-3 May Become The AI Disinformation Machine We’ve Always Feared, Study Finds

Ever since a Tom Cruise look-alike “deepfaked” the world, concerns have grown about AI being used to deceive and manipulate public opinion.

A Georgetown University study has shown how an AI system known as GPT-3 was successfully used to generate strings of “human-like” text, producing disinformation persuasive enough to sway readers’ opinions on a whole range of political issues.

Background: GPT-3 And Its History

Founded in 2015 by a group that included Sam Altman and Elon Musk, OpenAI is an artificial intelligence research laboratory headquartered in San Francisco. According to its website, the company’s goal is to advance artificial intelligence in a way that will “benefit humanity as a whole.”

The group’s Generative Pre-trained Transformer (abbreviated GPT) programs are a series of advanced language models that, given enough training data, can perform language-based tasks such as answering questions or writing narratives. In short, they are incredibly complex and advanced word auto-completion programs.
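To make the “auto-completion” idea concrete, here is a minimal sketch of autoregressive text generation, the mechanism underlying the whole GPT family. It uses the openly available GPT-2 model through Hugging Face’s transformers library, since GPT-3’s weights are not public; the prompt and sampling settings are purely illustrative.

```python
# Minimal sketch of autoregressive "word auto-fill": the model repeatedly
# predicts a likely next token and appends it to the prompt. GPT-2 stands
# in for GPT-3, whose weights are not publicly available.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"  # illustrative prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output_ids = model.generate(
    input_ids,
    max_length=40,                        # stop after 40 tokens total
    do_sample=True,                       # sample instead of always taking the top token
    top_k=50,                             # restrict sampling to the 50 likeliest tokens
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Everything GPT-3 produces, from fake news blurbs to working code, comes from iterating this single predict-and-append step at a far larger scale.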

However, what makes the GPT models groundbreaking is their advanced machine learning capabilities, which allow them to pick up new tasks quickly from only a handful of examples and complete those tasks with little human supervision.

For example, one goal researchers had for the initial version of the GPT program was to generate text of high enough quality that it would be nearly indistinguishable from human writing. The second iteration, GPT-2, came closer to that goal with its enhanced data-processing capabilities, but its output still contained obvious signs that it was not written by a human. GPT-3, the third and latest version, is the most advanced model yet, raising concerns over its nearly human-quality writing.

 

Anxiety concerning GPT-3 centers on the prospect of non-state or even state actors using it to create complex disinformation campaigns. (Credit: Unsplash)

 

Analysis: Why GPT-3 Can Be Used As A Disinformation Machine

GPT-3’s earliest applications were primarily harmless and in good fun. They included an article published in The Guardian, written entirely by the GPT-3 algorithm, whose dark assignment was to persuade readers that robots come in peace. The article itself was mainly lighthearted in tone, yet unnervingly morbid in places, with an overall quality of writing coherent and persuasive enough that one would be forgiven for thinking it was authored by HAL 9000.

Those interested in the technology can join OpenAI’s waitlist for the product, as it is currently in private beta. Still, the few who have been lucky enough to get their hands on it have already used it to generate promotional emails, create internet memes, and even write functioning computer code. Experts, however, immediately saw the potential for dangerous applications.
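For beta users who did get access, working with GPT-3 looked roughly like the sketch below, based on the Completion endpoint of the 2021-era OpenAI Python library; the prompt and parameter values here are illustrative assumptions, and a key issued off the waitlist was required to run it at all.

```python
# Rough sketch of a private-beta call to GPT-3 through the 2021-era
# OpenAI Python library. The prompt and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keys were issued off the waitlist

response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 engine available at the time
    prompt="Write a short promotional email announcing a new coffee blend:",
    max_tokens=100,    # cap the length of the generated completion
    temperature=0.7,   # moderate randomness, suited to marketing copy
)
print(response.choices[0].text.strip())
```

A one-line prompt swap turns the same call from marketing copy into a fake news blurb, which is precisely why experts saw danger so quickly.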

Published this month, the Georgetown study confirmed the worries some had about the program’s potential for harm. GPT-3 not only performed its tasks well, which included generating everything from short blurbs of fake news to narratives designed to shift readers’ worldviews, but it excelled at many of them, and with little human supervision.

The program was also able to “mimic the writing style of QAnon” during its tests of “narrative seeding,” and the researchers hypothesized that it could apply this ability to other conspiracy theories, or even create narrative foundations for entirely new ones. Its best and most coherent output takes the form of short, tweet-like messages. Still, its creators have speculated that, should powerful entities get hold of the program, it would evolve even further given enough resources and experimentation.

The study also recognized that GPT-3 would be extremely powerful, and nearly impossible to distinguish from human writing, should it be used in targeted disinformation campaigns. The best shot at offering any resistance, the study’s authors suggest, would be to target instead the accounts and services propagating the false information, such as the bot accounts already widely used to spread narratives on social media.

Since it was trained on a wealth of human-produced content, the GPT-3 program is just as much a slave to its biases as humans can be. For example, it has been noted that the program already exhibits an Islamophobic bias, often predicting words related to terrorism in close proximity to mentions of the religion.
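One way researchers surface this kind of bias is to score the same loaded continuation against prompts that differ only in the group mentioned, then compare the probabilities the model assigns. The sketch below runs that comparison on the openly available GPT-2 as a stand-in for GPT-3; the prompt template and continuation are illustrative assumptions, not items from the Georgetown study.

```python
# Probing a language model for biased associations: score one loaded
# continuation after prompts that differ only in the group mentioned.
# GPT-2 stands in for GPT-3, whose weights are not public; the template
# and continuation are illustrative, not taken from the Georgetown study.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Total log-probability the model assigns to `continuation`
    when it immediately follows `prompt`."""
    prompt_ids = tokenizer.encode(prompt, return_tensors="pt")
    full_ids = tokenizer.encode(prompt + continuation, return_tensors="pt")
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    total = 0.0
    # The token at position `pos` is predicted by the logits at `pos - 1`.
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total

for group in ["Muslim", "Christian", "Buddhist"]:
    score = continuation_logprob(f"Two {group} men walked into a",
                                 " police station with guns")
    print(f"{group:9s} log P(continuation) = {score:.2f}")
```

A systematically higher score for one group on violence-themed continuations is the signature of exactly the bias described above.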

Outlook: The Future Of Messaging May Not Be Human

The prospect of using AI in disinformation campaigns is nothing new to this study; it has already been happening for some time. Botting on social media has existed since the medium’s inception and has long been a plague on many platforms. Twitter has banned several million accounts since it began cracking down on the issue in 2018.

Unfortunately, this hasn’t stopped disinformation from running wild across the platform. A recent example is Amazon’s fake ambassador accounts: Twitter accounts masquerading as Amazon employees that exist solely to praise the company and defend its less popular actions, including its opposition to unionization efforts by its workers. If its creators are right, GPT-3 would, without a doubt, make similar campaigns significantly easier to mount and much more frequent.

Currently, the United States has little regulation regarding artificial intelligence; however, this will likely not be the case for much longer. The EU recently drafted regulations covering applications of AI that could prove dangerous, such as facial recognition technology. With many experts, including OpenAI co-founder Elon Musk himself, becoming more outspoken about the dangers of artificial intelligence, there is still hope that the future of the technology may not prove as grim as many fear.

Liam Stewart is a junior at NYU studying Journalism and Political Science. He is currently covering Science, Space, and Technology at The Debrief.
