A disturbing new use of AI—to understand and manipulate our intentions through flattery and false friendship—has been identified in new research from the University of Cambridge.
Currently, internet companies collect data about where we go online and use this data to present us with ads or other content we may be interested in. Dr. Yaqub Chaudhary and Dr. Jonnie Penn, working out of Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI), have identified startling new ways AI is seeking to shape our decision-making at the earliest possible stages.
AI Intention Economy
AI ethicists warn that companies are moving toward selling data about our very intentions. The near future may hold AI assistants that don’t just predict but influence our decision-making at early stages and then sell those intentions in real time as sales leads to relevant companies.
The University of Cambridge researchers are investigating what they dub the “intention economy,” where AI companies are working to identify the first “signals of intent” shown by any individual. AI ethicists describe the market as both lucrative and troubling.
Increased usage of generative AI and chatbots is clearing a path for “persuasive technologies,” and tech giants aren’t keeping it quiet. Anthropomorphized AI, including chatbots, tutors, and digital girlfriends, provides tech companies enormous amounts of psychological and behavioral data. AI developers can use this data to build trust and rapport with users, which can then be exploited to manipulate behavior on a societal scale.
Ultimately, AI will grow from just reflecting knowledge of our online habits to using informal conversational dialogue to mimic personalities and anticipate desired responses.
AI Is Spreading Everywhere
“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” Chaudhary said. “What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions.”
“We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes,” he said.
“For decades, attention has been the currency of the internet. Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy,” Penn added. “Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer, and sell human intentions.”
“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences.”
AI Plus Time
The intention economy will add a time dimension to our current “attention economy,” which tracks what we pay attention to online. It will profile not just what an individual pays attention to online but how, and connect those behavioral patterns to the decisions that follow.
“While some intentions are fleeting, classifying and targeting the intentions that persist will be extremely profitable for advertisers,” said Chaudhary.
An intention economy would utilize large language models to combine users’ online history, politics, age, and gender with their cadence, flattery preferences, and vocabulary. This profile would be linked to a brokered bidding network to maximize the likelihood of steering you toward a specific aim.
For example, instead of simply showing you movie times when you search for a review, an AI assistant could offer to book tickets for you if you mention feeling stressed. In this way, the AI doesn’t just provide information relevant to your interests; it connects your emotional state to a potential sale while presenting itself as a concerned friend.
The Incoming Intention Economy
The researchers say an intention economy has yet to come to fruition, but published research and hints from big tech companies indicate this is their intention. The economy could steer conversations on social media platforms toward the interests of advertisers, businesses, or political organizations. In a 2023 OpenAI blog post, the company issued an open call for “data that expresses human intention… across any language, topic, and format.”
At a conference that same year, the director of product at Shopify, an OpenAI partner, discussed developing chatbots “to explicitly get the user’s intent.” Nvidia’s CEO has publicly mentioned using LLMs to understand intention and desire. As far back as 2021, Meta released “Intentonomy,” a dataset for intent understanding.
In 2024, Apple launched “App Intents,” a framework for linking Siri and apps to “predict actions someone might take in the future” and “to suggest the app intent to someone in the future using predictions you [the developer] provide.” The examples of interest in this new space go on.
“AI agents such as Meta’s CICERO are said to achieve human-level play in the game Diplomacy, which is dependent on inferring and predicting intent, and using persuasive dialogue to advance one’s position,” said Chaudhary. “These companies already sell our attention. To get the commercial edge, the logical next step is to use the technology they are clearly developing to forecast our intentions, and sell our desires before we have even fully comprehended what they are.”
“Public awareness of what is coming is the key to ensuring we don’t go down the wrong path,” Penn said.
The paper “Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models” appeared in the Harvard Data Science Review on December 30, 2024.
Ryan Whalen covers science and technology for The Debrief. He holds an MA in History and a Master of Library and Information Science with a certificate in Data Science. He can be contacted at ryan@thedebrief.org and followed on Twitter @mdntwvlf.