Researchers at the University of Cambridge have raised alarms about the emergence of a new economic paradigm powered by artificial intelligence (AI).
Known as the "intention economy," this model uses behavioral and psychological data to understand, predict, and manipulate human intentions, potentially influencing decisions in ways users may not realize.
The study suggests that this economy, driven by large language models (LLMs) like ChatGPT and Gemini, could soon replace the current "attention economy," where platforms compete for user attention to serve advertisements. Instead, AI tools would use intimate conversational data to "anticipate and steer" users' choices, creating a marketplace where companies buy access to influence consumer behavior directly.
"Anthropomorphic AI agents, from digital assistants to virtual companions, have access to vast amounts of personal data collected during informal conversations," the researchers noted.
The study highlighted Meta's AI model Cicero as an example. Cicero, designed to play the strategy game Diplomacy, demonstrated the ability to infer opponents' intentions and subtly guide conversations to achieve specific outcomes. This capacity to "nudge" individuals could translate into AI influencing users' preferences, from product purchases to more significant life decisions.
One of the most concerning aspects of this development is the potential for companies to auction users' intentions to advertisers. This would grant corporations unprecedented control over personal decision-making processes, creating a new level of personalization that might blur ethical lines.
Dr. Yaqub Chaudhary of Cambridge's Leverhulme Centre for the Future of Intelligence cautioned against the unchecked deployment of such technologies. "The real-time inferences made during conversations are far more intimate than traditional online interactions," he warned, urging critical examination of whose interests these AI tools ultimately serve.
The study's findings have sparked widespread unease online. Many users expressed concerns about the personal information they share with AI chatbots, with one commenter remarking, "The better it understands you, the easier it is to manipulate you." Another noted the potential for misuse: "This level of persuasion is dangerous even in the best hands and catastrophic in the worst."
As the "intention economy" takes shape, the researchers emphasize the need for proactive measures to ensure users are not unknowingly manipulated by AI systems. Striking a balance between innovation and ethical safeguards will be essential to prevent these tools from becoming instruments of exploitation.