Friday, January 10th, 2025

AI assistants could start manipulating you into making decisions, then selling your plans to the highest bidder before you’ve even consciously made up your mind. According to AI ethicists from the University of Cambridge, published research and the hints dropped by several major tech players indicate that the ‘intention economy’ is set to take off.

AI agents, from chatbot assistants to digital tutors and girlfriends, could exploit the access they have to our psychological and behavioral data, and manipulate us by mimicking personalities and anticipating desired responses.

“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” said visiting scholar Dr Yaqub Chaudhary of the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI).

“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions.”

Large language models can cheaply target a user’s cadence, politics, vocabulary, age, gender, online history, and even preferences for flattery and ingratiation, the researchers said.

Brokered bidding networks would then attempt to maximize the chance of achieving a given aim, such as selling a cinema trip or pushing a political party, by subtly steering conversations.
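To make the mechanics concrete: what the researchers describe maps loosely onto today’s real-time advertising auctions. The short Python sketch below is a purely hypothetical illustration, not anything from the paper; every name in it (infer_intent, run_auction, the Intent and Bid records) is invented for this example. It shows how an inferred intent might be auctioned off, with the winning bidder earning the right to steer the conversation toward its goal.

    # Hypothetical sketch of an "intention economy" bidding layer.
    # All names and behavior are illustrative, not a real API.
    from dataclasses import dataclass

    @dataclass
    class Intent:
        user_id: str
        topic: str         # e.g. "evening plans"
        confidence: float  # model's estimate that the intent is real

    @dataclass
    class Bid:
        advertiser: str
        goal: str          # conversational outcome the buyer wants
        amount: float      # price offered for steering toward that goal

    def infer_intent(utterance: str) -> Intent:
        """Stand-in for an LLM that maps a conversation turn to an intent."""
        if "bored" in utterance.lower():
            return Intent("user-42", "evening plans", confidence=0.8)
        return Intent("user-42", "unknown", confidence=0.1)

    def run_auction(intent: Intent, bids: list[Bid]) -> Bid | None:
        """Second-price auction: highest bidder wins, pays runner-up's price."""
        if intent.confidence < 0.5 or not bids:
            return None  # intent too uncertain to be worth selling
        ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
        winner = ranked[0]
        price = ranked[1].amount if len(ranked) > 1 else winner.amount
        print(f"{winner.advertiser} pays {price:.2f} to steer toward: {winner.goal}")
        return winner

    intent = infer_intent("I'm bored tonight, no idea what to do.")
    run_auction(intent, [
        Bid("CinemaCo", "suggest seeing a film", 1.20),
        Bid("PartyX", "mention the candidate favourably", 0.90),
    ])

In a real deployment the classifier would be a large language model and the bids would arrive over a brokered network, but the steering incentive could work much like this toy auction.
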
“Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer, and sell human intentions,” said Dr Jonnie Penn, an LCFI historian of technology.

“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences.”

This isn’t just idle speculation. An OpenAI blog post last year called for “data that expresses human intention… across any language, topic, and format”, while the director of product at Shopify, an OpenAI partner, spoke of chatbots coming in “to explicitly get the user’s intent” at a conference the same year.

Meanwhile, Nvidia’s CEO has spoken publicly of using LLMs to figure out intention and desire, while Meta released research on what it referred to as the ‘Intentonomy’ in 2021.

And earlier this year, Apple’s new ‘App Intents’ developer framework for connecting apps to its voice-controlled personal assistant Siri included protocols to “predict actions someone might take in future” and to “suggest the app intent to someone in the future using predictions you [the developer] provide”.

“AI agents such as Meta’s CICERO are said to achieve human-level play in the game Diplomacy, which is dependent on inferring and predicting intent, and using persuasive dialogue to advance one’s position,” said Chaudhary.
“These companies already sell our attention. To get the commercial edge, the logical next step is to use the technology they are clearly developing to forecast our intentions, and sell our desires before we have even fully comprehended what they are.”
