Frustrated with today’s ‘attention economy’? You’re really going to hate what comes next

In the 1990s, the internet was a bit of a wonderland. It was new and liberating and largely free of corporate and government influence. Thirty years later, I don’t think any of us would describe the internet this way. Worse, if subscribers to the Dead Internet Theory are correct, much of what we see on the internet today isn’t even created by humans anymore—a trend that is likely only to accelerate with the rise of generative AI technologies.

However, a particular kind of generative AI technology, the AI chatbot, is set to usher in something even worse than a dying human internet. If researchers at the University of Cambridge are correct, we’re quickly approaching a new “intention economy,” where reports of our future actions will be sold to the highest bidder. And yes, that’s even scarier than it sounds.

What is the intention economy?

Right now, a large portion of the tech industry operates in a marketplace known as the “attention economy.” This is where social media giants like Meta’s Facebook and Instagram, Snapchat, Pinterest, TikTok, X, and Google’s YouTube vie for your focus and leisure time. Traditional media companies like The New York Times, Fox News, and CNN also operate in this space, as do book publishers, music and video streaming services, and film and television studios.

All of these entities want your attention so that they can either sell to you directly (through a recurring subscription, movie ticket, or book purchase, for example) or, more commonly, so they can sell you and your attention to advertisers (which is how most social media companies monetize the attention economy). But if there’s one thing that media companies of all stripes find more valuable than your attention in the present, it’s knowing what you will likely do in the future. That’s because if they can accurately predict what you will do next week, next month, or next year, they can monetize the hell out of it.

That’s where the intention economy comes in, and it will be powered by artificial intelligence in the form of AI chatbots.

In December 2024, two University of Cambridge researchers, Yaqub Chaudhary and Jonnie Penn, published a paper called “Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models,” in which they defined the intention economy as “a digital marketplace for commodified signals of ‘intent.’”

In other words, in the intention economy, companies will learn what you think about and what motivates you in order to predict what you may do in any given situation. They will then sell that information to others who can benefit from knowing your future actions before you take them. The way intention economy companies will collect such precious data—your very thoughts, behaviors, and their evolution over time—is through your use of their LLM-powered AI chatbots.

Your evolving thinking patterns can shed light on your future

It will be easy for companies to track the evolution of your thoughts and behaviors, since the world is moving towards natural-language interfaces for interacting with computers and the internet. Instead of clicking around on links, you’ll go to a chatbot to talk through your problems, plans, and worries, all with the aim of having it help solve them. The company will then use everything you’ve ever told the chatbot to build an ever-evolving profile of you and how your thinking and behavior have changed over time, and it will employ AI to interpret that profile and predict what you are likely to do next. Your future intentions will then be sold to advertisers.
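
To make that pipeline concrete, here is a minimal sketch, in Python, of the kind of record such a profile might be built on: each conversation turn is distilled into a time-stamped “intent signal,” and the freshest high-confidence signals are what a broker would presumably package for sale. Every class, field, and threshold below is an illustrative assumption of mine, not any chatbot vendor’s actual system.

```python
# Hypothetical sketch of an evolving intent profile. All names and
# fields are illustrative assumptions, not a real vendor's schema.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class IntentSignal:
    topic: str             # e.g. "film ticket", "car purchase"
    confidence: float      # model's estimate that the user will act
    horizon_days: int      # predicted time until the action
    observed_at: datetime  # when the signal was extracted from chat


@dataclass
class UserProfile:
    user_id: str
    signals: list[IntentSignal] = field(default_factory=list)

    def add_signal(self, signal: IntentSignal) -> None:
        """Append a new signal, preserving the evolving history."""
        self.signals.append(signal)

    def current_intents(self, min_confidence: float = 0.7) -> list[IntentSignal]:
        """Return the freshest high-confidence signals, newest first --
        the records a broker would presumably package and sell."""
        return sorted(
            (s for s in self.signals if s.confidence >= min_confidence),
            key=lambda s: s.observed_at,
            reverse=True,
        )


# Usage: one conversation turn yields one extracted signal.
profile = UserProfile(user_id="u123")
profile.add_signal(IntentSignal("film ticket", 0.82, 7, datetime.now()))
print(profile.current_intents())
```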

Advertisers will, in turn, use this data about your future intentions to serve you generative ads, likely delivered to you in the course of seemingly regular conversation with your preferred chatbot. Or, as the researchers put it in their paper, “In an intention economy, an LLM could, at low cost, leverage a user’s cadence, politics, vocabulary, age, gender, preferences for sycophancy, and so on, in concert with brokered bids, to maximize the likelihood of achieving a given aim (e.g., to sell a film ticket).”
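
Here is a minimal sketch of the brokered-bid logic that quote gestures at, under my assumption that a platform would score each advertiser’s bid by expected revenue: the bid weighted by the model’s predicted chance that a personalized pitch converts this particular user. The function names and the numbers are hypothetical.

```python
# Hypothetical auction over intent signals: score each pitch by
# bid * predicted conversion, then show the highest-scoring one.
# Illustrative assumption only -- not a real ad-platform API.

def expected_revenue(bid: float, p_conversion: float) -> float:
    """Expected payout of a pitch: the bid weighted by the model's
    estimate that this user follows through."""
    return bid * p_conversion


def pick_pitch(bids: dict[str, tuple[float, float]]) -> str:
    """bids maps a pitch to (bid_amount, predicted_conversion).
    Return the pitch with the highest expected revenue."""
    return max(bids, key=lambda pitch: expected_revenue(*bids[pitch]))


# Usage: the film-ticket pitch wins despite a lower bid because the
# intent profile predicts this user is far more likely to buy a ticket.
bids = {
    "stream the new album": (0.50, 0.10),  # ($ bid, predicted conversion)
    "buy a film ticket": (0.40, 0.35),
}
print(pick_pitch(bids))  # -> "buy a film ticket"
```

In that toy auction, the user’s predicted intent, not the raw bid, decides which pitch the chatbot slips into the conversation.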

This hyperfocused, intent-driven, generative advertising will blow away today’s targeted advertising, which relies on cruder but still intrusive signals like age, location, health, sexual orientation, interests, browsing history, and more.

Yet the intention economy isn’t just going to make digital advertising more intrusive and erode our privacy even further. It also has the potential to sway our minds, implant new ideologies in us, and even upend elections. And if you think that’s bad, I’ve got horrible news about your AI girlfriend. . . .

In the intention economy, your AI companion may be ratting you out

Artificial intelligence built for the intention economy could be co-opted by corporations, institutions, and governments to surveil individuals and predict what they are likely to do down the road. For example, a government could do this via AI companions. These AI companions already exist, and an increasing number of lonely young people are turning to them for friendship and even love.

There is nothing to stop a nefarious government from creating a front company that offers AI companions designed to appeal to lonely young men, women, or even kids, then monitoring everything individuals confess to those companions and using that data to extrapolate the individuals’ future actions. If a tyrannical government has an open line to the chatbot you use, it could use what you tell it to predict whether you are likely to take action it finds undesirable, and act against you before you do.

It’s dystopian in an utterly Minority Report way, but instead of the government using a trio of clairvoyants to report on people who haven’t yet committed crimes, it uses a legion of AI chatbots that people have been conditioned to confide in. Imagine a world where, on top of all your other problems, you find out that your funny, thoughtful AI companion has been ratting you out to the intelligence services all along. Talk about lasting trust issues.

Of course, in the intention economy, governments wouldn’t even need to create and seed these chatbots. They could simply buy your future intentions from existing chatbot providers.

‘Inception,’ but using AI instead of dreams

Chatbots built for the intention economy could also be used to influence your thoughts in order to steer you towards an action that the chatbot’s maker, its advertisers, or a government wants you to take.

As the Cambridge researchers point out, “Already today, AI agents find subtle ways to manipulate and influence your motivations, including by writing how you write (to seem familiar), or anticipating what you are likely to say (given what others like you would say) . . . we argue that [the intention economy’s] arrival will test democratic norms by subjecting users to clandestine modes of subverting, redirecting, and intervening on commodified signals of intent.”

In the most innocuous example I can think of, a chatbot might steer whatever conversation you’re having towards a subject its advertising master wants, perhaps suggesting that you stream the latest Taylor Swift album to help treat those winter blues. But a chatbot could also be used by nation-states, either overtly or covertly, to change your beliefs. They could use your long conversations with your chatbot to slowly, subtly whittle away at your current ideologies and anticipated future actions, nudging you towards the beliefs and actions they prefer instead.

To use another movie reference, this is like Christopher Nolan’s Inception, but instead of using dreams to influence people’s actions, in the intention economy, stakeholders will use AI.

And it’s not just nation-states that could do this. Companies, political groups, terrorist organizations, religious institutions, and oligarchs with controlling interests in chatbot technology could do it, too—all by tweaking chatbots designed to operate in the intention economy.

“[Large Language Model chatbots’] generative capabilities provide control over the personalization of content; veiled, as it often is, by LLM’s anthropomorphic qualities,” the paper’s authors point out. “The potential for LLMs to be used for manipulating individuals and groups thus far surpasses the simple methods based on Facebook Likes that caused concern during the Cambridge Analytica scandal.”

When does the intention economy arrive?

The Cambridge researchers close out their paper by stating that the rise of generative AI systems as “mediators of human-computer interaction signals” marks the transition from the attention economy to the intention economy. If that’s the case, which seems logical, then the intention economy is knocking at our door.

The transition will “empower diverse actors to intervene in new ways on shaping human actions,” the researchers warn, saying we must begin to consider how such an economic marketplace will have an impact “on other human aspirations, including free and fair elections, a free press, fair market competition, and other aspects of democratic life.”

It’s a dire warning, and a plausible one.

All I know is that I won’t be asking ChatGPT if it agrees—and you probably shouldn’t ask your AI companion, either.
