By Soham Nandi | Reviewed by Lily Ramsey, LLM | Jan 7, 2025
In an article recently published in the journal Harvard Data Science Review, researchers explored the emergence of an "intention economy," in which large language models (LLMs) are used to capture, manipulate, and commodify human intent.
They examined how tech companies leverage LLMs for hyper-personalized persuasion and data collection, extending the attention economy into influencing users' desires and decisions. The authors highlighted the ethical implications of this new marketplace, in which human plans and preferences are both inferred and shaped by generative artificial intelligence (AI) technologies.
Background
The intention economy builds on the foundations of the attention economy, which commodified users’ attention for platforms like Instagram and Facebook. In contrast, the intention economy focuses on commodifying human motivations and intent, treating them as valuable currency.
Prior studies have explored the attention economy’s impact on consumer behavior, yet the transition to the intention economy has been underexamined. Existing work has often framed this shift as empowering for consumers but lacks critical scrutiny of its societal implications.
This paper addressed these gaps by examining the role of LLMs in capturing, manipulating, and monetizing user intent. It critiqued how emerging AI technologies exploit intent through hyper-personalized persuasion and subtle behavioral manipulation.
By analyzing technical literature and corporate strategies, the authors revealed the potential for LLM-driven systems to subvert democratic norms and ethical boundaries. This study provided a vital critique, urging oversight of this evolving digital marketplace.
Shaping Intent
Traditional philosophical definitions of intention focus on purposeful action and reasoning, while tech-driven conceptualizations view intention as a computationally operationalizable phenomenon. The assumption that human choices can be shaped within structured digital environments underpins much of LLM research and development.
Key gaps in existing research include the lack of an explicit scientific rationale for tech-industry claims about user behavior and intention. This paper critiqued these assumptions, focusing on two points: the structured digital systems that shape user choices, and the temporal profiling of user intent, which enables the monetization of both fleeting and persistent motivations.
By drawing on recent LLM research and industry rhetoric, the authors outlined how major tech firms aim to use LLMs for personalized manipulation of user intent, raising ethical concerns about autonomy and exploitation in the intention economy.
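The "temporal profiling" described above distinguishes short-lived intents (booking dinner tonight) from persistent ones (planning a major purchase). A minimal sketch of one way such a profile could be scored, using exponential decay, is shown below; the half-lives, event ages, and function names are purely illustrative assumptions, not a description of any company's actual system.

```python
# Illustrative sketch only: scoring intent signals with exponential decay,
# so that recent, short-lived signals and long-running, persistent signals
# can be weighted differently. All parameters here are hypothetical.

def decayed_score(event_ages_days: list[float], half_life_days: float) -> float:
    """Sum exponentially decayed weights for intent signals of given ages.

    Each signal contributes 0.5 ** (age / half_life), so a signal exactly
    one half-life old counts half as much as a brand-new one.
    """
    return sum(0.5 ** (age / half_life_days) for age in event_ages_days)

# A fleeting intent (e.g. "book a table tonight") might be modeled with a
# half-life of hours; a persistent one (e.g. researching a house purchase)
# with a half-life of months.
fleeting = decayed_score([0.1, 0.2], half_life_days=0.5)      # two recent signals
persistent = decayed_score([10, 40, 90], half_life_days=60)   # signals over months
```

Under this toy model, a profiler could monetize both kinds of intent by choosing the half-life appropriate to each category of motivation.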
Eliciting, Inferring, and Understanding Signals of Intent
The first OpenAI developer conference in November 2023 marked a pivotal moment in the evolving role of LLMs.
Announcements like customizable generative pre-trained transformers (GPTs), multimodal capabilities, and revenue-sharing programs underscored OpenAI's strategy to harness user-generated data through developer innovation.
Key partners like Microsoft are investing heavily in infrastructure to accommodate LLM workloads, with Microsoft positioning Azure as the dominant cloud platform. NVIDIA and Meta also aim to integrate LLMs into core computing, leveraging them to infer human intent and enhance interactions across applications.
LLMs’ ability to extract and predict user intentions signals a shift in how data is collected and monetized. OpenAI, for example, seeks datasets that express human intent to refine future AI systems, while companies like Meta develop datasets and frameworks like "Intentonomy" to categorize motivations.
These advancements are also applied in commercial settings, such as ad targeting and recommendation systems, where behavioral insights drive hyper-personalized experiences.
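To make the ad-targeting idea concrete, the toy sketch below maps a user utterance to intent categories and ranks ads accordingly. This is a deliberately crude keyword matcher, not an LLM and not any vendor's actual pipeline; the category names, keyword lists, and ad labels are all hypothetical.

```python
# Toy illustration of intent-driven ad ranking. In practice an LLM would
# infer intent from conversation; here a keyword count stands in for that
# inference. Every name below is invented for illustration.

INTENT_KEYWORDS = {
    "travel": ["flight", "hotel", "vacation", "trip"],
    "purchase": ["buy", "price", "discount", "order"],
    "health": ["symptom", "doctor", "diet", "sleep"],
}

def infer_intents(utterance: str) -> dict[str, int]:
    """Count keyword hits per category as a crude intent signal."""
    words = utterance.lower().split()
    return {
        category: sum(word in keywords for word in words)
        for category, keywords in INTENT_KEYWORDS.items()
    }

def rank_ads(utterance: str, ads: dict[str, str]) -> list[str]:
    """Order ads so those matching the strongest inferred intent come first."""
    scores = infer_intents(utterance)
    return sorted(ads, key=lambda ad: -scores.get(ads[ad], 0))

ranked = rank_ads(
    "I want to book a discount flight for my trip",
    {"AirlineAd": "travel", "ShoeSaleAd": "purchase", "ClinicAd": "health"},
)
print(ranked)  # ['AirlineAd', 'ShoeSaleAd', 'ClinicAd']
```

The point of the sketch is the pipeline shape, infer intent, then rank content by it, which is what makes conversational data commercially valuable in the scenarios the authors describe.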
However, these innovations raise ethical concerns. Techniques for eliciting intent through conversational AI could lead to manipulation, as demonstrated in projects like Meta's CICERO, which blends strategic play with persuasive dialogue.
Generative AI bypasses traditional privacy safeguards, using content as a proxy for inferring private attributes. Partnerships like OpenAI's with Dotdash Meredith further illustrate how intent data is commodified for advertising.
This evolving ecosystem positions LLMs as central to understanding and influencing human behavior, with profound implications for privacy, autonomy, and societal norms.
Persuasion and Intent in AI-driven Systems
The integration of LLMs into systems capable of influencing human behavior and intent is becoming increasingly sophisticated. Meta’s AI agent CICERO exemplifies this trend, demonstrating strategic reasoning by inferring human intentions and persuading players in the game Diplomacy.
Researchers highlighted CICERO’s ability to model and adapt to players’ behaviors, proposing mutually beneficial strategies to alter decisions. This capability underscores the broader potential of LLMs to align content with users' psychological profiles dynamically.
LLMs can project intentions and biases, even unintentionally, as shown by latent persuasion in predictive text and adversarial manipulation in images. These tools may disrupt thought processes or subtly redirect user intent.
Emerging applications include crafting subliminally suggestive synthetic images and personalized content that appeals to motivational states, heightening the effectiveness of persuasive communication.
The commercialization of such capabilities is evident in partnerships like NVIDIA’s collaboration with WPP to generate real-time personalized video advertisements.
Companies like OpenAI and Microsoft are vying to dominate this evolving marketplace, leveraging third-party data to navigate ethical concerns. This race raises critical questions about the commodification of user intent and the implications of increasingly targeted and persuasive AI systems.
Conclusion
The rise of the intention economy, driven by LLMs and generative AI, represents a profound shift in how human behavior and intent are commodified. These technologies enable hyper-personalized manipulation and data monetization, raising ethical concerns about autonomy, privacy, and societal impact.
The blending of strategic reasoning, persuasive communication, and intent profiling underscores the growing influence of AI in shaping human decisions.
As corporations compete to dominate this marketplace, critical oversight is essential to safeguard democratic norms, individual freedoms, and ethical boundaries in the face of these transformative developments.
Journal Reference
Chaudhary, Y., & Penn, J. (2025). Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models. Harvard Data Science Review, Special Issue 5. doi:10.1162/99608f92.21e6bbaa. https://hdsr.mitpress.mit.edu/pub/ujvharkk/release/1