Blasé Capital: Advertising Intelligence

As Sam Altman’s OpenAI experiments with advertising in ChatGPT, its fiercest competitor, Anthropic (maker of the Claude AI assistant), has mocked the move and released a long blog post explaining why it believes advertising has no place in AI. In this column, we reproduce the blog:
There are many good places for advertising. A conversation with Claude is not one of them. Advertising drives competition, helps people discover new products, and allows services like email and social media to be offered for free. We have run our own ad campaigns, and our AI models have, in turn, helped many of our customers in the advertising industry. But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking. We want Claude to act unambiguously in our users’ interests. So we have made a choice: Claude will remain ad-free. Our users will not see ‘sponsored’ links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.
When people use search engines or social media, they have come to expect a mixture of organic and sponsored content. Filtering signal from noise is part of the interaction. Conversations with AI assistants are meaningfully different. The format is open-ended; users often share context and reveal more than they would in a search query. This openness is part of what makes conversations with AI valuable, but it is also what makes them susceptible to influence in ways that other digital products are not. Our analysis of conversations with Claude (conducted in a way that keeps all data private and anonymous) shows that an appreciable portion involve topics that are sensitive or deeply personal: the kinds of conversations you might have with a trusted advisor. Many other uses involve complex software engineering tasks, deep work, or thinking through difficult problems. The appearance of ads in these contexts would feel incongruous and, in many cases, inappropriate.
We still have much to learn about the impact of AI models on the people who use them. Early research suggests both benefits, like people finding support they could not access elsewhere, and risks, including the potential for models to reinforce harmful beliefs in vulnerable users. Introducing advertising incentives at this stage would add another layer of complexity. Our understanding of how models translate the goals we set for them into specific behaviours is still developing; an ad-based system could therefore have unpredictable results. Being genuinely helpful is one of the core principles of Claude’s Constitution, the document that describes our vision for Claude’s character and guides how we train the model. An advertising-based business model would introduce incentives that could work against this principle.
Consider a concrete example. A user mentions they are having trouble sleeping. An assistant without advertising incentives would explore the various potential causes (stress, environment, habits and so on) based on what might be most insightful to the user. An ad-supported assistant has an additional consideration: whether the conversation presents an opportunity to make a transaction. These objectives may often align, but not always. And, unlike a list of search results, ads that influence a model’s responses may make it difficult to tell whether a given recommendation comes with a commercial motive or not. Users should not have to second-guess whether AI is genuinely helping them or subtly steering the conversation towards something monetisable. Even ads that do not directly influence an AI model’s responses, and instead appear separately within the chat window, would compromise what we want Claude to be: a clear space to think and work. Such ads would also introduce an incentive to optimise for engagement: the amount of time people spend using Claude, and how often they return. These metrics are not necessarily aligned with being genuinely helpful.
The most useful AI interaction might be a short one, or one that resolves the user’s request without prompting further conversation. We recognise that not all advertising implementations are equivalent. More transparent or opt-in approaches, where users explicitly choose to see sponsored content, might avoid some of the concerns outlined above. But the history of ad-supported products suggests that advertising incentives, once introduced, tend to expand over time as they become integrated into revenue targets and product development, blurring boundaries that were once clear-cut. We have chosen not to introduce these dynamics into Claude.
Anthropic is focused on businesses, developers, and helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions. Expanding access to Claude is central to our public benefit mission, and we want to do it without selling our users’ attention or data to advertisers. To that end, we have brought AI tools and training to educators in over 60 countries, begun national AI-education pilots with multiple governments, and made Claude available for non-profits at a significant discount… Should we need to revisit this approach, we will be transparent about our reasons for doing so.