Sam Altman got exceptionally testy over Claude Super Bowl ads

Anthropic’s Super Bowl commercial, one of four ads the AI lab dropped on Wednesday, begins with the word “BETRAYAL” boldly splashed across the screen. The camera pans to a man earnestly asking a chatbot (clearly meant to depict ChatGPT) for advice on how to talk to his mother.
The bot, played by a blonde woman, offers some classic advice. Start listening. Try a nature walk! Then the conversation turns into an ad for a fictional (we hope!) cougar dating site called Golden Encounters. Anthropic closes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.
Another commercial shows a slight young man seeking advice on building a six-pack. After he provides his height, age, and weight, the bot serves him an ad for height-enhancing insoles.
The Anthropic commercials are cleverly targeted at OpenAI users, following that company’s recent announcement that ads are coming to ChatGPT’s free tier. And they immediately caused an uproar, making headlines about Anthropic “trolling,” “skewering,” and “dunking” on OpenAI.
They’re funny enough that even Sam Altman admitted on X that he laughed at them. But he clearly didn’t find them very funny: they inspired him to write a novella-sized tirade in which he called his rival “dishonest” and “authoritarian.”
In that post, Altman explains that an ad-supported tier is intended to shoulder the cost of offering ChatGPT for free to its millions of users. ChatGPT is still by far the most popular chatbot.
But OpenAI’s CEO insisted the ads were “dishonest” because they implied that ChatGPT would twist a conversation to insert an ad (and possibly for an off-brand product as well). “We would obviously never display ads in the manner Anthropic portrays them,” Altman wrote in the social media post. “We’re not stupid and we know our users would reject that.”
Indeed, OpenAI has promised that ads will be separated and labeled and will never influence a chat’s responses. But the company has also said it plans to make them conversation-specific, which is the central claim of Anthropic’s ads. As OpenAI explained on its blog: “We plan to test ads at the bottom of replies in ChatGPT if there is a relevant sponsored product or service based on your current conversation.”
Altman then leveled some equally dubious claims at his rival. “Anthropic serves an expensive product to rich people,” he wrote. “We also strongly believe in bringing AI to billions of people who cannot afford subscriptions.”
But Claude also has a free tier, with plans at $0, $17, $100, and $200 per month; ChatGPT’s tiers are $0, $8, $20, and $200. You could say the subscription ladders are fairly equivalent.
Altman also claimed in his post that “Anthropic wants to control what people do with AI.” He says it blocks the use of Claude Code by “companies they don’t like,” such as OpenAI, and that Anthropic tells people what they can and cannot use AI for.
It’s true that Anthropic’s entire marketing pitch has been built around “responsible AI” since day one. After all, the company was founded by former OpenAI employees who said they became concerned about AI safety while working there.
Yet both chatbot companies maintain usage policies and AI guardrails, and both talk about AI safety. And while OpenAI allows ChatGPT to be used for erotica and Anthropic does not, OpenAI, like Anthropic, has determined that some content should be blocked, especially around mental health.
Still, Altman took this Anthropic-tells-you-what-to-do argument to an extreme when he accused Anthropic of being “authoritarian.”
“One authoritarian company alone won’t get us there, not to mention the other obvious risks. It’s a dark path,” he wrote.
The use of “authoritarian” in a rant about a brash Super Bowl ad is misguided at best. It is particularly tactless given the current geopolitical environment, in which protesters around the world have been murdered by agents of their own governments. Business rivals have been taunting each other in ads since the dawn of advertising, but Anthropic has clearly struck a nerve.