The fixer’s dilemma: Chris Lehane and OpenAI’s impossible mission

Chris Lehane is one of the best in the business at making bad news go away. Al Gore’s press secretary during the Clinton years, Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels – Lehane knows how to spin. Now he’s two years into what may be his most impossible job yet: as OpenAI’s vice president of global policy, convincing the world that OpenAI actually cares about democratizing artificial intelligence, even as the company increasingly behaves like, well, every other tech giant that has ever claimed to be different.
I had 20 minutes with him on stage at the Elevate conference in Toronto earlier this week – 20 minutes to get past the talking points and at the real contradictions eating away at OpenAI’s carefully crafted image. It wasn’t easy or entirely successful. Lehane is really good at his job. He’s likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m., worried about whether any of this will actually benefit humanity.
But good intentions don’t mean much when your company is suing critics, draining economically depressed cities of water and electricity, and resurrecting dead celebrities to assert your market dominance.
The company’s Sora problem is actually at the root of everything else. The video-generation tool launched last week and is apparently full of copyrighted material. That was a bold move for a company already being sued by The New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it was also brilliant. The invite-only app shot to the top of the App Store as people created digital versions of themselves; of OpenAI CEO Sam Altman; of characters like Pikachu, Mario, and Cartman from “South Park”; and of dead celebrities like Tupac Shakur.
When asked what drove OpenAI’s decision to launch this latest version of Sora with these characters, Lehane gave me the standard pitch: Sora is a “general purpose technology,” like electricity or the printing press, that democratizes creativity for people without talent or resources. Even he – a self-proclaimed creative zero – can make videos now, he said on stage.
What he was dancing around is that OpenAI initially “let” rights holders opt out of having their work appear in Sora, which is not how copyright law typically works. Then, after OpenAI noticed that people really enjoyed using copyrighted images, it “evolved” toward an opt-in model. That’s not iterating. That’s testing how much you can get away with. (And though the Motion Picture Association made noise last week about legal action, OpenAI seems to have gotten away with quite a lot.)
Naturally, the situation recalls the grievances of publishers, who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane on publishers being cut out of the economics, he invoked fair use, the American legal doctrine that balances creators’ rights with public access to knowledge. He called it the secret weapon of American tech dominance.
Maybe. But I had recently interviewed Al Gore – Lehane’s old boss – and realized that anyone could simply ask ChatGPT about it rather than read my piece on TechCrunch. “It’s ‘iterative,’” I said, “but it’s also substitutional.”
For the first time, Lehane dropped the script. “We’re going to have to figure all of this out,” he said. “It’s really glib and easy to sit here on stage and say we need to come up with new economic revenue models. But I think we will.” (We’re making it up as we go, in short.)
Then there is the infrastructure question that no one wants to answer honestly. OpenAI already operates a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened access to AI to the advent of electricity – saying those who got it last are still catching up – yet OpenAI’s Stargate project is seemingly targeting some of those same economically disadvantaged places for facilities with an enormous appetite for water and electricity.
When asked during our conversation whether these communities will benefit or just foot the bill, Lehane talked about gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought on 450 gigawatts of capacity last year, along with 33 nuclear facilities. If democracies want democratic AI, they have to compete. “The optimist in me says this will modernize our energy systems,” he said, painting a picture of a re-industrialized America with transformed power grids.
It was inspiring. But it didn’t answer the question of whether people in Lordstown and Abilene will watch their utility bills climb while OpenAI generates videos of John F. Kennedy and The Notorious B.I.G. (Video generation is the most energy-intensive use of AI out there.)
That brought me to my most uncomfortable example. Zelda Williams spent the day before our interview begging strangers on Instagram to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hot dogs out of people’s lives.”
When I asked how the company reconciles this kind of intimate harm with its mission, Lehane responded by talking about processes including responsible design, testing frameworks and government partnerships. “There’s no script for this kind of thing, is there?”
Lehane showed vulnerability at times, saying he wakes up every night at 3 a.m. worried about democratization, geopolitics, and infrastructure. “This entails enormous responsibilities,” he said.
Whether or not those moments were played for the audience, I believed him. I left Toronto feeling like I’d watched a masterclass in political messaging – Lehane threading an impossible needle while deflecting questions about corporate decisions that, for all I know, he may not even agree with. Then Friday happened.
Nathan Calvin, a lawyer who works on AI policy at the nonprofit Encode AI, revealed that while I was speaking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his home in Washington, D.C., during dinner to serve him a subpoena. It wanted his private messages with California lawmakers, students, and former OpenAI employees.
Calvin accuses OpenAI of intimidation tactics around a new piece of AI regulation, California’s SB 53. He says the company used its legal battle with Elon Musk as a pretext to target critics, implying that Encode was secretly funded by Musk. In fact, Calvin says, he fought OpenAI’s opposition to SB 53, an AI safety law, and when he saw the company claim it was “working to improve the law,” he “literally laughed out loud.” In a social media thread, he went on to specifically call Lehane the “master of the political dark arts.”
In Washington that could be a compliment. At a company like OpenAI, whose mission is “to build AI that benefits all humanity,” it sounds like an indictment.
What matters even more is that OpenAI’s own people are conflicted about what the company is becoming.
As my colleague Max reported last week, a number of current and former employees voiced their misgivings on social media following the release of Sora 2, among them Boaz Barak, an OpenAI researcher and Harvard professor, who wrote that Sora 2 is “technically amazing, but it is premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”
On Friday, Josh Achiam – OpenAI’s head of mission alignment – tweeted something even more surprising about Calvin’s accusation. Achiam prefaced his comments by saying they posed “potentially a risk to my entire career” and then wrote of OpenAI: “We cannot do things that make us a fearsome force rather than a virtuous one. We have a duty to and a mission for all humanity. The bar for pursuing that duty is remarkably high.”
That is . . . something. An OpenAI executive publicly wondering whether his company is becoming “a fearsome force rather than a virtuous one” isn’t a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now reckoning with a crisis of conscience despite the professional risk.
It’s a crystallizing moment. You can be the best political operative in the tech sector, a master at navigating impossible situations, and yet end up working for a company whose actions are increasingly at odds with its stated values – contradictions that will only increase as OpenAI rushes toward artificial general intelligence.
I don’t think the real question is whether Chris Lehane can sell OpenAI’s mission. What matters is whether others – including, crucially, the other people who work there – still believe it.




