Barry Diller trusts Sam Altman. But ‘trust is irrelevant’ as AGI nears, he says.

Billionaire media mogul Barry Diller doesn’t think Sam Altman, CEO of OpenAI, is unreliable, despite recent reports claiming the opposite. On stage at The Wall Street Journal’s “Future of Everything” conference this week, Diller vouched for the OpenAI chief, who has been accused by some former colleagues and board members of being manipulative and deceitful at times.
Diller, who is friends with Altman, was responding to a question about whether people should trust Altman to ensure that artificial intelligence benefits humanity.
In particular, he was asked about the theoretical form of AI known as artificial general intelligence, or AGI, which could one day outperform humans at any task.
The media exec, co-founder of Fox Broadcasting and chairman of IAC and Expedia Group, said that while he believes Altman is sincere in what he’s doing, trust isn’t really what people should be focusing on. The real concern, he said, is the unknown consequences AI will bring.
“One of the big problems with AI is that it goes way beyond trust,” said Diller. “It may be that trust is irrelevant, because the things that happen are a surprise to the people who make these things happen. And I’ve spent a lot of time with several people who have been in AI creation mode, and they themselves have a sense of wonder. So… it’s the great unknown. We don’t know. They don’t know,” he explained.
“We have embarked on something that will change almost everything. It is not underreported. Whether these massive investments will continue is of no concern to me. I have not invested in it, but progress will be made,” Diller added.
Still, the media mogul said he believes most of the people in charge are good stewards, and that Altman is sincere and “a decent person with good values.” (Diller wouldn’t say which of the AI leaders he thinks is disingenuous, we should note.)
“But the problem isn’t their stewardship. The problem is… it’s really about the unknown. They don’t know what can happen once you get AGI, and we’re close. We’re not there yet, but we’re getting closer, faster and faster. And we need to think about guardrails,” Diller noted.
Additionally, he warned that if people don’t think about guardrails, the alternative is that “some other force, an AGI force, will do it itself. And once that happens, once you let that go, there’s no going back,” Diller said.