Who decides what AI tells you? Campbell Brown, once Meta’s news chief, has thoughts

Campbell Brown has spent her career in pursuit of accurate information, first as a renowned TV journalist and then as Facebook’s first and only dedicated news chief. Now, watching AI change the way people consume information, she sees history in danger of repeating itself. This time, she isn’t waiting for someone else to solve the problem.
Her company, Forum AI — which she recently discussed with TechCrunch’s Tim Fernholz at a StrictlyVC evening in San Francisco — evaluates how foundation models perform on what she calls “high-stakes topics”: geopolitics, mental health, finance, hiring — areas where “there are no clear yes-or-no answers, where it’s murky, nuanced and complex.”
The idea is to recruit the world’s leading experts, have them design benchmarks, and then train AI judges to evaluate models at scale. For Forum AI’s geopolitical work, Brown has recruited Niall Ferguson, Fareed Zakaria, former Secretary of State Tony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity policy in the Biden administration. The goal is to get AI judges to roughly 90% agreement with those human experts, a threshold Forum AI says it has been able to reach.
Brown traces the origins of Forum AI, founded seventeen months ago in New York, to a specific moment. “I was at Meta when ChatGPT was first released publicly,” she recalls, “and I remember realizing shortly afterwards that this is going to be the funnel through which all the information flows. And that’s not going to be good.” The consequences for her own children made the moment feel almost existential. “My kids are going to be really stupid if we don’t figure out how to fix this,” she recalled thinking.
What frustrated her most was that accuracy didn’t seem to be anyone’s priority. Foundation model companies, she said, are “extremely focused on coding and math,” while news and information are harder to get right. But harder, she argued, does not mean optional.
When Forum AI began evaluating the leading models, the findings were less than encouraging. She cited Gemini sourcing from Chinese Communist Party websites “for stories that have nothing to do with China,” and noted a left-leaning political bias in nearly all the models. More subtle failures abound as well, she said, including missing context, missing perspectives, and unacknowledged strawman arguments. “There is still a long way to go,” she said. “But I also think there are some very simple solutions that would greatly improve the results.”
Brown spent years at Facebook watching what happens when a platform optimizes for the wrong things. “We failed at a lot of the things we tried,” she told Fernholz. The fact-checking program she built no longer exists. The lesson, even if social media never absorbed it, is that optimizing for engagement has been bad for society and has left many people less informed.
Her hope is that AI can break that cycle. “It could go either way at this point,” she said; companies could give users what they want to hear, or they could “give people what is real, what is honest, and what is true.” She acknowledged that the idealistic version of that — AI that optimizes for truth — might sound naive. But she thinks enterprise demand may be the unlikely ally here. Companies that use AI for credit decisions, lending, underwriting, and hiring are worried about liability, and “they’re going to want you to optimize to get it right.”
That enterprise demand is where Forum AI is focusing its business, although converting compliance concerns into consistent revenue remains a challenge, especially given that much of today’s market is still satisfied with checkbox audits and standardized benchmarks that Brown sees as inadequate.
The compliance landscape, in her telling, is “a joke.” When New York City passed the first law requiring bias audits of AI hiring tools, the state comptroller found that more than half of the audited systems had violations that went undetected. Real evaluation, she said, requires domain expertise to probe not just known scenarios but also the edge cases that “can get you into trouble that people don’t think about.” And that work takes time. “Smart generalists won’t cut it.”
Brown — whose company emerged from stealth last fall with a $3 million round led by Lerer Hippeau — is well positioned to describe the gap between the AI industry’s self-image and the reality for most users. “You hear from the leaders of the big tech companies: ‘This technology is going to change the world,’ ‘it’s going to put you out of work,’ ‘it’s going to cure cancer,’” she said. “But for a normal person who just uses a chatbot to ask basic questions, they still get a lot of sloppy and wrong answers.”
Trust in AI is at a low point, and she thinks that skepticism is often justified. “There’s pretty much one conversation happening in Silicon Valley, and there’s a whole other conversation happening among consumers.”