
A roadmap for AI, if anyone will listen

While Washington’s fallout with Anthropic has exposed the complete lack of coherent rules for artificial intelligence, a bipartisan coalition of thinkers has produced something the administration has thus far refused to: a framework for what responsible AI development should actually look like.

The Pro-human statement was completed before last week’s Pentagon-Anthropic standoff, but the collision of the two events was not lost on anyone involved.

“Something very remarkable has happened in America in the last four months,” Max Tegmark, the MIT physicist and AI researcher who helped organize the effort, said in conversation with this editor. “Suddenly polls [are showing] that 95% of all Americans are against an unregulated race to superintelligence.”

The newly published document, signed by hundreds of experts, former officials and public figures, begins with the no-nonsense observation that humanity is at a fork in the road. One path, which the statement calls “the race to replace,” leads to people being displaced first as workers and then as decision makers, while power accrues to unaccountable institutions and their machines. The other leads to AI that vastly increases human potential.

The latter scenario depends on five key pillars: putting humans in charge, avoiding the concentration of power, protecting the human experience, preserving individual freedom, and holding AI companies legally accountable. Among the more forceful provisions are an outright ban on the development of superintelligence until there is scientific consensus that it can be done safely and with genuine democratic buy-in; mandatory disconnect switches on high-power systems; and a ban on architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.


The publication of the statement coincides with a period when its urgency is much easier to understand. On the last Friday of February, Defense Secretary Pete Hegseth labeled Anthropic — whose AI already runs on classified military platforms — as a “supply chain risk” after the company refused to grant the Pentagon unrestricted use of its technology, a label normally reserved for companies with ties to China. Hours later, OpenAI struck its own deal with the Defense Department, a deal that legal experts say will be difficult to enforce in any meaningful way. What it all laid bare is how costly Congress’s inaction on AI has become.

As Dean Ball, a senior fellow at the Foundation for American Innovation, told The New York Times then: “This isn’t just a dispute over a contract. This is the first conversation we’ve had as a country about control of AI systems.”


Tegmark reached for an analogy that most people could understand when we spoke. “You never have to worry that a drug company is going to put another drug on the market that will cause enormous harm before people figure out how to make it safe,” he said, “because the FDA won’t let them release anything until it’s safe enough.”

Fights in Washington rarely generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point that will likely break the current impasse. Indeed, the statement calls for mandatory testing before deploying AI products – especially chatbots and companion apps aimed at younger users – covering risks including increased suicidal ideation, exacerbation of mental health problems, and emotional manipulation.


“If a creepy old man texts an 11-year-old pretending to be a young girl and tries to get that child to commit suicide, that man could go to jail for it,” Tegmark said. “We already have laws. It’s illegal. So why is it any different if a machine does it?”

He believes that once the principle of pre-release testing for children’s products is introduced, the scope will almost inevitably be expanded. “People will come along and say, let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.”

It’s no small feat that former Trump adviser Steve Bannon and Susan Rice, President Obama’s national security adviser, signed the same document — along with former Joint Chiefs Chairman Mike Mullen and progressive faith leaders.

“What they agree on, of course, is that they are all human,” said Tegmark. “When it comes to whether we want a future for humans or a future for machines, they are of course on the same side.”
