Trump’s ‘anti-woke AI’ order could reshape how US tech companies train their models

When DeepSeek, Alibaba, and other Chinese companies released their AI models, Western researchers quickly noticed that they dodged questions critical of the Chinese Communist Party. US officials later confirmed that these tools were built to reflect Beijing’s talking points, raising concerns about censorship and bias.
American AI leaders such as OpenAI have pointed to this as justification for advancing their technology quickly, without too much regulation or oversight. As OpenAI’s chief global affairs officer Chris Lehane wrote in a LinkedIn post last month, there is a competition between “US-led democratic AI and Chinese-led autocratic AI.”
An executive order signed Wednesday by President Donald Trump, which bans “woke AI” and AI models that aren’t “ideologically neutral” from government contracts, could disturb that balance.
The order singles out diversity, equity, and inclusion (DEI), calling it a “pervasive and destructive” ideology that “can distort the quality and accuracy of the output.” Specifically, the order cites information about race or sex, manipulation of racial or sexual representation, critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.
Experts warn it could create a chilling effect on developers, who may feel pressure to align model outputs and datasets with White House rhetoric in order to protect federal dollars for their cash-burning businesses.
The order comes on the same day the White House published Trump’s “AI Action Plan,” which shifts national priorities away from societal risk and focuses instead on building out AI infrastructure, cutting red tape for tech companies, shoring up national security, and competing with China.
The order instructs the director of the Office of Management and Budget, together with the administrator for Federal Procurement Policy, the administrator of General Services, and the director of the Office of Science and Technology Policy, to issue guidance to other agencies on how to comply.
“Once and for all, we are getting rid of woke,” Trump said on Wednesday during an AI event hosted by the All-In Podcast and the Hill & Valley Forum. “I will be signing an order banning the federal government from procuring AI technology that has been infused with partisan bias or ideological agendas, such as critical race theory, which is ridiculous. And from now on, the US government will deal only with AI that pursues truth, fairness, and strict impartiality.”
Determining what is impartial or objective is one of many challenges to the order.
Philip Seargeant, senior lecturer in applied linguistics at The Open University, told WAN that nothing can ever be objective.
“One of the fundamental tenets of sociolinguistics is that language is never neutral,” Seargeant said. “So the idea that you can ever get pure objectivity is a fantasy.”
Moreover, the Trump administration’s ideology does not reflect the beliefs and values of all Americans. Trump has repeatedly sought to cut funding for climate initiatives, education, public broadcasting, research, social services, community and agricultural support programs, and gender-affirming care, often framing these initiatives as examples of “woke” or politically biased government spending.
“Anything [the Trump administration doesn’t] like is immediately tossed into this pejorative pile of woke,” said Rumman Chowdhury, a data scientist, CEO of the tech nonprofit Humane Intelligence, and former US science envoy for AI.
The definitions of “truth-seeking” and “ideological neutrality” in the order published Wednesday are vague in some ways and specific in others. While “truth-seeking” is defined as LLMs that “prioritize historical accuracy, scientific inquiry, and objectivity,” “ideological neutrality” is defined as LLMs that are “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.”
Those definitions leave room for broad interpretation, as well as potential pressure. AI companies have pushed for fewer restrictions on how they operate. And while an executive order doesn’t carry the force of legislation, frontier AI companies could still find themselves subject to the shifting priorities of the administration’s political agenda.
Last week, OpenAI, Anthropic, Google, and xAI signed contracts with the Department of Defense to receive up to $200 million each to develop agentic AI workflows that address critical national security challenges.
It’s unclear which of these companies is best positioned to gain from the woke AI ban, or whether they will comply at all.
WAN has reached out to each of them and will update this article if we hear back.
Despite its own displays of bias, xAI may be the company most aligned with the order, at least at this early stage. Elon Musk has positioned Grok, xAI’s chatbot, as the ultimate anti-woke, “less biased,” truth-seeker. Grok’s system prompts have directed it to avoid deferring to mainstream authorities and media, to seek out contrarian information even if it’s politically incorrect, and even to reference Musk’s own views on controversial topics. In recent months, Grok has also spewed antisemitic comments and praised Hitler on X, alongside hateful, racist, and misogynistic posts.
Mark Lemley, a law professor at Stanford University, told WAN that the executive order is “clearly intended as viewpoint discrimination, since [the government] just signed a contract with Grok, aka ‘MechaHitler.’”
In addition to xAI’s DOD funding, the company announced that “Grok for Government” has been added to the General Services Administration schedule, meaning xAI products are now available for purchase across every government office and agency.
“The right question is this: would they ban Grok, the AI they just signed a large contract with, because it has been deliberately designed to give politically charged answers?” Lemley said in an email interview. “If not, it is clearly designed to discriminate against a particular viewpoint.”
As Grok’s own system prompts have shown, model outputs can reflect both the people building the technology and the data the AI is trained on. In some cases, an overabundance of caution among developers, and AI trained on internet content that promotes values like inclusivity, has led to distorted model outputs. Google, for example, came under fire last year after its Gemini chatbot depicted a Black George Washington and racially diverse Nazis, which Trump’s order cites as an example of DEI-infected AI models.
Chowdhury says her biggest fear with this executive order is that AI companies will actively rework training data to toe the party line. She pointed to statements from Musk a few weeks before launching Grok 4, saying that xAI would use the new model and its advanced reasoning capabilities to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors. Then retrain on that.”
This would ostensibly put Musk in the position of judging what is true, which could have huge downstream implications for how information is accessed.
Of course, companies have been making decisions about what information is seen and not seen since the dawn of the internet.
Conservative David Sacks, the entrepreneur and investor whom Trump appointed as AI czar, has been vocal about his concerns over “woke AI” on the All-In Podcast, which co-hosted Trump’s day of AI announcements. Sacks has accused the makers of prominent AI products of infusing them with left-wing values, framing his arguments as a defense of free speech and a warning against a trend toward centralized ideological control in digital platforms.
The problem, experts say, is that there is no one truth. Achieving unbiased or neutral results is impossible, especially in today’s world, where even facts are politicized.
“If the results that an AI produces say that climate science is correct, is that left-wing bias?” Seargeant said. “Some people say you have to give both sides of the argument to be objective, even if one side of the argument has no standing.”




