
New Trump EO Bans Govt Purchases of AI Systems That Promote DEI, Woke Ideologies

“When ideological biases or social agendas are built into AI models, they can distort the quality and accuracy of the output.”

President Donald Trump signed an executive order on Wednesday titled, “Preventing Woke AI in the Federal Government.” The order states, “Americans will require reliable outputs from AI, but when ideological biases or social agendas are built into AI models, they can distort the quality and accuracy of the output.”

The new directive prohibits the federal government from purchasing any AI systems that default to anti-white or extreme ideologies. The EO singles out “diversity, equity, and inclusion” (DEI) as one of the most “pervasive and destructive of these ideologies.”

The order cites one major AI model that “changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy.”

Another AI model refused to produce images celebrating the achievements of white people, even while complying with the same request for people of other races. In yet another case, an AI model asserted that a user should not “misgender” another person even if necessary to stop a nuclear apocalypse.

The order continues: “While the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”

Conservative activist and senior fellow at the Manhattan Institute, Christopher Rufo, sums up the new EO nicely in a social media post: “Incredible: Trump has issued an executive order saying that the government will not purchase AI systems that default to black George Washington, refuse to celebrate the achievements of white people, or argue that misgendering someone is worse than a nuclear apocalypse.”

This action will no doubt upset the woke among us, who will label the order “controversial” and likely claim that Trump is trying to censor artificial intelligence. However, the goal of the directive is to ensure that AI systems used by the government are grounded in factual, unbiased data — not ideological narratives.

As the example below illustrates, models trained on woke or historically inaccurate information will inevitably produce distorted conclusions. In the world of AI, as in life, bad information in means bad information out.

Shortly after its launch (and yes, in AI terms, 18 months is practically an eternity), Google Gemini was asked whether George Washington was white or black. Rather than offering a straightforward answer, it responded that his racial identity was a “complex and nuanced topic,” and followed up with a lengthy explanation.

When Grok was asked the same question ten minutes later, it responded: “George Washington was white.”

CNBC reported that Google responded to this embarrassing episode by “pull[ing] its Gemini AI image generation feature, saying it offered ‘inaccuracies’ in historical pictures.” I’ll say it did!

And six months later, Google introduced a new version.

While Google Gemini has no doubt sorted out some of its initial problems, this example demonstrates how easily bias can infiltrate the tools we depend on, shaping perceptions and decisions in ways we may not even notice.

When AI reflects distorted priorities or ideological slants — even a little bit — the consequences ripple far beyond a single search result or image generation. That’s why efforts to keep publicly funded AI systems grounded in accuracy and balance are not just reasonable — they’re essential.


Elizabeth writes commentary for Legal Insurrection and The Washington Examiner. She is an academy fellow at The Heritage Foundation. Please follow Elizabeth on X or LinkedIn.


Comments

E Howard Hunt | July 25, 2025 at 7:20 am

The Google AI system does not even generate a biased answer through its normal generative process when addressing hot-button social questions. I got the model to admit that it uses canned, off-the-shelf, preplanned answers to such questions. It claimed this ruse is necessary to ensure safety and battle harmful stereotypes and misinformation.

    henrybowman in reply to E Howard Hunt. | July 25, 2025 at 2:24 pm

    I loved the way it blithely parrots “Race is a social construct, not a biological reality” right after citing DNA evidence for the possibility that GW had a black ancestor.
    Hell, we ALL had black ancestors. Not the point.

I approve of the goal but it’s nearly impossible to achieve given current training methods, which tend to magnify existing biases in the training data.

    henrybowman in reply to irv. | July 25, 2025 at 2:26 pm

    The big hurdle is the meta-question. How does Donald Trump believe he will ever successfully be able to determine that a two-year-old AI system has harmful biases, when he has shown that he’s not capable of doing the same with a human being who has a 40-year permanent record?

destroycommunism | July 25, 2025 at 10:03 am

With the school systems programming AI lefty bias into the kids, not sure how we are going to avoid this… EO or not.

This is one of those times it appears better to rely on natural stupidity than artificial “intelligence.” Black Washington?

FelixTheCat | July 25, 2025 at 1:10 pm

Washington is black in that AI-generated image because the AI that made it was trained to think racism is worse than nuclear war by, you guessed it, sh*tlibs.

    Paula in reply to FelixTheCat. | July 25, 2025 at 1:33 pm

    A far cry from the irrelevant list: “Victim or perpetrator, if your number’s up we’ll find you.” The AI is now programmed so that if the victim is black or perpetrator is white we’ll release your name and picture on the evening news, otherwise crickets.

    Semper Why in reply to FelixTheCat. | July 25, 2025 at 1:44 pm

    I did see a claim that the reason we saw the black George Washington was because the interface to the AI automatically inserted diversity language into the prompt. What you asked for wasn’t what you got. What you got was a response to your prompt after the woke website developers modified your prompt in the name of diversity.

    That kind of silliness is easy to fix. The misgendering is worse than the holocaust stuff… That’s a garbage source problem.

      Paula in reply to Semper Why. | July 25, 2025 at 2:26 pm

      Willie Nelson used to sing, “I am my own grandpa.”

      In the new AI version of reality black George Washington can sing, “I am my own master.”

George_Kaplan | July 26, 2025 at 12:19 am

Was George Washington African? Well obviously. According to the ‘Out of Africa’ school of dreaming we’re all of African ancestry – some of us just have ancestors that also trace back to Europe and/or Asia. Perhaps Washington needs to be reclassified as EuroAfrican-American? 😇