Picture this: You ask an AI to show you images of judges, and it depicts only 3 percent of them as women, even though 34 percent of federal judges are women. Or imagine an AI that’s more likely to recommend harsh criminal sentences for people who use expressions rooted in Black vernacular cultures. Now imagine that same AI instructed to ignore climate impacts or to treat Russian propaganda as credible information.
This isn’t science fiction. These bias problems are happening right now in existing AI systems. And under President Trump’s new artificial intelligence policies, all of these problems could get much worse, while potentially handing U.S. tech leadership to China.
The Trump administration’s AI Action Plan, released alongside executive orders on July 23, 2025, doesn’t just strip federal AI guidelines of bias protections. It calls for eliminating references to diversity, climate science, and misinformation from the National Institute of Standards and Technology’s AI Risk Management Framework, a document that has become one of the most widely used AI governance guidelines in the world.
The administration demands that AI models used by the federal government be “objective and free from top-down ideological bias.” But there’s a catch: This standard comes from an administration whose leader made 30,573 documented false or misleading claims during his first term, according to Washington Post fact-checkers. The result could be AI systems that ignore climate science, amplify misinformation, and become so unreliable that global customers choose Chinese alternatives instead.
AI Isn’t Actually “Neutral”
The irony runs deep. While claiming to eliminate bias, Trump’s policies could embed it even more firmly into the AI systems that increasingly shape American life — from hiring decisions to law enforcement to health care.
Research shows that AI bias can actually be worse than real-world bias. When Bloomberg tested an AI image generator on common occupations, the results were stark: Prestigious, higher-paid professionals appeared almost exclusively as white and male, while lower-paid workers were depicted as women and people of color. The AI’s racial and gender sorting exceeded the differences that actually exist in our world.
The AI depicted fast food workers, for example, with darker skin tones 70 percent of the time, even though in reality 70 percent of fast food workers in the United States are white.
The consequences go far beyond images. Research published in Nature found that large language models were significantly more likely to suggest that people using African American speaking styles should get less prestigious jobs, be convicted of crimes, and even be sentenced to death.
“All of the language models that we examined have this very strong covert racism against speakers of African American English,” said University of Chicago linguist Sharese King.
The Grok Problem
Some of the most extreme examples of AI bias have come from Elon Musk’s AI chatbot Grok, which has described South African policies as “white genocide,” a belief it says it was “instructed by my creators” to accept.
Grok has also praised Hitler, suggested Holocaust-like responses would be “effective” against hatred toward white people, referred to itself as “MechaHitler,” and posted sexually explicit commentary.
Despite these outbursts, the White House remained silent about whether such errors should disqualify models from federal contracts. In fact, just a couple of months after reports of Grok’s Nazi rants went public, Musk’s company xAI received a Department of Defense contract for up to $200 million. Grok, along with AI models from other companies, will be used for “intelligence analysis, campaigning, logistics and data collection,” according to Defense News. xAI says it has addressed the coding that led to the earlier outbursts.
“What the president’s executive order may very well do is undercut efforts to eliminate bias, despite the fact that it’s purporting to require objectivity and fairness,” said Cody Venzke, senior ACLU policy counsel.
Climate Science Under Attack
The administration isn’t just targeting bias protections. It also calls for eliminating references to climate science from AI risk assessments and for ignoring climate impacts in data center development.
“We need to build and maintain vast AI infrastructure and the energy to power it,” the White House said. “To do that, we will continue to reject radical climate dogma and bureaucratic red tape. Simply put, we need to ‘Build, Baby, Build!’”
But training and deploying AI is itself contributing to the climate crisis. A typical AI-focused data center consumes as much energy as 100,000 households, according to the International Energy Agency, and the largest data centers currently under construction are projected to consume 20 times as much.
These data centers also guzzle water — 560 billion liters annually, according to Bloomberg. Two-thirds of the water for data centers built since 2022 comes from areas already experiencing water stress.
On the same day the administration announced its new AI policies, it also released a climate analysis that downplays global warming impacts, a report widely criticized for cherry-picking data and contradicting reputable scientific research.
The Misinformation Wild West
The new Trump policy also removes “misinformation” as a risk factor from the federal AI risk framework. This comes at a time when research shows misinformation is becoming a serious problem for AI systems.
A new study by Yale’s Jeffrey A. Sonnenfeld and former USA Today editor Joanne Lipman found that AI systems often rely on the most popular responses, not the most accurate ones. “Verifiable facts can be obscured by mountains of erroneous information and misinformation,” they wrote.
Those “mountains of misinformation” are growing fast. A Russian propaganda operation called Pravda, which shares the name of the old Soviet newspaper, has published more than 3 million articles per year across 150 domains in more than 46 languages since the invasion of Ukraine began. The strategy appears to be working: In a test conducted by NewsGuard, 10 major language models repeated false claims from the pro-Kremlin network 33 percent of the time.
Even reputable news organizations have been tripped up and forced to issue embarrassing corrections. AI has botched facts as simple as Tiger Woods’ PGA Tour win total and the chronological order of the Star Wars films, according to Sonnenfeld and Lipman. When the Los Angeles Times attempted to use AI for opinion pieces, the tool described the Ku Klux Klan as “white Protestant culture” reacting to “societal change,” not as the hate-driven movement it actually is.
The Stakes Keep Getting Higher
AI engineer and former Google researcher Deb Raji warned in a tweet that changes to the federal AI risk framework “will have consequences that I don’t think many people understand.”
As AI systems become more widespread in hiring, law enforcement, health care, and government services, the impacts of misguided policies grow more serious. Rather than addressing the technical and societal factors that create discriminatory outcomes, Trump’s policy eliminates oversight while demanding “neutrality” from systems trained on inherently biased data.
Meanwhile, technology companies are incentivized to ignore climate science, both in the training of their models and in the construction of the data centers that make AI function.
Trump’s AI Action Plan aims to make U.S. models the international standard and boost exports of U.S. technology. But there’s a fundamental flaw in this strategy: If MAGA ideology gets baked into these models, customers outside Trump’s political sphere may be less interested in buying U.S.-based AI. Instead, China’s open-source models could gain the upper hand in global markets.
The question isn’t whether AI systems should be objective — they absolutely should be. But Trump’s crusade against “woke AI” doesn’t create neutrality. If major AI companies comply with these plans, we could see existing biases supercharged and climate reality distorted, just when the planet desperately needs real science and real solutions.
These policies could systematically disadvantage marginalized communities and make established science harder to access, while undermining the U.S.’s technological leadership globally.
The ultimate irony? Policies that purport to eliminate bias and ideology may instead embed American AI systems with toxic biases that make them unreliable — handing the advantage to models from China and elsewhere, and undermining one of the AI Action Plan’s key goals.
As AI reshapes society, adopting a politically defined version of “truth” could have devastating consequences for both American democracy and American technological leadership. Along with attempts to impose political litmus tests on journalists, educators, health care providers, and scientists, Trump’s AI Action Plan could usher in an age not of artificial intelligence, but of ignorance.