President Trump’s War On “Woke AI” Is A Civil Liberties Nightmare

The White House's recently unveiled "AI Action Plan" wages war on so-called "woke AI," including large language models (LLMs) that provide information inconsistent with the administration's views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racially and gender-biased content and even hate speech. The reproduction of this bias is a pernicious problem that AI developers have struggled to solve for over a decade.
A new executive order called "Preventing Woke AI in the Federal Government," released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration's ideological agenda.
The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported "ideological biases" like "diversity, equity, and inclusion." This heavy-handed censorship will not make models more "accurate or trustworthy," as the Trump Administration claims; it is a blatant attempt to censor the development of LLMs and to restrict them as a tool of expression and information access.
While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they wouldn't otherwise, and those changes often roll down to users. Doing so would impact the 60 percent of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases, making the models much less accurate and far more likely to cause harm, especially in the hands of the government.
Less Accuracy, More Bias and Discrimination
It's no secret that AI models, including gen AI, tend to discriminate against racial and gender minorities. AI models use machine learning to identify and reproduce patterns in the data they are "trained" on. If the training data reflects biases against racial, ethnic, and gender minorities, which it often does, then the AI model will "learn" to discriminate against those groups. In other words, garbage in, garbage out. Models also often reflect the biases of the people who train, test, and evaluate them.
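To make "garbage in, garbage out" concrete, here is a minimal, purely illustrative Python sketch. The groups, "qualification" scores, and hiring bar are all invented for this example (they come from no real model or dataset), but they show how a system fit to biased historical decisions reproduces that bias.

```python
# A minimal, purely illustrative sketch of "garbage in, garbage out":
# a model fit to biased historical decisions reproduces that bias.
# All numbers and groups here are synthetic assumptions, not real data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying "qualification" scores.
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B
qualification = rng.normal(0.0, 1.0, size=n)

# The historical decisions we train on were biased: group B was held
# to a higher bar than group A for the same qualifications.
bar = np.where(group == 0, 0.0, 0.8)
hired = (qualification > bar).astype(int)

# A deliberately simple "model": predict each group's historical hire rate.
rate_a = hired[group == 0].mean()
rate_b = hired[group == 1].mean()
print(f"predicted hire rate, group A: {rate_a:.2f}")   # roughly 0.50
print(f"predicted hire rate, group B: {rate_b:.2f}")   # roughly 0.21

# The model faithfully reproduces the disparity baked into its training
# data, even though the two groups are identically qualified.
```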
This is true across different types of AI. For example, "predictive policing" tools trained on arrest data that reflects overpolicing of Black neighborhoods frequently recommend heightened levels of policing in those neighborhoods, often based on inaccurate predictions that crime will occur there. Generative AI models are also implicated. LLMs already recommend more criminal convictions, harsher sentences, and less prestigious jobs for people of color. Although people of color account for less than half of the U.S. prison population, 80 percent of Stable Diffusion's AI-generated images of inmates have darker skin. Over 90 percent of AI-generated images of judges were men; in real life, 34 percent of judges are women.
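The predictive policing feedback loop works the same way. Here is a hypothetical toy simulation, with invented crime rates and patrol allocations rather than any real vendor's algorithm, showing how a tool that follows recorded arrests keeps sending patrols back to whichever neighborhood was overpoliced to begin with.

```python
# A purely hypothetical sketch of the feedback loop described above:
# patrols are allocated based on past recorded arrests, and recorded
# arrests in turn depend on where patrols are sent. The rates and
# counts are invented; this is not any real system's algorithm.
import numpy as np

true_crime_rate = np.array([0.05, 0.05])  # two neighborhoods, identical true rates
patrol_share = np.array([0.70, 0.30])     # neighborhood 0 starts out overpoliced

for year in range(5):
    # Arrests get recorded only where officers are present to record them.
    recorded_arrests = true_crime_rate * patrol_share * 10_000
    # A naive "predictive" tool allocates next year's patrols in
    # proportion to last year's recorded arrests.
    patrol_share = recorded_arrests / recorded_arrests.sum()
    print(f"year {year}: recorded arrests = {recorded_arrests.astype(int)}, "
          f"patrol share = {np.round(patrol_share, 2)}")

# Even though the true crime rates are identical, the tool keeps directing
# most patrols (and therefore most future recorded arrests) to the
# neighborhood that was overpoliced in the first place.
```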
These models aren't just biased; they're fundamentally incorrect. Race and gender aren't objective criteria for deciding who gets hired or convicted of a crime. Those discriminatory decisions reflect trends in the training data that could be caused by bias or chance, not some "objective" reality. Setting fairness aside, biased models are just worse models: they make more mistakes, more often. Efforts to reduce bias-induced errors will ultimately make models more accurate, not less.
Biased LLMs Cause Serious Harm, Especially in the Hands of the Government
But inaccuracy is far from the only problem. When government agencies start using biased AI to make decisions, real people suffer. Government officials routinely make decisions that impact people's personal freedom and access to financial resources, healthcare, housing, and more. The White House's AI Action Plan calls for a massive increase in agencies' use of LLMs and other AI, while all but requiring the use of biased models that automate systemic, historical injustice. Using AI simply to entrench the way things have always been done squanders the promise of this new technology.
We need strong safeguards to prevent government agencies from procuring biased, harmful AI tools. In a series of executive orders, as well as its AI Action Plan, the Trump Administration has rolled back the already-feeble Biden-era AI safeguards. This makes AI-enabled civil rights abuses far more likely, putting everyone's rights at risk.
And the Administration could easily exploit the new rules to pressure companies into making publicly available models worse, too. Healthcare companies, landlords, and other corporations increasingly use AI to make high-impact decisions about people, so more biased commercial models would also cause harm.
We have argued against using machine learning to make predictive policing decisions or other punitive judgments for just these reasons, and we will continue to protect your right not to be subject to biased government determinations influenced by machine learning.
Originally published to the EFF's Deeplinks blog.