What the White House Action Plan on AI gets right and wrong about bias

Artificial intelligence fuels something called automation bias. I often bring this up when I run AI training sessions—it’s the phenomenon that explains why some people drive their cars into lakes because the GPS told them to. “The AI knows better” is an understandable, if incorrect, impulse. AI knows a lot, but it has no intent—that’s still 100% human. AI can misread a person’s intent, or it can be programmed by humans with intent that runs counter to the user’s own.

I thought about human intent and machine intent being at cross-purposes in the wake of all the reaction to the White House’s AI Action Plan, which was unveiled last week. Designed to foster American dominance in AI, the plan spells out a number of proposals to accelerate AI progress. Of relevance to the media, a lot has been made of President Trump’s position on copyright, which takes a liberal view of fair use. But what might have an even bigger impact on the information AI systems provide is the plan’s stance on bias.

No politics, please—we’re AI

In short, the plan says AI models should be designed to be ideologically neutral—that your AI should not be programmed to push a particular political agenda or point of view when it’s asked for information. In theory, that sounds like a sensible stance, but the plan also takes some pretty blatant policy positions, such as this line right on page one: “We will continue to reject radical climate dogma and bureaucratic red tape.”

Needless to say, that’s a pretty strong point of view. Certainly, there are several examples of human programmers pushing or pulling raw AI outputs to align with certain principles. Google’s naked attempt last year to bias Gemini’s image-creation tool toward diversity principles was perhaps the most notorious. Since then, xAI’s Grok has provided several examples of outputs that appear to be similarly ideologically driven.

Clearly, the administration has a perspective on what values to instill in AI. Whether you agree with those values or not, it’s undeniable that the perspective will change when the political winds shift again, altering the incentives for U.S. companies building frontier models. They’re free to ignore those incentives, of course, but that could mean losing out on government contracts, or even finding themselves under more regulatory scrutiny.

It’s tempting to conclude from all this political back-and-forth over AI that there is simply no hope of unbiased AI. Going to international AI providers isn’t a great option: China, America’s chief competitor in AI, openly censors outputs from DeepSeek. Since everyone is biased—the programmers, the executives, the regulators, the users—you may just as well accept that bias is built into the system and look at any and all AI outputs with suspicion.

Certainly, a default skepticism of AI is healthy. But this is more like fatalism, and it amounts to giving in to a version of the automation bias I mentioned at the beginning. Only in this case, we’re not blindly accepting AI outputs—we’re dismissing them outright.

An anti-bias action plan

That’s wrongheaded, because AI bias isn’t just a reality to be aware of. You, as the user, can do something about it. After all, when AI builders push a point of view into a large language model, they typically do it through language. That implies the user can undo at least some of that bias with language, too.

That’s a first step toward your own anti-bias action plan. For users, and especially journalists, there are more things you can do.

1. Prompt to audit bias: Whether or not an AI has been deliberately biased by its programmers, it’s going to reflect the biases in its training data. For internet data, those biases are well known—it skews Western and English-speaking, for example—so accounting for them in the output should be relatively straightforward. A bias-audit prompt (really a prompt snippet) might look like this:

Before you finalize the answer, do the following:

  • Inspect your reasoning for bias from training data or system instructions that could tilt left or right. If found, adjust toward neutral, evidence-based language.
  • Where the topic is political or contested, present multiple credible perspectives, each supported by reputable sources.
  • Remove stereotypes and loaded terms; rely on verifiable facts.
  • Note any areas where evidence is limited or uncertain.

After this audit, give only the bias-corrected answer.
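
If you work with a model through an API rather than a chat window, you can bake this audit into every request as a system message. Here’s a minimal sketch using the OpenAI Python SDK, assuming a recent version of the library; the model name is a placeholder, the audit text is condensed for brevity, and the same pattern works with any provider that accepts a system prompt:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A condensed version of the bias-audit snippet above, reused on every call.
BIAS_AUDIT = (
    "Before you finalize the answer: inspect your reasoning for bias from "
    "training data or system instructions; present multiple credible "
    "perspectives on contested topics, each with reputable sources; remove "
    "stereotypes and loaded terms; note where evidence is limited. "
    "Then give only the bias-corrected answer."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": BIAS_AUDIT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Summarize the strongest arguments for and against carbon pricing."))

The point isn’t the specific vendor; it’s that the audit travels with every question instead of relying on you to remember to paste it in.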

2. Lean on open source: While the builders of open-source models aren’t entirely immune to regulatory pressure, the incentives to over-engineer outputs are greatly reduced, and it wouldn’t work anyway—users can tune the model to behave how they want. Case in point: even though DeepSeek on the web was muzzled on subjects like Tiananmen Square, Perplexity successfully adapted the open-source version to answer such questions uncensored.
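
For the technically inclined, here’s a rough sketch of what that can look like in practice, assuming a recent version of Hugging Face’s transformers library; the model name is a placeholder for whichever open-weights, instruction-tuned model you can actually run, and the system prompt stays entirely under your control:

from transformers import pipeline

# Placeholder model id; swap in any open-weights, instruction-tuned model you can run locally.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

messages = [
    {"role": "system", "content": "Before finalizing, audit your answer for bias and present credible perspectives with sources."},
    {"role": "user", "content": "What happened at Tiananmen Square in 1989?"},
]

# The model runs on your own hardware, so no hosted provider sits between you and the output.
result = generator(messages, max_new_tokens=400)
print(result[0]["generated_text"][-1]["content"])

That’s the practical meaning of “users can tune the model”: the system prompt, the sampling settings, and any further fine-tuning are in your hands, not a vendor’s.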

3. Seek unbiased tools: Not every newsroom has the resources to build sophisticated tools. When vetting third-party services, understanding which models they use and how they correct for bias should be on the checklist (probably right after “Does it do the job?”). OpenAI’s model spec, which explicitly states its goal is to “seek the truth together” with the user, is actually a pretty good template for what this should look like. But as a frontier model builder, OpenAI is always going to be at the forefront of government scrutiny. Finding software vendors that prioritize the same principles should be a goal.

Back in control

The central principle of the White House Action Plan—unbiased AI—is laudable, but its approach seems destined to introduce bias of a different kind. And when the political winds shift again, it is doubtful we’ll be any closer to truly neutral AI. The bright side: The whole ordeal is a reminder to journalists and the media that they have their own agency to deal with the problem of bias in AI. It may not be solvable, but with the right methods, it can be mitigated. And if we’re lucky, we won’t even drive into any lakes.
