On Wednesday, U.S. President Donald Trump finally released the Artificial Intelligence Action Plan. While the plan’s various components should each be judged on their own merits, one section aimed at pursuing “objective truth rather than social engineering agendas” ironically stands out as a social engineering agenda itself. This group of policy actions draws lines around which values are acceptably “American,” rejecting AI models perceived to be biased by liberal “wokeism” or Chinese communism.
The AI Action Plan is the manifestation of an executive order Trump signed during his first week in office called “Removing Barriers to American Leadership in Artificial Intelligence.” This executive order directed Trump’s technology and national security leads — currently Michael Kratsios, David Sacks and Marco Rubio — to develop a plan for American AI dominance that would achieve “human flourishing, economic competitiveness, and national security.”
Domino Theory previously reported on Trump’s peculiar use of the phrase “human flourishing” in this first executive order. “Human flourishing” here can be interpreted as secular, but it also connotes conservative Christian ideals through its historical association with Thomas Aquinas and its contemporary association with conservative Christian thinker Adrian Vermeule. Aquinas and Vermeule adopted the concept — originally from Aristotle — into Christian theology and political thought, arguing that just laws and governance should promote Christian morals.
The AI Action Plan, particularly the subsection on free speech, is a clear realization of Domino Theory’s prediction that the Trump administration’s use of the phrase “human flourishing” signaled an approach to AI that would be proactively ideological.
Of the three policy actions aimed at ensuring that “Frontier AI Protects Free Speech and American Values,” two focus on countering “woke” bias in AI. Specifically, they instruct the National Institute of Standards and Technology to remove mention of “misinformation, Diversity, Equity, and Inclusion, and climate change” from its AI governance framework and to update government procurement rules to ensure that contracted LLM systems are “objective and free from top-down ideological bias.” The third focuses on evaluating Chinese AI models for ideological alignment with the Chinese Communist Party.
The “free speech” subsection of the AI Action Plan is bolstered by an executive order released the same day titled “Preventing Woke AI in the Federal Government.” The executive order specifies that government procurements of AI must ensure that the LLMs are “truth-seeking” and “ideologically neutral.” According to the Trump administration, the “incorporation” of the following concepts into AI systems goes beyond the bounds of ideological neutrality: critical race theory, transgenderism, unconscious bias, intersectionality, systemic racism and discrimination on the basis of race or sex. The vagueness of the word “incorporation” means that the government could potentially object to a wide array of things, including the use of academic articles about critical race theory to train models, or outputs that are sympathetic to “woke” ideas, promote inclusive language or filter language deemed offensive.
Wherever the Trump administration decides to draw the line, these new standards will likely impact the world’s biggest AI companies, which have consistently been criticized by Trump and his allies for being too woke. Now Trump has a considerable carrot: federal contracts. The U.S. government spends hundreds of millions of dollars a year on AI-related procurements. Just in the past month, OpenAI, Google, Anthropic and Musk’s xAI have been awarded contracts of up to $200 million each to help the U.S. Defense Department adopt AI workflows.
For the Trump administration, it seems that “neutrality” means correcting a perceived ideological imbalance that has tilted too far toward the left. Whether the reader considers this goal righteous or not, the inconsistency in Trump’s “ideologically neutral” approach is that it is inherently ideological. This is reaffirmed by the phrasing of the third policy action in the “free speech” subsection, which instructs the National Institute of Standards and Technology to evaluate “frontier models from the People’s Republic of China for alignment with Chinese Communist Party talking points and censorship.” From a Western perspective, this phrasing might seem benign. But there is a difference between claiming that an AI model contains false information and claiming that it aligns with Chinese party doctrine.
This ideological framing raises the concern that Chinese AI models will be judged not by whether the information they output is false or misleading, but by whether it echoes Chinese talking points. Case in point: a CCP-aligned answer to a question about China’s development successes might emphasize that the Chinese government has lifted millions of people out of poverty — a common Chinese government talking point, but one that is still factually accurate.
The Trump administration’s heightened sensitivity to ideological alignment — both domestically and abroad — might also lead to inflexibility in assessing Chinese AI models’ proclivity for CCP-aligned censorship. Earlier this month, Reuters reported that U.S. State and Commerce Department officials are already working in tandem to assess how Chinese AI models adapt their outputs to the Chinese Communist Party line. Their testing showed that “Chinese AI tools were significantly more likely to align their answers with Beijing’s talking points than their U.S. counterparts” and that DeepSeek frequently used Beijing’s boilerplate language when asked politically sensitive questions.
While this research is certainly important, it’s worth noting that China leads in open-source AI development. This means that Chinese AI heavyweights like DeepSeek can be run locally, eliminating the layer of censorship that exists at the application level (e.g., when you use the DeepSeek app in your web browser). In light of the Reuters report, it’s not yet clear whether the U.S. government is testing locally run versions of DeepSeek. An investigation by Wired found that when run locally, DeepSeek is significantly less likely to provide straightforward, Communist Party-aligned responses.
Kevin Xu, author of the Interconnected newsletter, thinks that Chinese LLMs know all the right answers; it’s mainly in post-training — the process of optimizing an LLM for specific tasks — and at the application level that censorship is baked in. American companies like Perplexity are making adjustments to DeepSeek R1, which they can do because the model is open source, to eliminate the censorship bias introduced during post-training.
Furthermore, it can be difficult to determine when a particular output has been shaped by bias or censorship. Take, for example, research that Jordan Schneider’s ChinaTalk team did a couple of years ago comparing several Chinese AI chatbots to OpenAI’s GPT-4. ChinaTalk found that Baidu’s Ernie Bot and GPT-4 gave similar responses to a prompt asking for advice on how to organize a strike — both were reserved in their answers, not outright objecting to mobilization but offering alternative solutions given China’s protest laws.
Delineating between ideology and truth is evidently a precarious mission for the Trump administration. The AI Action Plan and the accompanying executive order threaten to draw overly strict boundaries around what is considered acceptable speech, both domestically and internationally. The Trump administration needs to decide whether it is actually committed to free speech.