The Trump administration uses AI to fulfill its typical antisocial ends: eroding worker power, violating civil liberties, and reducing transparency and accountability.
Trump and executives from OpenAI, Google, Oracle, Apple, Microsoft, Meta, and other tech companies gathered in the White House to “unite to power AI dominance” and, apparently, offer sycophantic quotes praising the president for a White House press release. The surplus of em-dashes raises the question: whose chatbot wrote the copy?
While nearly 7 million Americans marched in “No Kings” protests on Saturday against Trump’s lawlessness, the president reposted an AI-generated video of himself in a fighter jet, wearing a crown and dumping massive quantities of feces on protesters below.
True to character, Trump’s most common uses of generative AI tend to be crude, offensive, or otherwise disturbing. The president also recently reposted racist AI-generated caricatures of Chuck Schumer and Hakeem Jeffries, and a parody video casting shadow president Russ Vought as the grim reaper of the government shutdown. But Trump’s embrace of artificial intelligence goes well beyond his own, well, shitposting. While agencies are embedding AI models in their operations, the Trump administration is also working its way down Silicon Valley’s deregulatory wishlist.
Tech companies willing to play along with Trump’s requirements that AI models used by the government be scrubbed of “woke” ideology have been rewarded with significant policy concessions. Trump has issued executive orders revoking Biden-era directives for “safe, secure, and trustworthy” development and use of AI, accelerating federal permitting of new AI data centers, and supporting AI exports. The White House also released an AI Action Plan outlining how the administration will create favorable conditions for the AI industry, including by rejecting “radical climate dogma and bureaucratic red tape” to (hang on to your hat, Abundance readers): “Build, Baby, Build!”
We’ll be writing more soon about how the Trump administration is fueling Big Tech’s energy-intensive vision for “AI dominance.” The focus of this newsletter is how the administration is following through on another promise of its AI Action Plan: accelerating AI adoption in government. We continue to make regular updates to our tracker documenting uses of AI by the Trump administration, particularly to replace human workers, power surveillance and deportations, reduce transparency of government actions, and limit the public’s opportunities to hold the government accountable.
Replacing Workers with Chatbots
Across federal agencies, chatbots and other AI tools are being employed as feeble substitutes for experienced civil servants who have been fired or have left in droves over the last nine months. Since Trump took office, the government has lost over 148,000 employees. Former Palantir employee Gregory Barbaccia, now the U.S. Federal Chief Information Officer, admitted that AI was “100%” how the government planned to compensate for lost staff.
The General Services Administration, which is responsible for government procurement, has been approving AI chatbots for federal agencies to use from several tech companies, including OpenAI, Anthropic, Google, and Meta. In September, GSA agreed to let agencies use Grok, the chatbot of Elon Musk’s company xAI, despite Grok frequently spouting racist and antisemitic comments. Musk’s DOGE fired huge swaths of GSA employees, while the employees who kept their jobs were given access to an internal chatbot to “allow us to do more with less.” GSA’s IT director said that the agency would pursue an “AI-first strategy.” GSA is now seeking to rehire many of its fired employees, in an apparent quiet capitulation to the value of human work.
Expanding efforts that began during the Biden administration, several agencies are rolling out chatbots for internal use. Customs and Border Protection has its own “chatCBP,” while its parent agency, the Department of Homeland Security, has “DHSchat.” The Department of State has StateChat, which leverages tools from Palantir, Microsoft, and OpenAI, and is helping decide who sits on the panels that choose which State Department employees to promote.
The Defense Department is also incorporating bots into its everyday operations. In September, the Undersecretary of Defense for Research and Engineering said that “We want to have an AI capability on every desktop — 3 million desktops — in six or nine months.” The AI tools would be used “for intelligence and for warfighting.” The announcement follows a notable deal earlier this year to procure AI bots for simulating war plans and operations, with tech from Scale AI, Anduril, and Microsoft. OpenAI’s ChatGPT also recently got an eyebrow-raising shoutout from a top military commander: “Chat and I” have become “really close lately,” Major General William Taylor told reporters.
Defense contracts are quickly becoming a lucrative and steadying pillar of the AI ecosystem. As my colleague Timi Iwayemi told Truthout earlier this year, “Tech folks have realized that government contracts are a very smart way to secure long-term [financial] sustainability.” While several commercial AI companies are partnering with defense tech firms to seek government contracts, they’re also securing deals of their own. The DoD contracted with Anthropic, Google, OpenAI, and xAI for up to $200 million each to develop AI bots for “key national security missions.” The Air Force is also offering 50-year leases on five bases for tech companies to build AI data centers.
Violating Civil Liberties
Before Trump and Musk’s bombastic public fallout, Musk was feeding government data into several AI models, including his own Grok, Meta’s Llama, and models hosted on Microsoft Azure. DOGE used AI to survey the emails of federal workers and sift through sensitive data, likely violating several privacy and security laws. In one of its most egregious violations of privacy rights, DOGE uploaded more than 300 million Americans’ Social Security information to an unprotected cloud server.
The Trump administration is also wielding AI to monitor the daily lives of legal and undocumented immigrants in the United States. The State Department’s AI-powered social media surveillance program, “Catch and Revoke,” monitors the political opinions of visa holders and other foreign nationals legally present in the country. ICE has used AI-powered tools contracted to local law enforcement to scan license plates in search of deportation targets. The Department of Homeland Security has a new AI-powered platform called ImmigrationOS, developed by Palantir, that supports deportation raids by consolidating information and documentation in one place and incorporating sensitive census and IRS data. Acting ICE Director Todd Lyons described his vision for AI-powered deportations as “Like Prime, but with human beings.” Salesforce, whose CEO Marc Benioff recently set fire to his reputation by suggesting Trump send troops into San Francisco, is now attempting to follow Palantir’s path to profit by pitching its AI tools to help ICE rapidly scale up its ranks of masked goons.
Letting The Bots Decide
Government agencies have increasingly used AI tools to replace human discretion at the cost of accuracy and accountability. DOGE tapped an undergraduate at the University of Chicago to use AI to identify where the Department of Housing and Urban Development had potentially “overreached” in its housing regulations, and deployed a faulty AI tool that misread contract amounts to eliminate thousands of Veterans Affairs contracts. The Social Security Administration has been experimenting with using AI to analyze disability claims, raising the stakes of faulty determinations for vulnerable Americans. The Food and Drug Administration used an error-prone large language model developed by Deloitte to try to accelerate the scientific review process. And as my colleague Julian Scoffield previously highlighted, RFK Jr.’s “MAHA Commission” drew condemnation earlier this year for publishing a report with made-up, AI-generated citations.
With generative AI so rapidly reshaping the workplace and the information ecosystem, it’s difficult to predict what new roles large language models will be given in even a few years. But as Big Tech has decisively aligned itself with Trump’s unprincipled vision of “AI dominance,” we can expect these companies to pursue a riskier and more antisocial path for AI than they might otherwise have. Whether that embrace eventually proves fatal remains to be seen.
This newsletter was originally published on our Substack. Read and subscribe here.
Want more? Check out some of the pieces that we have published or contributed research or thoughts to in the last week:
The Republican Plot to Destroy Education Research
Tracking the Environmental Harms of Trump Actions
New report shows oil and gas influence runs deep in Trump administration
Offshoring For Me, Not For Thee, Says MAGA
Nonprofits Warn of Potential TVA Privatization Ahead of Board Hearings