As the Trump administration continues to expand the use of artificial intelligence (AI) in its operations, the Revolving Door Project has decided to catalogue examples of where and how AI is being deployed. This tracker focuses particularly on uses of AI in the federal executive branch to replace federal workers and undermine transparency and due process.
Last updated on January 20, 2026.
Government-Wide
The Trump administration is deploying artificial intelligence to synthesize and analyze sensitive data about Americans, incorporating AI into the provision of essential public services, and replacing fired workers with AI across federal agencies.
New York Times: Trump Taps Palantir To Compile Data On Americans | 5/30/25
- As Palantir continues to expand its influence within the federal government, the Trump administration has given the company license to surveil Americans. In a chilling report, The New York Times notes that the company is already creating “detailed portraits of Americans based on government data,” with the administration already seeking “access to hundreds of data points on citizens and others through government databases, including their bank account numbers, the amount of their student debt, their medical claims and any disability status.”
In The Public Interest: AI – What is it good for? | 6/18/25
- Analysis by In The Public Interest highlights the dangers of relying on AI to dispense public services. Looking at cities’ and states’ use of AI for “gunshot detection,” “drop-out detection,” and “eligibility services,” Shahrzad Habibi found that these imprecise systems regularly harm the constituents they were designed to help. Take, for example, ShotSpotter, the shoddy gun detection program that has led to 40,000 unfounded police deployments primarily targeting communities of color in Chicago. In one case, use of this AI-powered tech led to the wrongful imprisonment of Michael Williams for 11 months. When it comes to determining individuals’ eligibility for government programs like Medicaid, automated systems have also spat out wrong decisions, denying people much-needed care and coverage. Habibi’s analysis is a sharp reminder that most AI systems remain incapable of adequately performing the key public services civil servants across the country provide daily.
Nextgov/FCW: Trump administration hopes AI can mitigate staffing losses, federal CIO says | 8/18/25
- Former Palantir employee Gregory Barbaccia, now chief information officer for the U.S. government, said that AI was “100%” how the Trump administration was planning to compensate for the loss of laid off and resigning government employees. Over 148,000 civil servants have left the federal workforce since Trump took office.
CNN: US government launches ‘Tech Force’ to hire AI talent | 12/15/25
- The Office of Personnel Management launched a “US Tech Force” program to create hiring pipelines for artificial intelligence and technology professionals. The program will run for two years and plans to hire 1,000 people to place at agencies across the government. OPM is partnering with Big Tech companies, including Microsoft, Amazon, Meta, and xAI, to administer the program by holding speaker events and job fairs and providing mentorships. As we saw through DOGE’s actions, the administration is keen on using artificial intelligence to gut federal employment, contracts, and regulations. This program could indicate an effort to further embed and institutionalize the type of personnel that filled DOGE’s ranks.
Immigration
The Trump administration is using AI to surveil immigrants’ real-time movements, comb through social media data, analyze license plate information, and assist the Customs and Border Protection workforce, among other uses. Companies including Palantir and Salesforce are jumping at the opportunity to support the Trump administration’s deportation activities with their AI-powered products.
- The State Department’s new policy of stripping foreign nationals of their legal immigration status for political dissent will ramp up with its new social media surveillance program, “Catch and Revoke.” Under the guise of combating antisemitism and terrorism, the AI-powered program will sift through the social media data of 33 million people touched by the US immigration system to deter First Amendment-protected activity against the Trump administration. The uptick of AI-powered surveillance in the US adds to concerns of rising authoritarianism and the violation of civil liberties.
404 Media: ICE Taps into Nationwide AI-Enabled Camera Network, Data Shows | 5/27/25
- Immigration and Customs Enforcement (ICE) is expanding its immigration crackdown by using tools contracted to local law enforcement without publicly disclosing their use. According to data reviewed by 404 Media, more than 4,000 immigration-focused license plate searches have already been performed by local and state police on behalf of the federal government, aiding the Trump administration’s push for mass deportations.
NPR: Former Palantir workers condemn company’s work with Trump administration | 5/25/25
- Former Palantir employees wrote a letter in which they spoke out against the company’s partnership with the Trump administration to aid deporting 1 million immigrants this year. The letter’s signees believe Palantir has violated the ethics requirements in its code of conduct, which said its software would protect the vulnerable and develop AI responsibly. The letter also raised concerns about Big Tech’s complicity in undermining democratic principles, calling out Palantir’s violent rhetoric and normalization of authoritarianism. Palantir, cofounded by Trump ally and JD Vance mentor Peter Thiel, has contracts with the US and Israeli militaries, and has seen its stock’s value increase by more than 200 percent since Trump won the 2024 election.
FedScoop: Customs and Border Protection taps ‘chatCBP’ to assist workforce | 5/30/25
- Customs and Border Protection (CBP) will begin using an AI-powered chatbot to provide the same type of generative AI assistance as the popular ChatGPT. The CBP chatbot is touted to improve efficiency and offers features that help workers complete routine tasks, like document summaries and analysis. Previously, OpenAI’s ChatGPT was used for the same purposes, but DHS reversed that policy once an in-house tool was developed.
- The Department of Homeland Security is rolling out a new AI-powered platform, ImmigrationOS, that consolidates a host of tools in order to allow the agency to aggressively ramp up deportations. According to CNN, the platform allows DHS to “approve raids, book arrests, generate legal documents and route individuals to deportation flights or detention — all in one place.” It also incorporates data from the IRS and the census, as well as counter-terrorism tools like Suspicious Activity Reports and transactions flagged by the Bank Secrecy Act. ImmigrationOS was developed by Palantir as part of a $30 million contract with the agency.
New York Times: Salesforce Offers Its Services to Boost Trump’s Immigration Force | 10/16/25
- According to a New York Times examination of internal documents and messages from Salesforce, the company is seeking a contract to support Immigration and Customs Enforcement (ICE) in rapidly increasing its staff to undertake more deportation raids on immigrants around the country.
- Agents from ICE and CBP deployed to Chicago are using an app called Mobile Fortify to scan people’s faces and determine whether they are citizens. ICE stores the scans in the app, even when the individual is a US citizen, and the photos can be retained for 15 years. ICE does not allow people to decline to be scanned. 404 Media also reported that CBP developed another facial recognition app called Mobile Identify.
- ICE used an artificial intelligence tool to fast-track applicants with law enforcement experience, requiring only four weeks of online training rather than the usual eight weeks of in-person training. The tool, however, mistakenly sent people with no previous experience into the fast-tracked “LEO program” if their resumes contained the word “officer.”
DOGE
DOGE staffers have utilized artificial intelligence to “munch” contracts at the VA, identify housing regulations to weaken, incorporate Elon Musk’s AI chatbot into government functions to analyze sensitive data, review emails from federal workers, and more.
WIRED: DOGE Put a College Student in Charge of Using AI to Rewrite Regulations | 4/30/25
- DOGE tapped an undergraduate at the University of Chicago, Christopher Sweet, to use artificial intelligence to rewrite Department of Housing and Urban Development (HUD) regulations. According to WIRED, Sweet produced an Excel spreadsheet containing policy areas where the AI program flagged potential regulatory “overreach” by HUD.
ProPublica: DOGE Developed Error-Prone AI Tool to “Munch” Veterans Affairs Contracts | 6/6/25
- DOGE deployed a faulty AI tool developed by Sahil Lavingia to “munch” contracts at the Department of Veterans Affairs. Lavingia’s hastily built tool, which relied on “outdated and inexperienced AI models,” identified over 2,000 contracts for munching, often based on misreadings of contract sizes and data. In fact, the tool wrongly determined that over a thousand contracts, some worth just $35,000, each had a $34 million valuation. Other “munched” contracts included a gene sequencing device used to develop cancer treatments and a tool used to measure and improve nursing care.
Australian Financial Review: Musk pushes Grok AI on US government, raising ethics issues | 5/25/25
- While still in the apparent good graces of President Trump, Musk had DOGE utilize his AI chatbot Grok to analyze government data. According to the Financial Review, the xAI CEO had used a customized version of the company’s chatbot to sift through data at the Department of Homeland Security with no departmental approval, potentially “violat[ing] security and privacy laws.” In 2024, five secretaries of state (Michigan’s Jocelyn Benson, Minnesota’s Steve Simon, Pennsylvania’s Al Schmidt, Washington’s Steve Hobbs, and New Mexico’s Maggie Toulouse Oliver) sent a letter to Musk pleading with the billionaire to make changes to Grok, highlighting its tendency to produce inaccurate information.
WIRED: DOGE Used a Meta AI Model to Review Emails From Federal Workers | 5/22/25
- According to WIRED, DOGE affiliates at OPM were using AI to surveil the emails of federal workers who engaged in conduct that the administration would view as “disloyal.” “Materials viewed by WIRED show that DOGE affiliates within the Office of Personnel Management (OPM) tested and used Meta’s Llama 2 model to review and classify responses from federal workers to the infamous ‘Fork in the Road’ email that was sent across the government in late January.” Surveilling government workers’ emails for conduct that might rebut the Trump agenda is one of the most egregious examples of DOGE’s misuse of AI. While it is not surprising that fealty is the quality Donald Trump prizes most, going after rank-and-file government workers is an escalation of his autocratic politicization of the civil service.
Defense Department
The Defense Department is expanding its lucrative contracts with AI companies including Anthropic, Google, OpenAI, Palantir, and xAI, and seeking to get AI “on every desktop” in the military, while gutting the staffing of its internal office that tests AI systems’ safety. It is also allowing AI agents to play a role in planning and operations for complex military activities, despite widespread accuracy issues with generative AI models.
The Register: Pentagon to give AI agents a role in decision making, ops planning | 3/5/25
- In March 2025, the Pentagon inked a deal with scandal-ridden Scale AI to aid in military planning and operations. Scale AI’s bots will carry out tabletop war-gaming to simulate war plans. Given the widespread issues with inaccuracy and “hallucinations” across generative AI tools, the Pentagon could be relying on low quality and even false data to make complex military readiness decisions.
- In May 2025, as part of Project Maven, the Pentagon raised its contract ceiling with tech giant Palantir to nearly $1.3 billion through 2029. Palantir has already become deeply entrenched in the Trump administration; beyond marking a sharp increase in AI use within the military, the deal shows the growing influence of the company and its co-founder Peter Thiel.
MIT Technology Review: The Pentagon is gutting the team that tests AI and weapons systems | 6/10/25
- Defense Secretary Pete Hegseth’s plan to slash the office that has been described as “the last gate before technology gets to the field” would facilitate department-wide use of AI tools and systems that have not been thoroughly evaluated. First established by Congress in the 1980s, the Office of the Director of Operational Test and Evaluation has been pivotal in advocating for the reduction of unknown variables through rigorous testing, providing critical oversight of the “operational testing and evaluation of new systems before a decision is made to begin full-rate production.”
Defense News: Pentagon taps four commercial tech firms to expand military use of AI | 7/15/25
- The Defense Department has contracted with four of the biggest AI firms, Google, xAI, Anthropic and OpenAI, to develop “agentic AI workflows for key national security missions.” Each contract will be worth up to $200 million.
Nextgov/FCW: Pentagon research official wants to have AI on every desktop in 6 to 9 months | 9/16/25
- Emil Michael, the Undersecretary of Defense for Research and Engineering, said at a Politico event in mid-September that “We want to have an AI capability on every desktop — 3 million desktops — in six or nine months … We want to have it focus on applications for corporate use cases like efficiency, like you would use in your own company … for intelligence and for warfighting.”
Business Insider: Even Top Generals Are Looking to AI Chatbots for Answers | 10/13/25
- Major General William ‘Hank’ Taylor, commanding general of the 8th Army, told reporters at the annual Association of the United States Army conference in October 2025 that he has been experimenting with generative AI chatbots to inform his professional decision-making. “Chat and I” have become “really close lately,” he said.
Air and Space Forces: Air Force Bases to Host AI Data Centers on Unused Land | 10/15/25
- The Air Force is offering space at five bases in Tennessee, California, New Jersey, Arizona and Georgia for tech companies to build artificial intelligence data centers. It intends to select applicants in January of 2026 and to offer these leases for up to fifty years.
Axios: U.S. military to use Google Gemini for new AI platform | 12/9/25
- The Department of Defense partnered with Google to deploy a generative AI tool for all Pentagon employees to use on “unclassified work.” The tool will also be used to “analyze intelligence, model and simulate conflict.” The deal is the latest in a slew of Pentagon contracts with artificial intelligence companies to integrate AI into the agency’s work.
Energy Department
The Energy Department is making federal lands available for data center construction by private companies, incorporating AI models into the agency’s research efforts, and backing deregulatory moves to get government ‘out of the way’ of Big Tech’s AI expansion, while spearheading the increased use of AI tools throughout the government.
FedScoop: Energy secretary: Government should ‘get out of the way’ to fuel AI race | 5/8/25
- Likening AI to the atomic bomb, Energy Secretary Chris Wright said the federal government needs to get out of the way of AI innovation. Speaking before the House Appropriations Committee, the former fracking executive claimed that for AI to develop, government must step aside so the industry’s energy needs can be met. AI data centers’ hunger for energy is poised to raise electric bills all over the country, while draining fresh water supplies from the places that need them most.
Nextgov: Energy selects 16 sites for AI data center construction, new energy development | 4/3/25
- In April 2025, the Energy Department announced it would move forward with the Trump administration’s plan to offer public lands to build AI data centers. DOE has selected 16 locations that are “uniquely positioned for the construction of data centers ready to process the large volumes of compute needed for AI applications.” According to Kenza Bryan of the Financial Times, AI data centers are “massively contributing to the continued rise in power demand, which itself contributes to the continued rise in global emissions.” In fact, MIT scientists have estimated that the power needed to run data centers grew from 2,688 megawatts at the end of 2022 to 5,341 megawatts by the end of 2023, and project the sector’s electricity consumption to reach 1,050 terawatt-hours by the end of 2026.
Nextgov: OpenAI brings its large language models to Energy’s national labs | 1/31/25
- OpenAI inked a deal with the Department of Energy giving its national laboratories access to OpenAI’s large language models to support their research efforts. In addition to the immense amount of fresh water that AI data centers use to keep servers cool, OpenAI’s ChatGPT uses about 564 MWh of electricity per day.
FedTech Magazine: What DOE’s FASST Initiative Means for AI Technologies | 5/28/25
- The Department of Energy is leading a government program that would build one of the most powerful artificial intelligence systems in the world. The Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative will leverage the data and resources of the DOE’s 17 National Laboratories to enhance the nation’s AI capabilities, with a focus on using the technology to address national security and energy challenges and drive scientific discovery. FASST has four main goals: transforming the DOE’s large datasets into a high-quality repository for training AI; building AI-enabled supercomputing platforms and infrastructure; training and testing AI models to predict emergent behaviors while maintaining privacy; and applying FASST AI to spaces that lack private sector investment.
New York Times: Energy Dept. Unveils Supercomputer That Merges With AI | 5/5/25
- The Department of Energy announced that its Lawrence Berkeley National Laboratory had chosen Dell Technologies to deliver its new supercomputer. Dubbed the “Doudna,” the new machine will use Nvidia processors for resource intensive tasks like training AI models and genomics research. The DOE’s most recent supercomputer, “El Capitan,” cost upwards of $2 billion to develop and build. The Trump administration has yet to release a funding target or price for the “Doudna,” though its increased specs over its predecessor will likely push the final price tag higher as well.
Department of Energy: Speed to Power Initiative | 9/18/25
- The Department of Energy announced its Speed to Power Initiative to “accelerate the speed of large-scale grid infrastructure” in order for the US to “win the global artificial intelligence (AI) race.” The DOE put out a Request for Information asking industry stakeholders for input on how to utilize funds and authorities to expand energy production and transmission to support AI data centers while maintaining grid reliability. Possible suggestions the DOE provided include financial incentives and streamlining of environmental review and permitting processes. The department launched the initiative in response to its own report in July that claimed the US would suffer from a massive increase in blackouts by 2030 if coal and gas plants continue to be retired. That report, however, was criticized by experts as it “aggressively downplays the potential for new wind, solar, batteries, and gas” and was “designed around the worst-case assumptions.”
General Services Administration
The General Services Administration, which is laying off a substantial number of its employees, is seeking to replace their work with a chatbot. It is also coordinating government agencies’ procurement of AI tools from companies including OpenAI, Anthropic, Google, Meta, and xAI.
Nextgov: GSA to ‘quadruple’ in size to centralize procurement across the government | 5/3/25
- As part of the push to gut the federal civil service, General Services Administration staff who still have their jobs will now have access to a chatbot intended to replace the work of their fired colleagues. The GSA is an independent agency responsible for supporting the basic functions of the federal government, including the procurement of real estate, technology, and government contracts.
Politico: AI Launches Across the Government | 8/14/25
- The US General Services Administration (GSA) launched a new platform that allows its employees to use artificial intelligence in their work. The platform, USAi, has access to AI models from OpenAI, Anthropic, Google, and Meta. GSA is also allowing other government agencies to purchase AI models from Anthropic, Google, and OpenAI. OpenAI and Anthropic are only charging agencies $1 for access over the next year, but, as Politico notes, “it also gives the two multi-billion dollar companies a first-mover advantage that could entrench their models within the government, possibly at the cost of smaller competitors and new entrants.” It’s a concerning development as the Trump administration openly looks to replace civil servants with AI models while AI giants could stand to make billions of dollars in future government contracts.
Reuters: Meta’s AI system Llama approved for use by US government agencies | 9/22/25
- The General Services Administration approved Meta’s artificial intelligence program Llama for government use, adding the tool to its list of approved AI tools for federal agencies. Meta’s competitors, including Amazon Web Services, Anthropic, Google, Microsoft, and OpenAI, have also had their AI tools approved at steep discounts for government use.
Wall Street Journal: Trump Administration Agrees to Use AI Models From Musk’s xAI | 9/25/25
- The GSA announced that federal agencies will have access to Grok 4 and Grok 4 Fast, two AI models developed by Elon Musk’s xAI. The agencies will pay only 42 cents for access to the programs, mirroring similar deals that gave federal agencies access to other AI models for a dollar or less. Musk and xAI have been forced to temporarily pause and update the chatbot after instances in which it engaged in racism and antisemitism. In May, the chatbot responded to questions with Holocaust denial and claims of a white genocide in South Africa. In July, Elon Musk rolled out an “improved” version of the chatbot; within days, Grok was referring to itself as MechaHitler and spewing antisemitism before being updated once again.
Social Security Administration
The Social Security Administration is utilizing a chatbot to fill in the gaps left by fired employees, and experimenting with an AI tool to analyze and flag disability claims despite the flaws of various models.
Nextgov: SSA is rolling out a new chatbot for employees (and slashing staff) | 5/17/25
- According to internal plans, SSA intends to roll out a new chatbot to fill the gaps left by thousands of staff cuts. The chatbots are supposed to help with content creation, research, and coding. Notably, the chatbot the agency plans to deploy was not trained on any SSA data, nor was it trained to interact with users. Journalists at outlets such as MSNBC and CNET who have tested the chatbot have come to a similar conclusion: it’s an inefficient tool that leads users to talk in circles. CNET’s Blake Stimac chronicled his experience with the tool, emphasizing its failure to grasp his point about lower payments.
Newsweek: Social Security Announces Major AI Rollout | 3/14/25
- In an apparent attempt to modernize its technology, SSA announced plans to implement the “Hearing Recording and Transcriptions” (HeaRT) system, which seeks to improve the efficiency and accuracy of hearing recordings and transcriptions. The system will be used both remotely and in person. SSA boldly claimed 500,000 customers will benefit from the new system per year.
ADA: How AI and Technology Are Changing Social Security Disability Claims | 3/21/25
- Despite the flaws of various models, SSA has experimented with using AI to analyze disability claims, identify patterns, and flag applications for further review. Relying on AI tools to expedite processing still leaves room for improper denials to continue. According to Gianelli & Morris, “if the AI is trained on historical claims data that reflects patterns of improper denials, it may continue to deny valid claims in similar circumstances. Errors in data processing or programming can also lead to incorrect denials, leaving policyholders to bear the consequences of flawed technology.”
Health and Human Services
Health and Human Services, which has published a strategic plan for utilizing AI in healthcare contexts and filled several AI-related positions at the agency, has already published apparently AI-generated information riddled with falsehoods.
- AI experts said that a report released by HHS was partially generated by AI, resulting in inaccuracies and even invented studies. The “Make Our Children Healthy Again” report, written to address America’s lagging health outcomes, included 522 footnotes to scientific research, but investigation showed that at least 37 appeared multiple times, 21 hyperlinks were dead, and at least one cited study appeared to be entirely fabricated. Some referenced URLs included the phrase “oaicite,” a definitive sign that the citations were generated with OpenAI’s tools. The report drew criticism from lawmakers and added another black mark to HHS Secretary RFK Jr.’s already abysmal record on health science and research.
Healthcare Dive: HHS lays out strategic plan for healthcare AI | 1/14/25
- The HHS released a strategic plan laying out a road map for AI oversight in healthcare, as healthcare executives turn to this enticing emerging technology to stretch the industry’s understaffed and overwhelmed workforce. The plan covers the use of AI in medical research, drug and medical device development, healthcare delivery, social services, and public health. Its main objective is to coordinate a safe public-private framework that improves the quality and accessibility of healthcare while mitigating the dangers that inaccurate or biased AI tools pose to patient health.
Akin Gump: FDA and HHS Appoint AI Chiefs | 5/8/25
- The HHS designated Peter Bowman-Davis as acting chief AI officer to support the use of AI and keep it in line with the Trump administration’s broader AI policies. Bowman-Davis is a Yale undergraduate who has previously worked at venture capital firm Andreessen Horowitz, whose founders endorsed Trump during the 2024 campaign. The Centers for Medicare & Medicaid Services Deputy Administrator noted that the HHS will likely use generative AI to analyze and draft regulations, a policy that has already led to poorly researched reports.
- The National Institutes of Health is seeking input from the healthcare industry on the agency’s AI strategy and on the use of AI in biomedical research, public health, and clinical support. The NIH is also seeking feedback on how best to promote public-private cooperation on AI development and testing, with a particular focus on “data readiness, trust, translation and workforce.”
404 Media: HHS Asks All Employees to Start Using ChatGPT | 9/9/25
- HHS employees received an email from leadership informing them that ChatGPT was available for use through their government logins. The rollout was managed by Clark Minor, a former Palantir employee and DOGE agent who burrowed into HHS as Chief Information Officer. Though HHS directed employees to “be skeptical of everything you read, watch for potential bias, and treat answers as suggestions” (at that point, why even use it?), using ChatGPT could misinform employees, as large language models have been shown to “generate health disinformation” and “amplify human bias” in healthcare. The agency will continue integrating AI into its work, with plans to use AI at the Centers for Medicare and Medicaid Services to determine whether beneficiaries receive the treatments prescribed by their doctors.
- Trump signed an executive order that includes grants for using artificial intelligence in childhood cancer research. The order also directed the Department of Health and Human Services to work with the White House Office of Science and Technology Policy to develop ways to use AI in childhood cancer clinical trials. This order comes after Trump and DOGE decimated funding for scientific research, including funding for pediatric cancer research. There is also a morbid irony to expanding AI use for cancer research given, as we’ve written before, the carcinogenic impacts of the fossil fuels that power the immense, growing energy consumption of AI.
The American Prospect: The Stealth Assault on Medicare | 9/18/25
- The Centers for Medicare and Medicaid Services is implementing a pilot program to introduce preapprovals for Medicare services in six states. The program, which is set to begin in January 2026, will use artificial intelligence and machine learning in the review and authorization process. CMS claimed the program is voluntary, but if clinicians in the pilot states refuse to use the preapproval process, the care will be subjected to a medical review and clinicians risk not getting paid. A 2022 HHS Inspector General report found that prior authorization for Medicare Advantage beneficiaries resulted in delay or denial of care that met Medicare coverage requirements. The program is rolling out in Washington, New Jersey, Oklahoma, Ohio, Texas, and Arizona.
Food and Drug Administration
The Food and Drug Administration is eagerly embracing AI tools, including an error-prone large language model used to aid in the research and review process, despite agency staff concerns that the tool was buggy and inaccurate.
Ars Technica: FDA rushed out agency-wide AI tool—it’s not going well | 6/5/25
- The FDA deployed an error-prone, Deloitte-built large language model (LLM) called Elsa to aid in scientific review. The FDA’s goal is to accelerate the research and review process, but despite the millions of dollars Deloitte has been paid to develop the software, it’s still very much in beta. According to NBC News, when staff tested the tool “with questions about FDA-approved products or other public information, it provided summaries that were either incorrect or only partially accurate.”
Education Department
The White House has directed the Education Department to prioritize grants for AI instruction. Musk’s allies in the Education Department are considering replacing contract workers who field thousands of questions per day from student borrowers with an AI chatbot.
White House: Advancing Artificial Intelligence Education For American Youth | 5/23/25
- In April 2025, President Trump signed an executive order that directed the Department of Education to provide grants to help educators understand and better instruct students on AI. In June, 68 companies signed the “Pledge to America’s Youth,” a public-private partnership to help fund this initiative. Some of the companies that signed the pledge included Meta, Adobe, Intel, Google, Amazon, OpenAI, Microsoft, and Salesforce.
New York Times: Musk’s Staff Proposes Bigger Role for AI Bot In Education Department | 2/13/25
- Musk’s DOGE proposed replacing some of the contract workers who work with students and parents with AI chatbots, part of the administration’s move to shrink the federal workforce. This is particularly noteworthy since the call centers where these contractors work field an average of 15,000 questions per day from student borrowers.
Federal Housing Finance Agency (FHFA)
Under the direction of FHFA Director Bill Pulte, Fannie Mae began using artificial intelligence to monitor for mortgage fraud through a contract with Palantir, and Pulte has expressed interest in expanding AI uses at the agency.
- Fannie Mae used an AI-generated voice of President Trump to narrate a new ad promoting an “all new Fannie Mae.” It’s not yet clear which AI company produced the AI voice, but ABC reported that it was done with Trump’s permission.
NBC Philadelphia: Palantir teams up with Fannie Mae in AI push to sniff out mortgage fraud | 5/28/25
- Fannie Mae agreed to a deal with defense contractor Palantir to use artificial intelligence to detect mortgage fraud. FHFA Director Bill Pulte left the door open to further collaboration with the company, as well as other AI companies. In August, Slate reported that Pulte’s financial disclosures revealed holdings of $15,001–$50,000 in Palantir stock.