President Trump’s first round of executive orders included a dramatic shift in U.S. AI policy. On his first day in office, Trump repealed the cornerstone of former President Biden’s AI policy, EO 14110 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence). This EO both directed how the government understands and uses AI, and catalyzed regulatory action to give the government better access to data about the frontier. Today, we’re taking a look at what this repeal does and doesn’t affect, as well as what to watch for as the Trump administration solidifies its AI policy positions.
Biden’s AI EO created four distinct categories of new governance initiatives: standalone projects, guidelines, programs, and regulations. The EO’s standalone projects – like the NIST AI Risk Management Framework Generative AI Profile and new Department of Energy tools for permit processing – are already out in the world. These deliverables will likely survive the EO’s repeal, barring specific objections from the new administration. However, the latter three categories are more vulnerable to dissolution.
What happens now?
The general practice after a presidential transition is to pause most ongoing work from the prior administration for review. President Trump formalized this on Monday by issuing a regulatory freeze that bars agencies from proposing new rules, pauses published rules that have not yet gone into effect, and prevents previously submitted rules from being finalized, all pending review.
Trump expanded on how his administration will handle the outputs of Biden’s EO in his own executive order, entitled “Removing Barriers to American Leadership in Artificial Intelligence.” For a repealed EO, career staff generally expect to stop new work (such as taking on new programs), wind down anything found objectionable in the administration’s review, and resume normal operations only once a program has been reaffirmed by the new administration.
The President’s new order follows this formula closely. In it, he instructs his tech policy leadership to review and revoke or suspend outputs of Biden’s EO which do not “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” With this in mind, let’s walk through the likely state of some of the most important federal AI initiatives from the last four years.
U.S. AI Safety Institute (AISI)
While many people connect AISI to Biden’s EO – AISI’s launch announcement came a day after the order was released – it is not a direct output and was instead created within NIST under longstanding Commerce authorities. What ties them together is that AISI helps fulfill the EO’s directive to establish guidelines and support industry standards around AI safety, security, and trustworthiness. Importantly, AISI survives the repeal of the Biden EO, and its staff will continue their work unless and until President Trump directs the Commerce Department to change course. But because its underlying justification has been revoked, AISI will now likely be subject to intense scrutiny as the Trump administration finalizes its AI policy priorities.
With that said, the fact of the matter is that AISI remains immensely popular with a diverse array of AI industry, civil society, academic, and research groups. Last Congress, ARI and ITI led a coalition of more than 45 industry, civil society, nonprofit, university, trade association, and research laboratory groups supporting the authorization of AISI, including big industry names like Meta, Microsoft, OpenAI, Amazon, and Palantir. Lawmakers on both sides of the aisle have sponsored legislation to authorize and fund AISI.
While we expect some changes under the new administration (including, at the very least, a new name for the entity that focuses specifically on innovation and American leadership), we hope and anticipate that AISI – or an organization that replaces it – will continue to lead on developing best practices for AI. Simply eliminating the primary federal entity for testing and understanding frontier models, as well as helping promulgate best practices for development and deployment, would create a vacuum that degrades the nation’s ability to responsibly advance AI, harming both American industry and the public.
Frontier Model Reporting Requirements
The AI EO directed the Commerce Secretary to require AI companies to report information on their frontier AI models and large computing clusters. Under authority of the Defense Production Act (DPA) – a choice that stirred some controversy – the Commerce Secretary would collect information on things like company plans to train advanced “dual-use” models, test results from model red-teaming, and plans to acquire large-scale computing clusters.
In September 2024, Commerce’s Bureau of Industry and Security (BIS) proposed a rule establishing these requirements and called for public comment, but the Biden administration never published a final rule before leaving office. This means the proposed rule and its theoretical requirements are paused until they are reviewed by the new administration (a theme we will see several more times). Some Republicans in Congress have decried the use of the DPA to collect this information, saying it is a flawed application of the law, which signals that this rule is unlikely to survive in its current form.
Separate from these requirements, the Biden administration negotiated voluntary commitments from sixteen leading AI companies on safety and security, concerning things like independent security testing, information sharing, and watermarking. It remains to be seen whether companies will adhere to these commitments moving forward.
Reports on Foreign-Initiated Large AI Training Runs Using U.S. Hardware
The EO required the Commerce Secretary to create new reporting requirements for cloud computing services – specifically, U.S. Infrastructure as a Service (IaaS) providers. Under the proposed requirements, these providers would need to report whenever a foreign person attempts to use U.S. computing resources to train large AI models, as well as provide information on those persons’ identities.
As with the reporting requirements for frontier model development, the Biden administration proposed a rule in January 2024 and solicited comments, but never published anything final while in office. Since no final rule has been issued, Trump’s regulatory freeze prevents the rule, and its associated requirements, from advancing until the new administration reviews them.
National AI Research Resource (NAIRR) Pilot
The NAIRR pilot is an EO-mandated and National Science Foundation-led proof of concept for a nationally accessible infrastructure project to support AI research. The NAIRR pilot aims to democratize the production of AI safety, security, and trustworthiness research by providing relevant projects with computing power, data, models, and other necessary resources. While the repeal of the EO means the pilot loses its originating directive, it is unlikely the pilot will immediately shut down.
While we wait for the landing team to review the program, we expect staff to largely maintain the status quo. The pilot is slated to run until January 2026, so it is very possible that its work will proceed as planned for existing projects until then, but the pilot may not take on new projects or start new contracts.
OMB Guidance and the National Security Memo
The EO issued over a dozen directives for various agencies to develop guidance on AI-related topics, like safely developing AI tools for education, hiring, and housing application screening. Two of the most important memos that emerged were one from OMB on how the federal government can and should use AI and an interagency memo on how AI should be incorporated into national security systems.
While these documents remain in play, Trump’s regulatory freeze means that no final rules can be published based on them without the administration’s sign-off. On the national security front, much of the guidance relies on work from (and collaboration with) AISI, so this depends in part on that institution’s fate. The writing may already be on the wall for the OMB guidance, which includes restrictions on the federal government’s use of “rights-impacting” and “safety-impacting” AI – the type of framing the Trump administration has signaled it wants to move away from. Trump explicitly calls out the OMB guidance in his new EO, giving his OMB Director 60 days to revise the document to better fit his priorities.
What to Watch
While the repeal of the Biden EO signals a major shift in U.S. AI policy, its immediate impact is limited. It is possible that the Trump team will retain large elements of programs like AISI or the NAIRR pilot, but we likely will not know for several months. In the meantime, there are several key things to watch to better understand where we are headed:
- Trump’s Artificial Intelligence Action Plan – Trump’s new EO tasked his tech policy leadership to develop a plan to, again, “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” We’ll be tracking whatever form this takes.
- Howard Lutnick’s confirmation hearing – Trump’s nominee for Secretary of Commerce is scheduled for his confirmation hearing with the Senate Commerce, Science, and Transportation Committee on Wednesday, January 29th. The Biden administration ran a significant number of its biggest AI policy initiatives, such as AISI and its export control efforts, through Commerce, and Lutnick will likely be grilled on how much he intends to keep or change from these efforts. My colleague, Iskandar Haykel, recently published a blog post discussing exactly what Senators should ask Lutnick during his hearing.
- Michael Kratsios’s confirmation hearing – Trump’s former White House CTO and current nominee for Director of the Office of Science and Technology Policy should also provide significant insight into the administration’s thinking. Kratsios was responsible for much of the prior Trump administration’s AI policy, including helping craft an EO on top of which Biden’s own AI order was built.
- Changes to proposed Biden AI rules – in addition to the now-frozen proposed rules mentioned above, it will also be crucial to watch what Trump does with Biden’s other AI policy projects. If Trump explicitly rejects or reaffirms any of these rules, that will be very telling. Key things to watch include what happens to:
- Biden’s other AI EO, which was issued in January 2025 and aimed to expedite the buildout of AI data centers on federal land,
- Export controls on advanced AI chips going to China (of which there have been several rounds),
- Recently released controls on AI diffusion, which dramatically shifted U.S. policy on the export of AI technology, making most countries restricted-by-default.
- A more robust Trump AI EO – while Trump has not explicitly committed to releasing another order to replace Biden’s, it is one plausible output from the AI action plan mandated in Trump’s new EO. In general, the President will likely give his agencies more specific direction on where and how they should be using AI and supporting its development, so there will be more to come.