As the Trump Administration makes its first moves on AI policy, the Commerce Department will likely remain a nerve center for AI policymaking over the next four years. In addition to housing NIST and the AI Safety Institute (AISI), Commerce recently announced far-reaching export controls to limit foreign adversaries’ access to advanced AI tools. What’s more, President Trump’s AI executive order and Stargate announcements this week highlight private sector innovation and collaboration as a driving force behind this administration’s AI policy – an area Commerce is sure to have a hand in.
To head up Commerce, President Trump has nominated Howard Lutnick. And while the bulk of media attention on Lutnick has focused on his responsibility for President Trump’s trade and tariff strategy, policymakers also need to know where Lutnick lands on AI.
With a confirmation hearing scheduled for Wednesday, January 29th, we’re here to break down the top questions lawmakers should consider asking Lutnick as he heads to Congress. Given the outsize role played by Commerce in shaping the direction of AI and coordinating geopolitical relations for sensitive technologies, Lutnick’s appointment is undoubtedly among the most significant for US AI policy.
Background: The Commerce Department & AI Policy
Lutnick’s confirmation hearing presents a major opportunity to better understand the second Trump administration’s AI policy agenda. President Trump has signaled that he is strongly committed to ensuring continued US leadership in AI, but the only policy guidance lawmakers have to go on is Trump’s announcement of the Stargate Project joint venture, his repeal of the Biden administration’s Executive Order on Artificial Intelligence (AI EO), and the Trump administration’s new corresponding Executive Order on removing barriers to American leadership in AI. How and where exactly will the new administration build on or undo the AI policy efforts driven by its predecessor? What role is envisioned for Commerce in all this?
Under the Biden administration, Commerce drove the development and execution of US AI policy across several fronts. Commerce’s NIST currently houses AISI, the entity at the forefront of US AI safety evaluation and international cooperation. Via the Bureau of Industry and Security (BIS), Commerce issued a series of export controls on advanced AI technologies to address national security risks associated with AI advancement by US geopolitical adversaries. Through its CHIPS Program Office, Commerce announced over $30 billion in private sector investment to bolster the American domestic semiconductor manufacturing industry in accordance with the CHIPS & Science Act.
Several of these AI policy initiatives were spearheaded by Commerce under the now-repealed Biden AI EO specifically. The implications of its repeal are discussed in detail by my colleague David Robusto. In short, certain mandates and programs issued under the repealed EO, some of which were undergoing a rulemaking process at Commerce, are now very likely in a state of limbo. This notably includes the Biden AI EO’s directive to Commerce’s NIST to develop standards for AI model safety testing, the impetus for establishing the US AISI.
Whether the US preserves its leadership in AI may well depend on Commerce and the second Trump administration’s plans for Commerce’s existing AI policy initiatives. As Secretary of Commerce, Lutnick will have the final say on their future.
So where does Lutnick stand on these critical institutions and the role they play in US AI leadership? Let’s dive in with four questions Congress should ask the incoming Secretary of Commerce.
1. How will you ensure that the AI Safety Institute remains a leading institution for the standards setting that is integral to preserving the US’s lead in AI innovation?
The AISI is widely recognized across industry, academia, and civil society as key to driving the US’s lead in AI innovation. As Senator John Curtis (R-UT) highlighted at an event this month, AI guardrails are good for business. Standards on transparency, testing, and performance can build trust, increase investment, and create a foundation for the free market to thrive. AISI is at the forefront of that work, developing the testing, evaluations, and guidelines that will help accelerate trustworthy AI innovation in the US.
AISI also positions the US to lead the world on AI standards setting. Globally, several government-backed institutions for AI safety evaluation, testing, and standards setting have sprung up in other countries, including the UK, Japan, France, and China, to name a few. If the US wants to steer global AI innovation according to standards and values it proactively sets, as it should, then a well-functioning, well-funded, and government-backed US AISI is integral. Otherwise, the US risks being relegated to a less favorable position on AI, as it has been with certain other critical emerging technologies such as 5G global connectivity.
2. Would you agree that having pre-deployment visibility into frontier AI capabilities via the AISI is vital to American public safety and national security?
Through the Defense Production Act, the Biden AI EO directed Commerce to develop a mandatory reporting regime for leading AI model developers to share AI safety test results with the US government. The EO also issued interim reporting requirements while awaiting fulfillment of the Commerce directive. With the Biden AI EO repealed, leading AI model developers are now no longer required to share any AI safety test results with the US government.
Independently, AISI has made voluntary pre-deployment testing agreements with leading AI model developers, notably OpenAI and Anthropic. In the absence of any mandatory reporting requirements, such agreements are the only independent avenue for the American public to have pre-deployment visibility into the risk profile of frontier AI capabilities. Preserving such visibility through supporting the continued existence and work of the AISI is therefore critical to ensuring US public safety and national security.
3. Please share your understanding of the way export control regimes for advanced AI technologies can strengthen America’s national security.
Recently, Commerce’s BIS updated its export control regime for advanced AI technologies. The updated controls on advanced computing chips aim to address US geopolitical adversaries’ circumvention of existing export controls. Equally important, the updated regime establishes a tiered system of export controls designed to share AI innovation with US allies and partners and to facilitate greater security and intelligence collaboration.
While this development has drawn the ire of some industry players, the new export controls are critical to preventing our nation’s biggest adversaries from acquiring our most advanced technology through intermediary countries. Because Commerce plays the leading role in administering export controls on advanced AI technologies, it is vital that Lutnick recognizes the importance of continuing to support BIS’s work on this matter.
4. Do you believe that a strong and resilient US domestic chip manufacturing industry is vital to America’s national security interest?
The majority of the funds appropriated under the CHIPS & Science Act have already been committed to specific private sector investment projects. Yet there remains a continuing need to supervise the allocation of these funds. More broadly, the CHIPS & Science Act has itself been called a national security program. Securing a long-term US advantage in AI and related emerging technologies over geopolitical adversaries requires the US to ultimately shed its dependence on a global semiconductor supply chain.
The Secretary of Commerce has the responsibility of supervising much of the execution and success of the CHIPS & Science Act. As the next Secretary of Commerce, Lutnick must be able to affirm the national security implications of this Act and ensure the continuation of its mandate.
The Takeaway
These questions, and the opportunity for Congress to question Lutnick directly, could provide critical insight into the second Trump administration’s vision for US AI policy. Moreover, given the vital importance of AI to US national security interests, Lutnick’s answers to these questions will be integral to assessing his ability to effectively lead the Commerce Department. In this critical moment for the future of US AI policy, we must ensure Howard Lutnick is prepared to assume the responsibilities his appointment will carry.