Star-Spangled AI: OpenAI’s Blueprint for Progress and Politics

David Robusto

Policy Analyst

At the end of December, my colleague Iskandar Haykel and I wrote a blog post recapping three major trends in the AI industry in 2024, one of which was that frontier AI labs are backing away from their prior pro-regulatory positioning. On Monday, OpenAI substantiated this view with their new “economic blueprint” for AI, a document that’s one part policy and two parts politics aimed at the incoming Administration. When you wade through some of the rhetoric, you find a number of strong policy ideas worth elevating. 

This blueprint effectively consists of two documents:

– The first is an attempt to woo the incoming administration with conservative-friendly language.

– The second, in contrast, is a collection of mostly commonsense suggestions to bolster the American AI industry, three of which I view to be especially important and timely. Let’s address these elements of the blueprint in turn.

OpenAI’s Patriotic Pivot 🦅

OpenAI has positioned itself for years as the neutral shepherd of advanced machine intelligence – a non-profit lab that cares about safety and security above all else. ChatGPT’s unexpected and remarkable popularity put significant pressure on that image as the company became a leader in one of the world’s most hyped technologies. Now, as President Trump prepares to return to office, OpenAI is dropping this image in favor of a more traditional pro-free-market framing coupled with a heavy focus on maintaining American technological leadership.

It’s helpful to note the language OpenAI uses in their blueprint, which seems designed to curry favor with the incoming Administration. The word “freedom” appears throughout the document, including references to “individual freedoms,” “freedom for everyone,” and “freedom for developers.” The document goes so far as to suggest that in exchange for their “freedom” to use AI tools, users should bear primary liability for any harm caused by AI.

Leaning heavily on free market rhetoric, some of the document’s more provocative claims include the implication that basic AI safety regulations would be akin to forcing cars to travel below five miles per hour (pg. 3), the suggestion that unrestricted AI development is the only way for the technology to align with democratic values (pg. 6), and the warning that the U.S. needs to incentivize $175 billion of investment (likely through minimizing regulation) in order to prevent CCP AI dominance (pg. 13). 

OpenAI’s embrace of free-market ideology stands in contrast to the position of many conservative members of Congress, who’ve acknowledged that basic AI guardrails are good for business. Safeguards are crucial for the trust required to adopt AI at scale in critical industries like healthcare, education, and finance, and for the U.S. to maintain its credibility as the democratic global leader of one of history’s most powerful technologies. 

The political power matrix shifted enormously with the election, and the policy document is best understood as a pivot in response. Just two years ago, OpenAI was highlighting the risks that frontier models create and the importance of government oversight and transparency. This week’s economic blueprint rejects the need for government oversight, and it makes that case by borrowing the language of the right.

Notably, Some Good Ideas

In the second part of the document, OpenAI presents a number of “solutions,” advocating for a range of government action at the federal, state, and local levels intended to bolster the American AI industry (and give the company and other frontier AI developers lots of business).

It is worth noting (as others have emphasized) that these proposals call for hardly any requirements around safety and security; OpenAI concedes very little while requesting total preemption from state-level safety regulations in exchange. Still, while none of the proposals is particularly novel, many ideas from the blueprint could effectively advance the responsible development of AI and are worth consideration. Three in particular strike me as crucial for the new Congress and administration:

Take Action on AI-generated CSAM

OpenAI’s section on “child safety” (pg. 11) takes aim at AI-generated child sexual abuse material (CSAM), making it one of the only targets for actual restrictions in the blueprint. OpenAI recommends developing policy solutions that “prevent the creation and distribution of AI-generated [CSAM],” incorporating CSAM protections throughout the AI lifecycle, and promoting partnerships with law enforcement on the issue. My only wish is that OpenAI had thought bigger with these proposals; the issue of AI-generated non-consensual intimate imagery (NCII) does not only affect children.

While CSAM is a uniquely sensitive matter, broader restrictions are needed on the use of AI to generate or otherwise help create NCII. Two pieces of legislation on this topic that nearly passed in the 118th Congress were the TAKE IT DOWN Act and the DEFIANCE Act, around which ARI organized a coalition letter. If OpenAI becomes a champion for new versions of these bills (the TAKE IT DOWN Act is already being reintroduced), or similar legislation that emerges in the 119th Congress, it would be a massive victory over the everyday harms the company identifies in the blueprint.   

AI Economic Zones for Faster Permitting

In the second half of 2024, the American AI community and much of the U.S. government reached consensus that significantly more energy is required to support the development of advanced AI. This has inspired new resources from the administration, bold legislative proposals, and significant corporate deals. In the blueprint, OpenAI takes aim at a crucial pain point in American energy development: permitting. The permitting process can currently delay proposals for critical energy infrastructure (such as datacenters) by months or years. 

To combat this, OpenAI proposes that all levels of government work with industry to designate special “Economic Zones” where permitting is rapidly expedited for building AI infrastructure. While special economic zones are not infallible, properly targeted initiatives can catalyze development in this critical industry.              

The Biden administration balanced several elements on this issue well in their recent executive order on the topic. The EO incentivizes rapid development of crucial AI infrastructure on federal land, while ensuring that development reserves space for small and medium-sized AI businesses and promotes the use of clean energy. The incoming administration can accelerate the production of crucial AI infrastructure by maintaining the recent EO and expanding opportunities to expedite AI permitting.    

Increased Collaboration Between Developers and the National Security Community

Finally, OpenAI identifies several areas where the government and industry should collaborate on matters of national security. The company’s suggestions for the government on this topic include:

– Share relevant national security information and resources with AI developers, such as via briefings,
– Share information on how AI companies should secure their IP and mitigate AI’s catastrophic risks,
– Provide opportunities for AI companies to access specially secure infrastructure for model testing and evaluation, and
– Form a consortium allowing the AI community and national security community to share information, exchange requests, and promulgate best practices. 

These are in many ways the least controversial suggestions in the blueprint. The one thing everyone working in AI can agree on is that a tremendous amount of uncertainty remains. Building the muscles of communication, information sharing, and mutual trust between the AI and national security communities in a manner that is agnostic of specific capability developments will be critical, whatever timeline we find ourselves on.

Overall, OpenAI’s new Economic Blueprint is noteworthy on several fronts. Yes, it is important to note when companies drastically overhaul their engagement strategy to appease a new power center. But beyond that, much of the AI community seems to be coalescing around several ideas in the blueprint, making it an important list of recommendations to watch and an effective document for future reflection. Most importantly, though, the blueprint shows just how much low-hanging fruit remains for the Trump administration and 119th Congress to pick: topics like NCII, permitting reform, and public-private security collaborations. I will be rooting for them.
