Future-Proofing Governance in the Age of AGI

Justin Bullock

Nearly a century after Mr. Smith went to Washington, artificial intelligence is on its way to town. In the past year alone, we’ve seen the creation of Chief AI Officers at federal agencies, and now Elon Musk is working to further accelerate AI’s integration into government.

The stakes of getting this right could not be higher. Move too slowly, and bad actors can use AI to run circles around the government. Move too quickly, with too few safeguards, and you risk opening the door to AI surveillance and other harmful state AI deployments.

This month, I published a paper with co-authors Sam Hammond at the Foundation for American Innovation and Séb Krier at DeepMind that examines the narrow balancing act governments must perform to get AI integration just right.

Think of it like a highwire act. For centuries, liberal democracies have carefully walked a highwire, avoiding the pitfalls of authoritarianism and excessive government control on one side and the abyss of anarchy and an ineffective state on the other.

Big leaps in technology send a gust of wind across the tightrope every once in a while. Computers, for example, have both enabled digital surveillance and provided new tools for undermining government, requiring states to adapt and adjust their balance.

Now, as we approach the era of artificial general intelligence (AGI), democracies may be facing a hurricane.

Unlike narrow AI tools, AGI would rival or exceed human capability across virtually all governance tasks, from policy analysis to law enforcement. It would offer governments the ability to monitor and assess a nearly limitless amount of surveillance data. Unchecked, AGI could supercharge state power.

Conversely, if advanced AI tools proliferate faster in the private sector or among bad actors than within government, the state’s authority could hollow out. Imagine criminal networks, rogue corporations, or hostile foreign actors wielding AGI while public institutions lag behind.

Perhaps most insidiously, AGI could erode the transparency and accountability that legitimize modern governance. As more government decisions are handed to autonomous algorithms, the chain of accountability blurs, eroding public trust.

Those are the perils. But with the right public policy, it’s also possible for AGI to strengthen how governments work. Harnessed responsibly, advanced AI might radically improve state capacity – making public services more efficient, decisions more data-driven, and bureaucracies more responsive.

These risks and opportunities aren’t distant hypotheticals. Industry experts are predicting the advent of AGI in the next several years. Whether governments can adapt to these technological changes ahead of bad actors, and whether guardrails are in place to protect individual liberty, will determine whether democracies can continue the balancing act between order and freedom.

To harness AGI’s benefits while preserving democratic values, lawmakers need to proactively adapt governance structures. As I point out with Hammond and Krier, there are a few key areas where policymakers should focus their attention to prepare governments for the transition to the AGI era.

First, it’s critical that policymakers establish robust technical safeguards. Governments should require privacy-enhancing technologies and data protections to counter AI-driven surveillance. At the same time, governments should invest in explainable AI and transparency standards so that any automated decision can be audited and understood by humans. These measures will help ensure that even as algorithms take on more tasks, they remain visible and accountable to the public.

Second, there’s a need for governments to embed human oversight in AI systems. Rather than handing over the keys to an autonomous digital bureaucracy, policymakers should adopt hybrid AI-human governance models. Critical decisions should involve a human-in-the-loop or at least human review. Public agencies should leverage AGI’s power while retaining oversight – for example, requiring that algorithms explain their recommendations to a human official who must formally approve high-stakes outcomes. 

Third, we need to embrace anticipatory and collaborative governance. Policymakers must not treat AGI as a distant or purely technical issue. It demands an anticipatory governance mindset. This means investing in foresight capabilities – scenario planning, stress-testing institutions, and “red-teaming” AGI systems for potential failure modes. Governments at all levels, from city councils to Congress, should run exercises asking “How would we govern if advanced AI were ubiquitous?” Proactively adjusting policies now is easier than scrambling after the fact.

The clock is ticking. The U.S. and other democracies need to muster the foresight and political will to act boldly before a crisis arrives. This means immediate steps to implement safeguards and adaptations, and a willingness to rethink old governance models. 

For policymakers in Washington, the mandate is clear: lead now to future-proof our democratic governance. By taking proactive steps today, we can ensure that the tremendous power of AGI remains firmly in service of our deepest democratic values and put the wind of technological change at our backs.