The recent release of the Trump Administration’s AI Action Plan, along with more than 30 AI-related provisions across the House and Senate versions of this year’s National Defense Authorization Act, makes one thing very clear: the intersection of national security and AI goes well beyond tech stacks and export controls. Policymakers in both the legislative and executive branches are demanding answers on AI adoption and responsible AI policy, and the Department of Defense (DOD) and the Intelligence Community (IC) have increasingly little time to provide them.
This internal pressure, coupled with the sharpening dynamics of a perceived “race” with China, has intensified the DOD’s and IC’s push to quickly assess and adopt AI as an integral part of our national defense. From improving business processes at the Pentagon to developing doctrine for the use of fully autonomous weapon systems (AWSs), AI adoption will profoundly affect national security, defense, and intelligence. To meet the moment, Americans for Responsible Innovation (ARI) is announcing a new policy portfolio focused on defense and intelligence issues to help national security leaders think through the key questions they now face.
The new policy portfolio is anchored by three policy pillars, each of which explores critical issues at the center of ongoing debates throughout defense and intelligence circles about how best to integrate AI. These pillars will drive research in support of policymakers on Capitol Hill, in the Pentagon, at the White House, and throughout the national security enterprise as they grapple with how best to protect American interests in the age of AI.
Pillar 1: Strategic Competition and Deterrence in the Age of Military AI
Focus: Artificial intelligence is transforming the foundations of strategic competition and deterrence, particularly in the context of the U.S.–China rivalry. This pillar focuses on the implications of AI-enabled capabilities for force projection, escalation dynamics, and doctrinal development, as well as how to incorporate AI risks into strategic planning and wargaming.
Why It Matters: AI is altering the balance of power and the tempo of conflict, creating risks of miscalculation, crisis instability, and deterrence failure. The lack of clear AI-specific doctrines and escalation management mechanisms leaves the U.S. unprepared to navigate emerging strategic dynamics, particularly with peer competitors.
Pillar 2: Ethics, Legitimacy, and the Use of Autonomous Systems in Warfare
Focus: The normative, legal, and ethical boundaries surrounding the use of autonomous and semi-autonomous systems in armed conflict remain undefined. This pillar investigates when and how such systems should be employed, how responsibility and accountability are maintained, and how to design operational constraints that reflect both democratic values and international law.
Why It Matters: The deployment of lethal autonomous systems challenges foundational principles of just war theory, international humanitarian law, and public legitimacy. Without clear employment thresholds, accountability mechanisms, and ethical doctrine, U.S. use of autonomous and lethal autonomous weapon systems (AWSs/LAWS) risks undermining moral leadership, alliance cohesion, and long-term legitimacy.
Pillar 3: Institutional AI Readiness and Workforce Capacity
Focus: Building the human and institutional foundations for AI integration across the U.S. defense and national security apparatus is a formidable challenge. This pillar examines the recruitment, retention, and development of AI-literate talent, as well as the acquisition reforms needed to support agile, iterative development of AI tools and capabilities.
Why It Matters: The success of military AI integration depends on far more than software—it requires people, processes, and institutions ready to adapt to rapid technological change. Without a skilled workforce and reformed acquisition pathways, the DOD and its allies will struggle to operationalize even the best AI tools.
Engaging with Experts and Stakeholders
To offer actionable, scalable, and thoughtful analysis and guidance on these three policy pillars, ARI must engage stakeholders across the defense and national security space. The new National Security Portfolio will actively engage senior leaders and policymakers throughout the national security enterprise to better understand how ARI can support them as they wrestle with these key questions.
As ARI works to catalyze a series of broad, inclusive, and bipartisan conversations about these key policy priorities, your voice is welcome as well. Anyone interested in contributing ideas or supporting our efforts can contact our team by emailing Morgan Plummer, our Senior Policy Director leading ARI’s national security efforts, at morgan@ari.us.