Five AI Policy Wins in the NDAA

Jacqueline Viteznik

Congress is laying the groundwork for the deployment of artificial intelligence by mandating its responsible use across America’s defense infrastructure. Buried in the thousands of pages of the FY26 National Defense Authorization Act are several provisions that could fundamentally reshape how the Department of War, the military services, and the broader national security apparatus develop, deploy, and secure artificial intelligence systems. These provisions lay out pragmatic and decisive policies that will allow the U.S. to leverage AI for defense.

In a fast-moving policy landscape, where small but significant policymaker decisions can set the course for national security outcomes, these are wins worth celebrating. Here are five provisions in the FY26 NDAA that exemplify effective and purposeful AI use and governance.

Section 347

Integration of Commercially Available AI Capabilities into Logistics Operations

Modernizing military logistics starts with using the tools that already work. Section 347 authorizes the integration of appropriate, commercially available AI capabilities—specifically tools designed to assist with logistics tracking, planning, operations, and analytics. This integration will start with two suitable exercises, followed by a briefing to Congress from the combatant commanders who oversaw them. The briefing will include an impact assessment and recommendations for further integration and development.

This is exactly where responsible AI deployment should begin. We support this provision, as it leverages AI capabilities for an efficiency-focused application that reduces operational risks and costs. As the old adage goes, “amateurs talk strategy, professionals talk logistics,” and AI-enabled logistics represents a lower-risk, higher-reward application that could free up resources and personnel for other critical defense priorities while maintaining meaningful human control over military decisions.

Section 1513

Physical and Cybersecurity Procurement Requirements for AI Systems

Section 1513 establishes a framework of cybersecurity and physical security standards and best practices for AI and machine learning systems to mitigate risks to the Department. The framework will cover workforce risks, workforce development, supply chain risks, adversarial tampering, theft of systems and data, incident reporting, and evaluations of commercially available platforms. It will also assess national security risks, including the risk of slowing progress.

This provision closes a dangerous gap in how military AI systems are protected. It addresses the critical vulnerabilities in AI systems beyond software, such as hardware security and supply chain integrity. These risks are often overlooked but could easily enable adversarial manipulation of military AI capabilities. The procurement requirements create market incentives for vendors to prioritize security-by-design principles and establish baseline protections against emerging threats such as model theft and training data poisoning.

✓ ARI directly advocated for this provision

Section 1533

Artificial Intelligence Model Assessment and Oversight

As AI becomes more embedded in defense operations, oversight can’t be ad hoc. Section 1533 establishes a cross-functional team to create mandatory assessment and oversight requirements for AI models used by the Department of War. Specifically, the provision will create a standardized assessment framework for models currently in use, guidance for evaluating models for future use, governance structures for development, testing, and deployment, assessment levels tailored to use-case risk, mechanisms for cross-component collaboration, and processes for use-case review and approval.

This provision puts responsible AI oversight on a durable institutional footing. It directly advances the responsible use of AI by mandating systematic evaluation of military AI systems. Section 1533 moves to enforce assessment requirements that will set crucial precedents. As advanced AI becomes increasingly integrated into defense operations, the cross-functional team’s outputs will provide essential institutional guardrails that reduce risk and promote responsible use aligned with national security imperatives.

✓ ARI helped ensure this provision survived conference

Section 1535

Artificial Intelligence Futures Steering Committee

Preparing for tomorrow’s AI risks requires planning and leadership today. Section 1535 establishes an AI Futures Steering Committee led by the Secretary of Defense and composed of senior leadership and experts within the department. The steering committee will analyze advanced AI and Artificial General Intelligence (AGI) development trajectories and timelines, assess adversarial and competitor progress toward AGI capabilities, evaluate military applications, develop risk-informed adoption strategies, and map the threat landscape of adversarial advanced AI use. The Committee will submit a report to congressional defense committees by the beginning of 2027 and use its findings to inform future policy decisions.

This provision creates high-level institutional infrastructure for monitoring and governing transformative AI capabilities before they emerge. By tasking senior defense leadership with tracking both domestic and adversarial AGI progress, this forward-looking approach ensures the U.S. maintains strategic awareness and preparedness for AGI developments while establishing the coordination mechanisms necessary to respond responsibly when these capabilities materialize. It positions the U.S. to be ready for the AGI moment rather than falling behind in adopting the technology. The final version incorporates content and ideas from both Section 1626 of the Senate version of the NDAA and Section 235 of the House version.

✓ ARI helped shape both House and Senate versions

Section 6602

Artificial Intelligence Development and Usage by the Intelligence Community

Intelligence agencies shouldn’t reinvent the wheel when it comes to AI. Section 6602 directs the Intelligence Community Chief Information Officer to identify commonly used AI systems with potential for reuse across intelligence elements. The provision also requires tracking and evaluation of AI performance, including documentation of capabilities, limitations, data provenance, and ongoing testing.

ARI welcomes this provision’s emphasis on interoperability and on maintaining a competitive AI marketplace while ensuring the government retains control over its data. The performance tracking requirements establish the accountability essential for high-stakes intelligence applications.

These five provisions demonstrate that effective AI governance doesn’t require choosing between innovation and security. By targeting practical applications such as logistics, establishing collaborative security frameworks, mandating systematic assessments, and securing the AI supply chain, Congress is building a foundation for responsible national security and military AI deployment—one that enhances capabilities and maintains our competitive edge while ensuring accountability. The FY26 NDAA shows how the U.S. is moving beyond abstract AI principles toward concrete implementation mechanisms.