Minutes to Seconds to Machines Running the Whole Show
A naval intelligence officer aboard a destroyer in the South China Sea watches her console populate. The fused AI targeting system has correlated satellite imagery, signals intelligence, and pattern-of-life data into a 94 percent confidence score: a fast inshore attack craft, weapons hot, closing on the carrier strike group. The engagement recommendation appears on screen, awaiting instantaneous authorization.
She has time to read the confidence score. She does not have time to ask what the remaining six percent contains, whether the SIGINT emitter the model correlated is actually attached to the vessel in the imagery, or whether the training data included this hull class at this aspect angle. She authorizes the strike. The system logs her approval.
A human was in the loop in name only.
This is the architecture for the future of warfare, but it is also the architecture under which “human in the loop” stops meaning anything.
Why Time Is a Parameter
The decision window in combat has already collapsed from minutes to seconds, and AI-enabled weapons are about to shrink it further, possibly beyond human comprehension; the Department of War is actively pursuing that advantage. Speed is the selling point, but it is also the mechanism by which automation bias (the well-documented tendency of operators to trust machine outputs uncritically, which intensifies as decision timelines compress) renders human control meaningless in practice even where it exists in doctrine.
Current U.S. policy (DoD Directive 3000.09) requires “appropriate levels of human judgment,” but the phrase does not specify what qualifies as judgment or where it must be exercised across the Observe-Orient-Decide-Act (OODA) loop and the AI life cycle. While the Directive includes important safeguards on testing, interfaces, transparency, and operator control, its ambiguity means that at AI-enabled speeds, compliance can be satisfied by an operator authorizing a recommendation they merely scanned, or by automating the recommendation’s action with no human approval at all.
The fix is not a ban on autonomy; there are strategic and operational benefits the United States cannot forfeit. The fix is a condition that ensures humans can comprehend machine actions before they happen: a minimum window of five seconds must elapse between a machine’s lethal recommendation and the engagement it authorizes. Below that floor, human judgment becomes presumptively unreliable in many operational contexts.
What Makes a Delay Effective or a Liability
Interface design, training, and experience are all important factors in ensuring that human operators can substantively and effectively engage with AI recommendations, but enough time to process and think is the precondition for all of them.
A delay must be long enough to make human review more than a formality, but not so long that it precludes mission-specific adaptations or exceptions.
Time, and the lack of it, has a clear impact on how autonomous weapon systems and their operators function. In the Tornado fratricide during Operation Iraqi Freedom, a Patriot operator had less than ten seconds from target detection to kinetic contact. The operator was nominally in control but operationally functioned as a witness.
Operators need enough time not just to read recommendations but to genuinely evaluate them, and that evaluation requires sufficient situational awareness. The foundational research on situational awareness (SA) describes three levels that build on one another: 1) perception of what is in the environment; 2) comprehension of what those elements mean together; and 3) projection of how the situation will evolve. An operator who has not completed perception cannot meaningfully perform comprehension, and therefore cannot project forward to anticipate where a target, threat, or friendly force is headed. This matters for decision-time thresholds because these levels take time to build.
Too long a delay in U.S. autonomous systems could cede a tempo advantage to adversaries operating without one. If China or Russia fields systems that engage at machine speed, a five-second floor becomes a disadvantage, and we no longer have a comparable OODA loop.
However, U.S. systems already operate with other types of delays and incorporate them into their engagement logic. Latency is common wherever compute power is unavailable at the tactical edge and data is buffered rather than processed in real time. Latency and communication-degraded environments are still the reality of operations.
The U.S. also already fields fast engagement systems that keep decisions within human comprehension. The Aegis Combat System compresses detect-to-engage “into seconds” and operates with human supervisors able to intervene or override the automated pairing of targets to weapons. The Patriot system’s engagement timelines are likewise measured in seconds. Even these “fast” systems preserve supervisory windows in the low single-digit seconds.
Of course, there are circumstances where a delay would not be appropriate. The clearest cases are systems engaging inbound munitions rather than people or platforms: Active Protection Systems like Trophy and Iron Fist, which defeat incoming RPGs and anti-tank guided missiles in milliseconds, and Counter-Rocket, Artillery, and Mortar systems, which intercept indirect fire within seconds of radar detection. In both cases, the engagement window is a function of physics, not doctrine, and the target is a projectile already in flight.
Scope of the Five-Second Floor
A delay would not impose an entirely novel logic on U.S. practice. Some existing air-defense systems already operate with compressed but non-zero supervisory windows. The point is not to freeze every mission into one timeline, but to establish a default presumption that non-time-critical lethal decisions should remain cognitively reviewable by humans.
The alternative, automation at machine speed with no human decision window, has its own well-documented failure modes.
Why Five Seconds
Five seconds is not an arbitrary number. It is a policy floor drawn from the best available cross-disciplinary evidence on human cognition under time pressure, situational awareness, and compressed combat decision-making.
The lower bound comes from research on human cognition under time pressure. Studies of drivers resuming control from automated vehicles, a close civilian analog given the limited research in this domain, show that below roughly three seconds, operators cannot build even minimally effective situational awareness. Meaningful gains in comprehension continue to accrue up to about seven seconds; after that, the returns diminish. This is the basic architecture of human perception: people need time to register a signal, orient to context, and form an intention. Anything under three seconds is below the cognitive floor. Five seconds sits in the middle of the band where minimal judgment and awareness become possible.
The upper bound comes from combat studies. Research on combat identification support shows that delays in positional updates of roughly ten seconds significantly degrade identification effectiveness. That is the ceiling past which additional time spent waiting carries real costs. Five seconds sits well below it, leaving operational headroom.
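To make the band concrete, here is a minimal sketch in Python of how the evidence above bounds the choice. The three-, seven-, and ten-second thresholds come from the studies cited in this section; the constants and the function itself are our illustration, not any fielded system’s logic.

```python
# Illustrative only: the thresholds encode the evidence cited above;
# the classification function is hypothetical.

COGNITIVE_FLOOR_S = 3.0        # below ~3 s, operators cannot build minimal SA
DIMINISHING_RETURNS_S = 7.0    # comprehension gains largely plateau past ~7 s
DEGRADATION_CEILING_S = 10.0   # ~10 s update delays degraded combat identification

def assess_review_window(seconds: float) -> str:
    """Classify a proposed human-review window against the cited evidence."""
    if seconds < COGNITIVE_FLOOR_S:
        return "below the cognitive floor: judgment presumptively unreliable"
    if seconds <= DIMINISHING_RETURNS_S:
        return "inside the band where minimal judgment becomes possible"
    if seconds < DEGRADATION_CEILING_S:
        return "diminishing returns, but under the degradation ceiling"
    return "past the ceiling: added waiting carries real operational costs"

print(assess_review_window(5.0))  # five seconds sits mid-band, with headroom
```

On these numbers, five seconds clears the three-second cognitive floor with margin and still leaves roughly five seconds of operational headroom below the ten-second ceiling.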
The Cognitive Window
Then there are real-life examples of compressed decision-making. The Tornado fratricide occurred in a window of less than ten seconds. That does not prove five seconds would have prevented the incident. It demonstrates how quickly the break point emerges, and how quickly the distinction between automatic and manual modes becomes operationally meaningless. A window of less than ten seconds wasn’t enough for a trained operator to override a recommendation. A five-second floor doesn’t fix that problem by itself, but it makes a fix possible.
This is a floor, not a ceiling. Nothing about a statutory minimum prevents services, commanders, or rules of engagement from requiring longer windows where reversibility is low, civilian density is high, or the target class is especially ambiguous.
A floor establishes the bottom. Mission-specific requirements build upward from it. That is the relationship between a regulatory minimum and operational doctrine in every other domain where both exist.
And any serious version of a five-second minimum includes waiver authority for operational commanders and categorical exclusions for time-critical defensive systems. The floor constrains the default case. It does not constrain defensive systems where any latency is incompatible with the mission. It is meant to empower warfighters when time is needed and to get out of their way when it isn’t.
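To show how this default-plus-exception structure could compose in engagement logic, here is a minimal sketch. The classes, categories, and names are hypothetical illustrations, not drawn from any fielded system or from Directive 3000.09.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical target taxonomy; real categories would be doctrine-specific.
class TargetClass(Enum):
    INBOUND_MUNITION = auto()   # e.g., RPGs, ATGMs, rocket/artillery/mortar fire
    PLATFORM = auto()           # vessels, vehicles, aircraft
    PERSONNEL = auto()

@dataclass
class Engagement:
    target_class: TargetClass
    commander_waiver: bool      # explicit waiver by an operational commander

FLOOR_SECONDS = 5.0  # the statutory default; ROE may require longer windows

def required_review_window(e: Engagement) -> float:
    """Return the minimum seconds between recommendation and engagement.

    Illustrative decision logic for the proposed floor:
    - time-critical defensive intercepts are categorically excluded;
    - a commander's waiver bypasses the default;
    - everything else gets at least the five-second floor.
    """
    if e.target_class is TargetClass.INBOUND_MUNITION:
        return 0.0  # physics, not doctrine, sets the window here
    if e.commander_waiver:
        return 0.0  # waiver authority for the operational commander
    return FLOOR_SECONDS

# Default case: a platform engagement with no waiver gets the full floor.
assert required_review_window(
    Engagement(TargetClass.PLATFORM, commander_waiver=False)
) == FLOOR_SECONDS
```

The point of the structure is that the floor is the default branch: exceptions must be named, not assumed.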
The Window for Congress Is Also Shrinking
The goal of a temporal floor isn’t to slow warfare down for its own sake. It’s to keep warfare at a speed where humans are the ones waging it. Below that speed, lethal force is not a human decision aided by machines; it is a machine action ratified by humans.
The Anthropic-Pentagon dispute that surfaced in March 2026 is, at its core, an argument about who decides what guardrails govern AI in lethal contexts. While the lawsuit over the company’s supply-chain risk designation plays out, its model is reportedly embedded in the Maven Smart System and used in target generation against Iran. Those targeting decisions may already be producing fatal mistakes, alarming enough that senators wrote to Secretary Hegseth seeking answers on civilian casualties.
The Department of War is moving at “wartime speed.” Congress is not. Every year that AI-enabled autonomy proliferates through procurement and doctrine without a temporal floor, introducing one gets harder.
Five seconds is small enough to be operationally cheap and large enough to preserve the possibility of human judgment.
It is the minimum. The only question is whether it gets established now, while that minimum is still affordable.