By Eli Talbert
A targeting screen fills with structures, heat signatures, and a rapidly closing window. Aircraft have only minutes of fuel left when an AI highlights a single building, and suddenly the decision space collapses to one bright box.
This is the triage trap: a pattern in which AI-enabled workflows determine which options reach command judgment in the first place, before a commander weighs alternatives. What the model elevates becomes the decision — what it filters out disappears. This is not captured in mission command or command and control doctrine. Those frameworks govern who may decide and how authority is delegated, but they assume commanders retain visibility into options. The triage trap arises when AI systems prestructure the decision space itself, determining which alternatives are seen — an institutional vulnerability that delegation rules alone miss. Critically, this is not automation bias — misweighting visible options — but decision space construction, in which options never appear at all.
The triage trap is not a software problem. It is an institutional one. Doctrine, command climate, and evaluation criteria determine whether judgment is exercised as an obligation or treated as friction to be minimized. AI systems merely make those choices legible and fast. When institutions prioritize throughput, tempo, and apparent certainty over deliberation and responsibility, AI-enabled workflows will narrow the decision space long before a commander is aware a choice has been made. Misdiagnosing the problem, therefore, leads institutions to fix the wrong thing.
Consider the difference in targeting. In classic automation bias, a commander sees multiple plausible targets and selects the wrong one because the system’s recommendation is overweighted. The alternatives are present but discounted. In a triage trap failure, those alternatives never appear at all. The system surfaces a single “best” target, competing hypotheses are filtered out upstream, and the commander’s task collapses from choosing among options to approving the only one shown. The error is not misjudgment among visible choices — it is the silent disappearance of choice itself.
This distinction changes remedies. Automation bias is addressed through calibration and skepticism, command compression through delegation rules. The triage trap requires doctrine to explicitly define where choice itself must be protected — where systems are obligated to surface competing hypotheses, alternative targets, or uncertainty, even at the cost of speed. Without that requirement, judgment is not overridden — it is silently precluded.
Today, AI systems search imagery, surface likely targets, and flag what appears most threatening faster than humans can independently verify. Operational pressure increasingly treats human judgment as the bottleneck to remove rather than the safeguard to preserve. As the Pentagon accelerates AI adoption — from Project Maven to Replicator — the question is no longer whether AI will prioritize targets, but whether commanders will retain meaningful authority over how those priorities are formed.
Decision-Making vs. Autopilot
This failure mode is not new — what is new is how AI accelerates long-standing institutional pressures by shaping which choices appear at all. One contemporary example illustrates these dynamics, though the argument does not depend on any single case. Recent reporting on the Israeli military’s AI-assisted targeting system Lavender, though politically contested, describes operators spending little time reviewing AI-proposed strike targets. The dynamic is familiar: AI-generated target lists can narrow the decision space so sharply that alternative targets or non-strike options either never reach the reviewer or survive only nominally, under time constraints that reduce approval to a procedural act.
Those pressures are not unique to modern AI systems. We have seen versions of this pattern before. In 2003, Patriot missile crews trusted automated threat classifications over contradicting human observations, contributing to two friendly aircraft shootdowns during Operation Iraqi Freedom. Those systems were rule-based and less autonomous than today’s. Unlike contemporary triage trap scenarios, the relevant alternatives in the Patriot cases were visible to humans — the failure lay in how those cues were weighted, not in their disappearance. Still, the parallel matters: The failure mode was not autonomy itself, but the interaction of tempo, interface cues, and organizational pressure — forces that push personnel toward procedural compliance rather than independent evaluation, and that modern AI systems intensify rather than eliminate.
Humans may remain present, but participation changes once systems begin determining which options appear. One of the most dangerous phrases in operations centers is, “The AI already checked that.” Presence becomes acquiescence: The human no longer evaluates the target itself, only confirms the system has done so. The risk is not that AI will refuse orders. It is that, in high-tempo environments, humans will stop issuing them.
As AI Predicts Us, Adversaries Predict AI
When AI systems learn our patterns of attention and prioritization, adversaries learn how to manipulate them.
In a recent training exercise, I was part of the red team, role-playing enemy forces. A red team AI system monitoring hotel surveillance feeds failed to track a blue team role-player slipping out a side exit. A lowered head posture and a simple disguise fell outside the model’s learned templates, so the system quietly demoted that feed while promoting higher confidence matches elsewhere. When a later sensor cue suggested the individual had left the building, an analyst manually rewound the footage. A subtle gait asymmetry — how the trainee carried weight unevenly while walking — revealed the match the AI model had missed. Nothing had malfunctioned. The target had simply adapted to what the model was not trained to prioritize. The episode illustrates a broader pattern; it does not prove one.
Foreign militaries are already experimenting with such adaptation deliberately. Open source reporting on large-scale People’s Liberation Army exercises in the Gobi Desert describes the use of decoy artillery and misleading movement patterns to draw strikes away from real assets. While details are necessarily limited, the open source accounts of these exercises describe losses on the order of half of blue team forces, their strikes drawn toward decoys while genuine threats maneuvered freely. The specifics matter less than the lesson: Capable adversaries learn how automated prioritization works and exploit its blind spots.
AI strengthens the find function. A capable adversary strengthens the hide function even faster. Automation bias begins as efficiency — the triage trap begins by narrowing choice. Against a learning opponent, it creates novel attack opportunities.
Using AI Deliberately
Commanders want speed — in peer conflict, slow can be dead.
Sometimes unconstrained AI speed is necessary — for example, in defensive reactions such as missile defense or swarm interception, where machines may need to act faster than humans can react. What matters is not absolute speed, but whether the tradeoff between AI speed and human judgment is chosen deliberately.
The U.S. Air Force has tested AI-enabled targeting tools that accelerate timelines while focusing, rather than replacing, human judgment by surfacing the right cues at the right time. This is a model worth scaling: machines compressing complexity so that humans can think.
AI does not inherently erode judgment. Used well, it can enhance it. Missions should accelerate wherever machines outperform humans, while preserving human control when lethal decisions are made. But speed wins that argument easily when every metric rewards momentum and none rewards deliberation.
Not every fight demands the same friction — an intentional slowing of the process to allow for command judgment. In missile defense or swarm scenarios, where seconds determine survival, some guardrails must relax. But in deliberate or strategic strikes — where escalation, attribution, and legality carry the highest stakes — bypassing judgment invites consequences far more dangerous than delay. The goal is not to slow the kill chain, but to ensure the right parts of it remain slow.
Over the next 12 to 18 months, acquisition and doctrine decisions will hard-code how humans interact with these systems. The same dynamic applies at the strategic level, in escalation management: AI recommendations can produce strikes a commander might otherwise skip. In those moments, speed substitutes for restraint — with consequences that extend far beyond the tactical fight. No interface can preserve judgment if institutional incentives punish its exercise; evaluation criteria, command climate, and career risk shape behavior as powerfully as software design.
Friction by Design: Guardrails for Judgment at Speed
If judgment is an institutional obligation, then systems must be designed to preserve it under speed. Judgment does not survive speed by accident — it survives only where institutions deliberately protect it. Interface design matters, but friction at the human-machine boundary will fail if organizational incentives reward only speed. The solution is not a blanket slowdown, but carefully placed friction, inserted exactly where judgment is most vulnerable, so that oversight adds minimal delay while preserving operational effectiveness. The guardrails that follow aim to preserve choice formation, not merely improve choice selection.

These guardrails protect human judgment over what systems present. Equally important, doctrine must specify what systems are required to present in the first place: when AI-enabled workflows are obligated to surface competing hypotheses, alternative targets, or explicit uncertainty rather than a single recommendation. Without that upstream constraint, no amount of downstream friction can fully prevent the triage trap.
Structural Guardrail: Separate Find from Engage
Resources permitting, the person locating AI-surfaced targets should not be the same person authorizing lethal action. U.S. dynamic targeting practice already reflects this logic: The analyst who finds a target is rarely the commander who authorizes the strike. In smaller or dispersed units, independence can still be enforced through brief cognitive breaks or secondary verification.
This approach carries real costs. It can strain manpower and introduce latency. The aim is not perfect separation, but preserving a moment of independent judgment where the cost of error is highest.
Workflow Guardrail: Conceal Confidence Until Commitment
Confidence scores anchor decisions. Systems should present the AI’s recommendation without its confidence score, revealing the score only after the analyst records an initial judgment. This forces independent evaluation before numerical cues do the work.
This guardrail, too, carries costs. Used selectively, however, it counters a well-documented anchoring effect without meaningfully slowing operations.
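As a minimal sketch of what this guardrail could look like in software — the class, field names, and values here are illustrative assumptions, not drawn from any fielded system — the workflow is a simple gate: the evidence is always inspectable, but the confidence score is withheld until the analyst commits an independent judgment.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TargetRecommendation:
    """Hypothetical AI-surfaced target whose confidence score
    is withheld until the analyst records a judgment."""
    target_id: str
    evidence: List[str]                     # cues the analyst may inspect freely
    _confidence: float = field(repr=False)  # anchoring cue, hidden by default
    analyst_judgment: Optional[str] = None

    def record_judgment(self, judgment: str) -> float:
        """Commit the analyst's independent judgment; only then
        is the model's confidence revealed for comparison."""
        if self.analyst_judgment is not None:
            raise RuntimeError("judgment already recorded")
        self.analyst_judgment = judgment
        return self._confidence

    def confidence(self) -> float:
        """Refuse to disclose confidence before commitment."""
        if self.analyst_judgment is None:
            raise PermissionError("record an independent judgment first")
        return self._confidence

# Illustrative use: the evidence is visible from the start;
# the score appears only after the analyst commits.
rec = TargetRecommendation(
    "bldg-7", ["thermal signature", "pattern-of-life hit"], 0.91
)
score = rec.record_judgment("meets criteria: matches tasked grid and timing")
```

The design choice is that non-disclosure is enforced by the data structure itself rather than by analyst discipline: there is simply no code path to the score that does not pass through a recorded judgment.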
Interface design matters here. If it takes five clicks to question a target and one click to approve it, approval becomes the default. Systems should therefore make challenge as easy as acceptance, with rapid access to underlying evidence and source data — so that friction falls on unexamined approval rather than on judgment itself.
Cognitive Guardrail: Require a One-Sentence Rationale
Before authorizing a strike, the decision-maker should briefly state why this target meets criteria at this moment. If the rationale cannot be articulated plainly, authorization pauses. Its value depends on a command climate that treats justification as a professional obligation rather than a compliance exercise.
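A sketch of how such a pause could be enforced in a workflow — the function name and the five-word threshold are assumptions chosen for illustration, not doctrine:

```python
def gate_authorization(target_id: str, rationale: str) -> bool:
    """Hypothetical rationale gate: authorization proceeds only if the
    decision-maker supplies a substantive, plainly stated reason.
    Returns False to pause authorization pending review."""
    words = rationale.strip().split()
    # An empty or near-empty rationale cannot articulate why *this*
    # target meets criteria at *this* moment, so the workflow pauses.
    if len(words) < 5:
        return False
    return True
```

The check is deliberately crude; the structural point is that the workflow cannot reach approval without passing through an articulated justification.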
These guardrails are not cost-free, nor are they universally applicable. In resource-constrained units or during defensive operations measured in seconds, some may prove impractical or counterproductive. Informal workarounds will emerge under pressure, and ritualized compliance can hollow out the very judgment these measures aim to preserve. The objective is not to impose friction everywhere, but to identify decisions with irreversible consequences and protect judgment there — accepting that other parts of the kill chain must run at machine speed.
Doctrine Must Constrain Software
Policy still requires a human in the decision-making loop but often underspecifies what that person must actually do at speed. Someone who merely confirms an AI model’s output is structurally present but functionally absent.
Commanders and acquisition leaders must decide where friction is mandatory and encode those decisions directly into doctrine, training, and evaluation. Judgment is an institutional achievement, not a user-interface feature. If incentives punish hesitation and reward throughput alone, no amount of interface friction will preserve command authority.
The Question That Remains
AI will speed up every part of the mission. The real question is how much judgment we are prepared to lose in exchange for speed.
The triage trap is not about unreliable technology. It is about human responsibility under pressure. If judgment is treated as an assumption rather than an engineered capability, it will fade exactly when it is most needed.
The triage trap can emerge anywhere AI systems compress choice under time pressure. Whether in intelligence analysis, cyber response, or crisis decision-making, systems that prioritize speed by narrowing attention risk quietly displacing human judgment. The specifics differ by domain, but the pattern is consistent: What the system surfaces becomes the decision, and what it filters out disappears.
Speed is a weapon. Judgment is a shield. Whether command authority survives the age of AI speed will depend on how deliberately institutions choose to preserve both.
Eli Talbert is a U.S. Army Reserve officer on active duty and a Ph.D.-trained data scientist supporting U.S. Special Operations Command. He specializes in operational AI and decision-making under uncertainty.
The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of the Department of Defense, the U.S. Army, or the U.S. Government.