Who Holds the Reins? AI, Decision-Making, Human Control and Agency in Warfare

Stylized image of an AI chip and a robot hand. | Designed by Freepik.

This piece builds upon an initial work I co-authored with Bradley Boyd for The Republic Journal on human-centered warfare, in which we defined cognitive autonomy as a product of factors such as control over choice architecture and the ability to conceive desired end states.

In Gaza and Ukraine, AI-enabled decision-support systems have increasingly been used to assist, augment, or automate human choices, particularly in complex operational environments. While these systems likely offer efficiency, speed, and data-processing capabilities far beyond human cognitive limits, they also raise questions about ownership and control, especially over the cognitive aspects of decision-making.

Does ownership correlate with control, and does that control unequivocally apply to the inputs, the outcomes, and the process by which the outcomes are generated? In short, what do we really mean when we say “ownership of a system”?

I argue that humans benefit from physical control over a system, whereby they can exert tangible control to modify, enable, or align its parts. Ownership also means holding on to the "why" behind the outcomes of any decision we make, whether it is the cereal we choose to eat in the morning, the career we decide to pave for ourselves, or the targets we strike in times of war.

Reasoning, Agency, and Intent in Automated Warfare

AI-driven military systems promise efficiency, and in some situations precision and scale, by automating tasks that traditionally require human judgment: target selection, threat assessment, and logistical optimization, among others. Yet this raises critical questions about the roles of free will, cognitive autonomy, and agency within these automated systems.

In a nutshell, free will, cognitive autonomy, and agency are related but distinct. We can maximize agency, meaning control over our actions, but if the reasoning behind those actions is not our own, then we cannot claim the decision is our own either. To maximize ownership, then, retaining control over reasoning becomes crucial.

One's ownership of a system is ultimately reduced when automation takes over reasoning rather than merely automating the process by which an action is executed. This becomes a concern when systems control choice architecture. If a system controls the number and variety of choices presented, it can steer us toward choices we might not have made independently, often without our awareness. Losing control over choice architecture means losing control over reasoning itself.

AI-driven systems on the battlefield operate amid uncertainty—both about enemy intentions and the perceptions others have of our own intentions. Intent becomes a crucial dimension when delegating cognitive autonomy to machines.

As established in that initial piece, cognitive autonomy is the ability to determine when a choice needs to be maintained or altered, based on a thorough understanding of context, influences, and the desired end state. Is the limit of cognitive autonomy drawn at the output, or does it extend to getting the right intention across?

On one hand, even if we clearly define our intent, does that mean the adversary will align with our interpretation? Are they not also bound by their own cognitive autonomy and subjective worldview? Even the clearest intention might not resolve uncertainty if the adversary's cognition results in misinterpretation and an escalation of tensions.

On the other hand, can machines:

  1. Understand intent the way we reason it?
  2. Explicitly embed that intent into the decision-making process?
  3. Maintain intent consistently throughout the process?
  4. Consider adversarial perceptions of intent?

Uncertainty is a part of conflict, reshaping situations and desired end states. But can we even program uncertainty into a system? Can we teach it to navigate uncertainty?

Humans have not yet succeeded in doing so.

Balancing Human Oversight and Machine Dependency

As battlefield decisions increasingly rely on machine inputs, human cognitive autonomy risks erosion. We become cognitively dependent on AI-driven systems, trusting machines to acquire, process, and present critical information for decision-making.

If humans maintain independent inputs, meaningful oversight is plausible. However, if human cognition relies on the same inputs as the machine, genuine oversight might be illusory. The human becomes cognitively dependent, outsourcing the mental processes of acquiring, storing, manipulating, and retrieving information to machines.

This is likely not a binary choice between fully machine-driven reasoning and human cognition, but rather a spectrum. The question remains: how much cognitive autonomy should we retain, and in what contexts?

Sometimes a lot. Sometimes very little. This remains a normative judgment. Let’s strive to make it more precise, especially when integrating AI-driven reasoning in warfare.

Our ability to hold on to reasoning, the "why" behind decisions we ultimately delegate to machines, ensures alignment with the correct objective functions. Yet AI processes can become opaque, a "black box" even to their developers. Would we care to preserve cognition over such a process, or strictly preserve cognition over the end state?

Preserving cognition only over the end state might initially seem sufficient. Still, deviations in the AI process, resulting in unintended end states, highlight the importance of retaining the cognitive "why," which enables us to redirect machines back to our desired outcomes.

 

The views expressed in this article are those of the author and do not represent those of any previous or current employers, the editorial body of SIPR, the Freeman Spogli Institute, or Stanford University.

 

Stanford International Policy Review
