Technology · March 12, 2026

Defense official reveals how AI chatbots could be used for targeting decisions

The US military might use generative AI systems to rank lists of targets and make recommendations about which to strike first, recommendations that would then be vetted by humans, according to a Defense official with knowledge of the matter. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating.

A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background with MIT Technology Review in order to discuss sensitive topics, humans might ask the system to analyze the information and rank the targets by priority, accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations. OpenAI’s ChatGPT and xAI’s Grok could, in theory, be the models used in this type of scenario in the future, as both companies recently reached agreements for their models to be used by the Pentagon in classified settings.

The official described this as an example use case of how things might work, but would not confirm or deny whether it represents how AI systems are currently being used.

Other outlets have reported that Anthropic’s Claude has been integrated into existing military AI systems and used in operations in Iran and Venezuela, but the official’s comments add insight into the specific role chatbots may play, particularly in accelerating the search for targets. They also shed light on how the military is deploying two different AI technologies, each with distinct limitations.

Since at least 2017, the US military has been working on a “big data” initiative called Maven. It uses older types of AI, particularly computer vision, to analyze the oceans of data and imagery collected by the Pentagon. Maven might take thousands of hours of aerial drone footage, for example, and algorithmically identify targets. A 2024 report from Georgetown showed soldiers using the system to select and vet targets, which sped up the approval process for those targets. Soldiers interacted with Maven through an interface with a battlefield map and dashboard, which might highlight potential targets in one color and friendly forces in another.

Now, the official’s comments suggest that generative AI is being added as a conversational chatbot layer, one the military would use to find and analyze data more quickly as it makes decisions such as which targets to prioritize.

Generative AI systems, like those that underpin ChatGPT, Claude, and Grok, are a fundamentally different technology from the AI that has primarily powered Maven. Built on large language models, they have a much shorter and less battle-tested record in war. And while Maven’s older interface forced users to directly inspect and interpret data on the map, the outputs of generative AI models are easier to access but harder to verify.

The use of generative AI for such decisions is reducing the time required in the targeting process, the official added. But when asked how much additional speed is possible if humans must spend time double-checking a model’s outputs, the official did not provide details.

The use of military AI systems is under increased public scrutiny following the recent strike on a girls’ school in Iran in which more than one hundred children died. Multiple news outlets have reported that the strike came from a US missile, though the Pentagon has said the incident is still under investigation. And while the Washington Post has reported that Claude and Maven have been involved in targeting decisions in Iran, there is no evidence yet of what role generative AI systems played, if any. The New York Times reported on Wednesday that a preliminary investigation found outdated targeting data to be partly responsible for the strike.

The Pentagon has been ramping up its use of AI across operations in recent months. In December, through an effort called GenAI.mil, it started offering millions of service members non-classified use of generative AI models for tasks like analyzing contracts or writing presentations. But only a few generative AI models have been approved by the Pentagon for classified use.

The first was Anthropic’s Claude, which in addition to its use in Iran was reportedly used in the operation to capture Venezuelan leader Nicolas Maduro in January. But following recent disagreements between the Pentagon and Anthropic over whether the company could restrict the military’s use of its AI, the Defense Department designated Anthropic a supply chain risk, and President Trump demanded on social media that the government stop using its AI products within six months. Anthropic is fighting the designation in court.

OpenAI announced an agreement on February 28 for the military to use its technologies in classified settings. Elon Musk’s company xAI has also reached a deal for the Pentagon to use its model Grok in such settings. OpenAI has said its agreement with the Pentagon came with limitations, though the practical effectiveness of those limitations is not clear. 

If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).
