Leading AI developers like OpenAI and Anthropic are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient without letting their AI kill people.
Today, their tools aren’t being used as weapons, but AI is giving the Defense Department a “significant advantage” in identifying, tracking and assessing threats, Pentagon Chief Digital and AI Officer Dr. Radha Plumb told TechCrunch in a telephone interview.
“We are obviously increasing the ways in which we can accelerate the execution of the kill chain so that our commanders can respond in a timely manner to protect our forces,” Plumb said.
The “kill chain” refers to the military’s process of identifying, tracking and eliminating threats, involving a complex system of sensors, platforms and weapons. Generative AI is proving useful during the planning and strategy stages of the kill chain, according to Plumb.
The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic and Meta walked back their usage policies in 2024 to allow US intelligence and defense agencies to use their AI systems. Even so, they still don’t allow their AI to harm humans.
“We’ve been really clear about what we will and won’t use their technologies for,” Plumb said, when asked how the Pentagon works with AI model providers.
Nonetheless, this kicked off a round of speed dating between AI companies and defense contractors.
Meta partnered with Lockheed Martin and Booz Allen, among others, to bring its Llama AI models to defense agencies in November. That same month, Anthropic teamed up with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.
As generative AI proves its usefulness in the Pentagon, it could push Silicon Valley to loosen its AI use policies and allow more military applications.
“Playing through different scenarios is something that generative AI can be useful with,” Plumb said. “This allows you to take advantage of the full range of tools that our commanders have available, but also to think creatively about different response options and possible trade-offs in an environment where there is a potential threat, or a series of threats, that should be prosecuted.”
It’s unclear whose technology the Pentagon is using for the job; the use of generative AI in the kill chain (even in the early planning stage) appears to violate the usage policies of some major model developers. Anthropic’s policy, for example, prohibits using its models to produce or modify “systems designed to cause harm or loss of human life.”
In response to our questions, Anthropic directed TechCrunch to CEO Dario Amodei’s recent interview with the Financial Times, where he defended the company’s military work:
The position that we should never use AI in defense and intelligence environments makes no sense to me. The position that we should go gangbusters and use it to do whatever we want – up to and including doomsday weapons – is obviously just as crazy. We are trying to find the middle ground, to do things responsibly.
OpenAI, Meta and Cohere did not respond to TechCrunch’s request for comment.
Life and death, and AI weapons
In recent months, a defense technology debate has erupted over whether AI weapons should really be allowed to make life-and-death decisions. Some argue that the US military already has weapons that do.
Anduril CEO Palmer Luckey recently noted on X that the US military has a long history of acquiring and using autonomous weapon systems like a CIWS turret.
“The DoD has been purchasing and using autonomous weapons systems for decades now. Their use (and export!) is well-understood, tightly defined and explicitly regulated by rules that are by no means voluntary,” Luckey said.
But when TechCrunch asked if the Pentagon buys and uses weapons that are fully autonomous — ones without humans in the loop — Plumb dismissed the idea on principle.
“No, that’s the short answer,” Plumb said. “As a matter of both reliability and ethics, we will always have people involved in the decision to use force, and that includes our weapons systems.”
The word “autonomy” is somewhat vague and has sparked debate throughout the tech industry about when automated systems — such as AI coding agents, self-driving cars or self-firing guns — become truly autonomous.
Plumb said the idea that automated systems are independently making life-and-death decisions was “too binary,” and the reality was less “science fiction.” Rather, she suggested that the Pentagon’s use of AI systems is really a collaboration between humans and machines, with senior leaders making active decisions throughout the process.
“People tend to think of it like there are robots somewhere, and then the gonculator (a fictional autonomous machine) spits out a sheet of paper and people just check a box,” Plumb said. “That’s not how human-machine integration works, and that’s not an effective way to use these types of AI systems.”
AI safety at the Pentagon
Military partnerships haven’t always gone down well with Silicon Valley workers. Last year, dozens of Amazon and Google employees were fired and arrested after protesting their companies’ military contracts with Israel, cloud deals codenamed Project Nimbus.
Comparatively, there has been a fairly muted response from the AI community. Some AI researchers, like Anthropic’s Evan Hubinger, say the use of AI in the military is inevitable, and it’s critical to work directly with the military to make sure they get it right.
“If you take the catastrophic risks from AI seriously, the US government is an extremely important actor to engage with, and trying to block the US government from using AI is not a viable strategy,” Hubinger said in a November post on the online forum LessWrong. “It’s not enough to just focus on catastrophic risks; you also have to prevent any way the government could misuse your models.”