The Pentagon says AI is speeding up its ‘kill chain’


Leading AI developers, such as OpenAI and Anthropic, are threading the needle on selling software to the United States military: make the Pentagon more efficient without letting their AI kill people.

Today, their tools aren’t used as weapons, but AI gives the Department of Defense a “significant advantage” in identifying, tracking and assessing threats, the Pentagon’s chief digital and artificial intelligence officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

“Obviously, we’re increasing the ways we can accelerate the execution of the kill chain so our commanders can respond at the right time and protect our forces,” Plumb said.

The “kill chain” refers to the military’s process of identifying, tracking and eliminating threats, including a complex system of sensors, platforms and weapons. Generative AI has proven useful during the planning and strategizing phases of the kill chain, according to Plumb.

The relationship between the Pentagon and AI developers is relatively new. OpenAI, Anthropic and Meta rolled back their usage policies in 2024 to allow US intelligence and defense agencies to use their AI systems. However, they still do not allow their AI to harm humans.

“We’ve been really clear about what we will and won’t use their technologies for,” Plumb said when asked how the Pentagon works with AI model providers.

Nonetheless, this has triggered a round of speed dating for AI companies and defense contractors.

Meta partnered with Lockheed Martin and Booz Allen, among others, to bring its Llama AI models to defense agencies in November. That same month, Anthropic partnered with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.

As generative artificial intelligence proves its usefulness in the Pentagon, it could prompt Silicon Valley to loosen its policies on the use of artificial intelligence and allow more military applications.

“Playing through different scenarios is something where generative AI can be helpful,” Plumb said. “It allows you to utilize the full range of tools that our commanders have at their disposal, but also think creatively about different response options and potential trade-offs in an environment where there is a potential threat or set of threats that needs to be processed.”

It is not clear whose technology the Pentagon is using for this work; using generative AI in the kill chain (even at an early planning stage) appears to violate the usage policies of several leading model developers. Anthropic’s policy, for example, prohibits the use of its models to produce or modify “systems designed to cause harm to or loss of human life.”

In response to our questions, Anthropic pointed TechCrunch to a recent interview its CEO, Dario Amodei, gave to the Financial Times, in which he defended his company’s military work:

The view that we should never use AI in defense and intelligence settings makes no sense to me. The view that we should go rogue and use it to make whatever we want — including doomsday weapons — is apparently just as crazy. We try to find a middle ground, to do things responsibly.

OpenAI, Meta and Cohere did not respond to TechCrunch’s request for comment.

Life and death, and AI weapons

In recent months, a debate has erupted in defense tech circles over whether AI weapons should really be allowed to make life-and-death decisions. Some claim that the US military already has such weapons.

Anduril CEO Palmer Luckey recently noted in a post on X that the US military has a long history of purchasing and using autonomous weapons systems, such as the CIWS turret.

“The DoD has been purchasing and using autonomous weapons systems for decades. Their use (and export!) is well known, strictly defined and expressly regulated by rules that are not at all voluntary,” said Luckey.

But when TechCrunch asked if the Pentagon was buying and operating fully autonomous weapons—ones without humans in the loop—Plumb rejected the idea in principle.

“No, that’s the short answer,” Plumb said. “In terms of reliability and ethics, we will always have people involved in the decision to use force, and that includes our weapons systems.”

The word “autonomy” is somewhat ambiguous, and it has fueled debates throughout the tech industry about when automated systems—such as AI coding agents, self-driving cars, or self-firing weapons—become truly autonomous.

Plumb said the idea of automated systems independently making life-and-death decisions is “too binary” and the reality is less “science fiction.” Instead, she suggested that the Pentagon’s use of AI systems is really a collaboration between humans and machines, where senior leaders make active decisions throughout the process.

“People tend to imagine that there are robots somewhere, and then the gonculator [a fictional autonomous machine] spits out a sheet of paper, and humans just check a box,” Plumb said. “That’s not how human-machine teaming works, and it’s not an effective way to use these types of AI systems.”

AI safety in the Pentagon

Military partnerships haven’t always gone down well with Silicon Valley employees. Last year, dozens of Amazon and Google employees were fired and arrested after protesting their companies’ military contracts with Israel, cloud deals that fall under the code name “Project Nimbus.”

By comparison, the response from the AI community has been rather muted. Some AI researchers, such as Evan Hubinger of Anthropic, say the use of AI in the military is inevitable, and that working directly with the military to make sure they get it right is key.

“If you take the catastrophic risks of AI seriously, the US government is an extremely important actor to work with, and trying to just block the US government from using AI is not a sustainable strategy,” Hubinger said in a November post on the LessWrong online forum. “It’s not enough to just focus on catastrophic risks; you also have to prevent any way the government could misuse your models.”


