OpenAI Eliminates Ban on Use for Warfare and Military Purposes

One policy analyst warned it’s a notable decision, “given the use of AI systems in the targeting of civilians in Gaza.”

In this photo illustration the OpenAI logo is displayed on a computer screen in Ankara, Turkiye, on January 11, 2024.


ChatGPT maker OpenAI this week quietly removed language from its usage policy that prohibited military use of its technology, a move with serious implications given the increasing use of artificial intelligence on battlefields including Gaza.

ChatGPT is a free tool that lets users enter prompts to receive text or images generated by AI. The Intercept’s Sam Biddle reported Friday that prior to Wednesday, OpenAI’s permissible uses page banned “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.”

Although the company’s new policy stipulates that users should not harm human beings or “develop or use weapons,” experts said the removal of the “military and warfare” language leaves the door open for lucrative contracts with the U.S. and other militaries.

“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, told The Intercept.

“The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement,” she added.

OpenAI spokesperson Niko Felix told The Intercept that the company “aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs.”

“A principle like ‘don’t harm others’ is broad yet easily grasped and relevant in numerous contexts,” Felix added. “Additionally, we specifically cited weapons and injury to others as clear examples.”

As AI advances, so does its weaponization. Experts warn that AI applications including lethal autonomous weapons systems, commonly called “killer robots,” could pose a potentially existential threat to humanity that underscores the imperative of arms control measures to slow the pace of weaponization.

That’s the goal of nuclear weapons legislation introduced last year in the U.S. Congress. The bipartisan Block Nuclear Launch by Autonomous Artificial Intelligence Act — introduced by Sen. Ed Markey (D-Mass.) and Reps. Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.) — asserts that “any decision to launch a nuclear weapon should not be made” by AI.
