
OpenAI Quietly Removes Ban on Military Use from Usage Policy

Summary

OpenAI quietly updated its usage policies to remove a blanket prohibition on "military and warfare" applications of its technology, replacing it with more general language banning uses that cause harm. The change was discovered by journalists and sparked significant debate about the pressure commercialization places on AI safety commitments.

What Happened

On or around January 10, 2024, OpenAI updated its usage policies page. The previous version had explicitly listed "weapons development" and "military and warfare" among prohibited uses of the company's technology. The updated version replaced this with a broader prohibition on using OpenAI's services to "harm yourself or others" and to "develop or use weapons."

The removal of the "military and warfare" language was first reported by The Intercept on January 12, 2024. OpenAI spokesperson Niko Felix confirmed the change, describing the new policy as "clearer" and "more readable" while maintaining that OpenAI still prohibited uses that cause harm. Felix also noted that OpenAI was exploring cybersecurity collaborations with DARPA.

The timing was notable: the change came approximately two months after the board crisis that resulted in Sam Altman's reinstatement and the restructuring of OpenAI's governance, and amid growing US government interest in applying AI to national security and defense.

Why It Matters

The policy change was widely interpreted as a signal that OpenAI was opening the door to military and intelligence applications of its technology — a significant shift for a company that had explicitly prohibited such uses. While OpenAI maintained that the updated language was simply clearer, the removal of specific military prohibitions was substantive: "military and warfare" is an unambiguous category, while "harm yourself or others" requires interpretation.

This event became a key node in the AI military ethics thread. It demonstrated that acceptable-use policies, which AI companies frequently cite as safeguards, can be quietly modified without the kind of oversight or public process that regulatory requirements would demand. It also raised questions about whether the post-board-crisis OpenAI, with its restructured governance and increased commercial pressure, was deprioritizing the safety commitments that had ostensibly defined the organization.

The change preceded OpenAI's subsequent engagement with defense and intelligence agencies, making it — in retrospect — a pivotal moment in the normalization of military AI applications among frontier labs.

Tags

#military-ai #acceptable-use-policy #policy-change