- OpenAI has removed language from its policies specifically banning military applications of its AI systems.
- The change opens the door for potential defense partnerships to utilize OpenAI technology.
- Governments globally are ramping up AI spending for national security purposes.
- However, collaborating with defense groups raises ethical concerns, given the potential for AI to contribute to violence.
- It’s unclear if OpenAI will actually work with military/intelligence agencies, but the policy shift allows for it.
- The move sparks debate about AI’s appropriate role in global conflicts.
OpenAI Quietly Removes Ban on Military Applications in Updated Policies

OpenAI has quietly removed language banning “military and warfare” applications from its usage guidelines, opening the door for potential partnerships with defense agencies seeking to use the company’s powerful AI systems.

As first reported by The Intercept on January 12, OpenAI’s policy previously prohibited developing weapons or applications that risked physical harm. The updated guidelines, effective January 10, still ban developing weapons but no longer specifically call out military uses.
The policy change comes as governments around the world ramp up spending on AI technology for defense purposes. In November 2022, the U.S. Department of Defense outlined plans to promote the “responsible military use of artificial intelligence,” including decision-making systems and intelligence capabilities.
Israel’s military is already employing AI for targeting and analysis in its war in Gaza. Israel says the system, known as “The Gospel,” helps reduce civilian casualties. However, AI watchdogs have raised alarms about the technology’s potential to escalate violence, given possible biases in its underlying data.
In a statement, an OpenAI spokesperson framed the revised principles as an effort to make the guidelines more universal and easier to understand. The company now simply states that users should not “harm others” and cites weapons as an example.
However, the removal of the “military and warfare” language opens the possibility for OpenAI to sell its services to defense customers. The company’s large language models, such as GPT-3, have clear national security applications in analysis and logistics.
Lucrative partnerships could help fund OpenAI’s ongoing research, but they also raise ethical questions, given the lethal nature of defense work. It remains unclear whether the company will actually work with military or intelligence agencies.
For now, OpenAI states its tools should be used “safely and responsibly” while maximizing user control. But the policy shift allows wiggle room for military collaboration, sparking debate over AI’s role in global conflicts.