Pentagon Chooses OpenAI Over Anthropic for AI Models in Classified Network


Synopsis

The U.S. Department of Defense opts for OpenAI's AI models for its classified network, parting ways with Anthropic amid safety and military use disagreements. CEO Sam Altman highlights the commitment to AI safety and human responsibility.

Key Takeaways

Pentagon chooses OpenAI's models for classified network.
OpenAI emphasizes AI safety and ethical use.
Disagreements with Anthropic highlight ongoing debates on AI ethics.
Technical safeguards will be implemented for safe model deployment.
Agreement reflects shared principles with the Department of Defense.

New Delhi, Feb 28 (NationPress) The U.S. Department of Defense has decided to deploy OpenAI's advanced artificial intelligence models within its classified network, distancing itself from Anthropic amid ongoing disputes over AI safety and military applications, OpenAI CEO Sam Altman confirmed on Saturday.

Altman confirmed the agreement with the Pentagon and said the company intends to proceed with the deployment.

In a post on X, he remarked that OpenAI’s conversations with the Department of Defense exhibited a "strong respect for safety" and a mutual aspiration for the best possible outcomes.

Referring to the department as the "Department of War" (DoW), Altman reiterated that OpenAI is dedicated to serving humanity, while recognizing the complexities and inherent dangers of the world.

"Tonight, we reached an agreement with the Department of War to implement our models in their classified network. Throughout our discussions, the DoW demonstrated a profound respect for safety and a willingness to collaborate for optimal results," Altman expressed.

He highlighted that OpenAI prioritizes AI safety and the equitable distribution of benefits, asserting that two fundamental safety principles include a prohibition on domestic mass surveillance and ensuring human accountability in the use of force, including in autonomous weapon systems.

"The core of our mission centers on AI safety and broad benefit distribution. Key safety principles include bans on domestic mass surveillance and human accountability for the use of force, especially regarding autonomous weapon systems," he stated.

Altman assured that these principles remain intact within the agreement with the Pentagon. He indicated that the Department of Defense aligns with these principles, as reflected in its laws and policies, and that they have been incorporated into the final agreement.

"The DoW endorses these principles, which are reflected in their legal and policy framework, and we have integrated them into our agreement," Altman noted.

"We are committed to serving all of humanity to the best of our abilities. The world presents complexities, messiness, and at times, danger," he added.

As part of the arrangement, OpenAI will implement technical safeguards to ensure that its models operate as intended.

The company will also deploy field engineers to support these models and ensure their safe application. Altman emphasized that the models will only be activated on secure cloud networks.

This decision by the Pentagon arises amidst a public disagreement with Anthropic, the creator of the Claude AI model.

Reports suggest that the Defense Department advocated for the full military application of AI tools for lawful purposes, including critical areas like weapons development, intelligence collection, and battlefield operations.

Anthropic, on the other hand, reportedly sought to impose restrictions, particularly concerning fully autonomous weapons and the mass surveillance of American citizens.

Point of View

The Pentagon's decision to partner with OpenAI highlights a pivotal moment in military technology and AI ethics. OpenAI's commitment to safety and responsible AI use aligns with national security interests, while the rift with Anthropic underscores ongoing debates around the ethical implications of AI in the military. This agreement could set a precedent for future AI collaborations in defense.
NationPress
1 May 2026

Frequently Asked Questions

Why did the Pentagon choose OpenAI over Anthropic?
The Pentagon opted for OpenAI due to disagreements with Anthropic regarding AI safety protocols and military applications, emphasizing a commitment to responsible AI use.
What are the key safety principles highlighted by OpenAI?
OpenAI's key safety principles include a ban on domestic mass surveillance and ensuring human accountability in the use of force, particularly regarding autonomous weapons.
How will OpenAI ensure the safe use of its models?
OpenAI will implement technical safeguards and deploy field engineers to support the models, ensuring they operate safely on secure cloud networks.
What implications does this agreement have for AI in defense?
This agreement may set a precedent for future military applications of AI, emphasizing the importance of safety and ethical considerations in defense technologies.
What was Anthropic's position on military AI use?
Anthropic advocated for limits on military AI use, particularly concerning fully autonomous weapons and the surveillance of American citizens.