OpenAI’s announcement Monday that the US military will get access to ChatGPT came after months of deliberation over whether employees would accept the deployment, according to people briefed on the matter.

The chatbot will be offered through Genai.mil, a new program the Pentagon launched last month. The tricky part for OpenAI was that the Pentagon was asking to use its technology for “all lawful uses,” meaning the company couldn’t impose any restrictions on what it or its employees view as acceptable implementation, whether for moral or technical reasons.

The “all lawful uses” clause has become a sticking point in negotiations between the Pentagon and Anthropic, which wants more control over how its technology is used. Anthropic leaders are concerned that the military might use the models in situations where the technology is unreliable or could endanger lives.

The Pentagon rejected Anthropic’s requests for more control, according to people briefed on the matter, and the company’s Claude chatbot is still not available via Genai.mil. Earlier, Google and xAI agreed to the “all lawful uses” clause and even removed some model-level restrictions.

OpenAI agreed to the contract, but is offering the military the same ChatGPT that civilian users can access. That means the standard guardrails remain in place, and the model may by default refuse some prompts. ChatGPT, unlike Claude, is also not cleared for top-secret work, which could create a de facto barrier to many military applications.

OpenAI, Anthropic, Google, xAI, the Pentagon and the White House didn’t immediately respond to requests for comment.

In working with the US military, tech companies are forced into a delicate dance. Some employees fear that the AI models they build could be misused in combat, or used for purposes they morally oppose. Google’s decision to quickly agree to the military’s terms can now be used by competitors to recruit employees who may oppose military use.

But Anthropic’s moral stand, while popular with its employees, has drawn the ire of the Pentagon and the White House, Semafor has reported.

Still, some employees at OpenAI said they felt it was important that the company make its technology available to the military to avoid giving xAI’s Grok an advantage, according to one person familiar with the deliberations.

Some technologists fear that putting AI in charge of weapons is a stepping stone to some kind of existential event for humanity. Ironically, the best argument against that fear came from Nate Soares, coauthor of the AI doomer book If Anyone Builds It, Everyone Dies, who told me on a panel last year that a superintelligent AI wouldn’t need conventional weapons to take out humanity. And the use of AI on the battlefield could actually save lives by removing humans from the equation.