OpenAI's announcement Monday that the US military will get access to ChatGPT came after months of deliberation over whether employees would accept the deployment, according to people briefed on the matter.
The chatbot will be offered through Genai.mil, a new program the Pentagon launched last month. The tricky part for OpenAI was that the Pentagon was asking to use its technology for "all lawful uses," meaning the company couldn't impose restrictions on what it or its employees view as acceptable use, whether for moral or technical reasons.
The "all lawful uses" clause has become a sticking point in negotiations between the Pentagon and Anthropic, which wants more control over how its technology is used. Anthropic leaders are concerned that the military might use the models in situations where the technology is unreliable or could endanger lives.
The Pentagon rejected Anthropic's requests for more control, according to people briefed on the matter, and the company's Claude chatbot is still not available via Genai.mil. Earlier, Google and xAI agreed to the "all lawful uses" clause and even removed some model-level restrictions.
OpenAI agreed to the contract, but is offering the same ChatGPT that non-military users can access. That means the model's standard guardrails remain in place, and it may refuse some prohibited prompts by default. ChatGPT, unlike Claude, is not cleared for top secret use cases, which could create a de facto barrier to many military use cases.
OpenAI, Anthropic, Google, xAI, the Pentagon and the White House didn't immediately respond to requests for comment.
In working with the US military, tech companies are forced into a delicate dance. Some employees fear that the AI models they build could be misused in combat, or used for purposes they morally oppose. Google's decision to quickly agree to the military's terms can now be used by competitors to recruit employees who may oppose military use.
But Anthropic's moral stand, while popular with its employees, has drawn the ire of the Pentagon and the White House, Semafor has reported.
Still, some employees at OpenAI said they felt it was important that the company make its technology available to the military to avoid giving xAI's Grok an advantage, according to one person familiar with the deliberations.
Some technologists fear that putting AI in charge of weapons is a stepping stone to some kind of existential event for humanity. Ironically, the best argument against that fear came from Nate Soares, coauthor of the AI doomer book If Anyone Builds It, Everyone Dies, who told me on a panel last year that a superintelligent AI wouldn't need conventional weapons to take out humanity. And the use of AI on the battlefield could actually save lives by removing humans from the equation.