Hundreds of Google employees have signed a petition urging CEO Sundar Pichai to reject any deal that would allow the company's artificial intelligence systems to be used in classified Pentagon operations, warning of potential 'unmonitored harm' if safeguards are not guaranteed.

The letter, dated April 2026 and signed by more than 600 staff across Google DeepMind and Google Cloud, directly challenges reported negotiations between Google and the US Department of Defense over expanded military use of its Gemini AI models.

The petition comes at a time when major tech firms are increasingly being drawn into defence contracts involving artificial intelligence, raising internal and external concerns about oversight, accountability, and potential misuse.

It follows earlier tensions between AI developers and the US government over how far military agencies should be allowed to deploy advanced systems in sensitive or high-risk environments.

In the letter sent to Sundar Pichai, employees argue that Google currently cannot guarantee its AI tools will not be used in ways that could cause harm without proper monitoring or control.

'As people working on AI, we know that these systems can centralize power and that they do make mistakes,' the employees wrote, according to a copy of the letter shared with The Hill. 'We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.'

The signatories pointed to reported discussions between Google and the Pentagon about deploying Gemini AI models in classified settings. According to reporting cited in the letter, such an agreement could allow the US military to use Google's AI systems for 'all lawful purposes,' though additional safeguards were reportedly discussed to prevent use in mass surveillance or autonomous weapons without human oversight.

However, the employees argue that those safeguards would be difficult to enforce in practice. The letter states that 'the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads.'

Google already carries out limited work with the Pentagon, but only on non-classified projects. Employees warn that moving into secret or classified military work would significantly increase the risks involved.

They say such a move could seriously damage Google's reputation and public standing, because it would mean the company's AI is being used in sensitive defence operations. Some workers also argue that, whatever safety rules are added, those measures may not be enough to prevent misuse in practice.

Source: International Business Times UK