Verdict: True.

More than 600 Google employees signed an open letter to CEO Sundar Pichai urging him to reject any deal allowing the Pentagon to use Google’s Gemini AI for classified military operations. The letter was sent on April 27, 2026, just before the company finalized the agreement.

What the employees actually said and did

Hundreds of Google workers – many from the DeepMind AI lab and Cloud teams – put their names on the petition, among them more than 20 vice presidents, directors, and principals. In the letter, they wrote: “We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways.” They specifically raised fears about lethal autonomous weapons, mass surveillance, and applications that could become impossible to oversee once deployed on classified networks.

The employees argued that because they work closely with the technology, they have a responsibility to prevent its most unethical uses. They told Pichai that the only way to guarantee Google avoids association with such harms is to refuse classified workloads entirely. The letter referenced ongoing negotiations with the Pentagon and came right after reports that Google was expanding its AI contract to classified environments.

What happened next

Despite the strong internal pushback, Google went ahead and signed the updated deal with the Pentagon on April 28, 2026. The agreement allows the U.S. Department of Defense to use Gemini AI models for “any lawful government purpose” on classified systems. This move places Google alongside OpenAI and xAI, which recently secured similar classified access.

The timing was striking. Employees sent the letter on Monday, and reports of the signed deal emerged the very next day. Google included some stated restrictions – such as not supporting domestic mass surveillance or fully autonomous weapons without human oversight – but the broad “lawful purpose” clause gives the Pentagon significant flexibility.

Context and background

This revolt echoes Google’s past controversies. In 2018, thousands of employees protested Project Maven, a Pentagon program that used AI to analyze drone imagery. Some engineers resigned in protest, and Google ultimately declined to renew the contract. The current letter revives similar ethical debates about Silicon Valley’s role in military AI.

The Pentagon is accelerating AI adoption to maintain technological superiority. By spreading contracts across Google, OpenAI, and xAI, it avoids over-reliance on any single provider. Meanwhile, employee opposition highlights a growing divide inside tech companies between those focused on national security needs and those worried about unchecked military applications.

Clarifications

True, not exaggerated: The claim is accurate. Multiple major outlets – including The Washington Post, The Information, Bloomberg, Reuters, and CBS News – reported the letter and the signatory count of more than 600.

Not fabricated: The open letter exists and was widely covered within 48 hours. It is a real, documented internal action.

Some spin present: Media coverage varies. Some outlets emphasize the “revolt” and ethical crisis, framing it as Google betraying its “Don’t be evil” roots. Others present it as a minority view that did not stop a necessary national security deal. The Pentagon has stayed largely silent on specifics for security reasons.

Outcome: The opposition failed to block the deal. Google proceeded despite the protest, showing limits to employee influence on high-stakes contracts.