Tech Workers Push Back as Pentagon Labels Anthropic ‘Supply Chain Risk’
Following the US attacks on Iran, the tech world has grown increasingly turbulent. Nearly 900 employees from Google and OpenAI have warned their companies not to allow AI to be used for military espionage or lethal weapons.
The growing use of artificial intelligence (AI) in weapons and warfare has become a cause for concern among tech workers worldwide. The US Department of Defense, commonly known as the Pentagon, has blacklisted the AI firm Anthropic, reportedly for the sole reason that the firm refused to allow its AI technology to be used for spying or warfare.
A major protest movement has now begun. Hundreds of Google and OpenAI workers have signed a joint letter arguing that the Pentagon is trying to force tech companies to accept its terms by intimidating them and pitting them against one another.
This situation is not new for Google. In 2018, Google's involvement in the Pentagon's drone-analysis program, "Project Maven," sparked massive employee protests, forcing the company to withdraw. Now, reports suggest that Google is in talks to integrate its most powerful AI, Gemini, into the military's intelligence systems.
Notably, last year Google quietly revised its AI principles, removing the explicit prohibition on weapons applications, which has further fueled employee anger. Google's Chief Scientist, Jeff Dean, has also acknowledged that mass surveillance using AI violates freedom of expression.
Beyond Google and OpenAI, hundreds of employees from Salesforce, Databricks, IBM, and Cursor have also demanded that the Department of Defense reverse its decision to designate Anthropic a "supply chain risk."
The letter also urges Congress to investigate whether such extraordinary powers against an American tech company are justified.
The activist group No Tech For Apartheid has also urged cloud companies like Google, Amazon, and Microsoft to reject Pentagon contracts that could lead to mass surveillance or the misuse of AI.
The group cited the potential deployment of the Gemini model in a classified environment, comparing it to the defense sector's access to xAI's Grok model.
While Anthropic and OpenAI have publicly clarified the limitations and safeguards of their agreements with the Pentagon, Google has yet to issue a concrete statement.
As AI technology grows more powerful, questions about how far, and for what purposes, it should be used are being raised with increasing urgency.
