Google Drops Its Pledge Not to Use AI for Weapons and Surveillance
Google has removed its commitment not to apply artificial intelligence to weapons and surveillance, marking a significant shift in its technology strategy.
Google Updates Its AI Principles
Google, the search engine behemoth, has made one of the most significant revisions to its AI principles since their initial release in 2018. The revised document removes previous commitments by the company, including its pledge not to “design or deploy” AI tools for use in weaponry or surveillance technologies.
Shift to “Responsible Development and Deployment”
The previous guidelines featured a section titled “applications we will not pursue,” which is absent from the current version. In its place is a new section called “responsible development and deployment.” Here, Google states it will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.”
A Broader Commitment
This is a much broader commitment than the specific pledges Google made as recently as last month, when the previous version of its AI principles was still live on its website. On weapons, for instance, the company had previously stated it would not design AI for use in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” On AI surveillance tools, it had committed not to develop technology that violates “internationally accepted norms.”
Google’s Response
When questioned about these changes, a Google spokesperson referred Engadget to a blog post published on Thursday. In it, DeepMind CEO Demis Hassabis and James Manyika, Google’s Senior Vice President of Research, Labs, Technology, and Society, state that the evolving nature of AI as a “general-purpose technology” necessitates a policy shift.
“We believe democracies should lead AI development, guided by core values such as freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations that share these values should collaborate to develop AI that protects people, fosters global growth, and supports national security,” the executives wrote.
A Look Back
Google first introduced its AI principles in 2018, following the controversy over Project Maven, a government contract that, had it been renewed, would have seen Google provide AI software to the Department of Defense for drone imagery analysis. The contract sparked significant internal uproar, with dozens of Google employees quitting and thousands more signing a petition in protest. By 2021, however, Google had resumed pursuing military contracts, reportedly with an “aggressive bid” for the Pentagon’s Joint Warfighting Cloud Capability contract.