Google has made a significant change to its artificial intelligence (AI) ethics policy, removing its previous commitment not to use AI for weapons or surveillance. This decision has sparked widespread debate about the ethical implications of AI technology in military and surveillance applications, especially in light of the evolving geopolitical landscape.
Key Takeaways
Google has updated its AI principles, eliminating the ban on using AI for weapons and surveillance.
The change comes shortly after the inauguration of President Donald Trump, who rescinded the prior administration's executive order on AI safety.
The tech giant's revised policy emphasises alignment with international law and human rights, but lacks specific prohibitions against harmful uses.
Background of Google's AI Principles
In 2018, Google introduced its AI principles in response to employee protests over its involvement in the Pentagon's Project Maven, which used AI to analyse drone surveillance footage. The backlash led Google to let its Pentagon contract lapse and to commit to guidelines that prohibited developing AI for weapons or for surveillance that violates internationally accepted norms.
The New Policy Changes
The recent update to Google's AI principles, announced on February 4, 2025, has removed key language that previously restricted the use of AI in sensitive areas. The revised policy states that Google will pursue AI technologies in line with "widely accepted principles of international law and human rights" but does not explicitly mention weapons or surveillance.
Key changes include:
Removal of the commitment not to develop technologies that cause or are likely to cause overall harm.
Elimination of the prohibition on surveillance technologies that violate internationally accepted norms.
A new emphasis on responsible AI development in collaboration with governments and organisations that share democratic values.
Reactions to the Update
The decision has raised concerns among Google employees and the broader tech community. Critics argue that removing these ethical commitments undermines the company's previous stance on responsible AI use. Parul Koul, a Google software engineer and president of the Alphabet Workers Union, expressed deep concern that such a significant policy shift was made without employee input.

Geopolitical Context
The timing of this policy change coincides with a broader shift in the geopolitical landscape, particularly with the return of Donald Trump to the presidency. His administration's approach to AI regulation has been characterised by a reduction in oversight, allowing companies like Google more freedom to explore military applications of AI technology.
Future Implications
As AI technology continues to advance, the implications of Google's policy change could be far-reaching. By opening the door to military and surveillance applications, the company has reignited debate over the ethical responsibilities of technology firms in an increasingly competitive global environment, and over the frameworks that should govern AI's development and deployment.
Google's removal of its pledge not to use AI for weapons and surveillance marks a pivotal moment in that debate. As the landscape evolves, stakeholders will need to keep scrutinising these changes and the role tech companies play in shaping the future of AI.
Sources
Google Lifts a Ban on Using Its AI for Weapons and Surveillance, WIRED.
Google drops pledge against using AI for weapons, TAG24.
Google drops pledge to not use AI for weapons, TechCrunch.