Google, once known for its motto “Don’t be evil,” seems to be shifting its stance. On Tuesday, the tech giant announced major updates to its artificial intelligence (AI) policy, replacing guidelines that had shaped its approach since 2018.
Previously, Google’s Responsible AI Principles stated that the company would not develop AI for weaponry or projects primarily focused on surveillance. It had also pledged to avoid designing or deploying AI that could cause widespread harm or violate widely accepted international laws and human rights standards. However, these firm boundaries have now been removed.
Under Google’s revised AI Principles, the company merely states that its AI products will “align with” human rights, without clarifying how. This shift away from explicitly prohibiting harmful uses of AI raises serious concerns. AI is a rapidly evolving and complex technology, and some applications are simply too risky to pursue.
Google’s decision to abandon these self-imposed restrictions highlights why voluntary guidelines alone are insufficient. Without enforceable regulations, corporations can easily reverse course on ethical commitments. International human rights law does apply to AI, but absent effective oversight, translating those principles into actionable safeguards remains a challenge.
While it’s unclear how strictly Google adhered to its previous AI commitments, those principles at least gave employees a basis to challenge questionable AI projects. Now the company has moved from rejecting AI-driven weaponry outright to actively supporting national security initiatives, a significant reversal.
As AI becomes increasingly integrated into military operations, its reliance on incomplete data and flawed algorithms raises the risk of unintended harm, particularly to civilians. Moreover, these technologies make it more difficult to hold decision-makers accountable for life-or-death consequences on the battlefield.
Google executives have spoken of a “global competition” for AI dominance and invoked values like freedom, equality, and human rights. Yet their actions suggest that ethical concerns are being deprioritized in favor of strategic and competitive interests. This approach risks setting off a dangerous race to the bottom.
In line with the United Nations Guiding Principles on Business and Human Rights, all companies have a responsibility to ensure their technologies respect fundamental human rights. When it comes to military applications of AI, the stakes could not be higher.

Ayush Kumar Jaiswal is a writer and contributor for MakingIndiaAIFirst.com, a platform dedicated to covering the latest developments, trends, and innovations in artificial intelligence (AI) with a specific focus on India’s role in the global AI landscape. His work primarily revolves around delivering insightful and up-to-date news, analysis, and commentary on AI advancements, policies, and their implications for India’s technological future.
As a tech enthusiast and AI advocate, Ayush is passionate about exploring how AI can transform industries, governance, and everyday life. His writing aims to bridge the gap between complex AI concepts and a broader audience, making AI accessible and understandable to readers from diverse backgrounds.
Through his contributions to MakingIndiaAIFirst.com, Ayush strives to highlight India’s progress in AI research, startups, and policy frameworks, positioning the country as a leader in the global AI race. His work reflects a commitment to fostering awareness and dialogue around AI’s potential to drive economic growth, innovation, and societal impact in India.
For more of his work and insights on AI, visit MakingIndiaAIFirst.com.