Google Removes Pledge to Avoid AI Use in Weapon Development, Sparking Ethics Debate
Google's recent decision to remove, from its AI ethics guidelines page, its pledge not to use artificial intelligence (AI) for weapons development has sparked intense debate about the application of AI to lethal weaponry. The company had originally promised not to use its AI for harmful purposes, including an explicit clause stating it would not apply its AI to "weapons or other technologies whose purpose is to cause or directly facilitate harm to people." The removal has prompted widespread discussion, particularly about the potential consequences of developing AI for military purposes.
6 February 2025
The deleted clause was introduced in 2018, after employees revealed Google's involvement in "Project Maven," a US military project to develop AI tools for battlefield drone systems. Backlash from employees and the public led Google to announce that it would not renew the Project Maven contract and to introduce its AI Principles as a framework for guiding the company's future development and use of AI. The recent update to those guidelines, however, removes the specific language committing the company not to use its AI for weapons or technologies that harm people.
Senior Google executives James Manyika and Demis Hassabis have defended the decision, arguing that companies should work with governments to create AI that "protects people, promotes global growth, and supports national security." This rationale has been met with skepticism, as it appears to contradict Google's earlier commitment to avoid developing AI for harmful purposes. Critics argue that the decision undermines the principles of responsible AI development and could have serious implications for global security and human rights.
[Weibo post from 环球时报 (Global Times)]
The controversy surrounding Google's revised AI ethics guidelines highlights the need for ongoing discussion and regulation of AI development, particularly for military applications. Reports that Project Maven has continued to operate despite Google's public withdrawal, and is now being tested on the battlefield in Ukraine, raise questions about the company's transparency and accountability in its AI work. Recent contracts between Google and the US and Israeli militaries have further fueled protests and criticism from within the company, underscoring the need for a more transparent approach to AI development and its potential applications.
[Weibo post from 凤凰网科技 (Phoenix Tech)]
As AI technology becomes increasingly prevalent across fields, ensuring its safe and responsible use has become a pressing concern. The change to Google's AI principles has raised concerns among employees and the public about the potential misuse of AI in military and surveillance applications. The updated principles, while emphasizing the reduction of harmful outcomes and the avoidance of unfair bias, have not allayed concerns about the risks of such development. Ultimately, the removal of Google's pledge is a reminder of the need for a broader discussion of AI ethics and the importance of prioritizing human safety and well-being in the pursuit of technological advancement.
[Weibo post from 香港商報 (Hong Kong Commercial Daily)]