    Google Lifts a Ban on Using Its AI for Weapons and Surveillance

By News Room | February 5, 2025 | 3 Min Read

    Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue “technologies that cause or are likely to cause overall harm,” “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”

    The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. “We’ve made updates to our AI Principles. Visit AI.Google for the latest,” the note reads.

    In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the “backdrop” to why Google’s principles needed to be overhauled.

    Google first published the principles in 2018 as it moved to quell internal protests over the company’s decision to work on a US military drone program. In response, it declined to renew the government contract and also announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.

    But in an announcement on Tuesday, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google’s AI initiatives. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states Google will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Google also now says it will work to “mitigate unintended or harmful outcomes.”

    “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company’s esteemed AI research lab. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

    They added that Google will continue to focus on AI projects “that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.”

    Multiple Google employees expressed concern about the changes in conversations with WIRED. “It’s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company should not be in the business of war,” says Parul Koul, a Google software engineer and president of the Alphabet Union Workers-CWA.


    Got a Tip?

    Are you a current or former employee at Google? We’d like to hear from you. Using a nonwork phone or computer, contact Paresh Dave on Signal/WhatsApp/Telegram at +1-415-565-1302 or [email protected], or Caroline Haskins on Signal at +1 785-813-1084 or at [email protected].


    US President Donald Trump’s return to office last month has galvanized many companies to revise policies promoting equity and other liberal ideals. Google spokesperson Alex Krasov says the changes have been in the works much longer.

    Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as “be socially beneficial” and a commitment to maintain “scientific excellence.” Added is a mention of “respecting intellectual property rights.”
