Close Menu
Technology Mag

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot
    You can’t trust your eyes to tell you what’s real anymore, says the head of Instagram

    You can’t trust your eyes to tell you what’s real anymore, says the head of Instagram

    December 31, 2025
    The Dreame X40 Ultra robovac is about 0 off, nearly matching its best price

    The Dreame X40 Ultra robovac is about $700 off, nearly matching its best price

    December 31, 2025
    Leaked video shows the Galaxy S26 Ultra’s new camera island

    Leaked video shows the Galaxy S26 Ultra’s new camera island

    December 31, 2025
    Facebook X (Twitter) Instagram
    Subscribe
    Technology Mag
    Facebook X (Twitter) Instagram YouTube
    • Home
    • News
    • Business
    • Games
    • Gear
    • Reviews
    • Science
    • Security
    • Trending
    • Press Release
    Technology Mag
    Home » Here Come the AI Worms
    Business

    Here Come the AI Worms

    News RoomBy News RoomMarch 1, 20243 Mins Read
    Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email
    Here Come the AI Worms

    As generative AI systems like OpenAI’s ChatGPT and Google’s Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

    Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

    Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process.

    The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text. While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

    Most generative AI systems work by being fed prompts—text instructions that tell the tools to answer a question or create an image. However, these prompts can also be weaponized against the system. Jailbreaks can make a system disregard its safety rules and spew out toxic or hateful content, while prompt injection attacks can give a chatbot secret instructions. For example, an attacker may hide text on a webpage telling an LLM to act as a scammer and ask for your bank details.

    To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say.

    To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

    Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
    Previous ArticleGoogle is blocking RCS on rooted Android devices
    Next Article The Apple Car crash

    Related Posts

    What Happens When Your Coworkers Are AI Agents

    What Happens When Your Coworkers Are AI Agents

    December 9, 2025
    San Francisco Mayor Daniel Lurie: ‘We Are a City on the Rise’

    San Francisco Mayor Daniel Lurie: ‘We Are a City on the Rise’

    December 9, 2025
    An AI Dark Horse Is Rewriting the Rules of Game Design

    An AI Dark Horse Is Rewriting the Rules of Game Design

    December 9, 2025
    Watch the Highlights From WIRED’s Big Interview Event Right Here

    Watch the Highlights From WIRED’s Big Interview Event Right Here

    December 9, 2025
    Amazon Has New Frontier AI Models—and a Way for Customers to Build Their Own

    Amazon Has New Frontier AI Models—and a Way for Customers to Build Their Own

    December 4, 2025
    AWS CEO Matt Garman Wants to Reassert Amazon’s Cloud Dominance in the AI Era

    AWS CEO Matt Garman Wants to Reassert Amazon’s Cloud Dominance in the AI Era

    December 4, 2025
    Our Picks
    The Dreame X40 Ultra robovac is about 0 off, nearly matching its best price

    The Dreame X40 Ultra robovac is about $700 off, nearly matching its best price

    December 31, 2025
    Leaked video shows the Galaxy S26 Ultra’s new camera island

    Leaked video shows the Galaxy S26 Ultra’s new camera island

    December 31, 2025
    Net neutrality was back, until it wasn’t

    Net neutrality was back, until it wasn’t

    December 31, 2025
    Two cybersecurity employees plead guilty to carrying out ransomware attacks

    Two cybersecurity employees plead guilty to carrying out ransomware attacks

    December 30, 2025
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss
    The Biden administration’s Cyber Trust Mark is a likely casualty of Trump’s FCC News

    The Biden administration’s Cyber Trust Mark is a likely casualty of Trump’s FCC

    By News RoomDecember 30, 2025

    The US Cyber Trust Mark Program, an Energy Star–style certification for smart home security, could…

    This smart garden turned my black thumb green

    This smart garden turned my black thumb green

    December 30, 2025
    GameSir put a tiny force feedback steering wheel on its new Swift Drive controller

    GameSir put a tiny force feedback steering wheel on its new Swift Drive controller

    December 30, 2025
    Anker’s portable backup battery is an even better investment now it’s nearly half off

    Anker’s portable backup battery is an even better investment now it’s nearly half off

    December 30, 2025
    Facebook X (Twitter) Instagram Pinterest
    • Privacy Policy
    • Terms of use
    • Advertise
    • Contact
    © 2025 Technology Mag. All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.