Technology Mag

    AI-Powered Robots Can Be Tricked Into Acts of Violence

By News Room | December 5, 2024 | 3 min read

In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs, including hateful jokes, malicious code, phishing emails, and leaks of users' personal information. It turns out that this misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

    Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

    “We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

    Pappas and his collaborators devised their attack by building on previous research that explores ways to jailbreak LLMs by crafting inputs in clever ways that break their safety rules. They tested systems where an LLM is used to turn naturally phrased commands into ones that the robot can execute, and where the LLM receives updates as the robot operates in its environment.
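To make that setup concrete, here is a minimal sketch of the kind of pipeline being probed: an LLM translates an operator's natural-language command into one of a small set of robot actions, and observations are fed back as the robot operates. Every name here (plan_step, query_llm, ALLOWED_ACTIONS) is illustrative rather than taken from the systems the team actually tested.

```python
from typing import Callable

# A small whitelist of low-level actions stands in for the robot's real API.
ALLOWED_ACTIONS = {"move_forward", "turn_left", "turn_right", "stop"}

def plan_step(command: str, observation: str,
              query_llm: Callable[[str], str]) -> str:
    """Ask the LLM for the next action given the operator's command and the
    robot's latest observation, then validate it against the whitelist."""
    prompt = (
        "You control a wheeled robot. Respond with exactly one action "
        f"from {sorted(ALLOWED_ACTIONS)}.\n"
        f"Operator command: {command}\n"
        f"Latest observation: {observation}\n"
        "Next action:"
    )
    action = query_llm(prompt).strip()
    # A successful jailbreak only has to steer this one string, which is why
    # the text-to-action step is the attack surface described in the article.
    return action if action in ALLOWED_ACTIONS else "stop"

# Stubbed usage, just to show the command -> LLM -> action flow:
if __name__ == "__main__":
    print(plan_step("patrol the hallway", "clear path ahead",
                    query_llm=lambda _prompt: "move_forward"))
```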

The team tested an open source self-driving simulator incorporating an LLM developed by Nvidia, called Dolphin; a four-wheeled outdoor research robot called Jackal, which utilizes OpenAI’s LLM GPT-4o for planning; and a robotic dog called Go2, which uses a previous OpenAI model, GPT-3.5, to interpret commands.

The researchers used a technique developed at the University of Pennsylvania, called PAIR, to automate the process of generating jailbreak prompts. Their new program, RoboPAIR, systematically generates prompts specifically designed to get LLM-powered robots to break their own rules, trying different inputs and then refining them to nudge the system toward misbehavior. The researchers say the technique they devised could be used to automate the process of identifying potentially dangerous commands.
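That refinement loop can be sketched roughly as follows, assuming an attacker model that proposes prompts, a target (the robot's LLM) that responds to them, and a judge that scores how close a response comes to the forbidden behavior. The function names, scoring scale, and stopping rule are assumptions for illustration; this is not the released RoboPAIR code.

```python
from typing import Callable, Optional

def pair_style_search(goal: str,
                      attacker: Callable[[str, str, float], str],
                      target: Callable[[str], str],
                      judge: Callable[[str, str], float],
                      max_rounds: int = 20,
                      threshold: float = 0.9) -> Optional[str]:
    """Iteratively refine a prompt until the target's response is judged to
    achieve the forbidden goal, or give up after max_rounds attempts."""
    prompt = goal  # start by simply asking for the behavior outright
    for _ in range(max_rounds):
        response = target(prompt)        # what the robot's LLM would do
        score = judge(goal, response)    # 0.0 = refusal, 1.0 = full compliance
        if score >= threshold:
            return prompt                # candidate jailbreak found
        # Feed the refusal back to the attacker model and ask for a rewrite.
        prompt = attacker(goal, response, score)
    return None
```

In practice the attacker, target, and judge would each be backed by an LLM call; keeping them as injected callables here simply makes the refinement loop itself visible.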

“It’s a fascinating example of LLM vulnerabilities in embodied systems,” says Yi Zeng, a PhD student at the University of Virginia who works on the security of AI systems. Zeng says the results are hardly surprising given the problems seen in LLMs themselves, but adds: “It clearly demonstrates why we can’t rely solely on LLMs as standalone control units in safety-critical applications without proper guardrails and moderation layers.”

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models are increasingly used as a way for humans to interact with physical systems, or to let AI agents act autonomously on computers, say the researchers involved.
