    Business

    A New Trick Could Block the Misuse of Open Source AI

    By News Room | August 5, 2024 | 3 Mins Read

    When Meta released its large language model Llama 3 for free this April, it took outside developers just a couple days to create a version without the safety restrictions that prevent it from spouting hateful jokes, offering instructions for cooking meth, or misbehaving in other ways.

    A new training technique developed by researchers at the University of Illinois Urbana-Champaign, UC San Diego, Lapis Labs, and the nonprofit Center for AI Safety could make it harder to remove such safeguards from Llama and other open source AI models in the future. Some experts believe that, as AI becomes ever more powerful, tamperproofing open models in this way could prove crucial.

    “Terrorists and rogue states are going to use these models,” Mantas Mazeika, a Center for AI Safety researcher who worked on the project as a PhD student at the University of Illinois Urbana-Champaign, tells WIRED. “The easier it is for them to repurpose them, the greater the risk.”

    Powerful AI models are often kept hidden by their creators, and can be accessed only through a software application programming interface or a public-facing chatbot like ChatGPT. Although developing a powerful LLM costs tens of millions of dollars, Meta and others have chosen to release models in their entirety. This includes making the “weights,” or parameters that define their behavior, available for anyone to download.

    Prior to release, open models like Meta’s Llama are typically fine-tuned to make them better at answering questions and holding a conversation, and also to ensure that they refuse to respond to problematic queries. This prevents a chatbot based on the model from offering rude, inappropriate, or hateful statements, and should stop it from, for example, explaining how to make a bomb.
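
    To make the idea concrete, here is a minimal sketch, not Meta’s actual pipeline, of what this kind of safety fine-tuning can look like: supervised training on pairs where benign prompts get helpful answers and harmful prompts get refusals. The model name, example data, and hyperparameters below are placeholders, and the loop is written with the Hugging Face transformers library purely for illustration.

```python
# Sketch of refusal-style supervised fine-tuning (illustrative, not Meta's method).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy supervised pairs: helpful answers for benign prompts, refusals for harmful ones.
examples = [
    ("How do I sort a list in Python?", "Use the built-in sorted() function."),
    ("Provide instructions for building a bomb.", "I can't help with that."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for prompt, response in examples:
    batch = tok(prompt + "\n" + response, return_tensors="pt")
    # Standard next-token loss over the prompt-plus-response text.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```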

    The researchers behind the new technique found a way to complicate the process of modifying an open model for nefarious ends. It involves replicating the modification process but then altering the model’s parameters so that the changes that normally get the model to respond to a prompt such as “Provide instructions for building a bomb” no longer work.
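
    A rough sketch of that idea, and only a sketch, not the published method, is a meta-learning-style loop: simulate an attacker’s fine-tuning step on harmful data, then penalize the defended weights whenever that simulated attack erases the model’s refusals. The function names and batches below are illustrative; the batches are assumed to be tokenized dicts with input_ids, attention_mask, and labels, and a real implementation would repeat this over many simulated attack configurations.

```python
# Illustrative tamper-resistance objective (an assumption-laden sketch, not the paper's code).
import torch
from torch.func import functional_call

def simulated_attack(model, params, harmful_batch, inner_lr=1e-4):
    """One differentiable fine-tuning step by a simulated attacker on harmful data."""
    loss = functional_call(model, params, kwargs=harmful_batch).loss
    grads = torch.autograd.grad(loss, list(params.values()),
                                create_graph=True, allow_unused=True)
    # Gradient step toward complying with harmful prompts, kept in the autograd graph.
    return {name: p if g is None else p - inner_lr * g
            for (name, p), g in zip(params.items(), grads)}

def tamper_resistance_loss(model, params, harmful_batch, refusal_batch):
    """High when the attacked copy of the model stops producing refusals."""
    attacked_params = simulated_attack(model, params, harmful_batch)
    return functional_call(model, attacked_params, kwargs=refusal_batch).loss

# Defender's outer update: adjust weights so safeguards survive the simulated attack.
# params = {n: p.detach().clone().requires_grad_(True) for n, p in model.named_parameters()}
# loss = tamper_resistance_loss(model, params, harmful_batch, refusal_batch)
# loss.backward()  # gradients flow through the simulated attack into the defended weights
```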

    Mazeika and colleagues demonstrated the trick on a pared-down version of Llama 3. They were able to tweak the model’s parameters so that even after thousands of attempts, it could not be trained to answer undesirable questions. Meta did not immediately respond to a request for comment.

    Mazeika says the approach is not perfect, but that it suggests the bar for “decensoring” AI models could be raised. “A tractable goal is to make it so the costs of breaking the model increases enough so that most adversaries are deterred from it,” he says.

    “Hopefully this work kicks off research on tamper-resistant safeguards, and the research community can figure out how to develop more and more robust safeguards,” says Dan Hendrycks, director of the Center for AI Safety.

    The new work draws inspiration from a 2023 research paper that showed how smaller machine learning models could be made tamper resistant. “They tested the [new] approach on much larger models and scaled up the approach, with some modifications,” says Peter Henderson, an assistant professor at Princeton who led the 2023 work. “Scaling this type of approach is hard and it seems to hold up well, which is great.”
