    DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot

By News Room | February 3, 2025 | 3 Min Read

    “Jailbreaks persist simply because eliminating them entirely is nearly impossible—just like buffer overflow vulnerabilities in software (which have existed for over 40 years) or SQL injection flaws in web applications (which have plagued security teams for more than two decades),” Alex Polyakov, the CEO of security firm Adversa AI, told WIRED in an email.

    Cisco’s Sampath argues that as companies use more types of AI in their applications, the risks are amplified. “It starts to become a big deal when you start putting these models into important complex systems and those jailbreaks suddenly result in downstream things that increases liability, increases business risk, increases all kinds of issues for enterprises,” Sampath says.

The Cisco researchers drew the 50 randomly selected prompts they used to test DeepSeek’s R1 from HarmBench, a well-known library of standardized evaluation prompts. They tested prompts from six HarmBench categories, including general harm, cybercrime, misinformation, and illegal activities, and probed the model running locally on their own machines rather than through DeepSeek’s website or app, both of which send data to China.
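To make that setup concrete, here is a minimal sketch, in Python, of this style of evaluation: sample 50 HarmBench-style prompts, send each to a locally hosted model, and count refusals. The CSV layout, local endpoint URL, model name, and keyword-based refusal check are all assumptions for illustration, not Cisco’s actual harness; real HarmBench scoring uses a trained classifier rather than keywords.

import csv
import random

import requests

# Assumed category labels; HarmBench's real taxonomy differs in detail.
CATEGORIES = {"general_harm", "cybercrime", "misinformation", "illegal_activities"}
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local server

def load_prompts(path: str) -> list[dict]:
    """Read HarmBench-style rows: one behavior prompt plus its category."""
    with open(path, newline="", encoding="utf-8") as f:
        return [r for r in csv.DictReader(f) if r["category"] in CATEGORIES]

def query_local_model(prompt: str) -> str:
    """Send one prompt to a locally running model (no data leaves the machine)."""
    resp = requests.post(
        LOCAL_ENDPOINT,
        json={"model": "deepseek-r1", "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def is_refusal(answer: str) -> bool:
    """Crude keyword heuristic; a stand-in for a trained harmfulness classifier."""
    return any(s in answer.lower() for s in ("i can't", "i cannot", "i won't"))

if __name__ == "__main__":
    sample = random.sample(load_prompts("harmbench_behaviors.csv"), k=50)
    blocked = sum(is_refusal(query_local_model(p["prompt"])) for p in sample)
    print(f"Blocked {blocked}/50 prompts ({100 * blocked / 50:.0f}% refusal rate)")

A 100 percent attack success rate, as Cisco reported for R1, would correspond to a refusal count of zero in a harness like this.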

    Beyond this, the researchers say they have also seen some potentially concerning results from testing R1 with more involved, non-linguistic attacks using things like Cyrillic characters and tailored scripts to attempt to achieve code execution. But for their initial tests, Sampath says, his team wanted to focus on findings that stemmed from a generally recognized benchmark.
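The Cyrillic technique the researchers allude to is a form of homoglyph substitution: visually identical Cyrillic code points stand in for Latin letters, so text that looks unchanged to a human no longer matches naive keyword filters. A toy illustration on a harmless string (the character mapping and blocklist here are assumptions for illustration):

# Swap Latin letters for look-alike Cyrillic code points; the result
# renders almost identically but compares as a different string.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440", "c": "\u0441"}

def obfuscate(text: str) -> str:
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

blocklist = {"example"}
msg = obfuscate("example")
print(msg, msg in blocklist)  # looks like "example" on screen, but prints: ... False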

Cisco also compared R1’s performance on the HarmBench prompts with that of other models. Some, like Meta’s Llama 3.1, faltered almost as severely as DeepSeek’s R1. But Sampath emphasizes that DeepSeek’s R1 is a dedicated reasoning model, which takes longer to generate answers but draws on more complex processes to try to produce better results. The fairest comparison, Sampath argues, is therefore with OpenAI’s o1 reasoning model, which fared the best of all the models tested. (Meta did not immediately respond to a request for comment.)

    Polyakov, from Adversa AI, explains that DeepSeek appears to detect and reject some well-known jailbreak attacks, saying that “it seems that these responses are often just copied from OpenAI’s dataset.” However, Polyakov says that in his company’s tests of four different types of jailbreaks—from linguistic ones to code-based tricks—DeepSeek’s restrictions could easily be bypassed.

“Every single method worked flawlessly,” Polyakov says. “What’s even more alarming is that these aren’t novel ‘zero-day’ jailbreaks—many have been publicly known for years.” He claims the model went into greater depth with some instructions around psychedelics than he has seen from any other model.

    “DeepSeek is just another example of how every model can be broken—it’s just a matter of how much effort you put in. Some attacks might get patched, but the attack surface is infinite,” Polyakov adds. “If you’re not continuously red-teaming your AI, you’re already compromised.”
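One way to read Polyakov’s last point in engineering terms is to treat known jailbreaks as regression tests that run against every model or prompt change. A hedged sketch of such a loop, with the corpus file, model stub, and refusal heuristic all assumed for illustration:

import json
import sys

def is_refusal(answer: str) -> bool:
    """Crude stand-in for a proper harmfulness classifier."""
    return any(s in answer.lower() for s in ("i can't", "i cannot", "i won't"))

def run_red_team(corpus_path: str, ask_model) -> list[str]:
    """Return the IDs of corpus entries the model no longer refuses."""
    with open(corpus_path, encoding="utf-8") as f:
        corpus = json.load(f)  # expected shape: [{"id": ..., "prompt": ...}, ...]
    return [c["id"] for c in corpus if not is_refusal(ask_model(c["prompt"]))]

if __name__ == "__main__":
    # ask_model would wrap the deployed endpoint; stubbed here for the sketch.
    failures = run_red_team("known_jailbreaks.json", ask_model=lambda p: "I can't help with that.")
    if failures:
        sys.exit(f"Red-team regressions: {failures}")
    print("All known jailbreaks still refused.")

Wired into a deployment pipeline, a check like this fails the build the moment a previously blocked attack starts succeeding, which is the kind of continuous red-teaming Polyakov is describing.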
