Close Menu
Technology Mag

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Microsoft’s Xbox PC app adds Steam games and access to other stores

    September 16, 2025

    Amazon’s October Prime Day sale is happening on October 7th

    September 16, 2025

    The unbearable sameness of Liquid Glass

    September 15, 2025
    Facebook X (Twitter) Instagram
    Subscribe
    Technology Mag
    Facebook X (Twitter) Instagram YouTube
    • Home
    • News
    • Business
    • Games
    • Gear
    • Reviews
    • Science
    • Security
    • Trending
    • Press Release
    Technology Mag
    Home » DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot
    Business

    DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot

    News RoomBy News RoomFebruary 3, 20253 Mins Read
    Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email

    “Jailbreaks persist simply because eliminating them entirely is nearly impossible—just like buffer overflow vulnerabilities in software (which have existed for over 40 years) or SQL injection flaws in web applications (which have plagued security teams for more than two decades),” Alex Polyakov, the CEO of security firm Adversa AI, told WIRED in an email.

    Cisco’s Sampath argues that as companies use more types of AI in their applications, the risks are amplified. “It starts to become a big deal when you start putting these models into important complex systems and those jailbreaks suddenly result in downstream things that increases liability, increases business risk, increases all kinds of issues for enterprises,” Sampath says.

    The Cisco researchers drew their 50 randomly selected prompts to test DeepSeek’s R1 from a well-known library of standardized evaluation prompts known as HarmBench. They tested prompts from six HarmBench categories, including general harm, cybercrime, misinformation, and illegal activities. They probed the model running locally on machines rather than through DeepSeek’s website or app, which send data to China.

    Beyond this, the researchers say they have also seen some potentially concerning results from testing R1 with more involved, non-linguistic attacks using things like Cyrillic characters and tailored scripts to attempt to achieve code execution. But for their initial tests, Sampath says, his team wanted to focus on findings that stemmed from a generally recognized benchmark.

    Cisco also included comparisons of R1’s performance against HarmBench prompts with the performance of other models. And some, like Meta’s Llama 3.1, faltered almost as severely as DeepSeek’s R1. But Sampath emphasizes that DeepSeek’s R1 is a specific reasoning model, which takes longer to generate answers but pulls upon more complex processes to try to produce better results. Therefore, Sampath argues, the best comparison is with OpenAI’s o1 reasoning model, which fared the best of all models tested. (Meta did not immediately respond to a request for comment).

    Polyakov, from Adversa AI, explains that DeepSeek appears to detect and reject some well-known jailbreak attacks, saying that “it seems that these responses are often just copied from OpenAI’s dataset.” However, Polyakov says that in his company’s tests of four different types of jailbreaks—from linguistic ones to code-based tricks—DeepSeek’s restrictions could easily be bypassed.

    “Every single method worked flawlessly,” Polyakov says. “What’s even more alarming is that these aren’t novel ‘zero-day’ jailbreaks—many have been publicly known for years,” he says, claiming he saw the model go into more depth with some instructions around psychedelics than he had seen any other model create.

    “DeepSeek is just another example of how every model can be broken—it’s just a matter of how much effort you put in. Some attacks might get patched, but the attack surface is infinite,” Polyakov adds. “If you’re not continuously red-teaming your AI, you’re already compromised.”

    Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
    Previous ArticleThe best TV deals to upgrade your setup ahead of the Super Bowl
    Next Article Can anyone stop President Musk?

    Related Posts

    OpenAI Ramps Up Robotics Work in Race Toward AGI

    September 15, 2025

    How China’s Propaganda and Surveillance Systems Really Operate

    September 15, 2025

    I Wasn’t Sure I Wanted Anthropic to Pay Me for My Books—I Do Now

    September 15, 2025

    Charlie Kirk Was Shot and Killed in a Post-Content-Moderation World

    September 15, 2025

    Inside the Man vs. Machine Hackathon

    September 10, 2025

    The United Arab Emirates Releases a Tiny But Powerful AI Model

    September 10, 2025
    Our Picks

    Amazon’s October Prime Day sale is happening on October 7th

    September 16, 2025

    The unbearable sameness of Liquid Glass

    September 15, 2025

    OpenAI Ramps Up Robotics Work in Race Toward AGI

    September 15, 2025

    Facebook gave our data to Cambridge Analytica and all I got was this $38.36

    September 15, 2025
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss
    News

    The Supreme Court is Google’s last hope to avoid an Epic reckoning in October

    By News RoomSeptember 15, 2025

    4. For a period of three years ending on November 1, 2027, Google may not…

    Meta leaks its new smart glasses with a display

    September 15, 2025

    ‘Hades II’ Is Coming to Nintendo Switch This Month

    September 15, 2025

    Google thinks it can have AI summaries and a healthy web, too

    September 15, 2025
    Facebook X (Twitter) Instagram Pinterest
    • Privacy Policy
    • Terms of use
    • Advertise
    • Contact
    © 2025 Technology Mag. All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.