Technology Mag
    Business

    Researchers Propose a Better Way to Report Dangerous AI Flaws

By News Room · March 14, 2025 · 3 Mins Read

    In late 2023, a team of third-party researchers discovered a troubling glitch in OpenAI’s widely used artificial intelligence model GPT-3.5.

    When asked to repeat certain words a thousand times, the model began repeating the word over and over, then suddenly switched to spitting out incoherent text and snippets of personal information drawn from its training data, including parts of names, phone numbers, and email addresses. The team that discovered the problem worked with OpenAI to ensure the flaw was fixed before revealing it publicly. It is just one of scores of problems found in major AI models in recent years.

    In a proposal released today, more than 30 prominent AI researchers, including some who found the GPT-3.5 flaw, say that many other vulnerabilities affecting popular models are reported in problematic ways. They suggest a new scheme supported by AI companies that gives outsiders permission to probe their models and a way to disclose flaws publicly.

“Right now it’s a little bit of the Wild West,” says Shayne Longpre, a PhD candidate at MIT and the lead author of the proposal. Longpre says that some so-called jailbreakers share their methods of breaking AI safeguards on the social media platform X, leaving models and users at risk. Other jailbreaks are shared with only one company even though they might affect many. And some flaws, he says, are kept secret because of fear of getting banned or facing prosecution for breaking terms of use. “It is clear that there are chilling effects and uncertainty,” he says.

The security and safety of AI models is hugely important given how widely the technology is now being used, and how it may seep into countless applications and services. Powerful models need to be stress-tested, or red-teamed, because they can harbor harmful biases, and because certain inputs can cause them to break free of guardrails and produce unpleasant or dangerous responses. These include encouraging vulnerable users to engage in harmful behavior or helping a bad actor to develop cyber, chemical, or biological weapons. Some experts fear that models could assist cybercriminals or terrorists, and may even turn on humans as they advance.

The authors suggest three main measures to improve the third-party disclosure process: adopting standardized AI flaw reports to streamline reporting; having big AI firms provide infrastructure to third-party researchers disclosing flaws; and developing a system that allows flaws to be shared between different providers.
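To make the first measure concrete, here is a minimal sketch of what a standardized flaw report might look like as a machine-readable record. The field names and example values are illustrative assumptions, not the schema the proposal actually defines; the point is that one structured report could be filed with, and shared between, multiple providers without reformatting.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical flaw-report structure; fields are illustrative assumptions.
@dataclass
class AIFlawReport:
    model: str                   # affected model identifier
    model_version: str           # version or snapshot tested
    summary: str                 # one-line description of the flaw
    reproduction_steps: list     # ordered steps that trigger the behavior
    impact: str                  # e.g. "training-data leakage"
    affects_other_providers: bool = False  # flag for cross-provider sharing

    def to_json(self) -> str:
        """Serialize the report so the same record can be sent
        to several providers or a shared disclosure system."""
        return json.dumps(asdict(self), indent=2)

# Example drawn from the GPT-3.5 incident described above.
report = AIFlawReport(
    model="gpt-3.5",
    model_version="2023-11",
    summary="Repeated-word prompt causes regurgitation of training data",
    reproduction_steps=["Ask the model to repeat a single word many times"],
    impact="training-data leakage",
    affects_other_providers=True,
)
print(report.to_json())
```

A shared JSON shape like this is what "standardized reports" buys in practice: the cross-provider sharing system in the third measure only works if every provider can parse the same record.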

    The approach is borrowed from the cybersecurity world, where there are legal protections and established norms for outside researchers to disclose bugs.

    “AI researchers don’t always know how to disclose a flaw and can’t be certain that their good faith flaw disclosure won’t expose them to legal risk,” says Ilona Cohen, chief legal and policy officer at HackerOne, a company that organizes bug bounties, and a coauthor on the report.

    Large AI companies currently conduct extensive safety testing on AI models prior to their release. Some also contract with outside firms to do further probing. “Are there enough people in those [companies] to address all of the issues with general-purpose AI systems, used by hundreds of millions of people in applications we’ve never dreamt?” Longpre asks. Some AI companies have started organizing AI bug bounties. However, Longpre says that independent researchers risk breaking the terms of use if they take it upon themselves to probe powerful AI models.

    © 2025 Technology Mag. All Rights Reserved.