Close Menu
Technology Mag

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Why am I internet-stalking the pope?

    May 8, 2025

    Congress votes to pull funding for free Wi-Fi hotspots at schools and libraries

    May 8, 2025

    The US is reportedly encouraging countries to adopt Musk’s Starlink in tariff trade talks

    May 8, 2025
    Facebook X (Twitter) Instagram
    Subscribe
    Technology Mag
    Facebook X (Twitter) Instagram YouTube
    • Home
    • News
    • Business
    • Games
    • Gear
    • Reviews
    • Science
    • Security
    • Trending
    • Press Release
    Technology Mag
    Home » Microsoft’s new safety system can catch hallucinations in its customers’ AI apps
    News

    Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

    News RoomBy News RoomMarch 28, 20243 Mins Read
    Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email

    Sarah Bird, Microsoft’s chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren’t hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations “that are plausible yet unsupported,” and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform. 

    “We know that customers don’t all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a score and see the outcomes,” she says. 

    Three features: Prompt Shields, which blocks prompt injections or malicious prompts from external documents that instruct models to go against their training; Groundedness Detection, which finds and blocks hallucinations; and safety evaluations, which assess model vulnerabilities, are now available in preview on Azure AI. Two other features for directing models toward safe outputs and tracking prompts to flag potentially problematic users will be coming soon. 

    Whether the user is typing in a prompt or if the model is processing third-party data, the monitoring system will evaluate it to see if it triggers any banned words or has hidden prompts before deciding to send it to the model to answer. After, the system then looks at the response by the model and checks if the model hallucinated information not in the document or the prompt.

    In the case of the Google Gemini images, filters made to reduce bias had unintended effects, which is an area where Microsoft says its Azure AI tools will allow for more customized control. Bird acknowledges that there is concern Microsoft and other companies could be deciding what is or isn’t appropriate for AI models, so her team added a way for Azure customers to toggle the filtering of hate speech or violence that the model sees and blocks. 

    In the future, Azure users can also get a report of users who attempt to trigger unsafe outputs. Bird says this allows system administrators to figure out which users are its own team of red teamers and which could be people with more malicious intent.

    Bird says the safety features are immediately “attached” to GPT-4 and other popular models like Llama 2. However, because Azure’s model garden contains many AI models, users of smaller, less used open-source systems may have to manually point the safety features to the models. 

    Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
    Previous ArticleSteelSeries’ multi-platform Arctis Nova Pro Wireless headset is nearly $70 off
    Next Article Apple’s OLED iPad Pro will reportedly arrive in May

    Related Posts

    Why am I internet-stalking the pope?

    May 8, 2025

    Congress votes to pull funding for free Wi-Fi hotspots at schools and libraries

    May 8, 2025

    The US is reportedly encouraging countries to adopt Musk’s Starlink in tariff trade talks

    May 8, 2025

    Apple has a new ‘Viral’ playlist on Apple Music and Shazam

    May 8, 2025

    Instagram CEO testifies about competing with TikTok: ‘You’re either growing, or you’re slowly dying’

    May 8, 2025

    Celsius founder Alex Mashinsky sentenced to 12 years in prison

    May 8, 2025
    Our Picks

    Congress votes to pull funding for free Wi-Fi hotspots at schools and libraries

    May 8, 2025

    The US is reportedly encouraging countries to adopt Musk’s Starlink in tariff trade talks

    May 8, 2025

    Apple has a new ‘Viral’ playlist on Apple Music and Shazam

    May 8, 2025

    Scientists Believe They’ve Witnessed ‘Planetary Suicide’ for the First Time

    May 8, 2025
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss
    News

    Instagram CEO testifies about competing with TikTok: ‘You’re either growing, or you’re slowly dying’

    By News RoomMay 8, 2025

    When Adam Mosseri took over Meta-owned Instagram as CEO in 2018, the app was experiencing…

    The Signal Clone Mike Waltz Was Caught Using Has Direct Access to User Chats

    May 8, 2025

    Celsius founder Alex Mashinsky sentenced to 12 years in prison

    May 8, 2025

    A ‘Trump Card Visa’ Is Already Showing Up in Immigration Forms

    May 8, 2025
    Facebook X (Twitter) Instagram Pinterest
    • Privacy Policy
    • Terms of use
    • Advertise
    • Contact
    © 2025 Technology Mag. All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.