Close Menu
Technology Mag

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot
    Inside Microsoft’s big Xbox leadership shake-up

    Inside Microsoft’s big Xbox leadership shake-up

    February 23, 2026
    Does Big Tech actually care about fighting AI slop?

    Does Big Tech actually care about fighting AI slop?

    February 23, 2026
    Kohler’s new shower reuses dirty water to get you clean

    Kohler’s new shower reuses dirty water to get you clean

    February 23, 2026
    Facebook X (Twitter) Instagram
    Subscribe
    Technology Mag
    Facebook X (Twitter) Instagram YouTube
    • Home
    • News
    • Business
    • Games
    • Gear
    • Reviews
    • Science
    • Security
    • Trending
    • Press Release
    Technology Mag
    Home » Microsoft’s new safety system can catch hallucinations in its customers’ AI apps
    News

    Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

    News RoomBy News RoomMarch 28, 20243 Mins Read
    Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email
    Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

    Sarah Bird, Microsoft’s chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren’t hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations “that are plausible yet unsupported,” and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform. 

    “We know that customers don’t all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a score and see the outcomes,” she says. 

    Three features: Prompt Shields, which blocks prompt injections or malicious prompts from external documents that instruct models to go against their training; Groundedness Detection, which finds and blocks hallucinations; and safety evaluations, which assess model vulnerabilities, are now available in preview on Azure AI. Two other features for directing models toward safe outputs and tracking prompts to flag potentially problematic users will be coming soon. 

    Whether the user is typing in a prompt or if the model is processing third-party data, the monitoring system will evaluate it to see if it triggers any banned words or has hidden prompts before deciding to send it to the model to answer. After, the system then looks at the response by the model and checks if the model hallucinated information not in the document or the prompt.

    In the case of the Google Gemini images, filters made to reduce bias had unintended effects, which is an area where Microsoft says its Azure AI tools will allow for more customized control. Bird acknowledges that there is concern Microsoft and other companies could be deciding what is or isn’t appropriate for AI models, so her team added a way for Azure customers to toggle the filtering of hate speech or violence that the model sees and blocks. 

    In the future, Azure users can also get a report of users who attempt to trigger unsafe outputs. Bird says this allows system administrators to figure out which users are its own team of red teamers and which could be people with more malicious intent.

    Bird says the safety features are immediately “attached” to GPT-4 and other popular models like Llama 2. However, because Azure’s model garden contains many AI models, users of smaller, less used open-source systems may have to manually point the safety features to the models. 

    Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
    Previous ArticleSteelSeries’ multi-platform Arctis Nova Pro Wireless headset is nearly $70 off
    Next Article Apple’s OLED iPad Pro will reportedly arrive in May

    Related Posts

    Inside Microsoft’s big Xbox leadership shake-up

    Inside Microsoft’s big Xbox leadership shake-up

    February 23, 2026
    Does Big Tech actually care about fighting AI slop?

    Does Big Tech actually care about fighting AI slop?

    February 23, 2026
    Kohler’s new shower reuses dirty water to get you clean

    Kohler’s new shower reuses dirty water to get you clean

    February 23, 2026
    Apple’s newest AirTags are already on sale if you’re looking to upgrade

    Apple’s newest AirTags are already on sale if you’re looking to upgrade

    February 23, 2026
    The creators of Dark Sky have a new weather app that shares multiple predictions

    The creators of Dark Sky have a new weather app that shares multiple predictions

    February 23, 2026
    Discord distances itself from Persona age verification after user backlash

    Discord distances itself from Persona age verification after user backlash

    February 23, 2026
    Our Picks
    Does Big Tech actually care about fighting AI slop?

    Does Big Tech actually care about fighting AI slop?

    February 23, 2026
    Kohler’s new shower reuses dirty water to get you clean

    Kohler’s new shower reuses dirty water to get you clean

    February 23, 2026
    Apple’s newest AirTags are already on sale if you’re looking to upgrade

    Apple’s newest AirTags are already on sale if you’re looking to upgrade

    February 23, 2026
    The creators of Dark Sky have a new weather app that shares multiple predictions

    The creators of Dark Sky have a new weather app that shares multiple predictions

    February 23, 2026
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss
    Discord distances itself from Persona age verification after user backlash News

    Discord distances itself from Persona age verification after user backlash

    By News RoomFebruary 23, 2026

    Discord is attempting to distance itself from the age verification provider Persona following a steady…

    Hank Green lets loose on YouTube, billionaires, and algorithms

    Hank Green lets loose on YouTube, billionaires, and algorithms

    February 23, 2026
    Nothing couldn’t wait to show off the Phone 4A

    Nothing couldn’t wait to show off the Phone 4A

    February 23, 2026
    Taara Beam provides 25Gbps connectivity over invisible beams of light

    Taara Beam provides 25Gbps connectivity over invisible beams of light

    February 23, 2026
    Facebook X (Twitter) Instagram Pinterest
    • Privacy Policy
    • Terms of use
    • Advertise
    • Contact
    © 2026 Technology Mag. All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.