Technology Mag
    Business

    OpenAI’s Transcription Tool Hallucinates. Hospitals Are Using It Anyway

By News Room · October 30, 2024 · 3 Min Read

    On Saturday, an Associated Press investigation revealed that OpenAI’s Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a “confabulation” or “hallucination” in the AI field.

    Upon its release in 2022, OpenAI claimed that Whisper approached “human level robustness” in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.

    The fabrications pose particular risks in health care settings. Despite OpenAI’s warnings against using Whisper for “high-risk domains,” over 30,000 medical workers now use Whisper-based tools to transcribe patient visits, according to the AP report. The Mankato Clinic in Minnesota and Children’s Hospital Los Angeles are among 40 health systems using a Whisper-powered AI copilot service from medical tech company Nabla that is fine-tuned on medical terminology.

Nabla acknowledges that Whisper can confabulate, but it also reportedly erases the original audio recordings “for data safety reasons.” This could cause additional issues, since doctors cannot verify accuracy against the source material. And deaf patients may be particularly affected by mistaken transcripts, since they would have no way to know whether the transcript matches the audio.

    The potential problems with Whisper extend beyond health care. Researchers from Cornell University and the University of Virginia studied thousands of audio samples and found Whisper adding nonexistent violent content and racial commentary to neutral speech. They found that 1 percent of samples included “entire hallucinated phrases or sentences which did not exist in any form in the underlying audio” and that 38 percent of those included “explicit harms such as perpetuating violence, making up inaccurate associations, or implying false authority.”

    In one case from the study cited by AP, when a speaker described “two other girls and one lady,” Whisper added fictional text specifying that they “were Black.” In another, the audio said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” Whisper transcribed it to, “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”

    An OpenAI spokesperson told the AP that the company appreciates the researchers’ findings and that it actively studies how to reduce fabrications and incorporates feedback in updates to the model.

    Why Whisper Confabulates

    The key to Whisper’s unsuitability in high-risk domains comes from its propensity to sometimes confabulate, or plausibly make up, inaccurate outputs. The AP report says, “Researchers aren’t certain why Whisper and similar tools hallucinate,” but that isn’t true. We know exactly why Transformer-based AI models like Whisper behave this way.

    Whisper is based on technology that is designed to predict the next most likely token (chunk of data) that should appear after a sequence of tokens provided by a user. In the case of ChatGPT, the input tokens come in the form of a text prompt. In the case of Whisper, the input is tokenized audio data.
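That autoregressive loop is why fluent fabrications can appear: at every step the model ranks candidate tokens and emits one, whether or not the input carries real acoustic evidence. The toy sketch below (not Whisper itself; the transition table and function names are invented for illustration) shows the shape of greedy next-token decoding, where the loop has no concept of "no evidence" and always produces the top-ranked token.

```python
# Toy next-token model: a fixed table of probabilities for the token
# that follows the previous one. This is NOT Whisper -- just a minimal
# sketch of the autoregressive decoding loop that Transformer-based
# models use to generate output one token at a time.
TRANSITIONS = {
    "<start>": {"he": 0.6, "the": 0.4},
    "he": {"took": 0.5, "was": 0.5},
    "took": {"the": 0.7, "a": 0.3},
    "the": {"umbrella": 0.8, "<end>": 0.2},
    "was": {"going": 1.0},
    "going": {"<end>": 1.0},
    "a": {"<end>": 1.0},
    "umbrella": {"<end>": 1.0},
}

def greedy_decode(start="<start>", max_len=10):
    """Always emit the highest-probability next token.

    Note what is missing: there is no branch for "the audio contained
    nothing here." The loop always ranks tokens and picks one, which
    is how a model can produce plausible text for silence or noise.
    """
    out = []
    tok = start
    for _ in range(max_len):
        probs = TRANSITIONS.get(tok)
        if not probs:
            break
        tok = max(probs, key=probs.get)  # greedy choice
        if tok == "<end>":
            break
        out.append(tok)
    return " ".join(out)

print(greedy_decode())  # -> "he took the umbrella"
```

Real systems sample from the distribution rather than always taking the maximum, but the structural point is the same: the decoder's job is to continue the sequence plausibly, not to stay faithful to the input.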
