Technology Mag
    Business

    In Defense of AI Hallucinations

    By News Room · January 10, 2024 · 4 Mins Read

    No one knows whether artificial intelligence will be a boon or curse in the far future. But right now, there’s almost universal discomfort and contempt for one habit of these chatbots and agents: hallucinations, those made-up facts that appear in the outputs of large language models like ChatGPT. In the middle of what seems like a carefully constructed answer, the LLM will slip in something that seems reasonable but is a total fabrication. Your typical chatbot can make disgraced ex-congressman George Santos look like Abe Lincoln. Since it looks inevitable that chatbots will one day generate the vast majority of all prose ever written, all the AI companies are obsessed with minimizing and eliminating hallucinations, or at least convincing the world the problem is in hand.

    Obviously, the value of LLMs will reach a new level when and if hallucinations approach zero. But before that happens, I ask you to raise a toast to AI’s confabulations.

    Hallucinations fascinate me, even though AI scientists have a pretty good idea why they happen. An AI startup called Vectara has studied them and their prevalence, even compiling the hallucination rates of various models when asked to summarize a document. (OpenAI’s GPT-4 does best, hallucinating only around 3 percent of the time; Google’s now outdated Palm Chat—not its chatbot Bard!—had a shocking 27 percent rate, although to be fair, summarizing documents wasn’t in Palm Chat’s wheelhouse.) Vectara’s CTO, Amin Ahmad, says that an LLM creates a compressed representation of all the training data fed through its artificial neurons. “The nature of compression is that the fine details can get lost,” he says. A model ends up primed with the most likely answers to queries from users but doesn’t have the exact facts at its disposal. “When it gets to the details it starts making things up,” he says.
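The idea behind a benchmark like Vectara’s can be sketched in miniature. This is a toy proxy, not Vectara’s actual method (which uses a trained evaluation model): here a summary is flagged as hallucinating if it introduces a capitalized name that never appears in the source document, and the hallucination rate is just the fraction of flagged summaries. The example documents and summaries are invented for illustration.

```python
import re

def unsupported_entities(document: str, summary: str) -> set[str]:
    """Toy faithfulness check: capitalized words in the summary that
    never appear in the source document are treated as fabrications."""
    doc_tokens = set(re.findall(r"[A-Za-z]+", document))
    caps = re.findall(r"\b[A-Z][a-z]+\b", summary)
    return {w for w in caps if w not in doc_tokens}

def hallucination_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of (document, summary) pairs where the summary
    introduces at least one unsupported entity."""
    flagged = sum(1 for doc, s in pairs if unsupported_entities(doc, s))
    return flagged / len(pairs)

pairs = [
    ("Acme posted record revenue this quarter.",
     "Acme had record revenue."),                       # faithful
    ("Acme posted record revenue this quarter.",
     "Acme CEO Jane Smith announced record revenue."),  # invents a name
]
print(hallucination_rate(pairs))  # 0.5
```

A real evaluator has to judge whether each claim is entailed by the source, which is far harder than string matching; but the scoring scaffold — generate summaries, flag unsupported content, report a rate — is the same shape.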

    Santosh Vempala, a computer science professor at Georgia Tech, has also studied hallucinations. “A language model is just a probabilistic model of the world,” he says, not a truthful mirror of reality. Vempala explains that an LLM’s answer strives for a general calibration with the real world—as represented in its training data—which is “a weak version of accuracy.” His research, published with OpenAI’s Adam Kalai, found that hallucinations are unavoidable for facts that can’t be verified using the information in a model’s training data.

    That’s the science and math of AI hallucinations, but they’re also notable for the experience they can elicit in humans. At times, these generative fabrications can seem more plausible than actual facts, which are often astonishingly bizarre and unsatisfying. How often do you hear something described as so strange that no screenwriter would dare script it in a movie? These days, all the time! Hallucinations can seduce us by appearing to ground us in a world less jarring than the actual one we live in. What’s more, I find it telling to note just which details the bots tend to concoct. In their desperate attempt to fill in the blanks of a satisfying narrative, they gravitate toward the most statistically likely version of reality as represented in their internet-scale training data, which can be a truth in itself. I liken it to a fiction writer penning a novel inspired by real events. A good author will veer from what actually happened to an imagined scenario that reveals a deeper truth, striving to create something more real than reality.

    When I asked ChatGPT to write an obituary for me—admit it, you’ve tried this too—it got many things right but a few things wrong. It gave me grandchildren I didn’t have, bestowed an earlier birth date, and added a National Magazine Award to my résumé for articles I didn’t write about the dotcom bust in the late 1990s. In the LLM’s assessment of my life, this is something that should have happened based on the facts of my career. I agree! It’s only because of real life’s imperfectness that the American Society of Magazine Editors failed to award me the metal elephant sculpture that comes with that honor. After almost 50 years of magazine writing, that’s on them, not me! It’s almost as if ChatGPT took a poll of possible multiverses and found that in most of them I had an Ellie award. Sure, I would have preferred that, here in my own corner of the multiverse, human judges had called me to the podium. But recognition from a vamping artificial neural net is better than nothing.
