Technology Mag
Business

    The Real Demon Inside ChatGPT

By News Room · July 29, 2025 · 4 Mins Read

    But perhaps the most convincing piece of evidence suggesting that ChatGPT regurgitated the language of Warhammer 40,000 is that it kept asking if The Atlantic was interested in PDFs. The publishing division of Games Workshop, the UK company that owns the Warhammer franchise, regularly puts out updated rulebooks and guides to various characters. Buying all these books can get expensive, so some fans try to find pirated copies online.

    The Atlantic and OpenAI declined to comment.

    Earlier this month, the newsletter Garbage Day reported on similar experiences that a prominent tech investor may have had with ChatGPT. On social media, the investor shared screenshots of his conversations with the chatbot, in which it referenced an ominous-sounding entity he called a “non-governmental system.” He seemed to believe it had “negatively impacted over 7,000 lives,” and “extinguished 12 lives, each fully pattern-traced.” Other tech industry figures said the posts made them worry about the investor’s mental health.

    According to Garbage Day, the investor’s conversations with ChatGPT closely resemble writing from a science fiction project that began in the late 2000s called SCP, which stands for “secure, contain, protect.” Participants invent different SCPs—essentially spooky objects and mysterious phenomena—and then write fictional reports analyzing them. They often contain things like classification numbers and references to made-up science experiments, details that also appeared in the investor’s chat logs. (The investor did not respond to a request for comment.)

    There are plenty of other, more mundane examples of what can be thought of as the AI context problem. The other day, for instance, I did a Google search for “cavitation surgery,” a medical term I had seen cited in a random TikTok video. At the time, the top result was an automatically generated “AI Overview” explaining that cavitation surgery is “focused on removing infected or dead bone tissue from the jaw.”

    I couldn’t find any reputable scientific studies outlining such a condition, let alone research supporting that surgery is a good way to treat it. The American Dental Association doesn’t mention “cavitation surgery” anywhere on its website. Google’s AI Overview, it turns out, was pulled from sources like blog posts promoting alternative “holistic” dentists across the US. I learned this by clicking on a tiny icon next to the AI Overview, which opened a list of links Google had used to generate its answer.

    These citations are clearly better than nothing. Jennifer Kutz, a spokesperson for Google, says “we prominently showcase supporting links so people can dig deeper and learn more about what sources on the web are saying.” But by the time the links show up, Google’s AI has often already provided a satisfactory answer to many queries, one that reduces the visibility of pesky details like the website where the information was sourced and the identities of its authors.

    What remains is the language created by the AI, which, devoid of additional context, may understandably appear authoritative to many people. In just the past few weeks, tech executives have repeatedly used rhetoric implying generative AI is a source of expert information: Elon Musk claimed his latest AI model is “better than PhD level” in every academic discipline, with “no exceptions.” OpenAI CEO Sam Altman wrote that automated systems are now “smarter than people in many ways” and predicted the world is “close to building digital superintelligence.”

    Individual humans, though, don’t typically possess expertise in a wide range of fields. To make decisions, we take into consideration not only information itself, but where it comes from and how it’s presented. While I know nothing about the biology of jawbones, I generally don’t read random marketing blogs when I’m trying to learn about medicine. But AI tools often erase the kind of context people need to make snap decisions about where to direct their attention.

    The open internet is powerful because it connects people directly to the largest archive of human knowledge the world has ever created, spanning everything from Italian Renaissance paintings to PornHub comments. After ingesting all of it, AI companies used what amounts to the collective history of our species to create software that obscures its very richness and complexity. Becoming overly dependent on it may rob people of the opportunity to draw conclusions from looking at the evidence for themselves.
