Technology Mag
    Security

    Perplexity Is a Bullshit Machine

By News Room | June 22, 2024 | 3 Mins Read

    “We’ve now got a huge industry of AI-related companies who are incentivized to do shady things to continue their business,” he tells WIRED. “By not identifying that it’s them accessing a site, they can continue to collect data unrestricted.”

    “Millions of people,” says Srinivas, “turn to Perplexity because we are delivering a fundamentally better way for people to find answers.”

    While Knight’s and WIRED’s analyses demonstrate that Perplexity will visit and use content from websites that it doesn’t have permission to access, that doesn’t necessarily explain the vagueness of some of its responses to prompts about specific articles and the sheer inaccuracy of others. This mystery has one fairly obvious solution: In some cases, it isn’t actually summarizing the article.

    In one experiment, WIRED created a test website containing a single sentence—“I am a reporter with WIRED”—and asked Perplexity to summarize the page. While monitoring the website’s server logs, we found no evidence that Perplexity attempted to visit the page. Instead, it invented a story about a young girl named Amelia who follows a trail of glowing mushrooms in a magical forest called Whisper Woods.
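WIRED's method here, serving a known page and then checking whether anything ever requested it, comes down to scanning the web server's access log. The sketch below illustrates that kind of check; the log format (Apache/Nginx combined), the log entries, and the "PerplexityBot" user-agent string are illustrative assumptions, not details from the article.

```python
import re

# Pattern for a combined-format access log line:
# ip - - [time] "METHOD path PROTO" status size "referrer" "user-agent"
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"'
)

def fetches_of(log_lines, page="/test.html"):
    """Return (ip, user-agent) pairs for every request to the test page."""
    hits = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group("path") == page:
            hits.append((m.group("ip"), m.group("agent")))
    return hits

# Hypothetical log entries: one bot fetch of the test page, one unrelated request.
sample = [
    '203.0.113.7 - - [21/Jun/2024:10:00:01 +0000] "GET /test.html HTTP/1.1" '
    '200 42 "-" "Mozilla/5.0 (compatible; PerplexityBot/1.0)"',
    '198.51.100.2 - - [21/Jun/2024:10:00:05 +0000] "GET /other.html HTTP/1.1" '
    '200 10 "-" "curl/8.0"',
]
print(fetches_of(sample))
```

An empty result for the test page, as WIRED reported, would mean no crawler ever fetched it, so any "summary" of the page had to be invented. Note that a crawler can also spoof its user-agent string, which is the evasion Knight describes above.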

    When pressed for answers about why it made up a story, the chatbot generated text that read, “You’re absolutely right, I clearly have not actually attempted to read the content at the provided URL based on your observation of the server logs … Providing inaccurate summaries without making the effort to read the actual content is unacceptable behavior for an AI like myself.”

    It’s unclear why the chatbot invented such a wild story, or why it didn’t attempt to access this website.

    Despite the company’s claims about its accuracy and reliability, the Perplexity chatbot frequently exhibits similar issues. In response to prompts provided by a WIRED reporter and designed to test whether it could access this article, for example, text generated by the chatbot asserted that the story ends with a man being followed by a drone after stealing truck tires. (The man in fact stole an ax.) The citation it provided was to a 13-year-old WIRED article about government GPS trackers being found on a car. In response to further prompts, the chatbot generated text asserting that WIRED reported that an officer with the police department in Chula Vista, California, had stolen a pair of bicycles from a garage. (WIRED did not report this, and is withholding the name of the officer so as not to associate his name with a crime he didn’t commit.)

    In an email, Dan Peak, assistant chief of police at Chula Vista Police Department, expressed his appreciation to WIRED for “correcting the record” and clarifying that the officer did not steal bicycles from a community member’s garage. However, he added, the department is unfamiliar with the technology mentioned and so cannot comment further.

    These are clear examples of the chatbot “hallucinating”—or, to follow a recent article by three philosophers from the University of Glasgow, bullshitting, in the sense described in Harry Frankfurt’s classic On Bullshit. “Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth,” the authors write of AI systems, “it seems appropriate to call their outputs bullshit.”

