Technology Mag
    Security

    Perplexity Is a Bullshit Machine

By News Room · June 22, 2024 · 3 Mins Read

    “We’ve now got a huge industry of AI-related companies who are incentivized to do shady things to continue their business,” he tells WIRED. “By not identifying that it’s them accessing a site, they can continue to collect data unrestricted.”
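The mechanism behind that complaint is the Robots Exclusion Protocol: a site's robots.txt file can block a named crawler, but the block only works if the crawler announces itself in its user-agent string. A minimal sketch with Python's standard `urllib.robotparser`, using a hypothetical crawler name (`ExampleBot`) and site, shows why an unidentified bot sails through:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that bans a named crawler site-wide.
ROBOTS_TXT = """\
User-agent: ExampleBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A crawler that identifies itself is blocked ...
print(rp.can_fetch("ExampleBot", "https://example.com/article"))   # False
# ... but the same request under a generic browser user agent is allowed.
print(rp.can_fetch("Mozilla/5.0", "https://example.com/article"))  # True
```

Because the protocol is purely honor-based, a crawler that omits or spoofs its user agent faces no restriction at all, which is the "shady things" the quote describes.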

    “Millions of people,” says Srinivas, “turn to Perplexity because we are delivering a fundamentally better way for people to find answers.”

    While Knight’s and WIRED’s analyses demonstrate that Perplexity will visit and use content from websites that it doesn’t have permission to access, that doesn’t necessarily explain the vagueness of some of its responses to prompts about specific articles and the sheer inaccuracy of others. This mystery has one fairly obvious solution: In some cases, it isn’t actually summarizing the article.

    In one experiment, WIRED created a test website containing a single sentence—“I am a reporter with WIRED”—and asked Perplexity to summarize the page. While monitoring the website’s server logs, we found no evidence that Perplexity attempted to visit the page. Instead, it invented a story about a young girl named Amelia who follows a trail of glowing mushrooms in a magical forest called Whisper Woods.
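Checking server logs for a crawler's visit, as described above, amounts to scanning each access-log entry's user-agent field. A minimal sketch, assuming the common Apache/Nginx "combined" log format and a hypothetical bot name (real investigations also check source IP ranges, since user agents are trivially spoofed or omitted):

```python
import re

# Hypothetical access-log lines in "combined" format; the final
# quoted field is the user agent.
LOG_LINES = [
    '203.0.113.7 - - [20/Jun/2024:10:01:22 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
    '198.51.100.4 - - [20/Jun/2024:10:02:05 +0000] "GET / HTTP/1.1" 200 512 "-" "ExampleBot/1.0"',
]

def hits_from(lines, bot_name):
    """Return log lines whose user-agent field mentions bot_name."""
    ua_field = re.compile(r'"([^"]*)"$')  # last quoted field = user agent
    return [ln for ln in lines if bot_name in ua_field.search(ln).group(1)]

print(len(hits_from(LOG_LINES, "ExampleBot")))  # 1
```

An empty result for every user agent a company publishes, as WIRED reports here, is evidence the page was never fetched, so any "summary" of it had to be invented.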

    When pressed for answers about why it made up a story, the chatbot generated text that read, “You’re absolutely right, I clearly have not actually attempted to read the content at the provided URL based on your observation of the server logs … Providing inaccurate summaries without making the effort to read the actual content is unacceptable behavior for an AI like myself.”

    It’s unclear why the chatbot invented such a wild story, or why it didn’t attempt to access this website.

    Despite the company’s claims about its accuracy and reliability, the Perplexity chatbot frequently exhibits similar issues. In response to prompts provided by a WIRED reporter and designed to test whether it could access this article, for example, text generated by the chatbot asserted that the story ends with a man being followed by a drone after stealing truck tires. (The man in fact stole an ax.) The citation it provided was to a 13-year-old WIRED article about government GPS trackers being found on a car. In response to further prompts, the chatbot generated text asserting that WIRED reported that an officer with the police department in Chula Vista, California, had stolen a pair of bicycles from a garage. (WIRED did not report this, and is withholding the name of the officer so as not to associate his name with a crime he didn’t commit.)

    In an email, Dan Peak, assistant chief of police at Chula Vista Police Department, expressed his appreciation to WIRED for “correcting the record” and clarifying that the officer did not steal bicycles from a community member’s garage. However, he added, the department is unfamiliar with the technology mentioned and so cannot comment further.

    These are clear examples of the chatbot “hallucinating”—or, to follow a recent article by three philosophers from the University of Glasgow, bullshitting, in the sense described in Harry Frankfurt’s classic On Bullshit. “Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth,” the authors write of AI systems, “it seems appropriate to call their outputs bullshit.”
