Technology Mag
    Business

    A Lawsuit Against Perplexity Calls Out Fake News AI Hallucinations

By News Room · October 23, 2024 · 3 Mins Read

    Perplexity did not respond to requests for comment.

    In a statement emailed to WIRED, News Corp chief executive Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI, which understands that integrity and creativity are essential if we are to realize the potential of Artificial Intelligence,” the statement says. “Perplexity is not the only AI company abusing intellectual property and it is not the only AI company that we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.”

    OpenAI is facing its own accusations of trademark dilution, though. In New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat will attribute made-up quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed that the Times called red wine (in moderation) a “heart-healthy” food, when in fact it did not; the Times argues that its actual reporting has debunked claims about the healthfulness of moderate drinking.

    “Copying news articles to operate substitutive, commercial generative AI products is unlawful, as we made clear in our letters to Perplexity and our litigation against Microsoft and OpenAI,” says NYT director of external communications Charlie Stadtlander. “We applaud this lawsuit from Dow Jones and the New York Post, which is an important step toward ensuring that publisher content is protected from this kind of misappropriation.”

Some legal experts are unsure that the false designation of origin and trademark dilution claims will be fruitful. Intellectual property lawyer Vincent Allen, a partner at Carstens, Allen & Gourley, believes that the copyright infringement claims in this lawsuit are stronger, and that he will “be surprised” if the false designation of origin charge stands. Both Allen and James Grimmelmann, a professor of digital and internet law at Cornell University, believe that the landmark trademark case Dastar v. Twentieth Century Fox Film Corp. could stymie this line of attack. (In that ruling, about a dispute over old World War II footage, the Supreme Court held that “origin” doesn’t apply to authorship for trademark law, but is instead limited to tangible goods, like a bootleg purse, rather than counterfeit creative work like films.) Grimmelmann is also skeptical that the trademark dilution claim will hold water. “Dilution involves the use of a trademark on one’s own goods or services in a way that impairs the distinctiveness of a famous mark. I … just don’t see that here,” he says.

If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face “immense difficulties,” according to Matthew Sag, a professor of law and artificial intelligence at Emory University.

“It is absolutely impossible to guarantee that a language model will not hallucinate,” Sag says. In his view, the way language models operate, by predicting words that sound correct in response to prompts, is always a type of hallucination; sometimes the output is just more plausible-sounding than at other times.

    “We only call it a hallucination if it doesn’t match up with our reality, but the process is exactly the same whether we like the output or not.”
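Sag’s point can be illustrated with a toy sketch. The bigram table and its probabilities below are invented for illustration (this is not a real language model), but the structure of the argument holds: the decoding procedure only ranks plausible-sounding continuations, and nothing in it checks whether the resulting claim is true.

```python
# Toy bigram "language model" with hand-made, hypothetical probabilities.
# Generation greedily picks the most plausible next token; whether the
# final sentence is factual never enters the procedure.
NEXT_TOKEN_PROBS = {
    ("red", "wine"): {"is": 0.6, "was": 0.4},
    ("wine", "is"): {"heart-healthy": 0.5, "acidic": 0.3, "fermented": 0.2},
}

def most_likely(context):
    """Greedy decoding: return the highest-probability continuation."""
    dist = NEXT_TOKEN_PROBS[context]
    return max(dist, key=dist.get)

tokens = ["red", "wine"]
for _ in range(2):
    tokens.append(most_likely(tuple(tokens[-2:])))

print(" ".join(tokens))  # prints "red wine is heart-healthy"
```

The model ends up asserting a health claim simply because those words are the most plausible continuation of the prompt, which mirrors the red-wine example cited in the Times lawsuit: the same mechanics produce accurate and inaccurate output alike.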

