Technology Mag
    Business

    Should AI Get Legal Rights?

By News Room · September 6, 2025 · 3 Mins Read

In one paper Eleos AI published, the nonprofit argues for evaluating AI consciousness using a “computational functionalism” approach. A similar idea was once championed by none other than Putnam, though he criticized it later in his career. The theory suggests that human minds can be thought of as specific kinds of computational systems. From there, you can assess whether other computational systems, such as a chatbot, show indicators of sentience similar to those of a human.

    Eleos AI said in the paper that “a major challenge in applying” this approach “is that it involves significant judgment calls, both in formulating the indicators and in evaluating their presence or absence in AI systems.”

Model welfare is, of course, a nascent and still-evolving field. It has plenty of critics, including Mustafa Suleyman, the CEO of Microsoft AI, who recently published a blog post about “seemingly conscious AI.”

    “This is both premature, and frankly dangerous,” Suleyman wrote, referring generally to the field of model welfare research. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”

    Suleyman wrote that “there is zero evidence” today that conscious AI exists. He included a link to a paper that Long coauthored in 2023 that proposed a new framework for evaluating whether an AI system has “indicator properties” of consciousness. (Suleyman did not respond to a request for comment from WIRED.)

    I chatted with Long and Campbell shortly after Suleyman published his blog. They told me that, while they agreed with much of what he said, they don’t believe model welfare research should cease to exist. Rather, they argue that the harms Suleyman referenced are the exact reasons why they want to study the topic in the first place.

    “When you have a big, confusing problem or question, the one way to guarantee you’re not going to solve it is to throw your hands up and be like ‘Oh wow, this is too complicated,’” Campbell says. “I think we should at least try.”

    Testing Consciousness

Model welfare researchers primarily concern themselves with questions of consciousness. If we can prove that you and I are conscious, they argue, then the same logic could be applied to large language models. To be clear, neither Long nor Campbell thinks that AI is conscious today, and they aren’t sure it ever will be. But they want to develop tests that would allow us to prove it.

“The delusions are from people who are concerned with the actual question, ‘Is this AI conscious?’ and having a scientific framework for thinking about that, I think, is just robustly good,” Long says.

    But in a world where AI research can be packaged into sensational headlines and social media videos, heady philosophical questions and mind-bending experiments can easily be misconstrued. Take what happened when Anthropic published a safety report that showed Claude Opus 4 may take “harmful actions” in extreme circumstances, like blackmailing a fictional engineer to prevent it from being shut off.

    © 2026 Technology Mag. All Rights Reserved.
