Technology Mag

    Business

    GPT-5 Doesn’t Dislike You—It Might Just Need a Benchmark for Emotional Intelligence

By News Room | August 14, 2025 | 3 Mins Read

Since the all-new ChatGPT launched on Thursday, some users have mourned the disappearance of a peppy and encouraging personality in favor of a colder, more businesslike one (a move seemingly designed to reduce unhealthy user behavior). The backlash shows the challenge of building artificial intelligence systems that exhibit anything like real emotional intelligence.

Researchers at MIT have proposed a new kind of AI benchmark to measure how AI systems can manipulate and influence their users—in both positive and negative ways—a move that could help AI builders avoid similar backlashes in the future while also keeping vulnerable users safe.

Most benchmarks try to gauge intelligence by testing a model’s ability to answer exam questions, solve logical puzzles, or come up with novel answers to knotty math problems. As the psychological impact of AI use becomes more apparent, we may see more benchmarks like MIT’s, aimed at measuring subtler aspects of intelligence as well as machine-to-human interactions.

    An MIT paper shared with WIRED outlines several measures that the new benchmark will look for, including encouraging healthy social habits in users; spurring them to develop critical thinking and reasoning skills; fostering creativity; and stimulating a sense of purpose. The idea is to encourage the development of AI systems that understand how to discourage users from becoming overly reliant on their outputs or that recognize when someone is addicted to artificial romantic relationships and help them build real ones.

ChatGPT and other chatbots are adept at mimicking engaging human communication, but this can also have surprising and undesirable results. In April, OpenAI tweaked its models to make them less sycophantic, or inclined to go along with everything a user says. Some users appear to spiral into harmful delusional thinking after conversing with chatbots that role-play fantastical scenarios. Anthropic has also updated Claude to avoid reinforcing “mania, psychosis, dissociation or loss of attachment with reality.”

The MIT researchers, led by Pattie Maes, a professor at the institute’s Media Lab, say they hope the new benchmark could help AI developers build systems that better understand how to inspire healthier behavior among users. The researchers previously worked with OpenAI on a study showing that users who view ChatGPT as a friend could develop higher emotional dependence and experience “problematic use.”

    Valdemar Danry, a researcher at MIT’s Media Lab who worked on this study and helped devise the new benchmark, notes that AI models can sometimes provide valuable emotional support to users. “You can have the smartest reasoning model in the world, but if it’s incapable of delivering this emotional support, which is what many users are likely using these LLMs for, then more reasoning is not necessarily a good thing for that specific task,” he says.

    Danry says that a sufficiently smart model should ideally recognize if it is having a negative psychological effect and be optimized for healthier results. “What you want is a model that says ‘I’m here to listen, but maybe you should go and talk to your dad about these issues.’”

    © 2026 Technology Mag. All Rights Reserved.
