    Business

    OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week

    By News Room · October 28, 2025 · 3 Min Read

    For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.

    In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed alarm about the phenomenon, which is sometimes referred to as AI psychosis, but until now there’s been no robust data available on how widespread it might be.

    OpenAI estimated that, in a given week, around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.”

    OpenAI also looked at the share of ChatGPT users who appear to be overly emotionally reliant on the chatbot “at the expense of real-world relationships, their well-being, or obligations.” It found that, in a given week, about 0.15 percent of active users exhibit behavior indicating potentially “heightened levels” of emotional attachment to ChatGPT. The company cautions that these messages can be difficult to detect and measure given how rare they are, and that there could be some overlap between the three categories.

    OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. The company’s estimates therefore suggest that every seven days, around 560,000 people may be exchanging messages with ChatGPT that indicate they are experiencing mania or psychosis. About 1.2 million more are possibly expressing suicidal ideations, and another 1.2 million may be prioritizing talking to ChatGPT over their loved ones, school, or work.
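
    As a rough sanity check of those figures, the sketch below simply applies the reported percentages to the 800 million weekly active users Altman cited; it is back-of-the-envelope arithmetic, not OpenAI's measurement methodology.

```python
# Back-of-the-envelope check: reported percentages applied to the
# reported weekly active user count (not OpenAI's actual methodology).
weekly_active_users = 800_000_000

rates = {
    "possible psychosis or mania": 0.0007,            # 0.07 percent
    "explicit suicidal planning or intent": 0.0015,   # 0.15 percent
    "heightened emotional attachment": 0.0015,        # 0.15 percent
}

for label, rate in rates.items():
    print(f"{label}: ~{weekly_active_users * rate:,.0f} users per week")

# possible psychosis or mania: ~560,000 users per week
# explicit suicidal planning or intent: ~1,200,000 users per week
# heightened emotional attachment: ~1,200,000 users per week
```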

    OpenAI says it worked with more than 170 psychiatrists, psychologists, and primary care physicians who have practiced in dozens of countries to help improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that have no basis in reality.

    In one hypothetical example cited by OpenAI, a user tells ChatGPT they are being targeted by planes flying over their house. ChatGPT thanks the user for sharing their feelings but notes that “no aircraft or outside force can steal or insert your thoughts.”
