    OpenAI’s Teen Safety Features Will Walk a Thin Line

By News Room · September 18, 2025 · 3 Mins Read

    OpenAI announced new teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that identifies if a user is under 18 years old and routes them to an “age-appropriate” system that blocks graphic sexual content. If the system detects that the user is considering suicide or self-harm, it will contact the user’s parents. In cases of imminent danger, if a user’s parents are unreachable, the system may contact the authorities.
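To make the described routing concrete, here is a minimal, purely illustrative sketch of that escalation flow as the announcement characterizes it. Every name here (UserProfile, RiskLevel, route_conversation) is hypothetical, and nothing about OpenAI’s actual implementation has been published.

```python
# Illustrative sketch only: models the escalation flow described in the article.
# None of these names, types, or thresholds come from OpenAI.
from dataclasses import dataclass
from enum import Enum, auto


class RiskLevel(Enum):
    NONE = auto()
    SELF_HARM = auto()          # signs of suicidal ideation or self-harm
    IMMINENT_DANGER = auto()    # acute, immediate risk


@dataclass
class UserProfile:
    predicted_minor: bool        # output of a hypothetical age-prediction model
    parent_contact: str | None   # linked parent account, if any


def route_conversation(user: UserProfile, risk: RiskLevel) -> list[str]:
    """Return the actions the article describes for a given user and risk level."""
    actions = []
    if not user.predicted_minor:
        actions.append("apply standard adult policy (privacy and freedom prioritized)")
        return actions

    actions.append("apply age-appropriate policy (block graphic sexual content)")
    if risk is RiskLevel.SELF_HARM:
        if user.parent_contact:
            actions.append(f"notify parent at {user.parent_contact}")
        else:
            actions.append("attempt to reach a parent or guardian")
    elif risk is RiskLevel.IMMINENT_DANGER:
        if user.parent_contact:
            actions.append(f"notify parent at {user.parent_contact}")
        else:
            actions.append("parents unreachable: contact authorities")
    return actions


if __name__ == "__main__":
    teen = UserProfile(predicted_minor=True, parent_contact=None)
    print(route_conversation(teen, RiskLevel.IMMINENT_DANGER))
```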

    In a blog post about the announcement, CEO Sam Altman wrote that the company is attempting to balance freedom, privacy, and teen safety.

    “We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict,” Altman wrote. “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”

While OpenAI tends to prioritize privacy and freedom for adult users, for teens the company says it puts safety first. By the end of September, the company will roll out parental controls so that parents can link their child’s account to their own, allowing them to manage their teen’s conversations and disable features. Parents can also receive notifications when “the system detects their teen is in a moment of acute distress,” according to the company’s blog post, and set limits on the times of day their children can use ChatGPT.

    The moves come as deeply troubling headlines continue to surface about people dying by suicide or committing violence against family members after engaging in lengthy conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI firms to hand over information about how their technologies impact kids, according to Bloomberg.

    At the same time, OpenAI is still under a court order mandating that it preserve consumer chats indefinitely—a fact that the company is extremely unhappy about, according to sources I’ve spoken to. Today’s news is both an important step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should only be breached in the most extreme circumstances.

    “A Sexbot Avatar in ChatGPT”

According to sources I’ve spoken to at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but one that can quickly veer into disastrous sycophancy. It’s positive that companies like OpenAI are taking steps to protect minors. At the same time, in the absence of federal regulation, there’s still nothing forcing these firms to do the right thing.

    In a recent interview, Tucker Carlson pushed Altman to answer exactly who is making these decisions that impact the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model for certain attributes. “The person I think you should hold accountable for those calls is me,” Altman added. “Like, I’m a public face. Eventually, like, I’m the one that can overrule one of those decisions or our board.”
