Technology Mag

    Business

    OpenAI’s Teen Safety Features Will Walk a Thin Line

By News Room · September 18, 2025 · 3 Mins Read

    OpenAI announced new teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that identifies whether a user is under 18 and routes them to an “age-appropriate” system that blocks graphic sexual content. If the system detects that a user is considering suicide or self-harm, it will contact the user’s parents. In cases of imminent danger, if a user’s parents are unreachable, the system may contact the authorities.

    In a blog post about the announcement, CEO Sam Altman wrote that the company is attempting to balance freedom, privacy, and teen safety.

    “We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict,” Altman wrote. “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”

    While OpenAI tends to prioritize privacy and freedom for adult users, for teens the company says it puts safety first. By the end of September, it will roll out parental controls that let parents link their child’s account to their own, manage conversations, and disable features. Parents can also receive notifications when “the system detects their teen is in a moment of acute distress,” according to the company’s blog post, and can set limits on the times of day their children can use ChatGPT.

    The moves come as deeply troubling headlines continue to surface about people dying by suicide or committing violence against family members after engaging in lengthy conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI firms to hand over information about how their technologies impact kids, according to Bloomberg.

    At the same time, OpenAI is still under a court order mandating that it preserve consumer chats indefinitely—a fact that the company is extremely unhappy about, according to sources I’ve spoken to. Today’s news is both an important step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should only be breached in the most extreme circumstances.

    “A Sexbot Avatar in ChatGPT”

    From the sources I’ve spoken to at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but that experience can quickly veer into disastrous sycophancy. It’s positive that companies like OpenAI are taking steps to protect minors. At the same time, in the absence of federal regulation, there’s still nothing forcing these firms to do the right thing.

    In a recent interview, Tucker Carlson pushed Altman to answer exactly who is making these decisions that impact the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model for certain attributes. “The person I think you should hold accountable for those calls is me,” Altman added. “Like, I’m a public face. Eventually, like, I’m the one that can overrule one of those decisions or our board.”
