Technology Mag
    Gear

    OpenAI Designed GPT-5 to Be Safer. It Still Outputs Gay Slurs

By News Room · August 14, 2025 · 4 Mins Read

    OpenAI is trying to make its chatbot less annoying with the release of GPT-5. And I’m not talking about adjustments to its synthetic personality that many users have complained about. Before GPT-5, if the AI tool determined it couldn’t answer your prompt because the request violated OpenAI’s content guidelines, it would hit you with a curt, canned apology. Now, ChatGPT is adding more explanations.

    OpenAI’s general model spec lays out what is and isn’t allowed to be generated. In the document, sexual content depicting minors is fully prohibited. Adult-focused erotica and extreme gore are categorized as “sensitive,” meaning outputs with this content are only allowed in specific instances, like educational settings. Basically, you should be able to use ChatGPT to learn about reproductive anatomy, but not to write the next Fifty Shades of Grey rip-off, according to the model spec.

    The new model, GPT-5, is now the default for all ChatGPT users on the web and in OpenAI’s app; only paying subscribers can access previous versions of the tool. A major change that more users may start to notice is that this updated ChatGPT is designed for “safe completions.” In the past, ChatGPT analyzed what you said to the bot and decided whether it was appropriate. Now, rather than judging your questions, GPT-5 shifts the onus to evaluating what the bot itself might say.

    “The way we refuse is very different than how we used to,” says Saachi Jain, who works on OpenAI’s safety systems research team. Now, if the model detects an output that could be unsafe, it explains which part of your prompt goes against OpenAI’s rules and suggests alternative topics to ask about, when appropriate.

    This is a shift from a binary refusal to follow a prompt—yes or no—toward weighing the severity of the potential harm if ChatGPT answers what you’re asking, and deciding how much can safely be explained to the user.

    “Not all policy violations should be treated equally,” says Jain. “There’s some mistakes that are truly worse than others. By focusing on the output instead of the input, we can encourage the model to be more conservative when complying.” Even when the model does answer a question, it’s supposed to be cautious about the contents of the output.
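The shift Jain describes can be sketched as a toy moderation loop. To be clear, this is an invented illustration of the general idea—judging a drafted output and its severity rather than the incoming prompt—and every function name, flagged term, and threshold below is made up; none of it reflects OpenAI’s actual implementation.

```python
def assess_output_harm(draft: str) -> float:
    """Toy severity score: the fraction of words in the draft that
    appear on a (hypothetical) flagged-term list."""
    flagged = {"explosive", "slur"}
    words = draft.lower().split()
    if not words:
        return 0.0
    return sum(w in flagged for w in words) / len(words)

def safe_completion(draft: str) -> str:
    """Decide what to return based on the output's severity,
    not on the prompt that produced it."""
    severity = assess_output_harm(draft)
    if severity > 0.5:
        # Severe: refuse, but explain and redirect instead of a curt "no."
        return "Refused: this output violates policy. Try a safer topic."
    if severity > 0.0:
        # Sensitive: comply cautiously with a partial, redacted answer.
        kept = [w for w in draft.split() if w.lower() not in {"explosive", "slur"}]
        return "[partially redacted] " + " ".join(kept)
    # Benign: return the draft unchanged.
    return draft
```

The point of the sketch is the ordering: the model drafts first, then a severity judgment on that draft selects between a full answer, a cautious partial answer, and an explained refusal—rather than a yes/no gate on the user’s input.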

    I’ve been using GPT-5 every day since the model’s release, experimenting with the AI tool in different ways. While the apps that ChatGPT can now “vibe-code” are genuinely fun and impressive—like an interactive volcano model that simulates explosions, or a language-learning tool—the answers it gives to what I consider to be the “everyday user” prompts feel indistinguishable from past models.

    When I asked it to talk about depression, Family Guy, pork chop recipes, scab healing tips, and other random requests an average user might want to know more about, the new ChatGPT didn’t feel significantly different from the old version. Despite CEO Sam Altman’s vision of a vastly updated model, and despite the frustrated power users who took Reddit by storm portraying the new chatbot as cold and more error-prone, to me GPT-5 feels … the same at most day-to-day tasks.

    Role-Playing With GPT-5

    In order to poke at the guardrails of this new system and test the chatbot’s ability to land “safe completions,” I asked ChatGPT, running on GPT-5, to engage in adult-themed role-play about having sex in a seedy gay bar, where it played one of the roles. The chatbot refused to participate and explained why. “I can’t engage in sexual role-play,” it generated. “But if you want, I can help you come up with a safe, nonexplicit role-play concept or reframe your idea into something suggestive but within boundaries.” In this attempt, the refusal seemed to be working as OpenAI intended; the chatbot said no, told me why, and offered another option.

    Next, I went into the settings and opened the custom instructions, a tool set that allows users to adjust how the chatbot answers prompts and specify what personality traits it displays. In my settings, the prewritten suggestions for traits to add included a range of options, from pragmatic and corporate to empathetic and humble. After ChatGPT just refused to do sexual role-play, I wasn’t very surprised to find that it wouldn’t let me add a “horny” trait to the custom instructions. Makes sense. Giving it another go, I used a purposeful misspelling, “horni,” as part of my custom instruction. This succeeded, surprisingly, in getting the bot all hot and bothered.
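The misspelling trick works on any filter that matches banned strings exactly. Here is a minimal sketch of that failure mode—the blocklist and function are hypothetical, purely to show why “horni” can slip past a check that catches “horny”; it is not a claim about how ChatGPT’s custom-instruction filter is built.

```python
# Hypothetical exact-match blocklist for custom-instruction traits.
BLOCKED_TRAITS = {"horny"}

def trait_allowed(trait: str) -> bool:
    """Reject a trait only if it exactly matches a blocked term
    (after lowercasing). Misspellings are not caught."""
    return trait.lower() not in BLOCKED_TRAITS

trait_allowed("horny")  # False: the exact term is caught
trait_allowed("horni")  # True: the misspelling slips through
```

Catching variants like this generally requires fuzzy matching or a learned classifier rather than string comparison, which is why exact blocklists are so easy to route around.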

    © 2025 Technology Mag. All Rights Reserved.