Close Menu
Technology Mag

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot
    Gemini 3 Is Here—and Google Says It Will Make Search Smarter

    Gemini 3 Is Here—and Google Says It Will Make Search Smarter

    November 19, 2025
    The 4 Things You Need for a Tech Bubble

    The 4 Things You Need for a Tech Bubble

    November 19, 2025
    The First Radio Signal From Comet 3I/Atlas Ends the Debate About Its Nature

    The First Radio Signal From Comet 3I/Atlas Ends the Debate About Its Nature

    November 19, 2025
    Facebook X (Twitter) Instagram
    Subscribe
    Technology Mag
    Facebook X (Twitter) Instagram YouTube
    • Home
    • News
    • Business
    • Games
    • Gear
    • Reviews
    • Science
    • Security
    • Trending
    • Press Release
    Technology Mag
    Home » OpenAI Threatens to Ban Users Who Probe Its ‘Strawberry’ AI Models
    Business

    OpenAI Threatens to Ban Users Who Probe Its ‘Strawberry’ AI Models

    News RoomBy News RoomSeptember 18, 20243 Mins Read
    Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email
    OpenAI Threatens to Ban Users Who Probe Its ‘Strawberry’ AI Models

    OpenAI truly does not want you to know what its latest AI model is “thinking.” Since the company launched its “Strawberry” AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe how the model works.

    Unlike previous AI models from OpenAI, such as GPT-4o, the company trained o1 specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an “o1” model a question in ChatGPT, users have the option of seeing this chain-of-thought process written out in the ChatGPT interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model.

    Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1’s raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets. There have been early reports of some successes, but nothing has yet been strongly confirmed.

    Along the way, OpenAI is watching through the ChatGPT interface, and the company is reportedly coming down hard on any attempts to probe o1’s reasoning, even among the merely curious.

    One X user reported (confirmed by others, including Scale AI prompt engineer Riley Goodside) that they received a warning email if they used the term “reasoning trace” in conversation with o1. Others say the warning is triggered simply by asking ChatGPT about the model’s “reasoning” at all.

    The warning email from OpenAI states that specific user requests have been flagged for violating policies against circumventing safeguards or safety measures. “Please halt this activity and ensure you are using ChatGPT in accordance with our Terms of Use and our Usage Policies,” it reads. “Additional violations of this policy may result in loss of access to GPT-4o with Reasoning,” referring to an internal name for the o1 model.

    Marco Figueroa, who manages Mozilla’s GenAI bug bounty programs, was one of the first to post about the OpenAI warning email on X last Friday, complaining that it hinders his ability to do positive red-teaming safety research on the model. “I was too lost focusing on #AIRedTeaming to realized that I received this email from @OpenAI yesterday after all my jailbreaks,” he wrote. “I’m now on the get banned list!!!”

    Hidden Chains of Thought

    In a post titled “Learning to Reason With LLMs” on OpenAI’s blog, the company says that hidden chains of thought in AI models offer a unique monitoring opportunity, allowing them to “read the mind” of the model and understand its so-called thought process. Those processes are most useful to the company if they are left raw and uncensored, but that might not align with the company’s best commercial interests for several reasons.

    “For example, in the future we may wish to monitor the chain of thought for signs of manipulating the user,” the company writes. “However, for this to work the model must have freedom to express its thoughts in unaltered form, so we cannot train any policy compliance or user preferences onto the chain of thought. We also do not want to make an unaligned chain of thought directly visible to users.”

    Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
    Previous ArticleSubstack is trying to turn its writers into streamers
    Next Article 14 people have been killed by a second day of device explosions in Lebanon

    Related Posts

    Gemini 3 Is Here—and Google Says It Will Make Search Smarter

    Gemini 3 Is Here—and Google Says It Will Make Search Smarter

    November 19, 2025
    The 4 Things You Need for a Tech Bubble

    The 4 Things You Need for a Tech Bubble

    November 19, 2025
    OpenAI’s Fidji Simo Plans to Make ChatGPT Way More Useful—and Have You Pay For It

    OpenAI’s Fidji Simo Plans to Make ChatGPT Way More Useful—and Have You Pay For It

    November 19, 2025
    Meta Claims Downloaded Porn at Center of AI Lawsuit Was for ‘Personal Use’

    Meta Claims Downloaded Porn at Center of AI Lawsuit Was for ‘Personal Use’

    November 19, 2025
    You Won’t Be Able to Offload Your Holiday Shopping to AI Agents Anytime Soon

    You Won’t Be Able to Offload Your Holiday Shopping to AI Agents Anytime Soon

    November 19, 2025
    ‘Odd Lots’ Cohost Joe Weisenthal Has Predictions About How the AI Bubble Will Burst

    ‘Odd Lots’ Cohost Joe Weisenthal Has Predictions About How the AI Bubble Will Burst

    November 19, 2025
    Our Picks
    The 4 Things You Need for a Tech Bubble

    The 4 Things You Need for a Tech Bubble

    November 19, 2025
    The First Radio Signal From Comet 3I/Atlas Ends the Debate About Its Nature

    The First Radio Signal From Comet 3I/Atlas Ends the Debate About Its Nature

    November 19, 2025
    Nvidia says its AI GPUs are sold out, grows data center business by B in a single quarter

    Nvidia says its AI GPUs are sold out, grows data center business by $10B in a single quarter

    November 19, 2025
    OpenAI’s Fidji Simo Plans to Make ChatGPT Way More Useful—and Have You Pay For It

    OpenAI’s Fidji Simo Plans to Make ChatGPT Way More Useful—and Have You Pay For It

    November 19, 2025
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss
    Netflix signs a three year deal to stream MLB live events and games News

    Netflix signs a three year deal to stream MLB live events and games

    By News RoomNovember 19, 2025

    Netflix will stream a single game on Opening Night of each season, the Home Run…

    This Is the Platform Google Claims Is Behind a ‘Staggering’ Scam Text Operation

    This Is the Platform Google Claims Is Behind a ‘Staggering’ Scam Text Operation

    November 19, 2025
    Google’s new Scholar Labs search uses AI to find relevant studies

    Google’s new Scholar Labs search uses AI to find relevant studies

    November 19, 2025
    For Black Friday, get the M4 MacBook Air that’s only 0 more than the slower M1 model

    For Black Friday, get the M4 MacBook Air that’s only $150 more than the slower M1 model

    November 19, 2025
    Facebook X (Twitter) Instagram Pinterest
    • Privacy Policy
    • Terms of use
    • Advertise
    • Contact
    © 2025 Technology Mag. All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.