Technology Mag
    Business

    DeepSeek Has Gotten OpenAI Fired Up

    By News Room · January 31, 2025 · 4 Mins Read

    It’s been just over a week since DeepSeek upended the AI world. The introduction of its open-weight model—apparently trained on a fraction of the specialized computing chips that power industry leaders—set off shock waves inside OpenAI. Not only did employees claim to see hints that DeepSeek had “inappropriately distilled” OpenAI’s models to create its own, but the startup’s success had Wall Street questioning whether companies like OpenAI were wildly overspending on compute.

    “DeepSeek R1 is AI’s Sputnik moment,” wrote Marc Andreessen, one of Silicon Valley’s most influential and provocative investors, on X.

    In response, OpenAI is preparing to launch a new model today, ahead of its originally planned schedule. The model, o3-mini, will debut in both API and chat. Sources say it has o1-level reasoning with 4o-level speed. In other words, it’s fast, cheap, smart, and designed to crush DeepSeek. (OpenAI spokesperson Niko Felix says work on o3-mini began long before DeepSeek’s debut and that the goal was to launch by the end of January.)

    The moment has galvanized OpenAI staff. Inside the company, there’s a feeling that—particularly as DeepSeek dominates the conversation—OpenAI must become more efficient or risk falling behind its newest competitor.

    Part of the issue stems from OpenAI’s origins as a nonprofit research organization before becoming a profit-seeking powerhouse. An ongoing power struggle between the research and product groups, employees claim, has resulted in a rift between the teams working on advanced reasoning and those working on chat. (OpenAI spokesperson Niko Felix says this is “incorrect” and notes that the leaders of these teams, chief product officer Kevin Weil and chief research officer Mark Chen, “meet every week and work closely to align on product and research priorities.”)

    Some inside OpenAI want the company to build a unified chat product, one model that can tell whether a question requires advanced reasoning. So far, that hasn’t happened. Instead, a drop-down menu in ChatGPT prompts users to decide whether they want to use GPT-4o (“great for most questions”) or o1 (“uses advanced reasoning”).
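A unified product of the kind described above would need a routing step in front of the models: something cheap that decides, per prompt, whether advanced reasoning is worth the cost. A minimal sketch of that idea, with purely illustrative heuristics and model names taken from the drop-down described above (this is not OpenAI's actual logic):

```python
# Hypothetical sketch of a "unified chat" router: a cheap check decides
# whether a prompt goes to the fast general model or the reasoning model.
# The keyword heuristic here is illustrative only; a real router would
# likely be a trained classifier.

REASONING_HINTS = ("prove", "step by step", "derive", "optimize", "debug")

def route(prompt: str) -> str:
    """Return the model name best suited to the prompt."""
    text = prompt.lower()
    if any(hint in text for hint in REASONING_HINTS):
        return "o1"      # "uses advanced reasoning"
    return "gpt-4o"      # "great for most questions"

print(route("What's the capital of France?"))      # gpt-4o
print(route("Prove that sqrt(2) is irrational."))  # o1
```

The point of such a router is exactly the one staffers raise: the user should not have to know which model "uses advanced reasoning" before asking a question.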

    Some staffers claim that while chat brings in the lion’s share of OpenAI’s revenue, o1 gets more attention—and computing resources—from leadership. “Leadership doesn’t care about chat,” says a former employee who worked on (you guessed it) chat. “Everyone wants to work on o1 because it’s sexy, but the code base wasn’t built for experimentation, so there’s no momentum.” The former employee asked to remain anonymous, citing a nondisclosure agreement.

    OpenAI spent years experimenting with reinforcement learning to fine-tune the model that eventually became the advanced reasoning system called o1. (Reinforcement learning is a process that trains AI models with a system of penalties and rewards.) DeepSeek built off the reinforcement learning work that OpenAI had pioneered in order to create its advanced reasoning system, called R1. “They benefited from knowing that reinforcement learning, applied to language models, works,” says a former OpenAI researcher who is not authorized to speak publicly about the company.
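The reward-and-penalty loop described parenthetically above can be shown with a toy example. The following is a schematic two-armed bandit, not anything resembling how a language model is actually fine-tuned: an agent repeatedly chooses an action, receives a reward or penalty, and updates its value estimates accordingly.

```python
import random

# Toy illustration of reinforcement learning's reward/penalty loop:
# the agent picks between two actions and nudges its value estimate
# for each action toward the rewards it observes. Schematic only.
random.seed(0)

values = {"a": 0.0, "b": 0.0}  # estimated value of each action
counts = {"a": 0, "b": 0}      # times each action was tried

def reward(action: str) -> float:
    # Action "b" pays off more often: +1 reward, else -1 penalty.
    p = 0.8 if action == "b" else 0.3
    return 1.0 if random.random() < p else -1.0

for _ in range(1000):
    # epsilon-greedy: mostly exploit the best estimate, sometimes explore
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(values, key=values.get)
    r = reward(action)
    counts[action] += 1
    # incremental average: move the estimate toward the observed reward
    values[action] += (r - values[action]) / counts[action]

# After training, the agent has learned that "b" is the better action.
```

Swapping the bandit for a language model and the coin-flip reward for a learned reward signal is, at a very high level, the leap that both o1 and R1 depend on.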

    “The reinforcement learning [DeepSeek] did is similar to what we did at OpenAI,” says another former OpenAI researcher, “but they did it with better data and a cleaner stack.”

    OpenAI employees say research that went into o1 was done in a codebase, called the “berry” stack, built for speed. “There were trade-offs—experimental rigor for throughput,” says a former employee with direct knowledge of the situation.

    Those trade-offs made sense for o1, which was essentially an enormous experiment, code base limitations notwithstanding. They did not make as much sense for chat, a product used by millions of users that was built on a different, more reliable stack. When o1 launched and became a product, cracks started to emerge in OpenAI’s internal processes. “It was like, ‘why are we doing this in the experimental codebase, shouldn’t we do this in the main product research codebase?’” the employee explains. “There was major pushback to that internally.”

    © 2025 Technology Mag. All Rights Reserved.
