Technology Mag
    Business

    DeepSeek Has Gotten OpenAI Fired Up

By News Room · January 31, 2025 · 4 min read

    It’s been just over a week since DeepSeek upended the AI world. The introduction of its open-weight model—apparently trained on a fraction of the specialized computing chips that power industry leaders—set off shock waves inside OpenAI. Not only did employees claim to see hints that DeepSeek had “inappropriately distilled” OpenAI’s models to create its own, but the startup’s success had Wall Street questioning whether companies like OpenAI were wildly overspending on compute.
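Distillation, in the sense the employees allege, means training one model to imitate another's output distributions rather than learning from raw labels alone. The toy sketch below shows the core mechanic under deliberately simplified assumptions (a single input, three answer choices, hand-picked teacher logits); it is illustrative only, not a description of either company's pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution; higher temperature = softer."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student's distribution q is from the teacher's p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical teacher scores for one prompt with three candidate answers.
teacher_logits = [2.0, 0.5, -1.0]
T = 2.0  # temperature softens the targets, exposing the teacher's relative preferences
targets = softmax(teacher_logits, T)

# The student starts indifferent (all-zero logits) and nudges its logits toward
# the teacher's soft targets by gradient descent on the KL divergence.
student_logits = [0.0, 0.0, 0.0]
lr = 1.0
for _ in range(500):
    probs = softmax(student_logits, T)
    # Gradient of KL(targets || student) w.r.t. the student logits is (probs - targets) / T.
    student_logits = [z - lr * (p - t) / T
                      for z, p, t in zip(student_logits, probs, targets)]

final_kl = kl_divergence(targets, softmax(student_logits, T))
print(round(final_kl, 6))  # ~0: the student now mimics the teacher's distribution
```

The same idea scales up: replace the hand-picked logits with API responses from a stronger model and the three-way softmax with a full vocabulary, and you have the kind of imitation OpenAI's terms of service prohibit.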

    “DeepSeek R1 is AI’s Sputnik moment,” wrote Marc Andreessen, one of Silicon Valley’s most influential and provocative investors, on X.

    In response, OpenAI is preparing to launch a new model today, ahead of its originally planned schedule. The model, o3-mini, will debut in both API and chat. Sources say it has o1-level reasoning with 4o-level speed. In other words, it’s fast, cheap, smart, and designed to crush DeepSeek. (OpenAI spokesperson Niko Felix says work on o3-mini began long before DeepSeek’s debut and the goal was to launch by the end of January.)

    The moment has galvanized OpenAI staff. Inside the company, there’s a feeling that—particularly as DeepSeek dominates the conversation—OpenAI must become more efficient or risk falling behind its newest competitor.

    Part of the issue stems from OpenAI’s origins as a nonprofit research organization before becoming a profit-seeking powerhouse. An ongoing power struggle between the research and product groups, employees claim, has resulted in a rift between the teams working on advanced reasoning and those working on chat. (OpenAI spokesperson Niko Felix says this is “incorrect” and notes that the leaders of these teams, chief product officer Kevin Weil and chief research officer Mark Chen, “meet every week and work closely to align on product and research priorities.”)

    Some inside OpenAI want the company to build a unified chat product, one model that can tell whether a question requires advanced reasoning. So far, that hasn’t happened. Instead, a drop-down menu in ChatGPT prompts users to decide whether they want to use GPT-4o (“great for most questions”) or o1 (“uses advanced reasoning”).
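What the drop-down menu pushes onto the user, a unified product would decide automatically. The sketch below is a caricature of that routing step: real systems would use a learned classifier, and the backend names are hypothetical; the keyword heuristic here exists purely to make the idea concrete.

```python
# Signals that a prompt probably needs deliberate, multi-step reasoning.
# Purely illustrative; a production router would be a trained classifier.
REASONING_HINTS = ("prove", "step by step", "derive", "optimize", "debug", "why does")

def route(prompt: str) -> str:
    """Pick a (hypothetical) backend: fast chat model vs. slower reasoning model."""
    text = prompt.lower()
    if any(hint in text for hint in REASONING_HINTS) or len(text.split()) > 50:
        return "reasoning-model"  # slower, deliberate (o1-style)
    return "chat-model"           # fast, conversational (4o-style)

print(route("What's a good name for a cat?"))                    # → chat-model
print(route("Prove that the sum of two odd numbers is even."))   # → reasoning-model
```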

    Some staffers claim that while chat brings in the lion’s share of OpenAI’s revenue, o1 gets more attention—and computing resources—from leadership. “Leadership doesn’t care about chat,” says a former employee who worked on (you guessed it) chat. “Everyone wants to work on o1 because it’s sexy, but the code base wasn’t built for experimentation, so there’s no momentum.” The former employee asked to remain anonymous, citing a nondisclosure agreement.

    OpenAI spent years experimenting with reinforcement learning to fine-tune the model that eventually became the advanced reasoning system called o1. (Reinforcement learning is a process that trains AI models with a system of penalties and rewards.) DeepSeek built off the reinforcement learning work that OpenAI had pioneered in order to create its advanced reasoning system, called R1. “They benefited from knowing that reinforcement learning, applied to language models, works,” says a former OpenAI researcher who is not authorized to speak publicly about the company.
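That penalty-and-reward loop can be made concrete with a toy example. The sketch below runs tabular Q-learning, a classic reinforcement learning algorithm far simpler than anything a frontier lab would use on a language model, on a five-state corridor where only the goal state pays out; every number in it is illustrative.

```python
import random

# A toy 5-state corridor: start at state 0, reward 1.0 only at state 4 (the goal).
# Actions: 0 = left, 1 = right. This is the penalty/reward loop from the article,
# scaled down to something that runs in milliseconds.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3  # learning rate, discount, exploration rate

def step(state, action):
    """Environment: move left or right; reward only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action] value estimates
rng = random.Random(0)

for _ in range(500):  # episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if rng.random() < EPSILON:
            action = rng.randrange(2)
        else:
            action = max((0, 1), key=lambda a: q[state][a])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])
        state = nxt

# After training, the greedy policy heads right (toward the reward) from every state.
print([max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES - 1)])
```

Swap the corridor for a language model's token choices and the hand-coded reward for a learned or verifiable one, and you have the family of methods behind both o1 and R1, at vastly greater scale.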

    “The reinforcement learning [DeepSeek] did is similar to what we did at OpenAI,” says another former OpenAI researcher, “but they did it with better data and cleaner stack.”

    OpenAI employees say the research that went into o1 was done in a code base called the “berry” stack, built for speed. “There were trade-offs—experimental rigor for throughput,” says a former employee with direct knowledge of the situation.

    Those trade-offs made sense for o1, which was essentially an enormous experiment, code base limitations notwithstanding. They did not make as much sense for chat, a product used by millions of people that was built on a different, more reliable stack. When o1 launched and became a product, cracks started to emerge in OpenAI’s internal processes. “It was like, ‘why are we doing this in the experimental codebase, shouldn’t we do this in the main product research codebase?’” the employee explains. “There was major pushback to that internally.”
