    Business

    DeepSeek Has Gotten OpenAI Fired Up

    By News Room · January 31, 2025 · 4 Mins Read

    It’s been just over a week since DeepSeek upended the AI world. The introduction of its open-weight model—apparently trained on a fraction of the specialized computing chips that power industry leaders—set off shock waves inside OpenAI. Not only did employees claim to see hints that DeepSeek had “inappropriately distilled” OpenAI’s models to create its own, but the startup’s success had Wall Street questioning whether companies like OpenAI were wildly overspending on compute.

    “DeepSeek R1 is AI’s Sputnik moment,” wrote Marc Andreessen, one of Silicon Valley’s most influential and provocative investors, on X.

    In response, OpenAI is preparing to launch a new model today, ahead of its originally planned schedule. The model, o3-mini, will debut in both the API and chat. Sources say it has o1-level reasoning with 4o-level speed. In other words, it’s fast, cheap, smart, and designed to crush DeepSeek. (OpenAI spokesperson Niko Felix says work on o3-mini began long before DeepSeek’s debut and that the goal was to launch by the end of January.)

    The moment has galvanized OpenAI staff. Inside the company, there’s a feeling that—particularly as DeepSeek dominates the conversation—OpenAI must become more efficient or risk falling behind its newest competitor.

    Part of the issue stems from OpenAI’s origins as a nonprofit research organization before becoming a profit-seeking powerhouse. An ongoing power struggle between the research and product groups, employees claim, has resulted in a rift between the teams working on advanced reasoning and those working on chat. (OpenAI spokesperson Niko Felix says this is “incorrect” and notes that the leaders of these teams, chief product officer Kevin Weil and chief research officer Mark Chen, “meet every week and work closely to align on product and research priorities.”)

    Some inside OpenAI want the company to build a unified chat product, one model that can tell whether a question requires advanced reasoning. So far, that hasn’t happened. Instead, a drop-down menu in ChatGPT prompts users to decide whether they want to use GPT-4o (“great for most questions”) or o1 (“uses advanced reasoning”).
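
    To see how far that is from a unified product, consider a minimal sketch of what routing looks like when it is bolted on from the outside, using the public OpenAI Python SDK. The keyword heuristic and the route_model and ask helpers below are invented for illustration; they are not how ChatGPT’s drop-down, or any future unified model, actually decides.

    # Illustrative only: a naive client-side router that mimics the manual
    # model choice described above. The keyword heuristic is a made-up
    # stand-in, not how OpenAI's products route requests.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    REASONING_HINTS = ("prove", "derive", "step by step", "debug", "optimize")

    def route_model(question: str) -> str:
        """Crude heuristic (an assumption): hint-laden or long questions go to o1."""
        text = question.lower()
        if any(hint in text for hint in REASONING_HINTS) or len(text) > 500:
            return "o1"       # "uses advanced reasoning"
        return "gpt-4o"       # "great for most questions"

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model=route_model(question),
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    print(ask("Plan a weekend in Lisbon."))                       # routes to gpt-4o
    print(ask("Prove that the square root of 2 is irrational."))  # routes to o1

    The point of the sketch is simply that, today, choosing between “great for most questions” and “uses advanced reasoning” is a decision the caller makes, not the model.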

    Some staffers claim that while chat brings in the lion’s share of OpenAI’s revenue, o1 gets more attention—and computing resources—from leadership. “Leadership doesn’t care about chat,” says a former employee who worked on (you guessed it) chat. “Everyone wants to work on o1 because it’s sexy, but the code base wasn’t built for experimentation, so there’s no momentum.” The former employee asked to remain anonymous, citing a nondisclosure agreement.

    OpenAI spent years experimenting with reinforcement learning to fine-tune the model that eventually became the advanced reasoning system called o1. (Reinforcement learning is a process that trains AI models with a system of penalties and rewards.) DeepSeek built off the reinforcement learning work that OpenAI had pioneered in order to create its advanced reasoning system, called R1. “They benefited from knowing that reinforcement learning, applied to language models, works,” says a former OpenAI researcher who is not authorized to speak publicly about the company.

    “The reinforcement learning [DeepSeek] did is similar to what we did at OpenAI,” says another former OpenAI researcher, “but they did it with better data and cleaner stack.”
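
    For readers unfamiliar with the technique, the reward-and-penalty loop described above can be shown with a deliberately tiny, generic example: an epsilon-greedy bandit that learns which of two actions pays off. This is textbook reinforcement learning in miniature, not the recipe OpenAI or DeepSeek applied to language models, and the action names are hypothetical.

    # A generic reward/penalty loop (epsilon-greedy bandit), purely illustrative.
    # This is not the RL recipe used for o1 or R1; it only shows the core idea:
    # actions that earn rewards get chosen more often over time.
    import random

    ACTIONS = ["concise_answer", "step_by_step_answer"]   # hypothetical actions
    TRUE_REWARD_PROB = {"concise_answer": 0.3, "step_by_step_answer": 0.7}

    value_estimate = {a: 0.0 for a in ACTIONS}   # learned value of each action
    counts = {a: 0 for a in ACTIONS}
    epsilon = 0.1                                # exploration rate

    for step in range(10_000):
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=value_estimate.get)

        # Reward of 1 (success) or -1 (penalty) drawn from the hidden environment.
        reward = 1.0 if random.random() < TRUE_REWARD_PROB[action] else -1.0

        # Incremental average: nudge the estimate toward the observed reward.
        counts[action] += 1
        value_estimate[action] += (reward - value_estimate[action]) / counts[action]

    print(value_estimate)   # the higher-reward action ends up with the higher estimate

    The real systems differ enormously in scale and in how rewards are defined, but the core loop of act, observe a reward or penalty, and update is the same.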

    OpenAI employees say research that went into o1 was done in a code base, called the “berry” stack, built for speed. “There were trade-offs—experimental rigor for throughput,” says a former employee with direct knowledge of the situation.

    Those trade-offs made sense for o1, which was essentially an enormous experiment, code base limitations notwithstanding. They did not make as much sense for chat, a product used by millions of users that was built on a different, more reliable stack. When o1 launched and became a product, cracks started to emerge in OpenAI’s internal processes. “It was like, ‘why are we doing this in the experimental codebase, shouldn’t we do this in the main product research codebase?’” the employee explains. “There was major pushback to that internally.”
