Technology Mag
    Business

    These Clues Hint at the True Nature of OpenAI’s Shadowy Q* Project

By News Room · December 1, 2023 · 3 Mins Read

    There are other clues to what Q* could be. The name may be an allusion to Q-learning, a form of reinforcement learning that involves an algorithm learning to solve a problem through positive or negative feedback, which has been used to create game-playing bots and to tune ChatGPT to be more helpful. Some have suggested that the name may also be related to the A* search algorithm, widely used to have a program find the optimal path to a goal.
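To make the Q-learning idea concrete, here is a minimal sketch of the textbook tabular algorithm: an agent in a five-state corridor learns, from reward feedback alone, that moving right reaches the goal. This is an illustration of the classic technique only; the environment, parameters, and code are invented for this example and imply nothing about what OpenAI's Q* actually is.

```python
import random

# Tabular Q-learning on a toy 1-D corridor of 5 states.
# Reaching state 4 yields reward +1; every other step yields 0.
N_STATES = 5
ACTIONS = [-1, +1]                 # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(200):               # training episodes
    s = 0
    while True:
        # Epsilon-greedy selection: explore with probability EPSILON,
        # otherwise take the action with the highest learned Q-value.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s, a) toward the observed reward
        # plus the discounted best value of the next state.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
        if done:
            break

# The greedy policy learned from feedback: move right from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

The "positive or negative feedback" the article describes is the reward term in the update rule: the agent never sees the goal's location directly, only the reward signal, yet the Q-table converges on the optimal behavior.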

    The Information throws another clue into the mix: “Sutskever’s breakthrough allowed OpenAI to overcome limitations on obtaining enough high-quality data to train new models,” its story says. “The research involved using computer-generated [data], rather than real-world data like text or images pulled from the internet, to train new models.” That appears to be a reference to the idea of training algorithms with so-called synthetic training data, which has emerged as a way to train more powerful AI models.

    Subbarao Kambhampati, a professor at Arizona State University who is researching the reasoning limitations of LLMs, thinks that Q* may involve using huge amounts of synthetic data, combined with reinforcement learning, to train LLMs for specific tasks such as simple arithmetic. Kambhampati notes that there is no guarantee the approach will generalize into something that can figure out how to solve any possible math problem.

    For more speculation on what Q* might be, read this post by a machine-learning scientist who pulls together the context and clues in impressive and logical detail. The TLDR version is that Q* could be an effort to use reinforcement learning and a few other techniques to improve a large language model's ability to solve tasks by reasoning through intermediate steps. Although that might make ChatGPT better at math conundrums, it's unclear whether it would mean such systems could evade human control.

    That OpenAI would try to use reinforcement learning to improve LLMs seems plausible because many of the company’s early projects, like video-game-playing bots, were centered on the technique. Reinforcement learning was also central to the creation of ChatGPT, because it can be used to make LLMs produce more coherent answers by asking humans to provide feedback as they converse with a chatbot. When WIRED spoke with Demis Hassabis, the CEO of Google DeepMind, earlier this year, he hinted that the company was trying to combine ideas from reinforcement learning with advances seen in large language models.

    Rounding up the available clues about Q*, it hardly sounds like a reason to panic. But then, it all depends on your personal P(doom) value—the probability you ascribe to the possibility that AI destroys humankind. Long before ChatGPT, OpenAI’s scientists and leaders were initially so freaked out by the development of GPT-2, a 2019 text generator that now seems laughably puny, that they said it could not be released publicly. Now the company offers free access to much more powerful systems.

    OpenAI refused to comment on Q*. Perhaps we will get more details when the company decides it’s time to share more results from its efforts to make ChatGPT not just good at talking but good at reasoning too.

    © 2025 Technology Mag. All Rights Reserved.