    Business

    OpenAI’s Ilya Sutskever Has a Plan for Keeping Super-Intelligent AI in Check

By News Room | December 18, 2023 | 3 Mins Read

    OpenAI was founded on a promise to build artificial intelligence that benefits all of humanity—even when that AI becomes considerably smarter than its creators. Since the debut of ChatGPT last year and during the company’s recent governance crisis, its commercial ambitions have been more prominent. Now, the company says a new research group working on wrangling the supersmart AIs of the future is starting to bear fruit.

    “AGI is very fast approaching,” says Leopold Aschenbrenner, a researcher at OpenAI involved with the Superalignment research team established in July. “We’re gonna see superhuman models, they’re gonna have vast capabilities, and they could be very, very dangerous, and we don’t yet have the methods to control them.” OpenAI has said it will dedicate a fifth of its available computing power to the Superalignment project.

    A research paper released by OpenAI today touts results from experiments designed to test a way to let an inferior AI model guide the behavior of a much smarter one without making it less smart. Although the technology involved is far from surpassing the flexibility of humans, the scenario was designed to stand in for a future time when humans must work with AI systems more intelligent than themselves.

    OpenAI’s researchers examined the process, called supervision, which is used to tune systems like GPT-4, the large language model behind ChatGPT, to be more helpful and less harmful. Currently this involves humans giving the AI system feedback on which answers are good and which are bad. As AI advances, researchers are exploring how to automate this process to save time—but also because they think it may become impossible for humans to provide useful feedback as AI becomes more powerful.
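    As a rough illustration of what this kind of feedback-based supervision can look like (not OpenAI's actual pipeline), the sketch below fits a toy Bradley-Terry-style preference scorer from simulated pairwise "this answer is better" judgments. The features, the simulated annotator, and every parameter here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for feedback-based supervision. A hidden "human preference"
# direction decides which of two candidate answers is better; a simple
# reward model is then fit to those pairwise judgments. Everything here
# (features, annotator, dimensions) is an assumption for illustration.
dim = 8
w_human = rng.normal(size=dim)  # hidden preference the annotator follows

def human_prefers(a, b):
    """Simulated annotator: returns 0 if answer `a` is judged better, else 1."""
    return 0 if a @ w_human >= b @ w_human else 1

# Collect pairwise comparisons between candidate-answer feature vectors.
pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(1000)]
labels = [human_prefers(a, b) for a, b in pairs]

# Fit a Bradley-Terry-style reward model with plain gradient descent:
# push the preferred answer's score above the rejected answer's score.
w_reward = np.zeros(dim)
lr = 0.5
for _ in range(300):
    grad = np.zeros(dim)
    for (a, b), lbl in zip(pairs, labels):
        better, worse = (a, b) if lbl == 0 else (b, a)
        margin = (better - worse) @ w_reward
        p = 1.0 / (1.0 + np.exp(-margin))   # P(preferred answer wins)
        grad += (p - 1.0) * (better - worse)
    w_reward -= lr * grad / len(pairs)

# Check how often the learned scorer agrees with fresh simulated judgments.
test_pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(200)]
correct = sum(
    human_prefers(a, b) == (0 if a @ w_reward >= b @ w_reward else 1)
    for a, b in test_pairs
)
print(f"reward model agrees with the simulated human on {correct}/200 held-out pairs")
```

    The automation question raised in the article is essentially about what happens when the judgments feeding a loop like this can no longer come from humans at all.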

    In a control experiment that used OpenAI’s GPT-2 text generator, first released in 2019, to teach GPT-4, the more recent system became less capable and more similar to the inferior system. The researchers tested two ideas for fixing this. One involved training progressively larger models to reduce the performance lost at each step. In the other, the team added an algorithmic tweak to GPT-4 that allowed the stronger model to follow the guidance of the weaker model without blunting its performance as much as would normally happen. This was more effective, although the researchers admit that these methods do not guarantee that the stronger model will behave perfectly, and they describe it as a starting point for further research.
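    The paper’s actual method is more involved, but the general shape of that kind of tweak can be sketched with a toy linear “student”: mix the weak supervisor’s noisy labels with the student’s own hardened predictions, so that confident disagreements with the weak teacher are not penalized as hard. The task, the noise rate, and the mixing weight `alpha` below are assumptions made for illustration, not the reported setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weak-to-strong setup: the true concept is a linear rule; the "weak
# supervisor" (standing in for GPT-2-level labels) flips 25% of the labels.
# Sizes, noise rate, and alpha are illustrative assumptions.
dim = 20
w_true = rng.normal(size=dim)
X = rng.normal(size=(2000, dim))
y_true = (X @ w_true > 0).astype(float)
flip = rng.random(len(y_true)) < 0.25
y_weak = np.where(flip, 1.0 - y_true, y_true)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_student(targets, alpha=0.0, steps=500, lr=0.5):
    """Logistic 'strong student' fit to weak labels.

    With alpha > 0, part of the cross-entropy target is replaced by the
    student's own hardened predictions, a confidence-style auxiliary term
    that lets it override the weak supervisor where it disagrees strongly.
    """
    w = np.zeros(dim)
    for _ in range(steps):
        p = sigmoid(X @ w)
        hardened = (p > 0.5).astype(float)          # student's confident guess
        mixed = (1.0 - alpha) * targets + alpha * hardened
        grad = X.T @ (p - mixed) / len(X)           # cross-entropy gradient
        w -= lr * grad
    return w

def accuracy(w):
    return float(((sigmoid(X @ w) > 0.5) == (y_true > 0.5)).mean())

w_naive = train_student(y_weak, alpha=0.0)  # plain training on weak labels
w_aux = train_student(y_weak, alpha=0.5)    # with the auxiliary tweak

print(f"student trained on weak labels only: {accuracy(w_naive):.3f} accuracy")
print(f"student with confidence-style term:  {accuracy(w_aux):.3f} accuracy")
```

    In the real experiments the student is GPT-4 and the weak labels come from a GPT-2-level model; the sketch is only meant to show the loss-mixing idea, not to reproduce those results.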

    “It’s great to see OpenAI proactively addressing the problem of controlling superhuman AIs,” says Dan Hendrycks, director of the Center for AI Safety, a nonprofit in San Francisco dedicated to managing AI risks. “We’ll need many years of dedicated effort to meet this challenge.”
