Technology Mag

    Business

    OpenAI Offers a Peek Inside the Guts of ChatGPT

    By News Room · June 7, 2024 · 3 Mins Read

    ChatGPT developer OpenAI’s approach to building artificial intelligence came under fire this week from former employees who accuse the company of taking unnecessary risks with technology that could become harmful.

    Today, OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper, researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devise a method of identifying how the model stores certain concepts—including those that might cause an AI system to misbehave.

    Although the research makes OpenAI’s work on keeping AI in check more visible, it also highlights recent turmoil at the company. The new research was performed by the recently disbanded “superalignment” team at OpenAI that was dedicated to studying the technology’s long-term risks.

    The former group’s coleads, Ilya Sutskever and Jan Leike—both of whom have left OpenAI—are named as coauthors. Sutskever, a cofounder of OpenAI and formerly chief scientist, was among the board members who voted to fire CEO Sam Altman last November, triggering a chaotic few days that culminated in Altman’s return as leader.

    ChatGPT is powered by a family of so-called large language models called GPT, based on an approach to machine learning known as artificial neural networks. These mathematical networks have shown great power to learn useful tasks by analyzing example data, but their workings cannot be easily scrutinized as conventional computer programs can. The complex interplay between the layers of “neurons” within an artificial neural network makes reverse engineering why a system like ChatGPT came up with a particular response hugely challenging.

    “Unlike with most human creations, we don’t really understand the inner workings of neural networks,” the researchers behind the work wrote in an accompanying blog post. Some prominent AI researchers believe that the most powerful AI models, including ChatGPT, could perhaps be used to design chemical or biological weapons and coordinate cyberattacks. A longer-term concern is that AI models may choose to hide information or act in harmful ways in order to achieve their goals.

    OpenAI’s new paper outlines a technique that lessens the mystery a little, by identifying patterns that represent specific concepts inside a machine learning system with help from an additional machine learning model. The key innovation lies in making that additional network, the one used to peer inside the system of interest and pick out concepts, more efficient.
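The paper describes its own specific method in detail; as a rough illustration of the general idea only, the sketch below shows a toy sparse autoencoder in numpy that decomposes a model activation into a handful of candidate "concept" directions. Every name, size, and weight here is invented for illustration and is not drawn from OpenAI's released code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a small activation expanded into many candidate concepts.
d_model, d_features, k = 64, 512, 8

# Random stand-ins for the helper network's learned encoder/decoder weights.
W_enc = rng.normal(scale=0.1, size=(d_model, d_features))
W_dec = rng.normal(scale=0.1, size=(d_features, d_model))

def encode(activation):
    """Map a model activation to a sparse vector of concept strengths.
    Keeping only the k strongest entries enforces sparsity, so each
    surviving entry can be read as one candidate 'concept'."""
    scores = activation @ W_enc
    threshold = np.partition(scores, -k)[-k]  # k-th largest score
    return np.where(scores >= threshold, scores, 0.0)

def decode(features):
    """Reconstruct the original activation from the sparse concepts."""
    return features @ W_dec

activation = rng.normal(size=d_model)   # stand-in for a ChatGPT activation
features = encode(activation)           # sparse: at most k nonzero entries
reconstruction = decode(features)       # approximation of the activation
```

In a real training setup the weights would be learned by minimizing the reconstruction error, so that the few active entries come to represent recurring, human-interpretable patterns in the model's activations.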

    OpenAI proved out the approach by identifying patterns that represent concepts inside GPT-4, one of its largest AI models. The company released code related to the interpretability work, as well as a visualization tool that can be used to see how words in different sentences activate concepts, including profanity and erotic content, in GPT-4 and another model. Knowing how a model represents certain concepts could be a step toward being able to dial down those associated with unwanted behavior, to keep an AI system on the rails. It could also make it possible to tune an AI system to favor certain topics or ideas.
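Continuing the toy example above, "dialing down" a concept could in principle be as simple as zeroing that concept's entry before reconstructing the activation; again, this is a hypothetical sketch, not OpenAI's actual mechanism.

```python
import numpy as np

def suppress_concept(features, concept_index):
    """Return a copy of a sparse concept vector with one concept zeroed,
    a toy stand-in for damping an unwanted behavior."""
    damped = features.copy()
    damped[concept_index] = 0.0
    return damped

# Pretend three concepts fired for some input (indices are arbitrary).
features = np.zeros(512)
features[[3, 41, 200]] = [1.5, 0.7, 2.2]

damped = suppress_concept(features, 200)  # concept 200 no longer fires
```

Decoding the damped vector instead of the original would then steer the reconstructed activation away from the suppressed concept.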

