Technology Mag

    Business

    OpenAI’s Boardroom Drama Could Mess Up Your Future

By News Room · November 30, 2023 · 5 Mins Read

In June I had a conversation with chief scientist Ilya Sutskever at OpenAI’s headquarters, while reporting WIRED’s October cover story. Among the topics we discussed was the unusual structure of the company.

OpenAI began as a nonprofit research lab whose mission was to develop artificial intelligence on par with or beyond human level—termed artificial general intelligence, or AGI—in a safe way. The company discovered a promising path in large language models that generate strikingly fluid text, but developing and implementing those models required huge amounts of computing infrastructure and mountains of cash. This led OpenAI to create a commercial entity to draw outside investors, and it netted a major partner: Microsoft. Virtually everyone in the company worked for this new for-profit arm. But limits were placed on the company’s commercial life. The profit delivered to investors was to be capped—for the first backers at 100 times what they put in—after which OpenAI would revert to a pure nonprofit. The whole shebang was governed by the original nonprofit’s board, which answered only to the goals of the original mission and maybe God.

    Sutskever did not appreciate it when I joked that the bizarre org chart that mapped out this relationship looked like something a future GPT might come up with when prompted to design a tax dodge. “We are the only company in the world which has a capped profit structure,” he admonished me. “Here is the reason it makes sense: If you believe, like we do, that if we succeed really well, then these GPUs are going to take my job and your job and everyone’s jobs, it seems nice if that company would not make truly unlimited amounts of returns.” In the meantime, to make sure that the profit-seeking part of the company doesn’t shirk its commitment to ensuring the AI doesn’t get out of control, there’s that board, keeping an eye on things.

    This would-be guardian of humanity is the same board that fired Sam Altman last Friday, saying that it no longer had confidence in the CEO because “he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” No examples of that alleged behavior were provided, and almost no one at the company knew about the firing until just before it was publicly announced. Microsoft CEO Satya Nadella and other investors got no advance notice. The four directors, representing a majority of the six-person board, also kicked OpenAI president and chairman Greg Brockman off the board. Brockman quickly resigned.

After speaking to someone familiar with the board’s thinking, it appears to me that in firing Altman the directors believed they were executing their mission of making sure the company develops powerful AI safely—which was its sole reason for existing. Increasing profits or ChatGPT usage, maintaining workplace comity, and keeping Microsoft and other investors happy were not their concern. In the view of directors Adam D’Angelo, Helen Toner, and Tasha McCauley—and Sutskever—Altman didn’t deal straight with them. Bottom line: The board no longer trusted Altman to pursue OpenAI’s mission. If the board can’t trust the CEO, how can it protect or even monitor progress on the mission?

I can’t say whether Altman’s conduct truly endangered OpenAI’s mission, but I do know this: The board seems to have missed the possibility that a poorly explained execution of a beloved and charismatic leader might harm that mission. The directors appear to have thought that they could give Altman his walking papers and unfussily slot in a replacement. Instead, the consequences were immediate and volcanic. Altman, already something of a cult hero, became even more revered in this new narrative. He did little or nothing to dissuade the outcry that followed. To the board, Altman’s effort to reclaim his post and the employee revolt of the past few days were a sort of vindication that it had been right to dismiss him. Clever Sam is still up to something! Meanwhile, all of Silicon Valley blew up, tarnishing OpenAI’s status, maybe permanently.

Altman’s fingerprints do not appear on the open letter released Monday morning and signed by more than 95 percent of OpenAI’s roughly 770 employees, which says the directors are “incapable of overseeing OpenAI.” It says that if the board members don’t reinstate Altman and resign, the workers who signed may quit and join a new advanced AI research division at Microsoft, formed by Altman and Brockman. At first, this threat did not seem to dent the resolve of the directors, who apparently felt as if they were being asked to negotiate with terrorists. Presumably one director felt differently: Sutskever, who now says he regrets his actions. His signature appeared on the you-quit-or-we’ll-quit letter. His distrust of Altman apparently deleted, Sutskever and Altman have been sending love notes to each other on X, the platform owned by Elon Musk, a fellow OpenAI cofounder now estranged from the project.
