    OpenAI Employees Warn of a Culture of Risk and Retaliation

By News Room · June 5, 2024 · 3 Mins Read

A group of current and former OpenAI employees has issued a public letter warning that the company and its rivals are building artificial intelligence with undue risk, without sufficient oversight, and while muzzling employees who might witness irresponsible activities.

    “These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” reads the letter published at righttowarn.ai. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable.”

    The letter calls for not just OpenAI but all AI companies to commit to not punishing employees who speak out about their activities. It also calls for companies to establish “verifiable” ways for workers to provide anonymous feedback on their activities. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the letter reads. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry.”

OpenAI came under criticism last month after a Vox article revealed that the company had threatened to claw back employees’ equity if they did not sign non-disparagement agreements forbidding them from criticizing the company or even mentioning the existence of such an agreement. OpenAI’s CEO, Sam Altman, said on X recently that he was unaware of such arrangements and that the company had never clawed back anyone’s equity. Altman also said the clause would be removed, freeing employees to speak out.


    Got a Tip?

    Are you a current or former employee at OpenAI? We’d like to hear from you. Using a nonwork phone or computer, contact Will Knight at [email protected] or securely on Signal at wak.01.


    OpenAI has also recently changed its approach to managing safety. Last month, an OpenAI research group responsible for assessing and countering the long-term risks posed by the company’s more powerful AI models was effectively dissolved after several prominent figures left and the remaining members of the team were absorbed into other groups. A few weeks later, the company announced that it had created a Safety and Security Committee, led by Altman and other board members.

    Last November, Altman was fired by OpenAI’s board for allegedly failing to disclose information and deliberately misleading them. After a very public tussle, Altman returned to the company and most of the board was ousted.

    “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” said OpenAI spokesperson Liz Bourgeois in a statement. “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”

The letter’s signatories include people who worked on safety and governance at OpenAI, current employees who signed anonymously, and researchers who currently work at rival AI companies. It was also endorsed by several big-name AI researchers, including Geoffrey Hinton and Yoshua Bengio, who both won the Turing Award for pioneering AI research, and Stuart Russell, a leading expert on AI safety.
