Technology Mag

    OpenAI is plagued by safety concerns

By News Room | July 12, 2024 | 5 min read

OpenAI is a leader in the race to develop AI as intelligent as a human. Yet employees continue to appear in the press and on podcasts to voice grave concerns about safety at the $80 billion nonprofit research lab. The latest comes from The Washington Post, where an anonymous source claimed OpenAI rushed through safety tests and celebrated its product before ensuring its safety.

    “They planned the launch after-party prior to knowing if it was safe to launch,” an anonymous employee told The Washington Post. “We basically failed at the process.”

    Safety issues loom large at OpenAI — and seem to just keep coming. Current and former employees at OpenAI recently signed an open letter demanding better safety and transparency practices from the startup, not long after its safety team was dissolved following the departure of cofounder Ilya Sutskever. Jan Leike, a key OpenAI researcher, resigned shortly after, claiming in a post that “safety culture and processes have taken a backseat to shiny products” at the company.

Safety is core to OpenAI’s charter, which includes a clause committing OpenAI to assist other organizations in advancing safety if AGI is reached at a competitor, rather than continuing to compete. The company claims to be dedicated to solving the safety problems inherent to such a large, complex system. OpenAI even keeps its proprietary models private, rather than open (drawing jabs and lawsuits), for the sake of safety. The warnings make it sound as though safety has been deprioritized despite being paramount to the company’s culture and structure.


    “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” OpenAI spokesperson Taya Christianson said in a statement to The Verge. “Rigorous debate is critical given the significance of this technology, and we will continue to engage with governments, civil society and other communities around the world in service of our mission.” 

    The stakes around safety, according to OpenAI and others studying the emergent technology, are immense. “Current frontier AI development poses urgent and growing risks to national security,” a report commissioned by the US State Department in March said. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”

    The alarm bells at OpenAI also follow the boardroom coup last year that briefly ousted CEO Sam Altman. The board said he was removed due to a failure to be “consistently candid in his communications,” leading to an investigation that did little to reassure the staff.

    OpenAI spokesperson Lindsey Held told the Post the GPT-4o launch “didn’t cut corners” on safety, but another unnamed company representative acknowledged that the safety review timeline was compressed to a single week. We “are rethinking our whole way of doing it,” the anonymous representative told the Post. “This [was] just not the best way to do it.”


In the face of rolling controversies (remember the Her incident?), OpenAI has attempted to quell fears with a few well-timed announcements. This week, it announced it is teaming up with Los Alamos National Laboratory to explore how advanced AI models, such as GPT-4o, can safely aid bioscientific research, repeatedly pointing in the same announcement to Los Alamos’s own safety record. The next day, an anonymous spokesperson told Bloomberg that OpenAI had created an internal scale to track the progress its large language models are making toward artificial general intelligence.

This week’s safety-focused announcements from OpenAI appear to be defensive window dressing in the face of growing criticism of its safety practices. It’s clear that OpenAI is in the hot seat, but public relations efforts alone won’t suffice to safeguard society. What truly matters is the potential impact on those beyond the Silicon Valley bubble if OpenAI continues to fail to develop AI with strict safety protocols, as insiders claim it has: the average person has no say in the development of privatized AGI, yet no choice in how protected they’ll be from OpenAI’s creations.

    “AI tools can be revolutionary,” FTC chair Lina Khan told Bloomberg in November. But “as of right now,” she said, there are concerns that “the critical inputs of these tools are controlled by a relatively small number of companies.”

If the numerous claims about its safety protocols are accurate, they raise serious questions about OpenAI’s fitness as steward of AGI, a role the organization has essentially assigned to itself. Allowing one group in San Francisco to control potentially society-altering technology is cause for concern, and the demand for transparency and safety, even within OpenAI’s own ranks, is more urgent than ever.
