
    OpenAI says its latest GPT-4o model is ‘medium’ risk

    By News Room | August 9, 2024

    OpenAI has released its GPT-4o System Card, a research document that outlines the safety measures and risk evaluations the startup conducted before releasing its latest model.

    GPT-4o was launched publicly in May of this year. Before its debut, OpenAI used an external group of red teamers, security experts who probe a system for weaknesses, to identify key risks in the model (a fairly standard practice). They examined risks such as the possibility that GPT-4o would create unauthorized clones of someone’s voice, generate erotic and violent content, or reproduce chunks of copyrighted audio. Now, the results are being released.

    According to OpenAI’s own framework, the researchers found GPT-4o to be of “medium” risk. The overall risk level was taken from the highest risk rating of four overall categories: cybersecurity, biological threats, persuasion, and model autonomy. All of these were deemed low risk except persuasion, where the researchers found some writing samples from GPT-4o could be better at swaying readers’ opinions than human-written text — although the model’s samples weren’t more persuasive overall.
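    As a rough illustration of that aggregation rule, the sketch below (in Python) takes the highest rating across the four categories named in the system card. It assumes an ordered low/medium/high/critical scale; the function and data structure are hypothetical and not OpenAI's actual tooling.

        # Sketch of the "overall risk = highest category rating" rule described above.
        # Category names and GPT-4o's ratings come from the article; everything else
        # (the rating scale ordering, function and variable names) is illustrative.

        RATING_ORDER = ["low", "medium", "high", "critical"]

        def overall_risk(category_ratings: dict[str, str]) -> str:
            """Return the highest rating found across all categories."""
            return max(category_ratings.values(), key=RATING_ORDER.index)

        gpt4o_ratings = {
            "cybersecurity": "low",
            "biological threats": "low",
            "persuasion": "medium",  # some writing samples swayed readers more than human-written text
            "model autonomy": "low",
        }

        print(overall_risk(gpt4o_ratings))  # -> "medium"

    Because one category (persuasion) reached "medium" while the rest stayed "low", the model as a whole is labeled medium risk under this rule.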

    An OpenAI spokesperson, Lindsay McCallum Rémy, told The Verge that the system card includes preparedness evaluations created by an internal team, alongside external testers listed on OpenAI’s website as Model Evaluation and Threat Research (METR) and Apollo Research, both of which build evaluations for AI systems.

    Moreover, the company is releasing a highly capable multimodal model just ahead of a US presidential election. There’s a clear potential risk of the model accidentally spreading misinformation or getting hijacked by malicious actors — even if OpenAI is hoping to highlight that the company is testing real-world scenarios to prevent misuse.

    There have been plenty of calls for OpenAI to be more transparent, not just with the model’s training data (is it trained on YouTube?), but with its safety testing. In California, where OpenAI and many other leading AI labs are based, state Sen. Scott Wiener is working to pass a bill to regulate large language models, including restrictions that would hold companies legally accountable if their AI is used in harmful ways. If that bill is passed, OpenAI’s frontier models would have to comply with state-mandated risk assessments before making models available for public use. But the biggest takeaway from the GPT-4o System Card is that, despite the group of external red teamers and testers, a lot of this relies on OpenAI to evaluate itself.
