Technology Mag


    OpenAI says its latest GPT-4o model is ‘medium’ risk

By News Room · August 9, 2024 · 2 min read

    OpenAI has released its GPT-4o System Card, a research document that outlines the safety measures and risk evaluations the startup conducted before releasing its latest model.

GPT-4o was launched publicly in May of this year. Before its debut, OpenAI used an external group of red teamers — security experts who try to find weaknesses in a system — to identify key risks in the model, which is a fairly standard practice. They examined risks like the possibility that GPT-4o would create unauthorized clones of someone’s voice, generate erotic and violent content, or reproduce chunks of copyrighted audio. Now, the results are being released.

According to OpenAI’s own framework, the researchers found GPT-4o to be of “medium” risk. The overall risk level was taken from the highest risk rating across four general categories: cybersecurity, biological threats, persuasion, and model autonomy. All of these were deemed low risk except persuasion, where the researchers found that some writing samples from GPT-4o could be better at swaying readers’ opinions than human-written text — although the model’s samples weren’t more persuasive overall.

    An OpenAI spokesperson, Lindsay McCallum Rémy, told The Verge that the system card includes preparedness evaluations created by an internal team, alongside external testers listed on OpenAI’s website as Model Evaluation and Threat Research (METR) and Apollo Research, both of which build evaluations for AI systems.

The company is also releasing a highly capable multimodal model just ahead of a US presidential election. There’s a clear potential risk of the model accidentally spreading misinformation or getting hijacked by malicious actors — even if OpenAI is hoping to highlight that the company is testing real-world scenarios to prevent misuse.

There have been plenty of calls for OpenAI to be more transparent, not just about the model’s training data (is it trained on YouTube?) but also about its safety testing. In California, where OpenAI and many other leading AI labs are based, state Sen. Scott Wiener is working to pass a bill to regulate large language models, including restrictions that would hold companies legally accountable if their AI is used in harmful ways. If that bill passes, OpenAI would have to subject its frontier models to state-mandated risk assessments before making them available for public use. But the biggest takeaway from the GPT-4o System Card is that, despite the group of external red teamers and testers, much of this relies on OpenAI evaluating itself.
