    The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws

    By News Room | August 24, 2024 | 3 min read

    At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This “red-teaming” exercise, which also had support from the US government, took a step toward opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.

    The qualifier will take place online and is open to both developers and members of the general public as part of NIST’s AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies.

    “The average person utilizing one of these models doesn’t really have the ability to determine whether or not the model is fit for purpose,” says Theo Skeadas, chief of staff at Humane Intelligence. “So we want to democratize the ability to conduct evaluations and make sure everyone using these models can assess for themselves whether or not the model is meeting their needs.”

    The final event at CAMLIS will split the participants into a red team trying to attack the AI systems and a blue team working on defense. Participants will use the AI 600-1 profile, part of NIST’s AI risk management framework, as a rubric for measuring whether the red team is able to produce outcomes that violate the systems’ expected behavior.
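    As a loose, purely illustrative sketch of how such an exercise might record results, the Python snippet below logs each red-team attempt against a rubric category and notes whether the output appeared to violate expected behavior. The category names, the query_model stub, and the pass/fail check are hypothetical assumptions for illustration only; they are not the AI 600-1 profile or Humane Intelligence’s actual tooling.

        # Illustrative sketch only: a minimal red-team logging harness.
        # All names and categories below are hypothetical, not NIST's or
        # Humane Intelligence's actual methodology.
        from dataclasses import dataclass, asdict
        import json

        # Hypothetical rubric categories loosely inspired by generative-AI risk areas.
        RUBRIC_CATEGORIES = [
            "harmful_content",
            "data_privacy",
            "confabulation",
            "information_security",
        ]

        @dataclass
        class RedTeamFinding:
            prompt: str                        # attacker-submitted input
            model_output: str                  # what the system under test returned
            category: str                      # rubric category the attempt targets
            violates_expected_behavior: bool   # red team's claim, pending review

        def query_model(prompt: str) -> str:
            """Stand-in for the system under test; replace with a real API call."""
            return "I can't help with that request."

        def run_attempt(prompt: str, category: str) -> RedTeamFinding:
            if category not in RUBRIC_CATEGORIES:
                raise ValueError(f"unknown rubric category: {category}")
            output = query_model(prompt)
            # In a real exercise, scoring would involve human judges applying the
            # rubric, not a simple automatic string check.
            violated = "can't help" not in output.lower()
            return RedTeamFinding(prompt, output, category, violated)

        if __name__ == "__main__":
            finding = run_attempt("Summarize this confidential memo...", "data_privacy")
            print(json.dumps(asdict(finding), indent=2))

    In an event like the one described above, findings flagged this way would then be handed to the blue team and judges for review against the rubric, rather than being accepted automatically.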

    “NIST’s ARIA is drawing on structured user feedback to understand real-world applications of AI models,” says Humane Intelligence founder Rumman Chowdhury, who is also a contractor in NIST’s Office of Emerging Technologies and a member of the US Department of Homeland Security AI safety and security board. “The ARIA team is mostly experts on sociotechnical test and evaluation, and [is] using that background as a way of evolving the field toward rigorous scientific evaluation of generative AI.”

    Chowdhury and Skeadas say the NIST partnership is just one of a series of AI red team collaborations that Humane Intelligence will announce in the coming weeks with US government agencies, international governments, and NGOs. The effort aims to make it much more common for the companies and organizations that develop what are now black-box algorithms to offer transparency and accountability through mechanisms like “bias bounty challenges,” where individuals can be rewarded for finding problems and inequities in AI models.

    “The community should be broader than programmers,” Skeadas says. “Policymakers, journalists, civil society, and nontechnical people should all be involved in the process of testing and evaluating these systems. And we need to make sure that less represented groups like individuals who speak minority languages or are from nonmajority cultures and perspectives are able to participate in this process.”
