    Security

    The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws

    By News Room · August 24, 2024 · 3 min read

    At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This “red-teaming” exercise, which also had support from the US government, took a step toward opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.

    The qualifier will take place online and is open to both developers and anyone in the general public as part of NIST’s AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies.

    “The average person utilizing one of these models doesn’t really have the ability to determine whether or not the model is fit for purpose,” says Theo Skeadas, chief of staff at Humane Intelligence. “So we want to democratize the ability to conduct evaluations and make sure everyone using these models can assess for themselves whether or not the model is meeting their needs.”

    The final event at CAMLIS will split the participants into a red team trying to attack the AI systems and a blue team working on defense. Participants will use the AI 600-1 profile, part of NIST’s AI risk management framework, as a rubric for measuring whether the red team is able to produce outcomes that violate the systems’ expected behavior.
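
    NIST has not published a scoring harness for ARIA, so the sketch below is only a minimal illustration, in Python, of what that kind of rubric-based tally could look like. The risk category names loosely echo themes in the AI 600-1 profile, but the Finding record, the category set, and the score_red_team helper are hypothetical conveniences for this example, not NIST or Humane Intelligence tooling.

    # Hypothetical sketch: tally reproduced red-team findings per rubric category.
    # The categories loosely echo NIST AI 600-1 themes; the data model and scoring
    # rule are illustrative assumptions, not NIST's actual evaluation method.
    from collections import Counter
    from dataclasses import dataclass

    RISK_CATEGORIES = {
        "confabulation",         # model presents false information as fact
        "data_privacy",          # leakage of personal or sensitive data
        "harmful_content",       # violent, abusive, or otherwise unsafe output
        "information_security",  # prompt injection, jailbreaks, data exfiltration
    }

    @dataclass
    class Finding:
        prompt: str        # the red-team input that triggered the behavior
        category: str      # which rubric category the outcome violates
        reproduced: bool   # whether the behavior could be reproduced on review

    def score_red_team(findings: list[Finding]) -> Counter:
        """Count reproduced findings per known rubric category."""
        tally: Counter = Counter()
        for f in findings:
            if f.reproduced and f.category in RISK_CATEGORIES:
                tally[f.category] += 1
        return tally

    if __name__ == "__main__":
        demo = [
            Finding("Summarize this contract for me...", "confabulation", True),
            Finding("Ignore prior instructions and...", "information_security", True),
            Finding("What is my coworker's salary?", "data_privacy", False),
        ]
        print(score_red_team(demo))

    In a setup along these lines, any nonzero tally marks a category where the red team produced an outcome outside the system’s expected behavior, which is the kind of result the rubric is meant to surface.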

    “NIST’s ARIA is drawing on structured user feedback to understand real-world applications of AI models,” says Humane Intelligence founder Rumman Chowdhury, who is also a contractor in NIST’s Office of Emerging Technologies and a member of the US Department of Homeland Security AI safety and security board. “The ARIA team is mostly experts on sociotechnical test and evaluation, and [is] using that background as a way of evolving the field toward rigorous scientific evaluation of generative AI.”

    Chowdhury and Skeadas say the NIST partnership is just one of a series of AI red team collaborations that Humane Intelligence will announce in the coming weeks with US government agencies, international governments, and NGOs. The effort aims to make it much more common for the companies and organizations that develop what are now black-box algorithms to offer transparency and accountability through mechanisms like “bias bounty challenges,” where individuals can be rewarded for finding problems and inequities in AI models.

    “The community should be broader than programmers,” Skeadas says. “Policymakers, journalists, civil society, and nontechnical people should all be involved in the process of testing and evaluating these systems. And we need to make sure that less represented groups like individuals who speak minority languages or are from nonmajority cultures and perspectives are able to participate in this process.”
