Close Menu
Technology Mag

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Bike Friday’s Tiny, Purple, Lightweight Ebike Fits on the Most Crowded Bike Rack

    August 1, 2025

    Bose’s QuietComfort Headphones are $130 off for back-to-school season

    August 1, 2025

    Google backtracks on plans to deactivate shortened goo.gl links

    August 1, 2025
    Facebook X (Twitter) Instagram
    Subscribe
    Technology Mag
    Facebook X (Twitter) Instagram YouTube
    • Home
    • News
    • Business
    • Games
    • Gear
    • Reviews
    • Science
    • Security
    • Trending
    • Press Release
    Technology Mag
    Home » AI security bill aims to prevent safety breaches of AI models
    News

    AI security bill aims to prevent safety breaches of AI models

    News RoomBy News RoomMay 1, 20242 Mins Read
    Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email

    A new bill seeking to track security issues by mandating the creation of a database recording all breaches of AI systems has been filed in the Senate. 

    The Secure Artificial Intelligence Act, introduced by Sens. Mark Warner (D-VA) and Thom Tillis (R-NC), would establish an Artificial Intelligence Security Center at the National Security Agency. This center would lead research into what the bill calls “counter-AI,” or techniques to learn how to manipulate AI systems. This center would also develop guidance for preventing counter-AI measures. 

    The bill will also require the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency to create a database of AI breaches, including “near-misses.” 

    Warner and Tillis’ proposed bill focuses on techniques to counter AI and classifies them as data poisoning, evasion attacks, privacy-based attacks, and abuse attacks. Data poisoning refers to a method where code is inserted into data scraped by an AI model, corrupting the model’s output. It emerged as a popular method to prevent AI image generators from copying art on the internet. Evasion attacks change data studied by AI models to the point the model gets confused. 

    AI safety was one of the key items in the Biden administration’s AI executive order, which directed NIST to establish “red-teaming” guidelines and required AI developers to submit safety reports. Red teaming is when developers intentionally attempt to get AI models to respond to prompts they’re not supposed to. 

    Ideally, developers of powerful AI models test the platforms for safety and have them undergo extensive red teaming before being released to the public. Some companies, like Microsoft, have created tools to help make adding safety guardrails to AI projects easier. 

    The Secure Artificial Intelligence Act will have to go through a committee before it can be taken up by the larger Senate.

    Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
    Previous ArticleWhatsApp’s new feature lets you plan your next event
    Next Article Crunchyroll is about to get a little more expensive, just like all the other streamers

    Related Posts

    Bose’s QuietComfort Headphones are $130 off for back-to-school season

    August 1, 2025

    Google backtracks on plans to deactivate shortened goo.gl links

    August 1, 2025

    Verizon is upping its fees again

    August 1, 2025

    Google has just two weeks to begin cracking open Android, it admits in emergency filing

    August 1, 2025

    Reddit pauses its paywall plans

    August 1, 2025

    Bing made Google dance and then stole some search traffic

    August 1, 2025
    Our Picks

    Bose’s QuietComfort Headphones are $130 off for back-to-school season

    August 1, 2025

    Google backtracks on plans to deactivate shortened goo.gl links

    August 1, 2025

    Verizon is upping its fees again

    August 1, 2025

    Donald Trump’s New Crypto Bible Is Everything the Industry Ever Wanted

    August 1, 2025
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss
    News

    Google has just two weeks to begin cracking open Android, it admits in emergency filing

    By News RoomAugust 1, 2025

    4. For a period of three years ending on November 1, 2027, Google may not…

    Reddit pauses its paywall plans

    August 1, 2025

    Inside the Summit Where China Pitched Its AI Agenda to the World

    August 1, 2025

    The Grave Long-Term Effects of the Gaza Malnutrition Crisis

    August 1, 2025
    Facebook X (Twitter) Instagram Pinterest
    • Privacy Policy
    • Terms of use
    • Advertise
    • Contact
    © 2025 Technology Mag. All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.