    Etching AI Controls Into Silicon Could Keep Doomsday at Bay

By News Room · January 25, 2024 · 3 Min Read

    Even the cleverest, most cunning artificial intelligence algorithm will presumably have to obey the laws of silicon. Its capabilities will be constrained by the hardware that it’s running on.

    Some researchers are exploring ways to exploit that connection to limit the potential of AI systems to cause harm. The idea is to encode rules governing the training and deployment of advanced algorithms directly into the computer chips needed to run them.

In theory (the sphere where much debate about dangerously powerful AI currently resides), this might provide a powerful new way to prevent rogue nations or irresponsible companies from secretly developing dangerous AI, one that would be harder to evade than conventional laws or treaties. A report published earlier this month by the Center for a New American Security, an influential US foreign policy think tank, outlines how carefully hobbled silicon might be harnessed to enforce a range of AI controls.

    Some chips already feature trusted components designed to safeguard sensitive data or guard against misuse. The latest iPhones, for instance, keep a person’s biometric information in a “secure enclave.” Google uses a custom chip in its cloud servers to ensure nothing has been tampered with.

    The paper suggests harnessing similar features built into GPUs—or etching new ones into future chips—to prevent AI projects from accessing more than a certain amount of computing power without a license. Because hefty computing power is needed to train the most powerful AI algorithms, like those behind ChatGPT, that would limit who can build the most powerful systems.

    CNAS says licenses could be issued by a government or international regulator and refreshed periodically, making it possible to cut off access to AI training by refusing a new one. “You could design protocols such that you can only deploy a model if you’ve run a particular evaluation and gotten a score above a certain threshold—let’s say for safety,” says Tim Fist, a fellow at CNAS and one of three authors of the paper.
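As a rough illustration of how such a gate might behave, consider the sketch below. It is a hypothetical toy, not the protocol CNAS proposes: the license fields, the evaluation-score threshold, and the use of an HMAC as a stand-in for a regulator's asymmetric signature are all illustrative assumptions.

```python
import hmac
import hashlib
import json
import time

# Hypothetical sketch of a hardware license gate. HMAC stands in for
# the asymmetric signature a real regulator would use, and all field
# names are invented for illustration.
REGULATOR_KEY = b"regulator-secret"  # in practice, a public key fused into the chip

def issue_license(max_flops: float, eval_score: float, ttl_seconds: int) -> dict:
    """Regulator side: sign a compute budget, safety score, and expiry."""
    body = {"max_flops": max_flops, "eval_score": eval_score,
            "expires": time.time() + ttl_seconds}
    tag = hmac.new(REGULATOR_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def chip_allows_training(lic: dict, requested_flops: float,
                         min_safe_score: float = 0.9) -> bool:
    """Chip side: release compute only for a valid, fresh, passing license."""
    body = lic["body"]
    expected = hmac.new(REGULATOR_KEY, json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(lic["tag"], expected):
        return False  # forged or tampered license
    if time.time() > body["expires"]:
        return False  # regulator declined to issue a fresh license
    if body["eval_score"] < min_safe_score:
        return False  # model failed the safety evaluation gate
    return requested_flops <= body["max_flops"]

lic = issue_license(max_flops=1e25, eval_score=0.95, ttl_seconds=3600)
print(chip_allows_training(lic, requested_flops=1e24))  # True
```

Refusing to refresh the license is what cuts off training in this picture: once the expiry passes, the gate fails closed until the regulator signs a new one.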

Some AI luminaries worry that AI is now becoming so smart that it could one day prove unruly and dangerous. More immediately, some experts and governments fret that even existing AI models could make it easier to develop chemical or biological weapons or to automate cybercrime. Washington has already imposed a series of AI chip export controls to limit China’s access to the most advanced AI, fearing it could be used for military purposes, although smuggling and clever engineering have provided some ways around them. Nvidia declined to comment, but the company has lost billions of dollars’ worth of orders from China due to the latest US export controls.

Fist says that although hard-coding restrictions into computer hardware might seem extreme, there’s precedent in establishing infrastructure to monitor or control important technology and enforce international treaties. “If you think about security and nonproliferation in nuclear, verification technologies were absolutely key to guaranteeing treaties,” he says. “The network of seismometers that we now have to detect underground nuclear tests underpins treaties that say we shall not test underground weapons above a certain kiloton threshold.”

    The ideas put forward by CNAS aren’t entirely theoretical. Nvidia’s all-important AI training chips—crucial for building the most powerful AI models—already come with secure cryptographic modules. And in November 2023, researchers at the Future of Life Institute, a nonprofit dedicated to protecting humanity from existential threats, and Mithril Security, a security startup, created a demo that shows how the security module of an Intel CPU could be used for a cryptographic scheme that can restrict unauthorized use of an AI model.
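The core idea of that demo can be sketched in a few lines. This is a hypothetical illustration, not Mithril Security’s actual code: the attestation check is simulated, and the widely used cryptography library stands in for the CPU’s security module. The weights ship encrypted, and the decryption key is released only when the running code’s measurement matches an approved one.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical sketch, not the Mithril Security demo code. A real scheme
# would verify a signed hardware quote from the CPU's security module;
# here, attestation is simulated by comparing a code measurement string.
APPROVED_MEASUREMENT = "sha256:approved-inference-binary"

key = Fernet.generate_key()  # held by the model's licensing authority
encrypted_weights = Fernet(key).encrypt(b"<model weights>")  # how the model ships

def release_key(measurement: str) -> bytes:
    """Hand over the decryption key only to attested, approved code."""
    if measurement != APPROVED_MEASUREMENT:
        raise PermissionError("attestation failed: key withheld")
    return key

def load_model(measurement: str) -> bytes:
    """An unauthorized binary never sees the plaintext weights."""
    return Fernet(release_key(measurement)).decrypt(encrypted_weights)

print(load_model(APPROVED_MEASUREMENT))  # b'<model weights>'
```

In this shape of scheme, possession of the encrypted file is worthless on its own; control sits with whoever decides which measurements get the key.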
