Technology Mag

    Business

    Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?

By News Room · October 20, 2025 · 2 Mins Read

    At the end of August, the AI company Anthropic announced that its chatbot Claude wouldn’t help anyone build a nuclear weapon. According to Anthropic, it had partnered with the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) to make sure Claude wouldn’t spill nuclear secrets.

    The manufacture of nuclear weapons is both a precise science and a solved problem. A lot of the information about America’s most advanced nuclear weapons is Top Secret, but the original nuclear science is 80 years old. North Korea proved that a dedicated country with an interest in acquiring the bomb can do it, and it didn’t need a chatbot’s help.

    How, exactly, did the US government work with an AI company to make sure a chatbot wasn’t spilling sensitive nuclear secrets? And also: Was there ever a danger of a chatbot helping someone build a nuke in the first place?

    The answer to the first question is that it used Amazon. The answer to the second question is complicated.

    Amazon Web Services (AWS) offers Top Secret cloud environments where government clients can store sensitive and classified information. The DOE already had several of these servers when it started to work with Anthropic.

    “We deployed a then-frontier version of Claude in a Top Secret environment so that the NNSA could systematically test whether AI models could create or exacerbate nuclear risks,” Marina Favaro, who oversees National Security Policy & Partnerships at Anthropic, tells WIRED. “Since then, the NNSA has been red-teaming successive Claude models in their secure cloud environment and providing us with feedback.”

    The NNSA red-teaming process—meaning, testing for weaknesses—helped Anthropic and America’s nuclear scientists develop a proactive solution for chatbot-assisted nuclear programs. Together, they “codeveloped a nuclear classifier, which you can think of like a sophisticated filter for AI conversations,” Favaro says. “We built it using a list developed by the NNSA of nuclear risk indicators, specific topics, and technical details that help us identify when a conversation might be veering into harmful territory. The list itself is controlled but not classified, which is crucial, because it means our technical staff and other companies can implement it.”

    Favaro says it took months of tweaking and testing to get the classifier working. “It catches concerning conversations without flagging legitimate discussions about nuclear energy or medical isotopes,” she says.
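    The article doesn't disclose how the classifier is implemented, and the real NNSA indicator list is controlled. Purely as an illustration of the concept Favaro describes — scoring a conversation against a list of risk indicators while avoiding false positives on civilian topics like nuclear energy or medical isotopes — a minimal sketch might look like this (every indicator phrase, weight, and threshold below is invented):

```python
# Hypothetical sketch of an indicator-list conversation filter.
# The actual NNSA list is controlled; these entries are invented stand-ins.

RISK_INDICATORS = {
    "weapon assembly": 3.0,
    "fissile material enrichment": 3.0,
    "implosion lens": 2.5,
}

# Legitimate civilian topics that should not trigger the filter,
# per Favaro's stated goal for the real classifier.
BENIGN_CONTEXTS = {"nuclear energy", "medical isotopes", "reactor safety"}

def classify(conversation: str, threshold: float = 3.0) -> bool:
    """Return True if the conversation should be flagged for review."""
    text = conversation.lower()
    # Sum the weights of every risk indicator that appears in the text.
    score = sum(w for phrase, w in RISK_INDICATORS.items() if phrase in text)
    # Down-weight conversations anchored in a benign civilian context.
    if any(ctx in text for ctx in BENIGN_CONTEXTS):
        score *= 0.5
    return score >= threshold
```

    A production system would of course use a trained model rather than substring matching; the sketch only shows the shape of the idea — a weighted indicator list plus an allowance for legitimate discussion.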
