    The US Is Forming a Global AI Safety Network With Key Allies

    By News Room | May 21, 2024 | 3 Min Read

    The US is widely seen as the global leader in artificial intelligence, thanks to companies like OpenAI, Google, and Meta. But the US government says it needs help from other nations to manage the risks posed by AI technology.

    At an international summit on AI safety in Seoul on Tuesday, the US delivered a message from Secretary of Commerce Gina Raimondo announcing that a global network of AI safety institutes spanning the US, UK, Japan, Canada, and other allies will collaborate to contain the technology’s risks. She also urged other countries to join up.

    “Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers,” Secretary Raimondo said in a statement released ahead of the announcement. “It is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

    The US government has previously said advances in AI create national security risks, including the potential to automate or accelerate the development of bioweapons or to enable more damaging cyberattacks on critical infrastructure.

    One challenge for the US, alluded to in Raimondo’s statement, is that some national governments may not be eager to fall in line with its approach to AI. She said the US, the UK, Japan, Canada, Singapore, and the European AI Office would work together as the founding members of a “global network of AI safety institutes.”

    The Commerce Department declined to comment on whether China had been invited to join the new AI safety network. Fears that China will use advanced AI to empower its military or threaten the US led first the Trump administration and now the Biden administration to roll out a series of restrictions on Chinese access to key technology.

    The US and China have at least opened a line of communication. A meeting between President Biden and Chinese President Xi Jinping in November saw the two superpowers agree to hold talks on AI risks and safety. Representatives from the two nations met in Switzerland last week to hold the first round of discussions.

    The Commerce Department said that representatives of the new global AI safety network’s members will meet in San Francisco later this year. A blueprint issued by the agency says that the network will work together to develop and agree upon methodologies and tools for evaluating AI models and ways to mitigate the risks of AI. “We hope to help develop the science and practices that underpin future arrangements for international AI governance,” the document says. A Commerce Department spokesperson said that the network would help nations tap into talent, experiment more quickly, and agree on AI standards.

    The Seoul summit on AI safety this week is cohosted by the UK government, which convened the first major international meeting on the topic last November. That summit culminated in more than 28 countries, including the US, members of the EU, and China, signing a declaration warning that artificial intelligence is advancing with such speed and uncertainty that it could cause “serious, even catastrophic, harm.”
