    Business

    Singapore’s Vision for AI Safety Bridges the US-China Divide

By News Room | May 9, 2025 | 3 Mins Read

    The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.

    “Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves—they will have it done to them—so it is very much in their interests to have the countries that are going to build it talk to each other.”

    The countries thought most likely to build AGI are, of course, the US and China—and yet those nations seem more intent on outmaneuvering each other than working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and said the US needed to be “laser-focused on competing to win.”

    The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.

    The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.

Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.

    “In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.

    The development of increasingly capable AI models, some of which have surprising abilities, has caused researchers to worry about a range of risks. While some focus on near-term harms including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes referred to as “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.

    The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.
