    Google DeepMind’s new AI models help robots perform physical tasks, even without training

By News Room | March 12, 2025 | 3 Mins Read

    Google DeepMind is launching two new AI models designed to help robots “perform a wider range of real-world tasks than ever before.” The first, called Gemini Robotics, is a vision-language-action model capable of understanding new situations, even if it hasn’t been trained on them.

    Gemini Robotics is built on Gemini 2.0, the latest version of Google’s flagship AI model. During a press briefing, Carolina Parada, the senior director and head of robotics at Google DeepMind, said Gemini Robotics “draws from Gemini’s multimodal world understanding and transfers it to the real world by adding physical actions as a new modality.”

The new model makes advancements in three key areas that Google DeepMind says are essential to building helpful robots: generality, interactivity, and dexterity. In addition to its ability to generalize to new scenarios, Gemini Robotics is better at interacting with people and their environment. It’s also capable of performing more precise physical tasks, such as folding a piece of paper or removing a bottle cap.

    “While we have made progress in each one of these areas individually in the past with general robotics, we’re bringing [drastically] increasing performance in all three areas with a single model,” Parada said. “This enables us to build robots that are more capable, that are more responsive and that are more robust to changes in their environment.”

    Google DeepMind is also launching Gemini Robotics-ER (or embodied reasoning), which the company describes as an advanced visual language model that can “understand our complex and dynamic world.”

    As Parada explains, when you’re packing a lunchbox and have items on a table in front of you, you’d need to know where everything is, as well as how to open the lunchbox, how to grasp the items, and where to place them. That’s the kind of reasoning Gemini Robotics-ER is expected to do. It’s designed for roboticists to connect with existing low-level controllers — the system that controls a robot’s movements — allowing them to enable new capabilities powered by Gemini Robotics-ER.
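To make that division of labor concrete, here is a minimal sketch of how a high-level reasoning model might sit on top of an existing low-level controller, as the paragraph above describes. The names used here (EmbodiedReasoner, plan_task, LowLevelController, execute) are illustrative placeholders assumed for this example, not DeepMind's actual API.

```python
# Hypothetical sketch: a reasoning model proposes steps, an existing
# low-level controller executes them. Interfaces are assumptions for illustration.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Step:
    """One high-level action proposed by the reasoning model."""
    description: str                 # e.g. "open the lunchbox lid"
    target_object: str               # e.g. "lunchbox"
    grasp_pose: Tuple[float, ...]    # (x, y, z, roll, pitch, yaw) in the robot frame


class EmbodiedReasoner:
    """Stand-in for a visual language model in the role described above:
    it takes a camera image and an instruction and returns an ordered plan."""

    def plan_task(self, image, instruction: str) -> List[Step]:
        # A real system would call the hosted model here; this only
        # shows the shape of the interface.
        raise NotImplementedError


class LowLevelController:
    """Stand-in for the robot's existing motion controller."""

    def execute(self, step: Step) -> bool:
        # Convert the grasp pose into joint trajectories and run them.
        raise NotImplementedError


def pack_lunchbox(reasoner: EmbodiedReasoner,
                  controller: LowLevelController,
                  image,
                  instruction: str = "pack the lunchbox") -> None:
    # The reasoner decides what to do and where; the controller decides how.
    for step in reasoner.plan_task(image, instruction):
        if not controller.execute(step):
            # On failure, stop and re-plan from a fresh observation.
            break
```

The point of the split is the one Parada describes: the model supplies the world understanding (where things are, how to grasp them, in what order), while the roboticist's existing controller remains responsible for actually moving the hardware.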

In terms of safety, Google DeepMind researcher Vikas Sindhwani told reporters that the company is developing a “layered approach,” adding that Gemini Robotics-ER models “are trained to evaluate whether or not a potential action is safe to perform in a given scenario.” The company is also releasing new benchmarks and frameworks to help further safety research in the AI industry. Last year, Google DeepMind introduced its “Robot Constitution,” a set of Isaac Asimov-inspired rules for its robots to follow.
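A layered check of that kind could look something like the sketch below, where a model-based judgment gates each proposed action before it ever reaches the controller. The is_action_safe helper is a hypothetical stand-in for the evaluation Sindhwani describes, not a published interface.

```python
# Illustrative sketch, assuming the Step/controller interfaces from the
# previous example. is_action_safe is a hypothetical model query.

def is_action_safe(scenario_description: str, proposed_action: str) -> bool:
    """Hypothetical call to a safety-tuned model: returns True only if the
    action is judged safe to perform in the described scenario."""
    raise NotImplementedError


def guarded_execute(controller, step, scenario_description: str) -> bool:
    # Layer 1: model-level judgment about the proposed action.
    if not is_action_safe(scenario_description, step.description):
        return False  # refuse rather than act
    # Layer 2: the controller's own limits (force, speed, workspace)
    # still apply when the action is actually executed.
    return controller.execute(step)
```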

    Google DeepMind is working with Apptronik to “build the next generation of humanoid robots.” It’s also giving “trusted testers” access to its Gemini Robotics-ER model, including Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools. “We’re very focused on building the intelligence that is going to be able to understand the physical world and be able to act on that physical world,” Parada said. “We’re very excited to basically leverage this across multiple embodiments and many applications for us.”
