Technology Mag

News
    Google DeepMind’s new AI models help robots perform physical tasks, even without training

By News Room | March 12, 2025 | 3 Mins Read
    Google DeepMind is launching two new AI models designed to help robots “perform a wider range of real-world tasks than ever before.” The first, called Gemini Robotics, is a vision-language-action model capable of understanding new situations, even if it hasn’t been trained on them.

    Gemini Robotics is built on Gemini 2.0, the latest version of Google’s flagship AI model. During a press briefing, Carolina Parada, the senior director and head of robotics at Google DeepMind, said Gemini Robotics “draws from Gemini’s multimodal world understanding and transfers it to the real world by adding physical actions as a new modality.”

The new model makes advances in three key areas that Google DeepMind says are essential to building helpful robots: generality, interactivity, and dexterity. In addition to being able to generalize to new scenarios, Gemini Robotics is better at interacting with people and their environment. It's also capable of performing more precise physical tasks, such as folding a piece of paper or removing a bottle cap.

    “While we have made progress in each one of these areas individually in the past with general robotics, we’re bringing [drastically] increasing performance in all three areas with a single model,” Parada said. “This enables us to build robots that are more capable, that are more responsive and that are more robust to changes in their environment.”

    Google DeepMind is also launching Gemini Robotics-ER (or embodied reasoning), which the company describes as an advanced visual language model that can “understand our complex and dynamic world.”

    As Parada explains, when you’re packing a lunchbox and have items on a table in front of you, you’d need to know where everything is, as well as how to open the lunchbox, how to grasp the items, and where to place them. That’s the kind of reasoning Gemini Robotics-ER is expected to do. It’s designed for roboticists to connect with existing low-level controllers — the system that controls a robot’s movements — allowing them to enable new capabilities powered by Gemini Robotics-ER.

On safety, Google DeepMind researcher Vikas Sindhwani told reporters that the company is developing a "layered approach," adding that Gemini Robotics-ER models "are trained to evaluate whether or not a potential action is safe to perform in a given scenario." The company is also releasing new benchmarks and frameworks to further safety research in the AI industry. Last year, Google DeepMind introduced its "Robot Constitution," a set of Isaac Asimov-inspired rules for its robots to follow.

    Google DeepMind is working with Apptronik to “build the next generation of humanoid robots.” It’s also giving “trusted testers” access to its Gemini Robotics-ER model, including Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools. “We’re very focused on building the intelligence that is going to be able to understand the physical world and be able to act on that physical world,” Parada said. “We’re very excited to basically leverage this across multiple embodiments and many applications for us.”
