Technology Mag

    Business

    Google DeepMind’s Chatbot-Powered Robot Is Part of a Bigger Revolution

By News Room · July 14, 2024 · 4 Mins Read

    In a cluttered open-plan office in Mountain View, California, a tall and slender wheeled robot has been busy playing tour guide and informal office helper—thanks to a large language model upgrade, Google DeepMind revealed today. The robot uses the latest version of Google’s Gemini large language model to both parse commands and find its way around.

    When told by a human “Find me somewhere to write,” for instance, the robot dutifully trundles off, leading the person to a pristine whiteboard located somewhere in the building.

    Gemini’s ability to handle video and text—in addition to its capacity to ingest large amounts of information in the form of previously recorded video tours of the office—allows the “Google helper” robot to make sense of its environment and navigate correctly when given commands that require some commonsense reasoning. The robot combines Gemini with an algorithm that generates specific actions for the robot to take, such as turning, in response to commands and what it sees in front of it.
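The division of labor described above — a multimodal model that grounds a spoken command against recorded tour footage, and a separate policy that turns the chosen goal into concrete motions — can be sketched roughly as follows. Everything here is illustrative: the `Frame` type, the function names, and the keyword match standing in for Gemini's actual frame scoring are assumptions, not DeepMind's code.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One frame from a previously recorded video tour of the office."""
    image_id: str   # stand-in for the actual video frame
    location: str   # label the tour associates with this frame

def ground_command(command: str, tour: list[Frame]) -> Frame:
    """Pick the tour frame that best matches the request.

    In the real system a vision language model scores frames against
    the command; a simple keyword match stands in for that step here.
    """
    for frame in tour:
        if frame.location in command.lower():
            return frame
    return tour[0]  # fall back to the start of the tour

def low_level_actions(goal: Frame) -> list[str]:
    """Stand-in for the policy that emits turns and drive segments."""
    return [f"turn_toward({goal.location})", f"drive_to({goal.location})"]

tour = [Frame("f1", "whiteboard"), Frame("f2", "kitchen")]
goal = ground_command("Find me somewhere to write on the whiteboard", tour)
print(low_level_actions(goal))
```

The point of the split is that the language model only has to choose *where* to go; the action-generation algorithm handles *how*, which keeps the model out of the low-level control loop.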

    When Gemini was introduced in December, Demis Hassabis, CEO of Google DeepMind, told WIRED that its multimodal capabilities would likely unlock new robot abilities. He added that the company’s researchers were hard at work testing the robotic potential of the model.

    In a new paper outlining the project, the researchers behind the work say that their robot proved to be up to 90 percent reliable at navigating, even when given tricky commands such as “Where did I leave my coaster?” DeepMind’s system “has significantly improved the naturalness of human-robot interaction, and greatly increased the robot usability,” the team writes.

[Photo: a Google DeepMind employee interacting with an AI robot. Photograph: Muinat Abdul; Google DeepMind]

    The demo neatly illustrates the potential for large language models to reach into the physical world and do useful work. Gemini and other chatbots mostly operate within the confines of a web browser or app, although they are increasingly able to handle visual and auditory input, as both Google and OpenAI have demonstrated recently. In May, Hassabis showed off an upgraded version of Gemini capable of making sense of an office layout as seen through a smartphone camera.

    Academic and industry research labs are racing to see how language models might be used to enhance robots’ abilities. The May program for the International Conference on Robotics and Automation, a popular event for robotics researchers, lists almost two dozen papers that involve the use of vision language models.

    Investors are pouring money into startups aiming to apply advances in AI to robotics. Several of the researchers involved with the Google project have since left the company to found a startup called Physical Intelligence, which received an initial $70 million in funding; it is working to combine large language models with real-world training to give robots general problem-solving abilities. Skild AI, founded by roboticists at Carnegie Mellon University, has a similar goal. This month it announced $300 million in funding.

    Just a few years ago, a robot would need a map of its environment and carefully chosen commands to navigate successfully. Large language models contain useful information about the physical world, and newer versions that are trained on images and video as well as text, known as vision language models, can answer questions that require perception. Gemini allows Google’s robot to parse visual instructions as well as spoken ones, following a sketch on a whiteboard that shows a route to a new destination.

    In their paper, the researchers say they plan to test the system on different kinds of robots. They add that Gemini should be able to make sense of more complex questions, such as “Do they have my favorite drink today?” from a user with a lot of empty Coke cans on their desk.
