Technology Mag
    Business

    Google DeepMind’s Chatbot-Powered Robot Is Part of a Bigger Revolution

By News Room · July 14, 2024 · 4 Mins Read

    In a cluttered open-plan office in Mountain View, California, a tall and slender wheeled robot has been busy playing tour guide and informal office helper—thanks to a large language model upgrade, Google DeepMind revealed today. The robot uses the latest version of Google’s Gemini large language model to both parse commands and find its way around.

    When told by a human “Find me somewhere to write,” for instance, the robot dutifully trundles off, leading the person to a pristine whiteboard located somewhere in the building.

    Gemini’s ability to handle video and text—in addition to its capacity to ingest large amounts of information in the form of previously recorded video tours of the office—allows the “Google helper” robot to make sense of its environment and navigate correctly when given commands that require some commonsense reasoning. The robot combines Gemini with an algorithm that generates specific actions for the robot to take, such as turning, in response to commands and what it sees in front of it.
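The division of labor described above — a multimodal model grounding a spoken command against remembered tour footage, and a separate algorithm turning the chosen goal into concrete actions — can be sketched roughly as follows. This is a hypothetical illustration, not DeepMind's code: the model call is stubbed out as a keyword lookup over landmarks recorded during an office tour, and the action generator simply emits drive primitives on a grid. All names (`choose_waypoint`, `low_level_actions`, `tour_memory`) are invented for the sketch.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    frame: str    # stand-in for a camera image
    pose: tuple   # (x, y, heading)


def choose_waypoint(instruction: str, tour_memory: dict, obs: Observation) -> tuple:
    """Stand-in for the multimodal model: ground a natural-language request
    in locations remembered from a previously recorded office tour."""
    for landmark, position in tour_memory.items():
        if landmark in instruction.lower():
            return position
    raise ValueError("no matching landmark in tour memory")


def low_level_actions(pose: tuple, waypoint: tuple) -> list:
    """Stand-in for the action-generation algorithm: turn a goal position
    into simple drive primitives relative to the current pose."""
    actions = []
    x, y, _heading = pose
    gx, gy = waypoint
    if gx != x:
        actions.append(("drive_x", gx - x))
    if gy != y:
        actions.append(("drive_y", gy - y))
    return actions


# Landmarks the robot "saw" during its recorded tour of the office.
tour_memory = {"whiteboard": (4, 2), "kitchen": (0, 5)}
obs = Observation(frame="<camera frame>", pose=(0, 0, 0))

goal = choose_waypoint("find me somewhere to write on the whiteboard", tour_memory, obs)
plan = low_level_actions(obs.pose, goal)
```

The key design point the article describes is exactly this split: the language model only has to name *where* to go, while a separate routine decides *how* to move there.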

    When Gemini was introduced in December, Demis Hassabis, CEO of Google DeepMind, told WIRED that its multimodal capabilities would likely unlock new robot abilities. He added that the company’s researchers were hard at work testing the robotic potential of the model.

    In a new paper outlining the project, the researchers behind the work say that their robot proved to be up to 90 percent reliable at navigating, even when given tricky commands such as “Where did I leave my coaster?” DeepMind’s system “has significantly improved the naturalness of human-robot interaction, and greatly increased the robot usability,” the team writes.

[Photo: A Google DeepMind employee interacting with an AI robot. Photograph: Muinat Abdul; courtesy of Google DeepMind]

    The demo neatly illustrates the potential for large language models to reach into the physical world and do useful work. Gemini and other chatbots mostly operate within the confines of a web browser or app, although they are increasingly able to handle visual and auditory input, as both Google and OpenAI have demonstrated recently. In May, Hassabis showed off an upgraded version of Gemini capable of making sense of an office layout as seen through a smartphone camera.

Academic and industry research labs are racing to see how language models might be used to enhance robots’ abilities. The May program for the International Conference on Robotics and Automation, a popular event for robotics researchers, lists almost two dozen papers that involve the use of vision language models.

    Investors are pouring money into startups aiming to apply advances in AI to robotics. Several of the researchers involved with the Google project have since left the company to found a startup called Physical Intelligence, which received an initial $70 million in funding; it is working to combine large language models with real-world training to give robots general problem-solving abilities. Skild AI, founded by roboticists at Carnegie Mellon University, has a similar goal. This month it announced $300 million in funding.

    Just a few years ago, a robot would need a map of its environment and carefully chosen commands to navigate successfully. Large language models contain useful information about the physical world, and newer versions that are trained on images and video as well as text, known as vision language models, can answer questions that require perception. Gemini allows Google’s robot to parse visual instructions as well as spoken ones, following a sketch on a whiteboard that shows a route to a new destination.

    In their paper, the researchers say they plan to test the system on different kinds of robots. They add that Gemini should be able to make sense of more complex questions, such as “Do they have my favorite drink today?” from a user with a lot of empty Coke cans on their desk.

© 2025 Technology Mag. All Rights Reserved.