Google is preparing to hold its annual Google I/O developer conference next week, and naturally, it will be all about AI. The company has made no secret of that. Since last year’s I/O, it has debuted Gemini, its new, more powerful model meant to compete with OpenAI’s ChatGPT, and has been deep in testing new features for Search, Google Maps, and Android. Expect to hear a lot about that stuff this year.
Google I/O kicks off on Tuesday, May 14th at 10AM PT / 1PM ET with the keynote. You can catch that on Google’s site or its YouTube channel, via the livestream link that’s also embedded at the top of this page. (There’s also a version with an American Sign Language interpreter.) Set aside a good chunk of time for that; the I/O keynote tends to run a couple of hours.
Google will probably also focus on ways it plans to turn your smartphone into more of an AI gadget. That means more generative AI features for Google’s apps. It’s been working on AI features that help with dining and shopping or finding EV chargers in Google Maps, for instance. Google is also testing a feature that uses AI to call a business and wait on hold for you until there’s actually a human being available to talk to.
I/O could also see the debut of a new, more personal version of its digital assistant, rumored to be called “Pixie.” The Gemini-powered assistant is expected to integrate multimodal features like the ability to take pictures of objects to learn how to use them or get directions to a place to buy them.
That kind of thing could be bad news for devices like the Rabbit R1 and the Humane AI Pin, which each launched recently and struggled to justify their existence. At the moment, the only advantage they maybe sort of have is that it’s kind of hard (though not impossible) to use a smartphone as an AI wearable.