Today, I’m talking to Alphabet and Google CEO Sundar Pichai, who joined the show the day after the Google I/O developer conference last week. Google’s focus during the conference was AI, of course — Google is building AI into virtually all of its products. My personal favorite is the new AI search in Google Photos that lets you ask things like, “What’s my license plate number?” and get an answer back from your entire photo library. All in all, Google executives said “AI” more than 120 times during the keynote — we counted.

But there was one particular announcement at I/O that’s sending shockwaves around the web: Google is rolling out what it calls AI Overviews in Search to everyone in the United States this week and around the world to more than a billion users by the end of the year. That means when you search for something on Google, you’ll get AI-powered results at the top of the page for a number of queries. The company literally describes this as “letting Google do the Googling for you.” Google has been testing this for a year now, in what it called the Search Generative Experience, so you may have already seen a version of this — but now it’s here, and it will change the web as we know it.

Until now, Google’s ecosystem has been based on links to everyone else’s content: you type something into a search box, you see some links, and you click one. That sends traffic to websites, which their owners can try to monetize in various ways, and ideally everyone wins.

Google is by far the biggest source of traffic on the web today, so if it starts keeping that traffic for itself by answering questions with AI, that will change or potentially even destroy the internet ecosystem as we know it. The News/Media Alliance, which represents a bunch of fancy news publishers, put out a press release calling AI previews in search “catastrophic to our traffic.”

If you’re a Decoder listener, you’ve heard me talk about this idea a lot over the past year: I call it Google Zero, and I’ve been asking web and media CEOs what would happen to their businesses if their Google traffic were to go to zero. If AI chatbots and AI-powered search results are summarizing everything for you, why would you go to a website? And if we all stop going to websites, what’s the incentive to put new content on the web? What’s going to stop shady characters from flooding the web with AI-generated spam to try and game these systems? And if we succeed in choking the web with AI, what are all these bots going to summarize when people ask them questions?

Sundar has some ideas. For one, he’s not convinced the web, which he says he cares deeply about, is in all that much danger. You’ll hear him mention Wired’s famous 2010 headline, “The Web Is Dead,” and he makes the argument that new, transformative technologies like AI always cause some short-term disruptions.

He says injecting AI into Search is about creating value for users, and those users are telling him that they find these new features helpful — and that they’re even clicking on links at higher rates in the AI previews. But he didn’t say where that leaves the people who put the content on the internet in the first place. We really sat with that idea for a while — and we talked a lot about the anger creative people feel toward AI systems training on their work.

I’ve talked to Sundar quite a bit over the past few years, and this was the most fired up I’ve ever seen him. You can really tell that there is a deep tension between the vision Google has for the future — where AI magically makes us smarter, more productive, and more artistic — and the very real fears and anxieties creators and website owners are feeling right now about how search has changed and how AI might swallow the internet forever. Sundar is wrestling with that tension.

One note: you’ll hear me say I think Sundar keeps making oblique references to OpenAI, which he pushes back on pretty strongly. I thought about it afterward, and it’s pretty clear he wasn’t just talking about OpenAI but also Meta, which has openly turned away from sending any traffic to any websites whatsoever and has been explicit that it doesn’t want to support news on its platforms at all anymore. I wish that had clicked for me during this conversation, because I would have asked about it more directly.

Okay, Google CEO Sundar Pichai. Here we go.

This transcript has been lightly edited for length and clarity. 

Sundar Pichai, you are the CEO of both Alphabet and Google. Welcome to Decoder.

I am excited to talk to you. I feel like I talk to you every year at Google I/O, and we talk about all the things you’ve announced. There’s a lot of AI news to talk about. As you know, I’m particularly interested in the future of the web, so I really want to talk about that with you, but I figured I’d start with an easy one.

Do you think language is the same as intelligence?

Wow, that’s not an easy question! I don’t think I’m the expert on it. I think language does encode a lot of intelligence, probably more than people thought. It explains the successes of large language models to a great extent. But my intuition tells me, as humans, there’s a lot more to the way we consume information than language alone. But I’d say language is a lot more than people think it is.

The reason I asked that question to start is: I look at the announcements at I/O with AI and what you’re doing, I look at your competitors with AI and what they’re doing, and everything is very language-heavy. It’s LLMs that have really led to this explosion of interest in innovation and investment, and I wonder if the intelligence is increasing at the same rate as the facility with language. I kind of don’t see it, to be perfectly honest. I see computers getting much better at language and actually in some cases getting dumber. I’m wondering if you see that same gap.

Yeah, it’s a great question. Part of the reason we made Gemini natively multimodal — and you’re beginning to see glimpses of it now but it hasn’t made its way fully into products yet — is so that with audio, video, text, images, and code, when we have multimodality working on the input and output side — and we are training models using all of that — maybe in the next cycle, that’ll encapsulate a lot more than just today, which is primarily text-based. I think that continuum will shift as we take in a lot more information that way. So maybe there’s more to come.

Last year the tagline was “Bold but responsible.” That’s Google’s approach. You said it again onstage this year. And then I look at our reactions to AI getting things wrong, and it seems like they’re getting more and more tempered over time.

I’ll give you an example. In the demos you had yesterday, you showed multimodal video search of someone trying to fix a broken film camera. And the answer was just wrong. The answer that was highlighted in the video was, “Just open the back of the film camera and jiggle it.” It’s like, well, that would ruin all of your film. No one who had an intelligent understanding of how that camera [worked] would suggest that.

I was talking to the team and, ironically, as part of making the video, they consulted with a bunch of subject matter experts who all reviewed the answer and thought it was okay. I understand the nuance. I agree with you. Obviously, you don’t want to expose your film by taking it outside of a darkroom. There are certain contexts in which it makes sense to do that. If you don’t want to break the camera and if what you’ve taken is not that valuable, it makes sense to do that.

You’re right. There is a lot of nuance to it. Part of what I hope Search serves to do is to give you a lot more context around that answer and allow people to explore it deeply. But I think these are the kinds of things for us to keep getting better at. But to your earlier question, look, I do see the capability frontier continuing to move forward. I think we are a bit limited if we were just training on text data, but we are all making it more multimodal. So I see more opportunities there.

Let’s talk about Search. This is the thing that I am most interested in — I think this is the thing that is changing the most. In an abstract way, it’s the thing that’s the most exciting. You can ask a computer a question, and it will just happily tell you an answer. That feels new. I see the excitement around it.

Yesterday, you announced AI Overviews are coming to Search. That’s an extension of what was called the Search Generative Experience, and you announced it’s rolling out to everyone in the United States. I would describe the reactions to that news from the people who make websites as fundamentally apocalyptic. The CEO of the News/Media Alliance said to CNN, “This will be catastrophic to our traffic.” Another media CEO forwarded me a newsletter and the headline was, “This is a death blow to publishers.” Were you expecting that kind of response to rolling out AI Overviews in Search?

I recall, in 2010, there were headlines that the web was dead. I’ve long worked on the web, obviously. I care deeply about it. When the transition from desktop to mobile happened, there was a lot of concern because people were like, “Oh, it’s a small screen. How will people read content? Why would they look at content?” We had started introducing what we internally called “Web Answers” in 2014, which are featured snippets outside [the list of links]. So you had questions like that.

I remain optimistic. Empirically, what we are seeing throughout the years, I think human curiosity is boundless. It’s something we have deeply understood in Search. More than any other company, we will differentiate ourselves in our approach even through this transition. As a company, we realize the value of this ecosystem, and it’s symbiotic. If there isn’t a rich ecosystem making unique and useful content, what are you putting together and organizing? So we feel it.

I would say, through all of these transitions, things have played out a bit differently. I think users are looking for high-quality content. The counterintuitive part, which I think almost always plays out, is [that] it’s not a zero-sum game. People are responding very positively to AI Overviews. It’s one of the most positive changes I’ve seen in Search based on metrics. But people do jump off on it. And when you give context around it, they actually jump off it. It actually helps them understand, and so they engage with content underneath, too. In fact, if you put content and links within AI Overviews, they get higher clickthrough rates than if you put it outside of AI Overviews.

But I understand the sentiment. It’s a big change. These are disruptive moments. AI is a big platform shift. People are projecting out, and people are putting a lot into creating content. It’s their businesses. So I understand the perspective [and] I’m not surprised. We are engaging with a lot of players, both directly and indirectly, but I remain optimistic about how it’ll actually play out. But it’s a good question. I’m happy to talk about it more.

I have this concept I call “Google Zero,” which is born of my own paranoia. Every referrer that The Verge has ever had has gone up and then it’s gone down, and Google is the last large-scale referrer of traffic on the web for almost every website now. And I can see that for a lot of sites, Google Zero is playing out. Their Google traffic has gone to zero, particularly independent sites that aren’t part of some huge publishing conglomerate. There’s an air purifier blog that we covered called HouseFresh. There’s a gaming site called Retro Dodo. Both of these sites have said, “Look, our Google traffic went to zero. Our businesses are doomed.”

Is that the right outcome here in all of this — that the people who care so much about video games or air purifiers that they started websites and made the content for the web are the ones getting hurt the most in the platform shift?

It’s always difficult to talk about individual cases, and at the end of the day, we are trying to satisfy user expectations. Users are voting with their feet, and people are trying to figure out what’s valuable to them. We are doing it at scale, and I can’t answer on the particular site—

A bunch of small players are feeling the hurt. Loudly, they’re saying it: “Our businesses are going away.” And that’s the thing you’re saying: “We’re engaging, we’re talking.” But this thing is happening very clearly.

It’s not clear to me if that’s a uniform trend. I have to look at data on an aggregate [basis], so anecdotally, there are always times when people have come in an area and said, “Me, as a specific site, I have done worse.” But it’s like an individual restaurant saying, “I’ve started getting fewer customers this year. People have stopped eating food,” or whatever it is. It’s not necessarily true. Some other restaurant might have opened next door that’s doing very well. So it’s tough to say.

From our standpoint, when I look historically even over the past decade, we have provided more traffic to the ecosystem, and we’ve driven that growth. You may be making a secondary point about small sites versus more aggregating sites, which is the second point you’re talking about. Ironically, there are times when we have made changes to actually send more traffic to the smaller sites. Some of those sites that complain a lot are the aggregators in the middle. So should the traffic go to the restaurant that has created a website with their menus and stuff or people writing about these restaurants? These are deep questions. I’m not saying there’s a right answer.

But you’re about to flip over the whole apple cart, right? You’re about to start answering some of these questions very directly. And where that content comes from in the future, I think you want the people who care the most to publish that information directly to be the thing that you synthesize. 

The incentives for that seem to be getting lower and lower — on the web, anyway.

I feel it’s the opposite. If anything, I feel like through AI Overviews, when you give people context, yes, there are times all people want is a quick answer and they bounce back. But overall, when we look at user journeys, when you give the context, it also exposes people to jumping-off points, and so they engage more. Actually, this is what drives growth over time. I look at desktop to mobile, and there were similar questions. In fact, there was a [magazine] cover I’m almost tempted to pull out, saying, “The web is dead.” There was a Google Zero argument 10 years ago. But you yourself made the point that it’s not an accident that we still remain as one of the largest referrers because we’ve cared about it deeply for a long, long time.

I look at our journey, even the last year through the Search Generative Experience, and I constantly found us prioritizing approaches that would send more traffic while meeting user expectations. We think through that deeply and we actually change our approach. If there are areas where we feel like we haven’t fully gotten it right, we are careful about rolling it out. But I think what’s positively surprising us is that people engage more, and that will lead to more growth over time for high-quality content.

There’s a lot of debate about what high-quality content is. At least in my experience, I value independent sources, I value smaller things, I want more authentic voices. And I think those are important attributes we are constantly trying to improve.

You mentioned that you think more people will click through links in AI Overviews. Liz [Reid], who runs Search, had a blog post making the same claim. There’s no public data that says that is true yet. Are you going to release that data? Are you going to show people that this is actually happening?

On an aggregate, I think people rely on this value of the ecosystem. If people over time don’t see value, website owners don’t see value coming back from Google, I think we’ll pay a price. We have the right incentive structure. But obviously, look, we are careful about… there are a lot of individual variations, and some of it is users choosing which way to go. That part is hard to sort out. But I do think we are committed at an aggregate level to do the right thing.

I was reading some SEO community trade publications this morning responding to the changes, and one of the things that was pointed out was that, in Search Console, it doesn’t show you if the clicks are coming from a featured snippet or an AI Overview or just Google’s regular 10 blue links. Would you break that out? Would you commit to breaking that out so people can actually audit and verify and measure that the AI Overviews are sending out as much traffic as you say they are?

It’s a good question for the Search team. They think about this at a deeper level than I do. I think we are constantly trying to give more visibility, but also we want people to create content that’s good. And we are trying to rank it and organize it, so I think there’s a balance to be had. The more we spec it out, then the more people design for that. There’s a tradeoff there, so it’s not clear to me what the right answer is.

That tradeoff between what you spec out and say and what people make, that’s been the story of the web for quite some time. It had reached, I think, a steady state. Whether you thought that steady state was good or bad, it was at least at a steady state. Now, that state is changing — AI is obviously changing it.

The 10 blue link model, the old steady state, is very much based on an exchange: “We’re going to let you index our content. We’re going to [have] featured snippets. We’re going to let you see all of our information. In return, you will send us traffic.” That formed the basis of what you might call a fair-use argument. Google’s going to index this stuff, [and] there’s not going to be a lot of payments in the middle.

In the AI era, no one knows how that’s going to go. There are some major lawsuits happening. There are deals being made by Google and OpenAI for training data. Do you think it’s appropriate for Google to start making more deals to pay for data to train search results? Because those AI snippets are not really the same as the 10 blue links or anything else you’ve done in the past.

To be very clear, there’s a myth that Google’s search has been 10 blue links for — I look at our mobile experience — many, many years. And we have had answers, we allow you to refine questions, we’ve had featured snippets, and so on. The product has evolved significantly. 

Having said that, as a company, even as we look at AI, we have done Google [News] Showcase, we have done licensing deals. To the extent there is value there, we obviously think there is a case for fair use in the context of beneficial, transformative use. I’m not going to argue that with you given your background. But I think there are cases in which we will see dedicated incremental value to our models, and we’ll be looking at partnerships to get at that. I do think we’ll approach it that way.

Let me ask this question in a different way. I won’t do too much fair-use analysis with you, I promise, as much as I like doing it.

There were some news reports recently that OpenAI had trained its video generation product, Sora, on YouTube. How did you feel when you heard that news?

Look, we don’t know the details. Our YouTube team is following up and trying to understand it. We have terms and conditions, and we would expect people to abide by those terms and conditions when you build a product, so that’s how I felt about it. 

So you felt like they had potentially broken your terms and conditions? Or if they had, that wouldn’t have been appropriate?

The reason I asked that question — which is a much more emotional question — is okay, maybe that’s not appropriate. And what OpenAI has said is essentially “We’ve trained on publicly available information,” which means they found it on the web. 

Most people don’t get to make that deal. They don’t have a YouTube team of licensing professionals who can say, “We have terms and conditions.” They don’t even have terms and conditions. They’re just putting their stuff on the internet. Do you understand why, emotionally, there’s the reaction to AI from the creative community — that it feels the same as you might have felt about OpenAI training on YouTube?

Absolutely. Look, be it website owners or content creators or artists, I can understand how emotional a transformation this is. Part of the reason, as you saw even through Google I/O when we were working on products like music generation, is that we have really taken an approach by which we are working first to make tools for artists. We haven’t put a general-purpose tool out there for anyone to create songs.

The way we have taken that approach in many of these cases is to put the creator community as much at the center of it as possible. We’ve long done that with YouTube. Through it all, we are trying to figure out the right ways to approach this.

But it is a transformative moment as well, and there are other players in this. We are not the only player in the ecosystem. But, to your earlier question, yes, I understand people’s emotions about it. I definitely am very empathetic to how people are perceiving this moment.

They feel like it’s a taking — that they put work on the internet and the big companies are coming, taking it for free, and then making products that they are charging $20 a month for or that will lift their creative work and remix it for other people. The thing that makes it feel like a taking is [that] very little value accrues back to them.

That’s really the thing I’m asking about: how do you bring value back to them? How do you bring incentives back to the small creator or the independent business that’s saying, “Look, this feels like a taking.”

Look. [Sighs] The whole reason we’ve been successful on platforms like YouTube is we have worked hard to answer this question. You’ll continue to see us dig deep about how to do this well. And I think the players who end up doing better here will have more winning strategies over time. I genuinely believe that.

Across everything we do, we have to sort that out. Anytime you’re running a platform, it’s the basis on which you can build a sustainable long-term platform. Through this AI moment, over time, there’ll be players who will do better by the content creators that support their platforms, and whoever does it better will emerge as the winner. I believe that to be a tenet of these things over time.

One thing that I think is really interesting about the YouTube comparison in particular — it’s been described to me many times that YouTube is a licensing business. You license a lot of content from the creators. You obviously pay them back in terms of the advertising model there. The music industry has a huge licensing business with YouTube. It is an existential relationship for both sides. Susan Wojcicki used to describe YouTube as a music service, which I think confused everyone until you looked at the data.

Universal Music is mad about AI on YouTube. YouTube reacts. It builds a bunch of tools. It writes a constitution about what AI will and will not do. People are mad about the Search Generative Experience or AI [Overviews] on the web. Google doesn’t react the same way. I’m wondering if you can square that circle.

That is so far from reality.

That’s so far from reality. I look at other players and how they’ve approached—

You’re talking about OpenAI, which is just out there taking stuff.

In general, when you look at how we have approached the Search Generative Experience, even through a moment like this, the time we have taken to test, iterate, and prioritize approaches, and the way we’ve done it over the years, I would say I definitely disagree with the notion we don’t listen. We care deeply; we listen. People may not agree with everything we do. When you’re running an ecosystem, you are balancing different needs. I think that’s the essence of what makes a product successful.

Let me talk about the other side of this. There’s search: people are going to game search, that’s always going to happen, and that’s a cat-and-mouse game.

The other thing that I see happening is the web is being flooded with AI content. There was an example a few months ago where some unsavory SEO character said, “I stole a bunch of traffic from a competitor. I copied their site map. I fed it into an AI system and had it generate copy for a website that matched their site map, and I put up this website and stole a bunch of traffic from my competitor.” I think that’s a bad outcome. I don’t think we want to incentivize that in any way, shape, or form.

That’s going to happen at scale. More and more of the internet that we experience will be synthetic in some important way. How do you, on the one hand, build the systems that create the synthetic content for people and, on the other hand, rank it so that you’re only getting the best stuff? Because at some point, the defining line for a lot of people is, “I want stuff made by a human, and not stuff made by AI.”

I think there are multiple parts to your question. One, how do we differentiate high quality from low quality? I literally view it as our mission statement, and it is what has defined Search over many, many years.

I actually think people underestimate… Anytime you have these disruptive platform shifts, you’re going to go through a phase like this. I have seen that team invest so much. Our entire search quality team has been spending the last year gearing up our ranking systems, etc., to better get at what high-quality content is. If I take the next decade, [the] people who can do that better, who can sift through that, I think, will win out.

I think you’re right in your assessment that people will value human-created experiences. I hope the data bears that out. We have to be careful every time there’s a new technology. There are filmmakers, if you go and talk about CGI in films, they’re going to react very emotionally, and there are still esteemed filmmakers who never use CGI in films. But then there are people who use it and produce great films. And so you may be using AI to lay out and enhance video effects in your video.

But I agree with you. I think using AI to produce content en masse without adding any value is not what users are looking for.

But there is a big continuum and, over time, users are adapting. We are trying hard to make sure we do it in a responsible way, but we’re also listening to what users consider to be high quality and trying to get that balance right. That continuum will look different a few years out than it does today, but I think I view it as the essence of what search quality is. Do I feel confident we will be able to approach it better than others? Yes. And I think that’s what defines the work we do.

For the listener, there have been a lot of subtle shots at OpenAI today.

Can I put this into practice? I actually just did this search. It is a search for “best Chromebook.” As you know, I once bought my mother a Chromebook Pixel. It’s one of my favorite tech purchases of all time. This is a search for “best Chromebook.” I’m going to hit “generate” at the top, it’s going to generate the answer, and then I’m going to do something terrifying, which is, I’m going to hand my phone to the CEO of Google. This is my personal phone. Don’t dig through it.

You look at that — it’s the same generation that I’ve seen earlier. I asked it for the best Chromebook, and it says, “Here’s some stuff you might think of.” Then you scroll, and it’s some Chromebooks. It doesn’t say whether they’re the best Chromebooks, and then it’s a bunch of headlines, some of which are Verge headlines, that are like, “Here are some of the best Chromebooks.” That feels like the exact kind of thing that an AI-generated search could answer in a better way. Do you think that’s a good experience? Is that a waypoint or is that the destination?

I think, look, you’re showing me a query in which we didn’t automatically generate the AI.

There was a button that said, “Do you want to do this?”

But let me push back. There’s an important differentiation. There’s a reason we are giving a view without the generated AI Overview, and as a user, you’re initiating an action, so we’re respecting the user intent there. When I scroll, I see Chromebooks. I also see a whole set of links, which I can go to and that tell me all the ways you can think about Chromebooks. I see a lot of links. We didn’t show an AI Overview in this case. As a user, you’re generating the follow-up question. I think it’s right that we respect the user’s intent. If you don’t do that, people will go somewhere else, too.

But I’m saying — I did not write, “What is the best Chromebook?” I just wrote “best Chromebook” — [but] the answer, a thing that identifies itself as an answer, is not on that page. The leap from “I had to push the button” to “Google pushes the button for me and then says what it believes to be the answer” is very small. I’m wondering if you think a page like that today is the destination of the search experience, or if this is a waypoint and you can see a better future version of that experience.

I think the direction of how these things will go, it’s tough to fully predict. Users keep evolving. It’s a more dynamic moment than ever. We are testing all of this, and this is a case where we didn’t trigger the AI Overview because we felt like our AI Overview is not necessarily the first experience we want to provide for that query because what’s underlying is maybe a better first look for the user — those are all quality tradeoffs we are making. But if the user is asking for a summary, we are summarizing and giving links. I think that seems like a reasonable direction to me.

I’ll show you another one where it did expand automatically. This one I only have screenshots for. I don’t think I’m fully opted in. This is Dave Lee from Bloomberg, who did a search. He got an AI Overview, and he just searched for “JetBlue Mint Lounge SFO.” And it just says the answer, which I think is fine. That’s the answer.

If you swipe one over — I cannot believe I’m letting the CEO of Google swipe on my camera roll — but if you swipe one over, you see the site it pulled from. It is a word-for-word rewrite of that site. This is the thing I’m getting at.

The AI-generated overview of that answer, if you just look at where it came from, is almost the same sentence as the source. And that’s what I mean. At some point, the better experience is the AI overview, and it’s just the thing that exists on all the sites underneath it. It’s the same information.

[Sighs] The thing with Search — we handle billions of queries. You can absolutely find a query and hand it to me and say, “Could we have done better on that query?” Yes, for sure. But in many cases, part of what is making people respond positively to AI Overviews is that the summary we are providing clearly adds value and helps them look at things they may not have otherwise thought about. If you’re adding value at that level, I think people notice it over time, and I think that’s the bar you’re trying to meet. Our data would show, over 25 years, if you aren’t doing something that users find valuable or enjoyable, they let us know right away. Over and over again we see that.

Through this transition, everything is the opposite. It’s one of the biggest quality improvements we are driving in our product. People are valuing this experience. There’s a general presumption that people don’t know what they’re doing, which I disagree with strongly. People who use Google are savvy. They understand. And so, to me, I can give plenty of examples where I’ve used AI Overviews as a user. I’m like, “Oh, this is giving context. Oh, maybe there are these dimensions I didn’t even think of in my original query. How do I expand upon it and look at it?”

You’ve made oblique mention of OpenAI a few times, I think.

You’re saying “others.” There’s one other big competitor that is, I think, a little more—

You’re putting words in my mouth, but that’s okay.

I saw OpenAI’s demo the other day of GPT-4o, Omni. It looked a lot like the demos you gave at I/O. This idea of multimodal search, the idea that you have this character you can talk to — you have Gems, which are the same kind of idea — it feels like there’s a race to get to the same outcome for a search-like experience or an agent-like experience. Do you feel the pressure from that competition?

This is no different from Siri and Alexa. When you're working in the technology industry, there is relentless innovation. All of us felt it a few years ago building voice assistants. You could have asked the same version of this question then: what was Alexa trying to do, and what was Siri trying to do? This is a natural extension of that. I think you have a new technology now, and it's evolving rapidly.

I felt like it was a good week for technology. There was a lot of innovation, I felt, on Monday and Tuesday and so on. That’s how I feel, and I think it’s going to be that way for a while. I’d rather have it that way. You’d rather be in a place where the underlying technology is evolving, which means you can radically improve the experiences you’re putting out. I’d rather have that any time than a static phase in which you feel like you’re not able to move forward quickly.

A lot of us have had this vision for what a powerful assistant can be, but we were held back by the underlying technology not being able to serve that goal. I think we have a technology that is better able to serve that. That’s why you’re seeing the progress again. I think that’s exciting. To me, I look at it and say, “We can actually make Google Assistant a whole lot better.” You’re seeing visions of that with Project Astra. It’s incredibly magical to me when I use it, so I’m very excited by it.

This brings me back to the first question I asked: language versus intelligence. To make these products, I think you need a core level of intelligence. Do you have in your head a measure of “This is when it’s going to be good enough. I can trust this”?

On all of your demo slides and all of OpenAI’s demo slides, there’s a disclaimer that says “Check this info,” and to me, it’s ready when you don’t need that anymore. You didn’t have “Check this info” at the bottom of the 10 blue links. You didn’t have “Check this info” at the bottom of featured snippets.

You’re getting at a deeper point where hallucination is still an unsolved problem. In some ways, it’s an inherent feature. It’s what makes these models very creative. It’s why it can immediately write a poem about Thomas Jefferson in the style of Nilay. It can do that. It’s incredibly creative. But LLMs aren’t necessarily the best approach to always get at factuality, which is part of why I feel excited about Search. 

Because in Search we are bringing LLMs in a way, but we are grounding it with all the work we do in Search and layering it with enough context that we can deliver a better experience from that perspective. But I think the reason you’re seeing those disclaimers is because of the inherent nature. There are still times it’s going to get it wrong, but I don’t think I would look at that and underestimate how useful it can be at the same time. I think that would be the wrong way to think about it.

Google Lens is a good example. When we first put Google Lens out, it didn’t recognize all objects well. But the curve year on year has been pretty dramatic, and users are using it more and more. We’ve had billions of queries now with Google Lens. It’s because the underlying image recognition, paired with our knowledge entity understanding, has dramatically expanded over time.

I would view it as a continuum, and I think, again, I go back to this saying that users vote with their feet. Fewer people used Lens in the first year. We also didn’t put it everywhere because we realized the limitations of the product.

When you talk to the DeepMind Google Brain team, is there a solution to the hallucination problem on the roadmap?

It’s Google DeepMind. [Laughs]

Are we making progress? Yes, we are. We have definitely made progress when we look at metrics on factuality year on year. We are all making it better, but it’s not solved. Are there interesting ideas and approaches that they’re working on? Yes, but time will tell. I would view it as LLMs are an aspect of AI. We are working on AI in a much broader way, but it’s an area where we are all definitely working to drive more progress.

Five years from now, this technology, the paradigm shift, it feels like we’ll be through it. What does the best version of the web look like for you five years from now?

I hope the web is much richer in terms of modality. Today, I feel like the way humans consume information is still not fully encapsulated in the web. Today, things exist in very different ways — you have webpages, you have YouTube, etc. But over time, I hope the web is much more multimodal, it’s much richer, much more interactive. It’s a lot more stateful, which it’s not today.

I view it as, while fully acknowledging the point that people may use AI to generate a lot of spam, I also feel every time there’s a new wave of technology, people don’t quite know how to use it. When mobile came, everyone took webpages and shoved them into mobile applications. Then, later, people evolved [into making] really native mobile applications.

The way people use AI to actually solve new things, new use cases, etc. is yet to come. When that happens, I think the web will be much, much richer, too. So: dynamically composing a UI in a way that makes sense for you. Different people have different needs, but today you’re not dynamically composing that UI. AI can help you do that over time. You can also do it badly and in the wrong way and people can use it shallowly, but there will be entrepreneurs who figure out an extraordinarily good way to do it, and out of it, there’ll be great new things to come.

Google creates a lot of incentives for development on the web through Search, through Chrome, through everything that you do. How do you make sure those incentives are aligned with those goals? Because maybe the biggest thing here is that the web ecosystem is in a moment of change, and Google has a lot of trust to build and rebuild. How do you think about making sure those incentives point at the right goals?

Look, not everything is in Google's control. The single toughest experience when I go to websites today as a user — and I wish I could influence it — is all the cookie dialogs you have to accept, etc. So I would argue there are many things outside of our control. You can go poll 100 users.

But what are the incentives we would like to create? This is a complex question, but it comes down to this: how do you reward originality, creativity, and independent voice, at whatever scale you're able to, and give that a chance to thrive in the content ecosystem we create? That's what I think about. That's what the Search team thinks about. I think it's an important principle, and it'll be important for the web and important for us as a company.

That’s great. Well, Sundar, thank you so much for the time. Thank you for being on Decoder.

Thanks, Nilay. I greatly enjoyed it.

Decoder with Nilay Patel /

A podcast about big ideas and other problems.
