Hello, and welcome to Decoder! This is Jon Fortt, CNBC journalist, cohost of Closing Bell Overtime, and creator and host of the Fortt Knox podcast. As you just heard Nilay say, I’m stepping in to guest host a few episodes of Decoder this summer while he’s out on parental leave, and I’m very excited about what we’ve been working on.

For my first episode of Decoder, a show about how people make decisions, I wanted to talk to an expert. So I sat down with Cassie Kozyrkov, the founder and CEO of AI consultancy Kozyr. She’s also the former chief decision scientist at Google.

For a long time, Cassie has studied the ins and outs of decision-making: not just decision frameworks but also the underlying social dynamics, psychology, and even, in some cases, the role that the human brain plays in how and why we make certain choices. This is an interdisciplinary field that Cassie calls decision intelligence, which mixes everything from statistics and data science to machine learning. Her expertise landed her a top advisor role at Google, where she spent nearly a decade helping the company make smarter use of data.

In recent years, her work has collided with artificial intelligence. As you’ll hear Cassie explain it, generative AI systems like ChatGPT are making it easier and cheaper than ever to get advice and analysis. But unless you have a clear vision of what it is you’re looking for, and what values underlie the decisions you make, all you’ll get back from AI is a lot of messy data.

So Cassie and I really dug into the science behind decision-making, how it intersects with what we’re seeing in the modern AI industry, and how her current work in AI consulting helps companies better understand how to use these tools to make smarter decisions that can’t just be outsourced to agents or chatbots.

I also wanted to learn a little bit about Cassie’s own decision-making frameworks and how she made some key decisions of her own, such as what to pursue in graduate school and why she decided to leave academia for Google and then strike out on her own just as the generative AI boom was really starting to kick off. This is a fun one, and I think you’re really going to like it.

Okay: decision scientist Cassie Kozyrkov. Here we go.

This transcript has been lightly edited for length and clarity.

Cassie Kozyrkov, welcome to Decoder. I’m going to welcome myself to Decoder too, because this isn’t my podcast. I’m just having a good time punching the buttons, but it’s going to be a lot of fun.

Yeah, it’s so great to be here with you, Jon. And I guess we two friends managed to sneak on and take over this podcast, so I’m really excited for the mischief we’ll cause here.

Let the mischief begin. So the former chief decision scientist at Google, I think, starts to frame what it is you’re good at, and we’re going to get into the implications for AI and leadership and technology and all that. But first, let’s just start with the basics. What’s so hard about making decisions?

Depends on the decision. It can be very easy to make a decision, and one of the things that I advise people is, unless you’re a student of decision-making, your number one rule should be to try to match the effort you put into the decision with what’s at stake in the decision. So, of course, if you’re a student, you can go and agonize over, “How would I apply a decision-theoretic approach to choosing my sandwich at lunch?” But don’t be doing that in real life, right?

Slowing down, thinking carefully, considering the hard decisions, and doing your best by them is, again, for the important decisions that will touch your life. Or even, more critically, the lives of thousands, millions, billions of other people, which is something that we see with technology that scales.

It sounds like you’re saying, in part, knowing what’s at stake is one of the first tough things about making decisions.

Exactly. And knowing your priorities. So one of the things that I find really fascinating about what AI, in the large language model chatbot sense, is doing today is that it’s making answers really cheap. And when answers become cheap, that means the question becomes really important. Because what used to happen with decision-making for, again, the big, thorny data-driven decisions, was that a decision-maker might come up with something and then ask the data science team to work on it. And then by the time that team came back with an answer, it had been, well, a week if you were lucky, but it could have been six weeks, or six months.

In that time, though, you actually got the opportunity to think about what you’d asked, refine what it meant to you, and then maybe re-ask it. There was time for that shower thought, where you’re like, “Oh, man, I should not have phrased it that way.” But today, you can go and have AI attempt an answer for you, and you can get an answer really quickly.

If you’re used to just immediately running in the direction of your answer, you won’t think as much as you should about, “Well, how do I test if this is actually what I need and what’s good for me? What did I actually ask in the first place? What was the world model, if you like? What were the assumptions that went into this decision?” So it’s all about priorities. It’s all about knowing what’s important.

Even before we get there though, staying at the very basic level, how do people learn to make decisions? There’s the fundamental idea that if you touch a hot stove, you do it once and then you know not to do that again. But how does the wiring in our brain work to teach us to become decision-makers and develop our own processes for doing it?

Oh, I didn’t know that you were going to drag my neuroscience degree into this. It has been a while. I apologize to any actual practicing neuroscientists that I’m about to offend. But at least when I was in grad school, the models that we had for this said that you have your dopaminergic midbrain, which is a region that’s very important for movement and for executing some of what you would think of as the more instinctive behaviors, or those driven by basic rewards — like sugar, avoidance of pain, those kinds of rewards.

So you have what you might think of as an evolutionarily older structure. And isn’t it fascinating that movement and decision-making are similarly controlled in the brain? Is a movement a decision? Is taking an action the same thing as making a decision? We can get into that. And then there are other structures in the prefrontal cortex.

Typically, your ventromedial and dorsolateral prefrontal cortices will be involved in various kinds of what you would think of as effortful or slowed-down decisions — such as the difference between choosing a stock because, I don’t know, you feel as if you don’t even know why, and sitting down and actually running some numbers, doing some research, integrating all of that and having a good, long-think ponder as to what you should do.

So broadly speaking, different regions from different evolutionary stages play into decision-making. The prefrontal cortex is a little newer. But you have these systems — sometimes acting in a coordinated manner, sometimes a little in conflict — involved in decision-making. But what we also really cared about back in those days was moving away from the cartoonish take that you get in popular science, that you just have one region and it just does this one thing and it only does this thing.

Instead, it’s an entire network that is constantly taking in inputs and processing all of them. So, of course, memory would be involved in decision-making and, of course, the ability to imagine, which you would think of more as engaging your visual occipital cortices — that would definitely be involved in some way or other. So it’s a whole thing. It’s a whole network of activations that are implementing human decisions. To summarize this for you, Jon, neuroscientists have no idea how we make decisions. So that’s the funny conclusion, right?

What we can do is prod and pry and get some sense of it, but at the end of the day, the actual nitty-gritty of how humans make decisions is a mystery. What’s also really funny is humans think they know how they make decisions, but quite often you can plant a decision and then unbeknownst to your participants, as we call them in the studies — I’d say victims — unbeknownst to them, the decision was made for them all along. It was primed in some way. Certain inputs got in there.

They thought they made a decision, and then afterward you ask them, “So why did you pick red and not blue?” They will sing you this beautiful song, explaining how it was their grandmother’s favorite color or whatever it is. Meanwhile, the experimenter implanted that, and if you don’t believe me, go see a magic show. It’s the same principle, right? Stage magicians plant decisions in their audiences reliably; otherwise the show wouldn’t work. I’m always fascinated by how seriously we take our human ability to know and understand ourselves and feel as if we’ve got all this agency, side by side with professional stage magicians entertaining crowds every day.

But it sounds to me like maybe what really drives decisions, and maybe this motion and movement region of the brain is part of it, is want — what we want. When we’re babies, when we’re toddlers, decisions are: Do I get up? Am I hungry? Do I cry? It’s basic stuff that has to do with mostly physical things, because we’re not intellectuals yet, I guess.

So you need to have a want or a goal in order for there to be a decision to be made, right? Whether we understand what our real motivation is or not, that’s a key ingredient, having some kind of want or goal in decision-making.

Well, it depends how you define it. So with all these terms, when you try to study decision-making in the social biological sciences, you’ll have to take a word, such as “decision,” which we use casually however we like, and then you’ll have to give it a little box that makes that definition more concrete. It’s just like saying: “let X equal…,” right? At the top of your page when you’re doing math, you can say let X equal the speed of light. Now, from now on, whenever I write X, it means the speed of light. And then for some other person’s paper, let X equal five, and then whenever they write X, it means five.

So similarly, we say, “Let decision equal…” and then we define it for the purposes. Typically, what decision analysts will say defines a decision — the way they do their “let decision equal…” at the top of their page — is they say that it is an irrevocable allocation of resources. Then it’s up to you to think about, again, how you want to define what it means for the allocation to be irrevocable, and what it means for the resources to be allocated at all.

Is this an act that a human must make? Is it an act that a system downstream of a human might make? And what are resources? Are resources just money, or could they include time? Or opportunity? For example, what if I choose to go through this door? Well, in this moment, in this universe right now, I didn’t choose to go through that door, and I can’t go back. So in that sense, absolutely every movement that we make is an irrevocable allocation of resources.

And in companies, if you’re Google, do you buy YouTube or not? I mean, that was a big decision back then. Do I hire this person or that person? If it’s a key employee role, that can have a huge impact on whether your company succeeds or fails. Do I invest in AI? Do I or don’t I adopt this technology at this stage?

Right, and you can choose how to frame that to make it definitionally irrevocable. If I hire Jon right now at this point in time, then I’m maybe giving up doing something else, such as eating my sandwich instead of going through all the paperwork of hiring Jon. So I could think that’s irrevocable. If I hire Jon, I might be able to fire Jon tomorrow and release whatever resources I cared more about than time and current opportunity. So then I could treat that as a two-way door on this decision.

So really, it depends on how you want to frame it, and then the rest will somewhat follow in the math. A big piece of how we think about decision-making in psychology is to separate it into judgment and decision-making.

Judgment is separate from decision-making. Judgment comes in when you undertake all the effort of deciding how to decide. What does it actually mean for you to allocate your resources in a way without take-backsies? So it’s up to the decision-maker to think about that. What are we measuring? What’s important? How might we actually want to approach this decision?

Even saying something like, “This decision should be made by gut instinct rather than by effortful calculation,” is part of that judgment process. And then the decision-making process that follows, that is just riding the mathematical consequences of whatever judgment setup you made.

So speaking of setup, give me the typical setup. Why do clients hire you? What kinds of positions are they in where they’re like, “Okay, we need a decision scientist here”?

Well, typically, the big ones are those involving deployment of AI systems. How would you think about solving a problem with AI? That’s a big decision. Should I even put this AI system in place? I’m potentially going to have to gut whatever I’m already using. So if I’ve got some handcrafted system some software developers have already written for me, and I’m getting reasonably good results from that, well, I’m not just going to throw AI in there and hope for the best. Actually, in some situations you would do that, because you want to say, “I’m an AI company.” And so you want to default to putting the AI system in unless you get talked out of it.

But quite often it’s effortful, it’s expensive, and we want to make sure that it’s going to be good enough and right for that company’s situation. So how do we think about measuring that, and how do we think about the realities of building it so it has all the features that we would require in order to want to proceed? It’s a huge decision, this AI decision.

How much does a leader’s or a company’s values matter in that assessment?

Incredibly. I think that’s something that people really miss when it comes to what looks like data or math-y situations. Once we have that bit of math, it looks objective. It looks like “you start here, you end up there,” and there was only one right answer. What we forget is that that little math piece and that data piece and that code piece form a thin layer of objectivity in a big, fat subjectivity sandwich.

That first layer is: What’s even important enough to automate? What’s important enough to do this in the first place? What would I want to improve? In which direction do I want to steer my business? What matters to me? What matters to my customers? How do I want to change the world? These questions have no one right answer, and will need to be articulated clearly in order for the rest to make sense.

The companies tend to articulate those things through a mission statement. Very often, at least in my experience, those mission statements aren’t nearly detailed enough to guide the granular and deep series of events that AI is going to lead us down, no?

Absolutely, and this is a really important point that blossoms into the whole topic of how to think about decision delegation. So the first thing leaders need to realize is that when they are at the very top of the food chain in their organizations, they don’t have the time to be involved in very granular decisions. In fact, most of the job is figuring out how to delegate decision-making to everybody else, choosing whom to trust or what to trust if we’re going to start to delegate to automated systems, and then letting go of that decision.

So you don’t want to be asking the CEO about nitty-gritty topics around, let’s say, the cybersecurity pieces of the company’s shiny new AI system. But what the company needs to do as an organization is make sure that somebody in the project is thinking about all the components that need to be thought about, and that it’s all delegated to the right people. So part of my role then is asking a lot of questions about what’s important, who can do this, how do we put it all together, and how do we make sure that we’re not operating with any blind spots or missing any components.

How typically are clients ready to provide you with that information? Is that a conversation they’re used to having?

Again, we’ve come a long way, but for the longest time, as a civilization working with data, we’ve been fascinated by just being able to potentially do a thing even if we don’t know what it’s for. We thought, “Isn’t it cool that we can move this data? Isn’t it cool that we can pull patterns out of it? Isn’t it cool that we can store or collect it at scale?” All without actually asking ourselves, “Well, where are we going, and how are we going to use it?”

We are growing out of that painful, teething phase where everyone was like, “This is fun, and let’s do it for theory.” It’s kind of like saying, “Well, we’ve invented a wheel, and now we can invent a better wheel, and we can now make it into a tire and it can have rubber on it, but maybe it’s made from carbon fiber.”

Now we are moving into, “Okay, this thing enables movement, different investments in this thing enable different speeds of movement, but where do I want to go? Because if I want to go two yards over, then I don’t actually need the car, and I don’t need to be fascinated by it for its own sake.”

Whereas if what I really need to do is be in the adjacent city tomorrow, and I don’t currently have a car, well, then we’re also not going to talk about inventing it from scratch by hiring researchers. We’re not going to think about building it in-house. We’re going to ask, “Who can get you something that will get you there on time and on spec?” These conversations are new, but this is where we’re going. We have to.

It sounds like, and correct me if I’m wrong here, AI is going to help us a lot more with giving us facts and options and less with giving us values and goals.

I hope so. That is the hope, because when you take values and goals from AI, what you’re doing is taking an average from the internet; or, in a system that has a little bit more logic running on top of it to direct its output, you might be taking those values and goals from the engineers who designed that system. So it’s like saying, “If I’m going to use AI as my rough draft every time, that rough draft might be a little bit less me and a little bit more the average soup of culture.” If everyone starts doing that, then it’s certainly a kind of blending or averaging of our insights.

Perhaps you want that, but I think there’s still a lot of value in having people who are close to their problem areas, who are close to their businesses, who have individual expertise, to think a little bit before they begin, and to really frame what the question is rather than take it from the AI system.

So Jon, how this would go for you is, you might ask an AI system, “How do I live the best possible life?” And it’s going to give you an answer, and that answer is not going to fit you. That’s the thing. It’s going to fit the average Joe. What is or who is the average Joe, and how does that apply to you?

It’s going to go to Instagram, and it’s going to look at who’s got the most likes and followers, and then decide that those people have the best lives, and then take the attributes of those people — how they look, how they talk, the level of education they say they have — and say, well, here’s what you need to do to be like these people who, the data tells us, people think have the best lives. Is that a version of what you mean?

Something like that. More convoluted, because something that is worth realizing is that an advantage machines have over us is memory and attention, right? What I mean by this is if I flash 50 digits onscreen right now and then ask you to recall them, you’re going to have no idea. Then I can go back to those 50 and say, “Yeah, the machine remembered it for us this whole time. It is clearly better at memory than Jon is.”

Then we flash these things, and I say, “Quick, what’s the sum of these digits?” Again, difficult for you, but easy for a machine. So anything that fits in our heads as we discuss it is going to be a shortcut of what’s actually possible when you have memory and attention at scale. In other words, we’ve described this Instagram process that fits in our heads right now, but you should expect that whatever is actually going on with these systems is just too big for us to hold in there.

So sure, Instagram and some other sources and probably even some websites about how to live a good life applied to us, but it’s all kinds of things all jumbled together into something too complicated for us to understand what it is. But the important thing is it’s not tailored to us specifically, not without us putting in quite a lot of effort to feed in the information required for that tailoring, which I encourage us to do.

Certainly. Understanding that advice is cheaper than ever, I will frame up whatever is interesting to me and give it to the system. Of course, I’ll remove the most confidential details, but I’ve asked all kinds of things about how I might, let’s say, improve how I look at real estate given my particular situation and my particular tastes. I’ll get a very different answer than if I just say, “Well, how do I invest?” I’ve even improved silly things, like discovering that I tie my shoelaces too tight. I had no idea, thank you, AI. I now have a better technique for having feet that are less sore.

Did you discover through AI that you tie your shoelaces too tight?

Yeah, I went debugging. I wanted to try to figure out why my feet were sore. To help me diagnose this I gave the system a lot of information about me, such as when my feet were sore, what I was doing at the time, what shoes I was wearing. We went through a little debugging process: “Okay, first thing we’ll try is using a different shoelace-tying technique from the one that you have used, which was loop and then loosen a little bit.” I’m like, “Wow, now my feet don’t hurt. How awesome.”

So whatever it is that’s bugging you, you could go and try to debug it a little bit with AI, and just see what you get. Maybe it’s useful, maybe it isn’t. But if you simply give the system nothing and ask something like, “How do I become as healthy as possible?” you’ll probably not get any information about what to do with your shoelaces. You’re just going to get something from the very averaged-out, smoothed-out soup.

In order to get something useful, you have to bring something to the table. You have to know what’s important to you. You have to know what you’re trying to achieve. Sometimes, because your feet hurt right now, it’s important to you right now, and you’re kind of reacting the way that I was. I probably wouldn’t ask any proactive questions about my shoelaces, but sometimes what really helps is stepping back and saying, “Well, what is there in my life right now that could be better?” And then why not ask for advice?

AI makes advice cheaper than ever before. That’s the big revolution. It also helps with all kinds of nuanced advice, like pulling out some of your decision framing — “help me frame my ideas, help me ask myself the questions that would be important for getting through some or other decision.”

Where are most people making the biggest mistakes, or where do they have the biggest blind spots when it comes to decision-making? Is it asking the right questions? Is it deciding what they want? What would you say it is?

One is not getting in touch with their priorities. Again, when you’re not in touch with your priorities, anyone’s advice, even from the best person, could be bad for you. And this is something that also applies to the AI sphere. If we aren’t in touch with what we need and want, and we just ask the soup to give us back some average first draft and then we follow it to a T, what are the chances it will actually fit us very well?

Let me put a specific situation on this, because I’m the parent of a soon-to-be 17-year-old, second-semester junior in high school who’s getting ready to apply to colleges, and this is one of the first major decisions that young people make. It’s two-sided, which is really fraught because you’re deciding where to apply, and the schools are deciding who to let in.

It seems like that applies here too, because some people are going to apply to a school because their parents went there, or because it’s an Ivy League. So through that framing, can you talk about the types of mistakes that people make from the perspective of a high schooler applying to college?

I’m going to keep trying to tie this back a little bit to what we can learn about our own interactions with LLMs, because I think that’s helpful for people in this brave new world of how we use these AI tools. So again, we have three stages, approximately: you have to figure out what’s worth asking, what’s worth doing, and then you need to get some advice or technical help, some execution bit — that might be you, it might be the LLM, or it might be your dad giving you great advice. And then when you receive the advice, you need to have a moment in which you evaluate if it’s actually good for you. Is it good advice or bad advice? Do I follow it, implement it, and execute it? It’s these three stages.

So the first one, the least comfortable one, is asking yourself, “Well, how do I actually frame what I’m asking?” So to apply it specifically to your kid, it would be what is the purpose of college for me? Why am I even asking this question? What am I imagining? What are some things I might get out of this college versus that college? What would make each different for me? What are my priorities? Why are these priorities my priorities?

These are questions where if you are not in tune with your answers, what will happen is you will receive advice from wherever — from the culture, from the internet, from your dad — and you are likely to end up doing what is good for them rather than what’s good for you, all from not asking yourself enough preliminary questions.

It’s like the magician scenario. They feed you an answer subconsciously, and you end up spitting that back without even realizing it’s not what you really wanted.

Your dad might say, as my dad did, that economics is a really interesting and cool thing to study. This kind of went into my head when I was maybe 13 years old, and it kept knocking around in there. So that’s how I found myself in economics classes and ended up majoring in economics at the University of Chicago.

Actually, it’s not always true that what your parents put in there makes its way out, of course, because both of my parents were physicists, and I very quickly discovered that I wanted nothing to do with physics because of the constant parental “you should do better in physics, and you should take more physics classes.” And then, of course, after I rebelled in college, I ended up in grad school taking physics in my neuroscience program. So there you go, it comes around full circle.

But the point is that you have to know what you want, what’s important to you, and really be in touch with this so that you’re not pushed around by other people’s advice and even what seems like the best advice — and this is important — even the best advice could be bad for you. So when you think someone is competent and capable, and so you should absolutely take their advice, that’s a mistake. Because if what’s important to them is not what’s important to you, and you haven’t communicated clearly to them or they don’t have your best interests at heart, then this intelligent advice is going to lead you off a cliff. I just want to say that with AI, it could be a highly capable system, but if you haven’t given it the context to help you, it’s not going to help you.

The AI point is where I wanted to go, and I think you’ve talked about this in the past too. AI presents itself as very competent and very certain that it’s correct, with very little variation that I’ve seen based on the actual output. It’s not saying, “Eh, I’m not totally sure, but I think this,” when it’s about to hallucinate, versus, “Oh, here’s the answer,” when it’s absolutely right. It’s sure almost 100 percent of the time.

So that’s a design choice. Whenever you have actual probabilistic stages in your AI output, you can instead surface something to do with confidence, and this is achievable in many different ways. For some models, even some of the basic models, what happens is you get a probability first, and then that converts into the action or output that the user sees.

In other situations, you could run that system multiple times in the backend. You could ask it, “What is two plus two?” run this 100 times, and discover that 99 out of 100 times, the answer comes back with a four in it. You could then show some kind of confidence around this being at least what the cultural soup thinks the answer is, right?

Let’s ask, “What is the capital of Australia?” If the cultural soup says over and over that it’s Melbourne, which it isn’t, or that it’s Sydney, which it also isn’t — for those for whom that’s a surprise, Canberra is the right answer. But if enough of the cultural soup says Sydney, and we’re only sourcing from the cultural soup, and we’re not kicking in some extra logic to go specifically to Wikipedia and only draw from that, then you would get the wrong answer with high confidence. But it would be possible to score that confidence.

In situations where the cultural soup isn’t so sure of something, you would have a variety of different responses coming back, being averaged, and then you could say, “Well, the thing I’m showing you right now is only showing up in 20 percent of cases, or in 10 percent of cases.” Or you could even give a breakdown: “This is the modal answer, the most common answer, and then these are some answers that also show up.” Choosing not to do this is very much a user-experience design decision plus a compute and hardware decision.
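
To make that repeated-sampling idea concrete, here is a minimal sketch in Python. It assumes a hypothetical ask_model function that returns one answer string per call; it illustrates the general approach Cassie describes, not any particular vendor’s API.

```python
from collections import Counter

def answer_with_confidence(ask_model, question, n_samples=100):
    # ask_model is a hypothetical stand-in: one call returns one answer string.
    # Ask the same question many times.
    answers = [ask_model(question) for _ in range(n_samples)]

    # Count how often each distinct answer comes back.
    counts = Counter(answers)
    modal_answer, modal_count = counts.most_common(1)[0]

    return {
        "modal_answer": modal_answer,            # most common answer across runs
        "confidence": modal_count / n_samples,   # share of runs that agreed with it
        "breakdown": dict(counts),               # every answer that showed up, with counts
    }
```

The tradeoff she raises shows up in n_samples: more runs give a steadier confidence estimate, but every extra run is another full call’s worth of compute.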

It’s also a cultural issue, isn’t it?

It seems to me that in the US, and maybe this is true of a lot of Western cultures, we value confidence, and we value certainty even more sometimes than we value correctness.

There’s this culture in business where we sort of expect, right down to the moment when a company fails, for the CEO to say, “I’m really confident that we’re going to make this work,” because people want to follow somebody who’s confident. And then the next day they say, “Ah, well, I failed, it didn’t work out.” We kind of accept that and think, “Oh, well, they gave it their best, and they were really confident.”

It’s the same in sports, right? The team’s down three games to one in a best of seven series, and the team that’s only got one win, they’re like, “Oh, we’re really confident we can win.” Well, really, the statistics say you’re probably not going to win, but we know that they have to be confident if they’re going to have any chance. So we accept that, and in a way we’ve created AI in our own image in that respect.

Well, we’ve certainly created AI in our own image. There’s a lot of user-experience design that goes into that, but I don’t think it’s an inevitable thing. I know that on the one hand, there is this concept of the fluency heuristic. So a person or system that appears more fluent, with less hesitation, less uncertainty, is perceived as more trustworthy. This research has been done; it’s old research in psychology.

Now you see that the fluency heuristic is absolutely hackable, because if you forget that you’re dealing with a computer system that has some advantages, like memory, attention, and, well, fluency, it can just very quickly rattle off a bunch of nonsense it doesn’t understand. And that lands on the user or the listener as competence, and so translates as more trustworthy. So our fluency heuristic is absolutely hackable by machine systems. It’s much harder for me to hack it as a human. Though we do have artists who manage it very well, it’s very difficult to speak fluently on a topic that you have no idea about, where you don’t know how any of the words go together. That only works if it’s the blind leading the blind, where no one else in the room knows how any of it works either.

On the other hand, I’ll say, at least for me, I think it has helped me in my career to form a reputation that, well, I say it like it is, and so I’m not going to pretend I don’t know a thing when I don’t know it. You asked me about neuroscience, and I told you that it’s been a long time since my graduate degree. Maybe we should adjust what I’m saying, right? I do that. That is not for all markets. Let’s just say many would think, “She has no idea what she’s talking about. Maybe we shouldn’t do business with her,” but for sure, there’s still value in my approach, and I’ve definitely found it’s helped me to become battle-tested and trustworthy.

That said, when it comes to designing AI systems, that stuttering lack of confidence would not create a great user experience. But similarly, some of the things that I talked about here would be expensive compute-wise. What I see a lot in the AI industry is that we have business people thinking that something is not technologically possible because it is not being given to users, and particularly not at scale, or even offered to businesses. Quite often, it is very much technologically possible. It’s just not profitable to offer that feature. There is no good business case. There’s no sign that users will respond to it in a way that will make it worth it.

So when I’m talking about running something 100 times and then outputting something like a confidence score, you would have some decision-making around whether it is 100, 10, or 1,000; and this depends on a slew of factors, which, of course, we could get into if that’s the problem you as a business are solving. But when you just look at it on the surface, I’m saying essentially 100 times more compute, right? Run this thing 100 times instead of once, and for what? Will the users respond to it? Will the business care about it? Yeah, frequently you’d be amazed at what’s already possible. Agents like [OpenAI’s] Operator, [Anthropic’s] Claude Computer Use, [Google’s] Project Mariner, all these things, they are underperforming, relative to where they could be performing, on purpose because it is expensive to run them well. So it will be very exciting when businesses and users are ready to pay more for these capabilities.

So back up for me now, because you left Google about two years ago, a little less than that. You were there for about 10 years, starting long before the OpenAI and ChatGPT wave of AI enthusiasm swept across the globe. But you were working on some of this stuff. So I want to understand both the work at Google and what led you there.

I think you said that your dad first mentioned economics to you when you were 13, and that sounds really young, but I think you started college a couple of years later. So you were actually on your way to those studies at the time. What made you decide to go to college that early and what was motivating you?

One of the things we don’t talk about enough is that knowing what motivates someone tells you more about that person than pretty much anything else could. Because if you’re just observing the outcomes, and you’re having to make your own inferences about how they got there, what they did, why they did it, particularly with survivorship bias occurring, it might look like they’re such total heroes. Then you look at their actual decision process, and that may tell you something very different, or you may think someone’s not very successful without realizing that they’re optimizing for a very different thing from you. This is all a very long way of saying that it’s always just such a private question, but I’m glad we’re friends, Jon, so I’ll go for it. But yeah, why did I go to college so young? Honestly, it was because I had skipped grades in elementary school.

The reason I skipped grades in elementary school was because I came home — I was nine years old or so — and informed my mother that I wanted to do this. I cannot remember why. For the life of me, I don’t know. I was doing something on a nine-year-old’s whim, and skipping grades wasn’t a done thing in South Africa where I was growing up. So my parents had to really battle with the school and even the department of education to allow it. So there I was, getting to high school at 12, and I actually really enjoyed being younger. Okay, you get bullied a little bit, but I enjoyed it. I enjoyed seeing that you could learn a lot, and I wasn’t intellectualizing it the way I am right now, but you could learn a lot from people who were older than you.

They can kind of push you, and I’m a huge believer in just the act of being surrounded by people who will push you, which is maybe my biggest argument for why college still makes sense in the AI era. Just go be in a place where everyone’s on a journey of self-improvement. So I learned this and ended up making friends with 12th-graders when I was 13, and then at 14, they were all out already and in college. And I had spent most of my time with these older kids, and now I’m stuck, and I basically want my friends back. So that is why I went so young. It was 100 percent just a teenager being driven by being a social animal and wanting to be around my peer group, which…

But be fair to yourself. It sounds as if you just wanted to see how fast the car could go, right? That’s part of what it was at nine. You realized that you were capable of bigger challenges than the ones you had been given. So you were kind of like, “Well, let’s see.” And then you went and you saw that you were actually able to handle that, the intellectual part. People probably said, “Oh, but the social part would be hard.” But “Hey, I got friends who are seniors. That part’s working too. Well, let’s see if I can actually drive this at college speed.” That was part of it, right?

I am so easy to manipulate with the words, “You can’t do X.” So easy to manipulate. I’m like, “No, let me show you. I love a challenge. Let’s get this thing done.” So yeah, I think you’re right in your assessment.

So then you went on to do graduate work, after the University of Chicago, to study neuroscience, with some economics in there too?

So I actually went to Duke for neuroeconomics. That was the field. You know how there’s macroeconomics and microeconomics? Well, this was like nano-picoeconomics. This was about how the brain implements decision-making. So, of course, the courses involved experimental microeconomics. That was part of it, but this was from the psychology and neuroscience departments. So it’s technically a graduate degree in psychology and neuroscience with a focus on the neuroscience of decision-making, which is called neuroeconomics.

I also went to grad school twice, which is definitive proof that I’m a bad decision-maker, in case anyone was going to think that I personally am a good one. I’ve just got the technique, folks. I’ll advise you. But I’m just kidding; it was actually good for me to go to grad school twice, and my second time was for mathematical statistics. My undergraduate work was economics and statistics. So then I went for math statistics, where I did a lot of what we called machine learning back then, and what we would call AI today.

How many PhDs were involved there?

[Laughs] No PhDs were harmed in the making of this person.

Okay, but studying both of those disciplines. What were you going to do with that?

So coming back to college: I was taking courses around decision-making, despite having been an economics and statistics major, and I got a taste for this. So I’ll tell you why I was in the stats major. The stats major happened because at about age eight or nine, just before this jumping of grades, I discovered the most beautiful thing in the world, which everybody knows is spreadsheets. That was for me the most gorgeous thing. Maybe it’s the librarian’s urge to put order into chaos.

So I had this gemstone collection. Its entire purpose was to give me another row for my spreadsheet. That was the whole thing. I get an amethyst, I could be like, Oh, it is purple, and how hard is it? And it’s translucent. And I still find, though I have no business doing it, that the act of data entry with a nice glass of wine is just such a soothing thing to do.

So I had been playing with data. Once you start collecting it, you also find that you start manipulating it. You start to have these urges like, “Oh, I wonder if I could get the data of all my files on my computer all into a spreadsheet. Well, let me figure out how to do that.” And then you learn a little bit of coding. So I just got all these data skills for free, and I thought data was really pretty. So I thought stats would be my easy A. Little did I know that it’s actually philosophy, and the philosophy bits are always the bits that should kick your butt or you’re missing the point. But of course, manipulating the data bits was super-duper easy. Statistics, I realized as I began to soak in the philosophy, is the discipline of changing your mind under uncertainty.

Economics is the discipline of scarcity, and the allocation of scarce resources. And even if money is not scarce, something is always scarce. People are mortal; time is scarce. So asking the question, “How are you going to make allocations, or what you might call decisions?” got in there through economics. Questions like “How do you change your mind, and what is your mind set to do? What actions are on the table? What would it take to talk you out of it?” came in through statistics.

I started asking these questions, and then how does this actually work in the human animal, and how could it work better? These questions came in through the psychology and neuroscience side of my studies. So I was studying decision-making from every perspective, and I was hoarding. So here as well, did I know what career I was going to have? I was actively discouraged from doing this. When I was at the University of Chicago, even at that liberal arts place, my undergraduate adviser said, “I have no idea what job you think you’re going to get with all this stuff.”

I said, “That’s okay, I’m learning. I think this is kind of important.” I hadn’t articulated back then what I’ll say now, which is that data is pretty, but there’s no “why” in data. The why comes from the decision-maker, right? The purpose has to come from people. It’s either your own purpose or the purpose of the people whom you represent, and that is what gives direction to all the rest of it. So [it’s] just studying data where it feels like there’s a right answer because the professor set the problem up so that there’s a right answer. If they had set it up differently, there could have been different answers.

Realizing that the setup has infinite choices, that is what gives data its why, and its meaning. That is the decision piece. That’s the most important thing I think any of us could spend our time on. Though we all do spend our time on it and do approach it through different lenses.

So then why Google? Why did you promise yourself you wouldn’t work for a company for more than 10 years?

Well, we’re really getting into all the things. So Google is a funny one, and now I’ll definitely say some things that I don’t think I’ve said on any podcasts. But the true story of that is that I was in a math stat PhD program, and what I didn’t know was that my adviser — this was at North Carolina State — had just taken an offer at Berkeley, where he could not bring any of his students along with him. That was a pretty bad thing for me, in the middle of my PhD.

Now, separate from all of this going on that I had no idea about, I take Halloween pretty seriously. It’s my thing. At Kozyr, it’s a work holiday, so people can enjoy Halloween properly if they want to. And I had come in on Halloween morning dressed as a punch card, as one does, with proper Fortran to print “Happy Halloween,” and a Googler was giving a talk, and I was sitting in that audience, the only person in costume, because everyone else is lame.

Let that go on the record. My former classmates should have been in costume, but we can still be friends. And so at 9AM, I’m dressed like this. The Googler lady talking to the head of the department is like, “Who’s that grad student who was dressed as a punch card?” The head of the department, not having seen me, still said, “Oh, that’s probably Cassie. Last year she was dressed as a Sigma field,” just from measure theory. So I was being a huge nerd. The Googler thought “culture fit,” 100 percent, let’s get her application in.

And so the application was just for a summer internship, which seemed like a harmless thing to do. Sure, let’s try it. It’s an adventure. It’s Google. Then as I was signing up for it, my adviser was like, “This is a very good thing for you. You shouldn’t even hesitate. Don’t be asking me if I want you here doing summer research. Definitely go to Google. You can finish your PhD there. Go to Google.” And the rest is history. So a much, much better option than having to restart and refigure things with a new adviser.

How did you end up becoming this translator between the data people and the decision-makers?

The role that I ended up getting at Google, the formal internship name, was decision-support intern. I thought to myself, “We’ll figure out the support, and we’ll figure out the intern.” But decision, this is what I’ve been training for my whole life. The team that I was in was like a SWAT team for data-driven decision-making. It was very, very close to Google’s primary revenue. So this was a no-messing-around team of statisticians that called itself decision support. It was hardcore statistics flavored with data science, and it also had a very hardcore engineering group — it was a very big group. I learned a lot there.

I applied to potentially stay in the same group for a full-time role, with strong prompting from my PhD adviser, and I thought I was going to join that group. A tangential thing happened, which is that I took a weekend in New York City before going out to Mountain View to pick out my apartment. I was really, really excited to be surrounded by deep experts in what I cared about. These experts were actually working more on the data side of things, because what the decisions are and how we approach them are so regimented in that part of Google. But I took this trip to New York City, and I realized, and this was one of the biggest gut-punch decision-making moments for me, that I was making a terrible mistake: if I went there, I would just not enjoy my life as much as if I went to New York City.

So there was so much instinct, there was so much, “Oh, no, I should actually really reevaluate what I’m doing. Am I going to enjoy living in Mountain View?” I was just so set on getting the offer that I hadn’t done what I really should have done, which was to evaluate my priorities properly. So the first thing I did was I called the recruiter and I said, “Whoa, whoa, whoa, whoa. Can I get a role in New York City instead? It doesn’t matter which team. Is there something we can find for me to do here?” So I joined the New York office instead. Very, very different projects, very, very different group. And there I realized that not all of Google had this regimented approach to decision-making. There is so much translation, even at a place like Google, that’s necessary for products that are less close to the revenue stream.

So then there has to be a lot more conversation about why and how to do resource allocation, and who’s even in charge there, right? Those are questions that, when you’re moving billions around at the click of a mouse, you tend to have answered already. But in these other parts of Google, there was so much more color in how you could approach it, and such a big chasm between the people tasked with that and any of the data or engineering or data science efforts we might have.

So to really try to fill that gap and put a bridge over it, so that things could be useful, I worked way more than my formal job said I should to build infrastructure. I built early statistical consulting, because that wasn’t there. You couldn’t just go ask a statistician who’d sit down with you and talk through what your project was going to be.

I convinced people who were stats specialists to offer their 20 percent time in support of projects that were not their own, to put some structure to this, and I made resources and courses for decision-makers on how to think about dealing with data folk. I really tried to bring these two areas together, and eventually it became my job. But for the longest time, it wasn’t. Sometimes I faced questions: What are you? Who are you? Why are you actually doing what you’re doing? But just seeing that things could be made more effective, and kinder, for the experts who were going to work on poorly specified problems unless you specified the problems well first, was motivating, so that’s why I did it.

Trying to tie this all together, it sounds like that values and goals piece, and the philosophy element you talked about as being important in school, were coming back into play, versus just focusing on the external expectation: you’re going to work for Google, so of course you’re going to go to Mountain View. That’s where the power is. That’s where the data people go, and you’re smart enough to be with the data people.

So if you’re going to run the car as fast as possible, you’re going to go over there, but you made a different kind of decision than perhaps the nine-year-old Cassie made. You stepped back and said, “Wait a minute, what’s going to be best for me? And how can I work within that while pulling in some of this other information?”

Yeah, for sure. I think that something that we can say to your 17-year-old is that it’s okay. It’s okay if it’s difficult when you’re young to take stock of what you actually are. You’re not formed yet, and maybe it’s okay to let the wind take you a little bit, particularly when you have a great dad who’s going to give you great advice. But it would be good if you can eventually mature into more of a habit of saying, “Well, I’m not the average Joe, so what do I actually want?” And as for working for what is thought of as the top teams: I don’t want to offend any internal Googlers, but they did have that reputation.

If you wanted to be number one and then number one again and number one some more times, that would’ve been the way to do it. But again, maybe it’s worth having something else that you optimize for in life. And I, as it turns out, I’m a theater kid, a lifelong theater kid. I’m an absolute nerd of theater. I’m going to London for just a few days in two weeks, and I’m seeing every evening show and matinee. I’m just going to hoard as much theater as I can for the soul. And so living in New York City was going to be just a better fit, not only for theater but for so much more that that city has to offer.

Having lived in both Silicon Valley and the New York area, I promise you that yes, the theater is far better in New York.

I mean, I went to all the plays in Silicon Valley as well, and I did my homework. I knew what I was getting into or out of. But yeah, it takes practice and skill to know that some of those questions are even questions worth asking. And I’ve developed that practice and skill from originally knowing how to do it to help others, having studied it formally, being book smart about it. These are the questions you ask. This is the order you ask them in. It’s something else to turn that on yourself and ask yourself the hard questions; book smartness isn’t enough for that.

That’s good advice for all of us. Whether we’re running businesses or just trying to figure out life, we’ve all got decisions to make. Cassie Kozyrkov, founder and CEO of Kozyr, former chief decision scientist at Google. Thanks for joining me on this episode of Decoder.

Thanks for having me, Jon.

Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!

