Today, I’m talking with Arati Prabhakar, the director of the White House Office of Science and Technology Policy. That’s a cabinet-level position, where she works as the chief science and tech adviser to President Joe Biden. She’s also the first woman to hold the position, which she took on in 2022.
Arati has a long history of working in government: she was the director of the National Institute of Standards and Technology, and she headed up the Defense Advanced Research Projects Agency (DARPA) for five years during the Obama administration. In between, she spent more than a decade working at several Silicon Valley companies and as a venture capitalist, so she has extensive experience in both the public and private sectors.
Arati and her team of about 140 people at the OSTP are responsible for advising the president on big developments in science as well as major innovations in tech, much of which comes from the private sector. That means guiding regulatory efforts and government investment and setting priorities around big-picture projects like Biden’s cancer moonshot and the fight against climate change.
You’ll hear Arati and me talk about that pendulum swing between public and private sector R&D — how that affects what gets funded and what doesn’t and how she manages the tension between the hyper-capitalist needs of industry and the public interest of the federal government.
We also talked a lot about AI, of course. Arati was notably the first person to show ChatGPT to President Biden; she has a funny story about how they had it write song lyrics in the style of Bruce Springsteen. But the OSTP is also now helping guide the White House’s approach to AI safety and regulation, including Biden’s AI executive order last fall. Arati and I talked at length about how she personally assesses the risks posed by AI, in particular around deepfakes, and what effect big tech’s often self-serving relationship to regulation might have on the current AI landscape.
Another big interest area for Arati is semiconductors. She got her PhD in applied physics, with a thesis on semiconductor materials, and when she arrived on the job in 2022, Biden had just signed the CHIPS Act. I wanted to know whether the $52 billion in government subsidies to bring chip manufacturing back to America is starting to show results, and Arati had a lot to say on the strength of this kind of legislation.
One note before we start: I sat down with Arati last month, just a couple of days before the first presidential debate and its aftermath, which swallowed the entire news cycle. So you’re going to hear us talk a lot about President Biden’s agenda and the White House’s policy record on AI, among other topics, but you’re not going to hear anything about the president, his age, or the presidential campaign.
Okay, OSTP Director Arati Prabhakar. Here we go.
This transcript has been lightly edited for length and clarity.
Arati Prabhakar. You are the director of the White House’s Office of Science and Technology Policy and the science and technology adviser to the president. Welcome to Decoder.
It’s great to be with you.
I am really excited to talk to you. There’s a lot of science and technology policy to talk about right now. We’re also entering what promises to be a very contentious election season where I think some of these ideas are going to be up for grabs, so I want to talk about what is politicized, what is not, and where we might be going. But just let’s start at the start. For the listener, what is the Office of Science and Technology Policy?
We’re a White House office with two roles. One is whatever the president needs advice or help on that relates to science and technology, which is in everything. That’s part one. Part two is thinking about working on nurturing the entire innovation system in the country, especially the federal component, which is the R&D that’s done across literally dozens of federal agencies. Some of it’s for public missions. A lot of it forms the foundation for everything else in the innovation ecology across this country. That’s a huge part of our daily work. And as we do that, of course what we’re working on is how do we solve the big problems of our time, how do we make sure we’re using technology in ways that build our values.
That’s a big remit. When people think about policymaking right now, I think there’s a lot of focus on Congress or maybe state-level legislatures. Which piece of the policy puzzle do you have? What are you able to most directly affect?
I’ll tell you how I think about it. The reason I was so excited when the president asked if I would do this job a couple of years ago is because my personal experience has been working in R&D and in technology and innovation from lots of different vantage points. I ran two very different parts of federal R&D. In between, I spent 15 years in Silicon Valley at a couple of companies, but most of that was early-stage venture capital. I started a nonprofit.
What I learned from all of that is that we do huge things in this country, but it takes all of us doing them together — the huge advances that we’ve made in the information revolution and are now making in fighting climate change and advancing American health. We know how amazing R&D was for everything that we did in the last century, but this century’s got some different challenges. Even what national security looks like is different today because the geopolitics is different. What it means to create opportunity in every part of the country is different today, and we have challenges like climate change that people weren’t focused on last century, even though we now wish that they had been.
How do you aim innovation at the great aspirations of today? That’s the organizing principle, and that’s how we set priorities for where we focus our attention and where we work to get innovation aimed in the right direction and then cranking.
Is that the lens: innovation and forward-thinking? That you need to make some science and technology policy, and all that policy should be directed at what’s to come? Or do you think about what’s happening right now?
In my view, the purpose of R&D is to help create options so that we can choose the future that we really want and to make that possible. I think that has to be the ultimate objective. The work gets done today, and it gets done in the context of what’s happening today. It’s in the context of today’s geopolitics. It’s in the context of today’s powerful technologies, AI among them.
When I think about the federal government, it’s this large complicated bureaucracy. What buttons do you get to push? Do you just get to spend money on research projects? Do you get to tell people to stop things?
No, I don’t do that. When I ran DARPA [Defense Advanced Research Projects Agency] or when I ran the National Institute of Standards and Technology (NIST) over in the Commerce Department, I ran an agency, and so I had a line position, I had a budget, I had a bunch of responsibilities, and I had a blast working with great people and getting big things done. This is a different job. This is a staff job to the president first and foremost, and so this is a job about looking across the entire system.
We actually have a very tiny budget, but we worry about the entire picture. So, what does that actually mean? It means, for example, helping the president find great people to lead federal R&D organizations across government. It means keeping an eye on where shifts are happening that need to inform how we do research. Research security is a challenge today that, because of geopolitics and some of the issues with countries of concern, is going to have an impact on how universities conduct research. That’s something that we will take on, working with all the agencies that work with universities.
It’s those kinds of cross-cutting issues. And then when there are strategic imperatives — whether it’s wrangling AI to make sure we get it right for the American people, figuring out if we’re doing the work we need to decarbonize the economy fast enough to meet the climate crisis, or doing everything it takes to cut the cancer death rate in half as fast as the president is pounding the table for with his cancer moonshot — we sit in a place where we can look at all the puzzle pieces, make sure that they’re working together, and make sure that the gaps are getting addressed, either by the president or by Congress.
I want to draw a line here because I think most people assume that the people working on tech in the government are affecting the functions of the government itself, like how the government might use technology. Your role seems a little more external: it’s about the policy of how technology will be developed and deployed over time across private industry and government.
I would call it integrative because we’re very lucky to have great technologists who are building and using technology inside the government. That’s something we want to support and make sure is happening. Just as an example, one of our responsibilities for the AI work has been an AI talent surge to get the right kind of AI talent into government, which is now happening. Super exciting to see. But our day job is not that. It’s actually making sure that the innovation enterprise is robust and doing what it really needs to do.
How is your team structured? You’re not out there spending a bunch of money, but you have different focus areas. How do you think about structuring those focus areas, and what do they deliver?
Policy teams, and they’re organized specifically around these great aspirations that are the purpose for R&D and innovation. We have a team focused on health outcomes, among other things, that runs the president’s Cancer Moonshot. We have a team called Industrial Innovation that is about the fact that we now have, with this president, a very powerful industrial strategy that is revitalizing manufacturing in the United States, building our clean energy technologies and systems, and bringing leading-edge semiconductor manufacturing back to the United States. So, that’s an office that focuses on the R&D and all of that big picture of industrial revitalization that’s going on.
We have another team that focuses on climate and the environment, and that one is about things like making sure we can measure greenhouse gases appropriately. How do we use nature to fight climate change? And then we have a team that’s focused on national security just as you would expect, and each of those is a policy team. In each one of those, the leader of that organization is typically an extremely experienced person who has often worked inside and outside of government. They know how the government works, but they also really understand what it is the country’s trying to achieve, and they’re knitting together all the pieces. And then again, where there are gaps, where there are new policies that need to be advanced, that’s the work that our teams do.
Are you making direct policy recommendations? So, the environment team is saying, “Alright, every company in the country has promised a million trees. That’s great. We should incentivize some other behavior as well, and then here’s a plan to do that.” Or is it broader than that?
The way policies get implemented can be everything from agencies taking action within the laws that they live under, within their existing resources. It can be an executive order where a president says, “This is an urgent matter. We need to take action.” Again, it’s under existing law, but it’s the chief executive, the president, saying, “We need to take action.” Policy can be advanced through legislative proposals where we work with Congress to make something move forward. It’s a matter of what it takes to get what we really need, and often we start with actions within the executive branch, and then it expands from there.
How big is your office right now?
We’re about 140 people. Almost all of our team is people who are here on detail from other parts of government, sometimes from nonprofits outside of government or universities. The organization was designed that way because, again, it’s integrative. You have to have all of these different perspectives to be able to do this work effectively.
You’ve had a lot of roles. You led DARPA. That’s a very executive role within the government. You get to make decisions. You’ve been a VC. What’s your framework now for making decisions? How do you think about it?
The first question is what does the country need and what does the president care about? Again, a lot of the reason I was so excited to have this opportunity… by the time I came in, President Biden was well underway. I had my interview with him almost exactly two years ago — the summer of 2022. By then, it was already really clear, number one, that he really values science and technology because he’s all about how we build the future of this country. He understands that science and technology is a key ingredient to doing big things. Number two, he was really changing infrastructure: clean energy, meeting the climate crisis, dealing with semiconductor manufacturing. That was so exciting to see after so many decades. I’ve been waiting to see those things happen. It really gave me a lot of hope.
Across the board, I just saw that his priorities really reflected what I deeply and passionately thought was so important for our country to meet the future effectively. That’s what drives the prioritization. Within that, I mean it’s like any other job where you’re leading people to try to get big hard things done. Not surprisingly, every year, I make a list of the things we want to get done, and through the year, we work to see what kind of progress we’re making, and we succeed wildly on some things, but sometimes we fail or the world changes or we have to take another run at it. But overall, I think we’re making huge progress, and that’s why I’m still running to work.
When you think about places you’ve succeeded wildly, what are the biggest wins you think you’ve had in your tenure?
In this role, I’ll tell you what happened. As I showed up in October of 2022 for this job, ChatGPT showed up in November of 2022. Not surprisingly, I would say largely my first year got hijacked by AI but in the best possible way. First, because I think it’s an important moment for society to contend with all the implications of AI, and secondly, because, as I’ve been doing this work, I think a lot of the reason AI is such an important technology in our lives today is because of its breadth. Part of what that means is that it is definitely a disruptor for every other major national ambition that we have. If we get it right, I think it can be a huge accelerator for better health outcomes, for meeting the climate crisis, for everything that we really have to get done.
In that sense, a lot of my personal focus was on AI matters, and it still is; that continues. But while that was going on, my great team and I continued to make good progress on all the other things that we really care about.
Don’t worry, I’m going to ask a lot of AI questions. They’re coming, but I just want to get a sense of the office because you talked about coming in ’22. That office was in a little bit of turmoil, right? Trump had underfunded it. It had gone without any leadership for a minute. The person who preceded you left because they had contributed to a toxic workplace culture. You had a chance to reset it, to reboot it. The way it was wasn’t the way anybody wanted it to be, and it hadn’t been for some time. How did you think about making changes to the organization at that moment in time?
Between the time my predecessor left and the time I arrived, many months had passed. What was so fortunate for OSTP and the White House and for me is that Alondra Nelson stepped in during that time, and she just poured love on this organization. By the time I showed up, it had become — again, I would tell you — a very healthy organization. She gave me the great gift of a huge number of really smart, committed people who were coming to work with real passion about what they were doing. From there, we were able to build. We can talk about technology all day long, but when I think about the most meaningful work I’ve ever done in my professional life, it’s always about doing big things that change the future and improve people’s lives.
The satisfaction comes from working with great people to do that. For me, that is about infusing people with this passion for serving the country. That’s why they’re all here. But there’s a live conversation in our hallways about what we feel when we walk outside the White House gates and see people from around the country and around the world looking at the White House, and the sense we all share that we’re there to serve them. Those things are why people work here, but making that a live part of the culture is important for making this a rich and meaningful experience for people, and that’s when they bring their best. I feel like we’ve really been able to do that here.
You might describe that feeling, and I’ve felt it, too, as patriotism. You look at the monuments in DC, and you feel something. One thing that I’ve been paying attention to a lot recently is the back-and-forth between federal government spending on research and private company spending on research. There’s a pretty enormous delta between the sums. And then I see the tech companies, particularly in AI, holding themselves out as national champions. Or you see a VC firm like Andreessen Horowitz, which did not care about the government at all, saying that its policy is America’s policy.
Is that part of your remit to balance out how much these companies are saying, “Look, we are the national champions of AI or chip manufacturing,” or whatever it might be, “and we can plug into a policy”?
Well, I think you’re talking about something that is very much my day job, which is understanding innovation in America. Of course there’s the federal component of it, which is integral, but we have to look at the whole because that’s the ecosystem the country needs to move forward.
Let’s zoom back for a minute. The pattern that you’re describing is something that has happened in every industrializing economy. If you go back in history, it starts with public investment in R&D. When a country is wealthy enough to put some resources into R&D, it starts doing that because it knows that’s where its growth and its prosperity can come from. But the point of doing that is actually to seed private activity. In our country, as in many other developed economies, the moment came when public funding of R&D, which continued to grow, was surpassed by private investment in R&D. Then, with the intensification of the innovation economy around the information technology industries, private investment just took off, and it’s been amazing and really great to see.
The most recent numbers — I believe these are from 2021 — are something like $800 billion a year that the United States spends on R&D. Overwhelmingly, that is from private industry. The fastest growth has come from industry and specifically from the information technology industries. Other industries like pharmaceuticals and manufacturing are R&D-intensive, but their pace of growth has been just… the IT industries are wiping out everyone else’s growth [by comparison]. That’s huge. One aspect of that is that’s where we’re seeing these big tech companies plowing billions of dollars into AI. If that’s happening in the world, I’m glad it’s happening in America, and I’m glad that they’ve been able to build on what has been decades now of federal research and development that laid the groundwork for it.
Now, it does then create a whole new set of issues. That really, I think, comes to where you were going because let’s back up. What is the role of federal R&D? Number one, it is the R&D you need to achieve public missions. It’s the “R” and the “D,” product development, that you need for national security. It’s the R&D that you need for health, for meeting the climate crisis. It’s all the things that we’ve been talking about. It’s also that, in the process of doing that work, part of what federal R&D does is to lay a very broad foundation of basic research because that’s important not just for public missions, but we know that that’s something that supports economic growth, too. It’s where students get educated. It’s where the fundamental research that’s broadly shared through publications gets done, and that’s a foundation that industry counts on. Economics has told us forever that those aren’t returns that companies can appropriate, and so it’s so important for the public sector to do that.
The question really becomes, when you step back and see this huge growth in private sector R&D, how do we sustain federal R&D? It doesn’t have to be the biggest for sure, but it certainly has to be able to continue to support the growth and the progress that we want in our economy, and also, more broadly, these public missions. That’s why it was a priority for the president from the beginning, and he made really good progress on building federal R&D in the first couple of years of his administration. It grew fairly substantially in the first couple of budget cycles. Then, with the Republican budget caps from Capitol Hill in the last cycle, R&D took a hit, and that’s actually been a big problem that we are focused on.
The irony is that we’ve actually cut federal R&D in this last cycle at a time when our primary emerging economic and military competitor is the People’s Republic of China (PRC). They boosted R&D by 10 percent while we were cutting. And it’s a time when AI is a jump ball because a lot of AI advances came from American companies, but the advantages are not limited to America. It’s a time when we should be doubling down, and we’re doing the work to get back on track.
That is the national champion’s argument, right? I listen to OpenAI, Google, or Microsoft, and they say, “We’re American companies. We’re doing this here. Don’t regulate us so much. Don’t make us think about compliance costs or safety or anything else. We’ve got to go win this fight with China, which is unconstrained and spending more money. Let us just do this. Let us get this done.” Does that work with you? Is that argument effective?
First of all, that’s not really what I would say we’re hearing. We hear a lot of things. I mean, astonishingly, this is an industry that spends a lot of time saying, “Please do regulate us.” That’s an interesting situation, and there’s a lot to sort out. But look, I think this is really the point about all the work we’ve been doing on AI. It really started with the president and the vice president recognizing it as such a consequential technology, recognizing promise and peril, and they were very clear from the beginning about what the government’s role is and what governance really looks like here.
Number one is managing its risks. And the reason for that is number two, which is to harness its benefits. The government has, I think, two very important roles. That was visible and obvious even before generative AI happened, and it’s even more so now that such a breadth of applications each comes with a bright side and a dark side. So, of course, there are issues of embedded bias and privacy exposure, issues of safety and security, and issues about the deterioration of our information environment. We know that there are impacts on work that have started and that will continue.
Those are all issues that require the government to play its role. It requires companies, it requires everyone to step up, and that’s a lot of the work that we have been doing. We can talk more about that, but again, in my mind, and I think for the president as well, the reason to do that work is so that we can use it to do big things. Some of those big things are being done by industry, with the new markets that people are creating and the investment that comes in for that, and as long as it’s done responsibly, we want to see that happen. That’s good for the country, and it can be good for the world as well.
But there are public missions that are not going to be addressed just by this private investment and that are ultimately still our responsibility. When I look at what AI can bring to each of the public missions that we’ve talked about, it’s everything from weather forecasting to [whether] we finally realize the promise of education tech for changing outcomes for our kids. I think there are ways that AI opens paths that weren’t available before, so I think it’s incredibly important that we also do the public sector work. By the way, it’s not all just using an LLM that someone’s been developing commercially. There’s a very different array of technologies within AI, and that work has to get done as well if we’re really going to succeed and thrive in this AI era.
When you say these companies want to be regulated, I’ve definitely heard that before, and one of the arguments they make is if you don’t regulate us and we just let market forces push us forward, we might kill everyone, which is a really incredible argument all the way through: “If we’re not regulated, we won’t be able to help ourselves. Pure capitalism will lead to AI doom.” Do you buy that argument that if they don’t stop it, they’re on a path toward the end of all humanity? As a policymaker, it feels like you need to have a position here.
I’ve got a position on that. First of all, I am struck by the irony of “it’s the end of the world, and therefore we have to drive.” I hear that as well. Look, here’s the thing. I think there’s a very garbled conversation about the implications, including safety implications, of AI technology. And, again, I’ll tell you how I see it, and you can tell me if it matches up to what you’re hearing.
Number one, again, I start with the breadth of AI, and part of the cacophony in the AI conversation is that everyone is talking about the piece of it that they really care about, whether it’s bias in algorithms or something else. If that’s what you care about, if that’s what’s killing people in your community, then, yes, that’s what you’re going to be talking about. But that’s actually a very different issue than misinformation being propagated more effectively. All of those are different issues than what kinds of new weapons can be designed.
I find it really important to be clear about what the specific applications are and the ways that the wheels can come off. I think there’s a tendency in the AI conversation to say that, in some future, there will be these devastating harms that are possible or that will happen. The fact of the matter is that there are devastating harms that are happening today, and I think we shouldn’t pretend that it’s only a future issue. The one I will cite that’s happening right now is online degradation, especially of women and girls. The idea of using nonconsensual intimate imagery to really just ruin people’s lives was around before AI, but when you have image generators that allow you to make deepfake nudes at a tremendous rate, it looks like this is actually the first manifestation of an acceleration in harms as opposed to just risks with generative AI.
The machines don’t have to make huge advances in capability for that to happen. That’s a today problem, and we need to get after it right now. We’re not philosophers; we’re trying to make policies that get this right for the country. For our work, I think it’s really important to be clear about the specific applications, the risks, the potential, and then take actions now on things that are problems now and then lay the ground so that we can avoid problems to the greatest degree possible going forward.
I hear that. That makes sense to me. What I hear often in opposition to that is, “Well, you could do that in Photoshop before, so the rules should be the same.” And then, to me at least, the difference is, “Well, you couldn’t just open Photoshop and tell it what you wanted and get it back.” You had to know what you were doing, and there was a rate limiter or a skill limiter there that prevented these bad things from happening at scale. The problem is I don’t know where you land the policy to prevent it. Do you tell Adobe not to do it? Do you tell Nvidia not to do it? Do you tell Apple not to do it at the operating system level? Where do you think, as a policymaker, those restrictions should live?
I’ll tell you how we’re approaching that specific issue. Number one, the president has called on Congress for legislation on privacy and on protecting our kids most particularly as well as broader legislation on AI risks and harms. And so some of the answer to this question requires legislation that we need for this problem, but also for—
Right, but is the legislation aimed at just the user? Are we just going to punish the people who are using the tools, or are we going to tell the toolmakers they can’t do the thing?
I want to reframe your question as a systems question because there’s not one place where this problem gets fixed; it’s all the things that you were talking about. Some of the measures — for example, protecting kids and protecting privacy — require legislation, but they would broadly inhibit the kind of accelerated spread of these materials. In a very different action that we took recently, working with the Gender Policy Council here at the White House, we put out a call to action to companies because we know the legislation’s not going to happen overnight. We’ve been hoping and wishing that Congress could move on it, but this is a problem that’s happening right now, and the people who can take action right now are companies.
We put out a call to action that called on payment processors and called on the platform companies and called on the device companies because they each have specific things that they can do that don’t magically solve the problem but inhibit this and make it harder and can reduce the spread and the volume. Just as an example, payment processors can have terms of service that say [they] won’t provide payment processing for these kinds of uses. Some actually have that in their terms of service. They just need to enforce it, and I’ve been happy to see a response from the industry. I think that’s an important first step, and we’ll continue to work on the things that might be longer-term solutions.
I think everyone looks for a silver bullet, and almost every one of these real-world issues is something where there is no one magic solution, but there are so many things you can do if you understand all the different aspects of it — think of it as a systems problem and then just start shrinking the problem until you can choke it, right?
There’s a part of me that says, in the history of computing, there are very few things the government says I cannot do with my MacBook. I buy a MacBook or I buy a Windows laptop and I put Linux on it, and now I’m just pretty much free to run whatever code I want, and there’s a very, very tiny list of things I’m not allowed to do. I’m not allowed to counterfeit money with my computer. At some layers of the application stack, that is prevented. Printer drivers won’t let you print a dollar bill.
Once you expand that to “there’s a bunch of stuff we won’t let AI do, and there are open-source AI models that you can just go get,” the question of where you actually stop it, to me, feels like it requires both a cultural change, in that we’re going to regulate what I can do with my MacBook in a way that we’ve never done before, and maybe regulation at the hardware level, because if I can just download some open-source AI model and tell it to make me a bomb, all the rest of it might not matter.
Hold on that. I want to pull you up out of the place that you went for a minute because what you were talking about is regulating AI models at the software level or at the hardware level, but what I’ve been talking about is regulating the use of AI in systems, the use by people who are doing things that create harm. Let’s start with that.
If you look at the applications, a lot of the things that we’re worried about with AI are already illegal. By the way, it was illegal for you to counterfeit money even if there wasn’t a hardware protection. That’s illegal, and we go after people for that. Committing fraud is illegal, and so is this kind of online degradation. So, where things are illegal, the issue is one of enforcement because it’s actually harder to keep up with the scale of acceleration with AI. But there are things that we can do about that, and our enforcement agencies are serious, and there are many examples of actions that they’re taking.
What you’re talking about is a different class of questions, and it’s one that we have been grappling with, which is what are the ways to slow and potentially control the technology itself? I think, for the reasons you mentioned and many more, that’s a very different kind of challenge because, at the end of the day, models are a collection of weights. It’s a bunch of software, and it may be computationally intensive, but it’s not like controlling nuclear materials. It’s a very different kind of situation, so I think that’s why that’s hard.
My personal view is that people would love to find a simple solution where you corral the core technology. I actually think that, in addition to being hard to do for all the reasons you mentioned, one of the persistent issues is that there’s a bright and dark side to almost every application. There’s a bright side to these image generators, which is phenomenal creativity. If you want to build biodesign tools, of course a bad actor can use them to build biological weapons. That’s going to get easier, unfortunately, unless we do the work to lock that down. But that’s actually going to have to happen if we’re going to solve vexing problems in cancer. So, I think what makes it so complex is recognizing that there’s a bright and a dark side and then finding the right way to navigate, and it’s different from one application to the next.
You talk about the shift between public and private funding over time, and it moves back and forth. Computing is largely the same. There are open eras of computing and closed eras of computing. There are more controlled eras of computing. It feels like, with AI, we are headed toward a more controlled era of computing where we do want powerful biodesign tools, but we might only want some people to have them. As opposed to, I would say, up until now, software’s been pretty widely available, right? New software, new capabilities hit, and they get pretty broadly distributed right away. Do you feel that same shift — that we might end up in a more controlled era of computing?
I don’t know because it’s a live topic, and we’ve talked about some of the factors. One is: can you actually do it, or are you just trying to hold water in your hand while it slips out? Secondly, if you do it effectively, no action comes without a cost. So, what is the cost? Does it slow down your ability to design the breakthrough drugs that you need? Cybersecurity is the classic example: the exact same advanced capabilities that allow you to find vulnerabilities quickly are bad for the world if you’re a bad guy and good for the world if you’re finding those vulnerabilities and patching them quickly, but it’s the same core capability. Again, it’s not yet clear to me how this will play out, but I think it’s a tough road that everyone’s trying to sort out right now.
One of the things about that road that is fascinating to me is there seems to be a core assumption baked into everyone’s mental models that the capability of AI, as we know it today, will continue to increase almost at a linear rate. Like no one is predicting a plateau anytime soon. You mentioned that last year, it was pretty crazy for you. That’s leveled off, and I would attribute at least part of that to the fact that the capabilities of the AI systems themselves have leveled off. As you’ve had time to look at this and you think about the amount of technology you’ve been involved with over your career, do you think we’re overestimating the rate of progression here? Do you think particularly the LLM systems can live up to our expectations?
I have a lot to say about this. Number one, this is how we do things, right? We get very excited about some new capability, and we just go crazy about it, and people get so jazzed about what could be possible. It’s the classic hype curve, right? It’s the classic thing, so of course that’s going to happen. Of course we’re doing that in AI. When you peel the onion for really genuinely powerful technologies, when you’re through the hype curve, really big shifts have happened, and I’m quite confident that that’s what’s happening with AI broadly in this machine learning generation.
Broadly with machine learning or broadly with LLMs and with chatbots?
Machine learning. And that’s exactly where I want to go next because I think we are having a somewhat oversimplified conversation about where advances in capability come from, and capability always comes hand in hand with risks. I think about this a lot, both because of the things I want to do for the bright side, but also because it’s going to come with a dark side. The one dimension that we talk about a lot for all kinds of reasons is primarily about LLMs, but it’s also about very large foundation models, and it’s a dimension of increasing capability that’s defined by more data and more flops of computing. That’s what has dominated the conversation. I want to introduce two other dimensions. One is training on very different kinds of data. We’ve talked about biological data, but there are many other kinds of data: all kinds of scientific data, sensor data, administrative data about people. Those each bring different kinds of advances in capability and, with it, risks.
Then, the third dimension I want to offer is the fact that you never interact with an AI model directly. AI models live inside of a system. Even a chatbot is actually an AI model embedded in a system. But as AI models become embedded in more and more systems, including systems that take action in the online world or in the physical world, like a self-driving car or a missile, that’s a very different dimension of risk — what actions ensue from the output of a model? And unless we really understand and think about all three of those dimensions together, I think we’re going to have an oversimplified conversation about capability and risk.
But let me ask the simplest version of that question. Right now, what most Americans perceive as AI is not the cool photo processing that has been happening on an iPhone for years. They perceive the chatbots — this is the technology that’s going to do the thing. Retrieval-augmented generation inside your workplace is going to displace an entire floor of analysts who might otherwise have asked the questions for you. This is the—
That’s one thing that people are worried about.
This is the pitch that I hear. Do you think that, specifically, LLM technology can live up to the burden of the expectations that the industry is putting on it? Because I feel like whether or not you think that is true kind of shapes how you might want to regulate it, and that’s what most people are experiencing now and what most people are worried about now.
I talk to a broader group of people who are seeing AI, I think, in different ways. What I’m hearing from you is, I think, a very good reflection of what I’m hearing in the business community. But if you talk to the broader research and technical community, I think you do get a bigger view on it because the implications are just so different in different areas, especially when you move to different data types. I don’t know if it’s going to live up to it. I mean, I think that’s an unknown question, and I think the answer is going to be both a technical answer and a practical one that businesses are sorting out. What are the applications in which the quality of the responses is robust and accurate enough for the work that needs to get done? I think that’s all got to still play out.
I read an interview you did with Steven Levy at Wired, who is wonderful, and you described showing ChatGPT to President Biden, and I believe you generated a Bruce Springsteen soundalike, which is fascinating.
We had to write a Bruce Springsteen song. It was text, but yeah.
Wild all the way around. Incredible scene just to ponder in general. We’re talking just a couple of days after the music industry has sued a bunch of AI companies for training on their work. I’m a former copyright lawyer. I wasn’t any good at it, but I look at this, and I say, “Okay, there’s a legal house of cards that we’ve all built on, where everyone’s assumed they’re going to win the fair use argument the way that Google won the fair use argument 20 years ago, but the industry isn’t the same, the money isn’t the same, the politics aren’t the same, the optics aren’t the same.” Is there a chance that it’s actually copyright that ends up regulating this industry more than any sort of directed top-down policy from you?
I don’t know the answer to that. I talked about the places where AI accelerates harms or risks or things that we’re worried about but that are already illegal. You put your finger on what is my best example of new ground because this is a different use of intellectual property than we’ve had in the past. I mean, right now what’s happening is the courts are starting to sort it out as people bring lawsuits, and I think there’s a lot of sorting out to be done. I’m very interested in how that turns out from the perspective of LLMs and image generators, but I think it has huge implications for all the other things I care about using AI for.
I’ll give you an example. If you want to build biodesign tools that actually are great at generating good drug candidates, the most interesting data that you want in addition to everything you currently have is clinical data. What happens inside of human beings? Well, that data, there’s a lot of it, but it’s all locked up in one pharmaceutical company after another. Each one is really sure that they’ve got the crown jewels.
We’re starting to envision a path toward a future where you can build an AI model that trains across those data sets, but I don’t think we’re going to get there unless we find a way for all parties to come to an agreement about how they would be compensated for having their data trained on. It’s the same core issue that we’re dealing with for LLMs and image generators. I think there’s a lot that the courts are going to have to sort out and that I think businesses are going to have to sort out in terms of what they consider to be fair value.
Does the Biden administration have a position on whether training is fair use?
Because this seems like the hard problem. Apple announced Apple Intelligence a few weeks ago and then sort of in the middle of the presentation said, “We trained on the public web, but now you can block it.” And that seems like, “Well, you took it. What do you want us to do now?” If you can build the models by getting a bunch of pharma companies to pool their data and extract value together from training on that, that makes sense. There’s an exchange there that feels healthy or at least negotiated for.
On the other hand, you have OpenAI, which is the darling of the moment, getting in trouble over and over again for being like, “Yeah, we just took a bunch of stuff. Sorry, Scarlett Johansson.” Is that part of the policy remit for you, or is that, “We’re definitely going to let the court sort that out”?
For sure, we’re watching to see what happens, but I think that is in the courts right now. There are proposals on Capitol Hill. I know people are looking at it, but it’s not sorted at all right now.
It does feel like a lot of tech policy conversations land on speech issues one way or another, or copyright issues one way or another. Is that something that’s on your mind: that as you make policy about investment and research and development in these areas over time, there’s this whole other set of problems around speech and copyright law that the federal government in particular is just not suited to solve?
Yeah, I mean freedom of speech is one of the most fundamental American values. It’s the foundation of so much that matters for our country, for our democracy, for how it works, and so it’s such a serious factor in everything. And before we get to the current generation of AI, of course that was a huge factor in how the social media story unfolded. We’re talking about a lot of things where I think civil society has an important role to play, but I think these topics, in particular, are ones where I think civil society… really, it rests on their shoulders because there are a set of things that are appropriate for the government to do, and then it really is up to the citizens.
The reason I ask is that the social media comparison comes up all the time. I spoke to President Obama when President Biden’s executive order on AI came out, and he made essentially the direct point: “We cannot screw this up the way we did with social media.”
I put it to him, and I’ll put it to you: The First Amendment is sort of in your way. If you tell a computer there are things you don’t want it to make, you have kind of passed a speech regulation one way or the other. You’ve said, “Don’t do deepfakes,” but I want to deepfake President Biden or President Trump during the election season. That’s a hard rule to write. It’s difficult in very real ways to implement that rule in a way that comports with the First Amendment, but we all know we should stop deepfakes. How do you thread that needle?
Well, I think you should go ask Senator Amy Klobuchar, who wrote the legislation on exactly that issue, because there are people who have thought very deeply and sincerely about exactly this issue. We’ve always had limits on First Amendment rights because of the harms that can come from the abuse of the First Amendment, and so I think that will be part of the situation here.
With social media, I think there’s a lot of regret about where things ended up. But again, Congress really does need to act, and there are things that can be done to protect privacy. That’s important for directly protecting privacy, but it is also a path to changing the pace at which bad information travels through our social media environment.
I think there’s been so much focus on generative AI and its potential to create bad or incorrect or misleading information. That’s true. But there wasn’t really much constraining the spread of bad information. And I’ve been thinking a lot about the fact that there’s a different AI. It’s the AI that was behind the algorithmic drive of what ads come to you and what’s next in your feed, which is based on learning more and more and more about you and understanding what will drive engagement. That’s not generative AI. It is not LLMs, but it’s a very powerful force that has been a big factor in the information environment that we were in before chatbots hit the scene.
I want to ask just one or two more questions about AI, and then I want to end on chips, which I think is an equally important aspect of this whole puzzle. President Biden’s AI executive order came out [last fall]. It prescribed a number of things. The one that stood out to me as potentially most interesting in my role as a journalist is a requirement that AI companies would have to share their safety test results and methodologies with the government. Is that happening? Have you seen the results there? Have you seen change? Have you been able to learn anything new?
As I recall, that’s above a particular threshold of compute. Again, so much of the executive order was dealing with the applications, the use of AI. This is the part that was about AI models, the technology itself, and there was a lot of thought about what was appropriate and what made sense and what worked under existing law. The upshot was a requirement to report once a company is training above a particular compute threshold, and I am not aware that we’ve yet hit that threshold. I think we’re sort of just coming into that moment, but the Department of Commerce executes that, and they’ve been putting all the guidelines in place to implement that policy, but we’re still at the beginning of that, as I understand it.
If you were to receive that data, what would you want to learn that would help you shape policy in the future?
The data about who’s training?
Not the data about who’s training. If you were to receive the safety test data from the companies as they train the next generation of models, what information is helpful for you to learn?
Let’s talk about two things. Number one, I think just understanding which companies are pursuing this particular dimension of advancement and capability, more compute, that’s helpful to understand, just to be aware of the potential for big advances, which might carry new risks with them. That’s the role that it plays.
I want to turn to safety because I think this is a really important subject. Everything that we want from AI hinges on the idea that we can count on it, that it’s effective at what it’s supposed to do, that it’s safe, that it’s trustworthy, and that’s very easy to want. It turns out, as you know, to be very hard to actually achieve, but it’s also hard to assess and measure. And as for all the benchmarks that exist for AI models: it’s interesting to hear how they do on standardized tests, but they’re just benchmarks that tell you something. They don’t really tell you that much about what happens when humanity interacts with these AI models, right?
One of the limitations in the way we’re talking about this is that we talk about the technology, but all the interesting things happen when human beings interact with the technology. If you think AI models are complex and opaque, you should try human beings. I think we have to understand the scale of the challenge and the work that the AI Safety Institute here is doing. This is a NIST organization that was started by the executive order. They’re taking exactly the right first steps, which is working with industry and getting everyone to understand what current best practices are for red teaming. That’s exactly where to start.
But I think we also just have to be clear that our current best practices for red teaming are not very good compared to the scale of the challenge. This is actually an area that’s going to require deep research and that’s ongoing in the companies and more and more with federal backing in universities, and I think it’s essential.
Let’s spend a few minutes talking about chips because that is the other piece of the puzzle. The entire tech industry right now is thinking about chips, particularly Nvidia’s chips — where they’re made, where they might be under threat quite literally because they’re made in Taiwan. There’s obviously the geopolitics of China involved there.
There’s a lot of investment from the CHIPS Act to move chip manufacturing back to the United States. A lot of that depends again on the idea that we might have some national champions once again. I think Intel would love to be the beneficiary of all that CHIPS Act funding. They can’t operate at the same process nodes as TSMC right now. How do you think about that R&D? Is that longer range? Is that, “Well, let’s just get some TSMC fabs in Arizona and some other places and catch up”? What’s the plan?
There’s a comprehensive strategy built around the $52 billion that was funded by Congress with President Biden pushing hard to make sure we get semiconductors back at the leading edge in the United States. But I want to step back from that and tell you that this fall is 40 years since I finished my PhD, which was on semiconductor materials, and [when] I came to Washington, my hair was still black. This is really long ago.
I came to Washington on a congressional fellowship, and what I did was write a study on semiconductor R&D for Congress. Back then, the US semiconductor industry was extremely dominant, and at that time, they were worried that these Japanese companies were starting to gain market share. And then a few actions happened. A lot of really good R&D happened. I got to build the first semiconductor office at DARPA, and every time I look at my cell phone, I think about the three or five technologies that I got to help start that are in those chips.
So, a lot of good R&D got done and great things happened over those 40 years, but all the manufacturing at the leading edge eventually moved out of the United States, putting us in this really, really bad situation for our supply chains and for the jobs all those supply chains support. The president likes to talk about the fact that when a pandemic shut down a semiconductor fab in Asia, there were auto workers in Detroit who were getting laid off. So, these are the implications. Then, from a national security perspective, the issues are huge and, I think, very, very obvious. What was shocking to me is that after four decades of admiring this problem, we finally did something about it, and with the president and the Congress pulling together, a really big investment is happening. So, how do we get from here to the point where our vulnerability has been significantly reduced?
Again, you don’t get to have a perfect world, but we can get to a far better future. The investments that have been made include Intel, which is fighting to get back in and drive to the leading edge. It’s also, as you noted, TSMC and Samsung and Micron, all at the leading edge. Three of those are logic. Micron has memory. And Secretary [Gina] Raimondo has just really driven this hard, and we’re on track to have leading-edge manufacturing. Not all leading-edge manufacturing — we don’t need it all in the United States — but a substantial portion here in America. We’ll still be part of global supply chains, but we’re going to reduce that really critical vulnerability.
Is there a part where you say, “We need to fund more bleeding-edge process technology in our universities so that we don’t miss a turn, like Intel missed a turn with EUV”?
Number one, part of the CHIPS Act is a substantial investment, over $10 billion, in R&D. Number two, I spent a lot of my career on semiconductor R&D — that’s not where we fell down. It’s about turning that R&D into US manufacturing capability. Once you lose the leading edge, then the next generation and the generation after that are going to get driven wherever your leading edge is. So, R&D eventually moves. I think it was a well-constructed package in CHIPS that said we have to get manufacturing capacity at the leading edge back, and then we build the R&D to make sure that we also win in the future and are able to move out beyond that.
I always think about the fact that the entire chips supply chain is utterly dependent on ASML, the Dutch company that makes the lithography machines. Do you have a plan to make that more competitive?
That’s one of the hardest challenges, and I think we’re very fortunate that the company is a European company with operations around the world, and that both the company and the country are good partners in the ecosystem. And it’s a very hard challenge, as you well know, because the cost and the complexity of those systems has just… It’s actually mind-boggling when you see what it takes to make this thing that ends up being a square centimeter; the complexity of what goes on behind that is astonishing.
We’ve talked a lot about things that are happening now but started a long time ago. The R&D investment in AI started a long time ago; the explosion is now. The investment in chips started a long time ago; that’s your career. The explosion and the focus are now. As you think about your office and the policy recommendations you’re making, what are the small things that are happening now that might be big in the future?
I think about that all the time. That’s one of my favorite questions. Twenty or 30 years ago, the answer to that was biology starting to emerge. Now I think that’s a full-blown set of capabilities. Not just cool science but powerful capabilities: for pharmaceuticals, of course, but also for bioprocessing and biomanufacturing to make sustainable pathways for things that we currently get through petrochemicals. I think that’s a very fertile area. It’s an area that we put a lot of focus on. Now, if you ask me what’s happening in research that could have huge implications, I would tell you it’s about what’s changing in the social sciences. We tend to talk about the progression of the information revolution in terms of computing and communications and the technology.
But as that technology has gotten so intimate with us, it is giving us ways to understand individual and societal behaviors and incentives and how people form opinions in ways that we’ve never had before. If you combine the classic insights of social science research with data and AI, I think it’s starting to be very, very powerful, which, as you know from everything I’ve told you, means it’s going to come with bright and dark sides. I think that’s one of the interesting and important frontiers.
Well, that’s a great place to end it, Director Prabhakar. Thank you so much for joining Decoder. This was a pleasure.
Great to talk with you. Thanks for having me.