Today, I’m talking with Senator Brian Schatz of Hawaii. He’s been in the Senate for a little more than a decade now, and he’s seen a lot in that time. 

We joke that Decoder is ultimately a show about org charts, but there’s a lot of truth to it. A CEO can upend their whole company’s org chart if they think it will improve their business, but a member of the federal government — especially a senator — can’t unilaterally ditch the structures that have been woven into the fabric of our country for more than 200 years. That can lead to… some inefficiencies, let’s say, and you’ll hear Schatz talk about the four separate offices he has to balance against each other and the concessions he has to make to work within that structure.

As you’d expect, we also talked a lot about what Congress is — and mostly isn’t — doing to regulate tech, despite tech regulation being high on most folks’ agendas. One of the few points that Republicans and Democrats seem to agree on these days is that social media is bad for kids, especially in the wake of the Frances Haugen “Facebook Papers” leak a couple of years back.

To that end, Schatz is one of the co-sponsors of a controversial piece of legislation called the Kids Online Safety Act, or KOSA, which aims to protect kids from algorithmically amplified harms. The stated goal of the legislation is to reduce anxiety, depression, and related issues, like eating disorders, in teenagers and kids. But there are a lot of bad-faith leaders out there who want to use a bill like this to prevent kids — or adults — from accessing content about their own lives, like resources for LGBTQ+ support or reproductive health.  

Schatz is also the primary sponsor of a second child safety bill, the Protecting Kids on Social Media Act, that would require social media platforms to verify the age of account holders and completely bar children under the age of 13 from signing up. This bill also requires parental consent for teens to create accounts and prohibits platforms from subjecting any minor to an algorithmic recommendation system.

Both pieces of legislation ultimately arrive at the same conflict: Any bill regulating speech, or access to speech, is going to run up against the US Constitution. Sen. Schatz and I talked a lot about that tension between good intentions and real outcomes. We also talked a lot about the big stumbling block for passing any laws about content moderation, which is, of course, the First Amendment.

As you are undoubtedly aware, the First Amendment literally says, among other things, “Congress shall make no law abridging the freedom of speech.” A law like KOSA does exactly that: it abridges the freedom of speech online. The hope is that the argument that these regulations protect kids will overcome the inevitable First Amendment challenges. It’s important stuff, and we got pretty deeply into it. And as a bonus, you’ll hear the senator describe how he hypothetically bought beer in high school while underage.

One note before we start: you’ll hear us talk a lot about something called strict scrutiny, which is how courts evaluate First Amendment cases. Strict scrutiny is complex, but it fundamentally means the government has to show two things in order for a law limiting speech to stand: the law must serve a compelling government interest — in the case of KOSA, the government hopes that protecting children qualifies as one — and the law must also be narrowly tailored to that goal, using the least restrictive means possible.

It is honestly anyone’s guess whether a law like KOSA would overcome strict scrutiny in today’s courts, but you’ll hear Schatz explain why he thinks it would.

Okay: Senator Brian Schatz. Here we go.

This transcript has been lightly edited for length and clarity.

Senator Brian Schatz, you are a US Senator from Hawaii. You are a Democrat. Welcome to Decoder.

I am very excited to talk to you. There is a lot going on in this country, in this world. It’s a long list of questions here, but when I talk to politicians, I often want to just talk about how it works because I feel like that is very hidden away. You’re a senator, you work on a slightly longer timeline. You have six-year terms. It’s an election year. There’s a lot of pressure on the Senate, there’s a lot of pressure on the government. Our government’s not so functional right now. How are you thinking about making decisions in this time, in this moment in American history?

Well, I think the first thing is that there are things that the country has to accomplish, and the first among them is an emergency supplemental funding bill for Ukraine, and that is sucking all of the oxygen out of the air. It’s leaving very little room for some of the mundane but important things that we need to do in terms of governance, especially just passing a regular appropriations bill. You’ve got probably more talent in this legislature than in most local legislatures but less functionality. Even local legislatures with regular citizens, without fancy degrees, who are on paper less impressive than my colleagues, still pass a regular appropriations bill. They don’t miss their deadlines. They do something in every category, whether it’s agriculture or transportation or education or health or pensions, and we seem to be at loggerheads over a number of things.

I’m trying to help get this supplemental passed, but underneath that, you have to work your own priorities, and to me, what you want to do is divide that into two categories. One is, “What’s public policy where you can make a difference?” and the other is, “What does your state need?” Of course, in Hawaii, I’m focused not exclusively but primarily on Maui and its recovery, and on the national level, I continue to think in terms of tech policy and climate policy, so that’s kind of how I divide my day, and I think the biggest challenge on a day-to-day basis is to try to make sure that your priorities are reflected by your schedule because you can be wandering around going from thing to thing. All of them are important, but they may not reflect how you actually view what’s the most important thing to accomplish in a short period of time or through the year.

I like the idea that your priorities are reflected in your schedule. You’re picking and choosing what to do. Obviously, your constituents can see your schedule. They can see what you are talking about and working on. Underneath that, you need to have some sort of team to support you and make sure that your priorities are executed upon or you’re writing legislative language or you’re reading bills. How have you structured your office to make that work?

I have basically four separate offices. I’ve got the Hawaii office, and I’ve got the DC office, what they call a personal office. In the DC personal office, I have a bunch of experts, and we work in pods. We have a climate team, a tech team, a transportation team, a housing team, and so on, and then, I’m also the chair of a committee called T-HUD — Transportation and HUD — and we’re in charge of, essentially, the federal budget as it relates to transportation and housing and urban development. I’m also chair of the Senate Committee on Indian Affairs.

I was actually a nonprofit executive, and I had a nonprofit with a couple hundred employees, and so I think a lot about how organizations are run and ought to run. The challenge for me is that, I think if you came in from the private sector, you’d say, “Wait, why are these four offices sort of semiautonomous?” And the answer is, “Well, that’s the way the Senate is structured.” We try to make sure that it all rhymes and is syncopated, but if this were a private-sector enterprise, I would simply align them and merge them. I’m not really able to do that. Trying to make sure it makes sense is one of the aspects of making sure that you’re as effective as possible because, now, I don’t have seven staff [members], I have probably between 60 and 70 people, and we’ve got to make sure we’re all, as we say in Hawaii, rowing in the same direction.

Yeah. Talk to me about the idea that the Senate is imposing a structure on you. This is very interesting to me because what you’re describing is divisions, right? In any company, you have a division on tech policy, a division on climate policy, and you’re saying that’s kind of imposed on you by the Senate.

Yeah. I’m the chair of the Transportation and HUD Committee, and I have staff for that, but they belong to the Appropriations Committee. They report to me, but they belong to another committee. The Senate Committee on Indian Affairs is its own thing established by the Senate, and I’m at the helm over there, but they also don’t belong to my personal office. Just administratively, they’re in different places. They’re paid out of different pots, and that’s not something any individual senator has the ability to change, so what you have to do is roll with it and make it work. A buddy of mine describes every day here as moving an amoeba across a bridge. You just have to try to make it work, and politics is… You can sort of plan and plan and plan, but in the end, it becomes a question of improvising and finding the opportunity to get a bill across the finish line.

Two things I’ve observed about the Senate are that even the small things are enormous, and so that’s really fun. You can pass bills that don’t make the cable shows or a podcast but make a real difference in people’s lives. The other thing is that I think not everybody stays focused on what their home state needs. It’s a heady place over here, and you can kind of get sucked into a universe where you’re at the French Embassy, and you’re trying to do Meet the Press in the afternoon, and I think it’s foundational to the job to serve your constituents. I’ve got constituents 5,000 miles away from where I work, and we’re the most isolated populated place on the planet. We really do need federal help, and that’s job one for me.

I want to talk about how Congress is working and not working right now. A lot of it feels external, right? The nature of our politics is inherently polarized. Maybe social media is making that worse or maybe it’s not. I think people are listening to the show, saying, “Is there any part of it that’s inherent to the structure here, where you have staffers who are not your staffers, where people can’t innovate on how Congress is actually built in order to be more effective?”

I don’t know that it’s a question of that kind of organizational structure, but I think it is fair to say that the Constitution was intentionally vague in certain aspects. As Secretary Madeleine Albright, may she rest in peace, talked about in foreign policy, the Constitution was an invitation to struggle between the Article I and the Article II branch. It was intentionally vague [about] who had command over foreign policy. Obviously, the president is the commander in chief, but who gets to set foreign policy was left unsaid because they wanted that kind of tension. I think if you combine the constitutional design, which is to say that it was intentionally made difficult to pass a federal law, with the current situation and its polarization, it just makes it increasingly difficult.

The one thing I’d say that I think maybe your listeners may not be aware of is there’s a lot of stuff that happens underneath that makes a real impact on people’s lives. I’m thinking of [Senator] Dick Durbin and Senator [Roy] Blunt, who’s now retired, [who] quietly decided to double health research funding over a period of time, and they just kind of quietly did it in 15 percent chunks over a seven-year period, and they got it done. Had they announced it and tried to get on Meet the Press, it might’ve become polarized. The bill that I did, and actually I keep doing it because there are more iterations to it, enabled the telehealth transformation through changes in federal law over the last 11 years, and that has been entirely bipartisan and not at all controversial. I just do want people to understand that, sub rosa, and maybe less interestingly, there are a lot of people doing a lot of important work.

I did a little experiment one time back when Facebook was still arguably the most important means to get the message out. Lamar Alexander, who’s a Republican who retired, and I were trying to get more money for ARPA-E, this Department of Energy innovation program, and we went on the floor and had what they call a colloquy. It’s basically like a light exchange of views — but not necessarily too contentious — and talked about increasing funding for this very, very important thing, and then we did it. Then I put it on the internet, and it was like the least clicky thing that I’ve ever put on the internet. All I could have said instead was, like, “Trump’s terrible, resist,” and I would have gotten 10,000 people saying, “Right on, fight the good fight.” The thing you said about the internet, I think it’s more true than anyone wants to admit, is that our incentives are now aligned toward virality. And me just kind of doing my job on a bipartisan basis and improving the people’s lives in some sort of incremental fashion is just not that interesting for Instagram and Twitter anymore.

This leads me to a criticism that I think you have made most loudly about your Republican colleagues, most loudly of anyone actually at this point, that bad-faith arguments designed for virality seem to be driving the Republican Party more than ever. I think we can probably point to some specific individuals where that is the most true and some individuals you’ve mentioned maybe that’s less true, but that seems to be the animus of the party. There are certainly members of the Democratic Party who are driven by virality in the same way. How do you stop it? Can you stop it?

I don’t know. I mean, if I had the answer, I’d be implementing it. I think it is extremely challenging, and by the way, it’s not like people wake up and kind of wring their hands and say, “How can I be unserious and just get on the internet?” What happens instead is that you—

Oh, I think you have some colleagues who wake up and think that.

That’s fair. There are a lot of people who wake up and think, “How can I make sure I get a social media post that performs well, where the metrics are great?” Fair enough, but I think there are also a lot of people who may go to a hearing, may say a thing and then it goes viral, and then you’re accidentally on that slippery slope where the incentives start to align around you behaving in a certain fashion. You have to just really remain conscious of “what’s the job?” which is why I go back to: find out what your home state needs and continue to fight for that. Sometimes I think for some members that sounds less impressive, less historically important, right? Just less attractive. You came to the big show to do big national and international things, and some people forget to think about the fact that their harbor needs repair or their electricity grid needs stabilization or, in the case of Maui, that we need federal funding just to put roofs over people’s heads. That is not going to get anybody marked down as a historical figure, but who cares? The job is to serve your constituents, and you got to do that first.

I have a lot of executives and founders on the show. They make decisions. The decisions are acted on very quickly. They succeed or fail. The market gives them feedback. Off to the next decision, and that feedback loop is very important to all of them. Those folks look at government, and they say, “Okay, we’re going to double funding for health research in 15 percent chunks over seven years, and no one will even notice it until the very end when Brian Schatz comes on Decoder and says it out loud for the first time.” That is a disaster. We can’t work that way. We’re not even going to pay attention to it. We’re just going to write it off and say, “Government’s too slow.” By the time it gets around to us, the technology will have moved on, and their ideas will be obsolete. How do you respond to that? How do you build a better bridge to those folks? That is a pervasive sentiment among almost everybody I know in tech.

I guess what I would say is, someone once said democracy is the worst system except for all the others, and I think that’s the way to look at this, is that it is intentionally clunky. It is supposed to be difficult, especially to enact a federal law. You don’t want federal law to change every two or four years in massive ways. You actually do want it to be difficult to enact a federal law. Look, their complaint is with the Constitution, right? Their complaint is with the basic configuration of our government and the premise of checks and balances, and of course, I get it. I wasn’t a CEO of a big publicly traded company or some sort of unicorn, but I ran a shop with a couple hundred people, and I understand what it means to be in charge of an operation, and I understand how satisfying it is to be able to assemble your smart people and make a call and then start to execute.

This is why so many governors, and otherwise successful CEOs or businesspeople, hate being in Congress. Anyone who’s kind of a top executive has to modify their theory of change. I just respect the Constitution enough to say that the founders knew that you didn’t want a CEO of a country, that we didn’t want anyone with monarchical powers. Yes, is it maddening? When my person is in the presidency, do I want there to be enhanced executive authority, and when my adversary is in power, do I want them to have less? Sure, but that’s just the nature of the beast.

The other thing I’d say is that I’ve interacted with a bunch of CEOs obviously, and the tech community in particular has a tendency to think that they can solve for X. And they just look at us and go, “What a bunch of idiots. If I were there, I would simply do—” And it’s like, you don’t think Mark Warner is smart? You don’t think Chuck Schumer is smart? You don’t think that Hakeem Jeffries is smart? There are a lot of very bright people who are capable of being chief executives in that fashion, but we are in the American system of government, and I think in a lot of ways, what they’re complaining about is that not everything works like their world works.

I don’t go into Silicon Valley and say, “Why aren’t there more checks on your power?” because I know that’s not how it operates, and they don’t have constitutional constraints. Forgive me if I sound mildly irritated at that complaint, because I feel like it’s quite shallow. It’s like this grandiose feeling that because they’ve mastered their area, that they can now just pivot to climate or democracy or civil rights and solve for X because the rest of us haven’t been thinking deeply enough about it or don’t have the courage to just act. Look, we’ve got courage. We’ve got a bunch of bright people. The Constitution makes it hard to enact legislation, and that’s on purpose.

Do you think that is part of the reason for the rise in authoritarian thinking in this country?

Maybe we should just have a king who makes some decisions. Elon [Musk], just fix it.

Yes. I think people are not sure exactly what they want, but they are sure that they don’t want inaction. As for the rising frustration, I don’t think it’s necessarily economic agitation, but the correct observation that our government is stultified makes people look for alternatives, and people would like some action. The attractiveness of a strongman, I think, is not to be underestimated.

Alright. Last question on this topic, and then I want to talk about our system of government and how we’re making policy right now — particularly tech policy — up against Europe, which is a much more byzantine system of government but seems to be more effective at actually passing regulations right now. But the Decoder question, this is a big one. We’ve described the system that you operate in, the pressures you’re under, the things you can and can’t do. How do you make decisions? What is your framework for making decisions?

I start with my core values. I start with what I want to accomplish for people, and then I try to do an overlay on what I can accomplish for people. And then I try to divide my daily actions into sort of long lead time items that may take 10 or 20 or 30 years. So one of the things that I’m working on over time is to try to change the way especially the Democratic Party views housing, but that’s a political challenge, and we’re going to have to change public opinion. That is not something that I can just pass a bill and just claim success and move on to the next thing. But I also want to, for instance, enact telehealth legislation to extend these authorities that people have been using through the covid pandemic and continue the telehealth revolution.

That’s an “I’ve got to get it done by the end of the year” thing. So I try to make sure that whatever I’m doing is rooted in my core values, but then I divide my objectives based on what can be accomplished right away and what’s going to take a little longer. The challenge, I think — just to be completely frank about it — is that sometimes I’m not even sure which thing is going to end up being ripe. Take when we raised the tobacco age to 21, which I’m quite proud of — my father was a public health doctor, and he would be very proud of it, too. When we sat down to do it, because of the influence that big tobacco had in Congress, I remember telling my staff that this was a 10- or 20-year project, and then we got it done in 16 months.

Other things that I thought I could get done in one Congress, I’m in my 11th year and still working on. So part of what has to happen is you can have all the spreadsheets in the world and all the planning sessions and staff retreats in the world, but it is more about running broken plays and being opportunistic about the art of the possible in the legislative arena than I think anybody in the private sector would be comfortable with, because it is almost always one of those “everybody’s got a plan until they get punched in the mouth” situations.

Let’s talk about Europe for a second. You mentioned people want change. You mentioned that the United States in particular seems stultified. From where I sit covering the tech industry, it is impossible not to notice that Europe, the EU — which is a vastly more complex system of government, almost impossible to describe to the average person — is passing tech regulations left and right.

Just this week, the Digital Markets Act led to Apple opening up the App Store on the iPhone and allowing alternative web browsers. Apple is going into this kicking and screaming. They do not like this, but it’s happening to them in order to access the European market. The European Union has mandated USB-C on iPhones. We can just go down the list of things that the European Union has done to regulate tech, to make the products better or worse, but to actually regulate the products. Why are they able to do it and we are not?

I’ll give you a couple of reasons, and on this one, I don’t want you to think I’m defending Congress’ inaction on tech policy. I do think it’s indefensible. It’s a couple of things. It’s the basic sort of dysfunction and polarization; that’s number one. Number two is, for whatever set of reasons, on tech policy there’s a lot of desire among members on both sides of the aisle to be the mayor of the issue rather than to try to get to some sort of solution. So I think one of the things that Leader Schumer has done that’s been extraordinarily successful is establishing these roundtables on AI with bipartisan senators, where we’re doing deep dives and not allowing it to become a partisan issue.

But I do think the incentives right now are, “I’m the one that’s fighting the tech industry. No, I’m the one that’s fighting the tech industry.” And people’s kind of political itch gets scratched without ever having enacted anything. And then there’s a little bit of the perfect being the enemy of the good: people want to solve everything before they solve individual problems. USB-C is a really good example. We don’t do the small things that could be large.

I have an AI bill, which doesn’t solve all of the problems that could be presented by AI, but it seems to be pretty straightforward on a bipartisan basis: if something is generated using generative AI, people should be able to know that with a watermark. And I can’t find anybody who disagrees with that. But the challenge is, how do you pass just that without everybody saying, “Well, as long as there’s a moving vehicle, how about my more comprehensive Rube Goldberg machine to make sure we protect against everything?” — fraud or catastrophic risk or national security risk. And then by the time you’re done having this conversation, my thing sounds small, and it’s not a big enough deal, and we can’t pass it.

I do think we need to understand that this is a generational challenge, and we need to start doing some of the small things that are straightforward rather than waiting until we have a perfect solution. Sorry, one final thing, which is maybe not the biggest deal but is probably worth mentioning: there was a concerted effort on the part of the tech platforms that did not want any regulation to, as Senator Chris Murphy calls it, tech-shame members. They did it really effectively. When Mark Zuckerberg came to testify, a couple of members betrayed that they were not digital natives, and then it became a good way to ridicule the members.

I still remember, I was super jet-lagged, and I asked Zuckerberg a pretty legitimate question, which is, “Within WhatsApp, as you are transmitting messages, is any of that data, even disaggregated, even anonymized, feeding into your ad algorithm?” But because I was tired, I said, “When you email on WhatsApp,” and then Zuckerberg was able to spend all of his time correcting me that it’s an encrypted messaging app and not email. I mean, I’m 51, so I’m not trying to pretend I know everything, but I knew that. I just misspoke. And the reason that sticks in my craw is that I never got the answer in the hearing. I think it’s a very legitimate question. It’s a question people still have. And then they just did these mashups of, “Look at all these idiot senators, and because they’re not experts or engineers or coders, they have no right to regulate.”

Well, listen, I don’t know how an F-35 operates, but I’m on the Defense Appropriations Committee. I’m the chairman of the Transportation and HUD Committee, and I’ve never been an affordable housing developer or run a nonprofit that provides resources for individuals with HIV/AIDS, and yet, I’m in charge of both oversight and funding for those things. So this idea that you have to have deep, deep expertise in order to be able to weigh in on public policy has become this kind of shield and sword that the tech companies have used very effectively against members. And members don’t like to get embarrassed in viral moments. And so everyone’s very cautious, and it’s not the main thing, but it’s emblematic of why we can’t even get serious about it.

There’s going to be a hearing this week about kids’ safety in Congress. The CEO of X, Linda Yaccarino, will be there. The CEO of TikTok will be there. Some other folks will be there. I hear you say that members are cautious. I know that hearing is going to be a whole bunch of grandstanding chaos. Square that for me because that doesn’t make any sense to me.

I think they are cautious about legislating in a serious way, and they are incautious about trying to create a viral moment where they look tough with the CEO of X or Meta or wherever. So you’re right that it kind of feels a little bit in tension. But back to what we were talking about earlier, the incentives are aligned for a member to just have a viral moment, throw a bill in the hopper, and be done with it. And the point here is that we are lawmakers, not law adjusters. Jon Stewart should go viral. Trevor Noah should go viral. Kids on TikTok should go viral. Members should enact laws.

Is there a sense right now in Congress that Europe is regulating our tech industry for us? Because when I talk to tech executives, they are beside themselves, saying, “How is it that we have an American tech industry that is now beholden to what a bunch of Europeans want, and our government is neither protecting us (which you could agree or disagree with) nor coming up with any of these ideas on its own?” Because I think those companies would prefer that their regulatory constraints come from us and are exported around the world instead of exporting their products to face regulatory constraints from another system of government.

I think they might think that at the CEO level, but at the government relations level and the sort of chief legal officer level, they are still set up to, basically, hire lobby shops on the Democratic and Republican side, and the government relations people report to the, say, senior vice president for government relations, who reports to the board, and they go, “Good news, we killed it again.” And so they might say that they want to be regulated, that they want this to be comprehensive, and that they don’t want it to be either state by state or a patchwork of international laws. And yet, they are the very ones in the Congress, through their lobbyists, trying to kill everything.

And look, I had a bill that established a duty of care, right? The basic idea is, I do not believe the notice and consent regime — the “Hey, here’s how we’re going to use your data,” and then you click “I agree” — really does much for consumers at all. And I think requiring more in the notice and consent space is stipulating to something that’s not true, which is that people actually read the disclaimer. They just click “I agree” and move on. So my view is the companies are going to collect the data. They should have the same obligation that an accountant or a lawyer or a tax preparer has, which is, “Yeah, we’re going to use your data, but we promise not to use it against you.”

A lot of the big companies were perfectly comfortable with that, but the chief legal officers and the government relations folks were super terrified about what that would mean, because you’re empowering an expert agency (in this case, it would’ve been the Federal Trade Commission) to interpret what it means to have a duty of care, and they don’t like the lack of clarity, so they killed it. And I shouldn’t say they killed it. We’re still in charge, but they influenced members to be nervous enough that it was not able to be enacted. So I’d push back a little bit and ask which CEO you’re talking about, but I do think they think they want federal regulation to make sure that the United States is the leader in tech policy. The way they manifest their government relations shop in the Congress, though, is risk-averse, Whack-a-Mole, try to kill everything.

Do you feel the pressure that the European Union is now the leader in tech policy and the United States is not?

Yes. But on the other hand, some of the things that they’re doing make a lot of sense to me. In the same way that, for instance, the California privacy law is maybe a suboptimal way to regulate some of the actions of these tech companies — we should have a federal law and a federal standard, both because of interstate commerce issues and just to make the thing work, mechanically speaking — it’s still better than nothing. If we’re stuck in the mud, then Europe should act. And if we’re stuck in the mud, then California should act.

And that may be the only thing that precipitates federal action because these tech companies will have to actually call the question: “You really don’t like this patchwork? Well, there is a way to solve this.” And one of the things that I’ve said is whether it’s privacy policy or children’s safety or social media regulation, Section 230, I’m open to a federal law that preempts state action, but only if it’s sufficiently strong. And what they’ve been wanting is a federal law that preempts state action, which basically enshrines the status quo in place.

Let’s talk about some of these specific issues. Let’s start with AI because it is the thing. It is the elephant in the room. Europe hasn’t figured it out. We haven’t figured it out. It is where we might be able to take more of a leadership position. The White House obviously released its executive order. There have been these roundtables in Congress. Let me start with the hardest problem here. Literally this week, as we’re speaking, there’s deepfake porn of Taylor Swift on the X platform. It spread there from Telegram. X has whatever trust and safety disaster that it has. They didn’t really stop it. It went viral. It was trending at one point. The Swifties are trying to stop it. It’s chaos.

Everyone I know involved in trust and safety looked at generative AI years ago and said, “This is going to happen. This is going to happen to women on the internet. People are going to make deepfake porn of women on the internet,” and we haven’t done anything about it. And now we’re at this moment, and maybe this moment will catalyze some change. Can that be stopped? Do you have a view on that? How would you stop that? How would you pass a law to stop that?

So I don’t know, mechanically and sort of at the technical level, how plausible it is to stop all of this, but some of these things aren’t even illegal. You’ve got a bunch of generative AI products that could very easily be required, as a matter of federal law, to have embedded in the code either a watermark, if it’s a visual image, or some other audio or visual disclosure to clarify that something is not real. Now, you get into a more thorny public policy question, which is, “Well, doesn’t that person still have a right not to have a fake porn image posted of them, even if in the bottom-right corner it says ‘Made by generative AI’?”

So I think these are tough questions, but it goes back to what I was saying: can we please start with the not-most-complicated, not edge-case questions? And the stuff that people can plainly see is preposterous and offensive and dangerous to people. To me, a very scary example is the ability to commit financial fraud and have our voice, yours or mine, call our mother or father and say, “Look, I need to transfer 500 bucks to your bank account so you can pay for your groceries. Give me your routing number.” And that, I think, is so scalable and so terrifying, we need to act immediately on that. And it is a little shocking the extent to which some of these things are not already illegal. The last example, of course, would be illegal in the first instance because it is fraud. But equipping law enforcement with those tools… I guess what I think is we should start with the easy stuff first, the obvious stuff first, and the most urgent stuff first.

Because I think part of what’s fascinating about AI is that it allows everybody to stroke their chin and imagine a future that’s different, and what’s the future of creativity, and what does it mean to be a human, and how much work provides enough meaning for people to not lose a sense of self? And those are all angels on the head of a pin. There are a bunch of things that we need to address now and not overcomplicate the questions.

Well, those are two things, right? One, you can deepfake my voice and call my parents. That seems bad, just straightforwardly bad. Two is, women all over the internet are having essentially nonconsensual revenge porn made of them. You should be able to stop that. I don’t know what mechanisms to stop that. Historically, the mechanism has been like copyright law. I’m an old copyright lawyer. I wasn’t any good at it, which is probably why I can’t think of a way to make copyright law solve this problem. But it seems to be hitting a wall. How would you solve it?

I don’t know. I’m not even a lawyer, but it is something that I think we need to get at immediately. And there are a couple of things. Again, in the AI space, Schumer has been really good about this, sort of dividing out the stuff that we ought to do immediately, that is urgent, including on the campaign side. There are a bunch of things that we can’t wait on, including giving the government authority over catastrophic national security risk. I mean, that’s something that cannot wait. And we can wrestle with the tougher stuff over a longer period of time.

But I think that’s the right way to look at this: impatient about the urgent tragedies that are upon us but not overreacting and trying to create some Rube Goldberg machine that stops this tech from flourishing. Because I think the challenge with that is that we are not the only place where this technology is emerging. And if we don’t make public policy, then I do think some of these companies are just going to go elsewhere.

You mentioned it’s an election year. You mentioned national security. This is going to be a very contentious election between [Donald] Trump and [Joe] Biden. I feel comfortable saying it will be Trump and Biden. They both have pretty recognizable voices. Both sides are pretty motivated to make the other guy look very bad, look very dumb. Is there any pressure to make this stuff happen before this campaign season kicks into what will be the most contentious, silliest campaign season that I think I’ve ever faced?

Yeah. And I’m of two minds here. First of all, the answer is yes, and on a bipartisan basis, we’re doing reasonably well on keeping this one to be… And I think part of it is, there’s a little self-motivation among members who have to run that they don’t want to have to contend with a bunch of lies about them and their voice being faked or all the rest of it. I think there’s an opportunity to do something on this pretty darn quick. The other thing I would say, though, is that lying in politics is not a new phenomenon, and faking things in politics is not a new phenomenon.

The ability to scale it via robocalls or social media posts or whatever is obviously a kind of category change. But I think we should always be wary of the high likelihood that candidates and then noncandidate committees lie about their opponents. And so I’m of two minds. I’m not sure whether this misinformation election cycle is going to be orders of magnitude worse, or is this just going to feel like the same thing and there’s a new tool in the toolkit for the dark arts?

Yeah, I would say that one of the things that struck me about 2016 and 2020 was there were all these tools to make new kinds of believable lies, but then actually just lying, just dead-ahead confidently lying, seemed to be pretty effective for everyone. But this does seem different. You can make a video of Donald Trump saying something he didn’t say. You can make a video of Joe Biden saying something he didn’t say. It looks real enough on a social media feed. People see it, they move on, they’re not going to dive into it. That is a new kind of danger. I don’t think that’s just robocalls at scale or fake voices. That is messing with people’s belief about what actually happened or didn’t happen from the source.

Yeah. And it’s getting into their brain stem. I agree with that. I think that this is… But I just don’t know in the end whether this moves votes at the margins because it’s just, there will be a point at which the zone is so flooded that people are smart and can sort of recognize that this is a lot of garbage, and I’m going to just tune out. I came from the state legislature in Hawaii, and the main way to communicate with voters in a State House race is direct mail, still hard copy, card stock, glossy, in the mailbox — still, in 2024.

There’s a lot of lying in the mailbox, too. And it is also irrefutable in the sense that there’s nothing you can do once the thing is in the mailbox with the lie. And the Constitution provides campaigns wide latitude to engage in bullshittery. So I honestly am not entirely sure whether I think this cycle is likely to be orders of magnitude more nuts or whether we’re going to look at it like, “Yeah, that was the year where some of these operatives tried a new way of being shitty, but it didn’t really move the needle.”

Let me ask a very reductive version of this question. It kind of ties this all together. Should it be illegal at the federal level to take someone’s image and make it do something that person didn’t do?

Yes, I do believe that. Now, I’m not a constitutional lawyer, so I do think there are some challenges there, but I think people have a right to their own thoughts, and people have a right to their own image. And we’re going to have to clarify that in the context of both the First Amendment and Section 230 and all the rest of it. But there seems to be, to me, some space between the parameters of the First Amendment and Section 230, which gives platforms indemnification for things that are posted and outright lies. Not just, “Hey, Brian Schatz is shitty at his job,” which I do believe would be a lie. But also, “Hey, Brian Schatz said this thing.” And it turns out I didn’t say this thing.

Well, that’s also a lie. I’m not sure that’s definitely supposed to be illegal. And then the new threshold would be like, “Here is video of Brian Schatz saying something.” And it’s just not the case that I said it because you just used AI to make it seem like I said it. That seems to me to be something we can make public policy about without bumping up either against the parameters of existing statutory law or the Constitution itself.

Is there any action on that front? Have you seen language to that effect? Are people working on it?

Yes, people are working on all of this. And listen, I’m very appreciative of and loyal to Chuck Schumer, so I don’t want to make it sound too much like I’m schlepping for him. But I am super impressed at this process because everything I described to you about the kind of dysfunction of Congress in the tech space in particular — it’s sort of the overlay of it’s dysfunctional in the first place, it’s also hard to pass a law in the first place, but then we have these particular dynamics that make it hard to pass tech legislation — he’s been able to avoid by setting up these long-form, deep-dive forums. And you’ve got members… Members scarcely sit for 17 minutes, let alone two or three hours. And we are plopping ourselves down with experts from the nonprofit sector, certainly the private sector, from labor, from academia, and we’re trying to sink our teeth into this, chunking it down by category: catastrophic risk, national security risk, fraud, the future of work, IP, elections.

We’re chunking it down, and then the project on a bipartisan basis, again, is to kind of divide this temporally. What’s ready to move, and what must move in this calendar year? And then what’s going to take a little more work? And arguably, not just because it will take time but because it ought to take time. In some instances, we probably should see a few more cards before we lay down a statutory framework as if we’re already sure of what the law should say.

Let me take that framework and apply it to the other bill that you’re working on in the tech policy space: KOSA, the Kids Online Safety Act. It feels like you’ve seen all the cards you need to see there, right? You’ve got a bill, it says kids under 13 should just not be allowed to use social media. Up to 17, they need parental consent. I’m just going to read the little summary of the bill here. “Upon covered providers, there’s a duty of care to keep minors away from content that would be considered to cause anxiety, depression, eating disorders, substance use disorders, suicidal behaviors, and other harms like sexual exploitation.” And there’s data that says social media is, particularly with teen girls, causing a massive crisis of depression. Have you seen all the cards you need to see there? You’re like, we just got to make some regulation now?

100 percent. And I’m a co-sponsor of the Kids Online Safety Act, but that’s led by others. That’s Dick Blumenthal and others. I’m the primary co-sponsor of the Protecting Kids on Social Media Act. Now, I don’t want to get into too many of the differences, but I’ll just say, my bill does just three things. Under 13, you can’t be online. [Ages] 13 through 17, you need parental consent. And importantly in my view, 13 through 17, no algorithmic boosting within that session. Because I think it is fair to say that these algorithms are sort of overpowering the brain stem of teenagers. And I am absolutely prepared to say that whatever the potential First Amendment equities may be, that both legally and morally, I’m ready to say that the downside of social media for kids 12 and under way outweighs the upside.

Sure, they can find a recipe. Sure, they can find friendship. Sure, they can post interesting things and find joy in all of that. But one of the things that’s being asked of us is to kind of get longitudinal data to establish the direct connection between kids using social media and all of this increased anxiety and depression and polarization and suicidal ideation and all the rest of it. And, by the way, decreased exercise and increased sedentary activity, all that. What they mean by longitudinal data is we have to wait one or two more generations while these kids’ brains are being melted. So I have seen enough. I am confident that the net impact of getting kids 12 and under off of social media would be a positive. And interestingly, I can’t find a single person, a regular human, who disagrees with this. Even kids are like, “Yeah, please just take this damn thing away from me.” The algorithmic boosting thing is a little more challenging because some people like their algorithms, and I understand that.

But a lot of people are not in the position… the kind of Meta position is like, “We want to give a bunch of tools to users to modify the algorithm.” Well, that’s fine, but 14-year-olds don’t modify their algorithm, 14-year-olds don’t even understand what is happening to them. You know this as well as I do, but I think it is absolutely worth saying that social media companies depend on eyeballs for revenue and for their quarterly earnings calls. And I don’t think they started out this way, but they figured out that, unfortunately, the way to get eyeballs, the way to get people to linger on things, the way to get people to click on things, is to upset them.

And so now you have… The syllogism goes that these companies arguably have a fiduciary obligation to systematically upset tens of millions of children. And I don’t think that’s okay. And I don’t think they have First Amendment protection there; an algorithm doesn’t have a First Amendment right. And I also think a 12-year-old has some First Amendment rights, but they’re not the same as an adult’s. And also, your right to go on the internet and use Meta is not enshrined in the United States Constitution.

So let me ask you a few follow-up questions here. That’s a lot of ideas. And I’ve been talking to people about how we might regulate content on the internet for a long, long time. For most of that time, what people talked about was Section 230. You yourself have brought up Section 230. And I have always felt that the conversation about Section 230 is kind of a smoke screen. Because you can’t just go regulate content on the internet. The First Amendment exists, it’s the first one. People are pretty familiar with it. It literally prevents Congress from making laws about speech. You can threaten companies by saying, “We’ll take your 230 protection away and unleash a tidal wave of litigation that will basically crush you. So do what we want with the speech regulation, and we’ll just keep hammering away about Section 230.” And that threat has worked or not worked.

I think it is notable that Trump and Biden have the exact same position on Section 230 because they need the exact same leverage to get whatever they want. On the flip side, it seems like “Okay, we need to protect kids” just jumps right over the First Amendment. You can go to the Supreme Court and say, “This is a narrowly tailored regulation, and it is in the best interest of our nation for Congress to protect kids. We’re going to do some content regulation here. We’re going to say you can show this to kids and not show this to kids.” Is that how you’re thinking about it? This is the one that we can get through and directly regulate the content.

100 percent. I think the art of the possible is, let’s at least agree that kids should be protected. But I also want to just agree with you about Section 230. I have a bipartisan bill that does some tinkering with Section 230, but you’re right, when people talk [about] Section 230… I mean, I tweeted one time, anybody who talks about eliminating Section 230 has to describe what Section 230 does. No one knows what Section 230 does. I mean, it’s actually not a particularly complicated statute, but it became just an incantation for politicians to say, “I’m anti-big tech.” No one’s sure what it means. And if you took away those protections, the internet would be more, not less, of a cesspool. And that’s the really important thing, is that they would be unable to do any content moderation, and then everything would be more of a sewer.

Do I think just like the Communications Decency Act, just like the Telecommunications Act, that we should be iterating this over time, and Section 230 is not some sacrosanct bill or law on a tablet that must not be touched? Yeah, I think we ought to make changes, and there are some pretty rational changes that ought to be made, but you’re right that it has become a distraction. It has become a sort of bludgeon. And the reason that Section 230 has not been repealed is that in the end, the product would be way worse. Whatever challenges we have on the internet, we still do want Meta and Twitter and others to be able to do whatever content moderation they’re doing.

We could argue that they’re not doing enough, but we certainly don’t want them to hear from their lawyers, “Hey, listen, if you touch this, we could be in for treble damages, four times gross profits in the United States.” And they’ll just say, “Fine, we’re going to keep providing this platform, but we’re not going to moderate at all.” And no moderation is quite bad. I think what people are mad at is that they don’t like the moderation decisions — not that moderation happens at all.

But the laws about children’s safety, that’s not some end run around the First Amendment. That’s just driving right at it and saying, “Okay, this is the thing that will stand up to First Amendment scrutiny. We’re going to say to the platforms, ‘You can and cannot show this speech to these people in this way — algorithmically boosted, in this way.’ And we’re going to face some legal challenges. We’re going to face some First Amendment scrutiny in our narrowly tailored policy that has a compelling government interest, strict scrutiny under the law.” It will survive that strict scrutiny. Because it is kids, and kids are a political winner.

Yeah, and I mean you just described the legal argument, right? Compelling government interest and all the rest of it, but it’s also the public policy argument, which is like, “Can we please argue about everything else but agree that an 11-year-old shouldn’t have their brain stem melted?”

So we started by talking about bad faith, and you are notable for calling out the bad faith of your opponents. This is a law with a lot of room for interpretation inside it, and we live in a time when LGBTQ rights are under attack, trans rights are under attack, books are being banned, and abortion is not legal in every state. And the opponents of these bills say, “Okay, this is going to keep kids from accessing information about their own reproductive healthcare. This will keep kids from finding communities of like-minded queer people in states where that is becoming more and more illegal. And that is a cost that is higher than the benefit.” Do you see that calculation playing out?

Yeah, I do. This one was hard for me. I’ve sort of met with and talked to a lot of LGBTQ rights organizations, and I think we’re going to be able to find a sweet spot here. The challenge is that we are operating under the assumption that every kid is online, and the only way they’re online is through social media. So if you run the Trevor Project or some other LGBTQ rights organization, your main means of reaching a child is through a social media platform. It doesn’t have to be that way. They can meet each other in real life, they can still use a web browser, they can still find community. But it is true that if all kids, if all 12-year-olds, are no longer on social media, then that won’t be the way they find information. We’ve just decided that that’s true.

Those equities I think are legitimate. I do think that the social media platforms encourage a level of sensitivity and paranoia among those organizations, but once I talk with them, I’m able to go, “Look, that’s not what we’re doing here.” But I mean, for me, there is a little bit of a balancing act because, although my bill is bipartisan, and it’s not the same as what’s happened in Arkansas or Utah or anywhere else, it sort of rhymes with it enough where people are quite worried that this is actually just a mechanism for both governmental and parental control of queer kids and their access to public health services and community and just feeling like they have some sense of belonging. So I’m very sensitive to the potential problem here, which is what Senator Blackburn said about the Kids Online Safety Act. I don’t want to get the quote wrong, but it was something like, “We’re going to use this bill to kind of go after LGBTQ groups who are targeting kids.”

I went, “Whoa, sign me out of that.” I want, again, for us to do something that is narrow enough where we can all agree, and frankly, you and I may be in a little bit of disagreement about this. I want it to be content-neutral. That’s why I want that bright line: 12 and under, no; 13 and over, with your parents’ consent and no algorithmic boosting. But this idea that there would be a government panel that determines harm or likelihood of causing anxiety seems to be a little dangerous if you’re looking at the kind of federalist project out in America.

I don’t like the idea of a state government panel saying, “Hey, this queer content is going to turn kids — whatever they’re thinking and saying.” I just want to make clear, I want no part of that, which is why my bill’s content-neutral and just has this bright line of 12 versus 13. Is it arbitrary? Could it have been 14? Sure. But you have to draw lines, and we do that for driving and voting and drinking. So I just think this is another place we need to draw a hard line.

I want to come back to bad faith. The Marsha Blackburn quote is, “We should be protecting minor children from the transgender in this culture.” And then she said that this bill would provide a duty of care. You know that there’s a lot of bad faith. If there’s a hypothetical second Trump administration, with a Trump FTC armed with the powers in this bill, it could go after companies that might be disseminating information about people’s rights, because that information might hypothetically reach a child, and then censor all of it anyway. Isn’t that a problem? Isn’t that the bad-faith problem neatly expressed here? This is the thing we can do because it protects children, and the repercussions be damned.

Yeah, I think you’ve described a little bit the difference between my bill and the Kids Online Safety Act. I am very uncomfortable with… Look, I like a duty of care as it relates to customer data because that’s already an established principle in other sectors. Like I said, banking information, financial information, HIPAA, your lawyer, they all have information that is deeply sensitive, and the basic expectation is they will lose their job if they use that data against you. Not that they can’t share it or use it in order to do their job, but that they can’t use it for purposes other than what you could reasonably expect them to use it for. So I like a duty of care in that context. But if the duty of care is like, “Hey, the government itself gets to adjudicate what content may or may not be harmful to a child,” I think that just gives too much power to individual politicians to referee what content is dangerous.

And let me take a conservative example. Do I think that the government ought to be able to go in and say, “The National Rifle Association is encouraging people to join and do youth gun camps,” and there might be some progressive groups that say, “Look, this is endangering children”? I just don’t think we want to go down that slippery slope, and we don’t have to. We can avoid that whole argument by just saying we know exactly what’s going on. Anyone who’s a parent knows what’s going on. The surgeon general knows what’s going on. You can believe both the data that’s incoming and our own experience and our own eyes. It’s the algorithmic boosting, and it’s kids getting online, and especially on social media platforms, too early in their lives to be able to sort it all out. We can address those two problems without even touching the question of the content itself.

I’ve got a five-year-old. She wants to be a YouTuber when she grows up. This is terrifying. She asked me for a YouTube account the other day so strangers could watch her videos. She’s never made a video in her life, but she’s already there in her brain. Then she asked me to be on Roblox, and I told her she has to learn how to read before she can use Roblox. And this is going to buy me some time, but it’s not going to buy me forever. So great, I’m going to provide parental consent for her to be on these platforms. The bill is like: your parents need to say yes. But just in my experience, all the parents end up saying yes. Does this stop anything?

So that’s a really interesting question. It probably slows things down and precipitates a conversation for a 13-, 14-, or 15-year-old. But I do think it gives a lot of parents some measure of comfort to be able to establish a societal norm. Do kids drink when they’re 17? Sure they do. Do they drink less than they would if it were legal at 16? That’s also true. So this idea that, well, kids are going to get a VPN, yeah, some kids will get a VPN. Just like… Well, I won’t say, standing outside of a 7-Eleven on Kilauea Avenue trying to see if some 25-year-old will buy me a six-pack when I’m 17 years old or whatever. This is a hypothetical, you understand—

But I think the point is that most parents I talk to would be so pleased to be able to say to their 11-year-old who is demanding an Instagram account, “I’m sorry, it’s not permissible.” Right? Now, will some kids be enterprising and figure out a way, and will some kids have their parents set up a fake Instagram account through the adult age verification process? Sure. But I still think it’s worth it to establish some bright lines societally. But that doesn’t obviate the difficulty for every parent, whether you’re a parent of young kids or teenagers or preteens or whatever. It’s still going to be a rough world here. But I think what the government has to do is establish, at our most basic level, that we are the ally of parents in helping to navigate all of this.

We started off by talking about the incentives of Congress and how the incentives are toward virality and not necessarily policymaking. Shouldn’t there be some rules for members of Congress and how they use social media? This is what I come back to. I feel like the adults are being driven more insane than the kids. Their brains are being melted. Congress’ brains are being melted on the daily. And the kids are sort of fine. I talk to a lot of teenagers who know social media is bad for them, and they are learning how to have boundaries with this thing that is just sort of always happening to them.

And then I talk to politicians sometimes, not you sir, but I talk to politicians sometimes who seem utterly captivated by how many likes their tweets got. It feels like maybe the action’s pointed in the wrong direction, but you can’t say protecting adults from themselves passes First Amendment scrutiny, so we’re pointing all the action at kids.

Yeah, I think it’s a fair criticism. I’m not really sure that there’s anything you could do by rule or law to stop adults, especially elected representatives of their districts or their states, from speaking out in whatever manner they see fit, even including being stupid on social media. So you can’t make a law against that. But I think your point is exactly right, which is that we are pointing at something that we feel we can solve and where we can sort of separate ourselves from the problem and not admit that, to a greater or lesser extent, part of what’s happening in our politics — not the whole thing, we can’t blame it all on the internet or social media — but at least part of what’s happening in our politics is a tendency toward understanding political success not as the enactment of laws or policies that actually help people but as the extent to which people say “Good job” online.

And those are entirely different things. And I don’t know why I’ve been able to maintain some perspective, but I just remember… I forgot what I did, but I was really proud of it on substance. I had delivered something for Hawaii, and in the same day, I tweeted something sassy about Donald Trump, and Wolf Blitzer’s producers called my comms director, and they wanted to get me on the TV to talk about my tweet, not to talk about my accomplishments. So one of the things I try to do is verbalize that although that’s tantalizing, that is not the job.

One platform that we have talked about but we haven’t talked about in substance very directly is TikTok. There was a lot of action to ban TikTok or sell it to Oracle or sell it to Microsoft. Do you think TikTok should be banned or sold to an American company?

I would like it to be sold to an American company. I have not quite gotten to the point where I think it should be banned. But part of my analysis here is sort of a question of fact. I do want to understand better who controls the algorithm and the extent to which the government of China is using it as a geopolitical, strategic vector. I just don’t know that that’s happening. I think it is worth monitoring. I think it is worth being on the borderline of paranoid about.

But I don’t want to assume something is happening without having that evidence. But it is very worrisome, especially if you imagine a scenario where things with Taiwan get a little more dicey: do they have the ability to start doing the things that, through their American subsidiary, they’ve been promising all along they wouldn’t do? So I worry about the national security risks. But other than that, they’re just another social media platform that should be subject to the same scrutiny and the same public policy as everybody else.

Alright, last big-picture question. It’s an election year. It feels like people are going to tune out of it because it’s the same two guys, the same kind of noise. There is a stultified Congress, as you have described; the inaction is high, the theatrics are high. How should people think about tuning in and constructively participating in this election cycle, versus sort of tuning out and letting the chaos reign?

Part of what was happening, before Trump officially got the nomination, was this kind of Washington parlor game: well, is Biden going to just be a one-termer? And maybe Nikki Haley can pull it off, and all the rest of it. But now we’re facing a rather stark choice. And one of the things that I think is a little maddening for those of us who are part of this historic Congress, where we passed the biggest climate bill in human history, where we reduced the price of medicine, where we passed infrastructure, where we did all these extraordinary things, probably the most productive Congress since LBJ, and Chuck Schumer and Nancy Pelosi and Joe Biden deserve a lot of credit for it. And yet, part of the problem is like, yeah, but Joe Biden is old. And I’m not sure that that argument is going to be as compelling as it was in an odd-numbered year.

Once we’re facing this choice and you have this president with a record of accomplishment, with costs going down, knock on wood, with a relatively healthy economy, lots of problems remaining in the world, but a pretty good record to point to, people are going to be thinking about the records of these two men. And if we could play the parlor game and pick another person, would your average 22-year-old voter pick someone else? Sure. But politics is about choices, right? And Biden himself always says, “Don’t compare me to the Almighty. Compare me to the alternative.” And I think that’s what this election is going to be about.

Alright. Well, Senator Schatz, thank you for spending so much time on Decoder today and being so direct with us. Really appreciate it, we’ll have to have you back soon.

Decoder with Nilay Patel /

A podcast about big ideas and other problems.
