How big is your concern that these constraints will spur China to spin up competitive AI chips?

China has things that are competitive.

Right. This isn’t data-center scale, but the Huawei Mate 60 smartphone that came out last year got some attention for its homegrown 7-nanometer chip.

Really, really good company. They’re limited by whatever semiconductor processing technology they have, but they’ll still be able to build very large systems by aggregating many of those chips together.

How concerned are you in general, though, that China will be able to match the US in generative AI?

The regulation will limit China’s ability to access state-of-the-art technology, which means the Western world, the countries not limited by the export control, will have access to much better technology, which is moving fairly fast. So I think the limitation puts a lot of cost burden on China. You can always, technically, aggregate more chips to do the job, but it just increases the cost per unit. That’s probably the easiest way to think about it.

Does the fact that you’re building compliant chips to keep selling in China affect your relationship with TSMC, Taiwan’s semiconductor pride and joy?

No. A regulation is specific. It’s no different than a speed limit.

You’ve said quite a few times that of the 35,000 components that are in your supercomputer, eight are from TSMC. When I hear that, I think that must be a tiny fraction. Are you downplaying your reliance on TSMC?

No, not at all. Not at all.

So what point are you trying to make with that?

I’m simply emphasizing that in order to build an AI supercomputer, a whole lot of other components are involved. In fact, in our AI supercomputers, just about the entire semiconductor industry partners with us. We already partner very closely with Samsung, SK Hynix, Intel, AMD, Broadcom, Marvell, and so on and so forth. In our AI supercomputers, when we succeed, a whole bunch of companies succeed with us, and we’re delighted by that.

How often do you talk to Morris Chang or Mark Liu at TSMC?

All the time. Continuously. Yeah. Continuously.

What are your conversations like?

These days we talk about advanced packaging, planning for capacity for the coming years, for advanced computing capacity. CoWoS [TSMC’s proprietary method for cramming chip dies and memory modules into a single package] requires new factories, new manufacturing lines, new equipment. So their support is really, really quite important.

I recently had a conversation with a generative-AI-focused CEO. I asked who Nvidia’s competitors might be down the road, and this person suggested Google’s TPU. Other people mention AMD. I imagine it’s not such a binary to you, but who do you see as your biggest competitor? Who keeps you up at night?

Lauren, they all do. The TPU team is extraordinary. The bottom line is, the TPU team is really great, the AWS Trainium team and the AWS Inferentia team are really extraordinary, really excellent. Microsoft has their internal ASIC development that’s ongoing, called Maia. Every cloud service provider in China is building internal chips, and then there’s a whole bunch of startups that are building great chips, as well as existing semiconductor companies. Everybody’s building chips.
