A new report from Bloomberg says that Sam Altman, once again OpenAI’s CEO, is trying to raise billions of dollars for an AI chip venture, with the cash going toward a “network of factories” for chip fabrication that would stretch around the globe and involve unnamed “top chip manufacturers.”

A major cost of and limitation on running AI models is having enough chips to handle the computations behind bots like ChatGPT or DALL-E as they answer prompts and generate images. Nvidia’s valuation rose above $1 trillion for the first time last year, thanks in part to its virtual monopoly: GPT-4, Gemini, Llama 2, and other models all depend heavily on its popular H100 GPUs.

Accordingly, the race to manufacture more high-powered chips to run complex AI systems has only intensified. The limited number of fabs capable of making high-end chips means Altman, like anyone else, has to bid for capacity years before the new chips are needed. And competing for that capacity against the likes of Apple requires deep-pocketed investors willing to front costs the nonprofit OpenAI still can’t afford. SoftBank Group and Abu Dhabi-based AI holding company G42 have reportedly been in talks about raising money for Altman’s project.

[Image: Microsoft’s new Azure Maia 100 AI processor. Credit: Microsoft]

AWS, Azure, and Google use Nvidia’s H100 processors as well. This week, Meta CEO Mark Zuckerberg told The Verge reporter Alex Heath that “by the end of this year, Meta will own more than 340,000 of Nvidia’s H100 GPUs” as the company pursues the development of artificial general intelligence (AGI).

[Image: Nvidia GH200 “Grace Hopper Superchip.” Credit: Nvidia]
