The backstory: Last year when ChatGPT emerged on the scene, it sparked a frenzy in the world of artificial intelligence (AI). Everyone from tech giants like Google and Microsoft to smaller players wanted a piece of the action. A big part of this AI revolution was the demand for specialized AI chips, also known as AI accelerators, which are crucial for the latest generative AI technology. Leading this charge was Nvidia, which quickly became the market leader.
Nvidia's graphics processing units (GPUs) are highly sought after for training deep-learning AI models. With 84% market share, Nvidia has left competitors like Intel and AMD far behind, and its market value soared to US$1 trillion in May.
Notably, systems powering services like ChatGPT rely heavily on Nvidia GPUs, some of which cost over US$10,000. OpenAI has also relied on a supercomputer funded by Microsoft, one of its major backers, which boasted 10,000 Nvidia GPUs.
But running ChatGPT comes at a cost. Each question you ask costs around 4 cents, according to analyst Stacy Rasgon from Bernstein. If ChatGPT's queries were to reach even a fraction of Google's search volume, we'd be talking about an initial investment of US$48.1 billion in GPUs, along with an annual chip bill of US$16 billion.
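To see how those numbers add up, here is a rough back-of-envelope sketch. The per-query cost comes from the analyst estimate above; the query volume and the 10% fraction of Google's search traffic are illustrative assumptions for this sketch, not Bernstein's actual model.

```python
# Back-of-envelope: what serving ChatGPT-style queries could cost at scale.
# The volume figures below are illustrative assumptions, not Bernstein's model.

COST_PER_QUERY_USD = 0.04        # analyst estimate cited above (~4 cents/query)
GOOGLE_SEARCHES_PER_DAY = 8.5e9  # commonly cited public figure (assumption)
FRACTION_OF_GOOGLE = 0.10        # assume ChatGPT handles 10% of that volume

queries_per_day = GOOGLE_SEARCHES_PER_DAY * FRACTION_OF_GOOGLE
daily_cost = queries_per_day * COST_PER_QUERY_USD
annual_cost = daily_cost * 365

print(f"Queries per day:    {queries_per_day:,.0f}")
print(f"Daily compute cost: ${daily_cost / 1e6:,.0f} million")
print(f"Annual compute cost: ${annual_cost / 1e9:,.1f} billion")
```

Under these assumed inputs, the sketch works out to roughly US$34 million a day, or about US$12 billion a year in compute, the same order of magnitude as the annual chip bill Bernstein estimates.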
More recently: In May, OpenAI CEO Sam Altman raised concerns about a shortage of AI chips. His worries centered on two issues. First, the critical processors that power OpenAI's software were in short supply, so keeping operations running smoothly would require securing more of them. Second, the high cost of running this hardware was prompting OpenAI to take a closer look at its budget.
The development: OpenAI is reportedly exploring the possibility of creating its own AI chips and is even weighing potential acquisitions in the field, according to people familiar with the matter. But it's important to note that OpenAI has not made a final decision. According to these sources, the company is still weighing its options for dealing with the chip shortage and the high costs, and those options also include working more closely with chipmakers and diversifying its suppliers.
"Nvidia’s continued dominance has put a really fine point on how hard it is to break into this market," said Greg Reichow, a partner at Eclipse Ventures. "This has resulted in a pullback in investment into these companies, or at least into many of them."
"We view Nvidia as the most important company on the planet in an era that is rapidly changing towards one that will be emphasized by greater AI capabilities," said CFRA Research analyst Angelo Zino.
“A common theme that came up throughout the discussion was that currently OpenAI is extremely GPU-limited, and this is delaying a lot of their short-term plans,” wrote Raza Habib in a blog post about interviewing OpenAI CEO Sam Altman, which has since been taken down. “The biggest customer complaint was about the reliability and speed of the API. Sam acknowledged their concern and explained that most of the issue was a result of GPU shortages.”