Nvidia’s Jensen Huang plays down competition worries as key supplier disappoints with subdued expectations for AI … – Fortune

Nvidia will remain the gold standard for AI training chips, CEO Jensen Huang told investors, even as rivals push to cut into its market share and one of Nvidia’s major suppliers gave a subdued forecast for AI chip sales.

Everyone from OpenAI to Elon Musk’s Tesla relies on Nvidia semiconductors to run their large language or computer vision models. The rollout of Nvidia’s Blackwell system later this year will only cement that lead, Huang said at the company’s annual shareholder meeting on Wednesday.

Unveiled in March, Blackwell is the next generation of AI training processors, following the company’s flagship Hopper line of H100 chips, which rank among the most prized possessions in the tech industry and fetch prices in the tens of thousands of dollars each.

“The Blackwell architecture platform will likely be the most successful product in our history and even in the entire computer history,” Huang said.

Nvidia briefly eclipsed Microsoft and Apple this month to become the world’s most valuable company in a remarkable rally that has fueled much of this year’s gains in the S&P 500 index. At more than $3 trillion, Huang’s company was at one point worth more than entire economies and stock markets, only to suffer a record loss in market value as investors locked in profits.

Yet as long as Nvidia chips continue to be the benchmark for AI training, there’s little reason to believe the longer-term outlook is cloudy, and here the fundamentals continue to look robust.

One of Nvidia’s key advantages is a sticky AI software ecosystem known as CUDA, short for Compute Unified Device Architecture. Much like everyday consumers who are loath to switch from an Apple iOS device to a Samsung phone running Google Android, an entire cohort of developers has worked with CUDA for years and is comfortable enough that there is little reason to consider another software platform. Much like the hardware, CUDA has effectively become a standard of its own.

“The Nvidia platform is broadly available through every major cloud provider and computer maker, creating a large and attractive base for developers and customers, which makes our platform more valuable to our customers,” Huang added on Wednesday.

The AI trade did take a recent hit after memory-chip supplier Micron Technology, a provider of high-bandwidth memory (HBM) chips to companies like Nvidia, forecast fiscal fourth-quarter revenue that would only match market expectations of around $7.6 billion.

Shares in Micron plunged 7%, underperforming by a large margin a slight gain in the broader tech-heavy Nasdaq Composite.

In the past, Micron and its Korean rivals Samsung and SK Hynix have seen the cyclical boom-and-bust common to the memory-chip market, long considered a commodity business compared with logic chips such as graphics processors.

But excitement has surged given the demand for Micron’s chips in AI training. Micron’s stock more than doubled over the past 12 months, meaning investors have already priced in much of management’s predicted growth.

“The guidance was basically in line with expectations, and in the AI hardware world if you guide in line that’s considered a slight disappointment,” says Gene Munster, a tech investor with Deepwater Asset Management. “Momentum investors just didn’t see that incremental reason to be more positive about the story.”

Analysts closely track demand for high-bandwidth memory as a leading indicator for the AI industry because it is crucial to solving the biggest economic constraint facing AI training today: the issue of scaling.

Costs crucially do not rise in line with a model’s complexity (the number of parameters it has, which can run into the billions) but rather grow exponentially. The result is diminishing returns in efficiency over time.
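To make the shape of that problem concrete, here is a toy Python sketch. Both curves below are hypothetical assumptions chosen only to illustrate the trend the article describes, not real training costs or benchmark results: if cost grows exponentially with parameter count while capability grows more slowly, the price of each additional unit of capability keeps climbing.

```python
def training_cost(params_b):
    """Assumed exponential cost curve: doubles every 10B parameters (USD)."""
    return 1e6 * 2 ** (params_b / 10)

def capability(params_b):
    """Assumed sub-linear capability gain with scale."""
    return params_b ** 0.5

for p in [1, 10, 100]:
    cost = training_cost(p)
    gain = capability(p)
    print(f"{p:>3}B params: cost ${cost:,.0f}, "
          f"cost per capability unit ${cost / gain:,.0f}")
```

Under these assumptions, the cost per unit of capability rises from roughly $1 million at 1 billion parameters to over $100 million at 100 billion, which is the diminishing-returns dynamic described above.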

Even if revenue grows at a consistent rate, losses risk ballooning into the billions or even tens of billions of dollars a year as a model gets more advanced. That threatens to overwhelm any company that doesn’t have a deep-pocketed investor like Microsoft capable of ensuring an OpenAI can still “pay the bills,” as CEO Sam Altman phrased it recently.

A key reason for the diminishing returns is the growing gap between the two factors that dictate AI-training performance. The first is a logic chip’s raw compute power, measured in FLOPS (floating-point operations per second); the second is the memory bandwidth needed to feed it data quickly, often expressed in megatransfers per second, or MT/s.

Since the two work in tandem, scaling one without the other simply leads to waste and cost inefficiency. That’s why FLOPS utilization, or how much of the available compute can actually be brought to bear, is a key metric for judging the cost efficiency of AI models.
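The interplay between compute and bandwidth is often reasoned about with a simple roofline-style calculation. The Python sketch below is illustrative only; the peak-FLOPS and bandwidth figures are hypothetical, not the specifications of any real chip.

```python
def attainable_flops(peak_flops, bandwidth_bytes_per_s, flops_per_byte):
    """Roofline model: achievable throughput is capped either by raw
    compute or by how fast memory can deliver operands."""
    return min(peak_flops, bandwidth_bytes_per_s * flops_per_byte)

# Hypothetical accelerator: 1,000 TFLOPS peak, 3 TB/s of memory bandwidth.
PEAK = 1000e12       # FLOPS
BANDWIDTH = 3e12     # bytes per second

# Memory-bound workload: 100 floating-point ops per byte moved from memory.
low = attainable_flops(PEAK, BANDWIDTH, 100)
print(f"100 FLOPs/byte -> {low / PEAK:.0%} utilization")   # 30%

# Compute-bound workload: 500 ops per byte keeps the chip saturated.
high = attainable_flops(PEAK, BANDWIDTH, 500)
print(f"500 FLOPs/byte -> {high / PEAK:.0%} utilization")  # 100%
```

Under these assumed numbers, doubling peak FLOPS without raising memory bandwidth would cut the memory-bound workload’s utilization to 15% — the memory-wall dynamic in miniature.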

As Micron points out, data-transfer rates have been unable to keep pace with rising compute power. The resulting bottleneck, often referred to as the “memory wall,” is a leading cause of the inherent inefficiency in scaling AI-training models today.

That explains why the U.S. government focused heavily on memory bandwidth when deciding which specific Nvidia chips to ban from export to China in order to weaken Beijing’s AI development program.

On Wednesday, Micron said its HBM business was sold out all the way through the end of the next calendar year, which trails its fiscal year by one quarter, echoing similar comments from Korean competitor SK Hynix.

“We expect to generate several hundred million dollars of revenue from HBM in FY24 and multiple [billions of dollars] in revenue from HBM in FY25,” Micron said on Wednesday.

