September 24, 2024


AMD unveils new A.I. chip to challenge Nvidia's dominance

Lisa Su displays an AMD Instinct MI300 chip as she delivers a keynote address at CES 2023 at The Venetian Las Vegas on January 04, 2023 in Las Vegas, Nevada.

David Becker | Getty Images

AMD said on Tuesday that its most advanced GPU for artificial intelligence, the MI300X, will start shipping to some customers later this year.

AMD's announcement represents the strongest challenge yet to Nvidia, which currently dominates the market for AI chips with over 80% market share, according to analysts.

GPUs are chips used by companies like OpenAI to build cutting-edge AI systems such as ChatGPT.

If AMD's AI chips, which it calls "accelerators," are embraced by developers and server makers as substitutes for Nvidia's products, it could represent a big untapped market for the chipmaker, which is best known for its traditional computer processors.

AMD CEO Lisa Su told investors and analysts in San Francisco on Tuesday that AI is the company's "largest and most strategic long-term growth opportunity."

"We think about the data center AI accelerator [market] growing from something like $30 billion this year, at over 50% compound annual growth rate, to over $150 billion in 2027," Su said.

While AMD did not disclose a price, the move could put pricing pressure on Nvidia's GPUs, such as the H100, which can cost $30,000 or more. Lower GPU prices may help drive down the high cost of serving generative AI applications.

AI chips are one of the bright spots in the semiconductor industry, while PC sales, a traditional driver of semiconductor processor sales, slump.

Last month, AMD CEO Lisa Su said on an earnings call that while the MI300X will be available for sampling this fall, it would start shipping in greater volumes next year. Su shared more details on the chip during her presentation on Tuesday.

"I love this chip," Su said.

The MI300X

AMD said that its new MI300X chip and its CDNA architecture were designed for large language models and other cutting-edge AI models.

"At the center of this are GPUs. GPUs are enabling generative AI," Su said.

The MI300X can use up to 192GB of memory, which means it can fit even bigger AI models than other chips. Nvidia's rival H100 only supports 120GB of memory, for example.

Large language models for generative AI applications use lots of memory because they run an increasing number of calculations. AMD demoed the MI300X running a 40 billion parameter model called Falcon. OpenAI's GPT-3 model has 175 billion parameters.

"Model sizes are getting much larger, and you actually need multiple GPUs to run the latest large language models," Su said, noting that with the added memory on AMD chips, developers would not need as many GPUs.
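To see why memory capacity matters, here is a rough back-of-envelope sketch, not from AMD's presentation, that assumes 16-bit weights and counts only the model's parameters (real deployments also need memory for activations, caches and framework overhead):

```python
# Rough illustration: memory needed just to hold model weights at 16-bit precision.
# Assumes 2 bytes per parameter; actual requirements are higher once activations,
# the KV cache and framework overhead are included.

BYTES_PER_PARAM_FP16 = 2

def weight_memory_gb(num_params: float) -> float:
    """Approximate size of a model's weights in gigabytes."""
    return num_params * BYTES_PER_PARAM_FP16 / 1e9

for name, params in [("Falcon-40B", 40e9), ("GPT-3 (175B)", 175e9)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights")

# Falcon-40B: ~80 GB of weights   -> fits within a single 192GB MI300X
# GPT-3 (175B): ~350 GB of weights -> needs multiple accelerators regardless
```

Under that assumption, more memory per chip translates directly into fewer chips needed to hold a given model, which is the point Su was making.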

AMD also said it would offer an Infinity Architecture that combines eight of its MI300X accelerators in one system. Nvidia and Google have developed similar systems that combine eight or more GPUs in a single box for AI applications.

One reason AI developers have historically preferred Nvidia chips is that Nvidia has a well-developed software package called CUDA that enables them to access the chip's core hardware features.

AMD said on Tuesday that it has its own software for its AI chips, which it calls ROCm.

"Now while this is a journey, we've made really great progress in building a powerful software stack that works with the open ecosystem of models, libraries, frameworks and tools," AMD president Victor Peng said.
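As a small illustration of what working with that open ecosystem can look like in practice (this example is not from AMD's presentation): ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda device interface used for Nvidia hardware, so framework-level code can stay vendor-neutral.

```python
import torch

# Hypothetical illustration: the same PyTorch code can run on an Nvidia GPU (via CUDA)
# or an AMD GPU (via ROCm/HIP), since ROCm builds of PyTorch reuse the torch.cuda API.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(4096, 4096, device=device)
y = torch.randn(4096, 4096, device=device)
z = x @ y  # matrix multiply dispatched to whichever accelerator backend is installed

print("Ran on:", torch.cuda.get_device_name(0) if device == "cuda" else "CPU")
```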