
AMD, Intel and Nvidia’s latest moves in the AI PC chip race | TechTarget


The CEOs of AMD, Intel, Nvidia and Qualcomm all delivered keynotes at the Computex conference in Taiwan this week, sharing significant chip advancements as the companies aim to turn every device into an AI device.

The vendors pushed “AI everywhere” agendas during the computing conference, touting generative AI as the technology that will make companies more efficient and “improve lives.”

“AI is our number one priority, and we are at the beginning of an incredibly exciting time for the industry as AI transforms virtually every business, improves our quality of life, and reshapes every part of the computing market,” said AMD CEO Lisa Su in her opening keynote, which was live streamed.

AI is also critical to these chip makers’ bottom lines. Nvidia leads the AI chip market with dramatic stock price surges to prove it, but there is room for the other chip makers to stake a claim. Revenue from AI chips globally is expected to reach $71 billion this year, up 33% from 2023, according to a recent forecast from Gartner.

While the chip makers all touted incredibly fast, power-efficient processors that can run GenAI workloads on any device, most enterprises are years away from needing hardware that’s purpose-built for GenAI, particularly in laptops and PCs.

“Since organizations are in three- to five-year refresh cycles, we’re probably 18 to 24 months away from seeing any critical mass of these in the workforce,” said Gabe Knuth, an analyst with Enterprise Strategy Group, a division of TechTarget. “At that point, I suspect we’ll also begin to see what uses for local AI hardware stick, and the real momentum will start to build toward a moment when all PCs will have some AI capabilities in three to five years.”

NPUs, TOPS key to AI PC performance

In her keynote, Su introduced the AMD Ryzen AI 300 Series mobile processors and AMD Ryzen 9000 Series processors for laptops and PCs, and emphasized the importance of neural processing units (NPUs) for running AI workloads efficiently. That includes tasks such as real-time translation, content creation and customized digital assistants that help with decision making, Su said.

NPUs enable devices to run longer, quieter and cooler as AI tasks run continually in the background, according to Gartner.

Pavan Davuluri, Microsoft VP of Windows devices, joined Su on the Computex keynote stage to explain the importance of specialized chips with built-in NPUs for Copilot+ PCs, the tech giant’s AI-focused PCs.

“On-device AI means faster response time, better privacy, lower costs, but that means running models with billions of parameters in them, on PC hardware,” Davuluri said.

“Compared to traditional PCs of just a few years ago, we are talking 20 times the performance and up to 100 times the efficiency [required] for AI workloads,” he continued. “To make that possible, every Copilot+ PC needs an NPU of at least 40 TOPS [trillions of operations per second].”

Microsoft launched its Copilot+ PCs last month with Qualcomm as its first chip provider.

Earlier this week at Computex, Qualcomm President and CEO Cristiano Amon planted the company’s flag in the AI PC arena with Snapdragon X chips. Qualcomm claims Snapdragon X series systems offer days-long battery life at 45 TOPS AI performance.

“One thing that will be different about this new PC — unlike in the past, your Windows PC will get better over time,” Amon said during his keynote.

Like AMD, Qualcomm touted the NPU as the key to performance. AI workloads are offloaded from the CPU and GPU to the NPU, providing significant performance enhancement and power savings.

Meanwhile, Intel CEO Pat Gelsinger introduced the new Lunar Lake processors for Copilot+ PCs during Computex but emphasized the benefits of x86 CPUs and GPUs combined with NPUs.

“Since there’s been some talk about this other X Elite chip and its superiority to the x86 [chips], I just want to put that to bed right now. Ain’t true,” Gelsinger said in his keynote, referring to Qualcomm’s Snapdragon X Elite.

Gelsinger cited significant power and efficiency gains in Lunar Lake processors, which will power more than 80 models of Copilot+ PCs from 20 equipment suppliers beginning in Q3. Lunar Lake delivers 48 NPU TOPS and up to four times the AI compute of the previous generation to improve generative AI workloads.

With AMD’s own NPU delivering 50 TOPS and Intel’s delivering 48, compared with Qualcomm’s 45, the AI PC arms race is rapidly advancing, Knuth said.

“The TOPS war has begun,” he added.

As for Nvidia, the AI chip giant launched new AI PCs built on its RTX platform at Computex, along with an RTX AI Toolkit, a collection of free tools and SDKs that Windows app developers can use to customize and deploy AI models for Windows applications. The RTX platform targets gamers and content creators.

It’s early days for AI PCs, but within the next two to three years they will make up 65% to 75% of the PC market, with an even higher share in the enterprise, said Jack Gold, founder and analyst at J Gold Associates.

“I expect Intel to hold the majority of enterprise market share, followed by AMD with Qualcomm being more popular in the high-end consumer and SMB space, but still a minority player,” Gold said.

AI chips, beyond PCs

Beyond endpoint devices, Nvidia, AMD and Intel all shared visions of transforming data centers.

Nvidia launched its new Blackwell architecture systems, which include Grace CPUs, plus Nvidia networking and infrastructure that companies will use to build “AI factories” and data centers to support future GenAI breakthroughs.

Intel launched Xeon 6 chips in two form factors: the Performance Core (P-Core) for resource-intensive AI applications and the Efficient Core (E-Core), designed for power efficiency in data centers.

AMD introduced its AI and adaptive computing technology, the AMD Versal AI Edge Series Gen 2, available now. It combines FPGA (field-programmable gate array) logic for real-time preprocessing, AI Engines powered by XDNA dataflow architecture for AI inference, and embedded CPUs for edge AI.

The new AMD Instinct MI325X server accelerator will be available in Q4 2024. The chip vendor also previewed its fifth-generation EPYC brand of multicore server processors, codenamed “Turin,” expected to launch in the second half of this year.

With plenty of AI hardware options to choose from, IT buying decisions might come down to performance per dollar or performance per watt. That’s because the “extreme power required to run the high-end chips like [Nvidia’s] B200 is problematic from a power availability and cost perspective,” Gold said.

“Inference workloads will happily run on more generic CPUs with some AI acceleration as needed. And since ultimately inference workloads will be the major bulk of AI processing, that gives Intel and AMD some advantages, as Nvidia really doesn’t have a competitive CPU offering,” Gold said. “Diversity in AI-based processing will be key to expansion in the market for AI workloads over the next two to three years.”

Bridget Botelho has covered a variety of technologies and broad IT industry trends since joining TechTarget in 2007. She leads TechTarget’s team of reporters as Editorial Director of News. 
