Nvidia delivered yet another triumphant quarterly report on Wednesday evening, and with it the clearest sign yet that the company intends to take on Intel directly for control of the personal computer. The short version: Nvidia has tremendous access to the resource an “AI PC” needs most, which is lots and lots of memory circuitry.
Intel, of course, sells upward of ninety percent of the central processing units in PCs made by Dell, HP, and others. It has faced only slight competition from the occasional challenger, and for decades has kept its chief rival, Advanced Micro Devices, largely at bay.
But the stage is set for Nvidia to use its rising dominance of artificial intelligence to take serious market share from Intel. Nvidia already sells graphics cards for gaming PCs, but rumors have circulated recently that the company would take on Intel directly by selling CPUs as well. An October report by Reuters cited unnamed sources saying that “Nvidia has quietly begun designing central processing units (CPUs) that would run Microsoft’s Windows operating system.”
Wednesday evening brought new fuel for such speculation. On the conference call hosted by CEO Jensen Huang and CFO Colette Kress, Huang was asked by analyst William Stein of Truist Securities whether there is a chance to compete more directly with Intel and AMD in the PC market.
After a long and winding answer, Huang came around to mentioning Microsoft’s unveiling this week of the “Copilot+ PC,” a new approach to the personal computer that does more AI processing on the device as opposed to in the cloud. The Copilot+ PC, said Huang, “opens up opportunities for system innovation even for PCs.”
Now, the first Copilot+ PCs use not Intel CPUs but chips from Qualcomm. Microsoft chose to lead with Qualcomm because of Qualcomm’s extensive work building chips that can accelerate AI while also handling all the other normal tasks of a PC, such as spreadsheets. Intel-based and AMD-based versions of the Copilot+ PC will follow in the coming months, Microsoft said.
However, the Qualcomm chip, called “Snapdragon X Elite,” is noteworthy because it doesn’t use the traditional PC technology of Intel and AMD, called “x86”; it uses technology from ARM Holdings, the company that licenses chip-design blueprints to Qualcomm, Nvidia, and many others. Nvidia, too, has an ARM-based CPU, the “Grace,” but it is built for AI tasks in the data center and costs thousands of dollars. PC CPUs have to come in at hundreds of dollars.
There would be no reason for Nvidia to go down-market making cheaper CPUs for the PC unless it stood a chance of really taking a bite out of Intel’s market share. In that long and winding answer to Stein’s question, Huang cited the various “system” advantages Nvidia has brought to the data center.
Among those advantages is extensive use of the most-advanced DRAM memory chips. It’s well known that AI is a memory-intensive application of computing. Having plentiful amounts of memory, and having an efficient connection between the CPU chip and memory, are the keys to performance in AI computing.
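To see why memory matters so much, consider a rough back-of-envelope calculation. When a large language model generates text on a device, every weight of the model typically has to be streamed from memory for each token produced, so memory bandwidth, not raw compute, usually sets the ceiling on speed. The Python sketch below illustrates the arithmetic; the bandwidth figures are illustrative assumptions, not measurements of any particular chip.

```python
# Back-of-envelope: why memory bandwidth bounds on-device AI performance.
# Generating one token with a large language model typically requires
# streaming every model weight from DRAM, so decode speed is roughly
# capped at (memory bandwidth) / (model size). All figures below are
# illustrative assumptions, not measured numbers.

def tokens_per_second_ceiling(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound on token generation when inference is memory-bound."""
    return bandwidth_gb_s / model_size_gb

# A 7-billion-parameter model quantized to 4 bits per weight is ~3.5 GB.
model_gb = 7e9 * 0.5 / 1e9

scenarios = [
    ("Dual-channel PC DDR5 (assumed ~80 GB/s)", 80.0),
    ("Wide unified laptop memory (assumed ~400 GB/s)", 400.0),
    ("Data-center HBM (assumed ~3,000 GB/s)", 3000.0),
]

for label, bw in scenarios:
    print(f"{label}: ~{tokens_per_second_ceiling(bw, model_gb):.0f} tokens/sec ceiling")
```

The point of the arithmetic: all else equal, a chip with twice the memory bandwidth can generate tokens twice as fast, which is why coupling the processor tightly to plentiful, fast DRAM is the decisive design choice for an AI PC.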
Nvidia, which now commands huge access to DRAM supply because of AI, is in a prime position to turn that access into an advantage in PCs such as the Copilot+ PC.
A report this week by analyst Matthew Bryson of Wedbush Securities noted a looming global shortage of DRAM, because more and more of the silicon wafers used to make DRAM are being devoted to what’s called “HBM,” or high-bandwidth memory, the most-advanced DRAM stacked alongside Nvidia’s data-center GPUs in AI servers. Nvidia’s raging success in AI servers, in other words, risks starving ordinary PCs (and ordinary servers) of DRAM.
Unless, that is, Nvidia leverages its influence among suppliers to come up with a cost-effective part that combines an ARM-based CPU with DRAM. That is likely what Huang is betting on when he refers to “opportunities for system innovation even for PCs.”
Huang realizes that an AI PC will be all about the memory capacity of the machine. Microsoft has described a Copilot+ PC feature called “Recall” that will require fifty gigabytes of disk storage. Capturing and storing that data will no doubt also require a big step up in the standard DRAM configuration of a PC.
If the PC is now more of a memory machine, then selling a new style of CPU with a closer coupling to massive amounts of DRAM opens the door to a shift away from the Intel-based PC as we have known it. That kind of “architecture” change is a big opening, as Huang clearly senses: it offers the prospect of the first major market-share shift away from Intel in decades.
It’s as if Huang were to take the same blueprint for computers that he has exploited in the cloud for AI and shrink it down to run on PCs.
The key question for Nvidia is how much of its time and effort it is willing to devote to CPU chips that cost at most a couple of hundred dollars. Even if such a CPU were a unique kind of part that pairs the CPU itself with massive amounts of DRAM, there is no way it would come close to the profit margin of Nvidia’s data-center chips such as the “H200” GPU, which costs tens of thousands of dollars. Even Nvidia’s “RTX” graphics cards for gaming, priced in the neighborhood of $600, are far more profitable than CPUs.
Huang will have to retool part of his organization to focus, for the first time, on a major effort to produce a budget chip. The investment is worth it if the company can take substantial share in PCs, where annual unit volumes dwarf even a raging market for AI data-center chips.
Over the decades, Huang has been bold in taking on challenges others deemed too remote. With his company minting money on AI, Nvidia is in the best position it has ever been to parlay its success in one market into a serious challenge in another.