The artificial intelligence (AI) gold rush is on, driving feverish demand for this era’s version of shovels and pickaxes: the semiconductors that make AI and data gathering possible.
Graphics processing units (GPUs) are a prime example. These high-end chips provide the massive computational power needed for AI models. Sales at Nvidia, which dominates the GPU market, have more than tripled in the past year.
DATA DRIVES CHIP INVESTMENT
But GPUs are just one type of high-performance semiconductor in demand as companies scramble to refurbish data centers and build new facilities. Microsoft, Meta, and Google spent more than $32 billion combined on data center infrastructure in just the first three months of this year.
Global semiconductor sales are expected to hit a record $611.2 billion this year, according to World Semiconductor Trade Statistics, an increase of 16% year over year. A further 12.5% increase is forecast for 2025.
“The surge in demand for [generative] AI applications is propelling a corresponding need for computational power, driving both software innovation and substantial investment in data center infrastructure,” notes a March 2024 McKinsey & Company report, pointing to segments like AI accelerators, high-bandwidth memory, and NAND flash data storage, in addition to GPUs.
HIGH-ENERGY CHALLENGES
Energy usage is another key factor driving the AI semiconductor market, says Roger Corell, senior director of leadership marketing at Solidigm, which specializes exclusively in NAND flash-based solid-state drives (SSDs). These SSDs are supplanting traditional hard-disk drives (HDDs) due to their performance, power, and space efficiencies. “AI workloads are extremely challenging from an infrastructure efficiency perspective, as they consume a tremendous amount of power,” he says.
Data centers now commonly draw 10 times more power than they did a decade ago. A single data center cabinet of general-purpose servers uses up to 15 kilowatts, while a cabinet running AI training workloads can consume 100 to 150 kilowatts, and some experts estimate usage could soon rise to 300 kilowatts.
Solidigm’s ultra-efficient technology is especially well-positioned to serve this market. Formed in late 2021 when South Korean semiconductor giant SK hynix acquired Intel’s NAND SSD business, Solidigm has a comprehensive data center portfolio. Of particular value to AI workloads, though, is its quad-level cell (QLC) NAND technology, a form of flash memory that stores four bits of data per cell versus the traditional one to three. The more bits per cell, the more information the semiconductor can store.
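To put that capacity arithmetic in concrete terms, the sketch below compares the raw capacity of a fixed number of NAND cells at one through four bits per cell. It is a minimal illustration only; the cell count is an assumed round figure chosen for comparison, not a Solidigm product specification.

```python
# Illustrative sketch: how bits-per-cell scales raw NAND capacity.
# The cell count is a hypothetical round number chosen for comparison,
# not a Solidigm product specification.
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}  # bits stored per cell

def raw_capacity_tb(num_cells: int, bits_per_cell: int) -> float:
    """Raw capacity in terabytes for an array of NAND cells."""
    return num_cells * bits_per_cell / 8 / 1e12  # bits -> bytes -> terabytes

NUM_CELLS = 100_000_000_000_000  # assume 100 trillion cells for illustration
for name, bits in CELL_TYPES.items():
    print(f"{name}: {raw_capacity_tb(NUM_CELLS, bits):.1f} TB raw")
# QLC holds four times the data of single-level cells in the same number of
# cells, which is what lets a QLC drive pack more terabytes into the same
# silicon footprint.
```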
“We are seeing massive growth because data center operators realize they have to upgrade their infrastructure if they want to scale out AI,” says Corell.
Sacramento-based Solidigm boasts an industry-leading SSD with 61.44 TB of storage capacity. With more than 30 years of innovation and long-standing engagements designing data centers with major cloud service providers, Solidigm is riding high on the AI wave.
“Our pipeline is full of conversations with customers about what hyper-dense, high-performance, power-efficient storage can do to overcome AI scale challenges,” says Corell.
In this new gold rush, not just any shovel will do.