Dell CEO Warns AI Memory Demand Could Reach “Unimaginable” Levels by 2028 as Supply Constraints Continue to Tighten

Dell Technologies CEO Michael Dell is signaling that the AI memory supercycle is far from over. According to a recent report from ETNews, Dell said that with memory capacity per accelerator and the total scale of AI infrastructure expanding at the same time, total memory demand could rise by roughly 625 times by 2028. The same report says Dell argued that supply expansion takes years, while AI infrastructure demand is still showing no signs of slowing.

That number comes from a very aggressive projection. Dell's framework, as summarized in current coverage, assumes that memory capacity per accelerator could increase by around 25 times while the number of deployed accelerators also rises by 25 times, producing the much-discussed 625x multiplier. It is important to note that this is a forward-looking estimate rather than a confirmed market model, but it clearly reflects how serious large infrastructure buyers have become about securing memory for AI systems.
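The arithmetic behind the headline figure is simple compounding of two growth factors. A minimal sketch, where both 25x inputs are Dell's reported projections rather than confirmed market data:

```python
# Sketch of the compounding behind the reported 625x figure.
# Both growth factors below are Dell's projected assumptions as
# summarized in coverage, not confirmed market data.

capacity_growth_per_accelerator = 25  # projected memory per accelerator, 2028 vs. today
accelerator_count_growth = 25         # projected number of deployed accelerators, 2028 vs. today

# Total demand scales with both factors multiplied together.
total_memory_demand_multiplier = (
    capacity_growth_per_accelerator * accelerator_count_growth
)

print(total_memory_demand_multiplier)  # -> 625
```

The point of the multiplication is that neither factor alone drives the projection; it is the two trends compounding that produces such an extreme number.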

The broader market context suggests Dell is not making this argument in a vacuum. Recent reporting says memory is already consuming a much larger share of hyperscaler AI data center spending than it did just a few years ago, with analyst estimates pointing to memory taking about 30% of hyperscaler AI data center capex in 2026, up from about 8% in 2023 and 2024. That same reporting says DRAM pricing has surged and that supply remains tight, especially for advanced memory tied to AI platforms.
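The same compounding logic applies to the capex numbers: a rising memory share of a rising total budget multiplies into a much larger absolute spend. A hedged illustration, where the capex totals are hypothetical placeholders and only the 8% and 30% share estimates come from the analyst reporting cited above:

```python
# Illustrative only: the capex totals below are hypothetical placeholders,
# not reported figures. Only the ~8% (2023) and ~30% (2026) memory-share
# estimates come from the analyst reporting cited in the article.

share_2023 = 0.08
share_2026 = 0.30

hypothetical_capex_2023 = 100.0  # arbitrary units
hypothetical_capex_2026 = 300.0  # assumes total AI capex also triples (hypothetical)

memory_spend_2023 = share_2023 * hypothetical_capex_2023  # 8.0 units
memory_spend_2026 = share_2026 * hypothetical_capex_2026  # 90.0 units

# Share growth and budget growth compound into absolute memory spend growth.
print(round(memory_spend_2026 / memory_spend_2023, 2))  # -> 11.25
```

Under those assumptions, memory spending grows more than elevenfold even though its budget share rises "only" from 8% to 30%, which is why suppliers see the shift as structural rather than cyclical.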

This matters because AI infrastructure is no longer only about the accelerators themselves. Each new generation demands more memory capacity, more bandwidth, and more complex supporting memory architectures. Coverage of Dell's comments notes that the industry is moving from platforms such as NVIDIA's H100 toward much larger memory footprints in future generations, and that growth in memory requirements is no longer limited to HBM alone. New server-level and system-level memory pools are also becoming part of the equation, especially as inference, agentic systems, and larger-scale deployments expand.

In practical terms, the message is straightforward. If hyperscalers, cloud builders, and enterprise AI buyers believe memory availability will remain constrained for several more years, then the market becomes less about negotiating ideal pricing and more about securing enough supply to avoid falling behind. That is the core fear running through the AI buildout right now. Companies may not like current memory pricing, but the strategic risk of underbuying may look even worse. This is an inference based on Dell’s comments and the current supply chain reporting.

At the same time, the phrase that buyers will have “no option” other than paying whatever is demanded should be treated more as a market interpretation than a direct confirmed quote. What is confirmed through current reporting is that Dell expects demand pressure to remain intense through 2028, and that memory supply cannot be expanded quickly enough to match the projected pace of AI infrastructure growth. Whether that translates into unlimited pricing power for suppliers will still depend on contract structures, capacity expansion, customer leverage, and how quickly new supply from major memory makers comes online.

There is also an important nuance here for the wider market. Some investors recently hoped that the AI memory trade was cooling down after volatility around memory-related names, but the latest supply chain coverage argues the opposite. Reporting from the last week suggests that new production capacity for the most in-demand memory categories is still not expected to meaningfully ease pressure until 2027 or 2028, which supports the idea that shortages and elevated pricing may remain part of the landscape for years rather than quarters.

For Dell, this outlook is also commercially relevant. The company is deeply exposed to AI server infrastructure, and separate recent analyst commentary says Dell’s AI server business could continue growing sharply through 2027 and 2028 as enterprise and hyperscaler demand rises. That does not prove Dell’s memory forecast, but it does show why the company is watching memory availability so closely. If memory becomes the bottleneck, then the entire AI server ecosystem will feel it.

The real takeaway is not just that memory demand is going up. It is that AI infrastructure is becoming so memory-intensive that buyers may soon treat memory supply as every bit as strategically important as the accelerators themselves. If Dell's projection is even directionally correct, the next few years will be defined not just by who can get the most compute, but by who can lock in enough DRAM and advanced memory to keep those systems fed.

What do you think will become the bigger bottleneck for AI infrastructure by 2028: compute silicon, or memory supply?
Angel Morales

Founder and lead writer at Duck-IT Tech News, dedicated to delivering the latest news, reviews, and insights in technology, gaming, and AI. He combines experience in the tech and business sectors with a deep passion for technology and a talent for clear, engaging writing.
