DRAM Production Reportedly Won’t Keep Up With the Demand Ahead, Driven by the AI Industry
The global DRAM industry is facing unprecedented pressure to scale up production as demand from the artificial intelligence sector surges beyond anything seen before. Every AI accelerator requires a massive amount of high-bandwidth memory (HBM), and with industry giants like NVIDIA, Microsoft, and OpenAI pursuing custom AI silicon alongside GPUs, DRAM has become as critical a resource as advanced logic nodes themselves.
According to Chosun Biz and recent analyst notes from UBS (via Jukan), demand is expected to rise exponentially within the next several years. UBS projects that OpenAI’s upcoming ASIC will be equipped with 12-Hi HBM3E, requiring 500,000 to 600,000 wafer starts per month (WPM) between 2026 and 2029. For context, that is an enormous share of global DRAM output consumed by a single company’s product.
"While it is not exactly clear as to what DRAM geometry basis and HBM/DDR mix the 900k wpm number is based on, and hence output in DRAM bit terms, assuming a 1b nm equivalent and node migration alone yielding 20-30% density gain, this could imply the DRAM industry needing to add… https://t.co/4NsnfCX9yJ
— Jukan (@Jukanlosreve) October 2, 2025
Industry forecasts estimate that global DRAM production will reach around 1.955 million WPM by 2026, but even that level is unlikely to meet demand. Inventory is already at record lows, with TrendForce reporting global DRAM supplier inventory at just 3.3 weeks, the lowest in seven years. Historically, inventory levels hover around ten weeks, highlighting just how strained supply has become.
The imbalance is being compounded by manufacturers shifting production focus. Leaders like Samsung, SK hynix, and Micron are converting existing DRAM lines to prioritize HBM production, as HBM has become the memory of choice for advanced AI accelerators. Leading-edge process nodes currently sit at 1c, and scaling to next-generation HBM technologies will only push production requirements further.
Demand is not tied only to GPUs and custom AI ASICs. Data centers are consuming vast amounts of DRAM to handle workloads, and projects like OpenAI’s Stargate may reshape the market entirely. Estimates suggest Stargate alone could consume 900,000 WPM, roughly 40% of global DRAM supply at current levels, per reporting on the Samsung and OpenAI partnership. This level of consumption is unprecedented, raising questions about how the industry can respond.
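As a rough sanity check on the figures cited above, the back-of-envelope math can be laid out explicitly. Note that the numbers in circulation don’t fully agree: 900,000 WPM is about 46% of the 1.955 million WPM forecast for 2026, while the reported 40% share implies a current baseline closer to 2.25 million WPM, so treat both as rough estimates rather than precise measurements.

```python
# Back-of-envelope DRAM supply math using the figures cited in this article.
# All values are wafer starts per month (WPM); the "implied" baseline is
# derived from the reported 40% share, not directly reported anywhere.

stargate_wpm = 900_000          # estimated Stargate DRAM consumption
forecast_2026_wpm = 1_955_000   # projected global DRAM output by 2026
reported_share = 0.40           # "roughly 40% of global supply at current levels"

# Share of the 2026 output forecast that Stargate alone would absorb
share_of_2026 = stargate_wpm / forecast_2026_wpm  # ~0.46

# Global supply baseline implied by the reported 40% figure
implied_current_wpm = stargate_wpm / reported_share  # 2,250,000 WPM

print(f"Stargate share of 2026 forecast: {share_of_2026:.1%}")
print(f"Baseline implied by the 40% figure: {implied_current_wpm:,.0f} WPM")
```

Either way the slice is enormous: a single project consuming on the order of 40-46% of worldwide DRAM wafer starts leaves little headroom for the rest of the market.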
Geopolitics also plays a role, as the majority of DRAM production capacity remains concentrated in South Korea. While Micron and SK hynix are investing heavily in new facilities in the United States and elsewhere, it remains uncertain whether such fabs can be established quickly enough to meet demand within this decade.
The next frontier, HBM4, will further intensify the need for scaling. With AI’s appetite for memory growing at an exponential pace, industry watchers expect DRAM to remain one of the most critical bottlenecks in global technology supply chains. For now, the only way forward appears to be rapid and aggressive investment in new production capacity, as big tech’s demand for HBM shows no signs of slowing.
What do you think? Can manufacturers realistically scale DRAM fast enough, or is the AI industry heading toward a severe memory shortage?