Micron Begins Sampling Next-Gen HBM4 Memory: 36GB Capacity and Over 2 TB/s Bandwidth, Targeting AI Acceleration
Micron Technology has officially begun shipping its next-generation HBM4 (High Bandwidth Memory) to key partners, a major step in supporting the rapid evolution of AI platforms. The newly announced 36GB 12-high HBM4 memory stacks are now sampling with select customers, showcasing performance breakthroughs intended to power the next era of AI computation, large language model inference, and high-throughput neural networks.
A Leap in Memory Innovation
Built on Micron’s 1β (1-beta) DRAM process technology and advanced 12-high packaging, the HBM4 modules offer:
36GB capacity per stack
Over 2.0 TB/s of bandwidth per stack via a 2048-bit interface
More than 60% performance uplift over HBM3E
20% better power efficiency
These significant performance and power gains set a new bar for AI accelerators. The wide 2048-bit bus and enhanced MBIST (memory built-in self-test) capabilities are designed to ease integration into data centers and next-gen AI systems, which is critical as AI inference demands escalate and foundation models grow in size and complexity.
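For readers who want to sanity-check the headline numbers, the short Python sketch below works backward from the quoted figures. The 2.0 TB/s per-stack bandwidth, 2048-bit interface, 36GB capacity, and 12-high stacking come from the announcement; the derived per-pin data rate and per-die density are back-of-envelope inferences, not numbers Micron has published.

```python
# Back-of-envelope math from the quoted HBM4 figures. The four inputs are from
# the announcement; the derived per-pin rate and per-die density are inferences,
# not numbers Micron has published.

BUS_WIDTH_BITS = 2048        # interface width per HBM4 stack
STACK_BANDWIDTH_TBPS = 2.0   # quoted bandwidth per stack, in TB/s
STACK_CAPACITY_GB = 36       # quoted capacity per stack
DIES_PER_STACK = 12          # 12-high stacking

# Each transfer moves one bit per pin, so a 2048-bit bus moves 256 bytes at once.
bytes_per_transfer = BUS_WIDTH_BITS / 8
transfers_per_sec = STACK_BANDWIDTH_TBPS * 1e12 / bytes_per_transfer
per_pin_gbps = transfers_per_sec / 1e9   # one bit per pin per transfer

# Capacity each DRAM die contributes in a 12-high stack.
gbit_per_die = STACK_CAPACITY_GB / DIES_PER_STACK * 8

print(f"Implied per-pin data rate: ~{per_pin_gbps:.1f} Gb/s")  # ~7.8 Gb/s
print(f"Implied DRAM die density: {gbit_per_die:.0f} Gb")      # 24 Gb per die
```

At roughly 7.8 Gb/s per pin, HBM4’s headline bandwidth comes mostly from interface width rather than extreme pin speeds: the 2048-bit bus is double the 1024-bit interface used by HBM3E.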
Designed for the AI Era
Micron’s HBM4 is aimed squarely at the next wave of generative AI platforms, promising higher throughput and lower latency, key ingredients for faster chain-of-thought reasoning, LLM inference, and real-time decision-making in large-scale compute environments. With over 2 TB/s per stack, HBM4 significantly speeds the movement of data between memory and the accelerators it feeds.
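To make the bandwidth-to-inference link concrete, the sketch below applies a standard roofline approximation for batch-1 LLM decoding, which is typically memory-bound: generating each token requires streaming roughly all model weights from memory, so bandwidth caps token throughput. The stack count, model size, and weight precision here are illustrative assumptions, not figures from the announcement.

```python
# Roofline-style ceiling for memory-bound, batch-1 LLM decoding. At batch size 1,
# each generated token streams (approximately) every model weight from memory,
# so tokens/sec <= bandwidth / model_bytes. The hardware and model parameters
# below are illustrative assumptions, not Micron figures.

PER_STACK_TBPS = 2.0     # quoted HBM4 bandwidth per stack (TB/s)
NUM_STACKS = 8           # hypothetical accelerator with 8 HBM4 stacks
PARAMS_BILLION = 70      # hypothetical 70B-parameter model
BYTES_PER_PARAM = 1      # FP8 weights

total_bandwidth = PER_STACK_TBPS * NUM_STACKS * 1e12    # bytes/second
model_bytes = PARAMS_BILLION * 1e9 * BYTES_PER_PARAM    # weight footprint

ceiling_tokens_per_sec = total_bandwidth / model_bytes
print(f"Bandwidth-bound decode ceiling: ~{ceiling_tokens_per_sec:.0f} tokens/s")  # ~229
```

Under this simple model, token throughput scales linearly with memory bandwidth, which is why a more-than-60% bandwidth uplift over HBM3E translates so directly into inference performance claims.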
The company notes that HBM4 will enable innovation across critical industries, including:
Healthcare, by accelerating genomic sequencing and diagnostics
Finance, through high-frequency trading and fraud detection
Transportation, powering real-time decision-making in autonomous systems
Future-Ready: HBM4 Ramping in 2026
While sampling has already begun, Micron plans to ramp HBM4 into volume production in calendar year 2026, aligning with the release of next-gen AI platforms. The company is positioning HBM4 as a cornerstone of upcoming system architectures, including those incorporating NVIDIA’s Rubin R100 and other AI accelerator ecosystems.
This announcement marks another milestone in Micron’s multi-decade legacy of driving memory innovation. From pioneering DRAM processes to enabling sustainable AI compute, Micron reaffirms its position as a leader and essential enabler of intelligent technologies from cloud to edge.
How do you see Micron’s HBM4 changing the AI hardware landscape? Is this the breakthrough memory your next-gen AI infrastructure needs? Join the conversation below.