JEDEC Previews LPDDR6 With SOCAMM2, PIM, and 512 GB Density Goals for Next-Generation AI Servers
JEDEC has officially previewed the next phase of its LPDDR6 roadmap, and the message is clear: low-power memory is no longer just a mobile story. In the preview, JEDEC said the upcoming evolution of the JESD209-6 standard is being shaped not only for future mobile platforms but also for AI data centers and processing-in-memory (PIM) use cases. The roadmap highlights higher capacities, more flexible metadata handling, LPDDR6-based SOCAMM2 modules, and LPDDR6 PIM, all of which point to a much bigger role for LPDDR in server-class AI infrastructure.
One of the biggest highlights is density. JEDEC said LPDDR6 is expected to unlock capacities beyond the current LPDDR5 and LPDDR5X ceiling, with 512 GB density on the horizon. That matters because AI training and inference workloads are becoming increasingly memory-hungry, especially in systems built around large models, long context windows, and faster inference throughput. The roadmap positions LPDDR6 as a serious answer to that growing demand, particularly in environments where power efficiency and compact form factors still matter.
JEDEC is also changing the interface structure to make that happen. The organization said the updated LPDDR6 direction includes a narrower per-die interface, with new x12 and x6 sub-channel modes under a non-binary architecture that expands the channel from x16 to x24. In practical terms, this allows more dies per package and raises memory capacity per component and per channel, which is exactly the kind of design change needed if LPDDR is going to scale into AI servers rather than remain limited to traditional mobile deployments.
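The capacity logic behind narrower per-die interfaces can be sketched with simple arithmetic: for a fixed channel width, halving each die's interface width doubles how many dies can share that channel. The die widths and the 4 GB per-die density below are hypothetical illustration values, not figures from the JEDEC preview.

```python
# Illustrative arithmetic only: die densities here are hypothetical
# examples, not capacities defined by JEDEC.

def dies_per_channel(channel_width_bits: int, die_interface_bits: int) -> int:
    """How many dies can sit side by side on one memory channel."""
    return channel_width_bits // die_interface_bits

def channel_capacity_gb(channel_width_bits: int, die_interface_bits: int,
                        die_density_gb: int) -> int:
    """Capacity reachable per channel when every die slot is populated."""
    return dies_per_channel(channel_width_bits, die_interface_bits) * die_density_gb

# A 24-bit LPDDR6 channel with a hypothetical 4 GB die:
for die_width in (24, 12, 6):
    print(f"x{die_width} dies -> {dies_per_channel(24, die_width)} per channel, "
          f"{channel_capacity_gb(24, die_width, 4)} GB")
```

The pattern is why a narrower per-die interface and a wider x24 channel pull in the same direction: both raise the die count, and therefore the capacity, attached to each channel.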
Another major piece of the roadmap is SOCAMM2. JEDEC said an LPDDR6-based SOCAMM2 module standard is now in development, designed to carry forward the compact, serviceable module form factor while providing a clear upgrade path from current LPDDR5X-based SOCAMM2 modules. That detail matters because SOCAMM2 has increasingly been viewed as one of the most interesting memory formats for next-generation AI systems, especially where dense packaging, serviceability, and better power efficiency than conventional server DIMM approaches are all part of the equation.
JEDEC also previewed a flexible metadata carve-out model for LPDDR6, saying it is intended to minimize the impact on peak data throughput while giving data center customers room to balance usable capacity against metadata needs depending on their reliability requirements. That may sound like a niche technical point, but it is a strong sign that LPDDR6 is being designed around real server deployment needs rather than simply being stretched upward from a mobile baseline.
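The capacity-versus-metadata trade-off is easy to make concrete. The sketch below assumes metadata is carved out of raw capacity as a simple fraction; the fractions and the 512 GB figure are hypothetical illustration values, since JEDEC has not published the carve-out ratios.

```python
# Illustrative only: the metadata fractions below are hypothetical
# examples, not ratios defined by the LPDDR6 roadmap.

def usable_capacity_gb(raw_gb: float, metadata_fraction: float) -> float:
    """Capacity left over after carving out a fraction for in-band metadata."""
    return raw_gb * (1.0 - metadata_fraction)

# Trade-off on a hypothetical 512 GB configuration:
for frac in (0.0, 1 / 32, 1 / 16):
    print(f"{frac:.4f} carved out -> "
          f"{usable_capacity_gb(512, frac):.1f} GB usable")
```

The point of making the carve-out flexible is that a deployment with modest reliability needs keeps nearly all of the raw capacity, while one that wants richer per-block metadata gives up a known, tunable slice rather than a fixed one.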
The roadmap goes even further with processing in memory. JEDEC said it is nearing completion of an LPDDR6 PIM standard, intended to reduce data movement between memory and compute by adding processing capability directly inside the memory subsystem. That is especially relevant for edge and data center inference workloads, where bandwidth efficiency and power consumption have become critical bottlenecks. If LPDDR6 PIM matures the way JEDEC is suggesting, it could become one of the more important memory-side developments for inference acceleration over the next few years.
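The data-movement argument for PIM can be illustrated with a toy comparison: if a reduction such as a dot product runs inside the memory device, only the scalar result crosses the bus instead of both operand vectors. This is a hypothetical back-of-the-envelope model, not a description of the LPDDR6 PIM command set, and it ignores command, refresh, and ECC traffic.

```python
# Hypothetical illustration of why in-memory reduction cuts bus traffic.
BYTES_FP16 = 2  # assuming fp16 operands for this sketch

def bus_traffic_bytes(vec_len: int, in_memory_reduce: bool) -> int:
    """Bytes crossing the memory bus for a dot product of two fp16 vectors."""
    if in_memory_reduce:
        return BYTES_FP16                # only the scalar result moves out
    return 2 * vec_len * BYTES_FP16      # both operand vectors move to the host

n = 4096
print(f"conventional: {bus_traffic_bytes(n, False)} B, "
      f"in-memory reduce: {bus_traffic_bytes(n, True)} B")
```

For reduction-heavy inference kernels the traffic difference scales with vector length, which is exactly why bandwidth- and power-constrained edge and data center systems are the workloads JEDEC is pointing PIM at.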
From a broader market perspective, this roadmap fits with what has already been happening around LPDDR and SOCAMM-class memory. Industry reporting over the past several months has pointed to SOCAMM2 moving toward broader standardization and adoption in AI servers, while vendors have started talking more openly about LPDDR6 products with speeds in the 10.7 Gbps class. In other words, JEDEC's preview is not arriving in a vacuum; it is part of a wider shift in which LPDDR is becoming a serious data center memory option rather than staying confined to phones, tablets, and thin mobile devices.
For AI infrastructure, that is a meaningful change. Traditional server memory formats still dominate many workloads, but the industry is clearly searching for denser, more power-efficient alternatives that can support larger AI-scale memory footprints without pushing power and thermal costs even higher. LPDDR6 with SOCAMM2 and PIM looks like JEDEC's answer to that challenge, and while the final standards are still in development, the direction is now unmistakably clear.
What do you think will matter more for next generation AI servers: higher raw memory capacity, better power efficiency, or smarter formats like SOCAMM2 and PIM?
