SK hynix Showcases 48GB 16-Layer HBM4, LPDDR6, SOCAMM2, And 321-Layer 2Tb QLC NAND To Power Next-Gen AI Platforms At CES 2026
SK hynix announced it will open a customer exhibition booth at the Venetian Expo during CES 2026 to showcase a broad next-generation memory portfolio aimed directly at the rapidly scaling AI server and on-device AI markets. The headline product is a 16-layer HBM4 stack delivering 48GB of capacity, which the company is presenting publicly for the first time at the exhibition and positioning as the next step beyond its 36GB 12-layer HBM4 product, currently under development in alignment with customer schedules.
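As a quick, illustrative sanity check on those capacities (assuming both stacks use DRAM core dies of the same density, which is an inference rather than a figure SK hynix has confirmed here), the per-layer arithmetic points to the same die in both products:

$$\frac{48\,\mathrm{GB}}{16\ \text{layers}} = 3\,\mathrm{GB\ per\ layer}, \qquad \frac{36\,\mathrm{GB}}{12\ \text{layers}} = 3\,\mathrm{GB}\ (24\,\mathrm{Gb})\ \text{per layer}$$

Under that assumption, the jump from 36GB to 48GB comes from stacking four additional layers rather than from a denser die.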
On the HBM roadmap, SK hynix is using CES 2026 to underline continuity and volume relevance alongside forward-looking capacity upgrades. The company notes that its 36GB 12-layer HBM3E, described as the product expected to drive the market this year, will also be presented. SK hynix further emphasizes ecosystem integration by jointly exhibiting, with a customer, AI server GPU modules that have adopted HBM3E, specifically to demonstrate how HBM is deployed as a real system-level building block inside modern AI platforms rather than as a standalone component.
Beyond HBM, the CES 2026 lineup broadens the narrative into server memory form factors and platform-level efficiency. The company will showcase SOCAMM2, a low-power memory module specialized for AI servers, positioning it as part of a diversified portfolio built to respond to the fast-growing demand curve for AI infrastructure. The direction here is clear: as AI deployments scale, memory is no longer only about peak bandwidth. It is also about power envelopes, density, serviceability, and platform fit, especially in high-volume server deployments where total cost of ownership is a boardroom metric.
For on-device AI, SK hynix will present LPDDR6, framing it as optimized for AI experiences that run locally, with significantly improved data processing speed and power efficiency compared to previous generations. This is a critical signal for the broader PC and mobile ecosystem because local AI workloads are increasingly limited by memory throughput and energy efficiency, not only by raw compute. Faster, more efficient LPDDR6 directly supports the industry shift toward more capable NPUs and iGPUs executing AI pipelines without immediate cloud dependency.
On the NAND side, SK hynix will exhibit a 321-layer 2Tb QLC product aimed at ultra-high-capacity enterprise SSDs as AI data center demand accelerates. The company positions the product around best-in-industry integration and highlights improved power efficiency and performance versus prior QLC generations, which is a meaningful lever in AI data centers where power consumption, rack density, and storage throughput are tightly coupled to operational cost.
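To put that die capacity in context with a rough, illustrative calculation (the drive size is a hypothetical example, not an SK hynix figure), a 2Tb die converts to 256GB, so an ultra-high-capacity enterprise SSD is built from hundreds of such dies:

$$2\,\mathrm{Tb} = 2048\,\mathrm{Gb} = 256\,\mathrm{GB\ per\ die}, \qquad \frac{128\,\mathrm{TB\ drive}}{256\,\mathrm{GB\ per\ die}} = 512\ \text{dies (before over-provisioning)}$$

With that many dies behind a single controller, even small per-die gains in power efficiency multiply quickly at rack scale.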
SK hynix is also highlighting an emerging direction in memory architecture with cHBM, or custom HBM. The company says it prepared a large-scale mock-up in response to specific customer interest, giving visitors a visual look at its structure and at the idea of integrating into HBM some of the compute and control functions that historically sat on the GPU or ASIC. The strategic takeaway is that as competition shifts from raw training performance toward inference efficiency and cost optimization, SK hynix is signaling readiness for more tightly integrated memory-plus-compute approaches that could reshape how AI systems are designed at the module level.
Overall, SK hynix is using CES 2026 to communicate a full-stack memory play for AI, spanning HBM4 capacity scaling, HBM3E deployment readiness, AI-server-focused modules like SOCAMM2, on-device AI memory via LPDDR6, and data center storage density through 321-layer QLC NAND. For the AI industry, this is not just a speed story. It is a platform readiness story focused on capacity, efficiency, and integration, which are the levers that will define who wins the next wave of AI infrastructure design cycles.
Engagement
Which matters more for the next wave of AI platforms, in your view: higher HBM capacity, like 48GB per stack, or new memory system designs like cHBM that move compute and control closer to the memory itself?
