NVIDIA To Deploy Up to 800,000 SOCAMM Memory Modules in 2025, Accelerating AI Product Efficiency and Upgradability

NVIDIA is preparing to significantly ramp up deployment of its SOCAMM (Small Outline Compression Attached Memory Module) memory solution, with reports indicating that 600,000 to 800,000 units will be shipped this year alone to support the company's growing lineup of AI-focused devices. According to a new report from Korean publication ETNews, the GPU giant is building out a strong inventory of LPDDR-based modular memory in a push to deliver higher performance, greater flexibility, and lower power consumption across its AI product ecosystem.

First showcased during NVIDIA’s GTC 2025 event, SOCAMM stands apart from conventional memory like LPDDR5X and HBM in both form factor and functionality. Unlike the soldered LPDDR or high-bandwidth memory found in current servers and mobile AI devices, SOCAMM modules are user-replaceable and attach to the PCB with three screws. This modular design marks a significant step forward in upgradability and serviceability for AI PCs, edge devices, and future low-power server configurations.

One of the first implementations of SOCAMM is within NVIDIA’s GB300 Blackwell platform, which marks the company’s early pivot toward the new form factor. Though this year’s projected shipment volume is modest when compared to the massive volumes of HBM memory supplied to NVIDIA by partners like SK Hynix, Samsung, and Micron, the adoption of SOCAMM is expected to scale significantly in 2026, particularly with the introduction of the next-generation SOCAMM 2 modules.

Micron is currently the exclusive manufacturing partner for SOCAMM, but Samsung and SK Hynix are reportedly in negotiations with NVIDIA to join the supply chain. This expansion would be crucial for maintaining availability as AI deployment across consumer and enterprise devices accelerates.

SOCAMM modules promise strong performance: reported bandwidth of 150 to 250 GB/s, with energy efficiency expected to surpass traditional RDIMM, LPDDR5X, and LPCAMM options. The design also enables better thermal control and easier scalability, making it a versatile choice for NVIDIA’s diverse AI hardware roadmap.

As the demand for modular, efficient memory in AI computing grows, SOCAMM may quickly become a standard component not only for NVIDIA products but potentially across the broader industry, especially in devices that prioritize power consumption, compactness, and long-term upgrade potential.

How do you feel about upgradable memory becoming part of future AI PCs and servers? Could SOCAMM redefine modular computing? Join the discussion below.

Angel Morales

Founder and lead writer at Duck-IT Tech News, dedicated to delivering the latest news, reviews, and insights in the world of technology, gaming, and AI. With experience in the tech and business sectors, he combines a deep passion for technology with a talent for clear and engaging writing.
