Tachyum Introduces Open Source TDIMM Memory Standard With Major Bandwidth and Capacity Gains for AI Data Centers
Tachyum has expanded its technology roadmap following the reveal of its 2 nanometer Prodigy and Prodigy Ultimate processors by introducing a new open source memory standard named TDIMM, short for Tachyum DIMM. This next generation memory design focuses on significantly increasing bandwidth and capacity for large scale artificial intelligence and high performance computing environments.
According to Tachyum, the DDR5 based TDIMM design delivers a major uplift in performance, with a 5.5 times increase in bandwidth compared to conventional registered DIMMs. Standard DDR5 RDIMMs offer approximately 51 GB per second per module, while TDIMM modules scale this to 281 GB per second. The platform is being introduced in a variety of form factors, starting with standard TDIMMs featuring 256 GB of capacity, a tall format offering 512 GB, and an extra tall form factor planned to scale to 1 TB per module.
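For readers who want to sanity check the headline numbers, the short Python sketch below simply reproduces the arithmetic implied by the figures above; the values are Tachyum's own quoted per-module numbers, not independent measurements, and the variable names are ours.

```python
# Per-module bandwidth and capacity figures as quoted by Tachyum;
# these come from the announcement, not from independent testing.
ddr5_rdimm_gbs = 51
tdimm_gbs = 281

uplift = tdimm_gbs / ddr5_rdimm_gbs
print(f"Claimed TDIMM bandwidth uplift: {uplift:.1f}x")  # ~5.5x

# Announced TDIMM capacities per form factor (GB).
capacities_gb = {"standard": 256, "tall": 512, "extra tall": 1024}
for form_factor, capacity in capacities_gb.items():
    print(f"{form_factor} TDIMM: {capacity} GB")
```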
A technical overview slide shared by Tachyum highlights core DDR5 features integrated into the standard, including a 1.1 volt VDD, high density 18 x 8 DRAM packages, a 484 pin connector, on module SPD EEPROM, and halogen free materials.
Tachyum’s published comparison between standard DDR5 RDIMMs and its new TDIMMs outlines the following differences:
| Interface feature | Standard DDR5 RDIMM | TDIMM |
| --- | --- | --- |
| Data width | 64 bit | 128 bit |
| ECC | 16 bit | 16 bit |
| DQS | 40 | 36 |
| Command address lines | 14 | 14 |
| Control signals | 16 | 16 |
| Connector | 288 pin | 484 pin |
The doubled data width and expanded pin count make it clear that TDIMM will require a new connector layout. Although the modules share similar physical dimensions with DDR5 RDIMMs, the electrical interface is significantly wider. Tachyum states that TDIMM increases the total signal count by only 38 percent while delivering double the bandwidth. The company also claims that TDIMM modules will require 10 percent fewer DRAM ICs, resulting in approximately 10 percent lower cost per module.
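To put that signal count claim in perspective, the rough tally below sums the interface lines from the comparison table above. It lands in the same ballpark as Tachyum's 38 percent figure, which presumably also counts clock and other signals not listed in the table, so treat it as an illustration rather than the company's own arithmetic.

```python
# Interface signal tallies taken from Tachyum's published comparison table.
# The exact accounting behind the company's quoted 38 percent increase is
# not spelled out, so this is only a ballpark reconstruction.
rdimm = {"data": 64, "ecc": 16, "dqs": 40, "cmd_addr": 14, "control": 16}
tdimm = {"data": 128, "ecc": 16, "dqs": 36, "cmd_addr": 14, "control": 16}

rdimm_total = sum(rdimm.values())  # 150 listed signals
tdimm_total = sum(tdimm.values())  # 210 listed signals
growth = (tdimm_total / rdimm_total - 1) * 100

print(f"Listed signals: {rdimm_total} -> {tdimm_total} (+{growth:.0f} percent)")
print(f"Data width: {rdimm['data']} -> {tdimm['data']} bits (2x)")
```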
A 3D model of the TDIMM Type A module and a detailed block diagram show how the architecture integrates DRAM packages through a multi rank controller design with I3C signaling and the full 484 pin connector interface.
Looking forward, Tachyum expects the TDIMM standard to evolve continuously, with forecast bandwidth improvements reaching as high as 27 TB per second by 2028. That figure would far surpass the expected generational step from an estimated 6.7 TB per second with DDR5 to 13.5 TB per second with DDR6.
In an official announcement, Tachyum stated that TDIMM will be instrumental in enabling very large artificial intelligence models at significantly lower cost and reduced power requirements. The company estimates that TDIMM based infrastructure could reduce projected OpenAI-style data center expenses from 3 trillion dollars and 250,000 megawatts of power to 27 billion dollars and 540 megawatts.
Dr. Radoslav Danilak, the founder and chief executive officer of Tachyum, stated that the TDIMM standard could redefine the economics of future artificial intelligence training. According to Danilak, the technology could reduce the cost of training models on all global written knowledge from 8 trillion dollars and 276 gigawatts to 78 billion dollars and 1 gigawatt by 2028. Danilak emphasized that TDIMM could democratize large scale artificial intelligence for companies and nations worldwide.
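Taken at face value, the quoted figures imply reductions of roughly two to three orders of magnitude. The short sketch below just divides the before and after numbers from the two statements; every input is one of Tachyum's own projections, not an independent estimate.

```python
# Reduction factors implied by the cost and power figures quoted in the
# announcement and in Dr. Danilak's statement; all inputs are Tachyum's
# own projections, not independent estimates.
claims = {
    "AI data center cost (USD)": (3e12, 27e9),
    "AI data center power (MW)": (250_000, 540),
    "Training cost (USD)": (8e12, 78e9),
    "Training power (GW)": (276, 1),
}

for label, (baseline, projected) in claims.items():
    factor = baseline / projected
    print(f"{label}: roughly {factor:,.0f}x reduction claimed")
```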
Tachyum is known for presenting ambitious long term visions for data center infrastructure and artificial intelligence acceleration. As with previous announcements, the industry will be watching closely to see whether the TDIMM memory initiative advances beyond the conceptual introduction phase and moves into production ready hardware.
Do you think TDIMM could become a practical open standard for future data centers, or will adoption challenges slow its progress?
