TSMC Must Grow Over 100% in 10 Years Just to Cover NVIDIA Demand, Says Jensen Huang, Signaling the Next Scale Phase of the AI Boom

The AI infrastructure race is no longer about who has the best chip design on paper. It is about who can reliably secure wafer supply, advanced packaging, and end-to-end manufacturing throughput at a pace that matches demand that keeps compounding every quarter. That is why Jensen Huang’s latest comments out of Taiwan matter so much: they frame the next decade as a capacity-buildout era that will feel more like national infrastructure planning than normal semiconductor expansion.

Speaking to the Taiwanese outlet UDN during the latest round of supplier meetings tied to his well-known Taiwan dinner circuit, Huang said TSMC will likely need to expand production capacity by more than 100% over the next 10 years, effectively doubling, just to meet NVIDIA’s demand. He described this level of expansion as the largest-scale infrastructure investment in human history.
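For a sense of scale, doubling over a decade implies a modest-sounding but relentless annual growth rate. The "more than 100% in 10 years" figure is from the interview; the per-year breakdown below is our own illustrative arithmetic, not a TSMC forecast:

```python
# Back-of-the-envelope: what compound annual growth rate doubles
# capacity in 10 years? Illustrative arithmetic only, not a forecast.

def cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate needed to reach `multiple`x in `years`."""
    return multiple ** (1 / years) - 1

rate = cagr(2.0, 10)
print(f"Doubling in 10 years requires about {rate:.1%} growth per year")
# -> about 7.2% per year, sustained for a decade
```

Roughly 7% a year sounds tame until you remember it compounds on top of the largest leading-edge fab base in the world.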

From an operational perspective, the key detail is what Huang specifically called out as the bottleneck: NVIDIA needs a lot of wafers and CoWoS. That is not a vague statement. It points directly to the reality that leading-edge AI compute is constrained by both front-end silicon starts and back-end advanced packaging capacity, with CoWoS sitting at the center of modern high-performance AI modules.
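The dual constraint can be thought of as a simple bottleneck model: module output is capped by whichever stage, front-end wafer supply or back-end packaging, delivers fewer units. A toy sketch with invented numbers (none of these figures come from the interview):

```python
# Toy bottleneck model: shippable AI module output is limited by the
# scarcer of front-end wafer supply and back-end CoWoS packaging slots.
# All numbers below are invented purely for illustration.

def module_output(wafer_starts: int, dies_per_wafer: int,
                  cowos_slots: int, dies_per_module: int) -> int:
    """Modules shippable per month, capped by the tighter stage."""
    good_dies = wafer_starts * dies_per_wafer
    from_silicon = good_dies // dies_per_module
    return min(from_silicon, cowos_slots)

# Even ample wafer supply ships nothing extra if packaging is the limit:
print(module_output(wafer_starts=10_000, dies_per_wafer=60,
                    cowos_slots=120_000, dies_per_module=4))  # 120000
```

The point of the min() is that expanding only one stage buys nothing; wafer starts and CoWoS capacity have to scale together, which is why Huang names both.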

Huang also emphasized that TSMC has been executing well but still needs to work extremely hard this year because NVIDIA demand is so large. In the same context, he stated that NVIDIA has already entered full production on Blackwell and Vera Rubin, and he highlighted that Vera Rubin includes six different chips, each described as among the most advanced in the world. That line signals to the market that NVIDIA is not only ramping one platform, it is running overlapping ramps, and that kind of overlap tends to compress supply chain slack to near zero.

Another message embedded in the interview is competitive positioning. Huang rejected the idea that cloud providers can easily out-scale NVIDIA with custom ASIC programs, implying that matching NVIDIA would require equally rare R&D talent and comparable scale. He framed NVIDIA’s differentiation as building the full AI infrastructure, not just a single chip, and also pointed to NVIDIA’s broad partnership posture across the AI ecosystem, including work with Google and open source organizations.

For gamers and PC enthusiasts, this matters even if you never touch a data center GPU. When wafer demand and advanced packaging become dominated by AI infrastructure cycles, the entire ecosystem feels it, from allocation and timing to how OEM roadmaps get prioritized. NVIDIA’s ability to stay one step ahead has increasingly looked like a supply chain advantage, not just an architecture advantage. And Huang’s statement effectively sets expectations that the AI boom is not a short burst; it is a decade-scale buildout where capacity planning becomes the primary battlefield.

Do you think this decade will be defined more by who can manufacture and package at scale, or will architectural breakthroughs still be the main differentiator once capacity finally catches up?

Angel Morales

Founder and lead writer at Duck-IT Tech News, dedicated to delivering the latest news, reviews, and insights in technology, gaming, and AI. With experience in the tech and business sectors, he combines a deep passion for technology with a talent for clear and engaging writing.
