Lisa Su Says Agentic AI Is Adding to GPU Demand, Not Replacing It, as AMD Sees Server CPU Growth Accelerate

AMD CEO Lisa Su is pushing back on one of the more important questions now shaping the AI infrastructure market: if agentic AI gives CPUs a much bigger role, does that come at the expense of GPUs and accelerators? Based on AMD’s latest earnings call, her answer is no, at least not yet. Su said the rise in CPU demand tied to agentic AI is “largely additive to the TAM,” meaning AMD sees the new CPU wave as expanding the overall AI infrastructure opportunity rather than cannibalizing accelerator demand. The comments came during AMD’s Q1 2026 earnings discussion, not a fourth-quarter call, and they followed the company’s decision to sharply raise its server CPU market outlook.

The context for that argument is AMD’s strong first quarter. The company reported 10.3 billion dollars in revenue, up 38% year over year, with Data Center revenue reaching 5.8 billion dollars, up 57%. Su said AI was the main growth engine, with major cloud providers expanding EPYC deployments across general compute, data processing, accelerator head nodes, and emerging agentic AI workloads. In AMD’s view, that shift is making CPUs more important again, especially in large-scale AI systems where orchestration, data movement, and parallel task handling all matter alongside raw accelerator performance.

Su’s central point is that AI infrastructure is no longer just about stacking more GPUs into a rack. She said agentic AI increases the need for server CPU compute because those workloads require more orchestration and coordination, on top of the CPUs’ existing role as head nodes for GPUs and accelerators. On the earnings call, she also indicated that customers are now planning CPU and accelerator deployments together more closely than before. That is why AMD now expects the server CPU total addressable market to grow more than 35% annually and exceed 120 billion dollars by 2030, up from its previous estimate of about 18% annual growth over the next three to five years.

What makes this more interesting is how Su described the changing CPU to GPU balance. She said the industry has historically thought about CPUs mostly as host nodes in one to four or one to eight configurations relative to GPUs, but that ratio is now moving closer to one to one in some discussions. She even said it is possible to imagine scenarios where there could eventually be more CPUs than GPUs if agent based workloads become widespread enough. That does not mean GPUs are becoming less important. It means the control and orchestration layer is becoming much more compute intensive than many investors and even vendors expected just a year ago.

This is also where Su’s “largely additive” comment becomes strategically important. AMD is effectively arguing that foundational models still need accelerators, while the agents running on top of those models generate new CPU tasks that expand total compute demand. In other words, the GPU layer remains essential for training and high performance inference, but the growth of agentic AI adds another demand layer on top through CPUs rather than displacing the accelerator layer underneath. That framing aligns with broader analyst and market commentary following AMD’s earnings, which highlighted the company’s growing leverage to both CPU and AI accelerator spending.

There are two important corrections worth making to the narrative circulating around these remarks. First, the new 120 billion dollar server CPU outlook is for 2030, not 2020. Second, the comments were tied to AMD’s current Q1 2026 earnings cycle and recent investor discussion, not an older quarter. Those details matter because AMD’s updated forecast is being treated as a reflection of current enterprise planning around inference, cloud expansion, and agentic AI deployment rather than a recycled talking point from an earlier earnings season.

From a competitive standpoint, this gives AMD a strong narrative advantage. NVIDIA still dominates the accelerator conversation, but AMD can now argue that the next phase of AI infrastructure will reward vendors that own both the CPU and GPU side of the stack. That message also helps explain why EPYC is being discussed so aggressively alongside Instinct accelerators and why AMD is leaning harder into server roadmap messaging around Venice, Verano, and rack scale AI deployments. This last point is an inference based on AMD’s earnings remarks and broader market reporting after the quarter.

For the industry, Lisa Su’s comments underscore a bigger shift. The AI buildout is no longer a simple race for the biggest GPU cluster. It is becoming a more complex infrastructure battle where CPUs, accelerators, memory, networking, and orchestration all matter more together. AMD’s argument is that agentic AI makes that combined stack bigger, not smaller. If that proves true over the next 18 months, then fears of CPU growth eating into GPU demand may give way to a different reality: AI infrastructure spending could broaden faster than expected across both layers at the same time.

Do you think Lisa Su is right that agentic AI will expand CPU and GPU demand together, or will the market eventually have to choose which layer captures most of the value?

Angel Morales

Founder and lead writer at Duck-IT Tech News, dedicated to delivering the latest news, reviews, and insights in the world of technology, gaming, and AI. With experience in the tech and business sectors, he combines a deep passion for technology with a talent for clear and engaging writing.
