OpenAI Targets 30GW of AI Compute by 2030, Setting a Far Bigger Infrastructure Ambition Than Its Rivals

OpenAI has laid out one of the biggest AI infrastructure goals seen so far, saying it now plans to reach 30GW of compute capacity by 2030. The update came through OpenAI's official X account, where the company said that, after committing to 10GW of compute in January 2025 and already identifying more than 8GW of that target, it is now planning for a much larger 30GW figure by the end of the decade.

That target is massive when compared with OpenAI’s own recent scale. In OpenAI’s January 2026 business update, the company said its compute footprint grew from 0.2GW in 2023 to 0.6GW in 2024 and then to around 1.9GW in 2025. That means the new 2030 goal represents nearly a 16x increase from its 2025 level, showing just how aggressively the company expects AI demand, deployment, and infrastructure intensity to grow in the years ahead.
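As a quick sanity check on those multiples, the sketch below does the arithmetic using only the figures quoted above. The capacity values come straight from OpenAI's stated numbers; everything else is illustrative.

```python
# Back-of-the-envelope check on OpenAI's stated compute growth.
# Capacity figures (GW) are those quoted in OpenAI's January 2026
# business update plus the new 2030 target; nothing here is official
# beyond those numbers.
capacity_gw = {2023: 0.2, 2024: 0.6, 2025: 1.9, 2030: 30.0}

years = sorted(capacity_gw)
for prev, curr in zip(years, years[1:]):
    multiple = capacity_gw[curr] / capacity_gw[prev]
    print(f"{prev} -> {curr}: {multiple:.1f}x")

# 2023 -> 2024: 3.0x
# 2024 -> 2025: 3.2x
# 2025 -> 2030: 15.8x  (the "nearly 16x" figure above)
```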

The larger context here is just as important. OpenAI has already been scaling Stargate at an aggressive pace with Oracle and SoftBank. In September 2025, the company said Stargate had reached nearly 7GW of planned capacity and over 400 billion dollars in investment over the next three years, putting it on a clear path toward the full 500 billion dollar, 10GW commitment announced in January 2025. The new 30GW ambition is therefore not a fresh standalone idea; it is the next step in a compute expansion plan that was already among the largest in the industry.
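Taking the Stargate figures at face value, dollars per gigawatt is a useful rough yardstick for what 30GW might cost. The sketch below derives that ratio from the numbers in this article alone; it is an illustrative back-of-the-envelope calculation, not an actual cost model, and it assumes the stated investment maps directly onto the stated capacity.

```python
# Rough implied capital intensity from the Stargate figures cited above.
# Illustrative ratios derived from publicly stated totals only,
# not an actual cost breakdown.
stargate_sept_2025 = {"capacity_gw": 7.0, "investment_usd_b": 400.0}
stargate_full_plan = {"capacity_gw": 10.0, "investment_usd_b": 500.0}

for label, plan in (("Sept 2025 status", stargate_sept_2025),
                    ("Full commitment", stargate_full_plan)):
    usd_per_gw = plan["investment_usd_b"] / plan["capacity_gw"]
    print(f"{label}: ~${usd_per_gw:.0f}B per GW")

# Sept 2025 status: ~$57B per GW
# Full commitment: ~$50B per GW
```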

It also puts OpenAI ahead of the publicly stated targets of its major competitors. Anthropic and Amazon recently announced an expanded partnership under which Anthropic secured up to 5GW of capacity for training and deploying Claude, with nearly 1GW of Trainium2 and Trainium3 capacity expected online by the end of 2026. That is still enormous by ordinary datacenter standards, but it remains well below OpenAI's newly stated 30GW objective.

The business logic behind this is straightforward. OpenAI has repeatedly argued that compute is the scarcest resource in AI and that more available compute translates directly into stronger models, wider product adoption, and faster monetization. In its January business post, the company said revenue and compute had scaled on almost the same curve, with annualized revenue rising from 2 billion dollars in 2023 to 6 billion dollars in 2024 and to more than 20 billion dollars in 2025. From OpenAI's perspective, building more infrastructure is not just about staying competitive in model training. It is also the core economic engine for the next phase of the company.
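That "same curve" claim is easy to verify from the figures in the post itself: dividing annualized revenue by compute capacity for each year gives a roughly constant ratio. The sketch below does that arithmetic; the inputs are the article's numbers, and the revenue-per-GW ratio is an illustrative derived metric, not one OpenAI reports.

```python
# Revenue-per-gigawatt check using the figures quoted in this article.
# A roughly flat ratio is what "revenue and compute scaled on almost
# the same curve" implies; this is illustrative, not OpenAI's metric.
revenue_usd_b = {2023: 2.0, 2024: 6.0, 2025: 20.0}   # annualized revenue
capacity_gw  = {2023: 0.2, 2024: 0.6, 2025: 1.9}     # compute footprint

for year in sorted(revenue_usd_b):
    ratio = revenue_usd_b[year] / capacity_gw[year]
    print(f"{year}: ~${ratio:.0f}B of annualized revenue per GW")

# 2023: ~$10B per GW
# 2024: ~$10B per GW
# 2025: ~$11B per GW  (20 / 1.9 ≈ 10.5)
```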

This scale-up will also ripple across the hardware supply chain. A move toward 30GW of AI compute implies far more accelerators, advanced packaging, networking gear, power equipment, cooling systems, and memory. That last point matters especially because high-bandwidth memory (HBM) remains one of the most constrained resources in the AI market. The more frontier model builders push toward huge compute clusters, the more pressure they place on semiconductor manufacturing, packaging capacity, and power infrastructure at the same time. OpenAI's roadmap is therefore not just a company growth story. It is also a signal that the next AI buildout wave will be even more demanding on the broader tech industry than the current one.

There is also a national infrastructure angle. In its 2025 policy filing, OpenAI argued that power availability is becoming a strategic bottleneck for AI leadership and said the United States should target 100GW of new energy capacity per year. The company tied that directly to AI competitiveness, manufacturing, and national industrial policy. When OpenAI now talks about 30GW of compute by 2030, it is not talking only about servers. It is talking about a buildout that will require major advances in energy generation, grid capacity, construction, and skilled labor as well.

For the AI industry, the headline is clear. OpenAI is no longer thinking in terms of incremental compute growth. It is planning around infrastructure at a utility scale. If the company executes on that roadmap, the 2030 AI race may be shaped as much by power, fabs, memory, and datacenter construction as by the models themselves.

Do you think AI’s next big bottleneck will be chips, power, or memory supply?

Angel Morales

Founder and lead writer at Duck-IT Tech News, dedicated to delivering the latest news, reviews, and insights in the world of technology, gaming, and AI. With experience in the tech and business sectors, he combines a deep passion for technology with a talent for clear and engaging writing.
