Dell Expands Its AI Factory Portfolio With NVIDIA’s Blackwell Ultra and RTX Pro 6000 GPUs
At Computex 2025, Dell Technologies made a significant stride in enterprise AI infrastructure by announcing a comprehensive lineup of AI Factory solutions featuring NVIDIA’s next-generation Blackwell Ultra GPUs and RTX Pro 6000 Server Edition GPUs. The collaboration marks a major advancement in scalable AI deployment, giving customers cutting-edge tools to handle the rapidly evolving demands of large language models (LLMs), robotics, and multimodal AI systems.
PowerEdge Servers Meet Blackwell Ultra: From 192 to 256 GPUs in a Rack
Following NVIDIA's keynote, Dell unveiled a refreshed range of PowerEdge servers built on the NVIDIA HGX B300 platform, part of the Blackwell Ultra family. Two key server configurations were introduced:
Air-cooled Dell PowerEdge XE9780 and XE9785: These servers support up to 192 NVIDIA Blackwell Ultra GPUs, offering seamless integration into traditional enterprise data centers.
Liquid-cooled PowerEdge XE9780L and XE9785L: Designed for high-density deployments, these models scale up to 256 Blackwell Ultra GPUs per Dell IR7000 rack using direct-to-chip liquid cooling for maximum thermal efficiency.
These new PowerEdge platforms succeed Dell’s fastest-ramping product to date, the XE9680, and promise up to four times faster LLM training performance, driven by NVIDIA’s 8-way HGX B300 architecture.
PowerEdge XE9712 and XE7745: Scaling Inference and Multimodal AI
In addition to training-focused racks, Dell introduced the PowerEdge XE9712, which utilizes NVIDIA GB300 NVL72 technology to deliver:
50x more AI reasoning inference output
5x throughput improvements
Enhanced power efficiency with Dell PowerCool, tailored for dense AI workloads
Dell is also the first OEM to announce support for the NVIDIA RTX Pro 6000 Blackwell GPUs, with availability planned for July 2025 in the PowerEdge XE7745 platform. This 4U server will support up to 8 RTX Pro GPUs, targeting high-demand enterprise workloads such as robotics, digital twins, and multimodal AI.
The platform is validated under NVIDIA’s Enterprise AI Factory framework, making it a powerful universal system for physical and agentic AI use cases.
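For teams sizing workloads for a multi-GPU chassis like this, a quick check of what the software stack actually sees is a useful first step. The snippet below is a minimal, generic PyTorch sketch, not specific to the XE7745, RTX Pro 6000, or any Dell tooling; it simply enumerates the CUDA devices visible to a process and reports their names and memory.

```python
# Minimal, generic sketch: enumerate the CUDA GPUs visible to this process.
# Assumes a machine with NVIDIA drivers and a CUDA-enabled PyTorch install;
# nothing here is specific to Dell PowerEdge or RTX Pro 6000 hardware.
import torch

def report_visible_gpus() -> None:
    if not torch.cuda.is_available():
        print("No CUDA-capable GPUs visible to this process.")
        return
    count = torch.cuda.device_count()
    print(f"Visible GPUs: {count}")
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        # total_memory is reported in bytes; convert to GiB for readability.
        print(f"  cuda:{i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

if __name__ == "__main__":
    report_visible_gpus()
```

On a fully populated 8-GPU configuration, the loop would list eight devices; multi-GPU frameworks such as PyTorch's DistributedDataParallel can then shard inference or training work across them.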
Preparing for the Future: Vera and Rubin Platform Integration
Looking ahead, Dell confirmed its intention to adopt NVIDIA’s upcoming Vera CPUs and Rubin GPUs within new PowerEdge designs. These future-ready servers will be optimized for Dell Integrated Rack Scalable Systems, allowing end-to-end AI lifecycle management for enterprises — from training to inference, regardless of scale.
Dell CEO Michael Dell reinforced this ambition:
“We’re on a mission to bring AI to millions of customers worldwide. Our job is to make AI more accessible. With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle across use cases, from training to deployment, at any scale.”
Industry Leadership Through Scalability and Accessibility
The launch positions Dell among the first vendors to ship NVIDIA’s newest GPUs and gives it one of the most extensive NVIDIA-centered AI infrastructure portfolios in the industry. From liquid-cooled hyperscale training systems to versatile AI-ready servers for enterprise deployment, Dell’s offerings cater to organizations aiming to deploy next-generation AI models in real time, across edge and cloud environments.
As the AI race accelerates, Dell’s collaboration with NVIDIA not only strengthens its “AI Factory” concept but also positions the company as a key enabler in democratizing access to world-class AI solutions globally.
What are your thoughts on Dell’s massive AI infrastructure leap with NVIDIA’s Blackwell GPUs? Could this reshape enterprise AI deployment strategies? Let us know in the comments.