Intel Project Battlematrix With Arc Pro B60 GPUs Delivers Strong MLPerf v5.1 Results, Outshines NVIDIA in Perf/$
Intel has released new results from the latest MLPerf Inference v5.1 benchmarks, published by MLCommons, showing strong performance from its Project Battlematrix workstation powered by Arc Pro B60 GPUs and Xeon 6 CPUs. According to Intel, the all-Intel platform achieved up to 4x higher performance per dollar versus NVIDIA’s L40S and a 25% performance-per-dollar uplift over the RTX Pro 6000 in workloads such as Llama 8B, underscoring the value proposition of the company’s GPU systems for modern AI inference.
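Performance-per-dollar comparisons like these simply normalize measured inference throughput by system cost. A minimal sketch of the arithmetic, using hypothetical placeholder throughputs and prices (not Intel's or NVIDIA's published figures):

```python
# Illustrative perf-per-dollar comparison. All throughput and price values
# below are hypothetical placeholders, NOT published MLPerf or vendor data.

def perf_per_dollar(tokens_per_second: float, system_price_usd: float) -> float:
    """Normalize raw inference throughput by system cost."""
    return tokens_per_second / system_price_usd

# Hypothetical systems: (tokens/s on a Llama-8B-style workload, price in USD)
systems = {
    "Arc Pro B60 workstation": (1000.0, 5000.0),
    "L40S workstation": (1500.0, 30000.0),
}

scores = {name: perf_per_dollar(tps, price) for name, (tps, price) in systems.items()}
ratio = scores["Arc Pro B60 workstation"] / scores["L40S workstation"]
print(f"perf/$ ratio (B60 vs L40S): {ratio:.1f}x")  # 4.0x with these placeholder inputs
```

The point of the metric is that a system with lower absolute throughput can still win decisively once cost is factored in, which is the framing Intel uses for its L40S comparison.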
Project Battlematrix, which Intel detailed in its technical introduction, is designed as a full-stack AI inference solution for high-end workstations and edge deployments. It combines validated hardware and software in a Linux containerized environment optimized for multi-GPU scaling and PCIe peer-to-peer transfers, while also including enterprise-grade reliability features such as ECC memory, SR-IOV virtualization, telemetry, and remote firmware updates. Intel’s goal is to make AI adoption easier while ensuring strong performance and accessibility at a competitive cost, providing an alternative to NVIDIA’s CUDA-driven ecosystem.
The benchmarks also highlighted the importance of CPUs as the orchestration hub of AI systems, handling preprocessing, data transfer, and coordination. Intel pointed out that it remains the only vendor to consistently submit server CPU results to MLPerf, a reflection of its ongoing leadership in the space. In fact, Xeon 6 with P-cores delivered a 1.9x performance improvement over the previous generation in MLPerf v5.1, confirming the CPU’s continued role in enabling GPU-powered inference systems.
By demonstrating clear efficiency gains against NVIDIA’s professional lineup, Intel has positioned its Arc Pro B60 GPUs and Xeon 6 CPUs as a compelling solution for enterprises looking to balance performance and cost. With MLPerf providing an industry-standard benchmark, these results highlight that Project Battlematrix is not just about raw power, but about delivering scalable, enterprise-ready AI performance at competitive economics.
Do you think Intel’s momentum with Project Battlematrix is enough to challenge NVIDIA’s dominance in AI inference, or will ecosystem maturity still decide the race?