CoreWeave Delivers Leading Inference Performance in MLPerf® Benchmark
**MWN-AI Summary**
CoreWeave, Inc. (Nasdaq: CRWV), dubbed The Essential Cloud for AI™, recently made headlines by achieving significant results in the latest MLPerf® Inference v6.0 benchmark suite, particularly in the Datacenter Closed division. Utilizing NVIDIA's cutting-edge AI infrastructure, including the GB200 NVL72 and GB300 NVL72, CoreWeave has demonstrated its capability to transform raw compute power into industry-leading inference performance.
With the growing demand for AI-driven applications, inference performance has become a critical factor for enterprises shifting from AI experimentation to real-world applications. CoreWeave's recent benchmark results underscore its commitment to full-stack optimization and its ability to deliver exceptional inference performance across complex reasoning models. As CoreWeave's CTO, Peter Salanki, stated, "Inference is the defining layer in AI... Benchmarks like MLPerf help measure how theoretical performance translates into real-world output."
CoreWeave’s MLPerf v6.0 results have proven its prowess in two of the most demanding reasoning models, DeepSeek-R1 and GPT-OSS-120B. Notably, the GB200 NVL72 configuration achieved high performance in both server and offline modes, leading in tokens per second per GPU for the DeepSeek-R1 model. The GB300 NVL72 further showcased impressive throughput and efficiency, effectively doubling CoreWeave's prior performance in the MLPerf 5.1 results.
As AI workloads transition from experimental phases to mission-critical applications, CoreWeave reinforces its market standing as the go-to cloud provider for AI innovation. With its recent benchmark outcomes, the company has solidified its reputation as a leader in delivering high-performance, reliable AI cloud infrastructure designed for the demands of production environments. CoreWeave's ongoing advancements are crucial for enterprises aiming to harness AI at scale efficiently.
**MWN-AI Analysis**
CoreWeave, Inc. (Nasdaq: CRWV) has recently made significant strides in the AI cloud market by achieving landmark results in the MLPerf® Inference v6.0 benchmark suite. The company showcased its capabilities using NVIDIA’s latest AI infrastructure, notably the GB200 NVL72 and GB300 NVL72 architectures, achieving industry-leading performance metrics. This reflects not only CoreWeave's commitment to pushing the envelope in machine learning (ML) inference but also highlights the growing demand for these capabilities as businesses transition AI applications from experimental phases to critical production environments.
The impressive performance metrics recorded—doubling the outcomes from prior benchmark versions—suggest that CoreWeave is not only on the cutting edge of technology but also strategically positioned to capture market share in a sector where demand significantly outstrips supply. As enterprises increasingly seek optimized solutions for AI deployment, CoreWeave's innovations in inference can address the gap between theoretical performance and practical application, a constraint that has hindered many competitors.
Investors should consider these developments when assessing the potential of CoreWeave shares. The company's ability to optimize performance in real-world scenarios makes it an attractive choice for enterprises looking to scale AI applications. Furthermore, its leadership in MLPerf benchmarks positions it favorably against competitors, enhancing its appeal to business clients and investors alike.
Given the strong alignment between CoreWeave’s capabilities and the evolving landscape of AI workloads, a bullish outlook is warranted. For those considering a position in CRWV, the recent achievements indicate significant growth potential not just for CoreWeave, but for the broader AI infrastructure market as it continues to mature. Long-term investors may see substantial returns as demand for reliable, high-performance inference solutions intensifies in the coming years.
**MWN-AI Summary and Analysis is based on asking OpenAI to summarize and analyze this news release.
Latest submissions featuring NVIDIA Grace Blackwell architectures demonstrate how CoreWeave’s purpose-built AI infrastructure translates raw compute into industry-leading inference performance
CoreWeave, Inc. (Nasdaq: CRWV), The Essential Cloud for AI™, today announced landmark results in the MLPerf® Inference v6.0 benchmark suite. Participating in the Datacenter Closed division, CoreWeave leveraged NVIDIA’s newest AI infrastructure, the NVIDIA GB200 NVL72 and NVIDIA GB300 NVL72.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20260401967118/en/
CoreWeave leads MLPerf v6.0, doubling performance and delivering top results.
The AI industry is undergoing a fundamental shift with inference as the new critical focus. As enterprises move AI from experimentation into production and agentic workloads become the new standard, inference has emerged as the critical measure of performance. At the same time, demand for inference is growing faster than the underlying hardware can be deployed, and the gap between theoretical system performance and real-world output has emerged as a defining constraint on how quickly AI companies can grow. CoreWeave's MLPerf v6.0 results reflect the company's continued investment in full-stack optimization, consistently turning cutting-edge hardware into real-world inference performance.
"Inference is the defining layer in AI. It's where models are actually put to work and where performance in production shows up. Benchmarks like MLPerf help measure how theoretical performance translates into real-world output," said Peter Salanki, co-founder and chief technology officer of CoreWeave. "These latest results reflect our ability to deliver exceptional performance for the most demanding frontier reasoning models at scale through full-stack optimization. That's why customers rely on CoreWeave to launch, scale, and operate AI workloads in production, where real-world value is created and where it matters most."
CoreWeave’s v6.0 submissions reflected NVIDIA’s reference configurations as a verified, production-ready baseline across two of the most demanding reasoning models available: DeepSeek-R1 and GPT-OSS-120B. Key results include:
- Continued NVIDIA GB200 NVL72 Leadership: Led performance for DeepSeek-R1 in server and offline modes in tokens per second per GPU.¹ The GB200 NVL72 configuration demonstrated standout throughput on DeepSeek-R1’s sparse Mixture-of-Experts architecture, where efficient serving requires dynamic expert routing and high-bandwidth internode communication.
- NVIDIA GB300 NVL72 Portfolio Leadership: Delivered the highest server throughput and per-GPU efficiency, measured in tokens per second per GPU, in the portfolio on DeepSeek-R1, doubling CoreWeave’s own MLPerf® v5.1 results on the same hardware footprint.²
- Innovation at Speed: Today, eight of the top 10 model providers rely on CoreWeave Cloud, enabling customers to innovate at speed.
"The gap between benchmark performance and production reality has been one of the most persistent challenges in AI,” said Nick Patience, vice president & practice lead, AI platforms at Futurum Research. “CoreWeave's MLPerf v6.0 results, particularly on DeepSeek-R1, demonstrate the company is closing that gap through disciplined, full-stack optimization, which is exactly what enterprises and AI labs need as inference workloads move from experimental to mission-critical."
CoreWeave’s MLPerf v6.0 results provide additional validation as the only AI cloud to earn top Platinum ranking in both SemiAnalysis ClusterMAX™ 1.0 and 2.0, which evaluate AI cloud performance, efficiency and reliability. These benchmark results reflect CoreWeave’s platform strategy: delivering infrastructure purpose-built for the demands of production AI, from high-performance compute through the software layer that builders depend on to develop, test, and deploy at scale.
About CoreWeave
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to move at the pace of innovation, building and scaling AI with confidence. Established in 2017, CoreWeave completed its public listing on Nasdaq (CRWV) in March 2025. Learn more at www.coreweave.com.
1 CoreWeave MLPerf 6.0-0022, server and offline modes. TPS/GPU is not an official MLPerf metric; it is used in this article to normalize submissions that use different numbers of GPUs.
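The footnote above describes normalizing aggregate throughput by GPU count so that submissions with different numbers of GPUs can be compared. A minimal sketch of that arithmetic (all figures below are made up for illustration and are not actual MLPerf results):

```python
def tokens_per_second_per_gpu(total_tps: float, num_gpus: int) -> float:
    """Normalize a submission's aggregate throughput by its GPU count."""
    return total_tps / num_gpus

# Hypothetical example: a 72-GPU rack vs. an 8-GPU node.
rack_tps_per_gpu = tokens_per_second_per_gpu(72_000.0, 72)  # 1000.0
node_tps_per_gpu = tokens_per_second_per_gpu(8_500.0, 8)    # 1062.5
```

On this normalized basis, the smaller hypothetical node edges out the larger rack per GPU, which is exactly the kind of comparison the raw aggregate numbers would obscure.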
2 Verified MLPerf score of v5.1 Inference Closed DeepSeek-R1 server. Retrieved from https://mlcommons.org/benchmarks/inference, 2 April 2025, entry 5.1-0097. The MLPerf name and logo are registered and unregistered trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.
**MWN-AI FAQ**
How does CoreWeave Inc. (CRWV) plan to maintain its leadership in AI inference performance in future iterations of the MLPerf benchmark?
What specific investments is CoreWeave Inc. (CRWV) making to enhance its full-stack optimization capabilities for AI workloads?
How does CoreWeave Inc. (CRWV) differentiate itself from competitors in the rapidly evolving AI cloud infrastructure market?
What impact do CoreWeave Inc.'s (CRWV) MLPerf v6.0 results have on customer trust and market positioning in the AI industry?
**MWN-AI FAQ is based on asking OpenAI questions about CoreWeave Inc. (NASDAQ: CRWV).
CRWV Trading (NASDAQ: CRWV)
Last: $81.98 | Open: $81.33 | G/L: -0.32% | Volume: 9,308,711



