Akamai to Deploy Thousands of NVIDIA Blackwell GPUs to Create One of the World's Most Widely Distributed AI Platforms
**MWN-AI Summary**
Akamai Technologies (NASDAQ: AKAM) has announced a significant advancement in its AI capabilities by acquiring thousands of NVIDIA® Blackwell GPUs to enhance its global distributed cloud infrastructure. This strategic deployment will establish a unified platform tailored for AI research, development, and post-training optimization, aimed at optimizing AI inference workloads across Akamai's extensive network.
As AI technology transitions from a primary focus on model training to an emphasis on inference, latency has emerged as a crucial barrier to widespread AI deployment. With 56% of organizations indicating that latency hinders implementation at scale, Akamai's architecture seeks to mitigate these challenges by treating the globe as a single low-latency environment, accommodating the real-time decision-making demands of physical and agentic AI applications.
Akamai's Chief Operating Officer, Adam Karon, emphasized the need for a decentralized approach to bring AI models to life at scale, stating that their infrastructure not only adds capacity but also supports high-performance, low-latency AI inference. The integration of NVIDIA Blackwell GPUs empowers various applications, including autonomous deliveries, surgical robotics, and critical fraud prevention, all without the limitations of traditional centralized cloud systems.
Further bolstering this initiative, Akamai previously launched the Akamai Inference Cloud in October 2025, which optimizes AI processing closer to end users and devices, significantly improving throughput and reducing AI inference costs. With its robust global edge network comprising over 4,400 locations, Akamai is positioned to lead the charge in making AI more accessible and efficient.
Overall, Akamai’s investment in NVIDIA Blackwell GPUs marks a crucial step towards realizing a globally distributed AI compute grid, paving the way for innovative and scalable AI solutions across various industries.
**MWN-AI Analysis**
Akamai Technologies (NASDAQ: AKAM) has made a significant move by acquiring thousands of NVIDIA Blackwell GPUs, positioning itself at the forefront of the AI inference market. This strategic decision to bolster its global distributed cloud infrastructure enables Akamai to create one of the world’s most widely distributed AI platforms, catering to the pressing need for rapid, low-latency inference capabilities.
As industries increasingly recognize the importance of inference, with recent reports indicating that 56% of organizations cite latency as a significant barrier to large-scale AI deployment, Akamai’s decentralized approach addresses these challenges head-on. By leveraging its extensive global network, the company is effectively turning the world into a low-latency backplane, allowing for seamless AI integration into various applications—from autonomous systems to healthcare technology.
Investors should view this development as a critical indicator of Akamai's growth potential. The company's focus on inference rather than just model training distinguishes it from larger hyperscalers, suggesting a niche and possibly less competitive space where Akamai can excel. The potential savings of up to 86% on AI inference compared to traditional hyperscaler infrastructures are compelling, signaling a substantial value proposition for enterprises seeking efficiency.
Furthermore, the introduction of Akamai Inference Cloud underscores the company's commitment to providing localized fine-tuning and privacy compliance, essential for businesses handling sensitive data. With over 4,400 global locations, Akamai is uniquely positioned to deliver robust AI solutions with operational efficiency.
As we move further into the AI-driven future, Akamai’s investments in Blackwell GPUs and its vision for a decentralized AI framework represent an attractive opportunity for stakeholders. Investors should consider Akamai’s stock as a strong buy due to its innovative AI strategy, robust demand for its new offerings, and significant cost-saving potential for clients in an increasingly competitive marketplace.
**MWN-AI Summary and Analysis is based on asking OpenAI to summarize and analyze this news release.**
CAMBRIDGE, Mass., March 03, 2026 (GLOBE NEWSWIRE) -- Akamai (NASDAQ: AKAM) announced the acquisition of thousands of NVIDIA® Blackwell GPUs to bolster its global distributed cloud infrastructure. The deployment creates a unified platform for AI R&D, fine-tuning, and post-training optimization that intelligently routes AI inference workloads to optimized compute resources across Akamai's massive global network. The architecture is designed to support rapid inference by reducing the latency and data egress issues associated with centralized data centers.
While the first wave of AI focused on model training in centralized hubs, the industry has reached a tipping point where inference matters as much as training. The MIT Technology Review recently reported that 56 percent of organizations cite latency as the primary barrier preventing AI deployment at scale. By treating the globe as a single, low-latency backplane, Akamai is bridging this gap and providing the foundational infrastructure for physical and agentic AI where decisions must happen at the speed of the real world.
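The latency-aware routing described above can be sketched as a simple selection over candidate edge regions. This is purely an illustrative assumption of how such routing might work in principle; the region names, latency figures, and function names below are hypothetical and do not reflect Akamai's actual API or network topology:

```python
# Hypothetical sketch: route an inference request to the lowest-latency
# edge region that still has GPU capacity. All names and numbers here
# are illustrative, not Akamai's real topology or measurements.

from dataclasses import dataclass


@dataclass
class EdgeRegion:
    name: str
    rtt_ms: float          # measured round-trip time from the client
    has_gpu_capacity: bool # whether inference-optimized GPUs are free


def route_inference(regions: list[EdgeRegion]) -> EdgeRegion:
    """Pick the lowest-latency region with available GPU capacity."""
    candidates = [r for r in regions if r.has_gpu_capacity]
    if not candidates:
        raise RuntimeError("no region with available GPU capacity")
    return min(candidates, key=lambda r: r.rtt_ms)


regions = [
    EdgeRegion("us-east", rtt_ms=12.0, has_gpu_capacity=True),
    EdgeRegion("eu-west", rtt_ms=85.0, has_gpu_capacity=True),
    EdgeRegion("ap-south", rtt_ms=140.0, has_gpu_capacity=False),
]
print(route_inference(regions).name)  # us-east
```

In practice a distributed platform would weigh far more signals (load, cost, data-residency constraints), but the core idea the release describes — sending the request to nearby compute rather than a distant centralized data center — reduces to this kind of latency-ranked selection.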
“While hyperscalers continue to push the boundaries of AI training, Akamai is focused on meeting the unique demands of the inference era,” said Adam Karon, Chief Operating Officer and General Manager, Cloud Technology Group, Akamai. “Centralized AI factories remain essential for building models, but bringing those models to life at scale requires a decentralized nervous system. By distributing inference-optimized compute across our global fabric, Akamai isn’t just adding capacity. We’re providing the scale, at minimal latency, that is required to move AI from the laboratory to the street corner and the hospital bed – where the work happens, where the data lives, and where the ROI is realized.”
Akamai’s adoption of Blackwell GPUs advances the company’s vision for a globally distributed AI compute grid built for the inference era. By extending AI processing beyond centralized AI factories to high-density distributed infrastructure, Akamai allows AI to interact with physical systems — from autonomous delivery and smart grids to surgical robotics and critical fraud prevention — without the geographic or cost limitations of traditional cloud architecture.
The integration of NVIDIA Blackwell AI infrastructure enables:
- Predictable, High-Performance Inference: Processing AI workloads on dedicated GPU clusters to generate rapid responses.
- Localized Fine-Tuning: Optimization of Large Language Models (LLMs) on-site to support data privacy and regional compliance needs.
- Post-Model Training: Fine-tuning and adapting foundation models on proprietary data to improve accuracy for specific tasks.
This announcement follows Akamai’s recent initiatives to expand its AI inference and generalized compute capabilities. In October 2025, the company announced Akamai Inference Cloud, redefining where and how AI is used by bringing AI inference closer to users and devices.
By providing tools for platform engineers and developers to build and run AI applications and data-intensive workloads closer to end users, Akamai delivers highly efficient throughput while reducing latency by up to 2.5x, saving businesses as much as 86% on AI inference using NVIDIA AI infrastructure compared to traditional hyperscaler infrastructure.
The platform combines NVIDIA RTX PRO™ Servers, featuring NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPUs, and NVIDIA BlueField®-3 DPUs with Akamai's distributed cloud computing infrastructure and global edge network, which has over 4,400 locations worldwide.
Akamai has seen strong demand for its initial deployment of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and will continue to add GPU capacity as part of its cloud infrastructure strategy.
About Akamai
Akamai is the cybersecurity and cloud computing company that powers and protects business online. Our market-leading security solutions, superior threat intelligence, and global operations team provide defense in depth to safeguard enterprise data and applications everywhere. Akamai’s full-stack cloud computing solutions deliver performance and affordability on the world’s most distributed platform. Global enterprises trust Akamai to provide the industry-leading reliability, scale, and expertise they need to grow their business with confidence. Learn more at akamai.com and akamai.com/blog, or follow Akamai Technologies on X and LinkedIn.
Contacts
Akamai Media Relations
akamaipr@akamai.com
**MWN-AI FAQ**
How does the acquisition of NVIDIA Blackwell GPUs by Akamai Technologies Inc. (AKAM) enhance its capabilities to address the latency issues faced by organizations deploying AI at scale?
In what specific applications does Akamai Technologies Inc. (AKAM) envision utilizing its globally distributed AI compute grid to optimize inference and improve decision-making processes?
What steps is Akamai Technologies Inc. (AKAM) taking to ensure data privacy and regional compliance during the localized fine-tuning of Large Language Models (LLMs) on its distributed infrastructure?
How does the integration of NVIDIA AI infrastructure into Akamai Technologies Inc. (AKAM) Inference Cloud help businesses save costs and improve efficiency compared to traditional hyperscaler solutions?
**MWN-AI FAQ is based on asking OpenAI questions about Akamai Technologies Inc. (NASDAQ: AKAM).**
NASDAQ: AKAM
AKAM Trading — Last: $105.53 | G/L: 3.06% | Open: $101.55 | Volume: 1,283,889