CoreWeave First Cloud Provider to Announce General Availability of NVIDIA GB200 NVL72 Instances
Wednesday, February 12, 2025
Another first-to-market milestone with some of the world's most advanced infrastructure technology to help organizations train, deploy, and scale the world's most complex AI models up to 30X faster.
LIVINGSTON, N.J., Feb. 4, 2025 /PRNewswire/ -- CoreWeave, the AI Hyperscaler™, today announced it is the first cloud provider to make NVIDIA GB200 NVL72-based instances generally available. CoreWeave's GB200 NVL72-powered cluster is built on the NVIDIA GB200 Grace Blackwell Superchip, taking performance and scalability to the next level and empowering customers to rapidly train, deploy, and scale the world's most complex AI models.
"Today's milestone further solidifies our leadership position and ability to deliver cutting-edge technology faster and more efficiently," said Brian Venturo, co-founder and Chief Strategy Officer of CoreWeave. "Today's launch is another achievement of our series of firsts, and represents a force multiplier for businesses to drive innovation while maintaining efficiency at scale. CoreWeave's portfolio of cloud services -- such as CoreWeave Kubernetes Service, Slurm on Kubernetes (SUNK), and our Observability platform--is purpose-built to make it easier for our customers to run, manage, and scale AI workloads on cutting-edge hardware. We're eager to see how companies take their AI deployments to the next level with NVIDIA GB200 NVL72-based instances on CoreWeave."
The promise of next-generation AI powered by foundational and reasoning models is enormous, but scaling cutting-edge models is often constrained by server limitations, especially memory capacity and the speed of communication between GPUs. CoreWeave's GB200 NVL72 instances combine rack-level NVLink connectivity with NVIDIA Quantum-2 InfiniBand networking, delivering 400 Gb/s of bandwidth per GPU over a rail-optimized topology that scales to clusters of up to 110,000 GPUs. NVIDIA Quantum-2's SHARP In-Network Computing technology further optimizes collective communication, resulting in ultra-low latency and accelerated training. CoreWeave's purpose-built, no-compromises approach to AI workloads, integrated with NVIDIA's architecture, enables companies to harness the full power of the superchip efficiently, in a highly performant and reliable environment. Specifically:
-- Up to 30X faster real-time large language model (LLM) inference compared to previous generations.
-- Up to 25X lower total cost of ownership and 25X less energy for real-time inference.
-- Up to 4X faster LLM training compared to previous generations.
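The training and inference gains above depend heavily on how quickly GPUs can exchange data in collective operations such as gradient all-reduce, which is exactly what the NVLink, InfiniBand, and SHARP capabilities described earlier are meant to accelerate. The sketch below is a minimal, generic illustration of that collective-communication pattern using PyTorch's NCCL backend; it is not CoreWeave- or GB200-specific code, and the tensor size, script name, and launch command are illustrative assumptions.

    # allreduce_demo.py -- generic multi-GPU all-reduce sketch (illustrative only)
    import os
    import torch
    import torch.distributed as dist

    def main():
        # The NCCL backend rides on NVLink within a rack and InfiniBand across nodes.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ.get("LOCAL_RANK", "0"))
        torch.cuda.set_device(local_rank)
        rank = dist.get_rank()

        # Each rank holds a gradient-sized tensor; all_reduce sums it across all
        # GPUs, the collective pattern that SHARP in-network reduction offloads
        # to the network fabric during large-scale training.
        grad = torch.full((1024,), float(rank), device="cuda")
        dist.all_reduce(grad, op=dist.ReduceOp.SUM)

        if rank == 0:
            print(f"world_size={dist.get_world_size()} grad[0]={grad[0].item()}")

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched across nodes with a tool such as torchrun (for example, torchrun --nnodes=2 --nproc_per_node=8 allreduce_demo.py), the all_reduce call above is the operation that in-network reduction and low-latency interconnects speed up at cluster scale.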
This latest development furthers CoreWeave's position as an industry leader in AI infrastructure. Last August, the company was among the first to offer NVIDIA H200 GPUs, delivering some of the fastest training of GPT-3 LLM workloads. In November, it was one of the first to demo NVIDIA GB200 systems in action. Earlier this month, CoreWeave announced it will deliver one of the first NVIDIA GB200 Grace Blackwell Superchip-enabled AI supercomputers to IBM for training its next generation of Granite models.
"Partnering with CoreWeave to access cutting-edge AI compute, including IBM Spectrum Scale Storage, to train our IBM Granite models demonstrates our commitment to advancing a hybrid cloud strategy for AI," said Priya Nagpurkar, VP, Hybrid Cloud and AI Platform Research at IBM. "As we continue to develop hybrid cloud and AI solutions, we are committed to delivering best-in-class innovations to our enterprise clients, from purpose-built Granite models, to advanced hybrid cloud platform and compute capabilities."
"Scaling for inference and training is one of the largest challenges for organizations developing next generation AI workloads," said Ian Buck, Vice President of Hyperscale and HPC at NVIDIA. "NVIDIA is collaborating with CoreWeave to enable fast, efficient generative and agentic AI with the NVIDIA GB200 Grace Blackwell Superchip to empower organizations of all sizes to push the boundaries of AI, reinvent their businesses and provide groundbreaking customer experiences."
Customers interested in spinning up CoreWeave GB200 NVL72 instances can contact CoreWeave for more information.
About CoreWeave
CoreWeave, the AI Hyperscaler™, delivers a cloud platform of cutting-edge software powering the next wave of AI. The company's technology provides enterprises and leading AI labs with cloud solutions for accelerated computing. Since 2017, CoreWeave has operated a growing footprint of data centers across the US and Europe. CoreWeave was named one of the TIME100 Most Influential Companies and featured on the Forbes Cloud 100 list in 2024. Learn more at www.coreweave.com.
Media contact: Brittany Stone, brittany.stone@teneo.com
View original content: https://www.prnewswire.com/news-releases/coreweave-first-cloud-provider-to-announce-general-availability-of-nvidia-gb200-nvl72-instances-302367298.html
SOURCE CoreWeave