Accelerating AI: NVIDIA B100 Rumoured Unveiling at GTC 2024

NVIDIA is gearing up to introduce a pivotal breakthrough in AI technology with the upcoming unveiling of the B100 GPU.


Chris Saganic

As the AI and machine learning sectors continue to advance rapidly, NVIDIA is gearing up to introduce a pivotal breakthrough in AI technology with the upcoming unveiling of the B100 GPU.

This latest addition, built on the innovative Blackwell architecture shared with the forthcoming RTX 50 Series gaming cards, promises to redefine the landscape of AI computation through its enhanced parallel computing capabilities.

NVIDIA GPU AI roadmap. Image source: NVIDIA

While details on the B100 have been limited, NVIDIA's projections and a comparative analysis with its predecessors indicate a significant leap forward in performance, particularly in AI compute capabilities. The new GPU is expected to be crafted using the advanced 3nm process technology from TSMC, allowing for a denser transistor layout that boosts overall performance.

Intriguingly, the B100 might also embrace a multi-chiplet design. Smaller chiplets generally yield better than one large monolithic die, which could streamline production, although joining multiple chiplets adds complexity to packaging and assembly.
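Why smaller chiplets can help yield is easier to see with a quick back-of-the-envelope calculation. The sketch below uses a simple Poisson defect model; the die areas and defect density are illustrative assumptions, not figures from NVIDIA or TSMC.

```python
import math

def poisson_yield(area_mm2: float, defect_density: float) -> float:
    """Expected fraction of defect-free dies under a simple Poisson model: Y = exp(-A * D)."""
    return math.exp(-area_mm2 * defect_density)

# Illustrative assumptions only -- not NVIDIA or TSMC figures.
DEFECT_DENSITY = 0.001    # defects per mm^2
MONOLITHIC_AREA = 800.0   # mm^2 for a single large die
CHIPLET_AREA = 400.0      # mm^2 for each of two chiplets

monolithic = poisson_yield(MONOLITHIC_AREA, DEFECT_DENSITY)
per_chiplet = poisson_yield(CHIPLET_AREA, DEFECT_DENSITY)

# With chiplets, a defect scraps only one small die; known-good chiplets can then
# be paired at packaging time, so the usable fraction of wafer area tracks the
# per-chiplet yield rather than the lower monolithic yield.
print(f"Monolithic {MONOLITHIC_AREA:.0f} mm^2 die yield: {monolithic:.1%}")   # ~44.9%
print(f"Per-chiplet {CHIPLET_AREA:.0f} mm^2 yield:       {per_chiplet:.1%}")  # ~67.0%
```

Under these toy numbers, roughly two-thirds of the smaller chiplets come out defect-free versus under half of the large monolithic dies, which is where the production advantage comes from; the trade-off is that the good chiplets must then be joined with advanced packaging, adding assembly complexity.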

Expected B100 performance chart. Image source: NVIDIA

NVIDIA has acknowledged the high anticipation surrounding its next-gen Blackwell-based B100 products, even as the company prepares for supply challenges. Colette Kress, NVIDIA's CFO, highlighted the expected supply constraints during an earnings call, underscoring that demand for these GPUs is likely to outpace supply. The anticipation is partly driven by the significant performance gains promised over the existing Hopper architecture.

The expansion of NVIDIA's GPU lineup does not stop with the B100; the company is also preparing to launch several other products based on the Blackwell architecture. These include the B40 GPU, aimed at enterprise and training applications, and the GB200, which pairs the B100 GPU with an Arm-based Grace CPU for training large language models.

Additionally, NVIDIA's GB200 NVL is set to cater specifically to training large language models, highlighting the company's commitment to supporting a wide range of AI and HPC applications.

NVIDIA is concurrently ramping up production of its H100 and H200 compute GPUs, based on the refined Hopper architecture. The H200 builds on the company's existing offerings with increased memory capacity and bandwidth, and it benefits from an established supply chain that promises a quicker arrival to market.

NVIDIA Data Center & AI GPU Roadmap

| Codename | X | Rubin | Blackwell | Hopper | Ampere | Volta | Pascal |
|----------|---|-------|-----------|--------|--------|-------|--------|
| Family | GX200 | GR100 | GB200 | GH200 / GH100 | GA100 | GV100 | GP100 |
| SKU | X100 | R100 | B100 / B200 | H100 / H200 | A100 | V100 | P100 |
| Memory | HBM4e | HBM4? | HBM3e | HBM2e / HBM3 / HBM3e | HBM2e | HBM2 | HBM2 |
| Launch | 202X | 2025 | 2024 | 2022-2024 | 2020-2022 | 2018 | 2016 |

If you are interested in gaining early access to the B100 GPU, you can register your interest to be notified as soon as it becomes available for rental.

To explore these advancements firsthand and learn more about the integration of cutting-edge GPU technology in AI and cloud platforms, visit us at NVIDIA GTC 2024, booth 1621. We’ll be showcasing our GPU cloud platform, designed to leverage the capabilities of NVIDIA's latest GPUs to empower AI and machine learning applications.

Rent cloud GPUs

Learn more about CUDO Compute: Website, LinkedIn, Twitter, YouTube, Get in touch.
