The beast of GPUs, B200 announced by NVIDIA
Mar 26, 2024
Santosh Agrawal


NVIDIA has just announced the Blackwell platform B200 GPU, a beast of a GPU that promises unprecedented performance and efficiency for LLM inference and AI training. Named in honour of David Harold Blackwell, a mathematician who specialized in game theory and statistics and the first Black scholar inducted into the National Academy of Sciences, the new architecture succeeds the NVIDIA Hopper architecture, launched two years ago.

NVIDIA has not yet released the full technical specifications of the Blackwell platform, but going by the publicly available information, with 208 billion transistors and a 1,000W TDP, the B200 is a beast of a graphics card. The latest iteration of NVIDIA NVLink® delivers a groundbreaking 1.8 TB/s of bidirectional throughput per GPU, ensuring seamless high-speed communication among up to 576 GPUs for the most complex LLMs.

B200-based supercomputers are expected to perform 15X faster than existing H100-powered machines in real-time LLM inference and 3X faster in AI training. The B200 will carry 180 GB of GPU memory per GPU, compared to 80 GB in the H100. An 8 x B200-powered system is claimed to deliver 72 petaFLOPS in training and 144 petaFLOPS in inference, compared to 32 petaFLOPS at FP8 in a similar 8 x H100-powered system.
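As a rough back-of-the-envelope check, the figures quoted above translate into the following ratios. This is a minimal sketch using only the numbers in this article; note that the larger end-to-end gains NVIDIA quotes (such as 15X in real-time inference) come from more than raw peak FLOPS alone.

```python
# Sanity-check the headline ratios using only the figures quoted above:
# 8x B200 vs. 8x H100 peak petaFLOPS, and per-GPU memory in GB.

def ratio(new: float, old: float) -> float:
    """Return how many times larger `new` is than `old`."""
    return new / old

# 8x B200 inference throughput vs. 8x H100 at FP8 (petaFLOPS)
inference_gain = ratio(144, 32)   # 4.5x raw peak-FLOPS gain

# Per-GPU memory growth: 180 GB (B200) vs. 80 GB (H100)
memory_gain = ratio(180, 80)      # 2.25x more GPU memory per GPU

print(f"Peak inference FLOPS gain: {inference_gain:.2f}x")
print(f"Per-GPU memory gain:       {memory_gain:.2f}x")
```

The gap between the 4.5x raw-FLOPS ratio and the 15X real-time inference claim reflects factors beyond peak compute, such as the much larger GPU memory and the faster NVLink interconnect.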

The power consumption of an 8 x B200-powered supercomputer will also be at least 40% higher than that of H100-powered machines, which means we need to start designing racks with greater power and cooling density to run these power guzzlers and heat generators.

Advanced confidential computing capabilities, a dedicated RAS engine for reliability, availability and serviceability, and a dedicated decompression engine are some of the brand-new features of the Blackwell platform.

NVIDIA also announced the GB200 Grace Blackwell Superchip, which connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900 GB/s ultra-low-power NVLink chip-to-chip interconnect. GB200-powered systems can be connected with the NVIDIA Quantum-X800 InfiniBand and Spectrum™-X800 Ethernet platforms, which deliver advanced networking at speeds of up to 800 Gb/s.

NVIDIA has already published the specifications of B200-powered DGX systems as well as the GB200 NVL72, a multi-node, liquid-cooled, rack-scale system comprising 72 Blackwell GPUs and 36 Grace CPUs. The GB200 NVL72 provides up to a 30x performance increase over the same number of NVIDIA H100 Tensor Core GPUs for LLM inference workloads.

At Esconet Technologies, we're eagerly awaiting the arrival of these game-changing GPUs, which are expected to be priced at around ₹40 lakh each. As a proud NVIDIA partner, we're committed to leveraging the latest technology to meet the evolving needs of our customers. Stay tuned as we dive headfirst into the realm of Blackwell-powered systems, ready to unlock unparalleled performance and usher in a new era of computing.

Product names, brands and trademarks used in this article are a property of their respective owners.

Get In Touch
Esconet Technologies Limited
  • D-147, Okhla Industrial Area, Phase - 1, New Delhi 110020 India
  • +91-11-42288700

*Formerly known as: Esconet Technologies Private Limited

© Copyright 2023 Esconet Technologies Ltd.