NVIDIA Hopper GH100 GPU For Next-Gen Data Centers Rumored To Feature Over 140 Billion Transistors In A Monster 5nm Package
A few weeks ago, a rumor claimed that NVIDIA’s flagship Hopper GH100 GPU would be built on a 5nm process node with a die size measuring close to 900mm², which would make it the largest GPU ever produced, not just on 5nm but on any process node. Now a new rumor has popped up over at the Chiphell Forums alleging that the GPU could feature over 140 billion transistors.

Just how many are 140 billion transistors? For comparison, the current flagship data center chips, AMD’s Aldebaran for the Instinct MI200 series and NVIDIA’s Ampere GA100 for the A100 accelerators, feature 58.2 and 54.2 billion transistors, respectively. That’s almost a 2.5x jump in overall transistor count for the Hopper GH100 GPU if the rumor holds true. In terms of density, the NVIDIA Ampere GA100 works out to 65.6 million transistors per mm², while the Aldebaran GPU (based on its speculated die size of 790mm²) should land around 73.6 million transistors per mm². Assuming the GH100 measures around 900mm², its density would easily cross 150 million transistors per mm², more than double the GA100’s density, courtesy of the 5nm process node.

Once again, these are all rumored figures and apply only to the monolithic GH100 Hopper GPU. The MCM GPU is, per the rumors, an entirely separate product that will come as the GH102 GPU, and we don’t know its exact specifications beyond what research papers and rumors have told us. All in all, the NVIDIA Hopper GPU, in both its monolithic and MCM forms, should offer a serious increase in transistor count along with advanced 5nm packaging solutions.
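For readers who want to sanity-check those density figures, here is a quick back-of-envelope sketch in Python. The only number below not stated above is the GA100’s 826mm² die area; everything else is the rumored or speculated value quoted in this article, so treat the output as speculation rather than confirmed specs.

```python
# Back-of-envelope transistor density check.
# All Hopper/Aldebaran figures are rumored or speculated, not confirmed.

def density_m_per_mm2(transistors_billion: float, die_area_mm2: float) -> float:
    """Return density in millions of transistors per mm^2."""
    return transistors_billion * 1000 / die_area_mm2

print(f"GA100 (A100, 54.2B @ 826mm^2):          {density_m_per_mm2(54.2, 826):.1f} M/mm^2")  # ~65.6
print(f"Aldebaran (MI200, 58.2B @ ~790mm^2):     {density_m_per_mm2(58.2, 790):.1f} M/mm^2")  # ~73.7
print(f"GH100 (rumored 140B @ ~900mm^2):         {density_m_per_mm2(140, 900):.1f} M/mm^2")   # ~155.6
```

The rumored GH100 figure of roughly 155M transistors per mm² is how the article arrives at the "more than double" density claim versus the GA100.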
NVIDIA Hopper GPU - Everything We Know So Far
From previous information, we know that NVIDIA’s GH100 accelerator would be based on TSMC’s 5nm process node. Hopper is supposed to come with two next-gen GPU modules, so we are looking at 288 SM units in total. We can’t give a rundown on the core count yet since we don’t know the number of cores featured in each SM, but if it sticks to 64 cores per SM, then we get 18,432 cores, which is 2.25x the full GA100 GPU configuration (see the sketch below). NVIDIA could also leverage more FP64, FP16 & Tensor cores within its Hopper GPU, which would drive up performance immensely. That’s going to be a necessity to rival Intel’s Ponte Vecchio, which is expected to feature 1:1 FP64.

It is likely that the final configuration will come with 134 of the 144 SM units enabled on each GPU module, and as such, we are likely looking at a single GH100 die in action. But it is unlikely that NVIDIA would reach the same FP32 or FP64 FLOPs as the MI200 without leveraging GPU sparsity.

NVIDIA may, however, have a secret weapon up its sleeve, and that would be the COPA-based GPU implementation of Hopper. NVIDIA talks about two domain-specialized COPA-GPUs based on the next-generation architecture, one for the HPC segment and one for the DL segment. The HPC variant takes a fairly standard approach consisting of an MCM GPU design and the respective HBM/MC+HBM (IO) chiplets, but the DL variant is where things start to get interesting. The DL variant houses a huge cache on an entirely separate die that is interconnected with the GPU modules. Various configurations have been outlined with up to 960 / 1920 MB of LLC (Last-Level Cache), HBM2e DRAM capacities of up to 233 GB, and bandwidth of up to 6.3 TB/s. These are all theoretical, but given that NVIDIA has discussed them, we may well see a Hopper variant with such a design during the full unveil at GTC 2022.
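To show where the 18,432-core figure comes from, here is a minimal sketch of the arithmetic, assuming the rumored 144 SMs per GPU module (two modules) and carrying over Ampere’s 64 FP32 cores per SM; both are assumptions from the speculation above, not confirmed specifications.

```python
# Rough core-count math for the rumored two-module Hopper configuration.
sms_per_module = 144       # rumored SM count per GPU module
modules = 2                # rumored MCM layout with two GPU modules
cores_per_sm = 64          # assumption carried over from Ampere GA100

hopper_cores = sms_per_module * modules * cores_per_sm
ga100_full_cores = 128 * 64  # full GA100: 128 SMs x 64 FP32 cores = 8,192

print(hopper_cores)                      # 18432
print(hopper_cores / ga100_full_cores)   # 2.25 -> 2.25x the full GA100 configuration
```

If NVIDIA changes the per-SM core count for Hopper, the total would scale accordingly, which is why the article holds off on a definitive core-count rundown.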
NVIDIA Hopper GH100 ‘Official Specs’:
News Source: HXL (@9550pro)