Nvidia unveiled the H100 GPU and the Grace CPU ‘superchip’ at its AI developer conference. The H100 is built on Nvidia’s new Hopper architecture and can train AI models up to nine times faster; according to Nvidia, this can cut computing times from weeks down to days.
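As a rough back-of-the-envelope check of that claim (the figures below are illustrative assumptions, not Nvidia benchmarks), a roughly nine-fold speedup is what turns a multi-week training run into a matter of days:

```python
# Illustrative arithmetic only: hypothetical training times, not Nvidia benchmarks.
baseline_days = 21        # assume a three-week training run on the previous generation
claimed_speedup = 9       # Nvidia's "up to nine times faster" training claim for the H100
h100_days = baseline_days / claimed_speedup
print(f"~{baseline_days} days -> ~{h100_days:.1f} days")  # ~21 days -> ~2.3 days
```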
The H100 GPU contains 80 billion transistors and is the first GPU to support the PCIe Gen5 standard and to use HBM3 memory, the latter enabling a memory bandwidth of 3TB per second.
The Grace CPU superchip, which is based on the Arm architecture, consists of two CPUs connected via NVLink-C2C, a new low-latency chip-to-chip interconnect. The superchip has 144 cores and 1TB per second of memory bandwidth.
The H100 will power Nvidia’s ‘Eos’ supercomputer, which the company claims will be the fastest AI supercomputer in the world when it begins operating later in 2022.
Facebook’s parent company Meta announced its own AI supercomputer in January, which it touted at the time as the fastest in the world, with a claimed AI computing performance of 5 exaflops. Nvidia says ‘Eos’ will run at 18 exaflops.