IT Creations Partners

Nvidia H100 SXM5 GPU - System Overview

Performance

The NVIDIA H100 SXM5 GPU features 528 fourth-generation Tensor Cores that deliver up to 989 teraFLOPS of TF32 compute (with sparsity). An NVLink board connects four to eight GPUs at up to 900GB/s per GPU, while a PCIe Gen5 x16 interface connects each GPU to the host. The H100 delivers an impressive up-to-30x inference speedup on large language models and up to 4x faster training. It features second-generation Multi-Instance GPU (MIG) technology, which can partition the card into as many as seven separate GPU instances for multiple users. It has a base clock of 1590MHz and a boost clock of 1980MHz. NVIDIA AI Enterprise, available by subscription, provides an accelerated path to workflows such as AI chatbots, recommendation engines, and vision AI, with predefined workloads that let administrators get up to speed quickly. This card has no external ports for connecting monitors.
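As a rough sketch of where the "up to 7 instances" figure comes from: MIG divides the card's memory into fixed-size slices, and up to seven of them can back user-visible GPU instances (the eight-slice layout below is an assumption consistent with the "7 MIGs @ 10GB each" specification further down this page):

```python
# Sketch: why MIG on an 80GB H100 yields "up to 7 instances @ 10GB each".
# The eight-slice memory layout is an assumption, not an official statement.
TOTAL_MEM_GB = 80
MEM_SLICES = 8                        # assumed hardware memory slices
slice_gb = TOTAL_MEM_GB // MEM_SLICES # 10 GB per slice
max_instances = MEM_SLICES - 1        # at most seven user-visible instances

print(f"up to {max_instances} instances @ {slice_gb} GB each")
```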

Nvidia H100 SXM5 GPU side view

Memory

The card carries 80GB of HBM3 memory connected via a 5120-bit memory interface running at 1313MHz. The NVIDIA H100 SXM GPU resides on an NVLink board connecting four to eight GPUs. The SXM form factor delivers a memory bandwidth of 3.35TB/s, compared with 2TB/s for the PCIe-based card.
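As a back-of-the-envelope check, the quoted bandwidth follows from the bus width and the effective per-pin data rate. The four-transfers-per-clock figure below is an assumption inferred from the 1313MHz memory clock and the stated 3.35TB/s:

```python
# Rough estimate of peak HBM3 bandwidth for the H100 SXM5.
# TRANSFERS_PER_CLOCK = 4 is an assumed effective data rate, inferred
# from the 1313 MHz memory clock and the quoted 3.35 TB/s figure.
BUS_WIDTH_BITS = 5120        # memory interface width
MEM_CLOCK_HZ = 1313e6        # memory clock
TRANSFERS_PER_CLOCK = 4      # assumed effective HBM3 rate

bytes_per_transfer = BUS_WIDTH_BITS / 8   # 640 bytes moved per transfer
bandwidth = bytes_per_transfer * MEM_CLOCK_HZ * TRANSFERS_PER_CLOCK

print(f"{bandwidth / 1e12:.2f} TB/s")  # ~3.36 TB/s, in line with the quoted 3.35 TB/s
```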

Cooling and Power

In the SXM form factor, this card draws power from an 8-pin EPS power connector and communicates with the rest of the system via a PCIe 5.0 x16 interface. Its maximum power draw is 700W.

Summary

Built on a 4nm process, the NVIDIA H100 SXM5 80GB GPU is based on the GH100 graphics processor. It does not support DirectX 11 or DirectX 12, so gaming performance is limited. In addition to the phenomenal performance provided by its 528 Hopper-architecture Tensor Cores, it features 16896 shading units, 528 texture mapping units, and 24 ROPs (Raster Operations Pipelines). This is the top-performing card for AI applications and accelerated computing in data center platforms. NVIDIA AI Enterprise software is an add-on, but it simplifies the adoption of AI with frameworks and tools for building accelerated AI workflows.
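The shader count lines up with the card's standard FP32 rating: each CUDA core performs one fused multiply-add (two FLOPs) per cycle, so peak throughput is simply cores × boost clock × 2:

```python
# Peak non-Tensor FP32 throughput from shader count and boost clock.
SHADING_UNITS = 16896      # CUDA cores on the H100 SXM5
BOOST_CLOCK_HZ = 1.98e9    # 1980 MHz boost clock
FLOPS_PER_CYCLE = 2        # one fused multiply-add = 2 FLOPs

peak = SHADING_UNITS * BOOST_CLOCK_HZ * FLOPS_PER_CYCLE
print(f"{peak / 1e12:.1f} teraFLOPS")  # ~66.9, matching the quoted 67 teraFLOPS
```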

Nvidia H100 SXM5 GPU - Specifications

Form Factor
  • SXM
Memory
  • GPU Memory: 80GB
  • GPU Memory Bandwidth: 3.35TB/s
Decoders
  • 7 NVDEC
  • 7 JPEG
Power
  • Max thermal design power (TDP): 700W (configurable)
Multi-Instance GPU
  • Up to 7 MIGs @ 10GB each
Interconnect
  • NVLink: 900GB/s
  • PCIe Gen5: 128GB/s
Server Options
  • NVIDIA HGX™ H100 partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs
  • NVIDIA DGX™ H100 with 8 GPUs
NVIDIA AI Enterprise
  • Add-on
Tensor Cores
  • FP64: 34 teraFLOPS
  • FP64 Tensor Core: 67 teraFLOPS
  • FP32: 67 teraFLOPS
  • TF32 Tensor Core: 989 teraFLOPS
  • BFLOAT16 Tensor Core: 1,979 teraFLOPS
  • FP16 Tensor Core: 1,979 teraFLOPS
  • FP8 Tensor Core: 3,958 teraFLOPS
  • INT8 Tensor Core: 3,958 TOPS
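As a sketch of where the PCIe figure above comes from: Gen5 signals at 32 GT/s per lane with 128b/130b encoding, so an x16 link moves about 63GB/s in each direction, or roughly 128GB/s bidirectional (NVLink's 900GB/s is likewise an aggregate across all links):

```python
# Sanity check of the 128 GB/s PCIe Gen5 figure in the table above.
GT_PER_SEC = 32e9          # PCIe Gen5 raw signaling rate per lane
ENCODING = 128 / 130       # 128b/130b line-encoding efficiency
LANES = 16                 # x16 link

per_direction = GT_PER_SEC * ENCODING * LANES / 8   # bytes per second
bidirectional = 2 * per_direction

print(f"{bidirectional / 1e9:.0f} GB/s")  # ~126 GB/s, quoted as 128 GB/s
```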

Get a quote for Nvidia H100 SXM5 GPU

If you know what you want but can't find the exact configuration you're looking for, have one of our knowledgeable sales staff contact you. Send us a list of the components you would like to incorporate into the system, along with quantities if you need more than one, and we will get back to you immediately with an official quote.

[email protected]