
NVIDIA A100 SXM GPU 40GB and 80GB - System Overview

Performance


Using Nvidia’s NVLink board, the NVIDIA A100 SXM GPU can communicate with other GPUs in configurations of 4 to 16 GPUs at up to 600GB/s for the highest application performance. The GPU can also be partitioned into up to seven GPU instances, enabling businesses to rapidly accommodate changes in demand. The Ampere architecture delivers up to a 20x performance increase over the prior generation, and the 80GB version offers the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s). The A100 is the engine of Nvidia’s data center platform, enabling businesses and scientists to run the largest data sets and the most complex simulations and models. A 4x GPU configuration is fully interconnected using NVIDIA’s NVLink technology, while an 8x GPU system uses the NVSwitch for interconnect; two of these 8x GPU boards can be joined through NVSwitch to establish a single 16-GPU node.
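The 600GB/s figure follows directly from third-generation NVLink's per-link rate; a minimal sanity check, assuming NVIDIA's stated 12 links per A100 at 50GB/s bidirectional each:

```python
# Third-generation NVLink on the A100: NVIDIA quotes 12 links per GPU,
# each providing 25GB/s per direction (50GB/s bidirectional).
links_per_gpu = 12
gbs_per_link = 50  # bidirectional, per link

aggregate_gbs = links_per_gpu * gbs_per_link
print(f"Aggregate NVLink bandwidth: {aggregate_gbs} GB/s")  # 600 GB/s
```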

Memory

The NVIDIA A100 SXM GPU is available in two variants, with 40GB and 80GB of memory. The Ampere Tensor Cores with Tensor Float 32 (TF32) offer up to 20 times the performance of the previous V100 GPUs built on Nvidia’s Volta architecture. These dramatic throughput increases come from the combination of PCIe 4.0, NVIDIA NVLink, NVIDIA NVSwitch, and NVIDIA InfiniBand, enabling scale-out expansion to thousands of A100 GPUs. For AI inference platforms, the A100 delivers a mind-boggling 249-times improvement over a single CPU (Intel Xeon Scalable Gold 6260). Third-generation Tensor Cores with double-precision support have cut processing time for double-precision simulation workloads by more than half. Single-precision workloads supported by NVIDIA’s TF32 also gain several factors of improved performance to analyze, visualize, and accelerate time-to-solution. MIG, or Multi-Instance GPU, allows the GPU to be partitioned, giving up to seven users GPU acceleration with up to 5GB per instance on the A100 40GB or 10GB per instance on the A100 80GB. GPU memory bandwidth for the 40GB version is rated at 1,555GB/s, while the 80GB SXM card is rated at 2,039GB/s. With support for up to 16 GPUs on an NVLink board connected by the NVIDIA NVSwitch, the GPUs can communicate at an astonishing 600GB/s.
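The quoted bandwidth figures can be reproduced from the A100's 5,120-bit HBM bus; a back-of-the-envelope sketch, where the per-pin data rates are approximate values chosen to match the published numbers:

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin data rate in Gbps).
# The 5,120-bit bus width is NVIDIA's published spec; the per-pin rates
# below are approximations that reproduce the quoted figures.
BUS_WIDTH_BITS = 5120

def peak_bandwidth_gbs(pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given per-pin data rate."""
    return BUS_WIDTH_BITS / 8 * pin_rate_gbps

print(f"A100 40GB (HBM2,  ~2.43 Gbps/pin):  {peak_bandwidth_gbs(2.43):,.0f} GB/s")   # ~1,555
print(f"A100 80GB (HBM2e, ~3.186 Gbps/pin): {peak_bandwidth_gbs(3.186):,.0f} GB/s")  # ~2,039
```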


Cooling & Power

Maximum thermal design power (TDP) for the NVIDIA A100 SXM 40GB/80GB GPUs is listed at 400W, and they are only available through the HGX A100 server boards using NVLink technology. The PCIe form factor of the A100 carries a lower 250W TDP. These SXM form factor GPUs do not have integrated fans and depend on a variety of cooling methods depending on implementation and density, with cooling options provided by the hardware manufacturers. These range from passive heat sinks to active cooling, including liquid cooling for the best performance.

Summary

Software like Nvidia AI Enterprise provides businesses and data scientists with optimized and certified data analytics software, allowing rapid deployment, management, and scaling of AI and machine learning workloads in the hybrid cloud. Delivering the most powerful all-around AI and HPC data center solution, the NVIDIA A100 GPU in the SXM form factor, with both 40GB and 80GB options, enables researchers, businesses, data centers, and cloud providers to deliver real-world solutions, at scale, in a fraction of the time. Nvidia’s Ampere architecture is the weapon of choice for large data sets, with the A100 80GB delivering a three-fold increase in throughput over the A100 40GB and up to 1.3TB of unified memory per node.


NVIDIA A100 SXM GPU 40GB and 80GB - Specifications

Memory
  • 40GB HBM2 (GPU Memory Bandwidth: 1,555GB/s)
  • 80GB HBM2e (GPU Memory Bandwidth: 2,039GB/s)
Cores
  • Shading Units: 6912
  • TMUs: 432
  • ROPs: 160
  • SM Count: 108
  • Tensor Cores: 432
  • FP64: 9.7 TFLOPS
  • FP64 Tensor Core: 19.5 TFLOPS
  • Transistor Count: 54,200 million
Interconnect
  • NVLink: 600GB/s
Form Factor
  • SXM
Power Consumption
  • Max TDP Power: 400W
Server Options
  • NVIDIA HGX A100-Partner and NVIDIA-Certified Systems with 4, 8, or 16 GPUs
  • NVIDIA DGX A100 with 8 GPUs
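The FP64 figures above can be cross-checked from the published SM configuration; a minimal sketch, assuming the A100's 32 FP64 cores per SM and an approximate 1.41GHz boost clock:

```python
# FP64 throughput from the A100 configuration:
# 108 SMs x 32 FP64 cores/SM x 2 FLOPs per fused multiply-add x ~1.41 GHz boost.
sm_count = 108
fp64_cores_per_sm = 32
flops_per_fma = 2
boost_clock_ghz = 1.41

fp64_tflops = sm_count * fp64_cores_per_sm * flops_per_fma * boost_clock_ghz / 1000
print(f"FP64: {fp64_tflops:.1f} TFLOPS")                   # ~9.7
print(f"FP64 Tensor Core: {fp64_tflops * 2:.1f} TFLOPS")   # ~19.5 (Tensor Cores double the FMA rate)
```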


Get a quote for NVIDIA A100 SXM GPU 40GB and 80GB

If you know what you want but can't find the exact configuration you're looking for, have one of our knowledgeable sales staff contact you. Give us a list of the components you would like to incorporate into the system, and the quantities, if more than one. We will get back to you immediately with an official quote.

[email protected]
