At 4U, the GPU SuperServer SYS-420GP-TNAR+-US supports up to 8x NVIDIA A100 SXM4 GPUs. With dual 3rd generation Intel Xeon Scalable processors, the platform can be configured with up to 12TB of memory. It offers 6x 2.5-inch NVMe, SAS3, or SATA3 drive bays in front, with optional dual M.2 storage drives.
Optimized for high-performance computing applications, the 4U Supermicro GPU SuperServer SYS-420GP-TNAR+-US delivers uncompromising performance for Artificial Intelligence (AI) training, AI inference, and Deep Learning applications. NVIDIA's NVLink with NVSwitch technology connects up to 8x SXM4 NVIDIA A100 GPUs for extremely fast GPU-to-GPU interconnects. Power is provided by a bank of 4x 3000W Titanium level redundant power supply units (PSUs) with large integrated fans.
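As a quick sanity check on a deployed unit, a short script like the one below (assuming a CUDA-enabled PyTorch install, which is not bundled with the system) can confirm that all eight A100s are visible to the software stack:

```python
# Minimal sketch (assumes a CUDA-enabled PyTorch install on the server):
# enumerate the visible GPUs and confirm all eight A100s are present.
import torch

count = torch.cuda.device_count()
print(f"Visible CUDA devices: {count}")
for i in range(count):
    props = torch.cuda.get_device_properties(i)
    # Each SXM4 A100 reports its full device memory (40GB or 80GB models).
    print(f"  GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")
```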
3rd generation Intel Xeon Scalable processors (SP) provide up to 40 physical cores and 80 threads per socket with Platinum series processors. Gold SPs provide up to 32 cores, while Silver SPs top out at 22 cores. With cooling provided by 4x 9cm fans in front, coupled with 8x additional CPU cooling fans inside the case, CPUs with a thermal design power rating of up to 270W are supported. Large fans on the back of each of the 4x Titanium level PSUs do double duty, cooling the PSUs and removing heated air from the chassis.
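As a worked example using the figures above, a fully populated dual Platinum configuration comes to 80 physical cores and 160 hardware threads:

```python
# Back-of-the-envelope core math for a dual-socket Platinum configuration
# (assumes 40-core parts with Hyper-Threading enabled).
sockets = 2
cores_per_socket = 40
threads_per_core = 2

physical_cores = sockets * cores_per_socket           # 80 physical cores
logical_threads = physical_cores * threads_per_core   # 160 hardware threads
print(physical_cores, logical_threads)
```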
32x memory module slots are divided between the two CPU sockets, for 16x memory module slots per socket. With the 8-channel memory architecture provided by the CPUs, each channel can be outfitted with 2x memory modules supporting speeds of 3200MHz, 2933MHz, or 2666MHz depending on the choice of CPU, memory modules, and configuration. Intel Optane Persistent Memory 200 series (PMem) modules are supported, providing the highest total capacity of up to 12TB when paired with DDR4 DRAM modules. Using only Registered (RDIMM) or Load-Reduced (LRDIMM) memory modules, the system can be outfitted with up to 8TB of memory at maximum capacity.
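The headline capacities work out as follows; the per-module sizes used here (256GB LRDIMMs and 512GB PMem modules) are illustrative assumptions for the arithmetic, not a configuration requirement:

```python
# Worked example of the stated maximum memory capacities.
# Module sizes below are illustrative assumptions, not a configuration guide.
slots = 32  # 16 DIMM slots per CPU socket: 8 channels x 2 DIMMs per channel

# All-DRAM configuration: 32 x 256GB LRDIMMs
dram_only_tb = slots * 256 / 1024
print(f"DRAM-only maximum: {dram_only_tb:.0f} TB")    # 8 TB

# Mixed configuration: 16 x 512GB PMem modules + 16 x 256GB DRAM modules
mixed_tb = (16 * 512 + 16 * 256) / 1024
print(f"PMem + DRAM maximum: {mixed_tb:.0f} TB")      # 12 TB
```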
6x storage bays on the front of the Supermicro GPU SuperServer SYS-420GP-TNAR+-US chassis can be outfitted with NVMe, SAS3, or SATA3 drive types. An optional drive cage on the back of the system holds 4x additional NVMe drives. There are also 2x PCIe 4.0 x4 slots on the system board supporting dual NVMe M.2 or SATA3 drives. With 2x M.2 drives installed, they can be configured in RAID to boot the system.
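On a Linux host (assumed here; the paths are standard kernel sysfs locations), a short script can confirm that the front-bay and M.2 NVMe devices have been enumerated by the operating system:

```python
# Minimal sketch: list the NVMe controllers Linux has enumerated.
# Assumes a Linux install; /sys/class/nvme is the standard sysfs location.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    print(f"{ctrl.name}: {model}")
```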
A GPU NVLink board houses up to 8x SXM4 GPUs, with NVSwitch technology providing super-fast GPU-to-GPU interconnects through 6x NVSwitch modules. The GPU tray houses the NVIDIA HGX A100 baseboard and is accessed from the front of the chassis behind the 4x large fans; it can be pulled out by unlocking the handles on either side of the fans. The PCIe 4.0 switch board, located just behind the motherboard tray, is accessed from the back of the system just above the row of 4x large rear fans. Like the front GPU tray, handles on the back must be unlocked and extended to access the PCIe switch board. In addition, 8x expansion slots on the PCIe switch tray feature a x16 PCIe 4.0 interface with Remote Direct Memory Access (RDMA) support for InfiniBand EDR I/O cards at up to 10GB/s. Two additional PCIe 4.0 x16 slots on the motherboard tray, accessible from the front of the system, bring the total to 10x low-profile (LP) x16 slots. An Advanced I/O Module (AIOM) compatible with Open Compute Project (OCP) 3.0 can be installed in a dedicated slot on the far left-hand side of the PCIe switch tray.
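The NVSwitch fabric can also be verified from software. The sketch below, again assuming a CUDA-enabled PyTorch install, checks that every GPU pair reports direct peer-to-peer access:

```python
# Minimal sketch (assumes CUDA-enabled PyTorch): verify that every GPU pair
# reports peer-to-peer access, which the NVSwitch fabric is expected to provide.
import torch

n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j and not torch.cuda.can_device_access_peer(i, j):
            print(f"GPU {i} cannot reach GPU {j} directly")
print("Peer access check complete")
```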
A dedicated slot on the back of the system provides access to the baseboard management controller (BMC), which is compatible with Intelligent Platform Management Interface (IPMI) 2.0 and Redfish API protocols. An Out-of-Band (OOB) management package provides remote access to the system and requires an OOB license. Additional management software includes Intel Node Manager, plus Supermicro's proprietary Watch Dog and SuperDoctor 5 (SD5) software suites.
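Because the BMC speaks standard Redfish, it can be queried with any HTTPS client. The sketch below uses Python's requests library; the address and credentials are placeholders, not factory defaults:

```python
# Minimal sketch: query the BMC's Redfish service over HTTPS.
import requests

BMC = "https://192.0.2.10"           # hypothetical BMC address
AUTH = ("ADMIN", "your-password")    # replace with the unit's credentials

# BMCs commonly ship with self-signed certificates, hence verify=False here.
resp = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False)
resp.raise_for_status()
for member in resp.json().get("Members", []):
    print(member["@odata.id"])
```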
For high-performance computing applications, the Supermicro GPU SuperServer SYS-420GP-TNAR+-US delivers, with support for dual 3rd generation Intel Xeon Scalable processors plus 8x Tensor Core A100 SXM4 GPUs. It is designed to support AI training and inference, Machine Learning, and other compute-intensive applications. With 8x high-performance I/O cards installed, it can provide remote direct memory access to other networked systems, boosting performance and reducing latency.
If you know what you want but can't find the exact configuration you're looking for, have one of our knowledgeable sales staff contact you. Give us a list of the components you would like to incorporate into the system, and the quantities, if more than one. We will get back to you immediately with an official quote.