The Gigabyte G481-S80 GPU Server delivers powerful GPU-accelerated performance for deep learning and machine learning applications. GPU-to-GPU communication is enhanced with Nvidia's NVLink board, which supports up to eight SXM2 graphics modules at speeds of up to 300GB/s. It features dual Intel Xeon Scalable processors plus data-centric persistent memory modules for drastically increased performance and resiliency.
With two processors, the Gigabyte G481-S80 GPU server can support several terabytes of memory. There are also 10x 2.5-inch storage bays supporting SAS, SATA and NVMe drives. Outfitted with up to 4x high-speed network controllers connected through the NVLink board, this platform offers exceptional Peer-to-Peer communications and Remote Direct Memory Access (RDMA). It is ideal for high-performance computing and offers the flexibility to support a number of different workloads, including deep learning and machine learning.
This server supports one or two 2nd-generation Intel Xeon Scalable processors with up to 28 cores and a TDP of up to 205W. Each processor has six memory channels and supports 12 memory module slots, for a total of 24 active slots with dual processors installed.
The Gigabyte system supports a maximum of 4TB using Load-Reduced (LRDIMM) memory modules or a maximum of 2TB using 64GB Registered (RDIMM) memory modules. Data-centric persistent memory modules are also supported in capacities up to 512GB per module, but they must be installed alongside a complement of Registered DDR4 DIMMs, subject to certain population rules.
Ten storage bays up front support 6x SAS or SATA drives in the left six bays and up to 4x NVMe drives in the right four. Both SATA and NVMe drives are supported natively through ports on the motherboard, while SAS drives require a separate HBA/RAID controller that can be installed in the x8 PCIe slot located at the front of the chassis. For NVMe RAID, the system comes with an Intel Virtual RAID on CPU (VROC) key that is specific to Intel SSDs.
The PEX PCIe switches on the NVLink board enable up to 224 PCIe lanes, with 8x PCIe 3.0 x16 lanes dedicated to the SXM2 modules, supporting either the Nvidia Tesla P100 with Pascal architecture or the Tesla V100 offering tensor cores and Volta architecture. Another 4x PCIe 3.0 x16 lanes, arranged in two pairs between the SXM2 modules, are reserved for high-speed network controllers such as Mellanox 100Gb/s InfiniBand cards. You can install up to four, each connected directly to a CPU and two GPUs, for extremely fast Peer-to-Peer (P2P) communications.
System management is through the BMC (Baseboard Management Controller), specifically AMI MegaRAC SP-X, for remote and at-chassis management of the server. Gigabyte Server Management (GSM) provides an easy-to-use graphical interface that can be accessed from a standard browser. Unlike other manufacturers, who charge extra for remote management features, GSM is included as standard. The BMC is also compatible with Intelligent Platform Management Interface (IPMI) and Redfish (RESTful API) connections, and GSM includes several subprograms designed to get the most out of your Gigabyte server, including a VMware plugin.
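Because the BMC speaks Redfish, system health can be read by any HTTP client as plain JSON. The sketch below parses an illustrative ComputerSystem payload; the field names follow the DMTF Redfish schema, but the sample values are assumptions for demonstration, not data from Gigabyte's documentation:

```python
import json

# Illustrative sample of what a BMC such as AMI MegaRAC SP-X might return
# from a Redfish ComputerSystem resource (e.g. GET /redfish/v1/Systems/<id>).
# Field names follow the DMTF Redfish schema; the values are made up.
sample_response = json.dumps({
    "PowerState": "On",
    "ProcessorSummary": {"Count": 2, "Model": "Intel Xeon Scalable"},
    "MemorySummary": {"TotalSystemMemoryGiB": 768},
    "Status": {"Health": "OK", "State": "Enabled"},
})

def summarize_system(payload: str) -> dict:
    """Extract a few commonly monitored fields from a Redfish ComputerSystem."""
    system = json.loads(payload)
    return {
        "power": system["PowerState"],
        "cpus": system["ProcessorSummary"]["Count"],
        "memory_gib": system["MemorySummary"]["TotalSystemMemoryGiB"],
        "health": system["Status"]["Health"],
    }

print(summarize_system(sample_response))
```

In practice the payload would come from an authenticated HTTPS request to the BMC's Redfish endpoint; the parsing step is the same regardless of which monitoring tool issues the request.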
With its impressive feature set, the Gigabyte G481-S80 GPU Server is clearly an excellent choice for a number of high-performance computing applications. The Nvidia NVLink board, compatibility with data-centric persistent memory modules, and NVMe storage enable this system to achieve the best possible performance for deep learning and machine learning applications. The NVLink board also delivers excellent P2P communications and RDMA support using up to four 100Gb/s InfiniBand network controllers.
If you know what you want but can't find the exact configuration you're looking for, have one of our knowledgeable sales staff contact you. Give us a list of the components you would like to incorporate into the system, and the quantities, if more than one. We will get back to you immediately with an official quote.