Built around the NVIDIA Grace Hopper Superchip, the Gigabyte H263-V11 server houses 4x nodes with rear access in a 2U form factor. Each node supports a single Superchip, pairing a Hopper H100 GPU with a Grace CPU over NVLink-C2C, with a TDP of up to 1000W. A total of 16x 2.5" Gen5 NVMe hot-swappable bays are available on the front panel, with the option to add 8x more via an NVIDIA BlueField-3 DPU, for up to 6x drive bays per node. For further storage, each of the 4x nodes supports 2x M.2 PCIe Gen5 x4 slots for 2280 or 22110 cards.
The high-density H263-V11 supports up to 480GB of ECC-enabled LPDDR5X memory per node, with a maximum memory bandwidth of 512GB/s. The NVIDIA Hopper H100 GPU, for its part, offers up to 96GB of HBM3 memory with a memory bandwidth of up to 4TB/s. For expansion, each node provides a single cabled PCIe Gen5 x16 FHHL slot, plus 1x OCP 3.0 slot with PCIe Gen5 x16 bandwidth. With these high-performance features, the platform is intended for AI training, AI inference, and high-performance computing (HPC) applications.

Triple 3000W (240V) 80 PLUS Titanium redundant power supplies are located at the rear of the server. A total of 8x 10Gb/s BASE-T LAN ports and 4x dedicated management ports let you monitor and regulate fan speed, CPU temperature, and other settings through the Gigabyte Management Console, which runs on the Aspeed AST2600 management controller. The system can also be managed through the single chassis management controller (CMC) port. This system features liquid cooling.
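As an illustrative sketch only: BMCs in the AST2600 class typically expose sensor telemetry over the DMTF Redfish REST API. The endpoint shape and sample JSON below are generic Redfish assumptions, not captured from an H263-V11, and the field values are made up for the example.

```python
import json

# Hypothetical Redfish Thermal payload, shaped like what a typical BMC
# returns from /redfish/v1/Chassis/<id>/Thermal (assumption for
# illustration, not an actual H263-V11 response).
SAMPLE_THERMAL = json.loads("""
{
  "Temperatures": [
    {"Name": "CPU Temp", "ReadingCelsius": 54, "UpperThresholdCritical": 95},
    {"Name": "Inlet Temp", "ReadingCelsius": 28, "UpperThresholdCritical": 45}
  ],
  "Fans": [
    {"Name": "Fan 1", "Reading": 8200, "ReadingUnits": "RPM"}
  ]
}
""")

def summarize_thermal(thermal: dict) -> list[str]:
    """Flatten a Redfish Thermal resource into human-readable lines."""
    lines = []
    for t in thermal.get("Temperatures", []):
        lines.append(f'{t["Name"]}: {t["ReadingCelsius"]} C '
                     f'(critical at {t["UpperThresholdCritical"]} C)')
    for f in thermal.get("Fans", []):
        lines.append(f'{f["Name"]}: {f["Reading"]} {f["ReadingUnits"]}')
    return lines

for line in summarize_thermal(SAMPLE_THERMAL):
    print(line)
```

In practice the same payload would be fetched over HTTPS from the node's dedicated management port (or the CMC port) with authenticated GET requests; the parsing logic stays the same.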
If you know what you want but can't find the exact configuration you're looking for, have one of our knowledgeable sales staff contact you. Send us a list of the components you would like to incorporate into the system, along with quantities where more than one is needed, and we will get back to you promptly with an official quote.