Penguin Computing has made its servers based on the NVIDIA Tesla V100 GPU accelerator, powered by the NVIDIA Volta GPU architecture, available.
The NVIDIA Tesla V100 GPUs join a GPU server line that includes Penguin Computing’s Relion servers (Intel-based) and Altus servers (AMD-based) in both 19-in. and 21-in. Tundra form factors. Penguin Computing will debut a 21-in. Tundra 1OU GPU server supporting 4x Tesla V100 SXM2, and a 19-in. 4U GPU server supporting 8x Tesla V100 SXM2, with optional NVIDIA NVLink interconnect technology in a single root complex.
The NVIDIA Volta architecture pairs NVIDIA CUDA cores and NVIDIA Tensor Cores within a unified architecture. A single server with Tesla V100 GPUs can reportedly replace hundreds of CPU servers for AI workloads. Equipped with 640 Tensor Cores, the Tesla V100 delivers 125 teraFLOPS of deep learning performance.
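The 125-teraFLOPS figure follows from NVIDIA's published Volta specifications: each Tensor Core executes one 4x4x4 matrix multiply-accumulate per clock (64 fused multiply-adds, or 128 floating-point operations). A back-of-the-envelope sketch of that arithmetic, assuming the Tesla V100 SXM2's published 1,530 MHz boost clock:

```python
# Rough reconstruction of the Tesla V100's quoted deep learning throughput.
# Assumption: 1,530 MHz boost clock (the published V100 SXM2 specification).
TENSOR_CORES = 640
FMAS_PER_CORE_PER_CLOCK = 4 * 4 * 4   # one 4x4x4 matrix multiply-accumulate
FLOPS_PER_FMA = 2                     # a multiply plus an add
BOOST_CLOCK_HZ = 1.53e9

tflops = (TENSOR_CORES * FMAS_PER_CORE_PER_CLOCK
          * FLOPS_PER_FMA * BOOST_CLOCK_HZ) / 1e12
print(f"{tflops:.1f} TFLOPS")  # ~125.3, consistent with the quoted 125 TFLOPS
```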
For more info, visit Penguin Computing.
Sources: Press materials received from the company.