AI researchers using desktop GPUs can now tap into the NVIDIA GPU Cloud (NGC) as the company has extended NGC support to NVIDIA TITAN.
NVIDIA also announced expanded NGC capabilities—adding new software and other key updates to the NGC container registry—to provide researchers with a set of tools to advance their AI and high-performance computing research and development efforts.
Customers using NVIDIA Pascal architecture-powered TITAN GPUs can sign up for a no-charge NGC account and gain access to a comprehensive catalog of GPU-optimized deep learning and HPC software and tools. Other supported computing platforms include NVIDIA DGX-1, DGX Station and NVIDIA Volta-enabled instances on Amazon EC2.
Software available through NGC’s container registry includes NVIDIA-optimized deep learning frameworks such as TensorFlow and PyTorch, third-party managed HPC applications, NVIDIA HPC visualization tools and NVIDIA’s programmable inference accelerator, NVIDIA TensorRT 3.0.
In addition to making NVIDIA TensorRT available on NGC’s container registry, NVIDIA announced the following NGC updates:
- Open Neural Network Exchange (ONNX) support for TensorRT;
- immediate support and availability for the first release of MXNet 1.0; and
- availability of Baidu’s PaddlePaddle AI framework.
ONNX is an open format originally created by Facebook and Microsoft that lets developers exchange models across different frameworks. For the TensorRT development container, NVIDIA created a converter that deploys ONNX models to the TensorRT inference engine.
NGC is available free of charge to users of NVIDIA Volta GPUs on Amazon Web Services and all NVIDIA DGX-1 and DGX Station customers.
For more info, visit NVIDIA.
Sources: Press materials received from the company.