IBM is focused on delivering new AI capabilities in the cloud and on premises to help enterprises gain insights from their data and create new value from it, the company reports. IBM has been working with NVIDIA to bring its latest GPU (graphics processing unit) technology, the NVIDIA Tesla V100, to the cloud, and it already offers a suite of GPUs, including the P100, K80 and M60, on IBM Cloud bare metal and virtual servers. To power on-premises workloads, IBM also offers a CPU-to-GPU NVIDIA NVLink connection on its POWER9 servers.
Now IBM is introducing the NVIDIA Tesla V100 GPU to support AI, deep learning and HPC workloads in the cloud.
Users can equip individual IBM Cloud bare metal servers with up to two NVIDIA Tesla V100 PCIe GPU accelerators, built on NVIDIA’s latest, most advanced GPU architecture. AI models that once required weeks of computing resources can now be trained in just a few hours, the companies report.
Building on its bare metal support for the NVIDIA Tesla P100 GPU, IBM will also make the P100 available on IBM Cloud virtual servers, combining high performance for AI and deep learning workloads with the scalability and flexibility of IBM’s virtual servers.
These new IBM Cloud services deliver access to GPU technologies, enabling enterprises, data scientists and researchers from organizations including NASA Frontier Development Lab and SpectralMD to train deep learning models and create cloud-native applications that can help address complex problems. IBM also collaborates with technology partners such as Rescale and Bitfusion to facilitate access to GPUs on IBM Cloud.
“Whether they are accelerating drug discovery or creating virtual personal assistants that converse naturally, data scientists are using our GPU computing platform in the cloud to solve complex problems that were once considered unsolvable,” says Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “The new IBM Cloud offerings based on our Volta technology provide incredible processing speeds and the ability to scale up or down on demand for HPC and deep learning.”
Sources: Press materials received from the company.