NVIDIA Gets Ready to Float the GPU in Cloud

As NVIDIA sees it, you don’t necessarily need to be sitting in front of your GPU-equipped workstation to experience the power of the GPU. You should be able to tap into your desktop machine’s graphics horsepower remotely from anywhere, using a lightweight machine or a mobile device. Simply put, you could be running GPU-accelerated games, movies, modeling, and simulation programs from a tablet, connected to your remote GPU-powered workstation or data center over a high-bandwidth link.

It’s a vision previously revealed by NVIDIA’s CEO Jen-Hsun Huang during the GPU Technology Conference (GTC) 2012. While introducing the company’s next generation Kepler architecture, he said, “Kepler can render and stream instantaneously right out of the chip to a remote location you choose.” (For more, read “GTC 2012: GPU, Virtually Yours,” May 2012.)

For designers, engineers, and digital artists, NVIDIA’s virtual remote workstations promise “the experience of NVIDIA Quadro GPU on any VGX device,” according to Will Wade, senior product manager, NVIDIA Quadro product line. VGX is the hardware that enables desktop virtualization, often abbreviated as VDI for virtual desktop infrastructure. (Oh, these insufferable tech acronyms!)

NVIDIA’s remote desktops are made possible by combining the following:

  • NVIDIA VGX Boards, designed for hosting large numbers of users in an energy-efficient way. The first NVIDIA VGX board is configured with four GPUs and 16 GB of memory, and fits into the industry-standard PCI Express interface in servers.
  • NVIDIA VGX GPU Hypervisor, a software layer integrated into commercial hypervisors, such as the Citrix XenServer, to enable GPU virtualization.
  • NVIDIA User Selectable Machines, which allows you to configure the graphics capabilities delivered to individual users in the network.

This week, NVIDIA unveiled its new VGX board, dubbed NVIDIA VGX K2 GPU, described as “cloud-based.” The K2 card has “double the cores of a Quadro 5000 GPU, double the memory, all running on 245 Watts,” said Wade, “which means this card can fit into servers designed for Tesla GPUs [NVIDIA’s product for high performance computing servers].”

The VGX technology, combined with hypervisors like Citrix’s, reduces the bandwidth requirement, according to Wade. This enables a smooth connection between the client device and the GPU-enabled host. “So now we’re talking megabyte bandwidth instead of bandwidth with tens of megabytes,” said Wade.

The current VGX solution doesn’t permit multiple client devices to tap into the same GPU to perform separate workloads (often called virtual GPU sharing). “This is still on the road map for our VGX solutions,” Wade revealed, “and we expect to be able to talk more about that in 2013.”

The first K2 server solutions will come from Cisco, Dell, HP, IBM, and Supermicro. They’re set to become available in early 2013.

Though GPUs were originally conceived as hardware to accelerate graphics, in recent years GPU makers began refining their technology for general-purpose computing. The GPU’s parallel-processing architecture makes it particularly suitable for computing jobs that can be subdivided into many independent operations. GPU acceleration was previously confined to demanding applications, such as finite element analysis, animation rendering, and games. But it has now been incorporated into everyday computing apps, including Microsoft Office and Adobe Photoshop.
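To make the parallel-subdivision idea concrete, here is a minimal CPU-side sketch (not NVIDIA code) of the data-parallel pattern GPUs excel at: a job that splits into many independent per-element operations. On a GPU, each element would map to its own thread; in this illustration, a thread pool stands in for those parallel lanes. The image, pixel values, and function names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    """One independent operation: add 40 to a pixel, clamped to the 0-255 range."""
    return min(pixel + 40, 255)

def brighten_image(pixels):
    # Each pixel is processed with no dependence on its neighbors,
    # so the work can be split across any number of parallel workers --
    # the same property that lets a GPU assign one thread per element.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(brighten, pixels))

if __name__ == "__main__":
    image = [0, 100, 200, 250]  # a tiny hypothetical grayscale "image"
    print(brighten_image(image))  # [40, 140, 240, 255]
```

Jobs without this independence property (where each step needs the previous step’s result) parallelize poorly, which is why not every workload benefits from GPU acceleration.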



About the Author

Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.
