Virtualization and the GPU

How GPUs can deliver engineering application performance.

By Peter Varhol

As graphics processing units (GPUs) expand their processing capabilities beyond graphical computation and rendering, engineering organizations are seeking ways to leverage their superior floating-point capabilities in day-to-day work. GPUs from the likes of NVIDIA and AMD can speed up the execution of numeric computations by a factor of 10 or more over conventional, industry-standard CPUs.

 

Who Needs GPUs? Workstation Clusters Have Arrived

General-purpose CPUs haven’t conceded an inch to the power of graphics processing units (GPUs) for computationally intensive engineering work. In fact, Intel has laid the groundwork for harvesting a far greater proportion of available CPU processing power than ever through the concept of workstation clusters. These clusters combine a fast private network of at least 1Gbps, Intel’s VT-d hardware virtualization technology, and Parallels’ Workstation Extreme virtualization software.
Workstation Extreme running on up to eight Intel Xeon-based workstations can create a large virtual working space consisting of processor cores, memory and disk space. These resources can execute computationally intensive applications that can be parallelized to run on the available cores. The engineer running interactive tasks such as document processing and email keeps a portion of the memory and processing cores, while the rest is devoted to the batch simulation job.
The key to making this ad hoc cluster work for engineering applications is Intel VT-d, a way of virtualizing I/O, including network I/O, at close to hardware speeds. This enables the memory bandwidth, network performance and disk access to keep up with the demands of a distributed cluster running engineering computations in parallel.
The technology is new, but proven. There are already a few workstation clusters in engineering groups that are doing real engineering work, bringing the computation and the results closer to the engineers’ desktops than ever before. This means little or no waiting for scheduling—and the ability to view results when they are ready, rather than when the data center sends them over. While workstation clusters can’t yet handle the biggest analysis and simulation jobs, they can do enough to help change engineering practices.

At the same time, engineering application providers such as ANSYS are building versions of their software that execute specifically on GPUs. These applications can dispatch computationally intensive operations to the GPU, where the computations are performed and the results returned to the application for further analysis and display.
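To make that dispatch pattern concrete, here is a minimal sketch in CUDA C of the offload sequence such an application performs: copy data to the GPU, run a kernel across its cores, and copy the results back. The kernel and data below are hypothetical illustrations, not code from ANSYS or any other vendor.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Hypothetical kernel: scale and accumulate two vectors (y = a*x + y),
    // the kind of floating-point-heavy inner loop an application might offload.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;                 // one million elements
        size_t bytes = n * sizeof(float);

        // Host-side data prepared by the application.
        float *x = (float *)malloc(bytes);
        float *y = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // 1. Dispatch: copy the working set into GPU memory.
        float *d_x, *d_y;
        cudaMalloc((void **)&d_x, bytes);
        cudaMalloc((void **)&d_y, bytes);
        cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

        // 2. Compute: run the kernel across the GPU's cores.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 2.0f, d_x, d_y);

        // 3. Return: copy results back for further analysis and display.
        cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", y[0]);           // expect 4.0

        cudaFree(d_x); cudaFree(d_y);
        free(x); free(y);
        return 0;
    }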

Rewriting applications for GPU execution is a non-trivial job for vendors, and maintaining versions for both types of processors is an expensive proposition. For custom engineering applications, MathWorks’ MATLAB offers similar capabilities, letting users dispatch portions of code and data to the GPU for execution. MATLAB instructions let engineers modify existing code to designate routines to run on an individual GPU or a parallel GPU cluster, enabling millions of lines of MATLAB code to run substantially faster. Third-party products such as AccelerEyes Jacket also provide the ability to run MATLAB and other language code on NVIDIA GPUs, offering a more automatic means of converting CPU code to optimized GPU instructions.

But these capabilities are largely limited to individual applications and workstations, or to systems with one or more GPUs networked to standard servers or workstations. Can engineering organizations take advantage of the broader capabilities of GPU-enabled systems? Is there a way to expand the use of GPUs beyond individual workstations, so that engineers and other professionals can make effective use of this power?

Bringing Virtualization to Bear
The answer is yes. Thanks to a new virtualization capability from GPU vendor NVIDIA, it is possible to equip a server with a GPU board that is virtualized across multiple users on the network. That server can then deliver GPU-accelerated desktops and applications to each engineer, providing a GPU for rendering or computation as required.

 
The NVIDIA VGX Hypervisor provides the software interface that creates and manages the virtual machines for multiple workstations. Image: NVIDIA

NVIDIA recently unveiled its VGX platform, which enables organizations to deliver a virtualized desktop with the graphics and GPU computing performance of a PC or workstation to engineers using any connected device. That includes tablets, smartphones and embedded display devices such as manufacturing control machines.

The VGX in Practice
The working model looks something like this: An engineering group has a number of individual engineers with workstations. These engineers perform a variety of activities, including design, analysis, simulation, Web searches, email and general office tasks. When they take on engineering tasks requiring a lot of computation, and the software supports GPU execution, they can open a virtual machine session connected to the VGX card on the server and launch the required application. The GPU delivery mechanism also works for graphics rendering jobs, allocating graphics performance when and where it is needed. When the work is completed, they close the session and return the GPU resources to the server.

While the alternative would be to outfit all workstations with GPU cards, this is a more cost-effective approach for a group that requires broad but sporadic access to GPU resources. The resources are effectively shared from a server based on when and where they are needed.

According to Jeff Brown, general manager of the Professional Solutions Group at NVIDIA, the VGX board “delivers an experience nearly indistinguishable from a full desktop, while substantially lowering the cost of a virtualized PC.” NVIDIA estimates that up to 100 users can be supported by a single VGX board, depending on the resources required. The company will develop other board configurations based on what engineering groups need in the future.

There are actually three parts to the VGX approach to virtual computing: the board, the hypervisor, and the User Selectable Machines (USMs). The initial NVIDIA VGX board features four GPUs, each with 192 NVIDIA compute unified device architecture (CUDA) cores and 4GB of frame buffer, for a total of 768 cores and 16GB of DDR3 memory. It plugs into a standard server in the data center, and provides the hardware resources needed.
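For a sense of how software sees those resources, the short CUDA sketch below simply enumerates the GPUs visible to an application and reports each device’s multiprocessor count and memory. It is illustrative only, and not specific to the VGX board or to any particular virtualization setup.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Enumerate the GPUs visible to this process and report the resources
    // each one exposes. Under GPU virtualization, a guest sees only the
    // devices (or portions of devices) it has been allocated.
    int main()
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess || count == 0) {
            printf("No CUDA-capable GPU visible: %s\n", cudaGetErrorString(err));
            return 1;
        }

        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            printf("GPU %d: %s, %d multiprocessors, %.1f GB memory\n",
                   dev, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }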

The hypervisor is the virtualization software. The NVIDIA VGX GPU Hypervisor is a software layer designed to integrate into standard commercial hypervisors such as Citrix XenServer, enabling access to virtualized GPU resources so that virtualization covers both GPU and CPU operation.

The USMs are a manageability feature that lets administrators configure the GPU capabilities delivered to individual users on the network, based on their needs. These capabilities can range from a standard PC-quality interface to full 3D design and rendering, and can be controlled from a central location.

As with CPU virtualization, the value is in the ability of the hypervisor and supporting software to provide I/O at near hardware speeds. This enables data to get in and out of GPU memory and cores quickly, making close-to-real-time rendering and computation feasible under many more circumstances.
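As a rough illustration, the CUDA sketch below times a large host-to-device copy and reports the effective bandwidth; running a check like this inside a virtual desktop is one way to gauge whether virtualized I/O is keeping up. It is a minimal example under simple assumptions, not an NVIDIA-supplied benchmark.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Time a host-to-device copy and report effective bandwidth, a rough
    // gauge of whether data is moving in and out of GPU memory at close
    // to hardware speeds.
    int main()
    {
        const size_t bytes = 256ull * 1024 * 1024;   // 256MB test buffer
        float *host, *dev;
        cudaMallocHost((void **)&host, bytes);       // pinned host memory
        cudaMalloc((void **)&dev, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("Host-to-device: %.2f GB/s\n",
               (bytes / (1024.0 * 1024.0 * 1024.0)) / (ms / 1000.0));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(dev);
        cudaFreeHost(host);
        return 0;
    }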

GPU virtualization using NVIDIA VGX isn’t for every engineering group. If all engineers use high-resolution graphics and do a lot of computation, individual workstation solutions such as NVIDIA Maximus may make more sense. But desktop virtualization for rendering and computation adds to the number of ways engineers can take advantage of GPUs to accelerate the operations they need on a daily basis.

For widespread, but occasional use of GPU graphics and computation capabilities, NVIDIA VGX can address a lot of diverse needs.

Contributing Editor Peter Varhol covers the HPC and IT beat for DE. His expertise is software development, math systems, and systems management. You can reach him at [email protected].

More Info
AccelerEyes
Advanced Micro Devices
ANSYS
Citrix Systems
Intel
MathWorks
NVIDIA
Parallels
