By Peter Varhol
One of the most significant innovations in high-performance computing in recent years has been the emergence of the graphics processing unit (GPU) as a platform for general-purpose computation. These processors, made by the likes of NVIDIA and AMD, were traditionally built for graphics computation and display. It turns out that the technical characteristics that make them good for graphics processing also make them excellent for the types of mathematical computations that are the bread and butter of engineering design and simulation.
There is a catch. For applications to run on these processors, they require recompilation, and possibly changes to the source code. These aren’t activities most end users will undertake, unless the code is custom and the engineer moonlights as a computer programmer. Instead, users have to depend on software developers and vendors to recognize the need and commercial possibilities of GPU computing, and to invest the time and money to make that conversion and offer it as a product.
One company has stepped up to make that conversion process easier. AccelerEyes has found a way to easily adapt source code to run on NVIDIA GPUs. The company’s product, Jacket, acts as a kind of traffic cop for executing code, diverting code to run on the GPU when appropriate.
Developers tell Jacket how to treat executing code by tagging data structures that can be used effectively on the GPU. When code is needed to process that data, Jacket compiles that code for the GPU just in time and sends it to run on the graphics processor. When the GPU finishes executing, Jacket returns both code and data to the CPU.
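In MATLAB terms, the tagging described above amounts to casting an array to a GPU data type; everything downstream of that cast is a candidate for GPU execution. The sketch below assumes Jacket's `gsingle` cast and is illustrative only — exact function names and syntax may vary by Jacket version.

```matlab
% Illustrative sketch of Jacket-style tagging (assumes Jacket's gsingle
% cast; exact syntax may differ by version).
A = gsingle(rand(2048));   % tag the matrix as GPU-resident data
B = gsingle(rand(2048));

C = A * B;                 % Jacket JIT-compiles this operation for the GPU
D = fft(C);                % subsequent ops on tagged data also stay on the GPU

result = double(D);        % casting back pulls the result into CPU memory
```

The appeal is that the surrounding MATLAB code is unchanged: only the initial cast and the final cast back differ from an ordinary CPU script.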
The upshot is that there is no need to manually recompile code, the structures that must be tagged are easy to identify, and tagging is simple. With Jacket, the entire process can take days or even hours, rather than the weeks or months a formal port of the code would require.
Right now, Jacket works only with MATLAB code. According to Dave Gibson, a VP at AccelerEyes, this was a logical starting point. “Engineers are the ones most in need of the computational power that GPUs can deliver,” says Gibson, “and engineers are the primary users of MATLAB.” An engineering group that either has MATLAB code or uses commercial MATLAB code can see immediate and often dramatic performance improvements.
However, the goal of the company is to add more languages beyond MATLAB, and indeed most high-performance commercial applications are written in C or C++. Once AccelerEyes expands to those languages, it opens the door for many commercial software vendors to more easily develop GPU-friendly versions of their applications.
Jacket can also be used with the NVIDIA Compute Unified Device Architecture (CUDA), which expands the use of GPUs into an architecture for running code in parallel. Add-on products extend the base Jacket’s single-GPU support to as many as eight GPUs in a single-system-image machine, and ultimately to GPU clusters.
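Multi-GPU use follows the same tagging idiom, with one extra step: selecting which device subsequent GPU work should target. The sketch below assumes a `gselect` device-selection call (my understanding of Jacket's multi-GPU interface — treat the name and its zero-based device indexing as assumptions).

```matlab
% Hypothetical sketch of dispatching work across two GPUs
% (gselect and its indexing are assumptions, not confirmed API).
for g = 0:1
    gselect(g);                     % direct subsequent GPU work to device g
    X{g+1} = gsingle(rand(4096));   % each device holds its own slab of data
    Y{g+1} = fft(X{g+1});           % the FFTs can proceed concurrently
end
```

Whatever the exact syntax, the point stands: the parallelism is expressed by partitioning data across devices, not by rewriting the numerical code itself.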
This innovation expands the capability of engineers to use GPU computational power for a much wider variety of problems than can be addressed today. Because GPUs and GPU systems using the NVIDIA Tesla architecture tend to be much less expensive than industry-standard CPU systems, GPU computing is accessible to just about anyone who needs the horsepower. And the cost of performing increasingly sophisticated computations continues to drop.
If, over the next few years, your MATLAB code ends up running on NVIDIA GPUs, chances are it will be using AccelerEyes Jacket to do so. It will be under the covers, so you likely won’t know that it’s there, but you’ll see the results in faster execution of your analyses and simulations.
Contributing Editor Peter Varhol covers the HPC and IT beat for DE. His expertise is software development, math systems, and systems management. You can reach him at DE-Editors@deskeng.com.