By Peter Varhol
I was fortunate enough to attend NVIDIA’s GPU Technology Conference in San Jose September 30-October 2. There were substantially more people in attendance than had been expected, and the conference facilities of the Fairmont San Jose were bursting at the seams.
The focus was, of course, GPU (graphics processing unit) computing. This encompasses three primary areas—computer visualization, parallel processing, and Web computing. Given the heritage of the GPU, computer visualization is a natural application. For the most part, GPU computing is about graphics, and this conference demonstrated that GPUs deliver much higher resolution, greater realism, and much faster rendering than general-purpose CPUs.
Parallel processing is also a familiar topic. Thanks to the introduction of CUDA, NVIDIA's GPU parallel computing architecture, parallel processing with GPUs is one of the most intriguing models of high-performance computing today. The conference offered the prospect of substantially accelerating computation through its use.
Web computing is an unusual addition to this computing model. For this model, think cloud computing, and think video streaming. From the standpoint of the cloud, there was a demonstration of delivering video and other high-end graphics rendered on the fly.
The two keynotes that I saw—the Opening Keynote with NVIDIA CEO Jen-Hsun Huang and the second-day general session talk featuring Harvard professor Hanspeter Pfister—offered a strong and inspirational endorsement of GPU computing. The combination of state-of-the-art examples with outstanding vision of future possibilities made these presentations among the best I have seen.
I later told NVIDIA representatives that they should package up these two keynotes and send them to every high school in the country. Huang offered a series of demonstrations that illustrated both the power and versatility of the GPU, including an application from Ferrari that renders your prospective car in software, with its options, accessories, and color, within a few seconds. Users can customize the car right down to the rims. Do you think this is pie in the sky? Hardly. Within a decade, any car buyer will be able to view and purchase their customized mode of transportation the same way.
Will GPU computing ever become part of the engineering mainstream? That’s a difficult question to answer; any existing application, whether commercial or custom, will require some reworking to run on the GPU. To take advantage of architectures with multiple GPUs using NVIDIA’s CUDA architecture, code has to be substantially reworked. This means that commercial vendors have to be convinced that such processors are broadly accepted by customers.
Alternatively, if you have your own code, you have to put in fairly significant effort to see benefits from the GPU. At the very least, you will have to change all of your C function pointers into call-by-value (I understand that Fortran code ports more easily). Either way, you have to believe that the gain in performance, or the potential for new types of applications, is worth the effort.
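To give a flavor of the rework involved, here is a minimal sketch of moving a simple array computation from a C loop to a CUDA kernel. It is illustrative only: the function and variable names are my own, and a real port of pointer-heavy code would involve far more restructuring. The key points are that kernel arguments are passed by value and that data behind host pointers must be explicitly copied to device memory before the GPU can touch it.

```cuda
#include <cstdio>
#include <cstdlib>

// CUDA kernel: each thread scales one array element.
// All kernel arguments (n, a, and the device pointers themselves)
// are passed by value at launch.
__global__ void scale(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host allocations, as in the original C code.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) hx[i] = 1.0f;

    // Host pointers are not usable on the GPU, so the data
    // must be explicitly moved into device memory.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // Copy the result back to the host.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // expect 2.0 on a CUDA-capable system

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

Even in this toy case, the serial loop becomes a kernel plus memory management and a launch configuration; multiply that across a real application and the scale of the porting effort becomes clear.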
I was ambivalent about the significant academic focus of this conference. On the one hand, I was a computer science professor for a significant part of my career, and appreciated the ability to showcase academic projects and research. On the other, it's not entirely clear to me how a large academic presence will help achieve commercial acceptance of the GPU as a mainstream processor.
If I had the code, I would ask NVIDIA for a system with which to do my own development and experimentation. If the excitement generated by the conference, and the potential of the GPU, can spread through the high-performance computing community, GPU computing will be a mainstream approach in the near future.
Contributing Editor Peter Varhol covers the HPC and Engineering IT beat for DE. His expertise is software development, systems management, and math systems. Send comments about this column to DE-Editors@deskeng.com.