Check it Out: Using OpenACC Directives with PGI Accelerator Compilers

By Anthony J. Lockwood

Dear Desktop Engineering Reader:

It’s dorky of me to state the obvious, but high-performance computing (HPC) offers a world of possibilities for engineers and application developers. But did you know that back at the Supercomputing Conference (SC11) in November 2011, the Portland Group (PGI), Cray, and NVIDIA, along with CAPS, announced the OpenACC standard? OpenACC, in a nutshell, expands HPC’s performance possibilities by letting developers leverage GPU (graphics processing unit) coding to maximize application performance. That’s the outline of what today’s Check it Out is about. Make sure to read to the end because there’s a terrific kicker.

The penny tour of OpenACC is that it’s designed to be a portable programming platform. It uses a directive-based approach and is fully compatible and interoperable with NVIDIA’s CUDA parallel programming architecture. The promise is a single, multi-platform, multi-vendor code base, which makes cross-platform and multi-generation application development easier to achieve. What OpenACC enables you to do is offload compute-intensive code from a host CPU to a GPU accelerator with a minimum of fuss. That means significant performance improvements for CAE, ray tracing, visualization, and other engineering applications without rewriting your underlying source code. For good measure, your coding productivity gets a boost too, because OpenACC is said to be simple to work with.
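To make the directive idea concrete, here’s a minimal sketch of the sort of thing OpenACC lets you write (my own illustration, not code from PGI’s materials): a plain C SAXPY loop offloaded to the GPU with a single pragma. The data clauses and the pgcc build line in the comment reflect typical OpenACC/PGI usage, but treat them as assumptions rather than a recipe from the webinar.

    /* Minimal OpenACC sketch (illustrative only, not PGI's example code):
       a SAXPY-style loop offloaded to the GPU with one directive.
       A plausible PGI build line: pgcc -acc -Minfo=accel saxpy.c -o saxpy */

    #include <stdio.h>
    #include <stdlib.h>

    /* y = a*x + y; the copyin()/copy() clauses tell the compiler which
       arrays to move to and from the GPU around the offloaded loop. */
    void saxpy(int n, float a, const float *x, float *y)
    {
        int i;
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        int n = 1 << 20, i;
        float *x = (float *)malloc(n * sizeof(float));
        float *y = (float *)malloc(n * sizeof(float));

        for (i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy(n, 3.0f, x, y);          /* every y[i] should now be 5.0 */
        printf("y[0] = %f\n", y[0]);

        free(x);
        free(y);
        return 0;
    }

The point is the one PGI makes: the loop itself is untouched, and a compiler that doesn’t recognize the directive simply ignores it, so the same source still builds and runs as ordinary CPU code.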

OK, now, tomorrow, July 31 at 9 a.m. (Pacific), PGI is staging a complimentary webinar called “Using OpenACC Directives with PGI Accelerator Compilers” where you can learn more about all this. (The kicker, BTW, is still coming.) In this webinar, Michael Wolfe, PGI compiler engineer and one of the principal OpenACC architects, looks at some ways to use OpenACC directives. He’ll also show you a few of the capabilities of PGI’s Accelerator compilers, among them support for the OpenACC 1.0 specification on NVIDIA GPUs, auto-generation of optimized loop schedules, and interoperability with CUDA Fortran and CUDA C/C++.

The kicker: Attend this webinar and you’ll be rafflized and could receive an NVIDIA Tesla C2075 GPU gratis. (You must be present at the end.) The NVIDIA Tesla C2075 companion processor is engineered for GPU computing: It features 448 application-acceleration cores per board, 6GB of GDDR5 memory, and 515 Gflops (double-precision)/1,030 Gflops (single-precision) floating-point performance. The full details will be given during the webinar, but they include things like downloading and using a complimentary 30-day trial license of the PGI Accelerator compilers with OpenACC and completing some surveys. One tantalizing item on a slide I received from my PGI contact: If you do not have access to a GPU, PGI says to contact one of its webinar people. That might be really interesting.

You already know that HPC has changed what you can do with engineering applications. OpenACC-enabled GPU computing might be the right tool for grasping HPC’s full potential. Learn more from PGI’s webinar and see if you’re doing all you can to ensure that your reach meets your ambitions. Register from this link.

Thanks, Pal.—Lockwood

Anthony J. Lockwood
Editor at Large, Desktop Engineering

About the Author

Anthony J. Lockwood

Anthony J. Lockwood is Digital Engineering’s founding editor. He is now retired. Contact him via [email protected].
