By Peter Varhol
Writing any software, whether a general-purpose business application or a special-purpose engineering code, used to be pretty much the same. You wrote code in a text editor. When you thought it was ready, or at least ready enough to give you some feedback on how well it met your requirements, you ran a command-line compile sequence, or invoked a make file that automated the compile. Because the program probably consisted of multiple modules, you then invoked the linker, enabling the modules to find one another and call the proper interfaces. Last, you loaded the program into memory and ran it, usually within the context of the debugger.
In all likelihood, the program wouldn’t run right away, at least not correctly. So you went back to the debugger output, which pointed you to a line number (which may or may not have been the correct one) and gave a cryptic hint as to the cause of the deficiency. And so you went through the compile-link-load-debug cycle a number of times before the code was right enough to use.
Today, when I describe to student programmers the compile-link-load cycle, followed by debugger error messages piped to a text file and examined individually, most look at me as if I had just grown two more heads. These students, along with commercial and enterprise software developers, now have sophisticated integrated development environments that largely hide the intricacies of just what is being done and why.
However, many engineers who program their own codes are still well, and to some extent painfully, familiar with the above sequence of operations. Few comprehensive environments for writing Fortran exist, and even C++ development is taking a back seat to the newer trend of managed code development using Java or Microsoft .NET.
Adding GPUs and other nonstandard processors into the mix makes the difference between engineering and business development even more apparent. Few makers of high-performance processors have the resources to build a high-end productive development environment. That means that many engineers still use individual tools at each stage in the development and debugging cycle for their codes.
Can Engineering Close the Gap?
But software development for engineering is in the process of catching up. Vendors such as the Numerical Algorithms Group (with the intriguing acronym NAG) (Oxford, UK) provide comprehensive libraries and modules of particular interest to engineers building custom codes that need high-performance algorithms. These include optimized, modular libraries for mathematical, statistical and data mining problems, as well as visualization software.
And the Eclipse Foundation (Toronto, Canada) provides an open source development environment that can be adapted by smaller processor and compiler vendors into a far more productive tool for engineers programming custom codes for analysis and simulation. Many commercial and embedded development vendors have adopted the Eclipse platform as a way to get a compelling development environment to small and previously disenfranchised groups of developers, including engineers.
It is within reach of the processor and compiler vendors to deliver a really good development environment for engineers writing their own codes. Combine the Eclipse platform with libraries such as those available from NAG, and vendors such as Nvidia and AMD can make sure that engineers want to develop using their processors.
Whether or not the vendors see it like that is a different question. If you program as a part of your job, you need better and more integrated tools. The foundation has become available for vendors to deliver them to you. Let’s hope that those who have the most to gain from promoting their vision of high performance computing use the tools available to make it a reality.
Peter Varhol has been involved with software development and systems management for many years. Send comments about this column to DE-Editors@deskeng.com.