HPC Has Three Futures, and Each Needs Software

By James Reinders, Intel Corp.

The future of high-performance computing (HPC) will manifest itself in three ways: on every desktop, in big systems built from off-the-shelf parts, and through renewed interest in more exotic HPC systems. All three need advances in software more than advances in hardware to be truly revolutionary. Without better software, the only hardware that will win is hardware that caters to current programming methods, a seemingly limiting prospect.

  Consider HPC for the desktop. Remember when building a machine that could sustain a teraflop was a challenge; then it became a reality, and a decade later such a machine doesn’t even rate enough interest to guarantee a spot on the Top 500 list of the world’s fastest computers. Soon that capability will be on our desktops. This widespread availability of HPC, enabled by teraflop desktop machines, will be a triumph for HPC. The research simulations that required a government grant to run a decade ago will soon be run on a whim. Scientists and engineers will hardly give them a second thought, and they’ll lead to many discoveries and inventions.

  The software advances that will help in this endeavor are those that rely on shared memory (uniform memory access, or UMA) and only reluctantly address even nonuniform memory access (NUMA). While we can expect many software innovations that enable mass use of parallelism, we should not expect revolutionary distributed-memory programming innovations from this community. Heterogeneous systems will appear in this space too, but they’ll become widely popular only after their difficult programming problems are overcome, and that will take time.
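
  To make the distinction concrete, here is a minimal sketch of the shared-memory style this community favors (an illustration, assuming an OpenMP-capable compiler, not code from any particular product): parallelizing a loop over data that every core can see is a single pragma, whereas a distributed-memory version of the same kernel (with MPI, say) would have to partition the arrays and exchange messages explicitly.

    /* Minimal sketch: a dot product parallelized for shared memory with
       OpenMP. Every thread reads the same arrays directly; no partitioning
       or message passing appears in the source. Build with, e.g.,
       gcc -fopenmp. */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    static double a[N], b[N];

    int main(void) {
        double sum = 0.0;

        /* Set up some data; every thread will see these same arrays. */
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            b[i] = 2.0 * i;
        }

        /* The runtime splits the loop across whatever cores the desktop
           has; the reduction clause combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i] * b[i];

        printf("dot product = %g using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }

  A distributed-memory version of that kernel would scatter the arrays across nodes, compute local partial sums, and combine them with a collective operation over the network, which is exactly the kind of communication that keeps supercomputer codes so dependent on their interconnects.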

  It is a little odd, in a way, to see parallelism adopted by the whole world and yet not directly help the very community that has been working on parallelism the longest.

  Distributed-memory parallelism has been the story of supercomputers for some time now, and much has been published about the likely emergence of higher-speed interconnects, up to and including optics (maybe even on the chips themselves). The focus on high-bandwidth, low-latency interconnects reflects a software dependency that we are still tied to. It won’t go away, but wouldn’t it be nice if we had high-performance programming techniques that weren’t so hungry for these particular interconnect capabilities?

  We should wonder if any techniques to avoid this hunger will ever be so common that we could build a Top 15 machine with low-speed and low-bandwidth interconnects. Now that would be a revolution, wouldn’t it? Of course, it is unlikely to make the Top 15 if LINPACK remains the metric.

  As traditional HPC makes it to the mainstream and parallel solutions get widespread attention, it is tempting to ignore the need for expensive, high-risk trailblazing. It is also hard to stay ahead of the march of mass-market progress. We need some investment to blaze new trails; imagination, money, and progress are all needed.

  I’m very bullish on the future of HPC. The world’s hunger for compute power is not going to wane.

  As I see it, HPC has three futures: one that has broad impact, one that we are used to, and one that needs a concerted effort to breathe new life into it so that we enjoy another round of innovation once the other two futures have taken prior innovation to its limits.

  So we need to reinvigorate interest in the hard trailblazing work to find the innovations for the future. To quote a popular TV ad of late, “In a world gone soft, someone has to be hard.”


  James Reinders is the chief product evangelist and director of marketing for Intel’s Software Development Products division. Send feedback on this commentary to [email protected].
