Partly Cloudy, Bright Skies Ahead

The high-performance computing community is beginning to embrace the cloud as a viable application platform.

By Peter Varhol

Supercomputing ’08 provided an excellent opportunity to examine how the leading edge of computing is reacting to the concept and practice of cloud computing. It turns out it is reacting pretty well.

High-performance systems are being designed with the cloud in mind; management software is making it possible to better control computing power among multiple users and applications; and programming tools are enabling rapid development, testing, and deployment using cloud resources. The result is growing acceptance of cloud computing in this leading-edge community, along with rapid adaptation of tools that let high-performance computing users tap computing power in the cloud. This trend bodes well for general-purpose computing in the cloud, because industry-standard hardware and software are becoming key components of high-performance computing.

Hardware Paces the Move
High-performance hardware means something completely different today than it did just a couple of years ago, when a significant performance gap separated clusters of industry-standard hardware from top-end, often proprietary systems. Multicore chips have changed that equation, and those chips, and the systems built around them, are poised to make a difference in the cloud.

Today's most dramatic multicore configurations sit only on the fringes of what might be considered industry standard. In recent years, software engineers recognized that the high-end graphics processing units (GPUs) normally used for gaming had become far more powerful than Intel-architecture chips at the parallel floating-point arithmetic that dominates compute-intensive applications. Surely there was a use for that kind of processor; the question was how to leverage it within a standard computer architecture.

At the conference, NVIDIA presented the Tesla Personal Supercomputer, a 960-core system built around four of its high-end graphics processors. Priced at just under $10,000, the system is rated at nearly four TeraFLOPS of single-precision floating-point performance, enough to tackle many problems that once required a dedicated cluster.

Granted, standard applications can't run on systems like this, as most are compiled for industry-standard Intel processors. But NVIDIA makes compilers available through its CUDA toolkit, so custom code can be compiled for the platform and thereby take advantage of the parallelism offered by the multiple cores. Having this much computing power available in the cloud can lead application developers to find ways to exploit it.

Of course, Intel processors are also multicore, and they can be clustered for greater computing power. At the conference, Microsoft announced that its Windows HPC Server had powered one of the ten fastest supercomputers in the world, as measured by the industry-standard Linpack floating-point benchmark. That level of performance makes even Windows-based computers candidates for high-performance work in the cloud.

The implications for enterprise computing in the cloud are significant. HPC on Windows using standard hardware gives the engineering community performance on demand for its applications. Applications that must execute quickly across a wide range of workloads stand to benefit from this trend.

Managing this level of computing power has always been one of the barriers to its widespread use. But management tools are emerging that can effectively partition that power among multiple applications and jobs. These tools schedule individual applications, often from different users, across the multiple cores. The goal is to identify opportunities for an application to take advantage of parallel processing to execute more quickly, and to assign systems, or even individual cores, to that application.

Special-purpose middleware is also making it possible to run applications that have been tuned to work with it. Middleware from companies such as Acceleware and ScaleMP sits between the operating system and the application, using both general-purpose and industry-specific algorithms to break application execution into parallel components. Those components can then execute as operating-system threads running in parallel on separate cores.

These tools and middleware are likely to be employed by the cloud provider — rather than by the enterprise — to better use the computing power of multicore clustered systems. But individual enterprises using high-performance cloud services will benefit through more granular and scalable use of that power.

Don’t Forget About Programming Tools
In HPC, programming, or at least building an application, is just as important as running it. That's because the application is often compute-intensive and depends on special-purpose tools for its execution.

Fortunately, development tools are emerging that help application developers build software to take advantage of these systems, both locally and in the cloud. These tools aren't standard development products, but enterprise developers can use them under certain circumstances. They include Mathematica, the math programming environment from Wolfram Research, and MATLAB, the engineering modeling and simulation environment from The MathWorks. Both assist in developing for and deploying to the cloud. For example, the MATLAB language provides a keyword, parfor, that enables parallel execution of designated parts of the code. It works much like Unix's fork and join operations, except that it explicitly tells the platform to run on multiple cores or systems if they are available.
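To illustrate, here is a minimal sketch of parfor in use, assuming the Parallel Computing Toolbox is installed; the Monte Carlo example and variable names are illustrative, not drawn from any particular application:

% Estimate pi by Monte Carlo sampling; parfor distributes the
% loop iterations across whatever workers are available.
n = 1e7;                   % number of random samples
hits = 0;                  % reduction variable, combined across workers
parfor i = 1:n
    p = rand(1, 2);        % random point in the unit square
    if sum(p.^2) <= 1      % does it fall inside the quarter circle?
        hits = hits + 1;
    end
end
piEstimate = 4 * hits / n;
disp(piEstimate);

If no pool of workers is open, the loop simply runs serially, which makes the same code usable on a desktop or a cluster.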

In addition, MATLAB lets the programmer define the execution environment, even targeting a specific cloud such as Amazon EC2. By configuring that location for execution, the programmer can ensure the application is optimized for that particular environment.
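As a hedged sketch of what that looks like, assume a cluster profile named 'AmazonEC2' has already been configured through MATLAB's cluster management tools; the profile name, the function myModel, and its input are all illustrative:

% Select the cloud execution environment by its profile name,
% then submit a function to run there as a batch job.
c = parcluster('AmazonEC2');
job = batch(c, @myModel, 1, {inputData}, 'Pool', 8);  % run on 8 workers
wait(job);                       % block until the job completes
results = fetchOutputs(job);     % retrieve the function's output

The same script can be pointed at a local profile instead, so moving work between the desktop and the cloud is largely a matter of changing the profile name.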

Overall, the cloud is fast becoming a legitimate platform for both developing and running HPC applications. While much development can still occur on the desktop, there are good reasons to take advantage of both the higher level of parallelism the cloud affords and its capacity to scale quickly in response to business needs.

Peter Varhol has been involved with software development and systems management for many years. Send comments about this column to [email protected].


About the Author

Peter Varhol

Contributing Editor Peter Varhol covers the HPC and IT beat for Digital Engineering. His expertise is software development, math systems, and systems management. You can reach him at [email protected].
