Intel made a splash at SC12 with the introduction of its Xeon Phi.
The Supercomputing 2012 conference featured computers, processors and other subsystems that are placing the fastest computing power within reach of more engineering professionals. With high-powered alternatives in abundance and prices shrinking rapidly, it won’t be long before virtually all engineers have the equivalent of a supercomputer at their desks or in their laptop bags.
Supercomputing 2012 was also about other revolutionary changes in engineering. Cloud computing alternatives abounded. Some were simply hosting solutions, combining CPUs and GPUs, for those who can upload an entire virtual machine and run it independently. Others, such as Microsoft’s Azure, offer a more comprehensive execution environment–with specific language and system frameworks, and sometimes even specific engineering applications.
Mobile computing is also gaining a toehold in high-performance computing (HPC)–not only as a way to access computing jobs in the cloud, but as a way to make designs more mobile and sharable. NVIDIA’s Tegra processor adds both processing and graphics power, across four cores, for mobile devices.
On the software side, the focus was on getting the most out of parallel computing. Domain languages such as MATLAB, Mathematica and Maple are enabling engineers to run specialized computations on multiple cores, and in the cloud.
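The idea behind running computations on multiple cores can be sketched in a few lines. The following is a minimal illustration in Python, not any vendor's actual API: a parameter sweep fanned out across all available cores, the same pattern a MATLAB `parfor` loop expresses. The `simulate` function is a hypothetical stand-in for a real engineering computation.

```python
from multiprocessing import Pool, cpu_count

def simulate(x):
    """Hypothetical stand-in for an expensive computation,
    e.g. evaluating a model at one design point."""
    return x * x  # placeholder for real numerical work

def run_sweep(design_points):
    """Evaluate every design point, using one worker process per core."""
    with Pool(processes=cpu_count()) as pool:
        return pool.map(simulate, design_points)

if __name__ == "__main__":
    # Each design point is evaluated in a separate process.
    print(run_sweep(range(8)))
```

Because the design points are independent, the sweep scales with core count with no changes to the computation itself–which is precisely why this style of workload was everywhere on the SC12 show floor.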
Big Announcements and Trends
Perhaps the most significant announcement of the week was the Intel Xeon Phi coprocessor. The Intel Xeon Phi is based on the vendor’s many integrated cores (MIC) architecture, which uses Intel’s industry-standard architecture and instruction set across multiple high-performance cores. The principal advantage of Xeon Phi and MIC is the ability to run existing software without recompilation or other modification.
Many of the system and board vendors demonstrated systems that included a Xeon Phi slot, making them ready to go as soon as the coprocessor was available.
But the Xeon Phi wasn’t the only high-performance processor on display. AMD is also in the game, and with a unique approach. During the conference, AMD launched the AMD FirePro S10000 server graphics card, designed for a combination of HPC workloads and graphics-intensive applications. Unlike Intel and NVIDIA, which have separate alternatives for computation and graphics, AMD combines graphics and computational GPU cores on a single die. This provides 5.91 teraflops of peak single-precision and 1.48 teraflops of peak double-precision floating-point performance. With two GPUs in one dual-slot card, the AMD FirePro S10000 enables a combined compute-rendering GPU solution while increasing overall processing performance.
Hardware systems are undergoing a significant change to accommodate faster parallel computing processors. One of the most significant architectural bottlenecks in computing is the single, relatively narrow bus between the processor and main memory, and between main memory and secondary storage. Fusion-io offered a partial solution to that bottleneck with a flash memory storage device that was software-configured to act as either storage or an extension of main memory, with separate buses into the processor space. The advantage here is that program instructions and data can be stored on these flash memory devices, which then have direct access to main memory and processor space. The single bus isn’t such a bottleneck any more.
Dell also demonstrated an innovative approach to managing engineering workstations in a secure yet flexible environment. Using technology from its recent Wyse acquisition, the vendor showed a display and keyboard remotely connected to a system located in the data center, or perhaps a vault. The workstation and I/O were linked by a box running a high-speed remoting protocol, passing graphics and keyboard data in real time across Gigabit Ethernet connections on the server and box. The end result is a separation among monitor, keyboard and physical computer–but with no lag in performance.
Why would an engineering group want to do this? One reason is security; the actual servers can be in a locked server room or vault, physically protecting designs on the system. But it also provides a way for engineers to work remotely, while accessing the applications and files on their workstation.
Software Still Makes an Engineering Solution Possible
The $64,000 question continues to be how effectively engineering software can take advantage of multiple cores to bring the full processing power to bear on a problem or computation. Many engineering vendors are rewriting their commercial software packages to parallelize computations, but this can be an expensive and time-consuming process.
Some languages, such as MATLAB and Maple, offer a limited ability to assign jobs to multiple processors and cores. But a lot of more general-purpose software may never be ported to execute in parallel. That’s where Advanced Cluster Systems comes in. This vendor provides a software solution called SET that enables vendors and engineering groups with proprietary source code to easily parallelize that code. In some cases, if the application is constructed appropriately, source code may not even be required.
From talking to dozens of exhibitors and attendees at the conference, it was clear that engineers are using these advances in hardware and software to do more simulation of design components and even entire designs. The ongoing story was one of refining and optimizing simulations, and perhaps building one or two physical prototypes before going to manufacture. DE
Contributing Editor Peter Varhol covers the HPC and IT beat for DE. His expertise is software development, math systems, and systems management. You can reach him at DE-Editors@deskeng.com.