Time for Larger Models

HPC-powered simulation goes beyond electro-structural-mechanical snapshots.


Scenarios that can be simulated on workstations are usually confined to snapshots (for instance, the effects of a specific stress load applied to an automotive component). Simulating even a small slice of time in a thermal, electrical or mechanical event (for example, six seconds of heat buildup inside an operating engine’s combustion chamber) can push the limits of a reasonably spec’d workstation. To simulate more complex events or longer slices of time on personal computing systems, engineers usually simplify the job, often by reducing the level of detail in the geometry or choosing a simpler physics model. Some accept this as a reasonable tradeoff; others view it as a compromise in the accuracy of the answer.

 

The use of HPC-powered design space exploration in heat sink pin positions and configuration led to the use of thinner pins for greater efficiency. Image: Siemens PLM Software

However, with high-performance computing (HPC), users have the option to increase both the size of the job and the level of detail involved. They can also look at longer slices of time to better understand certain physical events, such as car crashes, heat buildup, and the wear and tear of a part over the life of an operating aircraft. This has significant implications for product safety, not just in planes and cars but across a much wider range of products, from mobile devices with heat-generating electrical components and hot-food containers with fragile walls to power plants with failsafe systems.

 

Parallel Move

Previously, due to the intense computation it demanded, the use of digital simulation was confined to simpler, smaller problems. The emergence of affordable HPC in the form of on-premise hardware and on-demand services changes the dynamics. Software vendors’ efforts to parallelize their products also make HPC-powered simulation economically attractive.

 

“HPC enables you to look at not just a single design, but a family of designs—all at once.”

— Alex Read, Siemens PLM Software

“If you run a simulation problem on 20 cores instead of one, you usually see a speedup, like finishing the job in 1/20 of the time it would have taken on a single core,” says Alex Read, Siemens PLM Software’s global director of industry groups. “Most simulation work could be done in the 50-100 core range. In one case, our customer runs jobs on a server with in excess of 100,000 cores, but that is obviously not mainstream usage.”
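Read’s 20-core figure describes near-ideal scaling; in practice, the serial portion of a solver caps the achievable speedup. The Python sketch below applies Amdahl’s law to show that limit. The 5% serial fraction is an illustrative assumption, not a figure quoted by any vendor here.

```python
# Rough sketch of Amdahl's law: how the serial fraction of a solver
# limits the speedup seen when adding cores. The 5% serial fraction
# is an illustrative assumption, not a figure from the article.

def amdahl_speedup(cores: int, serial_fraction: float) -> float:
    """Ideal speedup on `cores` cores when `serial_fraction` of the
    work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (1, 20, 100, 1000):
    s = amdahl_speedup(cores, serial_fraction=0.05)
    print(f"{cores:>5} cores -> ~{s:.1f}x speedup")
```

With a 5% serial fraction, 20 cores deliver roughly a 14x speedup rather than 20x, and adding cores beyond a few hundred yields diminishing returns, which is one reason different solver types scale to different core counts.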

Read was formerly the global business development director for oil and gas industries at CD-adapco, known for the STAR-CCM+ simulation software. In 2016, Siemens PLM Software acquired CD-adapco. Today, STAR-CCM+ is part of the Siemens PLM Software portfolio.

“We have parallelized the general purpose nonlinear solver in 3DEXPERIENCE so users don’t have to do anything special to use HPC,” says Bill Brothers, Dassault Systèmes’ head of business development. “They just tell [our simulation solver] Abaqus to run on HPC. There are clients running the solver on hundreds of cores. For a general purpose solver, that is very good scaling.”

Dassault Systèmes’ modeling and simulation technologies are part of its 3DEXPERIENCE offerings. Its simulation products are gathered under the SIMULIA brand; its general purpose solver is Abaqus. The company also offers Simpack, a multibody simulation package for the nonlinear motion of mechanical or mechatronic systems, and XFlow for dynamic fluid flow simulation in automotive and aerospace applications (gained through Dassault Systèmes’ acquisition of Next Limit Dynamics last year).

Scaling benefits depend on the application, explains Wim Slagter, ANSYS’ director of HPC and cloud marketing. “Structural mechanics solvers scale up to a different level than CFD (computational fluid dynamics) or electromagnetics solvers,” he says. “Due to the underlying numerical algorithm, CFD jobs can scale up to a very high core count. We reached another world record when we were able to scale a single CFD job to 172,000 cores.”

In November 2016, ANSYS announced it had managed to “[scale] ANSYS Fluent to over 172,000 computer cores on the HLRS (High Performance Computing Center, University of Stuttgart) supercomputer Hazel Hen, a Cray XC40 system.”

 

Design Families

Simultaneous exploration of a family of design variants, or a design sweep, is often called design space exploration. Running design space exploration on a single machine may be technically possible, but the time required to finish the computation would make the endeavor impractical. Therefore, design space exploration is usually conducted on HPC.
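As a rough illustration of what such a sweep looks like in code, the Python sketch below evaluates a small family of heat sink variants concurrently. The evaluate_design function is a hypothetical surrogate standing in for a real thermal or CFD job; an actual HPC sweep would submit jobs through a cluster scheduler or the simulation tool’s own exploration manager.

```python
# Minimal sketch of a design space exploration sweep: evaluate a family
# of heat sink variants concurrently. evaluate_design is a hypothetical
# stand-in for a real CFD/thermal job, not a real physics model.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def evaluate_design(pin_diameter_mm: float, pin_count: int) -> dict:
    # Placeholder "solver": a crude surrogate relating pin geometry to
    # mass and a notional thermal resistance.
    mass = pin_count * pin_diameter_mm ** 2 * 0.01
    thermal_resistance = 1.0 / (pin_count * pin_diameter_mm * 0.2)
    return {"d_mm": pin_diameter_mm, "pins": pin_count,
            "mass": mass, "R_th": thermal_resistance}

if __name__ == "__main__":
    # Nine variants: every combination of pin diameter and pin count.
    variants = list(product([1.0, 1.5, 2.0], [20, 40, 80]))
    diameters, counts = zip(*variants)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(evaluate_design, diameters, counts))
    # Pick the variant with the best combined mass/cooling score.
    best = min(results, key=lambda r: r["mass"] + r["R_th"])
    print("Best trade-off:", best)
```

On a workstation the variants run a few at a time; on an HPC cluster the whole family can be evaluated at once, which is what makes exploring dozens or hundreds of candidates practical.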

“HPC enables you to look at not just a single design, but a family of designs—all at once,” says Siemens PLM Software’s Read. “In one case, we looked at a heat sink’s design and possible locations, and the air inlet’s positions. Through design exploration, we were able to cut the mass of the heat sink in half. It not only saves materials but also improves the cooling performance by 10%.”

Real-time product performance data is a feature of connected devices. Therefore, in the near future, many simulations—including design exploration for safety-critical architectures—likely will involve data from the field, making the computation in design exploration more complex and intense.

“Over the life of that product, the heat sink’s fan speed will change, dust buildup in the inlet will affect its performance, and the chips may kick off more heat,” says Read. “We can use simulation to evaluate the design families for their sensitivity to chip overheating or dust buildup. Doing such things on HPC is much faster and easier than it would be otherwise.”

 

Simulation Improves Safety, Cuts Costs

About 10 years ago, Swedish aircraft engine and rocket engine manufacturer Volvo Aero began collecting data related to its engines—time, speed, pressure, temperature and other operating conditions—to find correlations between mission conditions and engine part wear. The company used ANSYS structural mechanics software, along with some in-house tools and a high-performance computing architecture, for a reliability and flight safety project dubbed Life Tracking System (LTS).

Recounting the project for the ANSYS Advantage magazine (Vol. 5, Issue 3, 2011), Magnus Andersson, system owner for Life Engine, Volvo Aero, wrote, “The duty-profile method of calculating fatigue resulted in many cases of engine parts being serviced and replaced earlier than necessary, as well as engines undergoing maintenance more often than required. As a result, the jet’s owners incurred unneeded expenses.”

Making part wear predictions from the data collected under LTS, on the other hand, was estimated to require tens of thousands to hundreds of thousands of structural mechanics simulations. The job was therefore deployed on a 200-node Linux cluster that could run up to 128 independent simulations simultaneously. The outcomes of these simulations, uploaded to a database, formed the basis for Volvo Aero’s more accurate part fatigue predictions.
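A back-of-envelope calculation shows why that level of concurrency matters for a campaign of this size. In the sketch below, only the 128 concurrent jobs come from the article; the 2-hour per-job runtime is an illustrative assumption.

```python
# Back-of-envelope turnaround estimate for a large batch of independent
# simulations on a cluster that can run 128 jobs at once. The per-job
# runtime of 2 hours is an illustrative assumption, not from the article.
import math

def campaign_hours(total_jobs: int, concurrent_slots: int,
                   hours_per_job: float) -> float:
    # Jobs run in "waves" of up to concurrent_slots at a time.
    waves = math.ceil(total_jobs / concurrent_slots)
    return waves * hours_per_job

for total_jobs in (10_000, 100_000):
    h = campaign_hours(total_jobs, concurrent_slots=128, hours_per_job=2.0)
    print(f"{total_jobs:>7} jobs -> ~{h:,.0f} hours (~{h / 24:.0f} days)")
```

Under those assumptions, 100,000 two-hour jobs would take on the order of a couple of months even with 128 running at once—and years on a single workstation—which is why such campaigns are only feasible on a cluster.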

On June 9, 2016, in Yokohama, Japan, Marco Pellegrini of the Institute of Applied Energy in Tokyo recreated in pixels and bytes a tragic event that had happened five years earlier in Fukushima. Pellegrini was a featured speaker at the STAR Japanese Conference 2016, where STAR-CCM+ simulation software users gathered.

In 2011, about 200 miles away from the conference site, a magnitude 9.1 earthquake struck the eastern part of Japan. The natural disaster and the subsequent tsunami brought down the power supply and the cooling system of the Fukushima Daiichi nuclear reactors, triggering a meltdown.

Pellegrini and his colleagues employed Eulerian equations, condensation theories and experimental data to simulate how the meltdown might have happened. The compressible steam flow simulation was conducted with STAR-CCM+ software.

“Simulation plays an important role in understanding the catastrophic events in these industries—why they happened, how they happened and how to design systems and processes to prevent them,” says Siemens PLM Software’s Read.

For more info:

ANSYS

Dassault Systèmes

High Performance Computing Center, Stuttgart

Siemens PLM Software

Tokyo Institute of Applied Energy

Volvo Aero (now GKN)

About the Author

Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.
