
Editor’s Pick: CFD Software Scales for Multi-Core Clusters

Dear Desktop Engineering Reader:

Computational fluid dynamics (CFD) analyses of phenomena like hydraulics, castings, thermal stress evolution, and microfluidics can take days to solve even on a brawny workstation. This is why multi-core, high-performance computing (HPC) is changing the game for CFD practitioners. Simply put, HPC means complex, multi-million-cell CFD analyses are now achievable efficiently, however you define efficiency: a day, a weekend, or a few days. But to really take advantage of a multi-core HPC cluster, you need CFD software tuned to the assignment so that your definition of efficiency takes on new meaning, maybe even hours. That's why, when I came across version 5.0 of FLOW-3D/MP from Flow Science, I was curious to learn more.

FLOW-3D/MP is the distributed version of the company’s FLOW-3D CFD solution for transient, free-surface CFD modeling. That is, FLOW-3D/MP is a parallel code version of the software that’s optimized for multi-core 64-bit Linux clusters. It has the potential to scale up to 128 cores or more. I say “potential” because I read on the company’s website that, at present, FLOW-3D/MP is typically used for simulations with 16 or more cores.

Now, FLOW-3D itself is a general-purpose CFD system. It can model flow around 3D structures within a shallow water environment, work with fluid-structure interaction and thermal stress evolution models, and even analyze cooling channels. In other words, FLOW-3D is intended to give you the means to tackle pretty much any fluid dynamics and heat transfer problem life throws at you. BTW, it handles meshing and post-processing without additional modules, which condenses a few steps in your process.

Still, the times they are a-squeezing. Your design-analyze-deliver-to-market cycle seems to get shorter every quarter, yet your analyses are growing ever more complex, requiring more time and more iterations to solve. To keep up, you need the computational horsepower that a multi-core HPC cluster offers, and that reality seems to be what Flow Science is intent on addressing with FLOW-3D/MP. As an aside, the company is a member of the HPC Advisory Council, and FLOW-3D/MP 5.0 will soon be an Intel Cluster Ready registered application (the previous version was certified, but a new release must be recertified). It is also in the process of being validated on Intel's Xeon processor E5-2600 product family. So, they're serious.

FLOW-3D/MP seems to have a lot of characteristics designed to give you back productive engineering time. For example, parallelization is based on a hybrid MPI-OpenMP methodology. Essentially, this means that the scaling, load-balancing, and inter-process communication limitations of each parallelization methodology alone are offset by the strengths of the other, giving you better overall performance and scalability.
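To picture the hybrid approach, here's a minimal Python sketch. It is my own illustration, not Flow Science's code: each "rank" owns a block of cells and splits its update loop across a thread pool, the way OpenMP's parallel-for would inside a real MPI rank; under real MPI, the ranks themselves would run as separate processes across the cluster's nodes.

```python
# Illustrative sketch only; FLOW-3D/MP's actual implementation is proprietary.
# Hybrid pattern: one MPI rank per node or socket, OpenMP threads within each.
from concurrent.futures import ThreadPoolExecutor

def rank_update(subdomain, threads_per_rank=4):
    """One 'MPI rank' advances its block of cells; threads split the loop,
    the way an OpenMP 'parallel for' would inside a real rank."""
    chunk = max(1, len(subdomain) // threads_per_rank)
    chunks = [subdomain[i:i + chunk] for i in range(0, len(subdomain), chunk)]
    with ThreadPoolExecutor(max_workers=threads_per_rank) as pool:
        results = pool.map(lambda cells: [c + 1.0 for c in cells], chunks)
    return [c for part in results for c in part]

# Four 'ranks', each owning a contiguous block of a 1-D mesh of 16 cells.
mesh = [float(i) for i in range(16)]
blocks = [mesh[i:i + 4] for i in range(0, 16, 4)]
updated = [rank_update(b) for b in blocks]  # under MPI these would run concurrently
```

The two levels matter because threads within a rank share memory (cheap coordination), while ranks communicate only at block boundaries (fewer, larger messages), which is where the hybrid scheme claws back scalability.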

But the neatest productivity feature is something called ADT, the automatic decomposition tool. In a nutshell, ADT takes your initial mesh and tunes it for FLOW-3D/MP. You can learn more about ADT from a link at the end of today's Pick of the Week write-up, but, for now, your takeaway is that ADT lessens the time and effort you have to put into prepping a simulation job for distributed computing.
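Flow Science hasn't published ADT's internals, but the basic chore it automates, carving a mesh into near-equal blocks so each core gets its fair share of cells, can be sketched in a few lines. This is a hypothetical 1-D illustration of that idea, not ADT itself:

```python
# Hypothetical sketch: ADT is proprietary. This shows only the generic idea
# of splitting a mesh into near-equal contiguous blocks, one per rank.
def decompose(num_cells, num_ranks):
    """Return (start, end) cell ranges, spreading the remainder across ranks."""
    base, extra = divmod(num_cells, num_ranks)
    ranges, start = [], 0
    for r in range(num_ranks):
        size = base + (1 if r < extra else 0)  # early ranks absorb the remainder
        ranges.append((start, start + size))
        start += size
    return ranges

# A 10-cell mesh over 3 ranks: block sizes differ by at most one cell.
print(decompose(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```

A real decomposition tool would also weigh 3-D geometry and minimize the surface area between blocks (less inter-rank communication), which is exactly the fiddly prep work the column says ADT takes off your plate.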

FLOW-3D/MP is really all about time and productivity. Let me leave you with this to think about: According to a testimonial on the company's website, simulations that once required 5 or 6 days became 15- to 18-hour runs using FLOW-3D/MP on a cluster. Today's additional materials link to videos, on-demand webinars, and the like. Make sure to check out the benchmarks.
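For perspective, those testimonial numbers work out to roughly an eightfold speedup. Quick arithmetic:

```python
# Back-of-the-envelope check of the quoted testimonial figures.
before_hours = [5 * 24, 6 * 24]  # 5 to 6 days, in hours
after_hours = [15, 18]           # 15 to 18 hours on the cluster
speedups = [b / a for b, a in zip(before_hours, after_hours)]
print(speedups)  # [8.0, 8.0]
```

An 8x wall-clock reduction is what turns an overnight-plus-weekend job into a leave-it-running-overnight job.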

Thanks, Pal. — Lockwood

Anthony J. Lockwood
Editor at Large, Desktop Engineering

Read today’s pick of the week write-up.

This is sponsored content.

About Anthony J. Lockwood

Anthony J. Lockwood is Digital Engineering's Editor-at-Large. Contact him via de-editors@digitaleng.news.