Improving LSTC’s Multifrontal Linear Solver

Tony Lockwood

Sponsored Content

Dear DE Reader:

A Check it Out column last September focused on how engineers from Cray, Livermore Software Technology Corp. (LSTC) and Rolls Royce teamed up to improve a simulation involving over 80 million elements using the Cray XC40 Supercomputer. That paper’s underlying theme is that you need to constantly improve your product as both technologies and user needs evolve. Constant improvement is the theme of the paper at the far end of today’s Check it Out link.

Livermore Software Technology has developed a new scalable algorithm for graph partitioning to address bottlenecks resulting from increasingly large simulation models. This dummy engine model, supplied by Rolls Royce, contains nearly 200 million equations. It was processed on an eight-node, 128-core cluster. LS-DYNA image courtesy of Cray Inc.

Hot off the PDF creator, “Improving LSTC’s Multifrontal Linear Solver” is something of an extension of the previously reported project. The paper will be presented at the European LS-DYNA Conference 2017 in Salzburg, Austria, next week, and the authors gave Cray permission to publish it via DE. Unlike the previous paper, an engineer from Rolls Royce is not listed as a co-author; an engineer from Intel, however, is. My contact at Cray notes that the team again worked closely with Rolls Royce and that Rolls Royce’s data was again used for this project.

The paper focuses on LSTC’s ongoing research into improving the performance of its LS-DYNA FEA (finite element analysis) software — specifically, its multifrontal solver. The issue at hand is that implicit time steps in LS-DYNA require solving sparse systems of linear equations, which the solver transforms into a hierarchy of dense matrix factorizations. In a nutshell, factorization is computationally intensive and can be a bottleneck: it requires lots of storage, and its time to solution and scalability are always targets for improvement — all while engineers want to simulate ever-larger models.
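To make that bottleneck concrete, here is a minimal, hypothetical sketch — plain Python, not LSTC’s code — of the dense Cholesky-style kernel that sits at the heart of any multifrontal solver: the sparse system is assembled into small dense “frontal” matrices, and each is factored and solved roughly like this. The matrix, right-hand side, and function names below are illustrative assumptions, not anything taken from the paper.

```python
import math

def cholesky(A):
    """Factor a symmetric positive-definite matrix A into L such that A = L * L^T.

    This dense factorization is the kernel a multifrontal solver applies to each
    dense frontal matrix assembled from the sparse system.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # off-diagonal entry
    return L

def solve(A, b):
    """Solve A x = b via Cholesky factorization plus two triangular solves."""
    L = cholesky(A)
    n = len(b)
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Backward substitution: L^T x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

# Toy 3x3 symmetric positive-definite system (illustrative only)
A = [[4.0, 2.0, 0.0],
     [2.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [2.0, 5.0, 3.0]
x = solve(A, b)
```

The cubic cost of this kernel is why factorization dominates implicit run times: a model with hundreds of millions of equations produces many large frontal matrices, which is exactly why the paper looks to faster processors, SSD-backed storage, and better graph partitioning.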

In a Nutshell: Improving LSTC’s Multifrontal Linear Solver

• Follow-on report on research to improve the performance of LS-DYNA for ever-larger simulations.

• Discusses how new microprocessors and storage technologies can improve performance.

• Looks into new methodologies to increase processor scaling.

• Examines a new scalable algorithm for graph partitioning to better use distributed memory.

• Deep technical details appeal to analysts as well as engineering and IT managers.

Learn more here.

This six-page paper looks at three areas of research showing promising performance improvements. The first leverages new technologies, such as new microprocessor architectures and emerging SSD (solid-state drive) technologies. The second discusses how the project team is working to improve the performance of factorization on large-scale distributed memory systems so that sparse matrix factorization will continue to scale to thousands of cores. The last section looks at research efforts that should enable LS-DYNA to handle and solve even larger implicit models in the near future.

Befitting its upcoming presentation at the European LS-DYNA Conference 2017, “Improving LSTC’s Multifrontal Linear Solver” is an in-depth, highly technical paper. Its appeal for FEA practitioners, engineering project leaders and engineering IT managers should be wide. Hit today’s Check It Out link for your complimentary copy.

Thanks, Pal. – Lockwood

Anthony J. Lockwood
Editor at Large, DE

About Anthony J. Lockwood

Anthony J. Lockwood is Digital Engineering's Editor-at-Large. Contact him via de-editors@digitaleng.news.