By Steve Robbins
In the late 1990s, I had the extraordinary luck to be invited to Sunnyvale, CA, to observe a test simulating the launch of a Space Shuttle payload. This was the actual payload for a shuttle launch, mounted on a test stand that looked like an upright flatbed tractor-trailer. The stand would shake and vibrate, simulating the launch vehicle, while nitrogen-powered acoustic horns mounted in the ceiling of the hangar simulated the acoustic vibration of launch.
The payload was covered with 640 sensors, including load cells, accelerometers, and position and pressure transducers, all hardwired to an observation booth about three stories up. The sensor signals were processed by I/O boards, A/D converters, and data acquisition boards, then fed into a bank of Pentium workstations running Windows NT. It was all monitored by the engineers and scientists who had assembled the test platform. They were ecstatic because the system had just been optimized to run in "real-time," which meant that the results from the analysis were returned in less than an hour, as opposed to the weeks of analysis it had previously taken.
When they ran the test, pieces of the payload rattled and shook. Some of them actually flew off. No one knew what was going to happen. But everyone was excited that the test could be analyzed so quickly so they could put the pieces back together and run another test. The more iterations, the fewer problems with the shuttle mission.
From Inputs to Output
More recently, I met with an engineer involved in the design and simulation of jet engines. Aerospace engineers have been using simulation software since the IBM System/360 was introduced in the 1960s. This manufacturer has millions of lines of legacy code that have been updated numerous times over the decades. As commercial simulation software became available, the company incorporated it into its simulation practices.
Enabled by ever-better software tools and exponentially faster compute power, the company had just decided to move away from its in-house analysis code in favor of a multiphysics and simulation lifecycle platform that would take its designs to new levels of fidelity. Jet turbines are very complicated systems. I would bet they are more complicated than the shuttle payload test I watched, and they are much more difficult to test.
During our discussion, the engineer said turbine simulation had reached the point where they could predict the outcome of the physical test. Despite the incredible complexity, they were actually simulating the physical test itself. The result was the ability to optimize the engine with confidence as they were designing it, thereby limiting the number of prototypes and tests that were needed. The tools of design technology have caught up with their design process, enabling them to find the optimal outcome.
A Process Approach
This issue introduces a change in the way Desktop Engineering reports to our readers. While we discuss technology features and benefits in our articles, and will continue to do so when appropriate, we are becoming more engaged with the outcomes enabled by the tools engineers use today. We will explain how engineers combine different technologies into a design process that yields the best outcomes. Outcomes are of prime importance. Creating a better design is good, but creating the best design, an optimized outcome, is better. Creating an optimized outcome months before your competition is the Holy Grail. This issue focuses on using the tools available today to optimize your designs and design processes in order to achieve the most important goal in optimization: the best outcome.
As I thought back to the shuttle payload test and contrasted it with the jet engine simulations being performed today, I realized that both outcomes are equally important. The latter would not have happened without the former. The near future is going to bring astounding changes that we cannot predict.
Steve Robbins is the CEO of Level 5 Communications and editorial director of DE. Send comments about this subject to DE-Editors@deskeng.com.