Engineering The Future

By Joe Curley

Editor’s note: This commentary was sponsored as part of DE’s Visionary Voices section.

Innovation remains one of the primary drivers of job creation and economic stability. Innovation is rewarded. Perhaps the best example is Apple’s ascent from near collapse to being one of the most respected companies for creativity and innovation. But innovation is not for the timid. It needs to be fed and cultivated. It needs to embrace change.

Testing one idea at a time, as Thomas Edison did when he first created the light bulb, is no longer plausible in today’s competitive markets, and it certainly won’t be in the future. Design of experiments (DoE) workflows give organizations the opportunity to consider more design factors and potentially develop an optimal product in the same time it once took to create a merely adequate design. This time, however, the product will be less expensive to manufacture, use a third of the material, and deliver the experience it was intended to.
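To make the contrast with one-idea-at-a-time testing concrete, here is a minimal sketch of a full-factorial DoE study in Python. The design factors (wall thickness and material), the cost and density figures, and the evaluate function are hypothetical stand-ins for a real simulation; the point is only that every combination of factors is explored systematically and the best candidate falls out of the study.

```python
from itertools import product

# Hypothetical design factors for an enclosure part.
wall_thickness_mm = [1.0, 1.5, 2.0, 2.5]            # candidate wall thicknesses
materials = ["ABS", "polycarbonate", "aluminum"]     # candidate materials

# Hypothetical cost and density figures used by the stand-in model.
COST_PER_KG = {"ABS": 2.0, "polycarbonate": 3.5, "aluminum": 6.0}
DENSITY_G_CM3 = {"ABS": 1.05, "polycarbonate": 1.20, "aluminum": 2.70}

def evaluate(thickness_mm: float, material: str) -> float:
    """Stand-in for a real analysis: returns an estimated unit cost.

    In practice this step would launch a solver and post-process its results.
    """
    volume_cm3 = 100.0 * thickness_mm                # toy geometry model
    mass_kg = volume_cm3 * DENSITY_G_CM3[material] / 1000.0
    return mass_kg * COST_PER_KG[material]

# Full-factorial study: every combination of factors is evaluated.
results = {
    (t, m): evaluate(t, m) for t, m in product(wall_thickness_mm, materials)
}

best_design, best_cost = min(results.items(), key=lambda kv: kv[1])
print(f"Best design: {best_design} at estimated cost ${best_cost:.2f}")
```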

Experiment, Predict, Calculate
Looking forward, I see a dramatic expansion of ideas like the DAKOTA project (Design Analysis Kit for Optimization and Terascale Applications) at Sandia National Laboratories. In fact, I expect software technologies like DAKOTA and other DoE workflows to morph into intelligent design agents that help engineers quickly and efficiently interrogate model parameters and ideas. Imagine the impact this might have on your organization: it could reduce design costs by accelerating the design process, lower the need for expensive physical prototypes and associated design changes, or simplify product material requirements. I suspect that when you compare the cost of supporting DoE workflows against the opportunity cost of doing without them, you may quickly realize that not investing in these iterative design workflows and associated technologies is the more expensive choice.
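As a rough illustration of the kind of parameter interrogation such a workflow automates, the sketch below farms a grid of candidate designs out to multiple processes and reports the best one. The model function, the parameter names, and the toy objective are invented for illustration and are not DAKOTA’s actual interface; a real DoE driver would wrap a simulation code in place of the stand-in model.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def model(params: dict) -> float:
    """Hypothetical black-box model; a real workflow would call a solver here."""
    n, gauge = params["stiffener_count"], params["sheet_gauge_mm"]
    # Toy objective: penalize both excess material and insufficient stiffness.
    return 0.8 * n * gauge + 25.0 / (n * gauge)

# Grid of candidate parameter values to interrogate.
grid = [
    {"stiffener_count": n, "sheet_gauge_mm": g}
    for n, g in product(range(2, 9), (0.8, 1.0, 1.2, 1.6))
]

if __name__ == "__main__":
    # Evaluate candidates in parallel, the way a DoE driver farms out runs.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(model, grid))

    best_score, best_params = min(zip(scores, grid), key=lambda pair: pair[0])
    print(f"Best candidate: {best_params} with objective {best_score:.3f}")
```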

One thing is certain: The need for engineering compute capacity will not diminish anytime soon. In fact, it will explode exponentially, and Intel expects to be one of the centerpieces of that explosion. What this means is that workstations, where your innovative ideas first take shape, will be more powerful and more capable of handling even larger, more complex models locally at your desktop. Your opportunity to explore “what if” on an Intel® Xeon® processor-based workstation will be nearly interactive, and I do not expect systems like these to be relegated to the cloud.

DAKOTA will place unbelievable demands on high-performance computing (HPC) solutions. I expect the intelligent design agents that spawn from today’s DoE workflows to reach beyond tera-scale computing, and at Intel we are working on ways to meet those needs. It is not just about the CPU, the number of cores or the frequency; it is about the entire infrastructure and how data is plumbed around the system. It is about system balance: providing solutions that do not starve or drown parts of a computing architecture. Yes, there will be more cores and many-core solutions, and yes, there will be new ways to plumb data, including larger caches, new memory topologies, integrated I/O controllers and much more. Hard disk drives (HDDs) will be supplanted by solid-state drives (SSDs), and performance per watt will hit an all-time high, allowing denser processing centers capable of delivering exascale-class computing. More cores and higher-speed infrastructure, however, address only part of the issue. Exascale computing solutions also require programming tools that give users and independent software vendors (ISVs) access to massively parallel computing, that scale from multi-core to many-core architectures, and that support forward-scaling performance strategies so users and ISVs can quickly take advantage of new architectures as they become available.

By 2018, I expect Intel to provide 125 times the performance of today’s processors. To make this increase possible, engineers at Intel will push Moore’s Law as far as it can go. One approach to exascale-capable processor design is to stack chips and transistors on top of each other, so that processors are built more like cubes than traditional flat chips. For example, our recently announced Intel® Tri-Gate technology represents our first foray into 3D transistors, and it is just the beginning. The reason for a 3D, cube-shaped chip architecture is to move data in and out of the processor cores faster.

Join us on the ride, embrace change, and accelerate your innovation cycle.

If you are not currently using DoE workflows, you may want to investigate them: they offer an opportunity to explore more “what ifs” in less time. If you are already using DoE technologies, I’m sure you will appreciate how they will change in the years to come and help you optimize your product development cycle. We will have the compute power to meet your needs.

Joe Curley is director of Technical Computing Marketing at Intel Corp.
