Getting the Most from Parallel Systems

MATLAB from The MathWorks provides an easy way to take advantage of multicore and other parallel computing systems.

The next generation of high-performance computing is upon us, and we have to figure out how to make use of it. This generation involves processors with slower clock speeds but multiple cores, or self-contained execution engines. My dual-core laptop is now three years old, and I anticipate that my next purchase will have either four or eight cores.

The problem is that putting these cores to work accelerating one or more applications is far from straightforward. Most engineering applications are single-threaded: they follow only one execution path at a time. By their nature, they can execute on only a single core while the remaining cores sit more or less idle.

A few applications are multithreaded, and each thread can in theory be scheduled on a different core. This is an improvement, as long as the operating system can schedule the individual threads and the threads accomplish a substantial amount of work independently of one another.

The MathWorks is trying to change that. I had the opportunity to visit the sprawling campus of this engineering software company in suburban Boston and speak with high-performance computing manager Silvina Grad-Freilich. MATLAB is already multithreaded, she explained, meaning that it can execute in parallel those parts of the code it knows to be independent.


The MathWorks high-performance computing manager Silvina Grad-Freilich talks about how you can use MATLAB and associated tools to take advantage of multicore desktop systems, multicore and multiprocessor servers, and clusters and grids.

One capability of MATLAB is that its Parallel Computing Toolbox can automatically parcel out work based on the availability of units of computation, whether they are cores, clusters, or grids. Further, the Parallel Computing Toolbox enables users to run MATLAB applications on up to eight cores locally, taking advantage of the latest trends in desktop system processors.
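As a rough sketch of what this looks like in practice, opening a pool of local workers is only a few lines; newer toolbox releases use the parpool command (older releases used a matlabpool command instead), and the Parallel Computing Toolbox must be installed:

    % Open a pool of local MATLAB workers, typically one per core
    % (the toolbox caps the size of a local pool).
    pool = parpool('local');
    fprintf('Running on %d workers\n', pool.NumWorkers);

    % ... parallel work goes here ...

    delete(pool);   % release the workers when finished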

If you write your own MATLAB code, you can help it along still more. By using parallel constructs in place of, for example, for loops (assuming the repeated computations are independent of one another), you can explicitly tell those loops to execute simultaneously on whatever resources are available. In place of for, the MATLAB keyword is parfor, and the loop iterations are spread across all of the computational resources defined for the job, as in the sketch below.
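Here is a minimal sketch of the change; some_simulation stands in for a hypothetical user function whose iterations do not depend on one another:

    % Serial loop: one iteration at a time, on one core.
    results = zeros(1, 1000);
    for i = 1:1000
        results(i) = some_simulation(i);   % hypothetical independent computation
    end

    % Parallel loop: parfor hands iterations to the workers in the
    % current pool, whether they are local cores or cluster nodes.
    results = zeros(1, 1000);
    parfor i = 1:1000
        results(i) = some_simulation(i);
    end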

There is more to analysis than code. You may also have extremely large datasets. By breaking up a problem across different computers in a cluster or grid, you can keep a smaller portion of the data in memory on each machine during parallel computation and still collect and analyze the results afterward. If you can't run a problem on a desktop system because of memory limitations, maybe now you can, as the sketch below suggests.
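One way the toolbox supports this is with distributed arrays, which spread a single large matrix across the memory of the pool's workers. The sketch below assumes a pool is already open; the 20,000-by-20,000 size is merely illustrative:

    % Create a matrix spread across the workers' memory; no single
    % machine has to hold the whole thing.
    A = distributed.rand(20000);        % 20,000 x 20,000, illustrative size
    x = distributed.rand(20000, 1);

    b = A * x;                          % the multiply runs in parallel on the workers

    result = gather(b);                 % collect the (small) result for local analysis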

The end result is that you can take existing MATLAB code and run it in parallel on multiple cores, often with few or no changes. Depending on where you execute that code, the performance improvements can be significant. Best of all, you can largely reuse your existing MATLAB routines.

While not everyone uses MATLAB, it gives engineers the opportunity to make full use of the power of their desktop computers as well as inexpensive clusters and grids, including the Amazon EC2 cloud. Just be prepared to say goodbye to renting time on your favorite supercomputer.




About the Author

Peter Varhol

Contributing Editor Peter Varhol covers the HPC and IT beat for Digital Engineering. His expertise is in software development, math systems, and systems management. You can reach him at [email protected].
