Will small and midsize businesses (SMBs) find high performance computing (HPC) or supercomputing useful — that is, useful enough to pay for it? The rapid growth of on-demand HPC providers suggests they will.
Parallel Works, a startup spun out of the University of Chicago and Argonne National Laboratory, is the latest to bet on the emerging SMB supercomputing market. “Our goal is to build a sustainable business that enhances the design, research, and development efforts of our customers while supporting the development and innovation of parallel computing technology,” the company states on its homepage.
The company’s founder and CEO, Michael Wilde, is a software architect at Argonne National Laboratory and a Senior Fellow at the University of Chicago Computation Institute. The company officially began operating in spring 2016 with a beta test.
Parallel Works’ platform is built on Amazon Web Services (AWS) EC2. It also connects to other, larger HPC systems and supercomputers, such as those at the Ohio Supercomputer Center, according to Wilde. (The Ohio Supercomputer Center is also home to AweSim, a new startup offering HPC-driven simulation apps. For more, read “Simulation for the Masses,” Jan 2016.)
“We plan to expand the types of compute resources offered to include many more cloud and HPC platforms,” Wilde said. “The technology underlying Parallel Works (the Swift Parallel Scripting Language) has drivers to connect to most HPC systems and schedulers, so making new connections is relatively straightforward. This also enables us to connect to the customers’ in-house compute resources if they so desire, giving them the capability to burst to the cloud as additional capacity is needed.”
“Bursting” describes the practice of using on-demand capacity from public or private computing service providers, such as Parallel Works or AWS, to augment onsite resources when large-scale jobs exceed local capacity.
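The bursting pattern can be sketched in a few lines of Python. This is an illustrative scheduling sketch only; the function and the fixed onsite capacity are hypothetical, not part of any Parallel Works or AWS API:

```python
# Hypothetical sketch of "bursting": fill onsite cluster capacity first,
# and route overflow jobs to an on-demand cloud provider.
ONSITE_CAPACITY = 64  # cores free in the local cluster (illustrative)

def plan_burst(job_core_counts, onsite_free=ONSITE_CAPACITY):
    """Assign each job to 'onsite' while capacity lasts, else to 'cloud'."""
    plan = []
    for cores in job_core_counts:
        if cores <= onsite_free:
            plan.append(("onsite", cores))
            onsite_free -= cores
        else:
            plan.append(("cloud", cores))  # burst: rent on-demand capacity
    return plan

# Four jobs against 64 free cores: the 48-core job overflows to the cloud.
print(plan_burst([32, 16, 48, 8]))
```

Real schedulers weigh queue wait times, data-transfer costs, and license availability as well, but the core idea is the same: onsite resources stay the first choice, and the cloud absorbs the peaks.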
SMBs that need additional computing capacity can turn to public providers like AWS. But providers like Parallel Works and San Francisco-based Rescale, another on-demand HPC provider, add software and services that make their offerings more suitable for specialized workloads, such as machine learning or engineering simulation.
Wilde explains that the Parallel Works platform comes with a middleware and service layer designed to ease HPC-related functions. These include:
- compute management tools to spin up large cloud clusters quickly with a few mouse clicks (but with constraints to ensure the usage doesn’t get out of hand);
- a simple UI with validated input fields and drop-down options to make it easy enough for a general user to deploy studies;
- the Swift technology (an open source parallel scripting language developed by Wilde’s team) to orchestrate the workflow across all available compute resources onsite, offsite, and in the cloud;
- a collaborative development environment for engineers to build and encapsulate workflows.
Many of these components are designed to make HPC and simulation accessible to a much wider user base. The simple deployment UI, for instance, can be used by an engineer who has sufficient domain knowledge of the scenario to be simulated (for example, a car crash) but lacks HPC expertise. The encapsulated workflows can serve as guided templates for non-experts to run routine simulation jobs.
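The idea of a guided template with validated inputs can be sketched as follows. Everything here is hypothetical for illustration; the field names, allowed values, and job format are not Parallel Works’ actual interface:

```python
# Illustrative sketch of an "encapsulated workflow" as a guided template:
# a domain expert defines the valid inputs once; non-experts then fill in
# the form, and inputs are validated before any compute time is spent.
ALLOWED_IMPACT_SPEEDS = [30, 40, 56, 64]          # km/h (illustrative options)
ALLOWED_MESH_SIZES = ["coarse", "medium", "fine"]  # drop-down-style choices

def build_crash_study(impact_speed_kmh, mesh):
    """Validate form-style inputs and return a job description."""
    if impact_speed_kmh not in ALLOWED_IMPACT_SPEEDS:
        raise ValueError(f"impact speed must be one of {ALLOWED_IMPACT_SPEEDS}")
    if mesh not in ALLOWED_MESH_SIZES:
        raise ValueError(f"mesh must be one of {ALLOWED_MESH_SIZES}")
    return {"workflow": "frontal_crash",
            "speed_kmh": impact_speed_kmh,
            "mesh": mesh}

print(build_crash_study(56, "medium"))
```

Constraining inputs to vetted options is what lets a non-expert launch a routine study safely: invalid parameter combinations are rejected up front rather than wasting cluster time on a doomed run.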
Research institutions that pioneered the use of HPC for top-tier users are well-positioned to widen the market by making their services and products more accessible to SMBs.
The rise of on-demand HPC vendors is also forcing many traditional simulation software providers to rethink their strategy. Some may begin partnering with companies like Parallel Works and Rescale to offer on-demand HPC along with their software licenses. Such partnerships can eliminate one of the major hurdles in on-demand HPC usage — the complexity of purchasing compute time and software licenses from two different vendors.