By DE Editors
The Cornell University Center for Advanced Computing (CAC) is testing the performance of general-purpose GPUs with MATLAB applications in a new research collaboration with NVIDIA, Dell, and MathWorks.
This research will explore GPU computing capabilities for data manipulation in MATLAB applications running on NVIDIA GPUs. In particular, Cornell will focus on the use of multiple GPUs on the desktop via the MathWorks Parallel Computing Toolbox, and on a GPU cluster via MATLAB Distributed Computing Server.
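As a rough illustration of the desktop scenario described above, a minimal MATLAB sketch is shown below. It assumes a machine with two GPUs and uses Parallel Computing Toolbox functions (`parpool`, `spmd`, `gpuDevice`, `gpuArray`, `gather`); the specific workload is hypothetical and not taken from the Cornell research.

```matlab
% Hedged sketch: run independent work on multiple GPUs, one per worker.
% Assumes two GPUs are present; adjust the pool size to match your hardware.
parpool(2);                        % start one worker per GPU
spmd
    gpuDevice(labindex);           % bind each worker to a distinct GPU
    A = gpuArray.rand(4096);       % allocate a 4096x4096 matrix on that GPU
    s = gather(sum(A(:)));         % reduce on the GPU, copy the scalar back
end
```

Each worker in the `spmd` block selects its own device, so the two matrices are generated and summed concurrently on separate GPUs.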
Cornell is conducting this research on Dell PowerEdge C6100 servers with the PowerEdge C410x PCIe expansion chassis, which supports server connections to NVIDIA Tesla M2070 GPUs.
“The launch of this GPU capability with eight nodes (each with eight CPU cores) and eight NVIDIA Tesla M2070 GPUs (each with 448 CUDA cores) is extremely valuable, particularly for researchers needing to process large blocks of data in parallel,” says David Lifka, Cornell CAC director.
Cornell previously deployed a National Science Foundation-sponsored 512-core experimental MATLAB resource for the research community in partnership with Purdue University to provide a bridge to high-end national resources. More than 550,000 jobs ran on the experimental resource, which facilitated research, student learning, and Science Gateway applications.
Sources: Press materials received from the company and additional information gleaned from the company’s website.