Mercedes-Benz, Brought to You by Jeff Patton and NVIDIA

How long does it take to deliver a Mercedes-Benz? In the capable hands of Jeff Patton, a self-taught computer graphics artist, it took about an hour. But that’s when he was relying solely on his CPU to produce the digital renderings. Now, by switching to the GPU, he has reportedly found a way to churn out a new Mercedes in under 10 minutes.

In early 2010, Patton’s digital artworks caught the attention of Mercedes-Benz USA (MBUSA). This landed him an assignment to create images for the automaker’s website and print advertising campaigns. Most of the time, Patton’s clients provide him with a 3D file in .FBX or .MB (Autodesk Maya) format. He usually exports it to .FBX format for rendering in Autodesk 3ds Max. As 3ds Max data, the vehicle usually weighs around 1.3 to 2.5 GB, he estimated. A complete 3D scene comprises roughly 10 to 15 million polygons.

“Some of the metallic silver paints, dull [paints], or glossy rims [in the images] were the most computationally intense areas for this type [of] project,” Patton explained. “I base this on the amount of time the GPUs would take to refine those particular areas in some lighting and environment configurations.”

According to NVIDIA, “Since [Patton] began using iray with NVIDIA Quadro 6000 and Tesla C2070 GPUs on the Mercedes-Benz project, Patton’s renders have been running up to 7.5X faster than they had on mental ray and V-Ray running on the Intel Core i7-960 3.2GHz CPU.”
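The figures quoted above hang together arithmetically. A quick back-of-the-envelope sketch (using only the numbers stated in the article — roughly an hour per CPU render, and NVIDIA’s best-case 7.5X factor) shows why the GPU renders land “under 10 minutes”:

```python
# Sanity check on the article's numbers; purely illustrative.
cpu_minutes = 60.0   # ~1 hour per rendering on the CPU, per the article
speedup = 7.5        # NVIDIA's quoted best-case speedup factor

gpu_minutes = cpu_minutes / speedup
print(f"{gpu_minutes:.0f} minutes per render at 7.5X")  # prints "8 minutes per render at 7.5X"

# Consistent with the "under 10 minutes" claim in the opening paragraph
assert gpu_minutes < 10
```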

Another benefit of offloading rendering to the GPU, Patton found, was that it freed up his CPU for other tasks. “In so many ways, iray running on the NVIDIA GPUs allows me to work faster and create higher quality images,” he said. “I’m handling more images in less time, and I can do more things with that time I save—whether it’s taking on more work or spending more time with my family.”

At the last GPU Technology Conference, NVIDIA previewed a cloud-hosted rendering option in development for 3ds Max users (read “Maxed Out on Cloud in iRay,” Sep 23, 2010).

“I’m not 100% sure if my clients would allow me to use such options for security reasons,” noted Patton. “I haven’t needed to approach any of my clients with this question to date ... I do believe they would have some legitimate concerns about putting intellectual property out on a cloud system. I’m not sure what kind of security measures will be in place on cloud systems, but that’s certainly an area I’ll need to investigate further.”

Another reason he remains cautious about cloud-hosted rendering is his need for large amounts of memory. He explained, “For example, if the cloud system were based on GPUs with 3GB of memory or less, then I wouldn’t be able to use them unless I was able to spend time stripping the scene of any unnecessary parts. Sometimes it’s simply not possible to remove enough non-critical geometry or textures for scenes to fit within a 3GB or smaller footprint, such as flyby animations or 360-deg. rotations.”
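The trade-off Patton describes can be sketched as a simple budget check. The sketch below is illustrative only (not Patton’s actual tooling): the scene sizes come from the 1.3–2.5 GB range quoted earlier in the article, while the function name and the working-memory margin are assumptions.

```python
# Illustrative sketch: will a scene fit a GPU's memory budget, or does it
# need stripping of non-critical geometry and textures first?
def fits_gpu_memory(scene_gb: float, gpu_memory_gb: float = 3.0,
                    overhead_fraction: float = 0.2) -> bool:
    """Return True if the scene fits within the GPU's memory after
    reserving a margin (overhead_fraction is an assumed working-memory
    reserve, not a figure from the article)."""
    budget = gpu_memory_gb * (1.0 - overhead_fraction)
    return scene_gb <= budget

# A 1.3 GB vehicle scene fits a 3 GB card's budget; a 2.5 GB scene does not,
# matching Patton's point that some scenes simply can't be trimmed enough.
print(fits_gpu_memory(1.3))  # True
print(fits_gpu_memory(2.5))  # False
```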

NVIDIA and other GPU makers have consistently hailed the greater number of cores available in the GPU as one of the reasons artists like Patton are able to render digital images dramatically faster.

Patton said, “I was not able to test any newer CPUs or dual CPU systems against the GPUs.”

As a countermeasure, CPU makers like Intel are now developing chipsets that incorporate graphics processors and general-purpose processors on the same die, hoping to offer a performance boost that rivals or surpasses GPU capacity.



About the Author

Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.
