SIGGRAPH 2010: Pushing Pixels and Processors to the Limit

Still from The Wonder Hospital by Beomsik Shimbe Shim, presented at the Electronic Theater at SIGGRAPH 2010. Copyright © 2010, all rights reserved by Shimbe.

Still from the animated feature The Light of Life by Daihei Shibata (http://www.vimeo.com/9325052), part of the computer animation festival at SIGGRAPH 2010 and proof of the importance of realistic ray tracing in the new era of digital content.

Earlier this week, I flew to Los Angeles to join the people who skinned the 10-foot-tall Na’vi in Avatar, polished the metallic armor worn by Iron Man, and helped a Viking kid soar in How to Train Your Dragon. I found them trading secrets and demo reels at SIGGRAPH 2010.

The annual pilgrimage for animators, movie makers, and game developers also attracts those who want to hire them. Intel booth staff handed out postcards that read, “Intel is Hiring,” pointing to Intel job postings. Sitting before a crayon-colored dollhouse, Pixar Canada recruiters interviewed people on the spot. Clutching their portfolios (and hiding their anxiety), many fresh-faced graduates queued up for the Job Fair.

The Emerging Technologies Pavilion, always popular with the hands-on crowd, has become the place to get a glimpse of the future. This year’s installations include a soup can-style 360-degree stereoscopic projection device, a touch-enabled stereoscopic terrain navigation table, and fibrous textures that light up in response to human touch. Promising or puzzling (sometimes both), these working prototypes are tangible proof that innovation is alive and well.

The kinds of innovation these pixel pushers undertake—high-resolution rendering, full-length animated features, stereoscopic imaging, to name but a few—require tremendous computing power. One way to meet these demands is to acquire more CPUs and GPUs; another is to take better advantage of existing multicore processors through parallel computing.
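
To make the parallel-computing option concrete, here is a minimal, hypothetical sketch (not code from any vendor at the show): because each scanline of a rendered frame can be shaded independently, a renderer can fan the work out across every available CPU core. All function names and values below are invented for illustration.

```python
# Illustrative sketch of "embarrassingly parallel" rendering:
# each scanline is computed independently, so a standard process
# pool can spread the work across all CPU cores.
from multiprocessing import Pool

WIDTH, HEIGHT = 64, 48  # toy resolution for the sketch


def shade(x, y):
    """Stand-in for a per-pixel ray trace: returns a grayscale value."""
    return (x * 255 // (WIDTH - 1) + y * 255 // (HEIGHT - 1)) // 2


def render_row(y):
    # One unit of parallel work: shade every pixel on scanline y.
    return [shade(x, y) for x in range(WIDTH)]


def render_parallel(processes=4):
    # The pool hands scanlines to worker processes; more cores,
    # more scanlines in flight at once.
    with Pool(processes) as pool:
        return pool.map(render_row, range(HEIGHT))


if __name__ == "__main__":
    image = render_parallel()
    print(len(image), len(image[0]))  # 48 64
```

Because scanlines share no state, the speedup scales roughly with core count until memory bandwidth, not arithmetic, becomes the bottleneck—which is exactly why the chip makers’ extra cores matter to renderers.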

CPU and GPU: Friends or Foes?

Two of the largest booths in the exhibit hall belonged to Intel and NVIDIA, the CPU and GPU giants. In one aisle, Intel touted the horsepower of Intel Core i7 and Intel Xeon chips. In the next aisle, NVIDIA showed off its new Fermi-based Quadro cards: Quadro 4000, 5000, and 6000. (Intel’s presentation: “How leading content creation and gaming applications take advantage of Intel Core i7 and Intel Xeon platforms”; NVIDIA’s talk: “iray—CUDA accelerated photorealistic rendering.”)

In the last five years, multicore computing has become the norm. The latest-generation Intel Xeon processors are available with up to eight general-purpose computing cores. The Quadro Plex 7000, one of NVIDIA’s latest offerings, houses 896 CUDA processing cores. Even the entry-level consumer notebook Dell Inspiron 1545 now comes with a dual-core Intel chip. The Dell Precision M4500, a workhorse starting at $1,239, comes with quad-core Intel Core i7 chips.

But hardware makers like Intel and NVIDIA must now encourage software developers to catch up, to write (or rewrite) code that takes advantage of parallel computing, to harness the additional computing cores they’ve already sold.

Tony Neal-Graves, general manager of Intel’s workstation group, said, “We’re making it easier for people to [create computing clusters] through Intel Virtualization Technology (Intel VT).” The technology consolidates multiple computing environments into a single server or PC, allowing you to create a virtual computing cluster from unoccupied cores.

One of the factors expected to drive computing demand, noted Neal-Graves, is “the migration from overnight rendering to real-time rendering.” In the past, digital artists might have been willing to wait overnight for a rendered view of their scene; today, they demand—and quite often get—rendering results with little or no delay.
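
The shift Neal-Graves describes typically rests on progressive refinement: an interactive renderer displays a noisy estimate immediately, then keeps averaging new samples into it so the image converges while the artist watches. A toy sketch (illustrative only; the function names and numbers are invented, not taken from any product at the show):

```python
# Progressive refinement: show a rough estimate at once, then keep
# averaging new samples so the on-screen image converges over time.
import random


def noisy_sample(true_value=0.5, noise=0.2):
    # Stand-in for one Monte Carlo sample of a pixel's brightness;
    # a real path tracer would trace a light path through the scene.
    return true_value + random.uniform(-noise, noise)


def progressive_render(passes):
    # Each intermediate mean is what an interactive renderer would
    # display: noisy after a few passes, clean after thousands.
    accum, estimates = 0.0, []
    for n in range(1, passes + 1):
        accum += noisy_sample()
        estimates.append(accum / n)
    return estimates
```

After a handful of passes the estimate is rough but viewable; after a few thousand it sits close to the true value. Trading early accuracy for instant feedback is what makes “real-time” preview rendering feel instantaneous.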

Bunkspeed SHOT, previously powered by Luxion's CPU-based technology, is now powered by mental images' GPU-accelerated iray.

NVIDIA's 3D Vision Pro product line anticipates the rise of stereoscopic content development.

Rendering Wars: The Sequel to CPU vs. GPU

Two simplified rendering programs at the show—Luxion KeyShot and Bunkspeed SHOT—provided a perfect supplement to the CPU vs. GPU saga. Originally there was but one product: Bunkspeed HyperShot, developed and marketed by Bunkspeed and powered by CPU-based rendering technology from Luxion. But in November 2009, license negotiations fell apart, causing a rift between Bunkspeed and Luxion.

Bunkspeed now offers Bunkspeed SHOT, powered by iray from mental images (a wholly owned subsidiary of NVIDIA). Luxion launched its own product, branded KeyShot, powered by the original CPU-based technology. At NVIDIA’s booth, Bunkspeed CEO Philip Lunn demonstrated the program’s GPU acceleration, an option not available when the product was HyperShot. A few yards away, Luxion VP of marketing Thomas Tegner taunted rivals with his own presentation: “KeyShot—Who needs a GPU? Serious real-time production quality rendering.” (For more on Bunkspeed SHOT and Luxion KeyShot, read “One Scene, Two Shots.”)

Throw HPC into the Mix: Cloud-Hosted Rendering

What if you just want an animation clip or a photo-realistic image and simply don’t care whether your rendering is processed on a CPU or a GPU? You may be interested to know that rendering is also moving into the cloud, or remote HPC (high-performance computing).

Two early incarnations of cloud-hosted rendering platforms come from PEER 1 and Penguin Computing on Demand. Both are powered by mental images’ RealityServer, a web-accessible rendering system built on NVIDIA Tesla GPUs. RealityServer was created specifically to deliver near-instantaneous, real-time rendering over the web, allowing users to create high-resolution images and animations regardless of their computing device. Running a netbook, low-end consumer notebook, or an iPad? Connecting from a Windows machine or a Mac? It won’t make a difference to the cloud-hosted application.

Autodesk recently launched its own cloud-hosted rendering application, dubbed Autodesk Neon, as a technology preview at Autodesk Labs. The program proves it’s possible to let people remotely render high-resolution scenes saved in AutoCAD by uploading them through a standard browser. At present, however, the quality of the rendered images undermines Neon’s appeal.

However, Picture Shooter from Mackevision, a similar application exhibited at SIGGRAPH, shows greater promise. This browser-based rendering application lets you upload a 3D model, apply materials, set a background, render previews (updates are near-instantaneous), then order digital prints online. Picture Shooter is powered by Chaos Group’s V-Ray RT, currently using CPUs. But a statement at the company’s site hints at possible GPU acceleration: “The V-Ray RT architecture is very robust and if needed, it can seamlessly be implemented to allow new hardware acceleration technologies in the future.”

Hardware: Multi-Prong Multicore

Dell, one of the few remaining workstation vendors, recognizes the need to appease both the CPU and GPU camps. The company showcased its workstations at NVIDIA’s booth and promoted its business-class towers (T3500, T5500, and T7500), now available with NVIDIA’s latest Fermi graphics cards. It also delivered talks under the Intel banner (“Impact of multicore workstations to digital content creation,” by Don Maynard, Dell’s senior product marketing manager).

OpenGL 4.1: Next Step in Parallel Computing

The Khronos Group, a member-supported consortium that promotes royalty-free open standards for 2D and 3D graphics acceleration, has good reason to be at SIGGRAPH. It has just released what it calls “another significant release”: OpenGL 4.1. Delivered just five months after the launch of OpenGL 4.0, the new specification is written for 64-bit computing and offers better interoperability with the OpenCL and WebGL specifications, creating more opportunities for parallel processing and browser-based 3D visualization.

Double Trouble in the Future

The proliferation of stereoscopic devices—the stereoscopic terrain-navigation table, glasses-based stereoscopic displays, glasses-free stereoscopic monitors, to name but a few—suggests that many animators, movie makers, and digital artists must now render their footage and images twice, once for the left-eye view and once for the right-eye view, to create the perception of depth. (NVIDIA’s 3D Vision Pro product line—comprising Quadro family GPUs, stereoscopic projectors, stereoscopic display units, and 3D glasses—is designed to capture this market.) The popularity of smaller, portable devices—netbooks, iPhones, and iPads, for example—may also catapult high-quality ray-traced rendering and 3D visualization into the cloud, freeing content creators from their stationary desktops.
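
The doubled workload is easy to see in code. In a common stereo setup, the camera is offset half the interaxial distance to each side, and the scene is rendered once per eye. A hedged sketch (the function names and the 0.065 m eye separation are illustrative assumptions, not drawn from any product at the show):

```python
# Stereoscopic rendering doubles the work: one full render pass per eye,
# with the camera offset half the interaxial distance to each side.
def eye_positions(camera_x, interaxial=0.065):
    """Return (left_x, right_x) camera positions for stereo rendering.

    0.065 m is a common approximation of human interpupillary distance.
    """
    half = interaxial / 2.0
    return camera_x - half, camera_x + half


def render_stereo_frame(render_pass, camera_x):
    # Each frame now costs two full render passes -- the doubling of
    # workload that stereoscopic content imposes on CPUs and GPUs.
    left_x, right_x = eye_positions(camera_x)
    return render_pass(left_x), render_pass(right_x)
```

Everything downstream doubles with it: twice the shading work, twice the frame-buffer traffic, and twice the output to store or stream—hence the hardware vendors’ enthusiasm for the stereo market.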

Many technologies found at SIGGRAPH may be too playful, too abstract, or too eccentric to be market-ready. That’s quite consistent with the SIGGRAPH tradition: creativity takes precedence over business plans and go-to-market strategies. But these pixel pushers—many of them students with small budgets—are also pushing the limits of what can be done with personal and portable computing devices, forcing CPU, GPU, and HPC providers to pump more firepower into their products. That’s something everyone is bound to benefit from.



About the Author

Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.
