Photorealistic rendering is a matter of manipulating variables to trick the eye into mistaking pixels for reality—say those glints of sunlight off a sports car’s rims or the deepening shadows beneath the latest consumer appliance sitting on a virtual granite countertop. That manipulation of transparency, refraction, shading, translucency, texture mapping, motion blur and more is increasingly handled by graphics processing units (GPUs).
In the not-too-distant past, rendering was an afterthought of design: something done after the product was finalized, mainly because it was too slow to run concurrently with design. That has changed thanks to technology advancements that enable interactive rendering and visualization. Interactive rendering relies on ray tracing and global illumination, and the physics of light and materials. Ray tracing means firing light rays into the scene and allowing them to bounce on and through materials as they would in the real world. As the light rays bounce around, they fill the volume of the scene with secondary bounces, so objects act as light sources themselves. This bounced light, which can illuminate all of the nooks and crannies of the scene, even objects that aren’t directly in the path of the sun or a light bulb, is called global illumination (GI).
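As a toy illustration (not any renderer’s actual code), the way bounced light accumulates can be sketched in a few lines: each bounce contributes the light it sees directly, scaled by the surface’s reflectivity (albedo), and recursion supplies the secondary bounces that make up global illumination. The albedo, emission and bounce-depth figures here are assumptions chosen for clarity.

```python
def trace(depth, albedo=0.5, emitted=1.0, max_depth=8):
    """Toy radiance estimate for a diffuse scene: each bounce picks up
    the surface's albedo, so indirect light forms a geometric series."""
    if depth >= max_depth:
        return 0.0  # ray "expires" after a fixed number of bounces
    # Direct light seen at this bounce, plus one recursive
    # secondary bounce (the global illumination term).
    return emitted + albedo * trace(depth + 1, albedo, emitted, max_depth)

# With albedo 0.5, total light converges toward emitted / (1 - albedo) = 2.0,
# i.e. indirect bounces double the brightness of direct light alone.
print(round(trace(0), 3))
```

The point of the sketch is the last comment: even in this one-dimensional caricature, secondary bounces contribute as much light as the direct term, which is why scenes rendered without GI look flat and dark in the corners.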
The rub of physically based rendering is that it is much more expensive computationally. Simulating billions of light rays, bouncing around in a scene many times before they expire, requires a lot of computing power. Fortunately, GPUs have dramatically accelerated photorealistic rendering.
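Where do the billions come from? A quick back-of-the-envelope count makes it concrete. The sample counts and bounce depth below are assumed, illustrative figures, not any particular renderer’s settings:

```python
# Rough ray count for a single rendered frame (assumed figures):
width, height = 3840, 2160    # one 4K frame
samples_per_pixel = 256       # rays launched per pixel to reduce noise
avg_bounces = 4               # primary ray plus a few secondary bounces

rays = width * height * samples_per_pixel * avg_bounces
print(f"{rays:,} rays")       # roughly 8.5 billion rays for one frame
```

Every one of those rays involves intersection tests and shading math, which is exactly the kind of independent, repetitive arithmetic that maps well onto thousands of GPU cores.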
Many GPU-based renderers are built on CUDA, NVIDIA’s massively parallel computing platform and programming model, but designers don’t need a supercomputer or top-of-the-line CPUs to use them. They run on the same Quadro graphics cards design engineers already use to power their professional CAD and simulation applications. Users can stretch their hardware budget by pairing lower-end CPUs with more powerful GPUs, which deliver greater benefit in both real-time display performance and rendering compute performance.
Workstation Configuration Suggestions
Rendering speed comes down to the software you choose and the hardware it is designed to use. Renderers were initially created to use the CPU. KeyShot and Chaos Group’s V-Ray Adv, for example, scale with a CPU’s clock speed and core count. If you are using a CPU-based renderer, then CPU choice is the critical factor to consider when configuring a workstation. If you have a demanding CPU rendering workload and your budget allows, then invest in dual, multi-core CPUs. With V-Ray Adv or KeyShot, your choice of graphics card would be determined by the other software you plan to use on the workstation.
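“Scales with clock speed and core count” can be read as a crude multiplicative model, which is useful for comparing workstation configurations on paper. The core counts, clock speeds and the linear-scaling assumption below are all simplifications for illustration:

```python
def cpu_render_score(cores, ghz):
    """Crude relative throughput model for a CPU renderer, assuming
    performance scales linearly with both core count and clock speed."""
    return cores * ghz

single = cpu_render_score(8, 3.6)       # one 8-core, 3.6 GHz CPU
dual = 2 * cpu_render_score(14, 2.6)    # dual 14-core, 2.6 GHz CPUs
print(round(dual / single, 2))          # dual multi-core wins despite lower clocks
```

The takeaway matches the advice above: for CPU rendering, many slower cores across two sockets typically beat fewer fast cores, even though the slower clocks would hurt lightly threaded CAD work.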
To take advantage of the massively parallel nature of GPUs for rendering, many rendering software providers now offer GPU-based rendering. Next Limit’s Maxwell Render, for example, began supporting GPUs with Release 4 in 2016. Upon its release, the company stated that GPU rendering can be up to 10X faster than CPU rendering. Chaos Group’s V-Ray RT GPU renderer was officially renamed V-Ray GPU in March 2018 because of significant improvements the company made to the software’s architecture.
Like NVIDIA’s Iray, OTOY’s OctaneRender, and Redshift Rendering Technologies’ Redshift, V-Ray GPU was created to make use of NVIDIA GPUs. Many of these rendering engines are available as stand-alone products or as plug-ins to popular design software, including Maya, 3ds Max, Modo, Rhino, Revit, Blender and more. Rendering on GPUs is more cost-effective than on CPUs because as you scale, GPUs deliver more rendering throughput per dollar.
Because GPU rendering scales well across multiple GPUs, the choice of which video card and how many to use comes down to form factor and budget. If you prefer a portable workstation, you likely must forego the option of multiple graphics cards for the benefits of mobility, extended battery life and thin, lightweight design. Still, portable workstations equipped with even mid-range NVIDIA GPUs exceed the minimum requirements for GPU-based rendering. If your rendering needs are greater, then larger workstations like the Dell Precision 7920 Tower can be configured with multiple graphics cards, all the way up to three NVIDIA Quadro P6000s or even GV100s. (Even the mid-range NVIDIA Quadro P5000 card features 2,560 CUDA cores.)
As a general rule of thumb, scaling across multiple, lower-cost GPUs is the most cost-effective means of rendering and will meet the demands of most design engineers. There is now also the option to use both CPUs and GPUs to render. Last year’s release of V-Ray 3.6 introduced hybrid rendering. V-Ray GPU allows you to take advantage of the power of GPUs for rendering, and with hybrid rendering you essentially get a free speed boost from CPU cores that would otherwise sit idle.
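The scaling argument can be sketched with an idealized frame-time model. Real multi-GPU and hybrid rendering carry some overhead, so the near-linear scaling, the 30-minute baseline and the CPU contribution below are all assumptions for illustration:

```python
def render_time(base_minutes, gpus=1, gpu_speed=1.0, cpu_speed=0.0):
    """Estimated frame time if rendering scales near-linearly across
    devices (an idealization; real-world scaling has some overhead)."""
    total_speed = gpus * gpu_speed + cpu_speed
    return base_minutes / total_speed

one_gpu = render_time(30)                         # 30 min on a single GPU
two_gpus = render_time(30, gpus=2)                # roughly halves the time
hybrid = render_time(30, gpus=2, cpu_speed=0.2)   # idle CPU chips in ~10% more
print(one_gpu, two_gpus, round(hybrid, 1))
```

Under this model, a second GPU cuts the frame time roughly in half, and letting the CPU contribute in hybrid mode shaves off another slice at no extra hardware cost, which is the appeal of V-Ray 3.6’s hybrid mode.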
You can have the latest rendering software and top-of-the-line GPUs, but if your monitor is not up to par, you’re not getting the whole picture when you render. Fortunately, today’s 4K and even 8K monitors are up to the task. For example:
- The Dell 43 Ultra HD 4K Multi Client Monitor: P4317Q is like having four monitors in one. The 43-in. monitor supports up to four simultaneous inputs, from Full HD to Ultra HD 4K.
- The Dell UltraSharp 32 Ultra HD 4K Monitor with PremierColor: UP3216Q provides precise, accurate colors right out of the box, according to the company.
- The Dell UltraSharp 27 4K Monitor: U2718Q features a color depth of 1.07 billion colors as well as the InfinityEdge thin bezel that is well-suited for multi-monitor setups.
- The Dell UltraSharp 32 8K Monitor: UP3218K is the world’s first 32-in., 8K monitor, according to Dell. It features a 33.2 million pixel resolution and a pixel density of 280ppi for incredibly sharp, realistic visuals.
If you’re looking for the latest and greatest, and rendering speed is crucial enough to your workflow to support the investment, look no further than NVIDIA’s new Quadro GV100. The first workstation-class GPU based on the AI-powered Volta architecture was announced at the GPU Technology Conference in March. “The new Quadro GV100 packs 7.4 TFLOPS double-precision, 14.8 TFLOPS single-precision and 118.5 TFLOPS deep learning performance, and is equipped with 32GB of high-bandwidth memory capacity,” writes NVIDIA’s Bob Pette in a blog post. “Two GV100 cards can be combined using NVIDIA NVLink interconnect technology to scale memory and performance, creating a massive visual computing solution in a single workstation chassis.”
The GV100 represents the state-of-the-art in real-time, GPU-based rendering on the workstation. It provides deep learning-accelerated denoising performance for ray tracing and a significant speed boost.
“V-Ray GPU is 47% faster on NVIDIA’s new Quadro GV100 GPU than it is on their previous Quadro GP100 flagship,” writes Chaos Group’s Phillip Miller in a blog post. “It’s also 10 to 15 times faster than an Intel Core i7‑7700K — which is pretty impressive given we do our best to ensure the CPU performance of V-Ray GPU stays on par with that of V-Ray. Through our continued support of NVLink, a system housing two GV100s provides 64GB of memory for handling large scenes — all interactively.”
GPU acceleration makes rendering so fast that it can be used interactively. This means design engineers can work normally, evaluating the appearance of materials and lights, as well as changes to the geometry of their designs, with almost no disruption to the design workflow. GPUs are easily ganged together, so if one graphics card isn’t fast enough, a second graphics card will make the render almost twice as fast. What used to take a farm of computers running overnight can now be done interactively on a desktop.