Professional graphics cards continue a steady march toward greater speed and capability.
PCI Express, faster RAM, and programmable graphics architectures are pushing the new generation of graphics boards to new heights. As programs catch up and begin to tap into the new bandwidth and processing power, users are in for a serious jump in performance and capability, including some applications you probably haven’t considered.
The biggest change in graphics cards over the last year is the move from AGP to PCI Express. “We were getting close to the limits of AGP 8X,” says Jeff Little of 3Dlabs. “Certainly over the next couple of years, system capabilities and application requirements would have broken the boundaries of AGP. The industry really needed that next step.”
3Dlabs Wildcat Realizm 800 graphics accelerator.
That next step was PCI Express. Pumping roughly 4GBps in each direction, PCI Express offers about four times the bandwidth to and from the graphics card of AGP 8X. The transition to PCI Express, now about a year old, is already more or less complete. Major OEMs are selling PCI Express products almost exclusively. “We carry a line of AGP cards for people who want to upgrade,” says Little, “but it’s really, really hard to buy a brand new AGP workstation.
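Little’s “about four times” figure is easy to check with back-of-the-envelope arithmetic. This Python sketch assumes the commonly cited AGP 8X peak of roughly 2.1GBps (a figure not given in the article) and counts both PCI Express directions:

```python
# Rough bandwidth comparison between AGP 8X and 16-lane PCI Express.
# The AGP 8X figure is the commonly cited spec; numbers are approximate.
AGP_8X_GBPS = 2.1          # one direction only; AGP is not full-duplex
PCIE_X16_GBPS = 4.0        # per direction; PCI Express is full-duplex

total_pcie = PCIE_X16_GBPS * 2   # upstream + downstream
ratio = total_pcie / AGP_8X_GBPS

print(f"PCIe x16 aggregate: {total_pcie:.1f} GBps")
print(f"Roughly {ratio:.1f}x the usable bandwidth of AGP 8X")
```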
“There are applications that won’t see much benefit from PCI Express,” admits Little. “But as model complexity increases and applications take advantage of the programmable nature of new graphics architectures, we expect more and more processing to take place on the graphics card itself. You’ll need higher bandwidth connectivity to the graphics card, and PCI Express enables that.”
That new programmability in graphics processing units (GPUs) is being put to work to run a variety of shaders — small programs that create material and scene effects. “These are not unlike the shaders available via OpenGL,” says Jeff Brown of NVIDIA. “Now, however, the shader programs run on the GPU.
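To see what a shader actually does, here is a CPU-side Python sketch of the idea — illustrative only; real shaders run on the GPU and are written in a shading language such as GLSL or Cg:

```python
# Illustrative only: a "shader" is a small function run for every pixel.
# Real shaders execute on the GPU; this just mimics the idea on the CPU.

def diffuse_shader(normal, light_dir, base_color):
    """Simple Lambertian diffuse lighting for one pixel."""
    # Dot product of surface normal and light direction, clamped to >= 0
    intensity = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * intensity for c in base_color)

# One pixel on a surface facing the light head-on: full intensity
pixel = diffuse_shader(normal=(0.0, 0.0, 1.0),
                       light_dir=(0.0, 0.0, 1.0),
                       base_color=(0.8, 0.2, 0.2))
print(pixel)   # (0.8, 0.2, 0.2)
```

The GPU runs such a function for every pixel in parallel, which is why pushing this work off the CPU pays so well.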
The NVIDIA Quadro FX-4400.
“In addition, in the last generation or so we’ve moved to floating-point frame buffers, where color is described by IEEE FP32 32-bit floating-point math, with a 128-bit floating-point number for each pixel. Now you have a huge palette of colors and light with which to make your graphics look real.”
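Brown’s 128-bit figure follows directly from storing four color channels (red, green, blue, and alpha, assuming an RGBA layout) at 32 bits each:

```python
# Four color channels (R, G, B, A), each an IEEE FP32 value,
# give the 128 bits per pixel Brown describes.
CHANNELS = 4
BITS_PER_CHANNEL = 32

bits_per_pixel = CHANNELS * BITS_PER_CHANNEL
print(bits_per_pixel)        # 128

# Size of one such frame buffer at 1920 x 1080, in decimal megabytes
pixels = 1920 * 1080
mb = pixels * (bits_per_pixel // 8) / 1e6
print(f"{mb:.1f} MB")        # ~33.2 MB
```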
It’s as though the graphics processor has evolved from calculator to full-fledged computer, Brown says, giving tremendous flexibility in applying effects to objects on the screen in real time. Historically, materials and effects (fog, for example) were built into the rendering application, with much of the computation done on the system CPU. Now the programmability is pushed down to the graphics card, freeing the CPU to spend time doing other things. “If the designer applies a shot-peen aluminum look,” says Brown, “the application just tells the graphics board to render that material.”
Give engineers a new, high-speed computer, and they’ll put it to use, even if it is hiding on the graphics card. “People are starting to use that programmability and precision to run other kinds of programs on the GPU,” says Brown, “such as FEA and CFD. You’ve got IEEE FP32 floating-point precision, and the raw horsepower is quite high.” The nascent field goes by the name of GPGPU (General Purpose computation on GPUs). You can read more about it at gpgpu.org.
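The workloads GPGPU targets are data-parallel: the same small computation repeated independently across a large grid, as in FEA or CFD solvers. This hypothetical pure-Python sketch shows that structure with one-dimensional heat diffusion; on a GPU, every grid point would be updated in parallel rather than in a loop:

```python
# A data-parallel stencil update: one Jacobi relaxation sweep on a
# 1-D heat-diffusion grid. On a GPU each grid point would get its
# own thread; this loop just shows the structure of the computation.

def jacobi_sweep(grid):
    """Average each interior point with its neighbors (fixed boundaries)."""
    return [grid[0]] + [
        (grid[i - 1] + grid[i + 1]) / 2.0
        for i in range(1, len(grid) - 1)
    ] + [grid[-1]]

temps = [100.0, 0.0, 0.0, 0.0, 0.0]   # hot left boundary, cold right
for _ in range(50):
    temps = jacobi_sweep(temps)
print([round(t, 1) for t in temps])    # settles toward a linear gradient
```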
Digital Video Interface
Thanks to the soaring popularity of flat-panel LCD monitors, you’ll now find digital output on nearly every professional graphics card in the form of the Digital Visual Interface (DVI). Fortunately for those of us still wedded to CRTs, the common DVI-I (DVI-integrated) connector supplies analog output via an inexpensive dongle.
DVI comes in two forms, or speeds, called single-link and dual-link. Single-link DVI provides a maximum pixel clock of 165MHz (roughly enough for 60Hz at 1920 x 1080). Dual-link DVI is essentially two single-link channels combined, for a total of 330MHz. If you want to run one of Apple’s new 30-inch Cinema displays at 2560 x 1600, you’ll need dual-link DVI.
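These limits are easy to sanity-check. The sketch below estimates the pixel clock a display mode demands, ignoring blanking intervals (real DVI timings need somewhat more, so these are lower bounds):

```python
# Approximate pixel-clock demand for a display mode, ignoring
# blanking intervals (real modes need roughly 5-25% more).
def pixel_clock_mhz(width, height, refresh_hz):
    return width * height * refresh_hz / 1e6

SINGLE_LINK_MHZ = 165.0
DUAL_LINK_MHZ = 330.0

# 1920 x 1080 at 60Hz fits within single-link DVI...
print(pixel_clock_mhz(1920, 1080, 60))   # ~124.4 MHz

# ...but Apple's 30-inch Cinema Display at 2560 x 1600 does not.
print(pixel_clock_mhz(2560, 1600, 60))   # ~245.8 MHz, over the 165MHz limit
```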
Despite its suggestive name, dual-link DVI will only drive a single display. With the trend toward dual displays, most workstation graphics boards support dual displays via multiple DVI connectors. These are generally single-link connectors, but some new, high-resolution displays, such as IBM’s 9.2 megapixel T221 LCD monitor (3840 x 2400), require two dual-link DVI connections for maximum resolution and refresh. You’ll only find those on very high-end cards. You can drive such displays with one dual-link channel, if your application will tolerate very low refresh rates.
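That refresh penalty is straightforward to estimate. Ignoring blanking intervals (so this is a best case), a single dual-link channel caps the T221 well below 60Hz:

```python
# Best-case refresh for IBM's 3840 x 2400 T221 over one dual-link
# DVI connection, ignoring blanking intervals.
DUAL_LINK_MHZ = 330.0
pixels = 3840 * 2400                    # 9.2 megapixels
max_refresh = DUAL_LINK_MHZ * 1e6 / pixels
print(f"{max_refresh:.0f} Hz")          # roughly 36 Hz at best
```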
More RAM, Faster!
And finally, there’s RAM. Not surprisingly, you’ll need more than ever before. Greater display density — whether from dual displays, high-resolution displays, or even dual high-resolution displays — calls for more onboard memory. How much more?
“That can certainly get confusing,” says 3Dlabs’ Jeff Little. “For example, our Wildcat Realizm products can use anywhere from as few as 15 bytes to as many as 101 bytes to represent each pixel on the display, depending on features enabled at the time. … A 1920 x 1080 resolution display would take 31MB of frame buffer memory. In the full-featured case this would change to 209MB of frame buffer memory. A card with only 128MB … couldn’t support that.”
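Little’s numbers check out (using decimal megabytes):

```python
# Frame-buffer arithmetic for a 1920 x 1080 display at the minimal
# (15 bytes/pixel) and fully featured (101 bytes/pixel) ends of the
# Wildcat Realizm range Little describes.
pixels = 1920 * 1080

minimal_mb = pixels * 15 / 1e6
full_mb = pixels * 101 / 1e6

print(f"{minimal_mb:.0f} MB")   # ~31 MB
print(f"{full_mb:.0f} MB")      # ~209 MB, too big for a 128MB card
```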
It’s worse if you want to drive multiple displays. And new programmable architectures mean programs, data, and textures now reside on the graphics card as well. Not surprisingly, 128MB and 256MB cards are becoming the norm, with high-end cards at 512MB or more.
For the very best graphics performance, high-end cards feature GDDR3 memory. Combining a wide, 256-bit data path and clock speeds up to 600MHz with a simplified architecture and reduced power draw, GDDR3 is clearly the graphics memory of the future. For now, at least.
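Assuming the 600MHz figure refers to the memory clock (GDDR3, like other DDR memory, transfers data on both clock edges), peak theoretical bandwidth works out as follows:

```python
# Peak theoretical bandwidth of a GDDR3 interface, assuming the
# article's 600MHz figure is the memory clock. GDDR3 transfers data
# on both clock edges, for an effective 1200 MT/s.
BUS_WIDTH_BITS = 256
CLOCK_MHZ = 600
TRANSFERS_PER_CLOCK = 2

bytes_per_transfer = BUS_WIDTH_BITS // 8        # 32 bytes per transfer
gb_per_sec = bytes_per_transfer * CLOCK_MHZ * TRANSFERS_PER_CLOCK / 1e3
print(f"{gb_per_sec:.1f} GBps peak")            # 38.4 GBps
```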
Writer and artist Mark Clarkson‘s latest book is Photoshop Elements by Example. Visit him on the Web at markclarkson.com. You can also send e-mail about this article addressed to email@example.com.