What is GPU rendering?

GPU rendering technology is getting a lot of attention at the moment. You probably already know that it’s extremely fast, and you may also know that the hardware is far more compact than a classic render farm. If you don’t know anything else about it but are burning to know, then read on.

Let’s start with traditional CPU rendering. Without going into too much detail, a CPU (Central Processing Unit) is a general-purpose processor that sits at the heart of the system. It runs all the programs on your workstation and controls interactions with disks, networks and screens.

A CPU is fundamentally based on a single-core design that can be instructed to perform one task on one piece of data at a time. As CPUs have developed, more cores have been added, so that each core can now be assigned its own task. A typical workstation CPU will have 6-14 cores and run 12-28 simultaneous ‘threads’ of instructions. Most of the time each thread will work on one block of data.
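
To make that concrete, here is a minimal sketch (not taken from any particular renderer) of how a CPU render might split an HD frame across a dozen or so worker threads. The shade() function and the resolution are illustrative stand-ins for real rendering work, not a real API.

```cpp
// A minimal CPU-side sketch: one frame's rows split across a handful of
// worker threads, with each thread handling one pixel at a time.
#include <algorithm>
#include <thread>
#include <vector>

// Hypothetical stand-in for whatever per-pixel work the renderer actually does.
float shade(int x, int y) { return float((x ^ y) & 0xFF) / 255.0f; }

int main() {
    const int width = 1920, height = 1080;          // one HD frame
    std::vector<float> image(width * height);

    // e.g. 12-28 hardware threads on a typical workstation CPU
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t) {
        pool.emplace_back([&, t] {
            // Each thread walks its own slice of rows, one pixel after another.
            for (int y = int(t); y < height; y += int(workers))
                for (int x = 0; x < width; ++x)
                    image[y * width + x] = shade(x, y);
        });
    }
    for (auto& th : pool) th.join();                // wait for every slice to finish
}
```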

The GPU (Graphics Processing Unit) sits on the graphics card. Its primary task is to process data into images on the screen. Originally, the GPU was connected to the rest of the computer via AGP (Accelerated Graphics Port), which sent data very quickly in one direction – from the computer to the graphics card – but barely at all in the other. This meant that the GPU could render for the screen, but once it had processed that data, it could not send the results back to the computer to be stored.

About 10 years ago, graphics cards started to be connected via PCIe (PCI Express) instead. PCIe sends data quickly both to and from the GPU, enabling it to function as a mini-computer in its own right.

The way that data is processed by GPUs and CPUs is fundamentally similar, but with a GPU the emphasis is on parallel processing (working on lots of data at once). Unlike CPUs, GPUs are designed from the ground up to run the same instructions simultaneously across many cores.

So in rendering, the GPU takes a single set of instructions and runs it across multiple cores (from 32 to hundreds) on many pieces of data at once. A typical workstation GPU will have 2,000-3,000 cores and run 100 or more threads of instructions, with each thread working on around 30 blocks of data at once.

What this means is that a CPU can work on about 24 blocks of data at the same time, while a GPU can handle 3,000 or so. This makes a huge difference in performance. For example, if you’re rendering HD frames made up of about 2 million pixels, it’s the difference between processing 24 of those pixels at once and processing 3,000 of them.
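
For comparison with the CPU sketch above, here is a minimal CUDA sketch of the same idea on the GPU: one set of instructions launched as thousands of threads, one per pixel. The per-pixel work and the block/grid sizes are illustrative assumptions, not the code of any real renderer.

```cpp
// A minimal CUDA sketch: the same per-pixel work expressed as one set of
// instructions launched across thousands of GPU threads, one per pixel.
#include <cuda_runtime.h>

__global__ void shadeKernel(float* image, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;          // guard the frame edges
    // Every active thread runs this same instruction stream on its own pixel.
    image[y * width + x] = float((x ^ y) & 0xFF) / 255.0f;
}

int main() {
    const int width = 1920, height = 1080;          // ~2 million pixels per HD frame
    float* image = nullptr;
    cudaMalloc(&image, width * height * sizeof(float));

    dim3 block(16, 16);                             // 256 threads per block
    dim3 grid((width + block.x - 1) / block.x,      // enough blocks to cover the frame
              (height + block.y - 1) / block.y);
    shadeKernel<<<grid, block>>>(image, width, height);
    cudaDeviceSynchronize();                        // wait for the GPU to finish

    cudaFree(image);
}
```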

This means that GPUs are (much) faster than CPUs, but only for some tasks. In the realm of VFX, this is actually quite a good thing, since 3D rendering is the exact task a GPU is designed for.

But there are limiting factors. Graphics cards contain very fast memory – but a smaller amount of it relative to the main system memory. For many GPU renderers, the size of the scene you can render is limited by the amount of memory on the graphics card (24GB with the NVIDIA Quadro M6000, as of this week). One renderer, Redshift, has shaken this up by enabling the GPU to use main system memory as well – so-called out-of-core rendering – which lets you render much larger scenes. It’s a major development for GPU rendering.
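
If you want to see how that limit shows up in practice, the CUDA runtime can report how much graphics-card memory is actually free. This sketch uses the real cudaMemGetInfo() call; the comments about in-core versus out-of-core are just the distinction described above, not output from any renderer.

```cpp
// A minimal CUDA sketch: asking the graphics card how much of its memory is free.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);        // CUDA runtime memory query

    printf("GPU memory: %.1f GB free of %.1f GB total\n",
           freeBytes / double(1 << 30), totalBytes / double(1 << 30));

    // A purely in-core GPU renderer has to fit the whole scene into 'freeBytes';
    // an out-of-core renderer like Redshift can also spill into main system memory.
    return 0;
}
```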

Who’s rendering on their GPUs at the moment? A lot of VFX software already uses GPU rendering technology – hence the benefit of a good 3D card. Pretty much all real-time renderers run on the GPU, and many of the major facilities are adding GPU rendering to their CPU-based render farm pipelines to handle quick tasks.

If you’ve already made the investment in a CPU render farm, chances are you’re not going to scrap it. But if you want to work in 4K, it’s really worth considering how GPU rendering could speed up your work.