GeForce versus RTX: the space, power and performance shuffle!

Anybody with even one eye on the world of graphics cards will have noticed the growing feature differences between GeForce cards, developed primarily for gaming, and the professional RTX line (formerly Quadro), designed for CAD, CUDA, engineering and intensive VFX workflows.
 
For a long while, the media and entertainment industries were increasingly drawn to the performance of GeForce cards, which often performed on the same level as their professional (and more expensive) equivalents. Once warranties on GeForce GPUs grew to match those of the professional cards, and software developers began to endorse the ‘gamer’ cards for running their professional packages, it became hard to justify the extra expense of a professional RTX.

Blowers vs fans

 
However, in recent times we’ve started to see some separation again between the two sides. It began with a change in design ethos from Nvidia on its RTX 2080. Rather than fitting the GPU with a blower cooler, Nvidia opted for an open design that uses fans mounted on the face of the card.
 
This change was significant. A professional blower design exhausts the hot air generated by the GPU out of the back of the workstation it is installed in. The open-fan design, in contrast, draws in cool air and pushes it over the GPU’s heatsink, and that warmed air then escapes into the case rather than being blown out of it. Perfectly fine for a gaming PC, but for GeForce-equipped workstations in an enclosed, rack-mounted environment the challenge suddenly becomes one of cooling.
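To get a feel for how much extra airflow that recirculated heat demands, a standard heat-transfer estimate helps. The 450 W heat load and 10 °C allowable temperature rise below are illustrative assumptions, not figures from any card’s specification:

```python
# Rough estimate of the airflow needed to carry GPU heat out of a chassis.
# Heat load and allowable temperature rise are illustrative assumptions.

def airflow_m3_per_hour(heat_watts: float, delta_t_c: float) -> float:
    """Airflow required to remove heat_watts with a delta_t_c rise in air temperature.

    Uses Q = P / (rho * cp * dT), with air density ~1.2 kg/m^3 and
    specific heat ~1005 J/(kg*K), converted from m^3/s to m^3/h.
    """
    rho = 1.2     # kg/m^3, air density at roughly 20 degrees C
    cp = 1005.0   # J/(kg*K), specific heat capacity of air
    return heat_watts / (rho * cp * delta_t_c) * 3600


# Example: one ~450 W card dumping its heat into the case, 10 C rise allowed
print(f"{airflow_m3_per_hour(450, 10):.0f} m^3/h per card")  # ~134 m^3/h
```

That is airflow the chassis and the room’s cooling have to provide for every card that vents inwards rather than out of the back.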
 
We’re just beginning to see blower-style 40-series GeForce cards emerge from China, specifically designed to tackle this problem. But with no guarantee of quality or performance, and without any involvement from Nvidia, these GPUs introduce too many unknowns into an already complicated area.

The big issue

 
In our opinion, the line between gamer and professional cards has once again become more defined with the release of Nvidia’s new 40-series GPUs. Their performance is still comparable with that of their professional equivalents, but they raise serious questions when it comes to physical size and power draw.
 
In the gaming and VFX industries, remote working is common, with studio machine rooms and data centres accessed over remote desktop streaming protocols. Data sets are growing and workflows are moving to USD (Universal Scene Description), which means demand for GPUs continues to intensify and to put pressure on studio infrastructure. Whilst GeForce cards can deliver on performance, their design means they struggle on this additional front: fitting into dense server chassis.
 
The 4090, 4080 and 4070 cards all occupy more than a two-slot PCIe bay, with some designs taking up to four slots. That makes the group amongst the largest (if not the largest) consumer GPUs ever made, and tricky to fit within a densely packed server chassis. Ironically, though, that’s probably not the biggest obstacle.
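A simple way to see the density cost is to count how many cards of a given width fit into a fixed number of expansion slots. The chassis and slot figures below are hypothetical, for illustration only:

```python
# Hypothetical illustration of how card width eats into server density.
# Slot counts are assumptions, not the specification of any real chassis.

def cards_per_chassis(total_slots: int, slots_per_card: int) -> int:
    """How many GPUs physically fit, ignoring power and cooling limits."""
    return total_slots // slots_per_card


CHASSIS_SLOTS = 8  # assumed full-height expansion slots in a 4U GPU server

for name, width in [("two-slot professional card", 2),
                    ("three-slot GeForce card", 3),
                    ("four-slot GeForce card", 4)]:
    print(f"{name}: {cards_per_chassis(CHASSIS_SLOTS, width)} per chassis")
```

Halving the number of GPUs per chassis means buying, housing and powering twice as many servers for the same GPU count.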

Power is money

 
The power needed to run these GPUs can be excessive. For example, the 4070 Ti requires a 700-watt PSU (Power Supply Unit), whilst a seriously over-clocked 4090 needs a 1200-watt PSU. Compare that with Nvidia’s professional RTX 6000 Ada Generation, whose power consumption maxes out at 300 W, and the potential extra cost of running one of the 40-series GPUs over the long term starts to mount up.
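To put rough numbers on that long-term cost, here’s a back-of-envelope sketch. The board-power figures, utilisation hours and electricity tariff are illustrative assumptions, not measured values:

```python
# Back-of-envelope annual energy cost per card. Every input is an assumption:
# board power is indicative, and utilisation and tariff vary by studio.

def annual_energy_cost(card_watts: float, hours_per_day: float,
                       days_per_year: int, price_per_kwh: float) -> float:
    """Annual electricity cost of one card at a given duty cycle and tariff."""
    kwh_per_year = card_watts / 1000 * hours_per_day * days_per_year
    return kwh_per_year * price_per_kwh


HOURS_PER_DAY = 12    # assumed daily load in a shared machine room
PRICE_PER_KWH = 0.30  # assumed tariff in GBP per kWh; check your own bill

for name, watts in [("450 W-class GeForce card", 450),
                    ("300 W-class professional card", 300)]:
    cost = annual_energy_cost(watts, HOURS_PER_DAY, 365, PRICE_PER_KWH)
    print(f"{name}: roughly {cost:.0f} GBP per year")
```

Under those assumptions the gap is roughly 200 GBP per card per year, before cooling is even considered, and it scales with every card in the building.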
 
And that’s without even considering anything else in the chassis, or the power needed to keep these machines cool whilst they’re running.
 
With business owners already dealing with the end of government subsidies and the cost of power continuing to rise, the real cost of ownership starts to make itself known. The kW cost per rack suddenly focuses the mind not only on space, but also on power consumption.
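Scaling the same arithmetic up to a rack makes the kW-per-rack point concrete; again, the GPU count, node overhead and cooling multiplier below are assumptions for illustration only:

```python
# The same arithmetic scaled up to a rack. GPU count, node overhead and the
# cooling multiplier are all assumptions for illustration.

GPUS_PER_RACK = 32       # assumed
WATTS_PER_GPU = 450      # assumed GeForce-class board power
NODE_OVERHEAD_W = 4000   # assumed CPUs, drives and fans across the rack
COOLING_FACTOR = 1.4     # assumed PUE-style multiplier for cooling

it_load_kw = (GPUS_PER_RACK * WATTS_PER_GPU + NODE_OVERHEAD_W) / 1000
total_kw = it_load_kw * COOLING_FACTOR
print(f"IT load: {it_load_kw:.1f} kW; with cooling: {total_kw:.1f} kW per rack")
```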
 
All of that is to say nothing of the environmental impact of using this power. Many sectors have established sustainability agendas (universities and colleges, for example), and it’s in all of our interests to consider how we affect the world around us. For these groups, the new GeForce cards present a problem.


Don’t forget memory

 
Another major difference between the gamer and professional cards is the amount of on-board memory. Professional cards come with 16, 24 or 48 GB, whilst the GeForce range offers only 8, 12 or 24 GB options, and even then 24 GB is only available on the 4090.
 
We’ve mentioned data sets getting bigger, and on-card memory is one area where this becomes an issue, particularly in virtual-set workflows, where ever larger files are needed to fill high-resolution walls with convincing textures. In these environments, the last thing that’s needed is data continually swapping between the card’s memory and the workstation’s system memory as the GPU struggles to deal with these large data sets.
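As a quick sense-check of how fast on-card memory disappears in a virtual-set workflow, the uncompressed footprint of a single texture layer is easy to estimate. The resolutions, channel counts and bit depths below are illustrative, and real pipelines add compression, mipmaps and renderer overheads:

```python
# Rough VRAM footprint of uncompressed textures. All figures are illustrative
# and ignore compression, mipmaps and renderer overheads.

def texture_mb(width: int, height: int, channels: int, bytes_per_channel: int) -> float:
    """Uncompressed size of one texture layer in megabytes."""
    return width * height * channels * bytes_per_channel / (1024 ** 2)


print(f"8K, 8-bit RGBA:  {texture_mb(8192, 8192, 4, 1):.0f} MB")  # ~256 MB
print(f"8K, 16-bit RGBA: {texture_mb(8192, 8192, 4, 2):.0f} MB")  # ~512 MB

# How many 16-bit 8K layers fit on a 24 GB card before swapping starts
layers = int(24 * 1024 // texture_mb(8192, 8192, 4, 2))
print(f"16-bit 8K layers that fit in 24 GB: roughly {layers}")
```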
 
One thing to note in particular for studios working in virtual-set environments is that not only do the professional cards have much more memory to play with, they also support genlock, which keeps those multiple screens in sync with each other.
 

Is there a home for the 40-series?

 
These performant but power- and space-hungry GeForce GPUs seem to be at odds with the way large, modern studios want to run. And that is the point.
 
Nvidia has cleverly made its GeForce GPUs viable for smaller studios and, in particular, games developers, where GPU usage needs to mimic that of the consumer. For this kind of environment, a GPU like the 4090 still has a place: it’s not racked, doesn’t require extra cooling, and provides all the high frame-rate, low-latency performance a gamer needs.
 
The same is true for a small VFX house with generalists using the 4090 to tackle all kinds of work, without having to drive it hard 24/7. This is the market that Nvidia envisaged for its impressive GeForce card.
 
The cut-off point comes when your headcount rises above 10 and your team has embraced remote or hybrid work. At that point, workstations are no longer under desks but in machine rooms, accessed over remote streaming protocols. With more GPUs running longer and harder, the power draw, the cooling solution and the space to house it all need to be considered carefully before committing to non-professional cards.
 
If you’re looking for the ultimate performance at the cheapest price, it’s still available out there. But the cheapest price now may not work out to be the cheapest in the long run.
 
For advice on choosing the right GPUs for you, get in touch.