Nvidia's 50 Series Blackwell GPUs: A Breakdown


In March, Nvidia released its latest generation of GPUs, based on the new Blackwell architecture. The GeForce 5090 in particular has generated significant excitement among gaming enthusiasts and CG artists alike on the basis of its apparent performance gains, but as is typically the case with major Nvidia releases, the reality may be a little more complex.
The Blackwell architecture - named after the celebrated Black mathematician David Blackwell - had its AI credentials positioned front and centre in all of Nvidia's pre-release publicity, specifically 'Agentic AI', described as AI that 'uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems'. You can just about say that in a single breath, but what, on a more prosaic level, are the immediate gains for our customers? There's no doubt that Blackwell offers an increase in performance for certain workflows, along with exciting new features for GPU virtualisation.
Surveying the professional cards first - and we're chiefly interested in the 4000 upwards - all of the new cards offer increased VRAM and CUDA core counts. The Blackwell RTX Pro 4000 brings a very respectable near-9,000 cores and 24GB of addressable GPU memory, situating it at the lower end of the high-performance cards and, given the complexity of modern VFX workloads, making it the ideal standard workstation card for those environments. That said, for artists wrestling with huge texture maps, or handling ever-bigger data sets for USD workloads or machine learning, GPU memory is the driving factor, and one that will steer them towards the 4500 or 5000 cards. Then we come to the Blackwell 6000 product - or rather products, as there are two versions of this card. This is where things get interesting, on account of Blackwell's new Multi-Instance GPU (MIG) feature.
Virtualising GPUs has always been attractive in terms of achieving a greater density of users from shared resources, but it was historically problematic owing to its reliance on time-slicing the GPU. Time-slicing is a resource allocation technique that lets over-subscribed workloads interleave, taking turns on the GPU's compute cores and memory. This was fine for lower-intensity workloads, but for high-end CG it generally incurred performance degradation - the 'noisy neighbour' problem, illustrated in the sketch below. You could overcome this by either buying beefier GPUs to build in a buffer, or opting for GPU pass-through (one-to-one usage), both of which were likely more expensive than buying individual machines.
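To make the noisy-neighbour effect concrete, here is a deliberately simplified Python sketch of round-robin time-slicing. Real GPU schedulers are far more sophisticated, and the timings here are invented purely for illustration:

```python
# Toy model: two tenants share one GPU via round-robin time-slicing.
# The heavy tenant's long queue of work inflates the light tenant's
# completion time well beyond its own compute demand - the effect
# that MIG's hardware isolation is designed to remove.

from collections import deque

SLICE_MS = 10  # scheduler timeslice (illustrative)

def run_round_robin(jobs):
    """jobs: {tenant: total_work_ms}; returns {tenant: finish_time_ms}."""
    queue = deque(jobs.items())
    clock, finished = 0, {}
    while queue:
        tenant, remaining = queue.popleft()
        work = min(SLICE_MS, remaining)
        clock += work
        remaining -= work
        if remaining:
            queue.append((tenant, remaining))
        else:
            finished[tenant] = clock
    return finished

# A light viewport session (50ms of work) next to a heavy render (500ms):
print(run_round_robin({"viewport": 50, "render": 500}))
# -> {'viewport': 90, 'render': 550}: the viewport takes 90ms, not 50ms.
```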
MIG enables partitioning of the GPU into as many as four instances, each fully isolated with its own high-bandwidth memory, cache and compute cores, making GPU virtualisation properly viable from a performance perspective. The Blackwell RTX Pro 6000 is available in two forms, the Max-Q and Workstation Editions. Both seem intended to tackle the perennial problem of lower-cost GeForce cards outperforming their professional equivalents: both offer more processing power and memory than the GeForce 5090.
Confusingly, to us here at Escape at least, the Max-Q is in fact the genuine workstation card, given its standard height and dual-slot form factor. The Workstation Edition, by comparison, belongs alongside the GeForce 5090, with a 600W power demand (575W for the 5090, which itself calls for a 1000W power supply) and extended height - meaning it won't fit in a current standard workstation tower (more on this below). Where the RTX Pro 6000 starts to make sense, though, is with MIG, where it can present four 24GB instances, each akin to an RTX Pro 4000 workstation. Yes, this is still more expensive - but not implausibly so, and the density and elimination of user contention make it a viable solution for data centre deployments.
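For the curious, provisioning looks something like the sketch below, assuming the data-centre MIG tooling in nvidia-smi carries over to the RTX Pro 6000. The '1g.24gb' profile name is our assumption for a four-way split of the 96GB card - always check the list your own driver reports:

```python
# Hedged sketch: carving one MIG-capable GPU into four isolated instances
# via nvidia-smi (requires root and a MIG-capable card/driver). Profile
# names vary by GPU - '1g.24gb' below is an assumption; list the real
# profiles first with `nvidia-smi mig -lgip`.

import subprocess

def sh(*args):
    print("$", " ".join(args))
    subprocess.run(args, check=True)

sh("nvidia-smi", "-i", "0", "-mig", "1")   # enable MIG mode on GPU 0
sh("nvidia-smi", "mig", "-lgip")           # list the instance profiles on offer
# Create four GPU instances (with default compute instances, -C),
# substituting whichever quarter-card profile -lgip actually reports:
sh("nvidia-smi", "mig", "-cgi", "1g.24gb,1g.24gb,1g.24gb,1g.24gb", "-C")
sh("nvidia-smi", "-L")                     # confirm four MIG devices are listed
```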
Exciting developments are afoot for the GeForce 5090, however. Whilst it's true that its big form factor and power hunger currently make it unsuitable for a standard workstation tower, later this year HP will release the new Z2, specifically designed to accommodate a 5090 GPU. Like the pre-pandemic Z4 with a 2080Ti, the new Z2 is likely to be enormously successful. A proper tier-one workstation product with a high-end gamer card has always been much sought after, but as rare as hen's teeth.
It should not be assumed, however, that performance gains are a given when transitioning to the new GeForce series. Whilst the 5080 has shown promising results in gaming with DLSS 4, potentially surpassing the 4090 in some cases, its performance in VFX environments is less consistent. For compute-heavy tasks in software like Redshift or Arnold, or within DCC application viewports, the 4090 often outperforms the 5080. This is partly attributable to the 4090's 24GB of VRAM against the 5080's 16GB.
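A quick sanity check on why VRAM matters here: once a scene's working set exceeds GPU memory, renderers typically fall back to out-of-core paging over PCIe, which is where a 16GB card loses ground to a 24GB one. The sketch below uses the nvidia-ml-py bindings (pip install nvidia-ml-py); the 20GB working-set figure is hypothetical:

```python
# Check whether a (hypothetical) scene working set fits in VRAM.
import pynvml

SCENE_WORKING_SET_GB = 20  # hypothetical: textures + geometry + buffers

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
total_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3
print(f"GPU 0: {total_gb:.1f} GB VRAM")

if SCENE_WORKING_SET_GB > total_gb:
    # Expect the renderer to page assets over PCIe (out-of-core mode).
    print("Scene exceeds VRAM - expect an out-of-core slowdown.")
else:
    print("Scene fits in VRAM.")
pynvml.nvmlShutdown()
```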
Another issue is the physical size of the cards. The Founders Edition 50 series cards are two PCI slots wide, but most other manufacturers produce 3- or 4-slot cards, similar to the 4090. This poses challenges for the multi-GPU systems used in VFX rendering. Older generations like the 2080Ti and 3090 were available as 'blower'-style cards that exhausted hot air through the rear of the card and out of the workstation, but the 4090 and many 50 series cards have side-mounted fans. Attempts to create 2-slot 'blower'-style cards were allegedly shut down by Nvidia.
This size issue limits the density of GPUs in rack-mounted systems. Water-cooled 5090s can be built in 2-slot form, allowing up to four cards in a 6U space, but rack-mounted water cooling is often viewed with suspicion in professional server environments. Additionally, the power consumption of four 5090s (2.4kW for the GPUs alone, exceeding 3kW for the entire system) can create power management issues within standard rack setups. Even outside the rack, fitting multiple GPUs with side-mounted fans can cause thermal issues: place two such cards close together and the second will often overheat as it ingests the first card's exhaust.
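The rack maths is worth spelling out. A short sketch with assumed round numbers - the 700W host overhead and the 32A/230V feed are illustrative, not a spec:

```python
# Back-of-envelope power budget for the four-GPU node described above.
GPU_WATTS = 600            # per-card budget (575W nameplate for a 5090)
GPUS_PER_NODE = 4
HOST_OVERHEAD_WATTS = 700  # assumed: CPUs, RAM, storage, fans, PSU losses

gpu_total = GPU_WATTS * GPUS_PER_NODE           # 2400W for the GPUs alone
node_total = gpu_total + HOST_OVERHEAD_WATTS    # ~3.1kW for the whole system

PDU_WATTS = 32 * 230  # a common single-phase 32A, 230V rack feed = 7360W
print(f"GPUs: {gpu_total}W, node: {node_total}W")
print(f"Nodes on one such feed: {PDU_WATTS // node_total}")  # -> 2
```

Two 6U nodes saturating an entire 32A feed is exactly the kind of power management headache alluded to above.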
Overall, Blackwell offers much, both at the individual artist level and in larger-scale data centre deployments, where MIG offers some truly exciting possibilities for virtualisation. We look forward to providing further updates on both as we move through the year.


