Processors and graphics cards are expensive, so you will have to balance your budget between these two key components. Here’s how.
There has been a relentless drive in the PC industry towards more performance for less money over the past few decades, but graphics workstations are still expensive purchases. In fact, they have a tendency to go up in price.
With a big-ticket item like this, an IT buyer needs to think carefully about how best to spend a workstation budget to get the biggest uplift in workflow performance. This leads to the inevitable question of whether more should be spent on the graphics card or the CPU.
Before we tackle this big question, however, let’s cover some basics.
What is the difference between CPU and GPU?
This may seem like a simple question, but over time the difference between a CPU and GPU has arguably shrunk. But some things stay the same: CPU stands for central processing unit, GPU for graphics processing unit.
A CPU is a general-purpose processor that can do pretty much anything. That’s true whether we’re talking about the lightweight ARM chips found in phones or the heavyweight monsters in servers and graphics workstations.
GPUs, on the other hand, are specialists. They’re designed to handle the complex mathematical computations found in 3D graphics and machine learning.
While a regular CPU typically has between a handful and a few dozen cores, a modern GPU has hundreds or even thousands of smaller, more specialised cores that attack tasks in parallel.
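To make that contrast concrete, here is a minimal Python sketch (illustrative only, not real GPU code; the function names are invented for this example). Transforming every vertex in a mesh is the same small calculation repeated over a large array, which is exactly the shape of work a GPU spreads across its many cores.

```python
from multiprocessing.dummy import Pool  # stdlib thread pool, standing in for "many cores"

def transform_vertices_serial(vertices, scale):
    """CPU-style serial loop: one core visits each vertex in turn."""
    return [(x * scale, y * scale, z * scale) for (x, y, z) in vertices]

def transform_vertices_parallel(vertices, scale, workers=8):
    """GPU-style idea: each vertex is handed to its own lightweight worker.
    A real GPU would run thousands of these at the same instant."""
    with Pool(workers) as pool:
        return pool.map(lambda v: (v[0] * scale, v[1] * scale, v[2] * scale),
                        vertices)

mesh = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
# Both approaches produce identical results; only the execution model differs.
assert transform_vertices_serial(mesh, 2.0) == transform_vertices_parallel(mesh, 2.0)
```

The point is not that threads make this toy example faster, but that the calculation is "embarrassingly parallel": no vertex depends on any other, so thousands of simple cores can all work at once.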
What should I look for in a workstation CPU?
It’s no coincidence that both Intel and AMD’s workstation CPUs share similarities: huge core counts, high clock speeds and massive amounts of cache.
Take AMD’s Threadripper CPUs, which have been a revelation since their introduction in 2017. The Threadripper Pro 5995WX now sits at the top of the range with a staggering 64 cores and 128 threads, while its boost clock peaks at 4.5GHz and it has 256MB of Level 3 cache. And it’s yours for a trifling £7,000.
Although Intel recently announced 4th gen Xeon Scalable chips, these are server-targeted processors that are likely to be overkill for your workstation — as, frankly, is the top-end Threadripper Pro. Instead, a high-end consumer chip will often do the job.
Should I spend more on the graphics card or CPU?
As with most things in life, there is no simple yes or no answer to this question. However, there are some clear reasons why you should favour CPU over GPU, and vice versa. This revolves around the focus of your workflow activity.
A workstation can be used for a variety of 3D content-creation tasks, spanning from 3D animation or video compositing in the media and entertainment industry, to computer aided design (CAD) and engineering, to scientific or medical visualisation.
The CPU/GPU balance also depends on where in the production pipeline an individual worker’s tasks sit.
For activities that mostly involve live modelling or visualisation, such as 3D animation for film and TV, CAD/CAM and engineering, the GPU will be to the fore. The more you spend on the graphics, the faster it will be with more memory to hold assets locally.
The good news? You can save money on the processor. A regular desktop or professional CPU with eight to 16 cores could be perfectly adequate for this kind of system.
Both AMD and Intel produce great consumer-grade processors to form a basis for a modelling workstation, such as the AMD Ryzen 7000 series or the 13th Generation Intel Core i9 ranges.
Modelling viewports can be memory intensive, however, so specifying at least 32GB or even 64GB of system RAM will ensure a smooth modelling experience.
Does your workstation need pro graphics?
Another question to consider is whether to purchase a pro-grade graphics card or a consumer-grade one. The former will be much more expensive than the latter for a similar raw processing power and memory specification.
In most professional 3D content-creation workflows, professional graphics from either Nvidia’s RTX professional (formerly Quadro) or AMD’s Radeon Pro ranges are the best choice, despite the additional cost. These include explicit driver support for key professional applications, alongside assistance if issues are encountered.
Although most professional applications run well on consumer-grade graphics accelerators (thanks in part to Nvidia’s Studio initiative), performance is often lower than on an equivalent professional GPU.
In some cases, this can be pronounced. The Siemens NX product design CAD application is a particular culprit. If your work involves Siemens NX, you really should opt for a professional graphics card, otherwise your viewport frame rates will be many times slower.
Games design studios often specify consumer-grade graphics, however, and not for cost-cutting reasons. While they will be using professional 3D software to build assets, artists in this industry frequently test code on the same workstations they use for designing.
This is because the models being created are intended to be rendered live on gamers’ computers, so it makes complete sense to use similar GPU-acceleration hardware and drivers to ensure a design will be rendered live as expected.
That’s why game design studios will regularly use high-end workstations but with (usually top-range) consumer-grade graphics. Alternatively, AMD Radeon Pro GPUs can run gaming drivers as well as professional drivers, switching between them with just a system reboot. So, these can provide the best of both worlds.
AMD Instinct alternative
AMD also offers an extremely high-end Instinct GPU range. This is primarily aimed at acceleration for non-visualisation workloads such as machine learning, astrophysics and computational fluid dynamics.
The AMD Instinct cards are aimed more at servers than workstations, however.
You would be unlikely to install one in a system destined for desktop usage unless that system might be expected to perform GPU-accelerated tasks locally. Even then, a more mainstream graphics card could still be the preferred option.
AMD Instinct cards have features to optimise use when multiple cards are installed in a single server, such as memory sharing, and they are priced accordingly.
A CPU bias
Certain types of content-creation activity do place heavier demands on the CPU. One of those is 3D rendering, where animations are exported to video for inclusion in final film or TV releases.
Big postproduction studios have traditionally maintained their own datacentres full of servers dedicated to this purpose, known as render farms. Designers “farm out” their rendering jobs to these servers, rather than rendering locally on their workstations.
However, this can be a workflow bottleneck, as designers may have to wait in a queue for their jobs to be completed to see if further adjustments are required.
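The queueing bottleneck described above can be sketched with a toy first-in, first-out model (all job times below are invented for illustration):

```python
# A toy model of the render-farm queue: jobs are served first-in,
# first-out, so an artist's wait is the sum of every job already queued.
from collections import deque

def total_wait_minutes(queued_job_minutes, my_job_minutes):
    """Time until *my* render-farm job completes, assuming FIFO order."""
    queue = deque(queued_job_minutes)
    waited = 0
    while queue:
        waited += queue.popleft()   # earlier jobs must finish first
    return waited + my_job_minutes

# Three 30-minute jobs ahead of a 10-minute job: the artist waits
# 100 minutes for results, versus 10 minutes rendering locally on a
# high-core-count workstation with spare CPU headroom.
assert total_wait_minutes([30, 30, 30], 10) == 100
```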
Fortunately, the era when a graphics-focused workstation had to have fewer, faster cores is over. Years ago, a processor with more cores tended to have lower clock speeds, which can reduce modelling performance. Dassault Systèmes SolidWorks, for example, is sensitive to both CPU frequency and GPU power.
Now, the CPU market has developed such that you can (almost) have the best of both worlds.
Top-end AMD Ryzen CPUs now have 16 cores, and the 13th Gen Intel Core i9 processors have 24, although 16 of these are slower “Efficient” cores. Both CPU brands are quite capable of local rendering, and even doing so while the content creator performs other tasks in the foreground.
AMD Ryzen Threadripper Pro CPUs with 32 or 64 cores are increasingly popular with 3D-animation studios because they negate the need to use a render farm for test output, which they can easily generate in the background while the artist continues modelling on the same system.
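That “render in the background while modelling” pattern can be roughly sketched with Python’s standard library (the per-frame workload and the core split below are invented for illustration):

```python
# Sketch of why high core counts suit local background rendering:
# a long render job runs on worker processes while the foreground
# process stays free for interactive modelling.
from multiprocessing import Pool
import os

def render_frame(frame_number):
    """Stand-in for an expensive per-frame render (pure CPU work)."""
    total = 0
    for i in range(10_000):
        total += (frame_number * i) % 7
    return frame_number, total

if __name__ == "__main__":
    cores = os.cpu_count() or 4
    # Reserve a couple of cores for the interactive session; give the
    # rest to the render, mimicking how a many-core CPU is shared.
    render_workers = max(1, cores - 2)
    with Pool(render_workers) as pool:
        results = pool.map(render_frame, range(24))  # one second at 24fps
    print(f"Rendered {len(results)} frames on {render_workers} cores")
```

The more cores available, the more render workers can run without starving the foreground modelling session.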
Intel’s third-generation Xeon Scalable processors weren’t able to deliver the same combination of high clock speeds and high core counts. Intel’s fourth-generation Xeon Scalable chips, codenamed Sapphire Rapids, offer up to 60 cores per CPU and frequencies up to 5.5GHz. These could give AMD’s Threadripper Pro a significant challenge, but we haven’t yet seen products in the wild.
Where to spend?
As a rule of thumb, a professional workstation used for modelling will probably warrant spending more on its GPU than its CPU. The Nvidia RTX A4000 accelerator is a sensible baseline for pro graphics work, but choose a more expensive model for more demanding viewport workloads.
Couple this with a high-end consumer-grade CPU from AMD or Intel, probably costing less than half as much, and you will have a great platform.
However, if you also need to perform more local CPU-focused activities, such as 3D animation rendering, a more expensive CPU platform such as an AMD Threadripper Pro will deliver both great modelling and all the CPU power you need.
When the next Intel Xeon Scalable truly arrives, that could be an excellent option too – but neither will be cheap.
Updated 18 April: new sections added on the difference between CPUs and GPUs, what to look for in a workstation CPU and updated references to Intel’s 4th gen Xeon Scalable family of processors.