This is a topic that's often raised when we do our CPU gaming benchmarks. As you know, we perform a ton of CPU and GPU benchmark tests throughout the year, a big portion of which are dedicated to gaming. The goal is to work out which CPU will offer you the most bang for your buck at a given price point, now and hopefully in the future.
Earlier in the year we compared the evenly matched Core i5-8400 and Ryzen 5 2600. Overall, the R5 2600 was faster once fine-tuned, but it ended up costing more per frame, making the 8400 the cheaper and more practical option for most gamers (side note: Ryzen 5 is a better buy today since it's dropped to $160).
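To make the cost-per-frame idea concrete, here's a minimal sketch of how that value metric works. The prices and average frame rates below are placeholder figures for illustration, not our measured results.

```python
# Minimal cost-per-frame sketch. Prices and average frame rates below
# are placeholder figures for illustration, not measured results.

cpus = {
    "Core i5-8400": {"price_usd": 180, "avg_fps": 100},  # hypothetical
    "Ryzen 5 2600": {"price_usd": 160, "avg_fps": 105},  # hypothetical
}

for name, data in cpus.items():
    cost_per_frame = data["price_usd"] / data["avg_fps"]
    print(f"{name}: ${cost_per_frame:.2f} per average frame")
```

The cheaper-per-frame part is simply whichever CPU delivers more average performance per dollar spent, which is how the 8400 came out ahead at the prices of the time.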
For that matchup we compared the two CPUs in 36 games at three resolutions. Because we want to use the maximum in-game visual quality settings and apply as much load as possible, the GeForce GTX 1080 Ti was our graphics weapon of choice. This helps to minimize GPU bottlenecks that can hide potential weaknesses when analyzing CPU performance.
When testing new CPUs we have two main goals:
#1 to work out how it performs right now
#2 to work out how 'future-proof' it is.
Will it still be serving you well in a year's time, for example?
The problem is that quite a few readers seem to get confused about why we're doing this and, I suspect without thinking it through fully, take to the comments section to bash the content for being misleading and unrealistic. It just happened again when we tested budget CPUs (Athlon 200GE vs. Pentium G5400 vs. Ryzen 3 2200G) and we threw in an RTX 2080 Ti.
This is something we've seen time and time again and we've addressed it in the comments directly. Often other readers have come to the rescue to inform their peers why tests are done in a certain way. But as the CPU scene has become more competitive again, we thought we'd address this topic more broadly and hopefully explain a little better why it is we test all CPUs with the most powerful gaming GPU available at the time.
As mentioned a moment ago, it all comes down to removing the GPU bottleneck. We don't want the graphics card to be the performance-limiting component when measuring CPU performance. There are a number of reasons why this is important, and I'll touch on all of them in this article.
Yes, it's true. It's unlikely anyone will want to pair a GeForce RTX 2080 Ti with a sub-$200 processor. However, when we pour dozens and dozens of hours into benchmarking a set of components, we aim to cover as many bases as we possibly can to give you the best possible buying advice. Obviously, we can only test with the games and hardware that are available right now, and this makes it a little more difficult to predict how components like the CPU will behave in yet-to-be-released games using more modern graphics cards, say a year or two down the track.
Assuming you don't upgrade your CPU every time you buy a new graphics card, it's important to determine how the CPU performs and compares with competing products when not GPU limited. That's because while you might pair your new Pentium G5400 processor with a modest GTX 1050 Ti today, in a year's time you might have a graphics card packing twice as much processing power, and in 2-3 years, who knows.
So as an example, if we compared the Pentium G5400 to the Core i5-8400 with a GeForce GTX 1050 Ti, we would determine that in today's latest and greatest games the Core i5 provides no real performance benefit (see graph below). This means in a year or two, when you upgrade to something offering performance equivalent to that of the GTX 1080, you're going to wonder why GPU utilization is only hovering around 60% and why you're not seeing anywhere near the performance you should be.
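One way to picture this is to think of the delivered frame rate as being capped by whichever component, CPU or GPU, can sustain the fewest frames per second. Here's a rough sketch of that mental model; the per-component ceilings are made-up numbers for illustration, not benchmark results.

```python
# Rough mental model: the frame rate you actually see is limited by
# whichever component (CPU or GPU) can sustain the fewest frames per
# second. All figures below are made up for illustration.

def delivered_fps(cpu_ceiling, gpu_ceiling):
    """The slower component sets the delivered frame rate."""
    return min(cpu_ceiling, gpu_ceiling)

cpu_ceilings = {"Pentium G5400": 70, "Core i5-8400": 120}   # hypothetical
gpu_ceilings = {"GTX 1050 Ti": 60, "GTX 1080-class": 115}   # hypothetical

for gpu, gpu_fps in gpu_ceilings.items():
    for cpu, cpu_fps in cpu_ceilings.items():
        fps = delivered_fps(cpu_fps, gpu_fps)
        gpu_util = fps / gpu_fps * 100  # crude GPU utilization estimate
        print(f"{cpu} + {gpu}: {fps} fps, GPU ~{gpu_util:.0f}% utilized")
```

With the 1050 Ti-class ceiling both CPUs sit at the same GPU-limited frame rate, while the faster card exposes the Pentium's ceiling and leaves the GPU well below full utilization, which is exactly the pattern described above.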
Here's another example we can use: in early 2017, around the Pentium G4560's release, we published a GPU scaling test where we observed that a GTX 1050 Ti was no faster with the Core i7-6700K than with the Pentium processor.
However, using a GTX 1060, the Core i7 was shown to be 26% faster on average, meaning the G4560 had already created a system bottleneck, though we could only know this because a higher-end GPU was used for testing. With the GTX 1080 we see that the 6700K is almost 90% faster than the G4560. Keep in mind the GTX 1080 is a GPU that by this time next year will be delivering mid-range performance at best, much like what we see when comparing the GTX 980 and GTX 1060, for example.
Now, with this example you might say, well, the G4560 was just $64 while the 6700K cost $340, so of course the Core i7 was going to be miles faster. We don't disagree. But in this 18-month-old example we can see that the 6700K had significantly more headroom, something we wouldn't have known had we tested with the 1050 Ti or even the 1060.
You could also argue that even today, at an extreme resolution like 4K, there would be little to no difference between the G4560 and 6700K. That might be true for some titles, but it won't be for others like Battlefield 1 multiplayer, and it certainly won't be true in a year or two when games become even more CPU demanding.
Additionally, don't fall into the trap of assuming everyone uses ultra quality settings or targets just 60 fps. There are plenty of gamers using a mid-range GPU that opt for medium to high, and even low settings, to push frame rates well past 100 fps, and these aren't just gamers with high refresh rate 144 Hz displays. Despite popular belief, there is a serious advantage to be had in fast-paced shooters by going well beyond 60 fps on a 60 Hz display, but that's a discussion for another time.
Getting back to the Kaby Lake dual-core for a moment, swapping out a $64 processor for something higher-end isn't a big deal, which is why we gave the ultra-affordable G4560 a rave review. But if we're comparing more expensive processors such as the Core i5-7600K and Ryzen 5 1600X, for example, it's very important to test without GPU limitations...
Back to the Core i5-8400 vs. Ryzen 5 2600 comparison and its three tested resolutions, let's take a quick look at the Mass Effect Andromeda results. Those performance trends look quite similar to the previous graph, don't they? You could almost rename 720p to GTX 1080, 1080p to GTX 1060 and 1440p to GTX 1050 Ti.
Since many suggested that these two sub-$200 CPUs should have been tested with a GPU packing a sub-$300 MSRP, let's see what that would have looked like at our three tested resolutions.
Now, we know the GTX 1060 has 64% fewer CUDA cores than the GTX 1080 Ti, and in Mass Effect Andromeda that leads to around 55% fewer frames at 1080p and 1440p using a Core i7-7700K clocked at 5 GHz, as seen in these two graphs from my 35 game Vega 56 vs. GTX 1070 Ti benchmark conducted last year. The GTX 1060 spat out 61 fps on average at 1080p and just 40 fps at 1440p.
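For the curious, those percentage deltas are easy to sanity-check. The CUDA core counts used below (1,280 for the GTX 1060 6GB and 3,584 for the GTX 1080 Ti) are the published specifications, while the "implied" 1080 Ti frame rates are only back-of-envelope figures derived from the roughly 55% delta quoted above, not measured results.

```python
# Sanity-checking the percentage deltas quoted above. Core counts are
# published specs; the "implied" 1080 Ti figures are back-of-envelope
# numbers derived from the ~55% delta, not measured results.

def percent_fewer(smaller, larger):
    """How much smaller 'smaller' is than 'larger', as a percentage."""
    return (1 - smaller / larger) * 100

print(f"CUDA cores: {percent_fewer(1280, 3584):.0f}% fewer")  # ~64%

for res, gtx1060_fps in {"1080p": 61, "1440p": 40}.items():
    implied_1080ti = gtx1060_fps / (1 - 0.55)
    print(f"{res}: GTX 1060 {gtx1060_fps} fps -> implied GTX 1080 Ti ~{implied_1080ti:.0f} fps")
```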
So here's where the GTX 1060 is situated on our graph in relation to the GTX 1080 Ti. The first red line indicates the 1% low result and the second red line the average frame rate. Even at 720p we are massively GPU bound. Had I only tested with the GTX 1060, or possibly even the 1070, all the results would have shown is that both CPUs can max out those particular GPUs in modern titles, even at extremely low resolutions.
In fact, you could throw the Core i3-8100 and Ryzen 3 2200G into the mix and the results would lead us to believe neither CPU is inferior to the Core i5-8400 when it comes to modern gaming. Of course, there will be the odd extremely CPU-intensive title that shows a small dip in performance, but the true difference would be masked by the weaker GPU.
I've seen some people suggest reviewers test with extreme high-end GPUs in an effort to make the results entertaining, but come on, that one's just a bit too silly to entertain. As I've said, the intention is to determine which product will serve you best in the long run, not to keep you on the edge of your seat for an extreme benchmark battle to the death.
As for providing more "real-world" results by testing with a lower-end GPU, I'd say that unless we tested a range of GPUs at a range of resolutions and quality settings, you're not going to see the kind of real-world results many claim this approach would deliver.
Given the enormous and unrealistic undertaking that kind of testing would become for any more than a few select games, the best option is to test with a high-end GPU. And if you can do so at 2 or 3 resolutions, like we often do, this will mimic GPU scaling performance.
Don't get me wrong, it's not a dumb suggestion to test with lower-end graphics cards, it's just a different kind of test.
Ultimately, I feel like those suggesting this testing methodology are doing so from a narrower viewpoint. Playing Mass Effect Andromeda with a GTX 1060 at medium settings will get you the same kind of frame rates you'll see with the GTX 1080 Ti using ultra quality settings. So don't for a second make the mistake of assuming everyone games under the same conditions as you.
Gamers have a wide and varying range of requirements, so we do our best to use a method that covers as many bases as possible. For gaming CPU comparisons we want to determine, across a large volume of games, which product offers the best performance overall, as this is likely going to be the better performer in a few years' time. Given GPU-limited testing tells you little to nothing, that's something we try to avoid.