The Radeon RX 5700 is the second of AMD’s new Navi 10 graphics cards, sporting the new and improved RDNA architecture and playing second fiddle to the RX 5700 XT’s leading role. It’s a story we’ve seen many times over the years—the RX 570 was a modest step down from the RX 580, same for Vega 56 compared to Vega 64, or if you prefer the Nvidia side of things it’s a bit like the GTX 1660 vs. the GTX 1660 Ti or GTX 970 vs. GTX 980. Sporting a last-minute price drop, thanks to Nvidia’s launch of the RTX 2060 Super and RTX 2070 Super, the direct competition is the RTX 2060.
I’ve already covered AMD’s new Navi / RDNA architecture in detail, and the companion Radeon RX 5700 XT review largely overlaps this one. I recommend starting with those articles if you want more information, but if you’re wondering how AMD’s $349 part stacks up against Nvidia’s $349 part, you’ve come to the right place. AMD’s new Navi GPUs improve performance and efficiency compared to the old GCN architecture, and are the first mainstream 7nm graphics cards to hit the market (unless you want to count the Radeon VII). Here’s a quick look at the specs, comparing the new RX 5700 models with several of AMD’s previous generation GPUs:
The RX 5700 uses the same Navi 10 GPU as the other 5700 models, but AMD disables four CUs (256 cores) and drops the boost clock by 180MHz. That’s pretty typical for the second tier product for any GPU, and the reason for the changes comes down to chip yields. I’ll dig into this subject in more detail in a future article, but here’s the short overview.
Modern microchips are made using large (300mm) silicon wafers, cut from a large silicon crystal cylinder. There are impurities in the wafers used for making CPUs and GPUs, and these can lead to errors or differences in performance characteristics. My understanding is that chips from the center of the wafer tend to be ‘better’ than those near the edge, so they may require less voltage to hit the same clockspeed, or they may run at higher clockspeeds at the same voltage. Chips with impurities or errors don’t fully work, so modern processors build in redundancies and some portions of the chip can be disabled to produce a working chip—but one that’s not as fast as a fully enabled chip.
After each wafer gets cut into individual chips, each is tested to determine how good it is in a process called binning. Thus, the best chips usually get sold as the fastest, most expensive parts—the Core i9-9900K, Ryzen 7 3800X, or RX 5700 XT. Meanwhile, the functional but perhaps not quite as good chips are harvested and sold as lower tier parts. Depending on how intensive and accurate the binning process is—and how mature the manufacturing process is—the gap between the ‘best’ and ‘worst’ functional chips from a wafer may be relatively small. For newer manufacturing nodes like TSMC’s 7nm, the gap is more likely to be larger. Or in other words, the RX 5700 cards likely won’t hit the same clockspeeds as the 5700 XT, no matter how hard you try to overclock.
I wanted to explain that because there’s often information on the internet saying, “Oh, don’t worry, you can overclock your [lower tier product] to the same clocks as the [higher tier product].” In my experience, that’s rarely true. What’s more, early issues with AMD’s pre-launch RX 5700 drivers, WattMan, and MSI Afterburner mean overclocking right now isn’t very useful. That should be rectified soon, but in general I expect the 5700 to run about 200MHz slower than the 5700 XT. That combined with fewer GPU cores means it will be up to 19 percent slower in theory, and the lower TDP means it can sometimes be more than that. But at least it has the same memory bandwidth.
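To show where that "up to 19 percent" figure comes from, here’s a quick back-of-the-napkin calculation. The core counts and boost clocks below come from the public spec sheets (2,304 cores at roughly 1725MHz boost for the RX 5700, 2,560 cores at roughly 1905MHz for the 5700 XT); this is a rough theoretical-throughput sketch, not a measured result.

```python
# Theoretical FP32 throughput: cores x clock x 2 ops per clock (FMA).
# Spec-sheet numbers, assumed for illustration; real-world clocks vary.
def tflops(cores: int, clock_mhz: float) -> float:
    return cores * clock_mhz * 2 / 1e6  # MHz x cores x 2 -> TFLOPS

rx_5700    = tflops(2304, 1725)  # 36 CUs, ~1725MHz boost
rx_5700_xt = tflops(2560, 1905)  # 40 CUs, ~1905MHz boost

deficit = 1 - rx_5700 / rx_5700_xt
print(f"RX 5700: {rx_5700:.2f} TFLOPS, RX 5700 XT: {rx_5700_xt:.2f} TFLOPS")
print(f"Theoretical deficit: {deficit:.1%}")  # about 18.5 percent
```

In practice the gap is usually smaller than the theoretical maximum, since games rarely scale perfectly with compute throughput and both cards share the same memory bandwidth.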
AMD provided reference models of the RX 5700 XT and RX 5700 for testing. Unlike the 5700 XT, the vanilla RX 5700 sports a more traditional blower design. It looks similar to the RX Vega cards, but with an aluminum shroud, no dots, and the Radeon logo shifted around a bit. The good news is that unlike the reference Vega cards, the RX 5700 has a much lower TBP (Typical Board Power) of just 175W, which in turn means the fan won’t need to spin nearly as fast. That in turn means less fan noise, and at stock sitting in an enclosed case, the noise of the RX 5700 fan wasn’t really audible.
If you’re not a fan of blower fans, AMD’s board partners will of course have axial fans and open air designs available, including models with factory overclocks. Those may cost a bit more, depending on the particular model, overclock, and other amenities, which sort of flies in the face of the 5700’s lower pricing. If you want a custom factory overclocked RX 5700, I’d just bump up to the 5700 XT first. Sure it’s $50 extra, but that’s less than the price of a typical game, and you’ll get to enjoy the added performance on every game you play.
With Nvidia’s updated RTX lineup and AMD’s reduced launch pricing on the RX 5700 line, the direct competition for the RX 5700 is the RTX 2060 (non-Super). Both sell for $349, though the 2060 can be found for slightly less than that now. Generally speaking, I expect the RX 5700 will be faster in many games, thanks to its additional 2GB of memory and generally higher specs. However, you do need to decide whether or not you want to be able to play around with the new and upcoming ray tracing games. At present, AMD has no support for Microsoft’s DXR (DirectX Raytracing) API or Vulkan-RT. It’s possible to do ray tracing via compute shaders, and Nvidia has released drivers that do exactly that for the GTX 10/16 series GPUs, but AMD is so far sitting out on ray tracing support.
Perhaps that won’t matter during the life of the cards—the RTX 2060 already struggles with even modest ray tracing effects. But with the next-gen AMD powered consoles adding ray tracing hardware support, there is the potential to be left behind. I’m not sure if that’s because AMD couldn’t get hardware RT support ready in time, or if it’s holding back that feature for the consoles, but it’s certainly something to factor into your buying decision.
Let’s get to the testing, where we’re using our standard GPU test bed—full specs are to the right. The overclocked Core i7-8700K running at 5.0GHz helps ensure the CPU isn’t a bottleneck, along with DDR4-3200 CL14 memory and fast SSD storage for the same reason. CPU performance can still be a factor, particularly at 1080p, less so at 1440p and 4K. We’ve benchmarked using the latest drivers available at the time of testing, including retesting older GPUs to ensure our results are up to date. For AMD, we used the 19.16.2 drivers on the previous generation GPUs, with new drivers provided prior to the launch for the RX 5700 cards.
The selection of games we’re using for GPU testing has been updated, and we don’t run DXR or DLSS for any of the benchmarks. That allows for meaningful comparisons between the various GPUs, since AMD has no support for DXR at present. The 11 games we’re using consist of a pretty even mix of AMD and Nvidia promoted titles—The Division 2, Far Cry 5, Strange Brigade, and Total War: Warhammer 2 sport AMD branding, while Assassin’s Creed Odyssey, Metro Exodus, and Shadow of the Tomb Raider are promoted by Nvidia. DirectX 12 is utilized in most cases where available, with the exception of Total War: Warhammer 2 where the “DX12 Beta” performance is particularly weak on Nvidia GPUs.
Each card is tested at four settings: 1080p medium (or equivalent) and 1080p/1440p/4K ultra (unless otherwise noted—for example, Metro Exodus is tested using the high preset). Every setting is tested multiple times to ensure the consistency of the results, and we use the best score. Minimum FPS is calculated by averaging all frametimes above the 97th percentile—the slowest 3 percent of frames—and converting that back to fps, so it’s the “average minimum fps” rather than an absolute minimum. That makes it a reasonable representation of the lower end of the performance scale, rather than looking only at the single worst framerate from a benchmark run.
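For anyone who wants to apply the same metric to their own frametime logs, here’s a minimal sketch of the calculation described above. The sample frametimes are made up for illustration; they aren’t from our benchmark runs.

```python
# "Average minimum fps": average the slowest 3 percent of frames
# (frametimes above the 97th percentile), then convert to fps.
def min_fps_97(frametimes_ms):
    cutoff = int(len(frametimes_ms) * 0.97)
    worst = sorted(frametimes_ms)[cutoff:]       # slowest 3 percent
    avg_ms = sum(worst) / len(worst)             # average worst frametime
    return 1000.0 / avg_ms                       # milliseconds -> fps

# 100 frames: mostly ~16.7ms (60fps) with a few slow spikes (made-up data)
frames = [16.7] * 97 + [33.3, 40.0, 50.0]
print(f"97th percentile minimum: {min_fps_97(frames):.1f} fps")
```

Because the three slow spikes dominate the worst 3 percent, the metric reports a minimum well below the 60fps average, which is exactly the stuttering a player would feel.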
Here are the results, starting with 1080p. And if that seems like an easy target, it can still be difficult to max out the capabilities of a 144Hz monitor, even with an RTX 2080 or 2080 Ti.
Radeon RX 5700 performance
Above you can see performance at our four test settings, and you can flip through each gallery to see the individual game results. Our initial testing of the RX 5700 has turned up a few oddities, in particular minimum fps in several games is much lower than expected. These sorts of driver issues for a new architecture and product aren’t that uncommon, though, and should be worked out over the coming weeks.
Overall, pretty much regardless of what setting you choose, the Radeon RX 5700 comes out ahead of the equivalently priced RTX 2060. It’s 5 percent faster at 1080p medium, 8 percent faster at 1080p ultra, 11 percent faster at 1440p ultra, and 15 percent faster at 4K ultra—not that I’d necessarily recommend trying to play games at 4K ultra on the card. There are a few games where Nvidia comes out ahead (Assassin’s Creed Odyssey and Total War: Warhammer 2), but the extra 2GB VRAM and the associated higher memory bandwidth definitely puts AMD in the lead.
What about competition from AMD’s own cards, though? The RX 5700 costs $349 and the RX 5700 XT costs $399, a price difference of 14 percent. And the performance lines up pretty well with that: the XT is 9 percent faster at 1080p medium, 11 percent faster at 1080p ultra, 14 percent faster at 1440p ultra, and 12 percent faster at 4K ultra. The RX 5700 is also 5-10 percent faster than the RX Vega 64, and 15-22 percent faster than the Vega 56.
Something else to note is that the RX 5700 does this while using substantially less power than either Vega card (55W less than the Vega 56 and 135W less than the Vega 64). Or if you prefer, it uses about 10W less than the RX 580 8GB while delivering 55-80 percent more performance.
Those claims AMD made about improving performance per watt by 50 percent or more? They’re absolutely true based on our independent testing. About the only minor ding (other than the lack of DXR enabled drivers) is that noise levels appear to be slightly higher than Nvidia’s cards (based on AnandTech’s testing).
The above charts are also current as of 7/10 using the best prices I could find for the various GPUs and other system components.
If you’re just focusing on the price of a new graphics card and the performance it offers, the RX 570 4GB is pretty much impossible to beat. It’s routinely available for $120/£118 (though other parts of Europe might need to pony up €140 or more). It’s a bit more than half as fast as the RX 5700 for about one third the price. What’s not to like? The actual performance in the latest games. Even 1080p ultra is a stretch in a few games, and 1440p ultra can dip below 30fps in the most demanding games. If you’re after smooth framerates and want to enable all the graphical bells and whistles, the incredibly low price on some GPUs may look awesome, but you’ll want something faster.
The other thing to consider is that the graphics card is merely a part of the whole. A typical midrange PC will cost around $750/£675/€730, not including the graphics card. The second set of charts shows relative value (FPS per monetary unit) when looking at a complete midrange PC build, and suddenly spending a lot more money on the GPU can make sense. Here the RX 5700 XT takes top honors, with the RX 5700 sitting in the third or fourth slot. The incredibly affordable RX 570 meanwhile sits fourth from the bottom.
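The whole-system value math is easy to reproduce. The sketch below uses hypothetical placeholder prices and fps figures (not our benchmark results) just to show why a pricier GPU can win on whole-system value even while losing on GPU-only value.

```python
# Illustrative fps-per-dollar comparison. All prices and fps numbers
# below are hypothetical placeholders for illustration.
BASE_PC = 750  # rest of a midrange build, excluding the GPU

gpus = {            # GPU price, hypothetical average fps
    "RX 570 4GB": (120, 45),
    "RX 5700":    (349, 90),
    "RX 5700 XT": (399, 100),
}

for name, (price, fps) in gpus.items():
    gpu_value = fps / price                 # fps per GPU dollar
    system_value = fps / (price + BASE_PC)  # fps per whole-system dollar
    print(f"{name}: {gpu_value:.3f} fps/$ (GPU), "
          f"{system_value:.3f} fps/$ (system)")
```

With these placeholder numbers, the RX 570 easily wins on fps per GPU dollar, but once the $750 base system is included, the faster cards come out ahead—the same inversion the value charts show.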
Both value charts are worth considering—there’s no uniform way of assessing the worth of certain features. People who want to play at 4K no matter the cost would have a different perspective. If I were to use a high-end build rather than a midrange PC, things skew even further in favor of buying the fastest graphics card, at least from a gaming perspective. The point is that performance and price of the graphics card aren’t the only metrics that matter.
RX 5700 is a step down from the 5700 XT, for a reasonable $50 savings
If you’re rooting for AMD or just sick of Nvidia’s stranglehold on the GPU market, the new AMD Navi graphics cards are a great addition. Performance and even price aren’t a massive improvement over the Vega cards, but even the slower RX 5700 tends to beat Vega 64 in most games. The fact that it can do so while using a bit more than half as much power (not including the power used for the CPU and other components) is the bigger deal in my book. All things being equal, I’d much rather have a quieter and less power hungry GPU, and the RX 5700 definitely fits that description. In fact, it’s about on par with the RTX 2060 in power use while providing more performance, and it uses 10-15W less than the 2060 Super. Less power means less noise, less heat, and potentially a smaller PSU.
The improvements come from multiple vectors. TSMC’s 7nm manufacturing process certainly plays a role, but the improvements in AMD’s Navi / RDNA architecture are clearly important. And even though the RX 5700 is a modest step down from the RX 5700 XT, the lower clockspeeds also translate to more efficient performance—sort of like the old R9 Nano approach, though not quite that extreme. But that doesn’t mean the RX 5700 series is a clear win.
If I’m being honest, I prefer the RX 5700 XT over the RX 5700 and would pay the extra $50. It’s not just about the higher performance, though that’s the biggest factor. The other concern is long-term stability, as I’ve had questionable luck with AMD’s second tier products over the past few generations. With the RX 570, I had two different cards from two different manufacturers early on that are, at best, a bit flaky (a third 570 released six months later works fine). I have to run MSI Afterburner and tweak the voltages and fan speed curves for the cards to be fully stable during gaming sessions. My RX Vega 56 reference model likewise has a few quirks and needs some custom tuning to avoid periodic crashes.
Contrast that with the RX 580 cards and Vega 64, which have generally worked fine, and there’s an appeal to paying more for higher performance and a higher binned GPU. I haven’t seen anything specific with the 5700 series yet, but the binning process I discussed earlier means the XT cards are getting better GPUs, especially this early in the life cycle. Plus, early driver issues have severely limited my ability to investigate overclocking potential, so I can’t fully test how much headroom there is on the 5700. Waiting a month or two for the custom cards to arrive before jumping on the RX 5700 might be wise—and necessary if you don’t want a blower cooler.
There’s also the question of API support. Like it or not, DirectX Raytracing (DXR) is now a Microsoft standard. That means more ray tracing games are coming, and even AMD has plans to support DXR via hardware acceleration… just not right now. If DXR catches on, and optimizations to improve performance without compromising image quality continue, in a year or two the Navi 10 GPUs could look like a questionable purchase. It’s not the end of the world to miss out on a few extra reflections or improved shadows and lighting, obviously—we’ve been ‘missing’ these features for decades—but personally I’d lean toward spending the extra $50 for a 2060 Super over the RX 5700, or sacrifice a few fps compared to the 5700 XT at the same price.
It’s impossible to say exactly what will happen in the coming months and years, however, and right now the RX 5700 is a high performance card at a reasonable $349 starting price. It beats Nvidia’s direct competitor, the RTX 2060, and also outperforms AMD’s previous generation Vega cards. You also get three months of free access to Microsoft’s Xbox Game Pass if you purchase an RX 5700 card—that’s (temporary) access to over a hundred games, though I’m not convinced that’s actually better than getting two or three games outright.
Overall, the Radeon RX 5700 is still a great card and belongs on the list of potential GPU upgrades. If you’re looking for the best $350 graphics card today, it’s the RX 5700, even if it might not be the better card six months or two years from now. There’s also hope that street prices will drop a bit after the initial rush is over, though I doubt we’ll see any RX 5700 deals during Prime Day next week. (Fingers crossed!) Perhaps most importantly, regardless of whether you want to buy an AMD or Nvidia graphics card, there’s more competition now and that’s resulting in better value for everyone.