NVIDIA’s Mainstream GeForce GPU Performance Per Dollar Visualized Over The Years, Are We Bound To Get Another Pascal-Like Upgrade With Ampere?

An interesting chart depicting the generational performance per dollar improvements across NVIDIA's mainstream GeForce lineup has been posted over at the NVIDIA subreddit by the user Hic-Stunt-Leones. The chart traces the gains from the GeForce 9600 GT all the way up to the current GeForce RTX 2060.
Mainstream cards make up the bulk of GPU sales for both NVIDIA and AMD. This high-volume segment helps each company drive market share even though it doesn't offer the same margins as high-end cards. Both companies have released competitive offerings in this segment over the past decade, with cards such as the HD 5850, GTX 460, R9 270 series, GTX 970, RX 500 series, and GTX 1060, to name a few.
It's been an ever-shifting market, with consumers leaning towards whichever graphics card maker offered the better price-to-performance proposition. So let's see how NVIDIA's mainstream cards have performed over the decade and what we can expect from the upcoming Ampere GeForce RTX 30 series mainstream offerings.
Before we go into the details, here is a note from the OP on how the figures were calculated:
As stated, performance data come from TechPowerUp. I took reference from the "Performance Summary" section you can find in each review on that website going back 12 years. That shows a synthetic comparison between dozens of GPUs tested across several games.
Not all the values are shown in a single table, of course, so you have to jump from one chart to another, calculating every single generational improvement and stacking up the results.
Test resolution was kept as consistent as possible. GPUs were tested at 1920x1200 up to the GTX 460, then always at 1080p.
Note that this comparison can't take into account possible performance improvements over time due to new driver releases; the data come from each card's first review.
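To make the OP's "stacking" method concrete, here is a minimal Python sketch of the calculation under the assumptions described above. The relative-performance ratios and prices below are illustrative placeholders, not the chart's actual data; since each TechPowerUp Performance Summary only compares contemporaries, successive generational ratios have to be multiplied together before dividing by price.

```python
# Minimal sketch of the chained performance-per-dollar calculation.
# Each entry: (card, performance relative to the PREVIOUS card in the
# chain, launch price in USD). All numbers here are placeholders.
chain = [
    ("9800 GT", 1.00, 180),   # baseline: perf/$ is normalized to this card
    ("GTX 260", 1.40, 449),   # e.g. 1.40x the 9800 GT in its launch review
    ("GTX 460", 1.30, 229),   # e.g. 1.30x the GTX 260, and so on
]

# Baseline performance-per-dollar to normalize against (the 9800 GT).
baseline_value = chain[0][1] / chain[0][2]

cumulative_perf = 1.0
for name, rel_perf, price in chain:
    cumulative_perf *= rel_perf  # stack the generational ratios
    value = (cumulative_perf / price) / baseline_value
    print(f"{name}: {value:.2f}x the 9800 GT's performance per dollar")
```

With these placeholder numbers the script prints roughly 1.00x, 0.56x, and 1.43x, which mirrors the shape of the chart's story: a regression for the expensive GTX 260 followed by a large recovery with the GTX 460.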
The chart takes us all the way back to 2008, when NVIDIA introduced its GeForce 9800 GT for under $200 US. Due to heated competition from AMD, the card replaced the GeForce 9600 GT at a lower price point just 5 months after that card's release. The 9800 GT's performance is taken as the reference value for the rest of the cards, and this is where the journey of NVIDIA's mainstream segment really begins.
Troubled Beginnings (Tesla & Fermi)
The GeForce 9800 GT was followed by the GeForce GTX 260, which offered better performance but at a huge premium of $449 US. Performance per dollar scales negatively here due to the higher price of the GTX 260, the first mainstream card based on the Tesla GPU architecture. It was the GeForce GTX 460 from the Fermi generation that fixed the mainstream segment, offering competitive performance against AMD's Radeon HD 5800 series graphics cards.
While the HD 5850 and GTX 460 were priced similarly and offered similar performance, one of the big advantages of the GeForce GTX 460 was its efficiency. The GF104 GPU featured on the GTX 460 was stripped of the compute hardware used on the GF100-based Fermi cards such as the GTX 480 and GTX 470, delivering lower temperatures and higher efficiency than those higher-end offerings, which ran far too hot and consumed far more power than the competition. The GTX 460 offered more than twice the performance per dollar of its predecessor, the GTX 260.
The GeForce GTX 460 would eventually be replaced by the GeForce GTX 560 Ti, which featured the enhanced GF114 GPU. The higher core count and faster clocks at a $20 US higher price resulted in a 23.3% increase in performance per dollar over the GTX 460. It was a moderate but needed refresh to compete against AMD's HD 6800 series cards.
Peak Efficiency (Kepler, Maxwell)
Enter the 28nm era, with Kepler and GCN as the latest and greatest architectures from the GPU giants. The GeForce GTX 660 Ti and the HD 7800 series competed against one another, but NVIDIA, having learned from the mistakes it made with Fermi, made Kepler an efficiency powerhouse. The overall result was NVIDIA dominating the efficiency charts while also offering competitive graphics performance against AMD's lineup. However, the price increase to $299 US for the GTX 660 Ti meant that overall performance per dollar was lower despite the switch to the new graphics architecture.
This is also the generation in which NVIDIA shifted lower-end GPU dies into its high-end and mainstream cards: the GTX 680 moved down from a G**00 to a G**04 die while the GTX 660 moved down from a G**04 to a G**06 die, a configuration NVIDIA keeps alive to this very day. The following year, the GeForce GTX 760 replaced the GTX 660 Ti at a lower price point of $249 US while offering better performance, leading to a bigger performance per dollar increase than the move from the GTX 560 Ti to the GTX 660 Ti.
Maxwell brought one of the biggest improvements in performance per dollar, a 40.5% jump over the preceding generation. The main drivers were the GTX 960's $199 US price, which was an absolute deal, and Maxwell's through-the-roof efficiency. But AMD was picking up the pace by offering its older GCN cards at much lower price points, and with a new generation of cards coming in, NVIDIA had to go all out with Pascal, its first cards built on the 16nm FinFET process.
The FinFET Era & Beyond (Pascal, Turing, Ampere)
The GeForce GTX 1060 delivered the highest performance per dollar increase of the decade, a 53% jump over the previous generation. The main reason was AMD's Polaris-based RX 500 series offerings, which it was meant to compete against. The GeForce GTX 1060 remains the most used graphics card in Steam's hardware survey, which covers over 90 million monthly active users and counting.
The RX 580 currently holds a 2% share compared to over 11% for the GTX 1060. With the large influx of used cards from both companies after the mining crash, budget gamers flocked to buy them at bargain prices. AMD's Polaris cards did age better than the Pascal offerings thanks to improved driver support, but by now most users who went with these cards have already upgraded to something better from the two companies.
Lastly, we have the current generation of Turing cards, which has brought the smallest generational change in performance per dollar in over a decade. The GeForce RTX 2060 currently retails for under $300 US but started at $349 US, which resulted in a performance per dollar regression from Pascal. The GeForce GTX 16 series cards do offer better performance per dollar than the RTX 2060, but since the RTX 2060 is the best mainstream option in NVIDIA's RTX lineup, it was the card used for comparison in this chart.
So with the RTX 2060 now nearly 1.5 years into service, we have to ask: where do we go from here? Looking at the chart, every time NVIDIA falls behind the previous generation in performance per dollar, the succeeding generation sees a huge jump, driven largely by two things: increased performance and lower prices. Every major hit from NVIDIA in previous generations came from graphics cards priced close to $249 US rather than near the $349-$399 US segment.
Given that NVIDIA has now experimented with its first generation of raytracing-enabled mainstream cards, the upcoming generation will likely offer better value due to increased pressure from AMD's competing raytracing-capable GPUs. We also expect NVIDIA to focus on both rasterized and raytraced horsepower this time around, rather than once again pushing raytracing as a feature with only minimal rasterization and shader performance gains. If NVIDIA really wants Ampere to stand out, it might have another Pascal-like performance per dollar jump ready for all of us consumers.