The 5 worst Nvidia GPUs of all time

Nvidia has a solid pedigree for making great graphics cards. It’s never really been the underdog, and its best GPUs have outperformed rival AMD time and time again. But despite Nvidia’s penchant for innovation and technological advancement, it’s released quite a few abominable cards, cursed not necessarily by bad technology, but often by bad decision-making. Let’s remember some Nvidia GPUs we wish we could forget.

GeForce GTX 480

The way it’s meant to be grilled

The Nvidia GeForce GTX 480.
Hyin

Although Nvidia has been in business for over 20 years now, there’s really only one GPU the company has ever released that was truly technologically terrible, and that’s the GTX 480. Built on the Fermi architecture, the GTX 480 (and the entire 400 series by extension) was plagued by a host of issues which, in turn, allowed AMD to become the top graphics chip maker and nearly overtake Nvidia in market share.

The 480’s biggest claim to fame (or infamy) was its power consumption and heat. Anandtech found that a single GTX 480 consumed as much power as entire dual-GPU systems and could hit 94°C in normal games, which at the time was insane. It was an unfortunate coincidence that the stock 480 cooler looked like a grill, prompting critics to turn Nvidia’s slogan “the way it’s meant to be played” into “the way it’s meant to be grilled.”

To make matters worse, Fermi was about six months late to the party, as AMD’s HD 5000 series had launched first. Sure, the 480 was the fastest single-GPU graphics card, but AMD’s HD 5870 delivered 90% of its performance without being a toaster. AMD’s dual-GPU HD 5970 was faster outright, and in 2010 CrossFire enjoyed much better support in games. Last but not least, the 480’s $500 price tag was simply too high for it to be competitive.

Nvidia ended up ignominiously killing off the GTX 400 series eight months later by releasing the GTX 500 series, which was essentially a fixed version of Fermi. The new GTX 580 was faster than the GTX 480, consumed less power and had the same price.

GeForce GTX 970

3.5 equals 4

Upon release, the GTX 970 was very well received, as were the other 900 series cards powered by the legendary Maxwell architecture. It cost $329 and matched AMD’s 2013 flagship, the R9 290X, while consuming significantly less power. In Anandtech’s view, it was a strong contender for the value champion of the generation. So what did the 970 do so badly that it landed on this list?

Well, a few months after the 970 was released, new information emerged about its specs. Although the GPU had 4GB of GDDR5 VRAM, only 3.5GB of it was usable at full speed, with the remaining half gigabyte running barely faster than DDR3, the system memory a GPU falls back on when it runs out of VRAM. For all intents and purposes, the 970 was a 3.5GB GPU, not a 4GB one, which led to a class-action lawsuit that Nvidia settled out of court, paying each 970 owner $30.
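How did that half-slow gigabyte come to light? Not from Nvidia, but from community-written benchmarks that allocated VRAM in small chunks and measured write bandwidth per chunk, with the last chunks suddenly running far slower. For the curious, here’s a rough sketch of the idea in Python, not the original CUDA tool, assuming a CUDA-capable card and PyTorch:

```python
# Rough sketch of a chunked VRAM bandwidth probe, similar in spirit to the
# community tests that exposed the GTX 970's slow memory segment.
# Illustrative only; the chunk size is arbitrary.
import torch

CHUNK_MB = 128
chunks = []

# Grab VRAM one chunk at a time until allocation fails.
while True:
    try:
        chunks.append(torch.empty(
            CHUNK_MB * 1024 * 1024, dtype=torch.uint8, device="cuda"))
    except RuntimeError:  # CUDA out-of-memory raises a RuntimeError subclass
        break

# Time a full write to each chunk; on a 970, chunks landing in the slow
# 0.5GB segment would show far lower bandwidth than the rest.
for i, chunk in enumerate(chunks):
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    chunk.fill_(0)
    end.record()
    torch.cuda.synchronize()
    ms = start.elapsed_time(end)  # elapsed time in milliseconds
    print(f"chunk {i:3d}: {(CHUNK_MB / 1024) / (ms / 1000):7.1f} GB/s")
```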

In reality, the performance implications of that missing half gigabyte were virtually nonexistent, according to Anandtech. At the time, most games that needed more than 3.5GB of VRAM were simply too intensive anyway, even for the GTX 980 with its full 4GB.

There are a few games these days where the 970 struggles due to its suboptimal memory configuration, but performance isn’t really the point here. Ultimately, Nvidia more or less lied about what the GTX 970 was, and that’s not acceptable; it stains the legacy of an otherwise excellent card. Unfortunately, playing fast and loose with GPU specs is a habit Nvidia has struggled to break ever since.

GeForce GTX 1060 3GB

This is not a 1060


After the 970 debacle, Nvidia never attempted another GPU with a slow segment of VRAM and made sure each card was advertised with the correct amount of memory. However, Nvidia found another specification that was easier to play with: the number of CUDA cores.

Prior to the 10-series, it was common to see GPUs ship in multiple (usually two) versions that differed in VRAM capacity, such as the GTX 960 2GB and GTX 960 4GB. GPUs with more VRAM were just that; in the vast majority of cases they didn’t even have more memory bandwidth. But that all started to change with Nvidia’s 10-series, which introduced GPUs like the GTX 1060 3GB. On the surface, it looked like a GTX 1060 with half the normal 6GB, but there was a catch: it also had fewer cores.

As an actual product, the GTX 1060 3GB was passable according to reviewers like Techspot and Guru3D, who didn’t even mind the lower core count. But the 1060 3GB ushered in a wave of GPU variants with both less VRAM and fewer cores, and frankly, the trend has caused nothing but confusion. A GPU’s core count is arguably what distinguishes one model from another, with VRAM being only a secondary performance factor.

The worst example of this bait and switch would have been the RTX 4080 12GB, which was set to have just 78% of the cores of the RTX 4080 16GB, making it look more like an RTX 4070 than anything else. The backlash was so intense, however, that Nvidia actually canceled the RTX 4080 12GB, which (un)fortunately means it will never make it onto this list.

GeForce RTX 2080

One step forward, two steps back

RTX 2080
Riley Young/Digital Trends

With the GTX 10 series, Nvidia achieved total dominance in the GPU market; cards like the GTX 1080 Ti and GTX 1080 are easily among Nvidia’s best GPUs of all time. Nvidia wasn’t slowing down either, as its next-gen RTX 20 series introduced real-time ray tracing and AI-powered resolution scaling. The 20-series was far more technologically advanced than the 10-series, which was basically the 900-series on a better node.

In fact, Nvidia liked its new technology so much that it gave the RTX 20 series the kind of price tag it thought that technology deserved, with the RTX 2080 at $800 and the RTX 2080 Ti at $1,200. Ray tracing and DLSS were the next big thing, Nvidia figured, so that would make up for the cost. Except that value wasn’t apparent to anyone, because on launch day there were no games with ray tracing or DLSS, and there wouldn’t be for months. Only by the time the RTX 30 cards arrived were there plenty of games supporting these new features.

The RTX 2080 was a particularly bad 20-series GPU. It was about $100 more expensive than the GTX 1080 Ti while delivering slightly less performance in our tests; at least the 2080 Ti could claim to be around 25% faster than the old flagship. Even once ray tracing and DLSS came into play, enabling ray tracing was so intensive that the card struggled to hit 60 fps in most titles, while DLSS 1.0 simply didn’t look very good. By the time DLSS 2 arrived in early 2020, RTX 30 was just on the horizon.

Nvidia had overplayed its hand, and it knew it. Just eight months after the 20-series launched, Nvidia released its RTX 20 Super GPUs, a throwback to how the GTX 500 series fixed the 400 series. The new Super variants of the 2060, 2070, and 2080 featured more cores, better memory, and lower prices, which somewhat fixed the problems of the original 20-series.

GeForce RTX 3080 12GB

How to make a good GPU terrible

RTX 3080 graphics card on a pink background.
Jacob Roach / Digital Trends

So we’ve seen what happens when Nvidia takes a good GPU and cuts down its VRAM and core count without changing the name, but what happens when it takes a good GPU and adds more VRAM and cores? Making a good GPU even faster sounds like a great idea! Well, in the case of the RTX 3080 12GB, it resulted in what might be Nvidia’s most pointless GPU by any measure.

Compared to the original RTX 3080 10GB, the 3080 12GB wasn’t much of an upgrade. Like other Nvidia GPUs with extra memory, it also had more cores, but only about 3% more. In our review, we found the 10GB and 12GB models delivered nearly identical performance, quite unlike how the 1060 3GB was noticeably slower than the 1060 6GB. To Nvidia’s credit, the 3080 12GB’s name was at least accurate, a noticeable improvement over the 1060 3GB situation.

So what was the deal with offering a newer version of a GPU with more memory? Well, Nvidia released the 3080 12GB during the 2020-2022 GPU shortage, and it naturally sold for absurdly high prices between $1,250 and $1,600. Meanwhile, the 10GB variant was selling for $300 to $400 less, and since the memory upgrade clearly didn’t matter, it was obvious which card to buy.

Perhaps the most embarrassing thing for the 3080 12GB wasn’t its cheaper 10GB sibling but the existence of the RTX 3080 Ti, which had the same memory size and bandwidth as the 3080 12GB. The thing is, it also had 14% more cores and, as a result, significantly higher performance. On review day, the 3080 Ti was even cheaper, rendering the 3080 12GB useless from every angle, just another card released during the shortage that made no sense at all.

Nvidia’s worst GPUs, so far

To Nvidia’s credit, even most of its worst GPUs had something going for them: the 970 was good despite its memory, the 1060 3GB was merely misnamed, and the RTX 2080 was only around $200 too expensive. Nvidia has made very few technological blunders so far, and even the GTX 480 was at least the fastest single-GPU graphics card you could buy.

That being said, good technology can’t make up for bad business decisions like poor naming conventions and exorbitant pricing, and those are mistakes Nvidia continues to make year after year. Sadly, it doesn’t look like either of those things is going away anytime soon, with the RTX 4080 12GB having very nearly made it to market and the RTX 4080 and RTX 4090, great cards though they are, simply far too expensive to make sense.

It wasn’t hard to predict that Nvidia’s GPUs would keep getting more and more expensive, and I expect that trend to continue. Nvidia’s next worst GPU won’t be undone by sleazy marketing, misleading branding, or technological blunders, just its price. We’d be lucky to see the RTX 4070 cost no more than AMD’s upcoming RX 7900 XTX.
