Windows 7 DX11 Cards

Was the wait worth it? Will you be buying one?

  • Price is too high so no to Fermi..

    Votes: 2 16.7%
  • I bought a 5000 series and am happy..

    Votes: 5 41.7%
  • Both Fermi and 5000 series way too expensive

    Votes: 0 0.0%
  • At last! Can't wait to get my Fermi.

    Votes: 0 0.0%
  • I'm waiting until the price drops.

    Votes: 4 33.3%
  • I'm going to wait for the refresh and 512 cores

    Votes: 1 8.3%

  • Total voters
    12
Why Nvidia cut back the GTX480

Less is more

by Charlie Demerjian

March 29, 2010

LAST MAY, we said that the GTX480, then called the GT300, was going to be hot, slow, big, underperforming, hard to manufacture, and most of all late. Almost a year later, Nvidia can't even launch PR quantities at the promised spec of 512 shaders.
To call the launch a debacle is giving Nvidia more credit than it is due. The sheer dishonesty surrounding the launch is astounding. As early as last September, Nvidia was telling partners March at the earliest, realistically later, while still promising investors and analysts that the chip would launch in Q4/2009. Other than quibbles about SEC rules, six months after its 'triumphant launch' Nvidia has finally thrown in the towel and can't launch a single card with the promised 512 shaders.
While it is a sad farce, there is a good technical reason for Nvidia having launched its Fermi GTX480 GPU with 480 shaders instead of 512. Counterintuitively, with 480 shaders it can ship a higher performing chip than it could with all 512 enabled. How? Through the magic of semiconductor chip binning.
If you recall, Nvidia was aiming for 750MHz/1500MHz with 512 shaders during planning, and publicly stated that it would beat ATI's 5870 GPU by 60 percent. On paper, that seemed quite possible, but then came the problem of actually making it. We said Nvidia couldn't. It said it could. Then it called SemiAccurate names. Then it finally launched its GTX480 chip, and the count of 512 shader parts is zero.
Back to the whole 480 versus 512 shader count issue, it all comes down to binning, a close cousin of semiconductor chip yields. In the common parlance, yield is how many chips you get that work, that are good rather than defective. Yield says nothing about the qualities of the chips, just yes or no. There is a lot more to it than that, since you could also include aspects of binning under the heading of yield, but for now we will stick with the good versus bad definition.
Binning on the other hand is more about what you get from those working parts. If you take a given semiconductor wafer, the chips you get from that wafer are not all equal. Some run faster, some run slower, some run hotter, and some run cooler. There is a lot of science behind what happens and why, but once again, let's just look at the simplified version and say that for almost any property you look at, the chips coming out of a fab will be on a bell curve for that property. Designers can skew how narrow or broad the curve is to varying degrees as well, but there is always a curve.
When designing a chip, you have design targets for making the chip. For example, let's say that you want it to run at 2GHz and consume 50W. The designs are set so that a large percentage of the chips will at least meet those minimum specs, that is, clock to at least 2GHz and pull no more than 50W when doing so. The idea is to have as little of the tail of the curve as possible be on the wrong sides of those two numbers.
Any chips that fall below those design points are scrap, so the trade-off in design is to figure out how much area you can add to the die in order to move that curve up before the scrap chips cost more than the net added area of the working chips. Ideally, you would want 100 percent of the parts above the line, but that never happens.
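To make that concrete, here is a minimal sketch (invented numbers, assuming simple normal distributions rather than real fab data) of how a design team might estimate the fraction of chips landing on the wrong side of the hypothetical 2GHz/50W target from the example above:

```python
# Sketch: fraction of chips missing a 2GHz / 50W design target.
# The means, sigmas and the normal-distribution assumption are illustrative only.
import random

random.seed(42)

def simulate_chip():
    """Return (max_clock_ghz, power_w) for one simulated die."""
    clock = random.gauss(2.2, 0.15)   # bell curve centred above the 2GHz target
    power = random.gauss(45.0, 4.0)   # bell curve centred below the 50W target
    return clock, power

chips = [simulate_chip() for _ in range(100_000)]
good = sum(1 for clock, power in chips if clock >= 2.0 and power <= 50.0)

print(f"Meets both targets: {good / len(chips):.1%}")
print(f"Scrap (tail on the wrong side): {1 - good / len(chips):.1%}")
```

Adding die area to shift those curves only pays off while the cost of the extra silicon stays below the cost of the chips it rescues from the scrap tail.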
For some chips, for example game console chips, there is only one speed they need to run at. An Xbox 360 CPU runs at 3.2GHz. If the chips coming out of the fab can run at 6GHz, it doesn't matter, they will spend their lives running at 3.2GHz no matter what their potential is. There isn't a market for faster Xbox 360 chips. On the other hand, if the chips can't reach 3.2GHz, they are scrap. There is a hard line, so you want the bell curve to be as high as you can get it.
For computer CPUs, they sell at a range of speeds, for example 2.0, 2.2, 2.4, 2.6, 2.8 and 3.0GHz. If you aim for everything above 3GHz, that is great, but it is usually a waste of money. When you get CPUs out of the fab, they are tested for speed. If they make 3GHz, they are sold as 3GHz parts. If not, they are checked at 2.8GHz, then 2.6GHz and all the way down to 2.0GHz. Missing a single cutoff in the CPU world does not mean a chip is scrap.
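Continuing the same toy model (again with made-up numbers, purely illustrative), speed binning amounts to sorting each chip into the highest grade it clears, with only the very bottom of the curve becoming scrap:

```python
# Sketch: speed-binning simulated CPUs into the 2.0-3.0GHz grades from the text.
# The clock distribution is invented for illustration.
import random
from collections import Counter

random.seed(1)
BINS_GHZ = [3.0, 2.8, 2.6, 2.4, 2.2, 2.0]  # checked from fastest to slowest

def bin_chip(max_clock_ghz):
    """Return the highest speed grade this chip clears, or 'scrap'."""
    for grade in BINS_GHZ:
        if max_clock_ghz >= grade:
            return f"{grade:.1f}GHz"
    return "scrap"

clocks = [random.gauss(2.7, 0.3) for _ in range(100_000)]
counts = Counter(bin_chip(c) for c in clocks)

for grade in [f"{g:.1f}GHz" for g in BINS_GHZ] + ["scrap"]:
    print(f"{grade:>7}: {counts[grade] / len(clocks):.1%}")
```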
You can bin on multiple metrics, like how many chips work at 2.8GHz while consuming less than 75W. The graphs on binning are multidimensional and get astoundingly complex very quickly. Since a chip is not uniform across even its own die, especially with larger chips, you can selectively disable parts of a chip if they don't meet the bins that you require. A good example of this would be AMD's X3 line of 3-core chips.
You can also add redundant components, like an extra core, or an extra shader per cluster, but that adds area. More area costs more money per chip, adds power use, and can actually lower yield in some cases. Once again, it is a tradeoff.
GPUs have been doing this forever, and end up with very good overall yields on large and complex chips because of it. If you have a GPU with 10 shader groups and it has defects (that is, it does not yield if you are aiming for all 10 groups working), it is very likely to yield as the next smaller part in the lineup, a hypothetical 8-group GPU. If you couple that with binning, and set things loosely enough, you will end up with a good number of formerly 'scrap' chips that lead a productive life. An extra shader per group ups the yield by a lot as well.
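A rough sketch of that salvage effect, using a hypothetical 10-group GPU and an invented per-group defect rate, shows how many 'failed' dice come back as the cut-down part:

```python
# Sketch: how selling a cut-down part recovers 'failed' dice.
# A hypothetical 10-shader-group GPU; the defect rate per group is invented.
import random

random.seed(7)
GROUPS = 10
P_GROUP_DEFECT = 0.08   # illustrative chance any one group is broken

def simulate_die():
    """Return the number of working shader groups on one die."""
    return sum(1 for _ in range(GROUPS) if random.random() > P_GROUP_DEFECT)

dies = [simulate_die() for _ in range(100_000)]

full_part  = sum(1 for g in dies if g == GROUPS)       # all 10 groups work
salvage    = sum(1 for g in dies if 8 <= g < GROUPS)   # sold as the 8-group SKU
true_scrap = len(dies) - full_part - salvage

print(f"Full 10-group part: {full_part / len(dies):.1%}")
print(f"Recovered as 8-group part: {salvage / len(dies):.1%}")
print(f"Scrap: {true_scrap / len(dies):.1%}")
```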
You can see this in almost every graphics chip on the market. The GTX280 has a little brother, the GTX260, and the HD5870 has its HD5850. Later on in the life of a GPU family, you often see parts popping up, usually in odd markets or OEM only, with specs that are a cut below even the smaller parts in the family. This generally happens when there is a pile of parts that don't make the lowest bin, and that pile is big enough to sell.
Ideally, you set targets to make sure there is almost no need for the proverbial 'pile in the back room', but there are always outliers. If you can't get the majority of the bell curve above the cutoff points for your lowest bins, you have a problem, a big and expensive problem. At that point, you either need to respin the chip to make it faster, cooler, or whatever, or lower your expectations and set the bins down a notch or five.
Nvidia is in the unenviable position of having to set the bins down, way, way down. The company is trapped, however. The chip is 60 percent larger, over 530mm^2, barely outperforms its rival, ATI's 5870, and is over six months late.
This means Nvidia can't take the time to respin it. ATI will have another generation out before Nvidia can make the required full silicon (B1) respin, so there is no point in trying a respin. The die size can't be reduced much, if at all, without losing performance, so it will cost at least 2.5 times what ATI's parts cost for equivalent performance. Nvidia has to launch with what it has.
What it has isn't pretty. A1 silicon was painfully low yielding, sub-two-percent for the hot lots, and not much better for later silicon. A1 was running at around 500MHz, far, far short of the planned 750MHz. A2 came back and was a bit faster, but not much. When Nvidia got back A3 just before Christmas 2009, it was described to SemiAccurate by insiders at Nvidia as "a mess". Shader clocks for the high tip of the curve were 1400MHz, and the highest bin they felt was real ended up being about 1200MHz.
This is all binning though. Yields were still far below 20 percent for both the full 512 shader version and the 448 shader part combined. For comparison, Nvidia's G200 chip, which became the GTX280 family of GPU parts, had a yield of 62.5 percent, give or take a little, and that yield was considered so low that it was almost not worth launching. Given a sub-20 percent yield, to call the Fermi or GF100 or GTX4x0 line of GPU chips unmanufacturable is being overly kind.
So, what do you do if you are Nvidia, are half a year late and slipping, and the best chip you can make can barely get out of its own way while costing more than five times as much as your rival's chip to manufacture? You change bins to play with numbers.
Nvidia made the rather idiotic mistake of announcing the GTX470 and GTX480 names in January, and now it had to fill them with real silicon. The company bought 9,000 risk wafers last year, and couldn't get enough to make the promised 5,000 to 8,000 512 shader GTX480s from that, a required yield of less than 1 percent. See what we mean by unmanufacturable? To make the minimal number of cards that you need for even a PR stunt level launch, you need to have at least a few thousand cards, and there simply were not that many 512 shader chips.
What is plan B? According to Dear Leader, there is no plan B, but that is okay. At this point, the GTX480 is on plan R or so. You have to suck down your ego and lower the bins. If you can't make 512 shaders, the next step down is 480 shaders. This moved the cutoff line down far enough to make the meager few thousand 480 shader parts necessary to launch.
GTX480 is slow, barely faster than an ATI HD5870. If Nvidia loses 32 shaders, it also loses 1/16th of the performance, or 6.25 percent. That would leave it a bit slower than the 5870 if the clocks stayed in the low 600MHz range. Still not good for a triumphant launch, but at this point, a launch is better than the alternative, no launch. Out the door is the first step.
Remember when we said that one problem was 'weak' clusters that would not work at the required voltage? Well, if you want to up the curve on yields, you can effectively lower the curve on power to compensate, and Nvidia did just that by upping the TDP to 250W. This is the classic overclocking trick of bumping the voltage to get transistors to switch faster.
While we don't for a second believe that the 250W TDP number is real, initial tests show about a 130W difference in system power between a 188W TDP HD5870 and a GTX480 whose official spec is '250W'. Nvidia lost a 32 shader cluster and still couldn't make 10,000 units. It had to bump the voltage and disable a cluster to get there. Unmanufacturable.
If you are still with us, we did mention that the 480 shader part was faster. How? With the slowest cluster gone, that bumps the speed curve up by a fair bit, and the worst part of the tail is now gone. Bumping the voltage moves the speed curve up more, and the end result was that Nvidia got 700MHz out of a 480 shader unit. That 700/1400 number sounds quite familiar, doesn't it?
On CPUs with a handful of cores, multiplying the core count times MHz is not a realistic thing to do. Most workloads that CPUs handle are not parallel in nature so the result is bogus. GPUs on the other hand have embarrassingly parallel workloads, so number-of-cores X MHz is a fair calculation. If you look at our initial specs, the early GTX4x0 cards SemiAccurate had access to were 512 shader units running at 600MHz and 625MHz, and a 448 shader unit running at 625MHz or so.
With the last minute spec change, voltage bump and shader fusing off, Nvidia was able to move the bins down enough to get a 700MHz part that has 480 shaders with a '25W' penalty. The shipping specs are 448 cores at 1215MHz for the GTX470, and 480 cores at 1401MHz for the GTX480. If you look at the count of cores X MHz, you will see how it is a little faster.
Shaders X Clocks, and speed versus a 600MHz 512 shader part
What did Nvidia get by losing a cluster, adding tens of watts, and upping the clock? It looks like nine percent over the spec tested by SemiAccurate last month, and five percent over the proposed 512 shader 625/1250MHz launch part. According to partners, Nvidia was playing with the numbers until the very last minute, and that playing seems to have paid off in a net faster card.
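For what it's worth, the shaders-times-shader-clock arithmetic described above does line up with those percentages, taking the earlier test parts as 512 shaders at 1200MHz and 1250MHz shader clock (a back-of-envelope check, not figures from Nvidia):

```python
# Back-of-envelope check of the shaders x shader-clock comparison above.
shipping_gtx480 = 480 * 1401            # shipping spec: 480 shaders @ 1401MHz
tested_600mhz   = 512 * 1200            # earlier 600/1200MHz 512-shader card
proposed_625mhz = 512 * 1250            # proposed 625/1250MHz launch part

print(f"vs 600MHz 512-shader part: {shipping_gtx480 / tested_600mhz - 1:+.1%}")    # roughly +9%
print(f"vs 625MHz 512-shader part: {shipping_gtx480 / proposed_625mhz - 1:+.1%}")  # roughly +5%
```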
PR gaffes of not being able to make a single working part aside, this five to nine percent bump allowed Nvidia to dodge the bullet of ATI's Catalyst 10.3a drivers, and add a bit to the advantage it had over a single HD5870. Most importantly, it allowed Nvidia to get yields to the point where it could make enough for a PR stunt launch.
The 480 shader, 1400MHz cards are barely manufacturable. If you don't already have one ordered by now, it is likely too late to get one since quantities are going to be absurdly limited. As things stand, the 9,000 risk wafers seem to have produced less than 10,000 GTX480s and about twice that many GTX470s if the rumored release numbers are to be believed. That would put yields of the new lower spec GTX480 at well under the two percent Nvidia saw last fall.
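As a sanity check on those rumored numbers (my own back-of-envelope, assuming 300mm wafers and the standard first-order gross-die-per-wafer estimate; only the wafer count, card counts and ~530mm^2 die size come from the article), the card counts do work out to yields in the low single digits:

```python
# Rough yield estimate from the rumoured numbers above.
# Die size (~530mm^2) is from the article; the 300mm wafer size and the
# first-order die-per-wafer formula are assumptions for illustration.
import math

WAFER_DIAMETER_MM = 300.0
DIE_AREA_MM2 = 530.0
WAFERS = 9_000
GTX480_CARDS = 10_000   # "less than 10,000 GTX480s"
GTX470_CARDS = 20_000   # "about twice that many GTX470s"

# Common first-order gross-die-per-wafer estimate (ignores edge and scribe details).
dpw = (math.pi * WAFER_DIAMETER_MM**2 / (4 * DIE_AREA_MM2)
       - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * DIE_AREA_MM2))

candidates = WAFERS * dpw
print(f"Gross dice per wafer: ~{dpw:.0f}")
print(f"GTX480 yield: ~{GTX480_CARDS / candidates:.1%}")
print(f"GTX480 + GTX470 combined: ~{(GTX480_CARDS + GTX470_CARDS) / candidates:.1%}")
```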
It is hard to say anything good about these kinds of yields, much less to call them a win. A one-point-something percent yield, however, is a number greater than zero. S|A

SemiAccurate :: Why Nvidia cut back the GTX480
 
I found this clipping, makes for interesting reading... lol

 

Attachments

  • newspaper(2).jpg · 56.6 KB
China to become the driver of future revenues growth: Q&A with Nvidia general manager of MCP business Drew Henry


With numerous negative rumors in the discrete graphics card market surrounding Nvidia's new GeForce 480/470 (Fermi) graphics chip, and with competition from AMD, Digitimes recently had a chance to talk to Drew Henry, Nvidia general manager of MCP business, to get the company's comments on these rumors as well as its strategy for the future.
Q: What are your comments on the rumors that Nvidia has disabled some problematic graphics cores on its GeForce 480/470 chips, leaving these chips with fewer than 512 cores due to Taiwan Semiconductor Manufacturing Company's (TSMC's) low yields?
A: Nvidia does not comment on unannounced products; however, we have a chance to launch a graphics chip with 512 cores in the future.
TSMC's yields for its 40nm process have met our expectations, and market rumors about the yields being lower than 20% are completely untrue. We currently have everything under control.
Q: Can you tell us about Nvidia's plans for improving the GeForce GTX 480/470's power consumption and heat, and your schedule for the rest of the GTX 400 series?
A: Our new Fermi-based GeForce GTX 480/470 chips are a significant performance improvement over our previous-generation GTX 285, even though the GTX 480/470's power consumption is about 15-20W higher. However, we believe consumers who choose to purchase the GTX 480/470 are more focused on performance than on how many extra watts it consumes. I believe consumers will think that paying a slightly higher electricity bill in exchange for 10% more performance is a worthwhile trade.
GeForce GTX 480/470-based graphics cards began shipping from most of the major graphics card makers in April, and the rest of the GeForce 400 series graphics cards will gradually be launched over the next few months to satisfy different markets.
Q: In the past year, with demand for Nvidia products dropping, Nvidia's close graphics card partner XFX turned to also selling AMD graphics cards. Does Nvidia have any plans to strengthen its partnerships with graphics card makers?
A: I need to make two clarifications. One is that Nvidia's share of the graphics card market has seen steady growth in the past six months and did not drop.
The other is that XFX is not a close partner of Nvidia; the company has a lot of partners, such as Asustek Computer, Micro-Star International (MSI), Gigabyte Technology and Zotac, that we are currently working closely with.
Q: With the price of graphics card components such as DRAM rising, does Nvidia have plans to raise its product prices?
A: We currently do not have plans to increase prices.
Q: How is Nvidia's performance in China?
A: Compared to other countries, China's desktop PC market is still seeing strong growth, and since the country's DIY PC and gaming markets are also seeing growing demand, discrete graphics card sales in China have been rising.
We currently have a large share of China's discrete graphics card market and believe the country will become the key driver of future revenue growth.
Drew Henry, Nvidia general manager of MCP business
Photo: Monica Chen, Digitimes, April 2010

China to become the driver of future revenues growth: Q&A with Nvidia general manager of MCP business Drew Henry
 
Q: Can you tell us about Nvidia's plans for improving the GeForce GTX 480/470's power consumption and heat, and your schedule for the rest of the GTX 400 series?
A: Our new Fermi-based GeForce GTX 480/470 chips are a significant performance improvement over our previous-generation GTX 285, even though the GTX 480/470's power consumption is about 15-20W higher. However, we believe consumers who choose to purchase the GTX 480/470 are more focused on performance than on how many extra watts it consumes. I believe consumers will think that paying a slightly higher electricity bill in exchange for 10% more performance is a worthwhile trade.

Ah, NO. I'd rather a card went twice as fast for 10% less power consumption, not the other way round, you drongo Drew. Like the setup I have now vs what I had:

2x ATI Radeon HD2600XT GDDR4 in CrossFire at 125W vs 1x ATI Radeon HD5770 GDDR5 at 108W = twice the FPS
 

GeForce GTX 480 3-way SLI with a sharp Point of View
You know, typically after spring, closing in on summer time, things start to slow down on the hardware scene. And good gawd my man, it's still busier than ever. Each and every month there are new NDA releases.. there's just lots of good new gear out there and that's just excellent! Now I still had a GeForce GTX 480 SLI article planned (and a GTX 470 SLI one as well actually) but as a result of how busy it has been and still is, and on top of that finding out that the NVIDIA board partners would rather see their boards selling in the stores instead of being tortured by the press, this SLI article got delayed.
Nonetheless, have no fear .. ze Guru is here! Today we'll start up the first in a series of SLI articles based on the GeForce GTX 480 series graphics cards. The GeForce GTX 400 series might have had a rough start, but if we filter out the mighty thorn of this release, the noise levels, then for a minute everybody can wholeheartedly admit that they are beautifully performing graphics cards. Back in January I had already heard that SLI performance would be outstanding with this new series and well, that definitely tickled my senses and taste buds.
As a result, today we'll 'finally' have a look at SLI scaling of that GeForce GTX 480. We look at single card performance, dual-card performance and also triple (3-way) SLI performance to see how well these puppies will scale.
The article will first cover SLI performance among the new GTX 480's in several configurations and games, and then we'll check a little 2-way Multi-GPU gaming in a handsome multi-GPU slaughter-fest article in the ATI versus NVIDIA kind of way to see who and what scales the best.
Over the next few pages we'll tell you a bit about multi-GPU gaming, the challenges, the requirements and of course a nice tasty benchmark session. Have a browse to the next page please, where we'll start up like lightning and thunder --- jeehaw!

Read on: GeForce GTX 480 3-way SLI review
 
NVIDIA Announces GTX 580




Less than a week after the launch of the GTX 480, NVIDIA has announced that the next series of graphics cards based on the Fermi architecture is already in the works - the 500 Series. The NVIDIA GeForce GTX 580 will be the first to launch and will feature 580 CUDA cores to go along with its "580" moniker and have a whopping 2560MB of 384-bit GDDR5 memory. The rest of the specifications are unknown at this time, but NVIDIA is promising the fastest single-GPU card on the market.

"Although we're very proud of the GTX 480," NVIDIA President and CEO Jen-Hsun Huang said, "the 400 series is merely a tease for what Fermi can really accomplish. When we release the 500 series later this year, I think everyone will be pleasantly surprised."

After the rather lukewarm reception of the GTX 480, this is certainly welcome news to fans of NVIDIA. Even more welcome is that the GTX 580 is expected to launch before the end of the year. In fact, Huang said that he's hopeful it'll be on the market by this summer. But what about a price? A separate NVIDIA source who will remain anonymous for obvious reasons let us in on a little secret - the GTX 580 will launch at the GTX 480's current price. The best part of that tidbit may be the obvious implication - a price drop on the 400 series! So if you're in the market for a new video card, it may be wise to wait until the summer...if you can hold off that long.
 
ATI Announces Quad-GPU HD 5999



You didn't actually think ATI would sit back and watch NVIDIA grab a chunk of the graphics market with its recent launch of the GeForce 400 series, did you? Just one week after NVIDIA's newest cards hit the market, ATI announced its new HD 5999, a single-card, quad-GPU graphics card. You read that correctly - a single graphics card featuring four graphics processing units! As if that wasn't shocking enough, the most shocking piece of information may be its expected release date - next month! According to ATI, HD 5999s were shipped out to reviewers yesterday, and thus should be arriving at their doors as you read this.

ATI decided to keep mum about this new card and didn't even inform reviewers ahead of time. So hopefully when our beloved ccokeman receives an unexpected, nondescript box from AMD, he doesn't call the bomb squad over to investigate.



ATI Announces Quad-GPU HD 5999
 
I'm afraid you've been had, Greg, as both articles are unfortunately April Fools' Day jokes... (if you look under both articles you'll see written 'Happy AF (April fools) day'). Wishful thinking eh Greg?
 
I'm afraid you've been had, Greg, as both articles are unfortunately April Fools' Day jokes... (if you look under both articles you'll see written 'Happy AF (April fools) day'). Wishful thinking eh Greg?


You're right. :tongue:
 
How about this one:

ATI Radeon HD 5970 is the king of iPhone, Wi-Fi password cracking - Bright Side Of News*


For instance, the company (ElcomSoft) just announced GPU acceleration for its iPhone/iPod Backup and Wi-Fi Password Recovery tools if you're using an ATI Radeon HD 5000 series card. According to the company, the ATI Radeon HD 5000 series offers up to a 20x performance increase compared to top-of-the-line Core i7 CPUs and is up to two times faster than enterprise-level, four-GPU Tesla products.

Using ElcomSoft's Wireless Security Editor, the company claims the ATI Radeon HD 5970, a dual-GPU consumer card, broke 100,000 passwords per second. In comparison, a Tesla S1070 with four Tesla boards achieves 52,400 passwords per second. For further comparison, an Intel Core i7 920, a 2.66GHz CPU with eight threads, can calculate 4,000 passwords per second, or 25 times slower than the dual-GPU beast from AMD. Intel's six-core beast, the Core i7 980X at 3.33GHz, cannot pass 6,000 passwords per second - even with hardware AES-NI encryption instructions [AES-NI is a feature of the 32nm Westmere architecture].
 
PowerColor making progress with its Radeon HD 5970 Eyefinity 12 card



Tul Corp-owned AMD board partner PowerColor has provided a few new pictures depicting its Radeon HD 5970 Eyefinity 12 graphics card which boasts two Cypress 40nm GPUs and 12 (twelve) mini DisplayPort outputs, enabling a massive 12-monitor setup for gaming and serious bragging rights.
PowerColor's upcoming Eyefinity monster features a 2x256-bit memory interface backed by 4GB of GDDR5 VRAM, 3200 Stream Processors, DirectX 11 and Quad CrossFire support, and takes up three slots when utilized 'in full'. If you don't want to connect more than six monitors to the HD 5970 Eyefinity 12 then there's the option to remove the daughter card with six outputs that's at the rear.
PowerColor has not revealed the card's clocks or pricing but that's probably because it is preparing a Computex (June 1-5) launch.
 
End of the road for GTX470?




May 19th, 2010 at 6:16 pm - Author faith

KitGuru’s sources in Taiwan have told us that nVidia has stopped taking orders for the GeForce GTX 470 card. Is this a temporary measure or has it been killed off less than 2 months after its launch on March 26th?
There’s no doubt that Fermi has huge potential. Its complexity and, in many ways, forward-looking technology is interesting to KitGurus around the globe.
That said, not every flavour of Fermi will be a success.
On Wednesday, KitGuru heard that, for now, no more orders are being taken for the nVidia GeForce GTX470.
If true, that would make it one of the shortest lived graphic cards in history. But what’s the full picture?
When KitGuru broke the earlier story that nVidia appeared to be using GTX470 PCB production for the new GTX465, it immediately raised a question mark about why the GTX470 production line no longer needed them.
Jay Puri

Jay Puri, Executive Vice President of Worldwide Sales, is a wise man at nVidia, and any decision to drop superfluous lines will have been given the go-ahead by him. The same goes for product branching. Unfortunately, so far, he has not been available to comment on our report.
KitGuru has learned that nVidia will look to spice up the Fermi range with a sexy GTX465 card with a lower price and strong performance figures. Now we need to see how it measures up against the Radeon HD 5850. If that move is successful, does that call the future of the GTX470 into question?
KitGuru UPDATE: Word has now reached us of a 375W, dual-GPU, GTX490. If true, then that could explain any reluctance to take new orders for the GTX470. nVidia could have chosen to move its GTX470 cores to the new GTX490 card, while it positions the 200W GTX465 chip against AMD’s Radeon HD 5850.
We’re still as keen as mustard to see if the GTX465, armed with 1GB of 256-bit GDDR5, will be enough to win that battle.
For now, it may not be the end of the road for the GTX470 – instead, a well-placed fork could be in order.

End of the road for GTX470? | KitGuru
 
HD 5970 Eyefinity 12 card

12 monitors is a little too expensive for a budget PC, as is the HD 5970 itself. Anyway, with the Catalyst tray icon, when you right-click on it, would it show a long list of 12 items: ATI 5900 series -> Extend desktop?
 
Lol... I don't think it's really intended for a budget scenario.. :)
 
Sheeshh, you'd need to sit 30+ feet away to be able to see it all. That's a lot of desktop; imagine a 2x6 bank of 30" LCDs.
 
Voltage tweak :

ASUS Radeon EAH5870 Voltage Tweak Edition



Also spotted this special edition version ASUS is doing... 38% faster than the stock version :D

"Asus has today launched what is being called the world's first EAH5800 series graphics card with "voltage tweak technology" that promises up to a 38% boost in performance. The new 1GB EAH5870 and 1GB EAH5850 are the planet's first two cards to utilize this new tech, which essentially gives power users the ability to boost GPU voltages via the SmartDoctor application to enjoy up to a 38% improvement in performance."

 
Sheeshh, you'd need to sit 30+ feet away to be able to see it all. That's a lot of desktop; imagine a 2x6 bank of 30" LCDs.


Would be a splendid panorama, somewhere in the backyard.
 
I read a review today of the latest XFX Black Edition 5870 (the link has since gone dead). Anyway, I eventually arrived at the bench tests and I couldn't believe how close to the GTX480 the vanilla 5870 is (I've not seen many reviews comparing the two), and on the Unigine Heaven bench it has the GTX down as 1218! (My card pulls 1207; I'll include the screenie to prove it.)
I guess what I'm trying to say is that those extra few frames are extremely expensive!
 
I read a review today of the latest XFX Black Edition 5870 (the link has since gone dead). Anyway, I eventually arrived at the bench tests and I couldn't believe how close to the GTX480 the vanilla 5870 is (I've not seen many reviews comparing the two), and on the Unigine Heaven bench it has the GTX down as 1218! (My card pulls 1207; I'll include the screenie to prove it.)
I guess what I'm trying to say is that those extra few frames are extremely expensive!

You proved it many times, Kem. :) Sure it's a great card.

I also noticed this in my benches: it lists anisotropic filtering and antialiasing, so what is 'Filter: trilinear' then?
 