Windows 7 DX11 Cards

Was the wait worth it? Will you be buying one?

  • Price is too high so no to Fermi..

    Votes: 2 16.7%
  • I bought a 5000 series and am happy..

    Votes: 5 41.7%
  • Both Fermi and 5000 series way too expensive

    Votes: 0 0.0%
  • At last! Can't wait to get my Fermi.

    Votes: 0 0.0%
  • I'm waiting until the price drops.

    Votes: 4 33.3%
  • I'm going to wait for the refresh and 512 cores

    Votes: 1 8.3%

  • Total voters
    12
Quite comprehensive though. Looks like ATI 5870 CF = 1 GTX 480 under extreme tessellation in Unigine Heaven.

Can you guys find one mistake in the specs of the two cards in that video? OK, is this true:

ATI -> 1600 stream processors
NVIDIA -> 480, so ATI has a big advantage here?
 
cybercore said:
ATI -> 1600 stream processors
NVIDIA -> 480, so ATI has a big advantage here?

There isn't an accurate way to relate or compare ATI's number of "stream cores" to Nvidia's "cores". They refer to totally different methodologies that achieve the same performance goal in vastly different ways, which is why ATi often puts faster RAM on its cards to compensate for other deficiencies relative to Nvidia's cards and level things out. For example, the ATI 5800s have a dedicated tessellator unit, so in theory the rest of the card can sit largely idle during a tessellation benchmark. On Nvidia's 400 series, by contrast, the whole card's cores emulate a tessellation unit while doing all the other work at the same time at 100% output, giving rise to larger "benchmark" scores without the card necessarily being better overall in real gaming, and that likely explains why they run a lot hotter and cost more to run than ATi's offerings.

Simply put: "swings and roundabouts".

I guess benchmarks are the only real way to compare, but even then, "real world" gaming often shows the weaknesses of any generation of cards from either brand.
 
I know what you mean. I don't think either brand has a monopoly on the best technology, so it makes sense to vote based on bang for buck, where ATi is the current champion... but I don't expect that to last.
 
I still find it pretty amazing that these chips are planned years beforehand...
 
cybercore said:
ATI -> 1600 stream processors
NVIDIA -> 480, so ATI has a big advantage here?

What looks WRONG to me is how they compared the number of shader processors:

ATI = 320 shader processors (850 MHz each) x SIMD 5 = 1600

NVIDIA = 480 shader processors (1608 MHz) x MIMD 8 = 3840! MIMD!

So NVIDIA has a much bigger advantage in this respect. MIMD means multiple blocks processing different instructions per clock cycle, and this is how shaders should be counted.
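
Just to make the two counting methods concrete, here is a quick Python sketch of the arithmetic. The figures (1600/480 units, 850/1608 MHz, and the x5/x8 issue widths) are the ones quoted in this thread; whether NVIDIA's x8 is the right multiplier is exactly what is being debated here, so treat this as an illustration, not gospel.

Code:
# Toy throughput comparison using the figures quoted in this thread.
def ops_per_second(units, clock_mhz, ops_per_clock=1):
    """Naive throughput: units x clock x operations issued per clock."""
    return units * clock_mhz * 1e6 * ops_per_clock

ati_raw     = ops_per_second(1600, 850)      # counting all 1600 stream processors
nvidia_raw  = ops_per_second(480, 1608)      # counting 480 cores at face value
nvidia_mimd = ops_per_second(480, 1608, 8)   # the post's MIMD x8 claim

print(f"ATI raw:       {ati_raw / 1e12:.2f} T-ops/s")
print(f"NVIDIA raw:    {nvidia_raw / 1e12:.2f} T-ops/s")
print(f"NVIDIA (MIMD): {nvidia_mimd / 1e12:.2f} T-ops/s")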


~~~~~~~

NVIDIA has better potential, but in practice, at moderate tessellation, ATI's performance is very close and sometimes even faster - as all the tests above show, including the tests in the video.
 
Much like trying to compare an American muscle car to a European one: sure, the Yankee ones might go faster in a straight line, but roads aren't all straight, are they... and European cars own cornering at speed.
 
The Nvidia chip may have more potential on paper, but Nvidia has just cancelled it. They are going to concentrate on the GF104, which has less of everything anyway...
 
Nvidia GTS 455

Back when rumours started regarding the GF104, we heard about the GTS 455 and wrote about it here. Back then, rumourville was talking about the GTX 460 and GTS 455 as the first cards that would be made with the GF104 GPU, and it appears the GTS 455 was on the roadmap, just not yet, as Nvidia should launch two GTX 460 versions on July 12th.

Well, today rumourville is still talking about the GTS 455 as a possible GF104 card, and considering Nvidia's naming scheme, this one might end up being a much more crippled GF104 than the one on GTX 460 cards.

The specs are still blurry and nothing is confirmed yet, so it's still not clear how crippled the GTS 455 will actually be. The launch date is also unconfirmed at press time, as it is unclear whether Nvidia is going to launch this one before the GF106-based cards or alongside them.

The GTS 455 for now sounds like a "fourth card" to be launched with GF104, as you have two versions of the GTX 460, a possible GTX 475 with the full GF104 chip, and the rumoured GTS 455.

 
GeForce GTX 460 review (roundup with 8 cards)

We test and review the GeForce GTX 460 - and not just one of them, but eight in total. NVIDIA today launches both a 768MB and a 1024MB version of this all-new DirectX 11 compatible product series. Where there was a lot to discuss with the GF100-based GPUs, the all-new chip that powers the new cards really does its job well: it offers good value for money, doesn't run hot at all, is quiet, and has decent power consumption as well.

In this first roundup we'll look at eVGA's regular 768MB and SuperClocked 768MB editions, MSI's 768MB Cyclone edition, Gigabyte's 768MB OC edition, Palit's Sonic Platinum 1024MB edition and Zotac's 1024MB regular edition. All in all we've been busy benchmarking our guts off... that's six cards plus two reference GeForce GTX 460 768MB and 1024MB graphics cards.

Click right here to read this Guru3D.com article: GeForce GTX 460 review (roundup)
 
kemical said:
Good thorough review - 30 pages!

A good mid-range card, although a little cut down on bus width and shaders. Some benchmarks (GTX 465 ~ $280):



[benchmark charts: links removed due to 404 errors]

~~~~~~~~~~~~~~~~~

Comparison

Ok, to keep this from turning into a numbers jumble, let's take this one component at a time. We covered shaders, with the GTX-480 at 100% (shaders and cost), the GTX-470 at 93.33% (shaders) and the GTX-465 at 73.33% (shaders). A short script that reproduces all of these percentages follows the tables below.

Shaders:

GTX-480 100% Cost 100% Shaders (480)
GTX-470 70% Cost 93.33% Shaders (448)
GTX-465 56% Cost 73.33% Shaders (352)

ROPs:

GTX-480 100% Cost 100% ROPs (48)
GTX-470 70% Cost 83.33% ROPs (40)
GTX-465 56% Cost 66.66% ROPs (32)

Transistors (Exposed)

Based on 3 billion transistors exposed on the GTX-480 with 480 cores. All Fermi chips have 3 billion transistors, but only a percentage of those are exposed; these are estimated exposed-transistor counts, scaled by shader count.
GTX-480 100% Cost 100% Transistors (3 Billion)
GTX-470 70% Cost 93.33% Transistors (2.8 Billion)
GTX-465 56% Cost 73.33% Transistors (2.2 Billion)

Memory:

GTX-480 100% Cost 100% Memory (1536 MB)
GTX-470 70% Cost 83.33% Memory (1280 MB)
GTX-465 56% Cost 66.66% Memory (1024 MB)

Memory Bus Width:

GTX-480 100% Cost 100% Bus (384Bit)
GTX-470 70% Cost 83.33% Bus (320Bit)
GTX-465 56% Cost 66.66% Bus (256Bit)
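
As promised above, here is a short Python sketch that reproduces every percentage in these tables from the raw spec and cost figures quoted in this post (the exposed-transistor rows are just the shader percentage applied to 3 billion, so they fall out of the same numbers):

Code:
# Sanity-check the percentage tables; all inputs are this post's figures.
CARDS = {
    #           cost%, shaders, ROPs, memory MB, bus bits
    "GTX-480": (100,   480,     48,   1536,      384),
    "GTX-470": (70,    448,     40,   1280,      320),
    "GTX-465": (56,    352,     32,   1024,      256),
}

base = CARDS["GTX-480"]
for name, (cost, *specs) in CARDS.items():
    # Express each spec as a percentage of the GTX-480 figure.
    shaders, rops, mem, bus = (100 * s / b for s, b in zip(specs, base[1:]))
    print(f"{name}: cost {cost}% | shaders {shaders:.2f}% | ROPs {rops:.2f}% "
          f"| memory {mem:.2f}% | bus {bus:.2f}%")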
 
So roughly speaking, the new 465s are 56% of the cost for about two-thirds of the performance of the 480... it seems they are finally getting the chip going in the right direction on both fronts.
 
New high end card has two GF104 chips
Comes later this year
We got some confirmation that the GTX 480 will get a successor, and that this new card will be made of two GF104 chips.

The card will have two GTX 460 chips, so to speak, and with the right clocks it could be faster. It is too early to know the exact specification, but as a hint of what to expect: the full GF104 chip has eight clusters of 48 shaders each, so a card with 384x2 shaders in total is possible. This is of course assuming Nvidia manages to enable all the shaders and clock the chip at some decent speed, say 700+ MHz.

The dual card could end up with up to 768 shaders, but we are quite sure that the final number will be lower than that. As for the memory, GDDR5 is of course the first choice, and clock speeds of around 4000MHz should be possible.
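
As a back-of-envelope check on those numbers, here is a quick sketch assuming the article's figures (8 clusters of 48 shaders per chip, ~4000 MHz effective GDDR5) plus a 256-bit bus per GPU, which is my assumption based on the 1GB GTX 460 rather than anything the article states:

Code:
# Rumoured dual-GF104 card, upper-bound arithmetic.
CLUSTERS, SHADERS_PER_CLUSTER, GPUS = 8, 48, 2

shaders_per_chip = CLUSTERS * SHADERS_PER_CLUSTER   # 384 per chip
dual_card_max    = shaders_per_chip * GPUS          # 768 upper bound

MEM_MHZ_EFFECTIVE = 4000   # article's rumoured GDDR5 speed
BUS_BITS          = 256    # assumption: same as the 1GB GTX 460
bandwidth_gbs = MEM_MHZ_EFFECTIVE * 1e6 * BUS_BITS / 8 / 1e9

print(f"Shaders per chip:      {shaders_per_chip}")
print(f"Dual-card upper bound: {dual_card_max}")
print(f"Bandwidth per GPU:     {bandwidth_gbs:.0f} GB/s")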

The only thing we can confirm is that the new high-end "GX2 Fermi successor" dual card has two GF104 chips and that, performance-wise, it should be the fastest thing on the market; at the very least it should end up faster than the Radeon HD 5970, but then again, ATI is not sitting around with its legs crossed.

 
Graphics Cards Supplier Expects ATI’s “Southern Islands” to Show Up in Q4.
ATI Radeon HD 6000 Will Be Announced “Around Q4” in 2010 – Graphics Cards Maker
[07/14/2010 09:24 AM]
by Anton Shilov
A senior marketing officer from Hightech Information System, a well-known add-in-card partner of ATI (graphics business unit of Advanced Micro Devices), said in an interview that the company expects its graphics chip partner to announce its next-generation Radeon HD 6000-series graphics processing units (GPUs) in Q4 2010.

“We are expecting the new ATI Radeon HD 6000-series to be announced later this year, around Q4 2010. After which, we will be releasing our own Radeon HD 6000-series early next year,” said Kenny Chow, a senior marketing officer at HIS, in an interview with Funky Kit web-site.

Although AMD itself has promised to substantially update the graphics lineup this year, it has never confirmed the Radeon HD 6000-series name (or the Southern Islands code-name). Moreover, ATI itself was very quiet about its next-generation graphics products at the Computex Taipei 2010 trade show last month, which might be an indicator of a delay. There are sources who claim that the graphics processor designer hopes to release its new family of GPUs this year.

The reason why the senior marketing specialist from HIS decided to go on the record with the name and launch timeframe of the novelty is unclear. On the one hand, it may mean that ATI has already briefed its partners about the forthcoming graphics products. On the other hand, this may indicate that nothing has been revealed yet to the add-in-board partners and they only share their expectations. Yet another reason to start talking about the ATI Radeon HD 6000-series products is to distract attention from Nvidia Corp.'s latest GeForce GTX 460 product, which appears to be a potentially successful offering.

Not a lot is known about the Southern Islands family of products. The SI family will offer higher performance than the currently available ATI Radeon HD 5000 "Evergreen" line, but will hardly be considerably more advanced in terms of feature set. It is rumoured that the designers of the new GPUs concentrated mostly on improving efficiency rather than on building something completely new from scratch, which is why certain building blocks of the new Southern Islands family will be inherited from the current Evergreen line.

 
Nvidia starting to catch up
Judging by the recently announced graphics results from AMD, it looks like the ATI part of the company has delivered some rather nice numbers. The company has sold 16 million Radeon HD 5000-series cards in the last three quarters.

This means that it has shipped 16 million DirectX 11 parts while Nvidia probably managed only a few hundred thousand. So judging by the number of cards sold, ATI has won this round by a mile.

The second surprising part is that even today the Radeon 5850 and 5870 cost as much as they did six months ago when they launched. Although AMD has slightly reduced HD 5830 pricing, in the short term we don't see any major price cuts. Nvidia is surely catching up with its GeForce GTX 400 series, but only in the $199-and-up market, and bear in mind that this particular market segment is limited to the handful of people willing to pay that kind of money for good graphics.

Nvidia's GF106 and GF108, its mainstream and entry-level chips, are expected in late summer, say August, and by that time ATI will probably sell a few hundred thousand, if not a million, more HD 5000-series cards.

In its last quarter the company reported revenue of $440 million and a rather miserable $33 million of profit; a year ago it had revenue of $235 million and a loss of $17 million.

Nvidia is certainly limping in the number of DirectX 11 Fermi and similar parts sold, but overall Nvidia makes much more money, with a much higher average selling price for its GPUs.

Nvidia has Optimus and quite a strong following of people who want its GPUs, and once it makes a decent DirectX 11 mobile GPU it will continue to grow in that market segment. ATI is doing poorly in mobile computing, while Nvidia, as you should know by now, makes a lot of money in the professional CAD/CAM market and nowadays makes decent margins selling GPUs for supercomputers.

Overall, ATI wins this round; it could make better profits out of it, but it owns the DirectX 11 market. Nvidia still makes a lot of money with its outdated DirectX 10 cards, and it certainly knows how to sell them at a much higher profit. The biggest issue for Nvidia is that once it has a whole DirectX 11 lineup, ATI will be ready to move to its second-generation DirectX 11 product line.

Of course, ATI can thank Nvidia for being late with Fermi, and even when the company launched it after a six-month delay it was not as great as people had expected. But knowing Nvidia, once they fall on their face, they tend to come back stronger.

 
DirectX 11 Cards



On one hand, AMD is preparing new models to extend the DirectX 11 ATI Radeon HD 5000 series, which already includes the industry's fastest graphics solution, the dual-GPU ATI Radeon HD 5970; on the other hand, it is still working on an entirely new, designed-from-scratch architecture for its next-generation graphics processors.

The next-generation architecture that will power the ATI Radeon HD 6000 series is expected to be ready in the second half of the year, probably in the last quarter, although some estimates allegedly point to 2011 instead. TSMC's serious efficiency problems with its 40nm manufacturing technology have affected AMD, which is known to be planning to produce GPUs with its partner GlobalFoundries. Recently, AMD's chief executive Dirk Meyer confirmed that GlobalFoundries will begin producing GPUs; GlobalFoundries plans to complete the preparatory work for the transition to 28nm production in the last quarter of 2011 and then start volume production.

According to this information, production at GlobalFoundries' facilities has been accepted for the ATI Radeon HD 6000-series GPUs launching in 2011. Nothing is clear yet, but AMD's new architecture is said to give greater weight to a GPGPU approach alongside high 3D performance, and because the GPU architecture was prepared with a flexible design approach, it is said that it will be integrated more tightly into the processor in the next generation of the Fusion processor family.
 
Re: DirectX 11 Cards

What are Nvidia's GF104, GF106 and GF108?

As we have been saying for over a year now, Nvidia picked the wrong architecture for Fermi/GF100: it attempts to do everything well, and ends up doing everything in a mediocre way. The Fermi architecture is simply too inefficient to compete against ATI's Evergreen in performance per mm^2, performance per watt, and yield.

Cutting down an inefficient architecture into smaller chunks does not change the fundamental problem of it being inefficient compared to the competition. In fact, if you cut a GF100 in half, you will get something that loses by almost the same ratio to a Cypress/HD5870 cut in half, a GPU you know as Juniper/5770.

The GF104/106/108 chips were never really meant to be what they are now; they are a desperate stopgap that simply won't work, but that is better than trying to sell DX10 parts for another year. Nvidia's model is to make a big, hot, fast part, and then shrink it relatively quickly. The big chips are at the top of the performance stack, so they sell at a premium; if they didn't, the math wouldn't work out.

From there, the shrink offers about 90% of the performance of its big brother at 'consumer' level prices. The math works out, and Nvidia makes money. They did it with the 90nm G80/8800GTX, shrinking it to the 65nm G92/8800GT, and it worked out nicely. The G200b/GTX285 on 55nm was supposed to be tarted up and shrunk to the 40nm G212, but that project failed, as did its smaller brother, the G214. This put NV in a big bind, and explains the utter lack of decent DX10.1 parts from the company.

More importantly, it broke the economics of the G200 line, which did not get derivatives until late Q3/2009, leaving a huge hole in Nvidia's lineup. Eventually that, combined with the woefully late GT300/Fermi architecture, made the chips too expensive to manufacture for the price they could command in the market.

Nvidia silently EOLed the chips, then was forced to pretend that they were still in production. They weren't. With Fermi, the problem is worse, much worse. Nvidia designed an unmanufacturable chip and now has to do the same hot-shoe dance to pretend it is viable in the market. It isn't, and was never meant to be; the shrink to 32nm would have cured that little problem, but until then the architecture is not economically viable.

Then TSMC canceled the 32nm node. This left Nvidia with an economically non-viable high end chip, no mid-range, and a woefully out of date low end. The process tech that was going to save them went bye bye with the 32nm cancellation, and their worst nightmare, status quo, was in force.

The stopgap plan was to take the GF100 and cut it up. Since GF100 is quite modular, it can be cut into four parts easily, that is exactly what happened. No shrinks, no updates, and no better chance of economic viability until TSMC comes through with 28nm wafers in mid-2011. Maybe. The magic 8-ball of semiconductor process tech just laughed when asked if 28nm would be on time.

So, what are the derivatives? One of the four, the full shrink/G102, had to die for obvious reasons, leaving three derivatives. The new dice are 3/4 of a GF100 for the GF104, 1/2 of a GF100 for the GF106, and 1/4 of a GF100 for the GF108. A bit of math gives the GF104 384 shaders and a 256-bit memory bus, the GF106 256 shaders and a 192-bit bus, and the GF108 128 shaders and a 128-bit bus.
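
As a sanity check on that arithmetic: the shader counts scale with the fraction of the full 512-shader GF100 die, while the bus widths below are the article's quoted figures (note they do not scale exactly linearly from 384-bit):

Code:
# GF100 derivatives as fractions of the full 512-shader die.
GF100_SHADERS = 512

derivatives = {
    # name: (fraction of GF100, quoted bus width in bits)
    "GF104": (3 / 4, 256),
    "GF106": (1 / 2, 192),
    "GF108": (1 / 4, 128),
}
for name, (frac, bus_bits) in derivatives.items():
    print(f"{name}: {int(GF100_SHADERS * frac)} shaders, {bus_bits}-bit bus")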

With that out of the way, the math starts looking really ugly. Since Nvidia won't release official die numbers, let's go with a little larger than the 530mm^2 we first told you about. For the sake of argument and tidy numbers, let's just go with 540mm^2; our sources later clarified that GF100 was 23-something mm by 23-something mm, so 540 is still likely a bit small. Each quarter of the chip is going to be about 135mm^2, give or take a little.

If you remove some of the GPGPU features and caches, you remove not only some of the performance but also some die area. Word has it that Nvidia is basically adding a bit of pixel fill capacity to make the chips competitive on DX10 performance, so let's just assume the additions and subtractions are a wash on die size. Per-shader performance of GF10x is said to be very close to GF100, and a bit better than GF100 in older games.

GF100 is also hot; it sucks power like an Nvidia PR agent sucks up the Kool-Aid, and the official 250W figure is, well, laughable. For the sake of argument, let's lowball it and say the chip takes 280W, even though several companies list it at 295W or 320W in their sales literature. Being kind would put power use at 70W per quarter of a GF100.

Power is the one area where Nvidia could make the most progress on the new chips, but we hear that the progress, like the silicon changes, is minimal at best. The claims of 150W GF104s should be taken with two big grains of salt. First, a lot of units will be fused off and clocks 'managed' for thermal reasons more than anything else. Second, since Nvidia is still claiming 250W for the GTX 480, why not make up a similarly laughable number for the new part? The last claim worked, right?

So, that would put the GF104 at 210W, GF106 at 140W, and the GF108 at 70W, but that is before shaders are fused off. Since the GF10x line is more or less unchanged from the GF100 in most ways, the yields are going to be in the same toilet. Why? Read this.
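
The area and power estimates are just the same fractions applied to the article's own lowball baseline (the 540mm^2 and 280W figures are its assumptions, not measured numbers):

Code:
# Scale the assumed GF100 die area and board power by the same fractions.
GF100_AREA_MM2, GF100_POWER_W = 540, 280

for name, frac in {"GF104": 3 / 4, "GF106": 1 / 2, "GF108": 1 / 4}.items():
    print(f"{name}: ~{GF100_AREA_MM2 * frac:.0f} mm^2, "
          f"~{GF100_POWER_W * frac:.0f} W before shaders are fused off")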

The same problems that affected GF100 are going to hit the GF10x parts, and Nvidia is going to end up doing the same things to make salable parts: fusing off bad clusters while downclocking like mad. Contrary to Nvidia's magical statement about 40nm yields, our checks at Computex said quite the opposite. Sources tell SemiAccurate that the only way GF100 yields broke 20% is if you include the GTX 465. Then again, hitting 50% or so yields with 5/16 shader clusters disabled, 80% of your intended clock rate, and nearly 100% over the power budget is one thing I would not recommend putting on your resume. The Fermi architecture is still an unmanufacturable mess.

The GF104/6/8 are based on the same architecture, more or less unchanged, so they are going to be a mess as well. Since they are smaller, around 405mm^2, 270mm^2 and 135mm^2, yields should improve proportionately, but still be way behind the competitive ATI chips. If you want a sign of how bad yields are going to be, look at the samples floating around, specifically EXPreview's great find.
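
The article doesn't give a yield model, but a standard Poisson defect model is one way to see why a smaller die of the same design should yield proportionately better. The defect density below is a purely illustrative assumption chosen to land near the ~20% figure above, not a number from the article:

Code:
import math

# Poisson yield model: fraction of defect-free dice = exp(-D * A),
# where D is defect density and A is die area.
DEFECT_DENSITY = 0.004   # hypothetical defects per mm^2, for illustration

for name, area_mm2 in [("GF100", 540), ("GF104", 405),
                       ("GF106", 270), ("GF108", 135)]:
    yield_frac = math.exp(-DEFECT_DENSITY * area_mm2)
    print(f"{name} ({area_mm2} mm^2): ~{yield_frac:.0%} defect-free dice")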

It looks like the initial GF104s are going to have at least one block of 32 shaders disabled if not two. The 336 shader number is likely an artifact of the stats program not being fully aware of the new chip yet, but Nvidia might have added the ability to disable half a cluster. GF108s shown at Computex had 96 of 128 shaders active.

In any case, the same problems of failing vias and failing clusters don't seem to have been fixed this time around, so expect all sorts of yield irregularities, not-fully-functional 'fully functional' parts, and the other half-baked solutions many have grown to expect from Nvidia of late.

Another good example of how unchanged the GF104 is can be seen in the shape of the die at EXPreview. If you take three of the four roughly square clusters of 128 shaders from the GF100 and arrange them in the shape that wastes the least area, what do you get? A long, thin rectangle, like this one. That, more than anything, shows the cursory nature of the cut-and-paste job done by Nvidia; don't expect any real changes.

This all leaves Nvidia in quite an economic pickle. The shrink that was going to make the architecture financially viable didn't happen. The updates that were going to make the chips manufacturable didn't happen. The attendant power savings didn't happen. The attendant performance gain didn't happen. What you are going to get is simply a smaller shader count with a slight front end rebalancing of an architecture that wasn't competitive in the first place. These derivatives don't make it any more competitive.

The GF104s are larger than ATI's Cypress by quite a bit, consume more power, and are much slower. The GF104 cards are rumored to sell for less than the lowest-end GF100-based 465, so economic viability is, well, questionable from day one. If Nvidia raises the clocks enough to make them competitive, the chips not only blow out the power budget and drop yields, they also obsolete the GF100-based 470.

That in turn makes the GF100 chip economically non-viable, the two most manufacturable variants, relatively speaking, are under water. This is the problem with a broken architecture, damned if you do, damned if you don't, and there is no hope for change.

When will they be coming out? Originally, the GF104 was slated for a Computex launch, but that didn't happen. Contrary to rumors started by people not actually at the show, there were no GF104s at Computex, and partners had not received samples yet. This means launch slipped from June to mid-July at the earliest, maybe later.

GF106 was supposed to launch about the same time as GF104, so it may be out at the same time as well. GF108 originally had an August launch, but that slipped a month as well to September. The open question is what Nvidia can actually make, and at what clock speed?

Nvidia was tweaking clocks and shader counts three weeks before 'shipping' in late March, and we hear much of the same hand wringing is going on now. Maybe the magic 8-ball won't laugh as hard if I ask it again in a few weeks. The more things change at Nvidia, the more they stay the same, GF104, GF106, and GF108 are proof that the company is lost.
 
Re: DirectX 11 Cards

ASUS Working on MARS II Dual GTX 480 Graphics Accelerator
Posted on Saturday, July 17 2010 2:38 am by forum member: Nashaz
Filed under: News Around the Web

After treating the enthusiast community to the Republic of Gamers (ROG) ARES dual HD 5870 graphics accelerator, ASUS isn't wasting any time in designing its successor, referred to (for now) as "MARS II". This graphics accelerator uses two NVIDIA GeForce GTX 480 (GF100) GPUs on one board. That's right: the first dual-GPU accelerator based on GF100, a chip so dreaded for its thermal and electrical characteristics that NVIDIA is content with having the second-fastest graphics card on the market (the GTX 480), with no immediate plans to work on a dual-GPU accelerator of its own.

ASUS' ambitious attempt is in the design stage deep inside its R&D, where the design is in an evaluation state. The R&D gave us some exclusive pictures of the MARS II PCB to treat you with. To begin with, the card's basic design is consistent with almost every other dual-GPU NVIDIA card in recent past. There are two independent GPU systems, each with its own VRM and memory, which are interconnected by an internal SLI, and connected to the system bus by an nForce 200 bridge chip. On this card, two GF100 GPUs with the same configuration as GeForce GTX 480 (GF100-375-A3) are used, each having 480 CUDA cores, and connecting to 1536 MB of GDDR5 memory across a 384-bit wide memory interface.



ASUS' innovations kick in right from the PCB, since it takes a lot of effort to keep such a design electrically stable, as well as form an overclocker's product. MARS II uses a PCB with 3 oz copper layers to increase electrical stability, paired with a strong VRM. Each GPU system is fed by its own 8+2-phase VRM, which uses new Super Alloy chokes that reduce core energy loss. The card takes its power from three 8-pin power inputs, which are fused.



The card is quad-SLI capable, and can pair with another of its kind (and probably with single GTX 480s). To cool this monstrosity, ASUS is coming up with a beefier-than-ever cooling solution. With the product still at an evaluation stage, how long it will take to reach production, or whether it will reach it at all, remains to be seen.

techPowerUp! News :: ASUS Working on MARS II Dual GTX 480 Graphics Accelerator
 
Re: DirectX 11 Cards

GeForce GTS 450 PCB Design Leaked?

Yesterday we reported that the NVIDIA GeForce GTS 450 is scheduled for release in August. The new card will be based on GF106, boasting 1GB of GDDR5 memory paired with a 128-bit memory interface. The PCinlife forum has just unleashed a mysterious PCB design drawing. It's supposed to be a GF106-based product, but we're not sure whether it's the GTS 450.


As shown in the drawing, the front side can be equipped with six memory chips, so the memory size is presumably 768MB. The card features a 3+1 phase power design, a 6-pin power connector, an SLI connector, and dual DVI plus mini-HDMI outputs.
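
For what it's worth, the 768MB guess follows from the chip count if you assume the 1-Gbit (128MB) GDDR5 chips common at the time, each on its own 32-bit channel. Note that the implied bus width then comes out at 192-bit, which conflicts with the 128-bit rumour quoted above; the chip density is an assumption, not something the leak confirms.

Code:
# Memory size and bus width implied by six GDDR5 pads on the PCB.
CHIPS         = 6
MB_PER_CHIP   = 128   # assumed 1-Gbit GDDR5 chips
BITS_PER_CHIP = 32    # one 32-bit channel per chip

print(f"Memory: {CHIPS * MB_PER_CHIP} MB")
print(f"Bus:    {CHIPS * BITS_PER_CHIP}-bit")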


 
ARES/2DIS/4GD5 Dual Radeon HD 5870 with 4GB GDDR5


~ 1300 USD






- Dual Radeon HD 5870 with 4GB GDDR5
- Performance King: ~32% faster than generic Radeon HD 5970
- 600% air flow Fansink in comparison with reference Radeon HD 5970
- 8mm heat pipe x8 with 99.9% oxygen-free-copper for best heat dissipation
- Limited Edition with Unique Metal case & ROG Gaming mouse bundle


ATi: ASUS ARES/2DIS/4GD5 Limited Edition, Unlimited Power


The Republic of Gamers consists of only the best of the best. ASUS offers the best hardware engineering, the fastest performance, and the most innovative ideas. The ARES/2DIS/4GD5 is a dual Radeon HD 5870 with 4GB GDDR5: 32% faster than the generic Radeon HD 5970, with 600% the airflow of the reference Radeon HD 5970's fansink, eight 8mm heat pipes in 99.9% oxygen-free copper for the best heat dissipation, and a Limited Edition with a unique metal case and ROG gaming mouse bundle.

ASUS features include Voltage Tweak (ASUS' exclusive Voltage Tweak technology for up to 50% more performance), gigantic 4GB GDDR5 memory, PCI Express 2.1 support, GPU Guard, ASUS Splendid, ASUS Gamer OSD and ASUS Smart Doctor.

Graphics GPU features: Vision Black, 40nm GPU, Microsoft Windows 7 support, and ATI Eyefinity technology (extend the view across six displays to immerse yourself in gameplay and entertainment).
 