Windows 7 DX11 Cards

Was the wait worth it? Will you be buying one?

  • Price is too high so no to Fermi..

    Votes: 2 16.7%
  • I bought a 5000 series and am happy..

    Votes: 5 41.7%
  • Both Fermi and 5000 series way too expensive

    Votes: 0 0.0%
  • At last! Can't wait to get my Fermi.

    Votes: 0 0.0%
  • I'm waiting until the price drops.

    Votes: 4 33.3%
  • I'm going to wait for the refresh and 512 cores

    Votes: 1 8.3%

  • Total voters
    12
Yes, most of the 5850s on all the e-shops etc are £230+ with VAT & P&P, so I ordered mine today.

Guess that's finally made me an ATI convert....lol the last one I had was a 9800 Pro, about 8 years ago, for 6 months.
 


Nvidia's CFO confirms

While initial reported pricing left a lot to be desired, Nvidia’s CFO has calmed the cash-strapped masses, saying that cheaper Fermi-based offerings should arrive mid-2010.

David White, Nvidia’s chief financial officer, said that “Fermi will show up as the GeForce and Tesla first and it will definitely show up first in the highest-performance configuration. Sometime in the mid-part of the year we will see a lower-cost version of that come out both for Quadro and Tesla.” Although White didn’t spill the beans on cheaper GeForce offerings, it is expected that they should also be available after the first half of 2010.

Once again, Nvidia makes a big deal out of the multi-functional nature of its cards, and the fact that it announced Fermi at a compute conference rather than a graphics conference is supposed to underline that. In fact, White says that the company will tape out “a whole stack of Fermi products that will be very specific for computing at the very high-end of the stack and others will be targeted at desktops and personal computing”.

As of late, Nvidia seems to actively lash out at anyone who associates the company solely with gaming, but let’s face it: the huge majority of graphics cards are used for gaming and everyday work rather than compute work, and it’ll probably remain that way for a while. After all, announcing Fermi at a fishing conference won’t make everyone fish, nor will it make Fermi into a must-have fishing device.

 
Just the other day when the naming scheme for the new Nvidia Fermi cards was announced, we speculated that supply would be very low at launch and, as such, prices could be high. Now we’re hearing that is likely the case. Sources in the graphics manufacturing market have indicated that the new GTX 470 and 480 will be in very short supply at launch. In fact, the cards may only be available through select companies close to Nvidia. This may mean that only companies that do not sell AMD parts will have access to the GF100 cores at launch.
As for price, Nvidia is said to be aiming for numbers well above AMD’s current line. The GTX 470 may retail for $499, and the GTX 480 could go for an eye-popping $680 at launch. For perspective, AMD’s Radeon HD 5970 dual GPU card tops out at only $599 MSRP.
The real bummer here is that high priced Nvidia cards would probably give AMD no reason to cut prices. Even if the Fermi cards are substantially faster, very few people will be dropping almost $700 on a GPU. Sure, these are new flagship parts, but is the price justified? If you’ve been waiting for Fermi, are you still in? Is it better to go AMD, or wait for lower end Fermi cards?

NVIDIA's GeForce GTX 480, 470 Will Come in Low Supply, High Prices | Maximum PC
 

ASUS to introduce Dual-Fermi on Computex 2010?

2/26/2010 by: Theo Valich



Earlier today, we reported about ASUS's "Shanghai Surprise" for next week's tradeshow of the year. At CeBIT 2010, ASUS will introduce the Republic of Gamers HD5970 Ares. This board is designed to show engineering excellence, as demonstrated by its previous siblings such as the GTX 285 Mars.

The ROG HD5970 Ares features two Cypress GPUs clocked at 850MHz - yes, just like the GTX 285 Mars, both GPUs are clocked to the level of single-GPU cards, unlike the GTX295 and HD5970, which both feature much lower clocks than their single-GPU siblings.

Ares also packs 4GB of GDDR5 memory clocked at 1.2 GHz QDR [4.8 GT/s], meaning that the combined bandwidth of the whole board is a staggering 307.2 GB/s, i.e. roughly 50 GB/s more than a stock-clocked HD5970 [256 GB/s]!
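The quoted figures line up with standard GDDR5 arithmetic (QDR means four transfers per base clock, and each Cypress GPU has a 256-bit memory bus). A minimal sketch of the math - the function name and structure here are our own, not anything from ASUS:

```python
# Sketch: GDDR5 bandwidth math behind the quoted Ares figures.
# Assumptions: QDR = 4 transfers per base clock, 256-bit bus per Cypress GPU.

def gddr5_bandwidth_gbs(base_clock_ghz, bus_width_bits, num_gpus=1):
    """Bandwidth in GB/s: transfer rate (GT/s) times bus width in bytes."""
    transfer_rate = base_clock_ghz * 4            # QDR: 4 transfers per clock
    return transfer_rate * (bus_width_bits / 8) * num_gpus

# Ares: 1.2 GHz QDR -> 4.8 GT/s per pin, two 256-bit GPUs
ares = gddr5_bandwidth_gbs(1.2, 256, num_gpus=2)    # 307.2 GB/s

# Stock HD5970: 1.0 GHz QDR -> 4.0 GT/s per pin
stock = gddr5_bandwidth_gbs(1.0, 256, num_gpus=2)   # 256.0 GB/s

print(ares, stock, round(ares - stock, 1))
```

The roughly 50 GB/s advantage in the article is actually 51.2 GB/s when computed exactly.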

Given our natural curiosity, we asked our sources at ASUS whether there is a chance to see a dual-GT300/GF100/Fermi based board at Computex Taipei 2010. As you might imagine, we expected an answer along the lines of "no comment," "wait and see," or something similar. Instead, we received one quite interesting answer: "Hopefully. They [we take it this refers to the ASUS engineering team] didn't got farther than engineering drawings."

Yes, we know the sentence isn't exactly correct English, so the grammar police can rest here. What can't rest is our interest in ASUS's engineering team and the work they'll obviously have to do in order to create a dual-Fermi GPU based board.

Can the ASUS GTX 285 Mars get a worthy successor in the form of a dual GTX 480 board? ASUS thinks so...

If the ROG HD5970 Ares is anything to go by, we shouldn't be surprised either to see a repeat of Ares's power configuration with two 8-pin and one 6-pin PEG connector, or we might even see triple or quad 8-pin connectors on a single PCB. Yes, this sounds absolutely crazy - then again, why not, if there are only 1000 pieces being made?

ASUS to introduce Dual-Fermi on Computex 2010? - Bright Side Of News*
 
NVIDIA GeForce GTX 480 Video Cards Require 600W Power Supply!


For months we have heard that the NVIDIA GeForce GTX 480 video card is a power hog, and today we find out for certain thanks to some cards being shown off at CeBIT. The side of some retail boxed graphics cards states that a minimum 600W or greater power supply (with a minimum +12V current of 42A) is needed for proper operation. How does that compare to other graphics cards? The ATI Radeon HD 5970 is a dual-GPU video card solution and requires a 650W power supply. The ATI Radeon HD 5870 would be the single-GPU card to compare a GeForce GTX 480 against, and it has a suggested power supply of 500W. So, the GeForce GTX 480 needs a 100W larger power supply than the competitor's high-end single-GPU solution.
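As a rough sanity check (our own arithmetic, not from the box): the 42A +12V requirement by itself corresponds to about 504W of 12V rail capacity, which is why the stated floor is a 600W total unit rather than something smaller:

```python
# Sketch: why 42 A on +12 V implies a ~600 W PSU class.
# P = V * I on the 12 V rail; the remaining headroom covers the
# 3.3 V / 5 V rails and conversion losses (our interpretation).

amps_12v_min = 42
watts_12v = amps_12v_min * 12   # 504 W minimum capacity on the +12 V rail

print(watts_12v)  # 504
```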

Fermi has three billion transistors on 40nm, 512 CUDA cores, eight times the double precision compute, IEEE 754-2008, ECC memory, support for Fortran, C++, C, OpenCL, DirectCompute, Java, and Python, and to top it off, Nexus, the world’s first fully integrated computing application development environment within Microsoft Visual Studio.

NVIDIA GeForce GTX 480 Video Cards Require 600W Power Supply! - Legit Reviews
 
Geforce GTX 4xx at Cebit
Geforce GTX 4xx: Fermi graphics card pictured at Cebit

PC Games Hardware has been able to take close-up pictures of a Fermi graphics card that is installed in one of Nvidia's demo systems. The PC is running at a booth in hall 17 and shows the GF100/Fermi tech demo Supersonic Sled.


Nvidia GF100/Fermi: Geforce GTX 4xx pictured at Cebit 2010


Are those the first pictures of a final Fermi graphics card? PC Games Hardware was able to take pictures of a Geforce GTX 4xx that has been installed in a demo system of one of Nvidia's board partners and was guarded by an Nvidia employee.

The dual-slot card with the typical radial fan has two DVI and one HDMI output. The power supply is realized with one 8-pin and one 6-pin PCI Express power connector - just like the GTX 280. Although the air exhausted by the graphics card was warm, it wasn't noticeably hotter than the outgoing air of other high-end devices. The heat of the card is transferred from the heatsink via heatpipes to aluminum fins. As usual there are two SLI connectors that make the card ready for 3-way SLI.

A little surprise was the product sticker, which mentioned a GF-100-375-A3. So the chip has already reached its third revision (A3 stepping), but the product code hasn't been updated to match the recently revealed Geforce GTX 4xx series. Thus the sample is still tagged as a Geforce GTX 375.

So Nvidia still doesn't show the fastest GF100 model running the Supersonic Sled demo - probably as a safety measure in order to prevent unofficial benchmark leaks. Accordingly, the presented version of the Supersonic Sled demo was running without anti-aliasing, and in a few really challenging scenes there were some minimal lags.

We have also learned that the smaller version of the Geforce GTX 4xx is supposed to cost about 300 to 400 Euros and the top model about 500 to 600 Euros.

Read on and view: Geforce GTX 4xx: Fermi graphics card pictured at Cebit - Nvidia, Graphics Card, Geforce GTX 4xx, Fermi, GF100 - PC Games Hardware
 
GTX 480 is faster than HD 5870
Tuesday, 02 March 2010 15:30

Cebit 2010: Twice the speed in Unigine

Nvidia's GTX 480 ends up faster than the HD 5870 in some cases and, according to Nvidia, it is the fastest GPU on the market - or, to be precise, it will be faster than the HD 5870 once it finally shows up.

We must note that we can confirm it does happen in some cases. Nvidia is still keeping the performance numbers to itself, as it wants to keep AMD on its toes, and Nvidia reps are pretty sure that AMD is still tapping in the dark when it comes to Fermi. The GTX 480 is faster in the Unigine Heaven benchmark in some situations where actual GPU power does matter.

All this is pretty good news for Nvidia, but it still has to launch the card, while AMD has been selling its HD 5000 series cards for months now.

 

GeForce GTX 480 PCB and Cooling Assembly Pictured
Sources wanting anonymity sent these pictures of the PCB and cooling assembly to sections of the media. The PCB is that of the GeForce GTX 480, and is NVIDIA's reference design. It gives away a fair amount of information about the card that has created quite some hype over the months, and which is slated for release on March 26. To begin with, the GF100 GPU on which the GTX 480 is based uses essentially the same type of package as the GT200 and G80. To help cool the large die (with a 3 billion-strong transistor count), an integrated heatspreader (IHS) is used. However, unlike with the G80 and GT200 (the past two generations of extreme-performance GPUs from NVIDIA), the display logic is integrated into the GPU package, instead of being spun off into a separate NVIO processor.

With 12 memory chips on board, the GPU connects to them over a 384-bit wide memory interface. The reference design board is expected to have 1536 MB (1.5 GB) of memory on it. There's also an unusual amount of simplicity to the board design and choice of components. The GPU is powered by a 6-phase vGPU circuit using standard DPAK MOSFETs, and there is a 2-phase vMem circuit. With wide-open spaces on the PCB, NVIDIA actually made two cutouts to help the blower's air intake.
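Those memory figures are mutually consistent under standard GDDR5 chip organization (each chip exposes a 32-bit interface; the 1 Gbit density is our assumption, inferred from the 1536 MB total). A quick sketch:

```python
# Sketch: how the quoted GTX 480 memory figures fit together.
# Assumptions: standard GDDR5 chips with a 32-bit interface each,
# 1 Gbit (= 128 MB) density per chip.

chips = 12
bits_per_chip = 32
bus_width = chips * bits_per_chip          # 384-bit total interface

mbit_per_chip = 1024                       # 1 Gbit per chip
total_mb = chips * mbit_per_chip // 8      # 1536 MB (1.5 GB)

print(bus_width, total_mb)
```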



Power is drawn from an 8-pin and a 6-pin power connector. The fan connects over a standard 4-pin PWM-controlled line, while the white 2-pin connector in the picture powers an illuminated GeForce logo on the top of the card (next to the power inputs). Connectivity includes two DVI-D connectors and a mini HDMI connector. The PCB itself seems to be about as long as reference GeForce GTX 280 boards. The cooling assembly doesn't show off lavish use of copper, but looks just as complex as the company's older cooling assemblies for such GPUs.

 
Hot, buggy and far too slow ???

SemiAccurate :: SemiAccurate gets some GTX480 scores

February 20, 2010

NVIDIA HAS BEEN hinting about the performance of its upcoming GTX480 cards, and several of our moles got a lot of hands-on time with a few cards recently. If you are waiting for killer results from the 'puppy', prepare for severe disappointment.

The short story about the woefully delayed GTX480, and its little sibling the GTX470, is that they are far slower than Nvidia has been hinting, and there is a lot of work yet to be done before they are salable. Our sources have a bit of conflicting data, but the vast majority of the numbers line up between them.

Small numbers of final cards have started to trickle in to Nvidia, and it is only showing them to people it considers very friendly for reasons that we will make clear in a bit. Because of the small circle of people who have access to the data we are going to blur a few data points to protect our sources. That said, on with the show.

There are two cards, the GTX480 having the full complement of 512 shaders, and the GTX470 with only 448, which is 64 less for the math impaired. The clocks for the 480 are either 600MHz or 625MHz for the low or half clock, and double that, 1200MHz or 1250MHz for the high or hot clock. Nvidia was aiming for 750/1500MHz last spring, so this is a huge miss. This speed is the first point the sources conflict on, and it could go either way, since both sources were adamant about theirs being the correct final clock. *sigh*.

On the GTX470 side, there are 448 shaders, and the clocks are set at 625MHz and 1250MHz in both cases. If the GTX480 is really at 600Mhz and 1200MHz, and the GTX470 is slightly faster, it should really make you wonder about the thermals of the chip. Remember when we said that the GF100 GTX480 chip was having problems with transistors at minimal voltages? Basically Nvidia has to crank the voltages beyond what it wanted to keep borderline transistors from flaking out. The problem is that this creates heat, and a lot of it. Both of our sources said that their cards were smoking hot. One said they measured it at 70C at idle on the 2D clock.

The fans were reported to be running at 70 percent of maximum when idling, a number that is far, far too high for normal use. Let's hope that this is just a BIOS tweaking issue, and the fans don't need to be run that fast; otherwise it would mean the GF100 basically can't downvolt at all at idle. On the upside, if it's any comfort, the noise from the fans at that speed was said to be noticeable, but not annoying.

If you are wondering why Nvidia made such a big deal about GF100 GTX480 certified cases, well, now you know. Remember, higher temperatures mean more leakage, which means more heat, and then the magic smoke that makes transistors work gets let out in a thermal runaway. You simply have to keep this beast cool all the time.

While this backs up many of the theories on how Nvidia lost so much clock speed, it isn't conclusive. The take home message is that this chip has some very serious thermal problems, and Nvidia is in a box when it comes to what it can do to mitigate the problem.

Now that you know the raw clocks, how does it perform? It is a mixed bag, but basically the cards are much below Nvidia's original expectations, publicly stated as 60 percent faster than Cypress. The numbers that SemiAccurate was told span a variety of current games, all running at very high resolutions. Here is where we can't list specifics, or the Nvidia Keystone Kops might find their first SemiAccurate mole. We will bring you the full spreadsheets when the cards are more widespread.

The GTX480 with 512 shaders running at full speed, 600MHz or 625MHz depending on the source, ran on average 5 percent faster than a Cypress HD5870, plus or minus a little bit. The sources were not allowed to test the GTX470, which is likely an admission that it will be slower than the Cypress HD5870.

There is one bright spot, and it is a very bright spot indeed. No, not the thermal cap of the chip, but the tessellation performance in Heaven. On that synthetic benchmark, the numbers were more than twice as fast as the Cypress HD5870, and will likely beat a dual chip Hemlock HD5970. The sources said that this lead was most definitely not reflected in any game or test they ran, it was only in tessellation limited situations where the shaders don't need to be used for 'real work'.

Update: The games tested DID include DX11 games, and those are still in the 5% range. Heaven uses tessellation in a way that games cannot: it can utilize far more shaders for tessellation than a normal game can, since games have to use those shaders for, well, the game itself. The performance of Heaven on the GTX480 was not reflected in any games tested by our sources - DX9, 10, 10.1 or 11.

The GF100 GTX480 was not meant to be a GPU; it was a GPGPU chip pulled into service for graphics when the other plans at Nvidia failed. It is far too heavy on double-precision floating-point math to be a good graphics chip, but roping shaders into doing tessellation is the one place where there is synergy. This is the only place where the GTX480 stood out from an HD5870. The benchmarks that Nvidia showed off at CES were hand-picked for good reason: they were the only ones Nvidia could show a win on, something it really needs to capture sales for this card and its derivatives, if any.

On Heaven, the sources pointed to one problem: the benchmark had many visible and quite noticeable glitches. If you were wondering why Nvidia only showed very specific clips of it at CES, that is why. DX11 isn't quite all there yet for the GTX480. This is probably why we have been hearing rumors of the card not having DX11 drivers at launch, but we can't see Nvidia launching the chip without them.

Getting back to the selective showings of the GTX480, there is a good reason for it. The performance is too close to the HD5870, so Nvidia will be forced to sell it at HD5870 prices, basically $400. The GPU isn't a money maker at this price point, and at best, Nvidia can price it between the $400 HD5870 and the $600 HD5970. The only tools left to deal with this issue are PR and marketing as the chip is currently in production.

If potential buyers get a wide range of benchmarks and correct specs, the conclusion will likely be that the GTX480 equals the HD5870 in performance. There will be no reviews based upon cards purchased in the wild for months. The way Nvidia has dealt with this in the past has been to control who gets access to cards and to hand pick the ones sent out.

If you give the GTX480 to honest journalists, they will likely say that the two cards, the GTX480 and the HD5870, show equivalent performance, so we hear Nvidia is doing its best to keep the GTX480 out of the hands of the honest. This means that only journalists who are known to follow the "reviewer's guide" closely, who are willing to downplay the negatives, and who will hit the important bullet points provided by Nvidia PR will be the ones most likely to gain early access to these cards. If this sounds unethical to you, it is, and it's not the first time. This is exactly what Nvidia did to cut Anand, Kyle and several others out of the GTS250 launch. That worked out so well the last time that Nvidia will probably try it again. Expect fireworks when some people realize that they have been cut out for telling the truth.

The end result is that the GTX480 is simply not fast enough to deliver a resounding win in anything but the most contrived benchmark scenarios. It is well within range of a mildly upclocked HD5870, which is something that ATI can do pretty much on a whim. The GTX480 can barely beat the second fastest ATI card, and it doesn't have a chance at the top.

GTX480 is too hot, too big, too slow, and lacks anything that would recommend it over a HD5870, much less the vastly faster HD5970. Nvidia is said to be producing only 5,000 to 8,000 of these cards, and it will lose money on each one. The architecture is wrong, and that is unfixable. The physical design is broken, and that is not fixable in any time frame that matters. When you don't have anything to show, spin. Nvidia is spinning faster than it ever has before.


SemiAccurate :: SemiAccurate gets some GTX480 scores
 
Lol......:

Link Removed due to 404 Error

Sorry Greg only teasin' ya... :)
 
Nice video, although for all the posturing the 480GTX is likely gonna be close to £150 more than the 5870, so you'd expect higher scores...
 
ATI Trillian: The Secret Weapon with 2GB of Memory and Support for Six Displays.

ATI, the graphics business unit of Advanced Micro Devices, is planning to commercially release its graphics card that supports output to six monitors in the coming weeks. The novelty will carry a rather unprecedented amount of memory and will represent ATI's secret weapon against Nvidia Corp.’s GeForce GTX 400-series, which is to be released late this month.

ATI Radeon HD 5870 Eyefinity 6 graphics card – code-named Trillian – will carry one ATI Cypress graphics chip with 1600 stream processors clocked at 850MHz along with 2GB of GDDR5 memory operating at 4.80GHz. The graphics card will feature six mini-DisplayPort connectors that will be able to support monitors with DisplayPort, DVI-I and D-sub input with special dongles. According to ATI, the price of the product will be “under $500”.

The Radeon HD 5870 Eyefinity 6 graphics board will finally be able to show the potential of the Eyefinity technology in all its glory - it will be able to render three-dimensional video games onto six displays at once in a bid to provide an immersive visual experience. In order to ensure maximum possible performance, the developer installed 2GB of high-speed GDDR5 memory onto the card.

At present there are not a lot of monitors with thin bezels that would ensure splendid quality in multi-monitor setups, but ATI promises that such displays will be available from Samsung Electronics sometime in April.

While the Trillian - or the Radeon HD 5870 Eyefinity 6 - will not feature much higher performance compared to the original model 5870 that was released in late September ’09, it is supposed to provide an experience that is not possible on Nvidia’s GeForce GTX 400-series: the output of 3D video games onto 2, 3, 4, 5 or 6 monitors at once. There are not a lot of people who actually own six displays, but there are some who can build a setup consisting of two or three monitors to enjoy their favorite video games.

Nvidia’s new offerings based on the Fermi architecture will also have their own trump cards: stereoscopic 3D Vision, exclusive CUDA-based software, as well as several games that use the PhysX API to process physics effects. Time will tell which advantages will persuade potential buyers.

 
Some feel the result might be a best-case scenario, as the benchmark software used is version 1.1 whereas the one on release is version 1.0, so Nvidia isn't being totally honest about everything here..
 
Well March-April is supposed to be the time of release but I wouldn't hold my breath.. :)
 