Posted on Monday, December 28 2009 9:06 pm by forum member: SeniorEditor
Filed under: News Around the Web
Nvidia is expected to delay its next-generation DirectX 11-supporting GPU (Fermi) to March 2010, while AMD will launch more GPUs in January-February, according to sources from graphics card makers.
Nvidia originally planned to launch Fermi in November 2009, but the launch was delayed until CES in January 2010 due to defects, according to market rumors. However, the company recently notified graphics card makers that the official launch will now be in March 2010, the sources noted.
Nvidia plans to launch a 40nm, GDDR5 memory-based Fermi GF100 GPU in March, with a GF104 version following in the second quarter; these will target the high-end market alongside its GeForce GTX 295/285/275/260, the sources pointed out.
For the performance segment, Nvidia will have its GeForce GTS 250, GT 240/220 and 9800 GT/9500 GT defend against AMD's Radeon HD 5770/5750, 4870/4850 and 4670/4650.
For the mainstream market, Nvidia will mainly push its GeForce 210.
Meanwhile, AMD will launch 40nm Radeon HD 5670/5570 (Redwood) and HD 5450 (Cedar) GPUs at the end of January or in February 2010, the sources noted.
In related news, although Taiwan Semiconductor Manufacturing Company's (TSMC's) 40nm process yields have already improved, capacity is still not sufficient to fully supply the two GPU giants, which may impact their launch schedules, the sources added.
Nvidia declined to comment on this report, saying it cannot comment on unannounced products. AMD had not responded by the time of publication.
In an interview with the good people of DonanimHaber, Nvidia PR Head for the EMEA Region Luciano Alibrandi told the world + dog that the company's new Fermi GPU will be the mother of all GPUs and that it will win in every market segment.
"We expect [Fermi] to be the fastest GPU in every single segment. The performance numbers that we have [obtained] internally just [confirm] that. So, we are happy about this and are just finalizing everything on the driver side," said Alibrandi.
However, the major issue with Fermi isn't its performance, but rather its delayed launch. Nvidia already confirmed that it would ramp up production in Q1 of its fiscal 2011, or sometime between January 26 and April 26. Add to that the additional couple of weeks needed to stick these GPUs into actual cards and ship them and things start to look even worse for Nvidia.
"We just want to make sure it is as perfect as we want it to be in both graphics and computing," said Mr. Alibrandi.
While Nvidia is taking its time to make sure everything is perfect, ATI is supposedly working on a refreshed version of Cypress, and it is clear ATI will have a two-quarter lead over Nvidia in the DirectX 11 market.
The last time I saw this going on was while waiting for the release of the ATI 2900. It was delayed and delayed, and when it was eventually released the card was awful anyway... I just wonder if we are going to see the same thing again with Fermi?
According to our well informed industry sources, Nvidia will display Fermi GPUs at the upcoming CES in Las Vegas. This means we should be able to see it right on January 7th when the show starts.
The company won't share all the information, especially since after CES Nvidia wants to hold its famous editor's days in Vegas and show editors the power of Fermi. Since this is an NDA event, it is not something we will be attending anytime soon, but most of our good friends will.
The Fermi launch is still "on schedule" for Q1, but we can only hope this means real-world calendar Q1, March 31st at the latest, and not Nvidia's fiscal Q1, which runs one month behind.
Fermi is real, but since volume production is still ramping up, even if it launched in January, Fermi would have very limited availability for quite a few weeks.
All the delays have definitely cast a bad aura around this product, and many people, especially those who love the power of ATI, now compare Fermi to NV30. So far, at least with all of the delays, they are not that far off.
Fudzilla - Nvidia to show Fermi graphics at CES
At Pepcom's highly acclaimed Digital Experience event, Nvidia finally showed its Fermi GF100 graphics card, and this time it was actually working. Nvidia did show a Fermi dummy board back in October 2009, but three months later the company has reached the stage where it is comfortable showing the actual card to the world.
The card features a dual slot design and needs two power connectors, which is actually nothing new for this part of the graphics market. The cooler completely surrounds the card and our first impression was that the card was noisy and probably quite hot.
Nvidia representatives did try to assure us that the final card could end up cooler and quieter, but this is something we will have to wait and see. The card was running Unigine in DirectX 11 mode, and they did demonstrate tessellation, one of the cool features of DirectX 11, but we couldn't see any performance figures.
Nvidia hosts its Editor's Day in the next few days in Las Vegas, so we are quite sure that they will let people know a lot more about the card.
Wish Nvidia would get a move on, cos at the minute I'm looking at an ATi 5850 (best card for the money at the minute, I think) and I'm really struggling to wait around another few months for my preferred GPU maker, especially when I've got my birthday cash next week burning a hole in my pocket....lol
As expected, Nvidia has finally demonstrated a working Geforce-based Fermi (GF100) card at CES 2010. Few details are available, but at least we know GF100 is up and running. PC Watch has captured a video of GF100 running Unigine's DX11 benchmark.
Visually, the GF100 prototype looks like any Geforce product - though there is no doubt the final design could look a whole lot different. The length appears to be a standard 10.5", and it covers two PCI slots. Two SLI connectors are present, as usual.
All in all, no surprises - except the power connectors. GF100 has been rumoured in some quarters to have a dual-GPU variant, which would suggest a single GF100 draws 200W at most; that would require 2 x 6-pin power connectors. However, the prototype is equipped with an 8-pin + 6-pin combination, which is rated for 300W. This indicates the TDP is likely between 225W and 300W - far too hot to consider a dual-GPU configuration. Of course, the final stepping could (and probably would) significantly drop power consumption, but at this stage there is no doubt GF100 is running hotter than expected.
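As a back-of-the-envelope check of that connector reasoning (based on the standard PCI Express power limits: 75W from the slot, 75W per 6-pin connector, 150W per 8-pin connector), the maximum board power for each configuration works out like this - a minimal sketch, not anything from Nvidia:

```python
# PCI Express board power limits (watts): the slot itself supplies up
# to 75W; each 6-pin auxiliary connector adds 75W, each 8-pin adds 150W.
SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def max_board_power(six_pin=0, eight_pin=0):
    """Upper bound on board power for a given connector setup."""
    return SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

print(max_board_power(six_pin=2))               # 2 x 6-pin  -> 225W
print(max_board_power(six_pin=1, eight_pin=1))  # 8 + 6-pin  -> 300W
```

The 8-pin + 6-pin arrangement on the prototype thus allows up to 300W, which is why a TDP in the 225-300W band (and no dual-GPU card) is the natural reading.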
No other details, such as performance, are available. But the important thing is GF100 is real. We can expect more details on GF100 at Nvidia's Editor's Day in Las Vegas right after CES 2010.
40nm process getting 60% to 80% yields
Claims that the current TSMC 40nm process being used for the AMD/ATI Cypress family is achieving yields in the neighborhood of 40% don't seem to have a shred of truth. Our sources close to TSMC claim that the 40nm process used for the Cypress family is routinely achieving yields in the 60% to 80% range, which is corroborated by the fact that ATI is keeping product in the pipeline and you can actually buy boards at retail.
Chip yields seem to be the topic of discussion lately, as the much larger Nvidia Fermi chip is struggling with yield issues; sources close to the fabs suggest that actual yields are as low as 20% and are not improving quickly. With wafer starts costing about $5K, the numbers suggest an astounding estimated $200 per chip, which pegs the card with a sticker price of about $600.
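For what it's worth, the arithmetic behind that $200-per-chip estimate can be sketched as follows. The article does not state a gross die count per wafer, so the ~125 candidate dies used here is purely an illustrative assumption chosen to be consistent with the quoted figures:

```python
# Rough per-chip cost from wafer cost and yield.
# gross_dies (~125 candidate dies per wafer) is an assumed,
# illustrative figure - the article only quotes wafer cost and yield.
wafer_cost = 5000   # dollars per wafer start, as quoted
gross_dies = 125    # assumption: candidate dies per wafer
yield_rate = 0.20   # ~20% yield, as quoted

good_dies = gross_dies * yield_rate      # 25 working chips per wafer
cost_per_chip = wafer_cost / good_dies   # dollars per good chip
print(round(cost_per_chip))              # 200
```

Under those assumptions each wafer delivers only about 25 sellable chips, which is how a $5K wafer start turns into roughly $200 of silicon per card.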
Those in the know are claiming that Fermi, despite the yield and thermal issues, is only about 20% faster than Cypress, while Hemlock smokes it. The combination of low yields, high thermals, and marginally better performance than Cypress could be conspiring to put Nvidia in the position of having to release the card but sell it at a loss until it is able to address the issues in the next spin, according to sources. Because of this situation, a mole we know suggests that Nvidia may limit sales of Fermi to consumers and instead use the chips for Tesla and Quadro products, where prices and margins are much better.
All of the talk of yields seems, in some ways, to be nothing more than a smoke screen to deflect attention from the current situation and stall sales by promising something that most consumers will likely be unable to buy. The moles claim you will not need a pair of 3D glasses to watch this shake out in the next few weeks.
At this year’s Consumer Electronics Show, NVIDIA had several things going on. In a public press conference they announced 3D Vision Surround and Tegra 2, while on the showfloor they had products o’plenty, including a GF100 setup showcasing 3D Vision Surround.
But if you’re here, then what you’re most interested in is what wasn’t talked about in public, and that was GF100. With the Fermi-based GF100 GPU finally in full production, NVIDIA was ready to talk to the press about the rest of GF100, and at the tail-end of CES we got our first look at GF100’s gaming abilities, along with a hands-on look at some unknown GF100 products in action. The message NVIDIA was trying to send: GF100 is going to be here soon, and it’s going to be fast.
Hmmmm, after reading that story I think nVidia may win the tessellation-for-less-performance-hit war against ATI, but as for raytracing in games, until they can make a chip that does it in real time I can't see it making a big difference. And it looks like you'll need 3 or more GF100s to get anything decent happening on screen - one card was giving them how many fps? Oh that's right, 0.6 fps, and that was probably at 99% usage.
ATI knows that it will take serious muscle to go against Fermi GF100 based cards, but it still has a few more cards to play. Whenever Nvidia realistically launches its new GF100 40nm based card, ATI should have its kicker chip around the corner.
They did this with the 4890, the first gigahertz card, and got quite a lot of attention, so you can easily expect a new 58x0 card with pretty much the same agenda. We don't know any specs at this time, but we know that the company is working on such a card.
Before this card comes out you can expect huge price cuts on Radeon HD 58x0 parts, but this will only happen when ATI's sales start to suffer due to the GF100 Fermi launch. Of course, this is yet to happen.
We've learned that the six-month delay of the Geforce version of GF100 Fermi chips and cards won't actually hinder the introduction of other, slower versions of the Fermi card.
The whole point of making the insanely big Fermi is to win the hearts of journalists and enthusiasts and to bring this architecture to performance, mainstream and finally entry-level markets.
Entry-level is where the money is, even though you need to sell millions of these cards to make a decent profit. Mainstream Fermi is a project that Nvidia is pushing side by side with high-end Fermi, and we've learned that mainstream Fermi should be on time.
You should roughly expect it one quarter after the first Fermi, and if all goes well June should be a good month to launch. Since Nvidia changed quite a lot of plans in the last few months, don’t be surprised if it launches a bit later.
The single-chip Fermi card, codenamed GF100, should launch in March. This should be the launch time frame, and a dual-chip version might follow roughly a month or more after the single-chip card officially launches.
This is roughly the plan as it will take some time to prepare this dual headed beast, especially since Fermi will be the hottest chip in Nvidia's history. Naturally, putting two such chips on the same card won't be a walk in the park.
The important thing is that a dual-chip card is possible and Nvidia is working on it. The big issue, of course, is that it will take roughly three months from today before you will be able to buy one, whereas ATI's dual-GPU card has been selling for more than a month - closer to two now.
512 CUDA processors
16 geometry units
384-bit GDDR5 memory bus
48 ROP engines
4 raster units
64 texture units
DirectX 11 support
Each GPU is made up of four separate Graphics Processing Clusters (GPCs). Breaking this down further, we can look at each streaming multiprocessor (SM) within the GPCs separately. Each SM comprises 32 CUDA cores, 16KB or 48KB of shared memory, 48KB or 16KB of L1 cache (the two are carved from a shared 64KB pool), 4 texture units and a PolyMorph engine.
However, ZDNet's Adrian Kingsley-Hughes emphasized that there were still "more questions than answers" about Nvidia's next-generation Fermi architecture.