NVIDIA's Fermi: Architected for Tesla, 3 Billion Transistors in 2010
Date: September 30th, 2009
Manufacturer: NVIDIA
Author: Anand Lal Shimpi
The graph below is one of transistor count, not die size. Inevitably, on the same manufacturing process, a significantly higher transistor count translates into a larger die size. But for the purposes of this article, all I need to show you is a representation of transistor count.
See that big circle on the right? That's Fermi. NVIDIA's next-generation architecture.
NVIDIA astonished us with GT200, which tipped the scales at 1.4 billion transistors. Fermi is more than twice that at 3 billion. And literally, that's what Fermi is - more than twice a GT200.
At a high level, the specs are simple. Fermi has 512 cores and a 384-bit GDDR5 memory interface. That's more than twice the processing power of GT200 but, just like RV870 (Cypress), not twice the memory bandwidth.
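For a rough sense of that imbalance, here's a quick back-of-envelope sketch. The GT200 figures are the GTX 280's published numbers; the Fermi GDDR5 data rate is purely an assumption on my part, since clock speeds haven't been finalized.

# GT200 (GTX 280) numbers are public; the Fermi GDDR5 data rate below is an
# assumption, since NVIDIA hasn't finalized clock speeds.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    # Theoretical bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte
    return bus_width_bits * data_rate_gbps / 8

gt200_cores, fermi_cores = 240, 512
gt200_bw = bandwidth_gb_s(512, 2.214)    # GTX 280: 512-bit GDDR3 at 1107MHz (2.214Gbps effective)
fermi_bw = bandwidth_gb_s(384, 4.0)      # 384-bit GDDR5 at an assumed 4.0Gbps

print(f"core ratio:      {fermi_cores / gt200_cores:.2f}x")    # ~2.13x
print(f"bandwidth ratio: {fermi_bw / gt200_bw:.2f}x")          # ~1.35x with the assumed data rate

More than twice the cores, but well short of twice the bandwidth at any plausible GDDR5 speed.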
The architecture goes much further than that, but NVIDIA believes that AMD has shown its cards (literally) and is very confident that Fermi will be faster. The questions are at what price and when.
The price is a valid concern. Fermi is a 40nm GPU just like RV870, but it has a 40% higher transistor count. Both are built at TSMC, so you can expect that Fermi will cost NVIDIA more to make than ATI's Radeon HD 5870.
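To put that cost concern in numbers, here's a toy model. The transistor counts and Cypress' roughly 334 mm² die size are published figures; the defect density and the assumption that Fermi's die area scales linearly with transistor count are mine, purely for illustration.

import math

# Published figures
fermi_transistors = 3.0e9
rv870_transistors = 2.15e9       # ATI Radeon HD 5870 (Cypress)
rv870_area_cm2 = 3.34            # Cypress is roughly 334 mm^2

# Assumption: on the same 40nm process, die area scales with transistor count
area_ratio = fermi_transistors / rv870_transistors     # ~1.40, the "40% higher" figure
fermi_area_cm2 = rv870_area_cm2 * area_ratio

# Toy yield model: constant defect density d, yield ~ exp(-d * area)
d = 0.5                          # defects per cm^2, illustrative only
yield_rv870 = math.exp(-d * rv870_area_cm2)
yield_fermi = math.exp(-d * fermi_area_cm2)

# Relative cost per good die (same wafer price assumed): area / yield
rel_cost = (fermi_area_cm2 / yield_fermi) / (rv870_area_cm2 / yield_rv870)
print(f"{area_ratio:.2f}x the transistors, ~{rel_cost:.1f}x the cost per good die in this toy model")

The exact numbers don't matter; the point is that a bigger die gets fewer candidates per wafer and worse yield, so cost per good chip grows faster than die size.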
The timing concern is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back, and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010.
I asked two people at NVIDIA why Fermi is late: NVIDIA's VP of Product Marketing, Ujesh Desai, and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "fucking hard".
Jonah elaborated, as I will attempt to do here today.