Nvidia declares Moore's law dead

Discussion in 'The Water Cooler' started by kemical, May 3, 2010.

  1. kemical

    kemical Windows Forum Admin
    Staff Member Premium Supporter Microsoft MVP

    Joined:
    Aug 28, 2007
    Messages:
    31,816
    Likes Received:
    1,563
    Nvidia declares Moore's law dead
    Written by Nedim Hadzic

    Multicores are bull, parallel is the bomb


    Although one would think that Nvidia is keeping busy these days trying to catch up with its red competitor, it seems some within the company have enough spare time on their hands to try to talk their way out of this situation. This time around, however, Nvidia took a stab at both Intel and AMD.

    Moore’s law, which has held up nicely since it was put into practice 45 years ago, says that the number of transistors on an integrated circuit doubles roughly every two years. Nvidia’s VP, Bill Dally, on the other hand, says that Moore’s Law is done and dusted and that it’s time for new ways of tackling today’s computational demands.
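    As a rough editor's illustration (a back-of-envelope Python sketch, not anything from the article), here is what "doubling every two years" compounds to over the 45 years mentioned above:

```python
# Back-of-envelope sketch of Moore's law as "transistor count doubles
# every 2 years". The 1971 Intel 4004's ~2,300 transistors is used as
# an assumed starting point for illustration.

def moores_law(start_count, years, doubling_period=2.0):
    """Projected transistor count after `years` of steady doubling."""
    return start_count * 2 ** (years / doubling_period)

# 45 years at one doubling per 2 years is 22.5 doublings,
# i.e. a growth factor of roughly 5.9 million.
factor = 2 ** (45 / 2)
print(round(factor))                 # ~5.9 million
print(round(moores_law(2300, 45)))  # projected count from a 2,300-transistor start
```

    The point of the exponent is why even a modest slowdown in the doubling period changes projections enormously.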

    After saying that the process of adding cores to existing architectures is like building “an airplane by putting wings on a train”, Dally conveniently offers the “best” and “only” solution, which just so happens to be his company’s much touted parallel computing.

    Interestingly enough, Dally puts a lot of emphasis on conserving energy. He argues that adding cores increases power consumption without a matching gain in performance, so on paper, we’d probably agree that Nvidia should be our savior.

    Unfortunately, this is coming from the same company that launched Fermi, so we should take the energy consumption claims with a grain of salt. Perhaps a user’s comment sums it up best: ”Considering the joke Fermi has become with regard to die size, heat, and relative lack of scale of performance he is best advised to take Moore's law more seriously and produce a decent product.”

    More at Fudzilla.
     
  2. cybercore

    cybercore New Member

    Joined:
    Jul 7, 2009
    Messages:
    15,823
    Likes Received:
    321
    Good point, more progress is needed. :)
     
  3. whoosh

    whoosh Cooler King
    Staff Member Premium Supporter

    Joined:
    Apr 15, 2009
    Messages:
    25,694
    Likes Received:
    379
    Chief Scientist of Nvidia Condemns Moore’s Law, Microprocessors.

    William Dally, chief scientist and senior vice president of research at Nvidia, said in a column that Moore’s Law was no longer enabling scaling of computing performance on microprocessors. In addition, Mr. Dally indicated that central processing units (CPUs) in general could no longer fulfill the demand for high performance.
    “[Moore’s Law] predicted the number of transistors on an integrated circuit would double each year (later revised to doubling every 18 months). This prediction laid the groundwork for another prediction: that doubling the number of transistors would also double the performance of CPUs every 18 months. [Moore] also projected that the amount of energy consumed by each unit of computing would decrease as the number of transistors increased. This enabled computing performance to scale up while the electrical power consumed remained constant. This power scaling, in addition to transistor scaling, is needed to scale CPU performance. But in a development that's been largely overlooked, this power scaling has ended. And as a result, the CPU scaling predicted by Moore's Law is now dead. CPU performance no longer doubles every 18 months,” said Bill Dally in a column published at Forbes.
    Perhaps performance of CPUs no longer doubles every year and a half but, firstly, those chips are universal and very flexible and, secondly, they can be manufactured in large volumes. Graphics chips, which from time to time outpace Moore’s Law, quite often cannot be manufactured in large volumes because of poor yields. Moreover, although GPUs can provide more horsepower than CPUs, they are not as universal and flexible.
    Even though developers of central processing units historically concentrated on increasing the clock speeds of their chips, five years ago Advanced Micro Devices and Intel Corp. shifted to creating more parallel multi-core microprocessors that run at moderate clock speeds. However, the vice president of Nvidia also claims that multi-core x86 CPUs will ultimately not solve the problem of the lack of necessary computing performance.
    “Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work. This approach is analogous to trying to build an airplane by putting wings on a train. Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance,” said Mr. Dally.
    It is rather logical that Nvidia calls central processing units obsolete, since it neither produces nor develops them. The big question is whether AMD and Intel will give up and let Nvidia actually capture part of the high-performance computing market, where multi-core CPUs rule today.
    “Parallel computing is the only way to maintain the growth in computing performance that has transformed industries, economies, and human welfare throughout the world. The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers - not on multi-core CPUs. Let's enable the future of computing to fly – not rumble along on trains with wings,” concluded the chief scientist of Nvidia.

     
  4. cybercore

    cybercore New Member

    Joined:
    Jul 7, 2009
    Messages:
    15,823
    Likes Received:
    321

    Yup, this is what DirectCompute 11 is all about.
     
  5. kemical

    kemical Windows Forum Admin
    Staff Member Premium Supporter Microsoft MVP

    Joined:
    Aug 28, 2007
    Messages:
    31,816
    Likes Received:
    1,563
    Moore's law is hardly dead though... He's just saying all that because his company builds its graphics chips that way. (Plus the fact that they don't have an x86 licence and want one badly)..
     
  6. cybercore

    cybercore New Member

    Joined:
    Jul 7, 2009
    Messages:
    15,823
    Likes Received:
    321
    Good point as well. Of course Moore's law isn't dead. I liked the comparison of "putting wings on a train", because just adding more cores and increasing frequency doesn't seem effective enough (for example, a Pentium 4 at 3 GHz vs an AMD64 at 2.2 GHz, etc.).
     
  7. Digerati

    Digerati Fantastic Member
    Microsoft MVP

    Joined:
    Oct 25, 2009
    Messages:
    2,069
    Likes Received:
    159
    I disagree with this comment.
    In fact, I disagree with the whole concept of this new propaganda campaign by nVidia. While adding cores may not offer the same performance gain as adding a second processor, to suggest additional cores must "match" (whatever that is supposed to mean - we can only surmise) performance gains to power consumption gains is ludicrous. For one, it's the same old story - the software must support it - and software traditionally falls behind HW capabilities. 64-bit, for example, has been around for years, but only now are software developers taking it seriously - with the exception of MS, which has supported 64-bit for years too. BUT - and most importantly to consumers - the performance gains with multi-cores are not only matched, but exceeded in terms of cost! And everything boils down to bang for the buck.

    Oh, and Moore's Law never said anything about performance - it was always about transistor count (density) doubling. Others (probably the uninformed media) incorrectly suggested Moore's Law referred to performance gains matching density gains.

    This is just marketing fluff - parallel computing has always been the way to go if you want the best performance - but to suggest you will accomplish "more work" per dollar invested is bogus. Parallel computing requires a much greater investment for dual processor motherboards. Two CPUs cost a lot more than one. Two CPUs require a lot more power than one - so how is that "conserving energy?"

    I sure could not have written this post any faster with 2 CPUs versus the single quad-core processor I have now.

    Bottom line, IMO, is GPUs should not be compared to CPUs. While both are processors, GPUs are specialized, while CPUs are general-purpose.
     
  8. whoosh

    whoosh Cooler King
    Staff Member Premium Supporter

    Joined:
    Apr 15, 2009
    Messages:
    25,694
    Likes Received:
    379
    Good points Digerati :)
     
  9. cybercore

    cybercore New Member

    Joined:
    Jul 7, 2009
    Messages:
    15,823
    Likes Received:
    321
    I believe that more cores and higher clocks are definitely a plus, but they're not automatically very effective.


    E.g. the GTX 295's ~440 CUDA cores at higher clock speeds are not as effective as the ATI 5870's 320 (×5 SIMD) cores. Also, compare the GTX 460/470 vs the ATI 5870.
     
  10. Digerati

    Digerati Fantastic Member
    Microsoft MVP

    Joined:
    Oct 25, 2009
    Messages:
    2,069
    Likes Received:
    159
    Right! Well, the OS, in most cases, can complete its "housekeeping" tasks more quickly, which typically results in better overall performance, but other programs, unless coded to take advantage of the additional cores, will not see any benefit.
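    To put a number on that point, here is a minimal Amdahl's-law sketch in Python (an editor's illustration, not something posted in the thread): the serial fraction of a program caps the benefit of extra cores, no matter how many you add.

```python
# Amdahl's law: with N cores and a fraction s of the work that must run
# serially, the best possible speedup is 1 / (s + (1 - s) / N).

def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup for a given serial fraction and core count."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# A program that is 50% serial gains little from piling on cores:
print(amdahl_speedup(0.5, 4))    # 1.6
print(amdahl_speedup(0.5, 100))  # ~1.98, never reaches 2x
# A 95%-parallel program makes far better use of the same 4 cores:
print(amdahl_speedup(0.05, 4))   # ~3.48
```

    This is the "coded to take advantage of the cores" point in miniature: shrinking the serial fraction matters more than raw core count.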
     
  11. kemical

    kemical Windows Forum Admin
    Staff Member Premium Supporter Microsoft MVP

    Joined:
    Aug 28, 2007
    Messages:
    31,816
    Likes Received:
    1,563
    This is kinda how I see it. As I mentioned above, Nvidia hates the fact that they don't have an x86 licence, so they've been making statements like these for the last couple of years, basically mixing fact with fiction..
     
  12. cybercore

    cybercore New Member

    Joined:
    Jul 7, 2009
    Messages:
    15,823
    Likes Received:
    321
    :) Agreed.
     
