Intel CEO laments Nvidia’s ‘extraordinarily lucky’ AI dominance, claims it coulda-woulda-shoulda have been Intel

Intel CEO Pat Gelsinger has taken a shot at his main rival in high performance computing, dismissing Nvidia’s success in providing GPUs for AI modelling as “extraordinarily lucky.”

  • Eager Eagle@lemmy.world · 1 year ago

    If anything it should’ve been AMD. Intel is barely keeping up with the CPU competition these days.

    • hansl@lemmy.world · 1 year ago

      Not really. ATI were always “G is for graphics” and built video game cards. They never really saw the potential for GPGPU (nor did they have the resources anyway), which is why NVIDIA had a huge first-mover advantage (CUDA launched in 2007, right around when AMD acquired ATI in 2006). By the time AMD bought them it was already very late.

      Then AMD wanted to build cards for people to buy while NVIDIA was more than happy selling overpriced cards to crypto miners.

      OpenCL was an ambitious project that was too big and too open for what the Khronos group could realistically deliver. Vulkan was too late.

      Intel could have done it, but IIRC the CEO at the time (can’t remember the name) didn’t want to diversify their products after Itanium was a failure. They just doubled down on CPUs.
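
      Going back to the CUDA point above: part of why that first-mover advantage was so sticky is that CUDA made GPU compute look like ordinary C. A minimal sketch of the model (a generic saxpy kernel for illustration, not code from anyone in this thread; unified memory is used purely to keep it short):

      ```
      // saxpy: each GPU thread computes one element of y = a*x + y.
      #include <cstdio>
      #include <cuda_runtime.h>

      __global__ void saxpy(int n, float a, const float *x, float *y) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
          if (i < n) y[i] = a * x[i] + y[i];
      }

      int main() {
          const int n = 1 << 20;
          float *x, *y;
          cudaMallocManaged(&x, n * sizeof(float));  // unified memory keeps the sketch short
          cudaMallocManaged(&y, n * sizeof(float));
          for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

          saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // grid of 256-thread blocks
          cudaDeviceSynchronize();

          printf("y[0] = %f\n", y[0]);  // expect 4.0
          cudaFree(x);
          cudaFree(y);
          return 0;
      }
      ```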

      • Eager Eagle@lemmy.world · 1 year ago (edited)

        “They never really saw the potential (nor did they have the resources anyway) for GPGPU”

        Maybe that was true of ATI, a brand that was retired in 2010.

        AMD launched ROCm in 2016, after the first AI boom of 2012 but before GANs and transformers exploded. In recent years they’ve been better positioned than Intel ever was.
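
        Worth noting how directly ROCm chased CUDA compatibility: its HIP dialect is close to a find-and-replace of the CUDA model, so a kernel like the saxpy sketch above ports almost mechanically. A sketch under that assumption (again a generic illustration, not code from this thread):

        ```
        // The same saxpy under ROCm/HIP: hip* calls mirror cuda* almost one-to-one.
        #include <cstdio>
        #include <hip/hip_runtime.h>

        __global__ void saxpy(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;  // identical indexing model
            if (i < n) y[i] = a * x[i] + y[i];
        }

        int main() {
            const int n = 1 << 20;
            float *x, *y;
            hipMallocManaged((void **)&x, n * sizeof(float));
            hipMallocManaged((void **)&y, n * sizeof(float));
            for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

            saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // hipcc accepts the CUDA launch syntax
            hipDeviceSynchronize();

            printf("y[0] = %f\n", y[0]);  // expect 4.0
            hipFree(x);
            hipFree(y);
            return 0;
        }
        ```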

      • TheGrandNagus@lemmy.world · 1 year ago

        Disagree. GCN cards were incredibly compute-focused.

        Shit, AMD even co-developed HBM (with SK Hynix) because they saw the value of ridiculously high-bandwidth, dense, energy-efficient memory in data centre applications (the numbers below give a sense of scale). HBM is still used today in the enterprise market.

        AMD’s problem was that they had no money at the time and couldn’t build out their software ecosystem the way Nvidia could. They had to bank on getting the ball rolling and open-sourcing their efforts in the hope that others would contribute, which didn’t happen to the extent they’d have liked, especially when Nvidia, with its mountains of cash, could just pump out CUDA and flood universities with free GPUs to get them hooked on the Nvidia software stack.
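
        The bandwidth point is easy to sanity-check against published first-generation HBM figures (a 1024-bit interface per stack at an effective 1 Gbps per pin, and four stacks on the Radeon R9 Fury X). A back-of-the-envelope calculation:

        ```
        #include <cstdio>

        // Rough HBM1 bandwidth from the Radeon R9 Fury X's published specs.
        int main() {
            const double bus_bits_per_stack = 1024.0;  // HBM's very wide interface
            const double gbps_per_pin       = 1.0;     // effective data rate, first-gen HBM
            const int    stacks             = 4;       // Fury X carried four stacks on-package

            double gb_per_s = bus_bits_per_stack * gbps_per_pin / 8.0 * stacks;
            printf("%.0f GB/s\n", gb_per_s);  // 512 GB/s; contemporary GDDR5 flagships sat around 336 GB/s
            return 0;
        }
        ```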

      • bruhduh@lemmy.world · 1 year ago

        Brother, the ATI Radeon HD 5000 series was basically a VLIW-architecture single-board computer in a GPU package, and that design lineage goes back to before AMD bought ATI.

    • EvergreenGuru@lemmy.world · 1 year ago

      AMD dropped the ball when it came to software, and has since split its GPU architectures (RDNA for gaming, CDNA for compute) so that only enterprise cards target data science. Nvidia got in early and made CUDA the default across its entire product lineup, so that consumer cards could be used as entry-level cards by hobbyists. While it would’ve been nice to see more competition, the only company taking this space seriously has been Nvidia.
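
      That entry-level path is concrete: a consumer GeForce enumerates under the same CUDA runtime API as a data-centre part, so hobbyist code moves up the lineup unchanged. A minimal device query as a sketch (standard CUDA runtime calls, not code from anyone in this thread):

      ```
      #include <cstdio>
      #include <cuda_runtime.h>

      // Lists every CUDA-capable GPU in the machine; a GeForce shows up
      // under the same runtime API that the data-centre parts use.
      int main() {
          int count = 0;
          cudaGetDeviceCount(&count);
          for (int d = 0; d < count; ++d) {
              cudaDeviceProp prop;
              cudaGetDeviceProperties(&prop, d);
              printf("%d: %s, compute capability %d.%d, %zu MiB\n",
                     d, prop.name, prop.major, prop.minor,
                     prop.totalGlobalMem >> 20);
          }
          return 0;
      }
      ```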