Just wondering, what would AMD need to do to at least MATCH Nvidia's offerings in AI/DLSS/ray tracing tech?

  • Smalmthegreat@alien.top · 1 year ago

    Similar questions were probably asked about AMD vs Intel when it came to CPUs not long ago. Still a big market share gap there, though.

    It would take Nvidia really stumbling, or something happening to Jensen. With the way Nvidia is run, someone like Jensen is a necessity, the kingpin in the machine (he has 40+ direct reports).

    Intel stumbled really hard, which gave AMD a golden opportunity and let them build up a bit of a war chest they are only now able to reap the benefits of.

    • Put_It_All_On_Blck@alien.top · 1 year ago

      You’re overlooking the context of that situation. The whole reason Intel hit a wall with CPU performance was that they were overzealous with the improvements planned for 10nm, and as we know, Intel couldn’t make it happen on time or as planned. Meanwhile TSMC and Samsung moved to EUV, which paid off big time. AMD soon moved to TSMC and rode on their coattails. Intel has since learned from that mistake, which is why they are open to using TSMC not only for additional capacity but to make sure they are never stuck on a node while everyone else isn’t.

      So Nvidia wouldn’t get stuck like Intel did, as Nvidia hops around to whichever fab they feel is best for them in terms of performance and pricing. Now, could Nvidia hit a wall with architectural designs? Sure, but so could AMD. And considering this is Nvidia’s core business, that they have a ton more money and engineers, and that they’re far more desirable to work for, it’s probably less likely Nvidia will have issues and more likely AMD will.

      • capn_hector@alien.top · 1 year ago

        Sure, but so could AMD. And considering this is Nvidia’s core business, that they have a ton more money and engineers, and that they’re far more desirable to work for, it’s probably less likely Nvidia will have issues and more likely AMD will.

        It is also worth mentioning that NVIDIA is probably #2 in chiplets, behind AMD. They weren’t too far behind AMD’s first attempts (GP100 was only about a year behind Fiji), and theirs actually was a commercially successful product. They also have the most successful multi-GPU interconnect, and it’s only with the recent Infinity Fabric Coherent PCIe Interconnect (not the same as normal Infinity Fabric) and Infinity Link (again, not the same as Infinity Fabric) that AMD has been able to address these sorts of products.
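        To make “multi-GPU interconnect” concrete: from the software side, links like NVLink (or plain PCIe peer-to-peer) show up as one GPU being able to directly access another GPU’s memory. A minimal CUDA sketch of probing that, just to illustrate the idea (my own toy example, not anything from these products’ docs):

        ```c
        #include <stdio.h>
        #include <cuda_runtime.h>

        int main(void) {
            int count = 0;
            if (cudaGetDeviceCount(&count) != cudaSuccess || count < 2) {
                printf("Need at least two GPUs to probe peer links.\n");
                return 0;
            }
            /* For every ordered pair of GPUs, ask the runtime whether device a
               can read/write device b's memory directly over the interconnect
               (NVLink if present, otherwise PCIe peer-to-peer). */
            for (int a = 0; a < count; ++a) {
                for (int b = 0; b < count; ++b) {
                    if (a == b) continue;
                    int ok = 0;
                    cudaDeviceCanAccessPeer(&ok, a, b);
                    printf("GPU %d -> GPU %d: peer access %s\n",
                           a, b, ok ? "yes" : "no");
                }
            }
            return 0;
        }
        ```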

        Just because NVIDIA didn’t rush to step on the rake with chiplet designs this generation doesn’t mean they’re behind. They are looking at it; they just thought it wasn’t ready yet, and really, they were right. It’s been a huge albatross around RDNA3’s neck; I genuinely think RDNA3 would have been much better if they had gone monolithic.

        On the other hand, obviously this did work out with Zen: Naples was garbage, Rome was fantastic, and people immediately started theorycrafting about how this meant RDNA4 was perfectly positioned to sweep NVIDIA, how the reticle limit was going to bite NVIDIA, etc. But the difference is, NVIDIA doesn’t have a track record of needing multiple gens to get a decent product: GP100 was much better than Fury X or Vega without needing a Naples- or RDNA3-level rake-in-the-face gen. When the time is right, when it makes sense as a product, you will see them move to chiplets on consumer cards, and they will probably put out a successful first generation. There’s no reason to think even the reticle-limit thing is some insurmountable cliff for them; once the reticle limit drops, they will start launching chiplet designs and it will probably be just fine.

        There are a lot of reasons for that: not only do they have more people, but they pay top dollar, and generally their people are the cream of the crop. AMD and Intel are both notorious for underpaying employees, and then the good ones get poached by NVIDIA and Apple after they’ve finished training.