Sure, but so could AMD. Considering this is NVIDIA's core business, though, and that they have far more money and engineers and are a far more desirable place to work, it's probably less likely that NVIDIA runs into issues and more likely that AMD does.
It is also worth mentioning that NVIDIA is probably #2 in chiplets behind AMD. They weren't too far behind AMD's first attempts (GP100 was only about a year behind Fiji), and theirs was actually a commercially successful product. They also have the most successful multi-GPU interconnect in NVLink, and it's only with the recent Infinity Fabric coherent PCIe interconnect (not the same thing as regular Infinity Fabric) and Infinity Fabric Link (again, not the same thing as Infinity Fabric) that AMD has been able to address these sorts of products.
Just because NVIDIA didn't rush to step on the rake with chiplet designs this generation doesn't mean they're behind. They are looking at it; they just thought it wasn't ready yet, and frankly they were right. Chiplets have been a huge albatross around RDNA3's neck, and I genuinely think RDNA3 would have been much better if AMD had gone monolithic.
On the other hand, this approach obviously did work out with Zen: Naples was garbage, Rome was fantastic, and people immediately started theorycrafting about how this meant RDNA4 was perfectly positioned to sweep NVIDIA, how the reticle limit is going to bite NVIDIA, etc. But the difference is that NVIDIA doesn't have a track record of needing multiple generations to get a decent product: GP100 was much better than Fury X or Vega without needing a Naples- or RDNA3-level rake-in-the-face generation first. When the time is right and it makes sense as a product, you will see them move to chiplets on consumer cards, and they will probably put out a successful first generation. There's no reason to think even the reticle-limit issue is some insurmountable cliff for them; once the reticle limit drops, they will start launching chiplet designs and it will probably be just fine.
There are a lot of reasons for that: not only does NVIDIA have more people, they pay top dollar and generally hire the cream of the crop. AMD and Intel are both notorious for underpaying employees, and the good ones then get poached by NVIDIA and Apple once they've finished training.