The regulations specify the performance ceiling. Nvidia just ships products slower than that
The US is not going to side against Nvidia in favor of AMD. When AMD's performance gets to par, they will get the same cut as well
I remember Nvidia spent something like $11 billion in 2021 in preparation for the current-gen chips, so $14 billion on chips, including those on N3B, checks out
Igor will post about this every month I guess
It means the density of transistors per square millimeter relative to TSMC N3 ('3nm') and N2 ('2nm')
Where I come from, that's called overpriced. 'Not trying' is treating AMD with kid gloves while dumping the blame entirely on Nvidia
Compare AMD CPU prices vs Intel and tell me Intel is the big money guys
Of course not, memory bandwidth matters much more. We always say this, and now people finally see proof with the 4060 Ti
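The napkin math, if anyone wants it (specs quoted from memory, so double-check before repeating them):

```python
# Why the 4060 Ti became the poster child: bandwidth = bus width x per-pin data rate.
# Specs below are from memory; treat them as assumptions.

def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    """Memory bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_bits / 8 * gbps_per_pin

print(f"RTX 3060 Ti (256-bit, 14 Gbps): {bandwidth_gb_s(256, 14):.0f} GB/s")  # 448
print(f"RTX 4060 Ti (128-bit, 18 Gbps): {bandwidth_gb_s(128, 18):.0f} GB/s")  # 288
```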
I guess people don’t dig into white papers to learn about how and why the architectures perform as they do
'Ampere Next' referred to the datacenter lineup, which ended up being the biggest architectural change in datacenter GPUs since Volta vs GP100. And 'Ampere Next Next' referred to datacenter Blackwell, which is MCM, so again a big change
It's expected to be like Ampere. Ampere was a 17% increase in SMs (RTX 3090 Ti vs Titan RTX), but the SM itself was improved such that it yielded about a 33% improvement per SM in 'raster' and massive improvements in occupancy for RT workloads. So the 3090 Ti ended up 46% faster in 'raster' than the Titan RTX.
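Quick sanity check with a rough multiplicative model (the SM counts are public specs; the 46% is the figure quoted above, and clocks are ignored for simplicity):

```python
# Multiplicative scaling model: total speedup ~= SM-count ratio x per-SM gain.
# Titan RTX: 72 SMs; RTX 3090 Ti: 84 SMs.

sm_ratio = 84 / 72            # ~1.17, the "17% more SMs"
total_speedup = 1.46          # quoted ~46% faster overall in 'raster'

implied_per_sm = total_speedup / sm_ratio
print(f"implied per-SM gain: {implied_per_sm - 1:.0%}")  # ~25%

# The quoted 33% per-SM would predict 1.17 * 1.33 ~= 1.56 total; imperfect
# scaling in real games is the usual reason measured numbers land lower.
```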
The TPC and GPC of Blackwell are rumored to be overhauled, with a more hesitant rumor that the SM is also being improved.
Just like they refuse to support consumer cards officially
Nah, the throughput of tensor cores is far too high to compete against
RDNA3 is up to 60% faster than its RDNA2 equivalent in path tracing
Oh of course, silly me, we need Nvidia to sit still so AMD can blow past them and be the best CPU and GPU combo in the market. Great for customers!
Intel is actually closer than AMD. Apparently so is Apple
Still very accurate if you know what to look for.
For example, understanding why Ampere and Turing CUDA cores scale differently lets you predict how an Ampere GPU will scale versus a Turing GPU.
It's also why we knew Ada would scale linearly, except for the 4090, which was nerfed to be more efficient
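A minimal sketch of that Ampere-vs-Turing argument, using the datapath layout from the whitepapers; the perfectly-packed issue model and the game-like instruction mix are my simplifications:

```python
# Turing SM: 64 FP32 lanes + 64 separate INT32 lanes (co-issued).
# Ampere SM: 64 dedicated FP32 lanes + 64 shared lanes (FP32 or INT32).
# Nvidia's Turing whitepaper cites ~36 INT32 instructions per 100 FP32 in games.

fp_ops, int_ops = 100, 36

# Turing: INT32 runs on its own lanes, so the FP32 stream sets the pace.
turing_cycles = fp_ops / 64                  # 1.5625

# Ampere: 128 usable lanes, but INT32 work occupies slots on the shared path.
ampere_cycles = (fp_ops + int_ops) / 128     # 1.0625

print(f"per-SM, per-clock gain: {turing_cycles / ampere_cycles:.2f}x")  # ~1.47x
# i.e. "double the CUDA cores" lands well short of 2x once the INT mix is
# counted, and the gain shifts with each workload's FP/INT ratio.
```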
Can they? Yes. Will they? Nvidia hopes not, so they will do whatever they can to make sure it doesn't happen. Which is a good thing.
Interesting way to battle 14th gen. Well done to AMD
No, you are limited by:
Compute performance: you would need 10,000%+ more compute than was available per chip, and those PCIe accelerators didn't have the ability to compute the way they do now. You would have to rely on CPUs, which is worse
Lack of scalability when interconnecting chips to behave as one, which increases I/O requirements dramatically.
Lack of memory pooling (yes, you qualified it), memory bandwidth, and memory size (we are talking megabytes). Imagine waiting for a 1-billion-parameter model to load and store at each layer of a neural network using floppy disks; see the napkin math below.
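Rough numbers for that floppy-disk picture (the capacities and transfer rate are my assumptions: fp32 weights, 1.44 MB disks, ~60 KB/s sustained):

```python
# Napkin math: shuttling 1B parameters at floppy-disk speeds.

params = 1_000_000_000
total_bytes = params * 4                      # fp32 = 4 bytes/param, ~4 GB

disk_capacity = 1.44e6                        # bytes per floppy
throughput = 60e3                             # bytes per second, sustained

print(f"disks needed: {total_bytes / disk_capacity:,.0f}")             # ~2,778
print(f"one full pass: {total_bytes / throughput / 86400:,.0f} days")  # ~772
```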