Second-gen AMD HW ray tracing still has a worse performance impact than Intel's first-gen HW ray tracing. No need to talk about Nvidia here, as they are miles ahead. Either AMD isn't willing to spend more resources on RT, or they aren't able to improve performance.
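To be clear, by "performance impact" I mean the relative frame-rate drop from turning RT on, not absolute fps. A trivial sketch of that calculation, with both numbers invented purely for illustration:

```cpp
// How "RT performance impact" is computed here: the fraction of frame rate
// lost when ray tracing is enabled. Both fps values below are made up.
#include <cstdio>

int main() {
    double fps_raster = 90.0;  // hypothetical result with RT off
    double fps_rt     = 60.0;  // hypothetical result with RT on
    double impact = (fps_raster - fps_rt) / fps_raster;
    std::printf("RT performance impact: %.0f%%\n", impact * 100.0);  // prints 33%
    return 0;
}
```

So a card can post higher absolute RT fps and still show a worse impact if it loses a bigger share of its raster performance.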
The A750 beats the RTX 3060 in Cyberpunk, Control, and Metro Exodus, all Nvidia-sponsored titles. In Metro it even beats the 3060 Ti. The main reason is probably that Arc cards carry RT hardware that was originally meant to compete with GPUs one tier above where they ended up: the A770 now competes with the 3060 Ti, but its RT hardware was originally aimed at a 3070 Ti.
But I don’t think it matters which generation AMD, Nvidia, or anyone else is on. It’s not that AMD couldn’t have built hardware from day one that could compete in RT; it’s that they viewed it as a waste of space, so they did the bare minimum with RDNA2 to be compatible. Spending 10% more on die area cuts into your margins a lot unless you also raise the price of the GPU (rough numbers sketched below).
It’s been a conscious decision for years, not a failed effort.
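Here’s a rough back-of-the-envelope version of that margin argument. Every number is made up, and real silicon cost doesn’t scale perfectly linearly with area (yield gets worse on bigger dies), but the direction is the point:

```cpp
// Toy margin calculation: what ~10% more die area does to per-unit gross margin
// if the selling price stays fixed. All dollar figures are invented.
#include <cstdio>

int main() {
    double sell_price  = 400.0;  // hypothetical board price
    double die_cost    = 120.0;  // hypothetical silicon cost per GPU
    double other_cost  = 180.0;  // memory, PCB, cooler, etc. (also hypothetical)

    double base_margin = (sell_price - die_cost - other_cost) / sell_price;
    double rt_die_cost = die_cost * 1.10;  // ~10% more area spent on RT hardware
    double rt_margin   = (sell_price - rt_die_cost - other_cost) / sell_price;

    std::printf("margin without extra RT area: %.1f%%\n", base_margin * 100.0);  // 25.0%
    std::printf("margin with extra RT area:    %.1f%%\n", rt_margin * 100.0);    // 22.0%
    return 0;
}
```

Three points of gross margin per card is real money at volume, which is why "just add more RT silicon" isn’t free unless the price moves with it.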
AMD’s RT hardware is intrinsically tied to the texture unit, which was probably a good decision at the start, since Nvidia kinda caught them with their pants down and they needed something fast to implement (especially with consoles looming overhead; they wouldn’t want the entire generation to lack any form of RT).
Now, though, I think it’s giving them a lot of problems because it’s really not a scalable design. I hope they eventually implement a proper dedicated unit like Nvidia and Intel have.
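To make the texture-unit point concrete, here’s a rough C++ sketch of the difference. It’s not real ISA or driver code, and hw_box_test is a made-up stand-in for the intersection test the TMU actually performs. On RDNA2/3 the traversal loop itself runs as ordinary shader code and only the node test is offloaded, while dedicated RT units (Nvidia, Intel) take the whole loop off the shader’s hands:

```cpp
// Sketch of shader-driven BVH traversal, the RDNA2/3-style approach described
// above. hw_box_test() is a hypothetical placeholder for the ray/box (or
// ray/triangle) test that AMD accelerates through the texture unit.
#include <vector>

struct Ray  { float origin[3], dir[3]; };
struct Node { float bbox_min[3], bbox_max[3]; int left, right, prim; };  // prim >= 0 => leaf

// Placeholder for the fixed-function intersection test; always "hits" here.
bool hw_box_test(const Node&, const Ray&) { return true; }

// This loop is the part that, on AMD, runs as regular shader code: it burns
// vector registers, stack storage, and ALU cycles on the compute units.
int trace_shader_driven(const std::vector<Node>& bvh, const Ray& ray) {
    std::vector<int> stack{0};               // software traversal stack, root first
    int hit = -1;
    while (!stack.empty()) {
        const Node& n = bvh[stack.back()];
        stack.pop_back();
        if (!hw_box_test(n, ray)) continue;  // the only hardware-accelerated step
        if (n.prim >= 0) hit = n.prim;       // leaf: record a hit (nearest-hit logic omitted)
        else { stack.push_back(n.left); stack.push_back(n.right); }
    }
    return hit;
}

int main() {
    // Tiny 3-node BVH: an interior root with two leaves holding primitives 0 and 1.
    std::vector<Node> bvh = {
        {{0,0,0}, {2,2,2},  1,  2, -1},
        {{0,0,0}, {1,1,1}, -1, -1,  0},
        {{1,1,1}, {2,2,2}, -1, -1,  1},
    };
    Ray ray = {{0,0,-5}, {0,0,1}};
    return trace_shader_driven(bvh, ray);
}
```

With a dedicated traversal unit, the shader just issues one trace request and gets the hit back, so a loop like the one above stops competing with shading work for registers and occupancy; that’s the scalability gap being described.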
I am pretty sure they already have something in the pipeline; it’s just that it can take half a decade to go from low-level concept to customer sales…
I doubt they were surprised at all. Isn’t RDNA3 very similar to RDNA2? They could have fixed it there, and they decided on minor improvements instead.
Wasn’t RDNA2 designed with Sony and Microsoft having input on its features? I’m sure Sony and MS knew what was coming from Nvidia years in advance. I think Mark Cerny said developers even wanted a 16-core CPU originally and were talked out of it because of die-area restrictions. Nvidia-class RT hardware on those consoles would probably have cost as much area as an extra 8 CPU cores. It all just seems like cost optimization to me.
Microsoft told Digital Foundry that they locked the specs of the Xbox Series consoles in 2016, so even back then they knew the console would have an SSD, RT capabilities, etc.
They could already have the next console in mind.
RDNA3 is up to 60% faster than the equivalent RDNA2 card in path tracing.