• 0 Posts
  • 20 Comments
Joined 11 months ago
Cake day: October 25th, 2023





  • Is it that easy to move a design from, let’s say, 4nm TSMC to Samsung? I always assumed that if something was designed for one, it would be trouble to move it to another. I mean, if the size of cache these days isn’t shrinking, and there is a 5% or so difference in logic density between the two 4nm nodes, wouldn’t that screw up a design if the logic shrinks but the cache stays the same? Or do they just upscale everything by 5%?
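    Back-of-envelope on that cache point: if logic is, say, 60% of the die and the target node packs ~5% fewer logic transistors per mm², while the SRAM doesn’t scale at all, the whole die only grows by about 3%. The 60/40 split, the 5% penalty, and the 100 mm² reference die below are made-up illustrative numbers, not real node data.

    ```python
    # Hypothetical port-to-another-foundry sketch: how much a die grows if
    # logic density drops ~5% on the target node while SRAM (cache) density
    # stays the same. All figures here are illustrative assumptions.

    logic_fraction = 0.60         # share of die area that is logic (assumed)
    sram_fraction = 0.40          # share of die area that is SRAM/cache (assumed)
    logic_density_penalty = 0.05  # target node fits ~5% fewer logic transistors per mm^2

    original_area = 100.0  # mm^2, arbitrary reference die
    new_area = original_area * (logic_fraction / (1 - logic_density_penalty) + sram_fraction)

    print(f"{original_area:.1f} mm^2 -> {new_area:.1f} mm^2 "
          f"({(new_area / original_area - 1) * 100:.1f}% larger)")
    ```

    So in this toy case nothing gets upscaled by a flat 5%; only the logic portion gets redrawn, and the cache macros stay roughly the same size.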



  • bubblesort33@alien.top to Intel@hardware.watch · Did anyone here go AMD?
    10 months ago

    I think a tuned 14700k with the e-cores disabled, so you can clock the ring bus way higher, plus tuned memory at 7200-7800 with tightened timings, might beat the 7800x3D even when you tune that. I’m just speculating, but I’d like to see a YouTuber try it with a dozen different games. I’ve heard disabling e-cores alone often does nothing, but I think the ring clock OC might compensate for a lot, from what I’ve seen so far (rough latency math after this comment).

    But that’s a lot of work for something that still will use a lot more power.
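    Rough sketch of the memory side of that claim: tightened timings at 7200-7800 cut first-word latency quite a bit versus a loose stock kit. The kit specs below are illustrative assumptions, not tested configurations, and first-word latency is only one piece of real-world memory performance.

    ```python
    # First-word latency (ns) = 2000 * CAS / data rate (MT/s).
    # Kit specs below are illustrative assumptions, not tested configurations.

    def cas_latency_ns(data_rate_mts: int, cas: int) -> float:
        """First-word latency in nanoseconds for a given data rate and CAS latency."""
        return 2000 * cas / data_rate_mts

    kits = [
        ("DDR5-5600 CL46 (loose stock kit, assumed)", 5600, 46),
        ("DDR5-7200 CL34 (tuned, assumed)", 7200, 34),
        ("DDR5-7800 CL36 (tuned, assumed)", 7800, 36),
    ]

    for name, rate, cas in kits:
        print(f"{name}: {cas_latency_ns(rate, cas):.2f} ns")
    ```

    Ring clock and secondary/tertiary timings matter too, which this doesn’t capture; it’s just the headline CAS math.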





  • I doubt they were surprised at all. Isn’t RDNA3 very similar to RDNA2? They could have fixed it there, but decided on minor improvements instead.

    Wasn’t RDNA2 designed with Sony and Microsoft having input on its features? I’m sure Sony and MS knew what was coming from Nvidia years in advance. I think Mark Cerny said developers even wanted a 16-core CPU originally, but they were talked out of it because of die-area restrictions. Nvidia-like RT hardware on those consoles probably would have taken as much area as an extra 8 CPU cores. It all just seems like cost optimization to me.


  • The A750 beats the RTX 3060 in Cyberpunk, Control, and Metro Exodus, all Nvidia-sponsored titles. In Metro it beats the 3060ti. The main reason is probably that these cards have RT hardware that was originally meant to compete with GPUs one tier above where they ended up: the A770 is currently a 3060ti competitor, with RT hardware originally meant to compete with a 3070ti.

    But I don’t think it matters what generation AMD, Nvidia, or anyone else is on. It’s not that AMD couldn’t have built hardware from day one that competed in RT; it’s that they viewed it as a waste of space, so they did the bare minimum with RDNA2 to be compatible. Spending 10% more die area is going to cut into your margins a lot unless you also increase the price of the GPU (rough cost math after this comment).

    It’s been a conscious decision for years, not a failed effort.
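    A rough sense of the “10% more die area cuts into margins” point, using the classic gross-dies-per-wafer approximation. The wafer price, die sizes, and 300 mm wafer here are made-up illustrative numbers, and yield is ignored.

    ```python
    import math

    # Gross dies per wafer (classic approximation, yield ignored).
    # Wafer cost and die areas below are illustrative assumptions.

    def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
        r = wafer_diameter_mm / 2
        return (math.pi * r ** 2) / die_area_mm2 - (
            math.pi * wafer_diameter_mm
        ) / math.sqrt(2 * die_area_mm2)

    wafer_cost = 10_000.0                   # USD per wafer, assumed
    base_area, bigger_area = 200.0, 220.0   # mm^2: baseline die vs. +10% for more RT hardware

    for area in (base_area, bigger_area):
        n = dies_per_wafer(area)
        print(f"{area:.0f} mm^2: ~{n:.0f} dies/wafer, ~${wafer_cost / n:.2f} per die")
    ```

    With those made-up numbers the bigger die costs roughly 10% more per chip before yield, which either comes out of margin or goes onto the GPU’s price.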





  • If you look at certain price points, Intel makes sense. People complain about power consumption, but in games something like the 13600k doesn’t pull that much; I’d have to check again, though. These days I kind of like tuning Intel CPUs for performance. If you disable the e-cores on a 14700k, use that extra headroom to push the clocks slightly higher, and then tune the memory and the ring bus, or whatever it’s called now, I wouldn’t be shocked if you could match a 7800x3D at not-too-bad power consumption.

    I’d like to see some actual tests, though.




  • You would, in some titles. But people underestimate how few cores some games still use, and how well a 4.8ghz 9th gen still holds up. I had an 8600k OC’d to 4.8ghz all-core with a 5ghz single-core boost, and when I upgraded to a Ryzen 7700x I really didn’t see huge gains in a lot of games. That was at 1080p with a 6600xt, which is likely similar to a 3080 at 4k. I wasn’t playing many games that utilize many threads. And a 5ghz 8th, 9th, or 10th gen Intel CPU with no hyper-threading is still equal to a Ryzen 5600 with its SMT disabled.

    People often way overestimate how CPU-demanding games really are. There are exceptions: Starfield, that Battlefield game from a few years ago that came out in a really broken state, or Star Wars Jedi: Survivor, which also launched broken. Lots of really bad, unoptimized cash grabs.

    Some games are broken in other ways but still well optimized. Cyberpunk at release played at 60fps on a now 12-year-old 2600k. I underclocked my 8600k to 1.8ghz to see what would happen, and it still ran at 40fps.

    Ray tracing is very CPU-heavy, though, and you’ll see large gains there, especially if switching to DDR5.