What about the 5500x3D? There were comment chains about it, but it's all gone quiet.
The allowed limit is 4,800, so the RTX 4090 is about 10% “too powerful.”
…
but Nvidia will likely build in some wiggle room, to make sure that overclocking, for example, doesn't become a problem. Assume a clock speed of 2.7 GHz and we get a maximum of 108 SMs.
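To sanity-check those numbers, a quick back-of-envelope using the 4090's published specs; the per-SM throughput constant and the "dense FP8 TOPS × 8 bits" reading of the TPP metric are my assumptions:

```python
# Rough TPP check for the 4090 and a hypothetical cut-down part.
# Assumptions: TPP = peak dense FP8 TOPS x 8 bits, and Ada does roughly
# 2048 FP8 ops per SM per clock (FMA counted as 2 ops).
TPP_LIMIT = 4800
OPS_PER_SM_PER_CLOCK = 2048  # FP8, dense, assumed

def tpp(sm_count: int, clock_ghz: float) -> float:
    tops = sm_count * OPS_PER_SM_PER_CLOCK * clock_ghz / 1000
    return tops * 8  # 8-bit operands

print(f"RTX 4090 (128 SM @ 2.52 GHz): TPP ~ {tpp(128, 2.52):.0f}")  # ~5285, ~10% over 4800
max_sms = int(TPP_LIMIT / tpp(1, 2.7))
print(f"max SMs at 2.7 GHz under {TPP_LIMIT}: {max_sms}")           # 108
```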
Like what is preventing them from just shipping 4090 cards downclocked to 2.4 GHz, and then letting China figure out how to flash a 2.7 GHz BIOS onto the cards? I guess Nvidia just doesn't want to get in trouble with the US government?
The codename of Nvidia’s post-Blackwell GPU architecture could be Vera Rubin
So not next gen, but “next-next-gen”.
Is it that easy to move a design from, let's say, 4nm TSMC to Samsung? I always assumed if something was designed for one, it would be trouble to move it to another. I mean, if the size of cache these days isn't shrinking, and there is a 5% difference or so in logic density between the two 4nm nodes, would that not screw up a design if the logic shrinks but the cache stays the same? Or do they just upscale everything by 5%?
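To put rough numbers on that (the die size and logic/cache split below are made up, just to show the effect):

```python
# Back-of-envelope: how much die area shifts if only the logic scales.
# Assumptions (hypothetical, not any real chip):
#   - a die that is 60% logic and 40% SRAM/cache/analog by area
#   - logic is 5% denser on node A than node B; SRAM density is identical
die_area_mm2 = 300          # hypothetical die size on the denser node
logic_fraction = 0.60       # hypothetical logic share of the die
logic_density_gap = 0.05    # ~5% logic density difference between the nodes

logic_area = die_area_mm2 * logic_fraction
other_area = die_area_mm2 * (1 - logic_fraction)

# Moving to the less dense node: logic grows ~5%, cache/analog stays put.
ported_area = logic_area * (1 + logic_density_gap) + other_area
growth_pct = (ported_area / die_area_mm2 - 1) * 100

print(f"original: {die_area_mm2} mm^2, ported: {ported_area:.0f} mm^2 (+{growth_pct:.1f}%)")
# -> original: 300 mm^2, ported: 309 mm^2 (+3.0%)
```

So the whole die doesn't grow by the full 5%, only the logic portion does, but the floorplan around the unchanged SRAM blocks would still need rework.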
Wait until CES 2024 and see what gets announced for release by Jan or Feb.
I think a tuned 14700k with the e-cores disabled so you can clock the ring bus way higher, as well as tuned memory at 7200-7800 with tightened timings, might beat the 7800x3D even when you tune that. I'm just speculating, but I'd like to see a YouTuber try it with a dozen different games. I heard disabling e-cores often does nothing, but I think the ring clock OC might compensate for a lot, from what I've seen so far.
But that’s a lot of work for something that still will use a lot more power.
I think the issues that causes are way overblown compared to GPU bottlenecks. It's a little worse, yeah. But people will still drop their CoD or RB6 Siege graphical settings to low so they can be CPU limited all day on their 4090.
Most people won't OC that CPU anyway, since it's already going to run at 95C on most AIO coolers. I have no idea how the VRMs on that board are. Could be fine. The K CPUs clock higher than the non-K ones, and there's still a reason to get them since they aren't power restricted. I don't know if undervolting or other features work on B760. If they do, I'd just stick with that.
Most CPUs are way overkill these days. When a CPU limits a GPU to 90% usage, everyone screams "bottleneck" as if it's going to blow up their PC, and rushes to upgrade with a $500 budget to get that extra 10 fps back.
I doubt they were surprised at all. Isn’t RDNA3 very similar to RDNA2? They could have fixed it there, and they decided on minor improvements instead.
Wasn't RDNA2 designed with Sony and Microsoft having input on its features? I'm sure Sony and MS knew what was coming from Nvidia years in advance. I think Mark Cerny said developers even wanted a 16-core CPU originally, but were talked out of it because of die area restrictions. The RT hardware area on those consoles probably would have equaled an extra 8 CPU cores in area if they had wanted Nvidia-like RT. It all just seems like cost optimization to me.
The A750 beats the RTX 3060 in Cyberpunk, Control, and Metro Exodus, all Nvidia-sponsored titles. In Metro it beats the 3060 Ti. The main reason is probably that these cards have RT hardware that was initially meant to compete with GPUs one tier above where they ended up. The A770 is currently a 3060 Ti competitor, with RT hardware originally meant to compete with a 3070 Ti.
But I don't think it matters what generation AMD or Nvidia, or any of them, are on. It's not that AMD couldn't have built hardware from day 1 that could compete in RT. It's that they viewed it as a waste of space, so they did the bare minimum with RDNA2 to be compatible. Spending 10% more on a die is going to cut into your margins a lot, unless you also increase the price of the GPU.
It’s been a conscious decision for years, not a failed effort.
That's a significant frequency drop at 300 to 400 MHz lower. Wonder if that will actually be where they fall, or if they boost higher than stated. The 5500x3D would make for some interesting budget builds.
They wouldn’t go through the whole engineering process again to change the 14th Gen. That’s too much cost.
It might be true that something about 13th and 14th gen is different from 12th, since that's a new die. The 13400 is based on a 12th gen die, however. And if the 14400 is still based on that 12th gen die and supports this, I'd say it's nothing but software.
If it was cache related I could kind of understand dropping 12th gen, since there is a reduction, but it doesn’t sound like it.
Are they optimizing games with the help of developers to more efficiently use the e-cores? I still don't get exactly how it works. Is it the game deciding to move its less cache-latency-sensitive, or less performance-sensitive, tasks to the e-cores? I can't help but think this is a feature we'll see pop up for 1 out of every 10 big releases, in Intel-sponsored titles. And that it takes a lot of work from either Intel's side, or maybe even the developer's side, to get it to work properly.
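For what it's worth, the crude version of "keep a game off the e-cores" is just CPU affinity, which anyone can set today. A minimal sketch (made-up core indices, and obviously not what Intel's per-title optimization actually does internally):

```python
# Minimal sketch of the baseline mechanism: pinning a process to specific
# cores so its threads never land on the e-cores. Core indices here are
# made up; on a real hybrid chip you'd look up which logical CPUs are P-cores.
import psutil

P_CORE_CPUS = list(range(0, 16))  # hypothetical: logical CPUs 0-15 = P-cores

def pin_to_p_cores(pid: int) -> None:
    """Restrict the given process (e.g. a game) to the P-core CPU set."""
    proc = psutil.Process(pid)
    proc.cpu_affinity(P_CORE_CPUS)
    print(f"{proc.name()} (pid {pid}) now limited to CPUs {P_CORE_CPUS}")

if __name__ == "__main__":
    pin_to_p_cores(psutil.Process().pid)  # demo: pin this script itself
```

Whatever they're doing per title is presumably finer-grained than that, per thread rather than per process, which would explain why it needs work from Intel or the developers for each game.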
If you look at certain price points, Intel makes sense. People complain about power consumption, but in games something like the 13600k doesn't pull that much. I'd have to check again, though. These days I kind of like tuning Intel CPUs more for performance. If you disabled the e-cores on a 14700k, used that extra headroom to bring the clocks slightly higher, and then tuned the memory and ring bus, or whatever it's called now, I wouldn't be shocked if you could match a 7800x3D at not too bad power consumption.
I’d like to see some actual tests, though.
If it has Hyperthreading then you can still boot up any game. If it doesn’t, it’s still good enough for web browsing, and other basic stuff.
Yeah, but I’d just get a large air cooler for reliability.
You would. In some titles. But people underestimate how few cores some games still use, and how well a 4.8GHz 9th gen still holds up. I had an 8600k OC'd to 4.8GHz all-core with a 5GHz single-core boost, and when I upgraded to a Ryzen 7700x I really didn't see huge gains in a lot of games. That was at 1080p with a 6600xt, likely similar to a 3080 at 4K. I wasn't playing many games that utilize many threads. And a 5GHz 8th, 9th, or 10th gen Intel CPU with no hyperthreading is still equal to a Ryzen 5600 with its SMT disabled.
People often way overestimate how CPU demanding games really are. There are exceptions: Starfield, or that Battlefield game from a few years ago that came out in a really broken state, or Star Wars Survivor, which also came out broken. Lots of really bad, unoptimized cash grabs.
Some games are broken in other ways, but still well optimized. Cyberpunk, when it released, played at 60fps on a now 12-year-old 2600k. I underclocked my 8600k to 1.8GHz to see what would happen, and it still ran at 40fps.
Ray tracing is very CPU heavy, and you'll see large gains there, especially if switching to DDR5.
A spiky frame rate is, I think, actually worse than a consistent low frame rate. I'd rather be stuck at 40fps constantly than go from 35 to 70 back and forth every few seconds.
Your mind adjusts to 40 and it becomes usable. Constant fluctuations are a mess.
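In frame-time terms (same numbers as above, just to show why the bouncing feels worse than the average fps suggests):

```python
# Frame times for the two scenarios: a locked 40fps vs. bouncing between
# 35 and 70fps. The average fps looks better in the second case, but the
# per-frame swing is what you actually feel.
steady_fps = 40
spiky_fps = [35, 70]  # alternating every few seconds

steady_ft = 1000 / steady_fps              # 25.0 ms, every frame
spiky_ft = [1000 / f for f in spiky_fps]   # 28.6 ms and 14.3 ms

print(f"steady: {steady_ft:.1f} ms per frame, swing 0 ms")
print(f"spiky : {spiky_ft[0]:.1f} ms <-> {spiky_ft[1]:.1f} ms, "
      f"swing {spiky_ft[0] - spiky_ft[1]:.1f} ms")
# steady: 25.0 ms per frame, swing 0 ms
# spiky : 28.6 ms <-> 14.3 ms, swing 14.3 ms
```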
Processors do wear out over time, but usually not this fast. It might be that the undervolt was just barely stable at one point, and maybe even unstable in some conditions you never tested. Now even the tiniest amount of wear has dropped it below the line.
Could also be the RAM being defective. But that's usually from the factory, not wear-in.