I don’t get why people care about igorslab anymore.
Basic error in this article:
Corsair cleverly implemented this in their RM series. Below, we see two native EPS headers that only fit EPS cables, while the four headers above are “unisex.”
Problem is that what Igor is referring to is the bottommost connectors in the photo, which are…marked PCIe/CPU. And they’re just upside down (so you can still access the latch). It’s a basic error that would be resolved by looking at the socket for 10 seconds…or, you know…reading.
So what will the 5000 series use?
Most likely 12VHPWR with recessed pins (aka 12V-2x6), plus stricter quality control from their partners. They aren’t going back to 8-pins.
Whole drama could have been avoided with only 2 words.
T-Connector…
What I don’t understand is why not skip all of the proprietary gimmicky bullshit and just use an already established connector standard capable of high current delivery.
I don’t know, something like XT60 for example. Or is that not PC Master race, not invented here enough?
Because XT60 has a current rating, but no voltage standardization.
EPS12V is better. It’s easily good up to 300W, two of them would work for any GPU, and it’s no bigger than an 8-pin PCIe.
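As a back-of-the-envelope check on that claim: an EPS12V 8-pin carries four 12V pins, a PCIe 8-pin only three. The per-pin current ratings below are assumptions (they vary with the terminal type used, e.g. standard vs. HCS), not official spec values.

```python
# Rough connector capacity check. The ~7 A/pin figure is an assumed
# terminal rating (HCS-class), not a value from the ATX/EPS specs.

def connector_capacity_watts(power_pins: int, amps_per_pin: float,
                             volts: float = 12.0) -> float:
    """Total deliverable power: number of 12 V pins x rated current per pin."""
    return power_pins * amps_per_pin * volts

# EPS12V 8-pin: four 12 V power pins
eps = connector_capacity_watts(4, 7.0)    # 336 W
# PCIe 8-pin: three 12 V power pins, same assumed terminal rating
pcie8 = connector_capacity_watts(3, 7.0)  # 252 W
print(f"EPS12V 8-pin: {eps:.0f} W, PCIe 8-pin: {pcie8:.0f} W")
```

Under those assumptions an EPS12V header does clear 300W, which is the commenter's point: two of them would cover any current GPU.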
Like any corporation that changes things up just for the sake of it, because people in these positions have to justify and keep their roles, it seemed like a good idea at the time. When failures happen, they adjust as they go and try to weasel out of paying for as many bad units as possible.
Seems like this connector was trying to fix a problem nobody asked them to fix. Go figure.
As someone who bought an ASUS TUF 4090 and a new PSU (using the ATX 12V/EPS12V (4+4 pin) cable from the PSU), not sure what I’m supposed to do lmao.
Make sure that all cables make correct contact. And pray, praying might work…
I mean, one could tell you to get a different model, but unfortunately you’ll still be at risk.
The ASUS TUF had the securing clip on top, vs. Gigabyte where it was on the bottom. Sagging would cause separation far more easily, to my thinking.
Their connector expects 100% perfect contact, which doesn’t happen in real life. There are no engineering margins. They also push too much current through too little copper, without the customary ~30% safety margin. That’s the difference between theory and practice: it just keeps melting and breaking cards. 12VHPWR just doesn’t work in real life. Look at how the custom cards all went back to 2x8-pins.
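The margin argument is easy to sanity-check with arithmetic. A minimal sketch, assuming six 12V pins sharing the load evenly and a ~9.5 A per-pin terminal rating (both are assumptions; real ratings depend on the specific terminal):

```python
# Per-pin loading for a 600 W draw through a 12VHPWR connector,
# assuming six 12 V pins and an assumed ~9.5 A per-pin rating.

def per_pin_amps(watts: float, volts: float = 12.0, pins: int = 6) -> float:
    """Current per power pin if the load is shared perfectly evenly."""
    return watts / volts / pins

load = per_pin_amps(600)              # ~8.33 A per pin
rating = 9.5                          # assumed terminal rating
headroom = (rating - load) / load     # ~0.14, i.e. ~14% margin
print(f"{load:.2f} A/pin, {headroom:.0%} headroom")
```

Even in the best case (perfectly even current sharing), ~14% headroom is well short of a 30% margin, and any pin with poor contact pushes the remaining pins past their rating — which is the failure mode the commenter is describing.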
Igor will post about this every month I guess
The final answer is Nvidia fucked up and should have used four 8-pins for the 4090, and there’s no other answer.
Reminder, the 3090 FE used two 8-pins and was a 350W card. If they were going to use 8-pins on the 4090, they could have just used 3 at 450W.
It wouldn’t hit 600W as the current 4090 does but it would still have more headroom than the 3090 did.
Just adding more and more 8-pin connectors is not a sustainable solution.
The real fix is probably to go back in time and have PCI-e 3.0 revamp the power delivery to allow more than 75W to come from the slot.
Neither is adding more and more power. The real fix is making them more efficient.
They should probably not make GPUs that use that much power. Clearly they are going overboard with the chip size, to the point where they’re charging 3x what a flagship card cost a decade ago. If they do something like that, you can’t fault the standard; they should come up with their own solution that works if they go overboard like that.
That would be on another level of pain. The 75W limit is safe and requires no rework. Having motherboards designed to deliver 200-300W through a slot would be very messy and very expensive.
And probably require a connector similar to EPS12V near the PCIe slot, which would be messy. My “genius” solution would be putting a power connector on the other side of the board, directly behind the PCI-e slot. Probably creates other issues on top of the expected ones.
The real fix is probably to go back in time and have PCI-e 3.0 revamp the power delivery to allow more than 75W to come from the slot.
And where would that pcie slot pull its power from? That’s right, another connector somewhere on the motherboard. You’d just be moving the issue elsewhere.
Motherboard has way more room for connectors and power circuits than a GPU.
Talk about burying the lede in the last segment. Asus isn’t using the official connector and every other vendor thinks their connector is risky and probably defective. That’s not on nvidia, other than allowing it (and this is the reason why they ride partners’ asses sometimes on approval/etc).
The rest of the stuff is Igor still grinding the same old axe (pretty sure Astron knows how to make a connector; if the connector were that delicate it would have been broken by GN’s physical testing; etc.), but if ASUS isn’t using the official connector and is disproportionately making up a huge number of the failures, that’s really an ASUS problem.
This fear mongering has been going on for a year now. Igor’s definitely been feeding off it for clicks.
It’s incredible that Nvidia has some kind of protective coating: no shit they pull ever sticks to them. It just slides off onto someone around Nvidia.
They can sell H100s for tens of thousands of dollars, and there’s a multi-month queue to buy 'em.
Asus: chooses not to use the official adapter and creates bad quality ones.
You: omg how could Nvidia do this!!
It’s incredible how AMD fanatics manage to blame Nvidia for everything.
Nvidia vendors shipping substandard products is in fact an nvidia problem, and pointing that out has nothing to do with AMD.
I don’t think it’s all nvidias fault if an aib decides to go against their recommendation. It’s okay to recognize that Asus has some responsibility too.
The commenter is an AMD fanatic which is why I pointed that out.
The only fanatic here is you.
What do you mean? It happened to FEs as well, just not as many. It was “user error,” but if a user error is common, it’s no longer a user error; it’s a product error.
It’s such an error that the connector is getting replaced. How can you look at that and think “yup, the users were in the wrong, but Nvidia is replacing the perfect connector even though it had zero problems”?
We were talking about ASUS, not FEs. FEs don’t ship with the original 12VHPWR anymore, as it’s already been revised. FEs have an incredibly low error rate from what I’ve seen. The overwhelming majority of issues I’ve seen have been from aftermarket adapters like CableMod.
Funny. I see the exact opposite most of the time. Nvidia has already revised the connector and this is largely a non-issue yet people have been relentlessly fear mongering the same narratives over the last year in an attempt to shit on Nvidia.
My summary of the whole 12VHPWR situation:

- Still no hard data on failure rates. We simply don’t know whether 12VHPWR is more error-prone. There is, however, less wiggle room for imperfections, because 600W is a lot of power.
- Igor writes as if NVIDIA were some kind of villain that infiltrated the PCI SIG. I have no idea how the PCI SIG works; I simply find it laughable to think that NVIDIA can dictate things to it. It is a consortium with experts from AMD, NVIDIA, Intel, and many, many more people and companies. To suggest otherwise is dishonest.
- There were badly produced ASTRON adapters.
- CableMod adapters are a stupid idea to begin with. Unlike a cable, a rigid PCB has no wiggle room and thus cannot compensate for imperfect production.
- The flipped ASUS design is a bad idea.
- Igor again argues that NVIDIA tried to blame users for user errors. I agree that a better design could have allowed for more wiggle room and user error. But then again, without any numbers on how many cards actually failed, it is hard to tell. What if this is not really an issue in real life? What if, aside from the ASTRON, ASUS, and CableMod adapters, millions of cards work flawlessly and only a handful of users are affected? Even by Igor’s own numbers (which he got from CableMod), if we leave out the twisted ASUS design, it is an extremely small issue.
- Igor again argues that NVIDIA blamed users because admitting fault would have hurt Q3 results. Assuming this is true, why did it not affect later quarterly results? Also, why should anyone at NVIDIA care? They would have to commit insider trading and sell their stock before the bad news for it to matter. That is not how it works.
- The PCI SIG acknowledged design flaws and came up with 12V-2×6. You could now scream “See?! The old 12VHPWR was bad! That is why they changed it!” but that does not have to be true. If a Boeing crashed because there was ice on the ground-speed sensor, it would be easy to say “Well, there needs to be redundancy and some kind of heating; this was a stupid design to begin with. How did the FAA approve it? Boeing probably pressured the FAA! And they hid the truth because quarterly earnings are up.” If Boeing then implements these changes, it is still not a gotcha moment.
I like the work Igor has done here. These old-school electrical engineers who later switched or grew into IT are very much needed. They offer excellent knowledge. And I think if we look at Igor’s reporting, we can feel that he had a lot of fun and put his heart into it. But it also affected his objectivity. He himself acknowledges that! That is not easy, hats off!
I have no idea how the PCI SIG works, I simply find it laughable to think that NVIDIA can dictate stuff to the PCI SIG
NVIDIA alone can’t. The PCI-SIG is a traditional tech industry standards grindfest; it doesn’t do anything without the consent of all of the members sitting on the spec’s committee.
That is what I thought, thank you.
Your understanding is wrong. Nvidia pushed a connector without adequate testing and the connector failed, killing cards.
Bad adapters are a thing, as they ALWAYS were. An adapter cannot fix a problematic specification.
Users are not to blame: the badly designed connector is to blame, not the users. There is a reason the AT power connector for PSUs was abandoned: it was a two-part connector that could easily be inserted wrong and burn your components.
P.S. I am also an electrical engineer who turned to IT work, so I understand Igor’s arguments and the mechanics behind the connector.
Nvidia pushed a connector
NVIDIA? Or the PCI SIG? Why did nobody intervene?
Users are not to blame
Agree, and I don’t blame the users. I just think it is a non-issue in real life (also according to Igor’s own failure-rate numbers).