12VHPWR / 12V-2x6 connectors continue to melt. New insight into the causes is coming to light
No, the melting power connector issue on GeForce cards is not solved, despite what Nvidia claimed ahead of the new generation’s launch. Fresh GeForce RTX 4090 cases are happening literally as we speak, and the risk seems even greater with Blackwell GPUs. The 12VHPWR (or 12V-2×6, as it is now called) connector has not just one but multiple flaws, due to which it might never be truly reliable. Perhaps it would be best if it were discontinued in its current form.
As early as last Sunday, shortly after the release of the graphics cards – which, due to limited stock, were likely still in the hands of only a small number of users – the first cases of burned connectors on the GeForce RTX 5090 had already emerged, as we previously reported. Reliability issues have apparently not been resolved – perhaps not even reduced. Since then, more cases have surfaced, including the first instance of a burned connector on the lower-power GeForce RTX 5080 (paired with an Asus ROG Loki power supply). Despite its lower TDP (360 W), this model also appears to be at risk.
At the same time, additional reports of burned cables in the GeForce RTX 4090 have surfaced, confirming that the issue has not disappeared from the previous generation of cards, either. If you’re curious, here are links to recent reports from social media. Even hardware reviewers – including our Czech colleagues (and competitors) – have shared experiences of cable failures, as you can see:
- Burnt connectors with a GeForce RTX 5080 graphics card
- A fresh case of melting with an RTX 4090
- Connector on Nvidia’s original GeForce RTX 4090 adapter, which was never unplugged from the card, and the melting has only now been discovered (professional reviewer’s case)
- Connector melted with a GeForce RTX 4090, incident happened to a reviewer working for a Czech HW website
Excessive current on some of the contacts: we now seem to know what leads to a burnt connector
A significant finding has come from the YouTuber Der8auer, who received the damaged GeForce RTX 5090, along with the PSU and the cable involved in the first incident, from the user who reported it – a Berlin resident. Der8auer pointed out that on both the cable and the GPU, a single contact on each side was visibly burned, both linked to the same wire. Even the cable’s insulation and braiding were scorched, indicating that the wire itself overheated. This suggests an extreme current flow, powerful enough to burn the contacts within the connector and melt the surrounding plastic – an exceptional occurrence. Typically, these failures originate within the connectors, as the resistance of the wire itself is much lower and thus it doesn’t heat up as quickly. In this case, the cable may even have briefly caught fire, as the metal casing of the power supply showed signs of charring. It’s evident that the current, which should have been distributed across all six conductors and contacts, was instead concentrated into the damaged one.
According to Der8auer, this was not caused by user error during installation, as the affected individual was an experienced enthusiast (based on an interview with him) who knew what they were doing. Additionally, the issue wasn’t due to a low-quality cable. Der8auer emphasized that ModDIY, the cable manufacturer, has a strong reputation for quality products. In fact, this particular cable was above standard in some ways, even featuring gold-plated contacts instead of the usual nickel-plated ones. The widespread claims online that the issue stemmed from a poor-quality third-party cable, and the general assumption that “everyone knows third-party cables are a mistake,” appear to be unfounded and motivated primarily by a desire to dismiss the problem.
A near catastrophe with another cable
Der8auer demonstrated in his video that this failure is likely far from an isolated incident. Coincidentally, a similar issue occurred with another cable he used in his own GeForce RTX 5090 testing. This cable had been meticulously installed and fully inserted on both ends, yet the connectors still proved unreliable. If such failures can happen even under careful conditions, user error cannot really be used as an excuse. The issue was repeatable – Der8auer reinstalled the cable on video, and the failure reoccurred. The thermal imaging camera clearly showed a significant temperature increase on two of the conductors, which corresponded to elevated temperatures on their respective connector contacts. Testing with a clamp meter confirmed that an extremely high current – up to 23 A – was flowing through one wire, far exceeding the maximum safe limit of approximately 9.5 A per contact at a full 600W load. Meanwhile, other conductors carried as little as 3 A, indicating a severe imbalance.
Due to this current overload, thermal imaging showed that the power supply’s connector heated up to 120°C within just one minute of running Furmark, and reached 150°C within several minutes – an unquestionably unsafe condition. This test was conducted in an open test bench setup. Inside a closed PC case, where users would typically operate their GPUs without monitoring cable temperatures, such a setup would almost certainly lead to connector failure. If an average user installed this cable on their GeForce RTX 5090 and simply started gaming, they would remain unaware of the rising temperature – until catastrophic failure occurred.
Extremely concerning
Der8auer intends to further investigate the exact failure mechanism of the affected cable, with another video expected to provide more insights. However, based on his findings so far, he concludes that these results are, in his own words, extremely concerning.
The fact that a 12+4-pin cable for the GeForce RTX 5090 can fail so severely despite being seemingly installed correctly was also confirmed by ComputerBase’s testing. This is not typical behavior, meaning that most tests attempting to check for it will not encounter the problem. However, the website managed to reproduce the failure scenario. Instead of measuring temperatures, they recorded the current flowing through each individual wire and connector pin, uncovering an abnormal imbalance – some wires carried abnormally low current, while others were overloaded. Once again, the test was performed on a GeForce RTX 5090 running Furmark with a power consumption exceeding 500 W, using a native 12+4-pin cable from an Asus ROG Thor 1200W Platinum II power supply.
This particular test showed less critical imbalances, with measured currents of 5.3 A, 6.1 A, 7.4 A, 7.7 A, 10.0 A, and 10.6 A. The last two values exceed the safe 9.5 A limit per pin of the connector. Although the total power consumption was within the normal operating limits of the GPU and the cable, the uneven current distribution suggests a connector fault, where some pins exhibit increased resistance. If this imbalance worsened over time with this cable, or were more pronounced with another cable, overheating and melting could occur.
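For illustration, here is a minimal sketch (assuming the 12 V rail and the roughly 9.5 A per-contact rating discussed later in this article) that checks each of the measured per-wire currents quoted above against that rating and against an even split:

```python
# Per-wire currents (A) measured by ComputerBase on a native 12+4-pin cable, as quoted above.
# The 12 V rail and the ~9.5 A per-contact rating are assumptions based on the specification.
measured_currents = [5.3, 6.1, 7.4, 7.7, 10.0, 10.6]
PIN_LIMIT_A = 9.5
RAIL_VOLTAGE = 12.0

total_current = sum(measured_currents)                # ~47.1 A in total
cable_power = total_current * RAIL_VOLTAGE            # ~565 W delivered over the cable
even_share = total_current / len(measured_currents)   # ~7.85 A per wire if perfectly balanced

print(f"total {total_current:.1f} A (~{cable_power:.0f} W), even share would be {even_share:.2f} A per wire")
for i, amps in enumerate(measured_currents, start=1):
    status = "OVER the 9.5 A rating" if amps > PIN_LIMIT_A else "within rating"
    print(f"wire {i}: {amps:4.1f} A ({amps / even_share:.0%} of an even share) - {status}")
```

Even in this comparatively mild case, two wires sit above the rating while the rest run well below their share.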
And that’s why the connector seems to work fine… until it doesn’t
The fact that overheating is caused by uneven load distribution explains why most users never experience an issue, and why random temperature checks may not reveal anything unusual. Under normal conditions, when resistances are balanced, everything functions safely – but this is just the ideal, “fair weather” scenario. The ideal usage scenario, however, cannot be the only thing the connector’s safety is designed around. The failure happens when, instead of heat and current being distributed evenly across all pins (keeping the connector safe), only part of the connector becomes overloaded.
In this “bad weather” scenario, the connector is no longer safe, and Der8auer’s video proves that this can happen easily, without any significant user error. There are no visible warning signs during installation, meaning an affected user would have no way to prevent or detect the problem before it happens. This is why it’s completely misguided when people dismiss the issue on the grounds that, for example, they measured their own GPU’s cable temperature, found nothing unusual, and declared “works for me” (simply because they have not experienced the failure mode).
Why lower-power cards can also fail
This also explains why GPUs with lower power consumption, such as the GeForce RTX 5080, can still experience connector failures. With a TDP of just 360 W, one would expect plenty of headroom with a 600W-rated cable. However, since the key issue is not total connector overload but the overloading of individual contacts, even a 360W graphics card can cause melting. At 12V, 360 W translates to 30 A. If, due to unfortunate circumstances, most of this current ends up flowing through just a single wire, it could result in a scenario similar to what Der8auer recorded, leading to connector or cable failure.
For this reason, we strongly urge other GPU manufacturers to consider staying away from the 12+4-pin power standard – even if their current GPUs do not push power limits as high as 450 W (or more) for now.
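To illustrate the arithmetic above, here is a tiny sketch; the 9.5 A per-contact rating is the assumed figure discussed in the safety-margin section below, and the lopsided split is purely hypothetical:

```python
# Purely illustrative: even a 360 W card can overload a single contact
# if the current distribution collapses onto one wire.
RAIL_VOLTAGE = 12.0
PIN_LIMIT_A = 9.5                                 # assumed per-contact rating

board_power_w = 360.0                             # RTX 5080-class card
total_current = board_power_w / RAIL_VOLTAGE      # 30 A in total
even_per_pin = total_current / 6                  # 5 A per contact if balanced

# Hypothetical bad split: five degraded contacts pass only 2 A each,
# the remainder is forced through the last one.
hot_pin = total_current - 5 * 2.0                 # 20 A through a single contact

print(f"balanced: {even_per_pin:.1f} A per contact; "
      f"worst contact in the bad split: {hot_pin:.1f} A "
      f"({hot_pin / PIN_LIMIT_A:.1f}x the {PIN_LIMIT_A} A rating)")
```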
Resistance, current and temperature: what’s going on
These test results and current imbalances point to a crucial issue. When images of burned connectors began circulating, showing one completely melted pin (with signs that the plastic had briefly ignited), many assumed that this particular wire was simply the one that failed. It’s logical to think that if a connector degrades over time or has a manufacturing defect, one of the parallel wires might fail before the others.
Ironically, the burned wire may actually be the one that was working best. In a cable with multiple parallel conductors, all connected on both ends inside the GPU and the PSU, current naturally takes the path of least resistance. The total resistance of each path is determined partly by the resistance of the individual wire between the connectors, which is probably never meaningfully unbalanced in these cables and adapters, and partly by the contact resistance in the connectors at both ends – the resistance created where the pin meets the corresponding terminal. It is most likely in this contact resistance that the variability arises, which then translates into unequal resistance of the entire conductor paths across the whole system of the cable and the connector-receptacle pairs at both ends. The issue therefore likely always starts in the connectors; the wires between them are not themselves suspect at this point (meaning the problems will not be fixed by using thicker, lower-AWG wires).
How does this variability in currents occur? Under normal operation, the contacts of all conductors in both connectors should behave identically and have approximately the same resistance. This resistance determines how much current flows through each conductor, and if the resistance at the contacts is uniform, the current distribution will be even – which is exactly what we need. However, if a contact gets damaged, deformed, or loosened (for example, due to pressure or tension from a bent cable), its contact resistance increases. If the entire connector deforms, some of the contacts may experience increased resistance, while higher pressure in a corner of the connector could, in theory, instead reduce the resistance of the corner contact. This could quite possibly be the primary failure mode, since it is often the corner contact that has burned.
In such a scenario, contacts with higher resistance will carry less current, but since the GPU still demands the same total power (in this case, an enormous 500–600 W), the missing current will find its way across the remaining contacts – the lower the resistance, the greater the load they bear. The current flowing through individual conductors in the cable will be inversely proportional to their resistance, as dictated by Ohm’s law.
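A small illustration of that relationship: the six 12 V conductors are effectively parallel paths between the PSU and the GPU, so the current through each one is inversely proportional to its total path resistance (wire plus both contact resistances). The milliohm values below are invented purely to show the effect, not measurements:

```python
# Six parallel 12 V paths sharing a fixed total current (a simple current divider).
# Path resistances are invented for illustration; real values are a few milliohms
# and normally almost identical across the contacts.
TOTAL_CURRENT_A = 50.0                      # 600 W at 12 V

def split_current(path_resistances_mohm, total_current=TOTAL_CURRENT_A):
    """Current divider: I_k = I_total * (1/R_k) / sum(1/R_i)."""
    conductances = [1.0 / r for r in path_resistances_mohm]
    return [total_current * g / sum(conductances) for g in conductances]

healthy = [6.0] * 6                              # identical contacts -> even 8.3 A each
degraded = [20.0, 15.0, 12.0, 10.0, 6.0, 2.5]    # several contacts worsened, one still pristine

for label, resistances in (("healthy", healthy), ("degraded", degraded)):
    print(label, [f"{amps:.1f} A" for amps in split_current(resistances)])
```

With these made-up values, the one contact whose resistance stayed low ends up carrying around 23 A while the degraded ones drop to a few amps each – qualitatively the same pattern Der8auer measured.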

Why would there be uneven resistances in the connector?
Here, we will engage in some speculation about what exactly might cause these uneven contact resistances to form in the connector. If a wire in the cable burns out or overheats, it indicates that the current flowing through it exceeded safe levels. And if only a single wire overheats, it means that an excessive amount of current was passing through just that one conductor (and its contact). If the cause of melting were simply the failure of a single contact, without the entire connector failing, we would observe a drop in current on one conductor and an even redistribution across all the remaining ones – without them becoming imbalanced relative to each other. Such a failure would likely not be catastrophic, even though, with the (dangerously) low safety margins for individual contacts – which we will discuss further – all conductors might still end up operating outside their design specifications. The temperature would rise, but in most cases the cable and connectors would probably not be completely destroyed. The excessive current has to reach a certain magnitude for the resulting heat to destroy things.
The situation we see here, where only one wire and its contacts have overheated, suggests that what happened was not merely the failure of a single contact due to damage or bending of its terminal. Instead, multiple contacts must have failed, resulting in lower-than-expected current through a large part of the cable while an extreme amount of current is concentrated in a single contact and wire – because that contact has the lowest resistance of them all. In reality, there is likely a spread in the values, where the current distribution among the contacts might be something like 30%, 40%, 50%, 60%, 120%, 300%, for example, instead of an even 100% across the board.
In short: if the connector fails in this way, it is probably not due to a manufacturing defect or some individual mechanical damage that kills just one of the pins (or rather one of the terminals, since the terminals receiving the pins are usually the more fragile part and more susceptible to damage). What is happening in the melting cases is more likely some overall failure affecting the connector as a whole.
The connector is too small, too weak
Even for a layperson, it’s fairly easy to imagine two different mechanisms that could lead to this issue. First, the connector as a whole might lack structural integrity and be prone to deformation – perhaps because the attached cables exert pressure or pull force on it in different ways. This deformation alters the pressure distribution and slightly shifts the position of the terminals relative to the wires, leading to variations in resistance across different connections. (If the resistance changed uniformly across all contacts, it wouldn’t be much of a problem.)
The second potential cause is that the plastic housing itself is not necessarily weak, but the individual pins – especially the terminals (which are generally more susceptible to wear and tear) – are mechanically too delicate. As a result, even minor inconsistencies in these contacts can introduce variability in resistance. Since wear and tear affects each contact differently, even a properly and securely inserted connector may still exhibit uneven resistance (and cause serious issues). The 12+4-pin connectors are notably more fragile than usual – something even their official rating acknowledges, as they are only designed to withstand around 30 connection cycles.
With particularly bad luck, such uneven deformation might even occur on the very first insertion. Alternatively, it could develop over time due to prolonged operation, as tiny shifts occur within a fastened connector caused by vibrations or thermal expansion – even if the connector is never unplugged.
Another possible cause could be poor manufacturing quality, where variations in resistance stem from inaccuracies in the pins or terminals, or even slightly warped plastic parts present from the start. This would likely be the easiest issue to detect and address through screening, so we doubt it is the sole cause of the reported issues – otherwise, stricter supplier quality control would be enough to eliminate the problem.
Regardless of whether it is the first, second, or third mechanism at play, the root cause likely remains the same: the connector is simply too small and delicate for this application. If Nvidia had opted for a physically larger connector, the design would have had a sturdier plastic housing as well as more robust, durable pins and contacts. The conclusion remains unchanged – the connector is too small and fragile for its intended use.
Perhaps such an extensive explanation isn’t even necessary. Even at first glance, it feels weird when you compare the size of the included power adapters – featuring a bulky cable “octopus” – to the relatively tiny connector that is supposed to hold all these cables (plus those from the power supply) connected to the graphics card. Just by looking at it, it seems like something that could well have trouble remaining firmly in place without any flexing in the plastic housing. That impression is likely not mistaken.

If the 12+4-pin connector were larger, it would likely handle its load better. While resistance imbalances between contacts and uneven current distribution across wires would likely still occur in some circumstances, the deviations from the expected current would be smaller and therefore less dangerous. Additionally, a physically larger and heavier contact terminal would heat up less under the same thermal load.
Interestingly, this suggests that the 12+4-pin connector in its current form wouldn’t be completely flawless even if a 48V power supply standard were adopted (which some suggest as a possible solution to the connector troubles) or if two connectors were used instead of one (with a maximum load of 300W per connector instead of 600W). While failures would become less frequent due to the lower current levels, uneven current distribution would still occasionally occur. A 48V configuration might be somewhat more suitable if the power cables leading into the connector were thinner, lighter, and more flexible – reducing mechanical stress and deformation. However, rapid wear and tear could remain a problem.
It’s also worth emphasizing that Nvidia was under no obligation to choose such a tiny connector (or to insist on using a single connector instead of two). This was purely a design decision freely adopted by the company, likely made for aesthetic reasons rather than technical necessity. As a result, there’s no real justification for the technical inadequacy of the solution.
Almost non-existent safety margin
This brings us to the second major mistake Nvidia made – one that has been widely discussed for some time. The specification for both the 12VHPWR power cable and its updated version, 12V-2×6, includes an extremely small safety margin – arguably inadequate, as demonstrated by the failures. This issue was recently highlighted again on Reddit by someone claiming to be an Intel engineer who works on silicon process node development and previously contributed to PCB design at Gigabyte.
This engineer pointed out that the 12+4-pin connectors used in Nvidia’s design (commercially known as Molex Micro-Fit+) are not adequate for the current loads they are subjected to. This isn’t just an issue with Nvidia’s implementation but rather a fundamental flaw in the entire specification. The connectors have six 12V power wires and six ground wires. The total current load must be distributed across these six wires and their corresponding contacts at both ends – contacts that are the weak point in this equation. The 12+4-pin 12VHPWR and its updated 12V-2×6 standard allow a graphics card to consume up to 600W (for example, the GeForce RTX 4090 has a TDP of 450W, while the RTX 5090 reaches 575W, with overclocked versions exceeding that). At 12V, this translates to a total current of 50A. That means each of the six power wires and contacts must carry 8.33 A under full load – that is, when the currents flow evenly.
Now, here’s the issue – these connectors are manufactured with maximum current ratings of 8.5A, 9.0A, or 9.5A per contact. According to available information, most (if not all) implementations use the 9.5A-rated variant. This means that even under completely normal operating conditions – assuming a perfectly functional connector – each contact is already utilising 88% of its maximum rated load. (In reality, even a properly functioning connector will exhibit some variability in resistance and current, meaning some contacts will carry slightly more and others slightly less.) In other words, the connectors have virtually no safety margin. The safety factor – the amount of extra capacity available to handle unexpected problems – is only 1.14×. That means the system can tolerate an abnormal current increase of just 14% before exceeding limits that are specified as safe for it. However, Der8auer’s testing under seemingly normal conditions measured 23 A through a single wire – roughly 2.42× the 9.5 A per-contact rating, and even further above the expected 8.33 A – highlighting just how inadequate this 1.14× safety margin is.
The low safety margin in the standard has sparked heated debates online before, with some arguing that a higher safety margin is unnecessary and would only increase costs. However, many of these arguments seem motivated by little else than a desire to downplay an issue that is sadly real. Experts with technical backgrounds generally seem to agree that the safety margin is insufficient and should be higher. It was designed only for “fair weather” scenarios and fails to account for real-world contingencies – and those must be considered, especially for mass-market consumer products that will be installed and used by a wide range of users, many with limited technical knowledge. And crucially, as shown, even perfect installation doesn’t eliminate the risk anyway – so there’s no excuse for hiding behind claims that only trained professionals should be allowed to handle these connectors (which incorrectly implies that issues only happen when this rule is not obeyed).
Excuses from Nvidia or the company’s supporters claiming that failures are due to user error do not hold up. In reality, a product of this type must account for a certain level of common usage errors and less-than-ideal operating conditions. If Nvidia’s standard fails to accommodate this (which, it seems, has already been proven), then it is a flaw in the standard itself – showing that it should not have been used in this role and likely should not have been approved. Other members of the consortia responsible for ATX and PCI Express standards may also have lapsed by allowing Nvidia to push this power connector standard through the review process and standardisation.
By the way, there is now also another interesting statement from an engineer regarding the connector issue, published on Reddit.
For illustration, a comparison that speaks volumes: the 8-pin power connectors used on graphics cards before the advent of Nvidia’s new design (and still available as an alternative to it) also have a maximum permissible load of 9 A per contact, with three contacts used for current delivery (not four, as is sometimes mistakenly assumed). However, the entire cable is designed to provide a maximum of 150 W, meaning there is a significant safety margin, as the total required current is 12.5 A, or 4.16 A per contact. This results in a safety factor of more than 2.0× when using these connectors. Apologists defending Nvidia often argue that this is unnecessary and only drives up the cost of the graphics card. It may well be true that this margin is larger than necessary; a reasonable minimum could perhaps be around 1.6×, for example (though the exact ideal number for this application would need to come from an expert, not an uncritical fan or Nvidia marketer).
In reality, however, the additional costs that a larger safety margin on the connector would entail are likely minimal. They would certainly be negligible in the >$2000 price range of a GeForce RTX 5090, where using two 12+4-pin connectors instead of one (which would increase the safety margin to roughly 2× by limiting each connector to 300 W) would hardly affect the overall BoM cost.
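Pulling the numbers from the last few paragraphs together, here is a minimal sketch of the safety-factor arithmetic for the configurations being compared. The even current split and the per-contact ratings (9.5 A for the 12+4-pin contacts, 9 A for the 8-pin) are assumptions taken from the figures quoted above; with the 8.5 A contact variant, the derated two-connector case lands almost exactly at 2×.

```python
# Safety-factor arithmetic for the configurations discussed above.
# Assumes a perfectly even current split; per-contact ratings as quoted in the text.
configs = [
    # (label, total watts, rail volts, contacts carrying current, rating per contact in A)
    ("12VHPWR / 12V-2x6 at 600 W",       600, 12, 6, 9.5),
    ("8-pin PCIe at 150 W",              150, 12, 3, 9.0),
    ("one of two 12+4-pin at 300 W",     300, 12, 6, 9.5),
    ("hypothetical 48 V cable at 600 W", 600, 48, 6, 9.5),
]
for label, watts, volts, contacts, rating in configs:
    per_pin = watts / volts / contacts               # current per contact, A
    print(f"{label}: {per_pin:.2f} A per contact, "
          f"safety factor ~ {rating / per_pin:.2f}x")
```

With these assumptions, the single 600 W connector comes out at about 1.14×, the classic 8-pin at about 2.16×, and the 48 V option well above 4× – though, as argued above, a larger factor only helps if the current actually stays reasonably balanced.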
However, even a safety factor of 1.6× is dramatically higher than the entirely inadequate 1.14× chosen by Nvidia. It is likely true that an 8-pin connector could suffice with a smaller margin than 2×, but this does not necessarily mean the same applies to Nvidia’s 12+4-pin connector. Recall the 23 A current measured by Der8auer (a roughly 2.42× overshoot of the per-contact rating). If the connector is less mechanically robust than the 8-pin, with a greater tendency toward uneven contact and larger resulting current excursions, then a higher safety margin arguably needs to be factored in. Instead of 1.6×, a reasonable minimum might need to be even higher.
Separate issue: Nvidia eliminated elements from the cards that could have mitigated these problems
There’s no way around it: Nvidia deserves criticism for not addressing these issues in the RTX 5000 cards despite being aware of the failures that widely affected GeForce RTX 4000 graphics cards. The company did little in response, its only measure being shortening the sense pins and rebranding the 12VHPWR connector to 12V-2×6 after this change. In the current connectors on the cards, the power pins are also slightly extended (by 0.25 mm). However, this alone does not increase their current-carrying capacity or reduce their resistance; for that, the terminal on the other side of the connection and its contact area would also need to be modified, which, as far as we know, has not been done yet. Therefore, these two improvements only reduce the risk of a user running a GPU with a cable that is not fully inserted while the card still powers on and loads the connector. With the 12V-2×6 connector, the card should not be able to power on in most such cases, but do not rely on this entirely – the mitigation might not be 100% effective in every case. You should still ensure the connector is fully seated.
As it turns out, the connector is problematic even when fully inserted. By now, most users will be aware of the issue and will be cautious, but Der8auer’s video shows that ensuring a full connection is not enough to weed out the problems.

Instead of addressing the clearly weak connector, which can lead to catastrophic overload and overheating, Nvidia used it again in the RTX 5000 generation and even increased the power consumption of the graphics cards – the very factor that contributed to the problems of the previous generation. While the GeForce RTX 4090 had an official reference power consumption of 450 W, Nvidia increased it to 575 W for the GeForce RTX 5090. In other words, after seeing that the connectors were failing with a graphics card that loaded them to just 75% of their maximum specification, Nvidia proceeded to use the same problematic power solution on cards that now load the connector to 96%. The currents are now almost at the maximum specified level, which means the consequences of uneven resistance in the connector are bound to worsen as well. It is reasonable to expect that the risk of failure and melting or burning of the connectors will be higher with the GeForce RTX 5090 than with the previous generation.
The Nvidia GeForce RTX 5090, but also the RTX 4090 Founders Edition cards, feature a design choice that increases the risk of uneven current loads: their PCBs join all the voltage-supplying wires, and all the ground wires, into a single path right after the connector on the card. On older GPU boards that used more than one power connector, it used to be common to feed some VRM phases from one connector and other phases from another. This naturally served to split and balance the current load between the power cables/connectors, and it prevented the current of a failed connector from being rerouted through the other cable (and overloading it). When Nvidia used the 12+4-pin connector on the GeForce RTX 3090 Ti, the paths were not joined right after the connector; instead, the 12V wires were treated as three separate pairs, so that the system behaved, in some ways, like three individual power connectors in one (this probably was not true for the ground wires, though). Each of these three pairs was monitored by a shunt resistor, a component capable of measuring the current flowing through it. These shunt resistors could report a situation in which one of the contact pairs was delivering more current than normal, and the card could react by refusing to power up, alerting the user to the connector issue. Note that this could not detect an imbalance within a single contact pair, between its two contacts – that would require a shunt resistor for each 12V wire.
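To make the principle concrete, here is a hedged sketch of the kind of supervision that such a per-pair shunt layout enables. This is hypothetical illustration code, not Nvidia firmware; the function name and the 130% threshold are invented.

```python
# Hypothetical sketch of per-pair current supervision, in the spirit of the
# three shunt-monitored 12 V pin pairs on the RTX 3090 Ti described above.
# This is not real firmware; the function name and threshold are invented.
IMBALANCE_LIMIT = 1.3      # flag a fault if a pair carries >130 % of its even share

def check_pair_balance(pair_currents_a):
    """pair_currents_a: currents (in A) reported by the three shunt resistors."""
    total = sum(pair_currents_a)
    if total == 0:
        return True                                   # card idle, nothing to check
    even_share = total / len(pair_currents_a)
    worst = max(pair_currents_a)
    if worst > IMBALANCE_LIMIT * even_share:
        # A real card could refuse to power up or throttle at this point.
        print(f"connector fault suspected: one pair at {worst:.1f} A "
              f"vs. an even share of {even_share:.1f} A")
        return False
    return True

print(check_pair_balance([16.7, 16.7, 16.6]))   # True  - balanced
print(check_pair_balance([5.0, 8.0, 37.0]))     # False - one pair badly overloaded
```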
In the design of the GeForce RTX 4090 and 5090 Founders Edition graphics cards, as well as the reference PCBs (meaning this design is used in most cards, with few exceptions such as the expensive Asus Astral edition), these shunt resistors measuring separate currents are absent (this is discussed in detail by Buildzoid in this video on the Actually Hardware Overclocking channel). Instead, all six strands are immediately joined into a single 12V plane right behind the connector contacts on the card. This means the GPU loses any way of detecting a connector failure through current monitoring and shutting down before hardware damage or even a fire hazard occurs. It also prevents the card from attempting to balance the load on the connector by connecting individual VRM phases to different cable strands (and connector contacts). Due to these design decisions made by Nvidia in the PCB layout, the conditions are all the more favourable for uneven contact resistance to cause damage.
Would monitoring individual strands or load balancing solve the 12+4-pin connector issues?
There have been opinions that the solution to this problem would be for the cards or power supply to monitor the current on each of the six strands of the connector. However, this addresses the symptoms rather than the root cause. Monitoring does not fix the improper behavior of the connector under uneven resistance; at best, monitoring software can shut down the card during gameplay to prevent damage. Currently, only the Asus Astral edition of the GeForce RTX 5090 (which costs several hundred dollars more than the $1999 base price) implements this solution, and even this safety measure is not always active – it works through an Asus utility that must be running, and it only warns you and prompts your action, without shutting down the card automatically (though Asus could change this in the future).
There is also a weak link that cannot be resolved by attempts to monitor currents or even by implementing load balancing (dividing currents by connecting individual strands separately to different VRM phases). Load balancing through VRM phases, or some form of active connector management from the power supply, only addresses half of the conductors carrying current to the GPU. However, after the GPU “consumes” the current, it must return to the power supply through the other half of the conductors – the ground wires. On the return path, the current is already merged into a single flow leaving the GPU, and efforts to balance the 12V side of the connector/cable have no effect on the ground side. This entire technical solution would need to be somehow duplicated for the return current path, which, it seems, cards have never attempted, and it is probably not very realistic. In any case, it would be more complicated and likely more expensive than implementing a real solution – such as doubling the connectors.
Overall, it can be said that such proposals miss the mark and would not address the root cause. It is as if you had faulty, failing wiring and components in your apartment’s circuit breaker box, which occasionally sparks and smells of burnt insulation, but instead of having it fixed (and checked by a certified technician), you installed a smoke detector next to it and declared that an adequate solution to whatever issues could ensue.

Why connectors on the PSU burn more often and why this does not excuse the GPU and the 12VHPWR/12V-2×6 standard
If someone points out that it is the connectors on the power supply side that are burning more often than not these days, while the graphics card appears untouched – this is not really an argument in favor of the problematic connector or its use on Nvidia graphics cards. Even the damage to the power supply is caused by the standard developed and implemented by Nvidia and by the manner in which Nvidia’s graphics cards operate. It is likely not the fault of the power supplies. The power supply is more at risk than the graphics card because it is usually hidden in a separate compartment within the case, whereas the connector on the graphics card sits in the open space behind a glass side panel.
It is quite important whether the problematic area is easily accessible for visual inspection (and whether you might notice smoke quickly, for example). Airflow is dramatically worse at the hot spot on the modular panel of a power supply inside the case’s PSU compartment, and thus temperatures are higher, which is likely why most connector failures now occur there, while the GPU connector fails only in some cases. Simply put, the temperature at the power supply connector is higher due to worse external conditions, causing it to fail first – even though, in these cases, the connector at the opposite end should also be overheating simultaneously (since an overloaded wire carries the same excessive current through the contacts at both ends).
Even a cable actually igniting into flame can, in theory, go unnoticed for a while in cases with a separate power supply compartment, and due to poor ventilation in this area, it may take you longer to notice smoke and a burning smell. This will be worst in SFF and ITX cases, where cables are often tightly packed. This phenomenon, where only power supplies appear to be affected, should be understood more as an aggravating circumstance for the entire 12+4-pin cable standard (because it exposes the power supply to a hazard) rather than as evidence that Nvidia graphics cards are problem-free and that the issue is not their fault.
Is it better to use an adapter from 8-pin connectors instead of a cable with 12+4-pin connectors on both ends?
Actually, yes – it seems that using a power adapter might now ironically be safer, even though native cables were recommended until recently. And it is somewhat amusing that a new standard for power cables was introduced (and pushed into PSUs), only for the graphics card maker who bungled it to now advise you to graft it onto the older-type cables it was supposed to replace.
However, this is not necessarily because the adapter from Nvidia or its partners is of significantly higher quality. For example, wear and tear on the GPU-facing 12VHPWR/12V-2×6 connector, which is still officially rated for just 30 insertion cycles – a surprisingly low number – will be the same. An adapter splitting into multiple 8-pin connectors, however, can help distribute currents across individual wires and thus reduce the risk of overload. That said, this does not apply to the ground wires, and it only works if the 12+4-pin connector does not have all its strands interconnected inside it – which some adapters used to do. If yours is designed this way, it will not help the 12+4-pin connector at all, and uneven resistance could lead to excessive current even more severely (because the mitigating effect of the fixed resistance of each wire’s length is lost).

However, an adapter from 8-pin connectors always has one advantage: if worst comes to worst, the connector – or rather connectors – on the power supply’s side will not fail, because there is no 12+4-pin connector there. The 8-pin connectors on the adapter are very unlikely to melt (at least compared to the failure rates of the 12V-2×6/12VHPWR). This ensures that at least the power supply will be safe, isolating the danger of damage to just the graphics card and the adapter itself. It is not a full solution to the fundamental problem created by Nvidia, but it puts you in a better position should trouble happen.
Summary: Several mistakes that should not have been tolerated
Nvidia made at least three mistakes, each of which is a problem in itself, even if they only lead to failures occasionally and not in all installations (though this is no reason to dismiss them).
1. The power connector specification leaves practically no safety margin on individual contacts of the connector. Under normal cable operation (up to 600W load, as permitted by its specification), they have only a 14% safety margin left. This means that practically any issue causing an increase in current on one of the wires will lead to overloading that wire and exceeding the specification.
2. The connector has a problem with ensuring uniform resistance across individual contacts and thus balanced load distribution. The reason is not yet clearly established, but the most likely explanation is that it is too small and lacks robustness. Combined with the first factor, this leads to individual contacts exceeding specifications in the field – which can happen with just a 14% upward deviation in current. Users often do not notice this until the excursion leads to catastrophic failure, thus masking the true frequency of the issue.
The first two problems are not resolved by the newer version of the standard (12V-2×6).
3. Nvidia further made mistakes in the card design that exacerbated the previous weaknesses. Instead of designing the cards to prevent issues, they increased the risk that occurrence of uneven resistance in the connector would lead to severe failure. Moreover, Nvidia not only failed to learn from the issues in the RTX 4000 generation but made them worse by allowing the GeForce RTX 5090 even higher power consumption, further straining the already inadequate connectors.
Unfortunately, the company may be exploiting its position and popularity to ignore, downplay, and avoid addressing the problems (let alone actively preventing them in the first place), as it likely does not feel pressured to act and remedy the situation properly.
What is the solution?
A truly prudent solution for the GeForce RTX 4090 and RTX 5090 would be to recall the affected cards from the market. The defect in them cannot be fixed easily. It could in theory be circumvented with a special cable capable of preventing uneven currents (for example, if it were powered by six completely separate 12V rails in the power supply, though even this would not solve the issue on the ground-wire side).
The card design should be revised, and the maximum load on the connectors should be reduced to avoid operating so close to their limits (a practice known as derating). This would effectively restore the safety margin to an acceptable level. For example, this could mean limiting the maximum power delivery capacity of the 12+4-pin cable to 300 W and replacing a single connector with two parallel connectors on all cards with higher power consumption (as it turns out, even the 360W RTX 5080 may not be entirely safe). Additionally, the load between these two connectors should be balanced, and they should not be immediately interconnected on the PCB behind the connector.
Derating the connector does not solve its mechanical susceptibility to excessive currents. However, halving the total current means that even excessive loads on individual wires will be milder and more likely to fit within the newly increased safety margin. This may not be a perfect solution (that would involve transitioning to more mechanically robust, less fragile connectors and using them in sufficient numbers and with adequate margins), but the risk of failure and dangerous incidents should drop significantly and would likely become acceptable – even though locking PCs into this problematic connector for the long term seems like an unnecessary mistake.
Personally, I try to avoid doing this because I am not a reviewer, and even for a reviewer, this is something that should not be done lightly. But in this case, I have every urge to outright recommend that you do not buy these graphics cards (RTX 4090, RTX 5090) at all, due to the dangerous power connector design. Perhaps there is a less harsh approach, though: wait a while longer before purchasing and see whether Nvidia decides to heed the voice of reason and allows card manufacturers to release models of these GPUs with two 12+4-pin connectors. Such cards would be a big improvement, and as it looks now, they should no longer be a reason to worry. Until then, I would not risk buying these GPUs.
Sources: Der8auer (1, 2), Reddit, ComputerBase, Actually Hardware Overclocking, Corsair
English translation and edit by Jozef Dudáš