NVIDIA RTX 4090 PCIe 3.0 vs. 4.0 x16 & '5.0' Scaling Benchmarks
Video Statistics and Information
Channel: Gamers Nexus
Views: 400,191
Keywords: gamersnexus, gamers nexus, computer hardware, pcie 4.0 vs 3.0, pcie 4.0 x8 vs 4.0 x16 rtx 4090, pcie 5.0 vs 4.0, pcie bandwidth scaling rtx 4090
Id: v2SuyiHs-O4
Length: 13min 9sec (789 seconds)
Published: Mon Oct 31 2022
Video description via GN:
The nice part about 5.0 support would have been the same bandwidth in half the lanes, allowing us to free up another 8 lanes for whatever we choose. That being said, it doesn't look like we lose a ton with 8 lanes.
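For context, the "same bandwidth in half the lanes" point falls straight out of the per-lane numbers: each PCIe generation doubles the transfer rate, so 5.0 x8 matches 4.0 x16. A back-of-the-envelope sketch in Python, using only the published per-lane signaling rates and 128b/130b line encoding (real throughput is somewhat lower once packet overhead is counted):

```python
# Approximate one-way PCIe bandwidth from per-lane signaling rates.
# Ignores packet/protocol overhead; 128b/130b encoding applies to Gen3+.
GT_PER_LANE = {3: 8, 4: 16, 5: 32}  # gigatransfers per second, per lane
ENCODING = 128 / 130                # 128b/130b line-code efficiency

def bandwidth_gb_s(gen: int, lanes: int) -> float:
    """One-way link bandwidth in GB/s (1 transfer = 1 bit per lane)."""
    return GT_PER_LANE[gen] * ENCODING * lanes / 8

for gen, lanes in [(3, 16), (4, 16), (5, 8), (5, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{bandwidth_gb_s(gen, lanes):.1f} GB/s")
# PCIe 3.0 x16: ~15.8 GB/s
# PCIe 4.0 x16: ~31.5 GB/s
# PCIe 5.0 x8:  ~31.5 GB/s  <- same as 4.0 x16, in half the lanes
# PCIe 5.0 x16: ~63.0 GB/s
```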
Well, I have a faulty Z690 mobo from Gigabyte that only does PCIe 3.0. Guess I'm fine for a while if all I do is gaming?
Did a GPU finally saturate a PCIe 3.0 x16?
If Nvidia made this a PCIe 5.0 x8 card, I'd buy. Given the scarcity of lanes on even the high-end PCIe 5.0 motherboards, and the market this GPU is for, having it occupy 16 PCIe lanes running at half speed is plain stupid.
My 4090 was stuck at x16 1.1 after my initial install on an MSI B650M Mortar with a 7700X. My 3DMark scores were 20 to 30 percent lower than the average 4090. Luckily there was a BIOS update that fixed the issue, and I am now running at x16 4.0 with scores similar to other 4090s.
If I have two NVMe SSDs where one uses PCIe 3.0 and one uses PCIe 4.0, does the system get locked to PCIe 3.0 (the lowest common denominator), or do the various devices coexist with their corresponding protocol support?
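For what it's worth, each PCIe link trains independently between a device and the port it sits in, so a Gen3 and a Gen4 NVMe drive coexist at their own negotiated rates; nothing gets locked down system-wide. On Linux you can check what each device actually negotiated, which would also catch the stuck-at-1.1 situation described above. A minimal sketch reading the standard sysfs link attributes (not every PCI function exposes them):

```python
# Minimal sketch (Linux-only): print each PCI device's negotiated link
# width/speed next to its maximum, using standard sysfs attributes.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        speed = (dev / "current_link_speed").read_text().strip()
        width = (dev / "current_link_width").read_text().strip()
        max_speed = (dev / "max_link_speed").read_text().strip()
    except OSError:
        continue  # not every PCI function exposes link attributes
    print(f"{dev.name}: x{width} @ {speed} (max {max_speed})")
```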
So this is pushing me toward the 5800X3D as an upgrade, as it won't be hindered by lacking PCIe 4.0?
So, barring one outlier that is probably related more to driver optimization than PCI-E bandwidth limitations, the FPS in real games barely changes when moving down from Gen4 to Gen3.
This means that even the absolute fastest GPU today, in late 2022, cannot fully saturate an interconnect rolled out in 2010. This, if anything, makes me wonder if PCI-SIG develops new standards a bit too quickly - it's not necessarily bad as long as they remain fully backwards compatible, but there's still barely any Gen4 gear that actually utilizes the increased bandwidth, much less Gen5.