FIXING my USB3 2.5Gbe network adapters on Linux / Proxmox!

Video Statistics and Information

Captions
I've got three of these, and that can only mean one thing: clusters. But today we're not even talking about my cluster; we're talking about all the problems I had getting these USB 3 2.5 gigabit Ethernet adapters to work on Linux. I did in fact buy a set of three, intending to use them with my super low cost hyper-converged cluster, and I was hoping I could get a performance improvement out of Ceph by moving up to 2.5 gig Ethernet. And what did I find? Well, when you plug these things in, they just don't work right. So today I'm going to go into all the troubleshooting I did, the really simple fix, and how to get these things to work under Debian-based Linux distributions.

So how can we tell we have this problem? Currently the enx interface is disabled, so let's add it to a bond like I did in a past episode: enp4s, my gigabit interface, is the backup, and enx, my USB Ethernet, which should be 2.5 gig full duplex, is the primary. Let's see what's actually happening. Here's the information from the bond: the slave interface is up, but it's half duplex, and if we do an iperf test it's not going to be good. So, iperf3 client: I have a server on my network that should be reachable at 10 gig, except we have a 2.5 gig Ethernet interface here, so we're expecting good things, and we get a not-awful 1.4 gigabits. The reverse direction? Ooh, that's pretty bad. How about bidirectional? Yeah: 50 megabits in one direction and 1.4 gigabits in the other. So clearly we're getting better than gigabit, but it's not great. A lot of people would probably just blame this on poor Realtek hardware, which is what I did initially, but then I started looking at it more, and there are other things you can't do. For example, if I want to set the MTU, ip link set enx... mtu 9000 fails with "MTU greater than device maximum", and even if I go just a little bit bigger, like 1550, it's still too big. So I can't use jumbo frames, and I can't even use baby jumbo frames, which are really helpful if you're trying to run VXLAN in your cluster.

So what is the root of this problem? If we run dmesg and grep for enx, it shows us all the kernel logs related to the device. It's part of the bond, and the bond is aware of it, but it's bound to the cdc_ncm driver, which is the generic USB Ethernet driver. So it's not running the correct Realtek driver, and what's funny is that the correct Realtek driver is in the kernel; it's just not being loaded. If we go to the files that Realtek actually published, which people have been kind enough to mirror on GitHub, there's the driver source r8152.c, which is already in the Linux kernel, but there's also another file, 50-usb-realtek-net.rules, which is a set of udev rules telling the system which driver to load when it encounters one of these USB devices. So if we just copy that whole file into the udev rules directory and reboot, in theory it should load the correct driver. So: nano /etc/udev/rules.d/50-usb-realtek-net.rules, paste in Realtek's rules, and save.
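Putting that together, a minimal sketch of the whole fix; the interface name here is a made-up example (your enx name is derived from the adapter's MAC address), and the rules file comes from one of the GitHub mirrors of Realtek's driver release:

# Confirm which driver is bound; before the fix you'll see cdc_ncm here
dmesg | grep enx00e04c680001

# Install Realtek's udev rules; they select the device's vendor-specific
# USB configuration so the in-kernel r8152 driver binds instead of cdc_ncm
nano /etc/udev/rules.d/50-usb-realtek-net.rules
# (paste the contents of Realtek's 50-usb-realtek-net.rules, then save)

reboot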
So let's reboot and see what happens. Okay, the system came back up; let's look at the bond again. Now we've got full duplex. And if we run dmesg again and grep for the enx interface, what do we find? It loaded a different driver this time: r8152. The reason it's the r8152 driver, even though this is the RTL8156 chipset, is that the 8152 is Realtek's USB gigabit chip, and Realtek decided to add 2.5 gig support to their existing gigabit driver named r8152. So just by adding that udev rules file, which was part of the release from Realtek, we got it to load the driver that's already in the kernel; we didn't have to compile anything. I tested this on Linux 5.15, 5.19, and 6.1, so all is good.

Because I don't have enough 2.5 gig switch ports to test this, I decided to directly connect pve2 and pve3 with a loopback cable, so we can ping and send data between them, just not through the global network. I gave each node its own address, fd69:beef:cafe::3 and fd69:beef:cafe::2, so we should be able to ping between them. I'm on pve2, so I'll ping pve3, and what do you know, it works, which means we're really sending traffic across the interface. So I set up an iperf3 server here on pve2, go over to pve3, and run iperf again: 1.94 gigabits per second. That is pretty darn good; I wasn't expecting a whole lot more out of these Realtek chips. What if we run bidirectional? That was a big pain point before. Okay, we're getting greater than a gigabit in both directions simultaneously, averaging about one and a half, slightly less on the receive side: 1.6 and 1.25. So it's better than gigabit and roughly symmetrical; we're not complaining about that either.

So what about jumbo frames? You know how this interface has a gigantic name, enx followed by the MAC address; so, ip link set that giant name, mtu... Can we do baby jumbo frames? Oh yeah. Big jumbo frames? Oh yeah. I actually tried setting this as high as 12,000, which is outrageously big, but it lets you set it. So can we ping across it now with gigantic jumbo frames? Flip over and set the high MTU on the other node too. When you're working with high MTUs, be very careful: it's easy to get things to stop communicating with each other, and I would not recommend this except on very private links. So what if I ping back to pve2, fd69:beef:cafe::2? It still works. How about we send 1,000 bytes? 2,000 bytes? Yeah, it's working. Let's go bigger: 9,000 bytes? That's 9,008 bytes with the ICMP header, so jumbo frames do work. One thing I did notice with jumbo frames, running iperf again to fd69:beef:cafe::2: I did not get higher bandwidth with the high MTU, so something else in the system is limiting bandwidth, and I suspect it's USB 3.
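As a recap of those verification steps in command form, a sketch assuming the same hypothetical interface name and the fd69:beef:cafe::/64 addresses used above:

# After the reboot, the adapter should be claimed by r8152, not cdc_ncm
dmesg | grep enx00e04c680001

# Jumbo frames now work; this failed with "MTU greater than device maximum"
# under cdc_ncm
ip link set enx00e04c680001 mtu 9000

# Ping pve2 with a full 9000-byte packet, fragmentation forbidden:
# 8952-byte payload + 8-byte ICMPv6 header + 40-byte IPv6 header = 9000
ping -M do -s 8952 fd69:beef:cafe::2

# Throughput test: server on pve2, client on pve3;
# --bidir (iperf 3.7 and later) runs both directions at once
iperf3 -s                            # on pve2
iperf3 -c fd69:beef:cafe::2          # on pve3
iperf3 -c fd69:beef:cafe::2 --bidir  # on pve3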
Now, there are still reasons to run a high MTU in some environments, especially if you're dealing with encapsulation, since a higher MTU can absorb the encapsulation overhead: if you're running a tunnel that adds 50 bytes of overhead, for example, you can add 50 bytes to your MTU, using baby jumbo frames to keep a full 1500 bytes on the encapsulated network (see the sketch below). Other than that, huge MTUs aren't really recommended, but these Realtek NICs do support them just fine. In this case, with the 12,000-byte MTU, our iperf was actually a lot closer to symmetrical, which is cool, but still about the same one and a half gigabits per second.

So hopefully this little tip helped you. These chips are in all kinds of different devices; I'll have a link down below to the exact adapter I use, made by Sabrent, but they're all over, and almost all of them have this Realtek chip. The driver is in the kernel; it just doesn't seem to be fully configured by distros, and I'm not sure why. If you're using Proxmox and this helped you, let me know: like, thumbs up, all that good stuff. If you want to chat with me more about anything, I have a Discord server linked down below too. And as always, I'll see you guys on the next adventure.
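And for the tunnel case mentioned above, a sketch of the baby jumbo frame arithmetic, using VXLAN's 50-byte IPv4 overhead as the example (the interface and VXLAN names are hypothetical):

# VXLAN over IPv4 adds 50 bytes per frame:
#   outer IPv4 (20) + UDP (8) + VXLAN (8) + inner Ethernet (14) = 50
# so a 1550-byte MTU on the physical link leaves a full 1500 bytes
# for the encapsulated network
ip link set enx00e04c680001 mtu 1550
ip link add vxlan100 type vxlan id 100 dev enx00e04c680001 dstport 4789
ip link set vxlan100 mtu 1500 up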
Info
Channel: apalrd's adventures
Views: 9,067
Id: sAfPm2CxfI4
Length: 8min 59sec (539 seconds)
Published: Thu Jan 19 2023