Proxmox on Erying Motherboards! - Testing Hybrid CPUs in Virtual Machines

Video Statistics and Information

Captions
Sorry if the content's been a little slow lately; I've had other things on my mind. Today's video is brought to you by me. Check out craftcomputing.store for all of my official merch and help fund the content that you enjoy watching here on the channel. From custom laser-engraved pint glasses to coasters and whiskey stones, and even our brand new double-wall insulated coffee tumblers, all of my merch is designed 100% in-house and made to order by me. I'm also now offering flat-rate international shipping to 23 different countries, and if you live in the continental US, free shipping on orders over $35. So what are you waiting for? Head on over to craftcomputing.store.

So, I made some comments in a recent video that got a lot of you pretty proper angry. I said Intel should not have removed their efficiency cores from their newest line of entry-level Xeon CPUs. As a quick recap: Intel launched their Xeon E-2400 series, which is basically just a rebadged Raptor Lake desktop chip. They have the same performance core counts, with the same boost clocks and the same amount of cache, but they're designed for servers, so of course they're going to cost more. But Intel did make one change before launching these CPUs, and that was to remove the efficiency cores. So instead of getting something similar to a Core i9-13900 with 24 cores and 32 threads, thanks to its 8 P-cores and 16 E-cores, we get CPUs like the Xeon E-2488, which has eight cores and 16 threads.

I was critical of this decision for a number of reasons, but it mostly centered around what market these CPUs are aiming to fill: that is, small and medium businesses that host their own infrastructure. I felt it was a bad move because you're essentially getting half the number of threads, taking away from the number of services or virtual machines that you could potentially host on one of these servers. "But Jeff," you all yelled at your screens, "don't you know that hypervisors don't work with non-homogeneous architectures?" In fact, quite a few of you
decided to wall-of-text correct me for my obviously stupid take, because don't I know the world runs on VMware, so it would be stupid for Intel to include efficiency cores if you just had to disable them to get VMware to work anyway.

First off, a couple of things. I love the irony of being told that I'm too narrowly focused on the lack of E-cores, while also being told the only reason Intel removed them is a single use case in the industry. VMware does own around 45% of the global virtualization market and is the largest player by far, but in my experience of doing this job for about 15 years, you typically only find VMware in larger organizations and enterprise, think 10,000 employees or more. In small and medium-sized businesses, on the other hand (companies of, say, between 50 and 5,000 employees), more often than not I encounter either Hyper-V or KVM-based hypervisors. And while VMware itself doesn't support big.LITTLE architectures, Hyper-V and KVM certainly do. And given that smaller organizations are far more likely to purchase Xeon E-2400 based systems than Emerald Rapids, that's where my focus for both software and hardware was coming from in that review.

My scrutiny of this decision was also based on the fact that I highly doubt Intel is creating unique CPU dies for the Xeon E-2400s. It's far more likely they're using the same chips that go into the consumer CPUs in the Raptor Lake line and just disabling the E-cores and graphics on them, which means the hardware exists on the CPUs; they're just disabling it. Though without delidding a CPU, I can't verify that claim.

Now, there weren't a lot of comments, but the comments that were on there were pretty vitriolic about my disdain for Intel's practice there, and the whole backlash got me thinking: what does big.LITTLE architecture support look like inside of Proxmox? Is Proxmox able to utilize efficiency cores? Does scheduling happen automatically? Or were the commenters correct, and should I delete my channel, retire to the woods, and never show my
face in public again? And yes, I know I'm being a little bit snarky, and it's kind of on purpose, but I was already planning on testing big.LITTLE architecture; I was just waiting on a new motherboard to arrive from Erying. And let's just say, of all the mobile-CPU-strapped-to-a-desktop-motherboard boards, this is probably the weirdest one I've come across yet.

These types of motherboards typically come in two different flavors: boards with retail mobile CPUs, and others with engineering sample CPUs. Being I'm all about budget for projects like this, obviously I went with an engineering sample, based on the Intel Core i9-12900H. We're looking at a total of 20 threads on this chip, thanks to six performance cores and eight efficiency cores, along with Iris Xe graphics and Intel Quick Sync support for video encoding.

Now, I've been using a pair of Erying motherboards in my home server rack, but those are based on the 11th-gen Tiger Lake, with eight cores and 16 threads, all being the same core type. If you watched my most recent server rack tour video, these systems are a big reason why I was able to reduce the power consumption of my rack to less than 350 watts at idle, total. I figured with four more threads, I might be able to run a couple more services on individual boxes, thereby lessening the need for more hardware. Also, with the efficiency cores, it might actually mean my idle power is even lower than if I were running the 8-core, 16-thread Tiger Lake system.

So I started off by firing up Proxmox on this 12th-gen board, and I made sure I was updated to the latest version, which is 8.1.4 as of the time of filming. It's also important to note that this was on kernel 6.5, as big.LITTLE support was added in kernel 6, so if you're still running on version 5, you're going to need to update to take advantage of the non-homogeneous architecture. Inside Proxmox, the summary tab lists the CPU as having 20 threads, so right away we do have access to both the P- and the E-cores, though there's no real differentiation
between them. I decided to just go for broke in testing, and fired up a pair of Windows VMs, each configured with 10 threads, maxing out my core allocation for the CPU. Now, I know what some of you are already going to say: how are the VMs supposed to know if they should use P-cores or E-cores, and how will I balance performance between the two? In theory, because big.LITTLE support is a kernel function, performance balancing should actually happen at the host level, with the Linux KVM kernel automatically assigning P-cores to more strenuous workloads inside of your VMs. You could also set CPU affinity on each VM to manually assign either P- or E-cores, ensuring that VMs that need higher performance always have access to it. At least, that's the theory.

Loaded into Windows, I wanted to run a CPU test that is not aware of big.LITTLE architecture, as I didn't want Windows, for some reason, muddying the waters here. Plus, if scheduling really is happening at the Linux kernel level, that's where I want to see it demonstrated. For the first test, I had both Windows VMs run Cinebench R15 multi-threaded at the same time, and I wound up with scores of 657 and 619. Not the most exciting of scores, but let's compare those to the sanity check I ran before starting all this: I ran Windows natively on this motherboard, and Cinebench gave me a multi-threaded score of 923. Which means the same test, running on two separate virtual machines, was a full 38% faster than running on bare metal. That confirms two things for me. Number one: Cinebench R15, when running on bare-metal hardware, is not capable of differentiating between P- and E-cores, and they kind of got in each other's way. Number two: the Linux kernel, when just left to its own devices, is capable of assigning the faster cores to the more demanding threads, which is why our overall performance went up. That result gives some credence to the idea that Linux actually might be balancing the workload among the different core types, as, remember, R15 can't schedule big.LITTLE
cores properly, which is why we see such a low score when it runs natively.

But I wasn't satisfied, so I figured I'd run the test a couple more times, and around halfway through the second run, the system crashed. And I mean hard: we wound up with a kernel panic and an automatic reboot. Not a great start, but maybe it was something I'd done or configured that had caused it. One thing I considered was the fact that I had allocated 100% of my CPU threads to virtual machines in this system. Normally that will result in the occasional performance drop, as the hypervisor still has work to do outside of the VMs and will typically steal CPU cycles for itself. But with the big.LITTLE architecture, we're also relying on the Linux kernel to handle scheduling for different CPU types, and maybe there just weren't enough resources to keep up with that need.

So with that in mind, I figured I'd go ahead and try again, this time with three virtual machines, but each configured with six threads. That's 18 threads in total for the VMs, leaving two threads for the hypervisor to handle its own tasks. The other thing that I did was lock CPU affinity on all three virtual machines, to see if there would be a difference in performance between the VMs. The first two VMs received threads 0 through 11, so essentially three P-cores and six threads each; the third VM received threads 12 through 17, which should give it six E-cores.

So with Windows again loaded up, I got all three machines logged in, and it crashed again. This time without a kernel panic message on the console; it just had a hard lock. So I tried again, and again, until finally I was able to get all three VMs loaded, and it seemed like I might actually get a test out of it. First up was to run each VM individually to make sure the CPU affinity was actually working. The VMs with performance cores scored a 799 and a 744, while the third VM scored a 687. Not a huge difference, but that's actually not that surprising, as there's not much separating a performance core
that's hyperthreaded and two efficiency cores, especially when they're at similar clock speeds. Next up, let's run all three benchmarks at once and see if we can get similar results. But no points for guessing what happened instead: yep, we crashed again.

At this point, I was starting to question whether the problem was Proxmox, or whether it was this engineering sample CPU, as there are some more oddities with this CPU, and I'll get into that momentarily. But I wanted to find a culprit, so I swapped out this motherboard for one that I have set aside for another project, this time with a Core i7-13620H retail mobile CPU. This is a Raptor Lake CPU with 10 cores and 16 threads; that is, six performance cores but only four efficiency cores. I ran the same experiment again, running three virtual machines, two of them with three performance cores each, and the last with four efficiency cores. And this time, it decided to run as expected. Yay! The two P-core VMs scored just over 1,000 points, with the third VM managing 600 with its four E-cores. The results were the same whether the VMs were running Cinebench individually or all at the same time. Which means, yes, CPU affinity is working, and there is a difference between performance cores and efficiency cores when it comes to multi-threaded performance. Science!

So now let's actually test this out: what happens if I disable CPU affinity and let Proxmox allocate power dynamically? For this test, I set up four total VMs; each machine received four CPU threads out of this 16-thread CPU. If all four VMs run Cinebench at the same time, we should see nearly identical scores, assuming Proxmox is actually doing what it's supposed to be doing and allocating its performance and efficiency cores evenly across highly threaded workloads. But of course, what should be simple wound up causing a whole host of problems. With four VMs, each of them set up with four threads on this 16-thread CPU, I could only get three of them to start at the same time. The fourth VM would start up and POST, but shortly
after booting into the Windows splash screen, one of the other VMs would just die. But as Adam Savage would say, every result's a result, and that kind of goes towards confirming my theory that CPU scheduling in Proxmox needs unallocated CPU threads to handle this task. But that also means that if you have additional cores and threads, thanks to Intel's big.LITTLE architecture and the inclusion of efficiency cores, a couple of them can only be used for making sure the others do what they're supposed to do, taking away from that core advantage.

But here's the crazy part: I did eventually get all four VMs booted up at the same time and running Cinebench all at the same time, and the results were virtually identical in performance across all four, with Proxmox handling performance allocation for the P- and E-cores. I also ran single-threaded benchmarks on all four VMs simultaneously, and they all hit scores of either 255 or 256, meaning that Proxmox is properly assigning performance cores to do all that work automatically. Cool. But of course, when I wanted to keep running my testing, everything crashed again, only this time it went down so hard it managed to corrupt my Proxmox install.

So I can't necessarily say I recommend running non-homogeneous architectures with Proxmox, at least not right now, which is kind of disappointing, as there are some genuinely good deals on Intel Alder Lake and Raptor Lake mobile parts which should be killer for low-power home labs. Both of these CPU-and-motherboard combos require less than 100 watts of power at full tilt, which also means you can use something like an Intel boxed heatsink to keep them cool, further adding to that value proposition.

But of course, there's one more thing to mention. You know how I said that this was one of the weirdest motherboards I had come across yet? While it does have a 12th-gen i9-12900H engineering sample, and the motherboard-CPU combo was only $165, that's due to some pretty severe PCI Express limitations on this exact CPU. The
motherboard has a PCI Express 3.0 x4 slot right down here on the bottom, as well as a pair of M.2 Gen 4 x4 slots, and all of those work perfectly, as expected. Unfortunately, this x16 slot has a little sticker on it, and that's to remind you that the x16 slot is limited to just PCI Express 2.0 with only eight lanes, meaning any hopes of using this as a gaming PC are pretty much off the table, as any modern GPU would suffocate with such little bandwidth. I was interested in this board primarily for home lab use, as on paper a 12th-gen i9-12900H is a major step up even from the 11900H CPUs currently in my virtualization servers. And even though we've only got a PCI Express 2.0 x8 slot here, that's actually plenty for something like a 10-gig network card, or all of the SATA connectivity you could ever handle.

But as it is, even though non-matching core types are technically supported by the Linux kernel, and if you get them running they do exactly what they're supposed to do, both of these systems, the engineering sample and the retail 13620H, had major stability issues. Hopefully we'll see some better support with Linux KVM in the near future, so systems like this can actually be put to good use, but at the moment I can't say it's something I'd deploy in my own home lab. In doing research for this video, I came across so many conflicting reports of whether or not Intel big.LITTLE, or any big.LITTLE architecture, was supported on Linux KVM, or inside virtualization, or in Hyper-V, and even rumors that it might be coming to Xen Orchestra or even ESXi. But in the end, nothing ever gave a clear answer, and the only result that I have is massive instability. If you have any experience with this, please let me know down in the comments. I'm genuinely curious to hear your own experiences, and whether there's something I can do to make these systems stable, because at $165, this is a hell of a deal for a home lab server.

On your way down there, make sure to drop this video a like, and subscribe to Craft
Computing if you haven't done so already. Follow me on social media at Craft Computing for daily shenanigans like this. And if you like the content you see on this channel and want to help support me in what I do, consider joining the Patreon; the link is down in the video description and helps keep the lights on around here. That's going to do it for me in this one. Thank you all so much for watching, and as always, I will see you in the next video. Cheers, guys.

Beer for today is from Silver Falls Brewing out in Silverton, Oregon. It is the Like Yesterday '90s IPA, clocking in at, what is this, 7.4% and 70 IBU. So, Like Yesterday '90s IPA. Number one, I love the boombox-on-roller-skates can art, that is fantastic, and a background to this that I think Os's Vox would be jealous of. This is an interesting beer, for a couple of different reasons. It is definitely a West Coast-style IPA. I mean, here in the Willamette Valley in Oregon, it's definitely a nice bold hop flavor, but it's a little one-dimensional. Typically your IPAs, especially your West Coast, you know, Northwest IPAs, can go a couple of different directions. This one starts with a nice big bold hop flavor, but usually you'd have a really thick body, a lot of malt backing, to go behind that, and the flavor changes midway through, and a lot of those types of IPAs tend to have the oilier, clingier type finishes. This one has a big bold front to it, but then it ends very dry; it almost feels a little hollow in the middle of it, like the floor just comes out from underneath it. On the other end of the spectrum, you've got IPAs that start bright and floral and citrusy, and those are usually the ones that end kind of dry, that don't have the hop oils and the lupulins and all that kind of stuff to linger on your tongue. So we've got, like, the start of a hop bomb and then the finish of a drier IPA, but in the middle it's kind of missing both elements. It feels hollow; it feels empty. I wish the body was a little bit bigger; I wish there was some malt to complement that big bitter taste at the front of it. Definitely not bad, it's just a little unexpected.
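For anyone wanting to reproduce the affinity experiment from the video, here's a minimal sketch. The sysfs paths are how recent kernels expose the P-core/E-core split on hybrid Intel parts (they only exist on hybrid CPUs), and the `qm set --affinity` lines mirror the thread pinning described in the video; the VMIDs 101-103 are made-up examples, not anything from the video.

```shell
# Identify which logical CPUs are P-cores vs E-cores on a hybrid Intel chip.
# These sysfs lists only exist on hybrid CPUs, so fall back to a message.
cat /sys/devices/cpu_core/cpus 2>/dev/null || echo "no cpu_core list (not a hybrid CPU?)"
cat /sys/devices/cpu_atom/cpus 2>/dev/null || echo "no cpu_atom list (not a hybrid CPU?)"

# Pin Proxmox VMs to specific threads (run on a Proxmox host; VMIDs are
# illustrative). On an i9-12900H, threads 0-11 are the six hyperthreaded
# P-cores and 12-19 the E-cores, matching the split used in the video.
# qm set 101 --cores 6 --affinity 0-5     # P-core VM
# qm set 102 --cores 6 --affinity 6-11    # P-core VM
# qm set 103 --cores 6 --affinity 12-17   # E-core VM

# Afterwards, watch which processor (psr column) each vCPU thread lands on,
# and compare those numbers against the cpu_core/cpu_atom lists above.
ps -eLo pid,tid,psr,comm | grep -i kvm || echo "no KVM threads running"
```

Note the `qm` lines are commented out so the snippet is safe to run on any Linux box for the topology check alone.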
Info
Channel: Craft Computing
Views: 49,038
Id: o2H4HqLH4WY
Length: 18min 45sec (1125 seconds)
Published: Mon Jan 29 2024