Proxmox 8.0 - PCIe Passthrough Tutorial

Captions
Welcome to part two of my Proxmox tutorial series. In my previous video we went over installing Proxmox, all the ins and outs, and configuring your first virtual machine. Today we're going to jump into PCI Express passthrough, or how to allow virtual machines to directly interact with hardware you have in your server. This can be storage controllers, graphics cards, network cards, or any other device connected over PCI Express. Let's get started.

Today's video is brought to you by me, and the all-new craftcomputing.store. There's no better way to help support the channel than by picking up a set of coasters, whiskey stones, rocks glasses, or any of the other accessories we have to help set up your own home bar, and it's all designed 100% in-house. Visit craftcomputing.store and start drinking like a pro. Cheers.

Welcome back to Craft Computing everyone, as always I'm Jeff. Like I mentioned, today we're going to cover device passthrough in Proxmox 8.0. We'll go over what hardware is required, how to enable it, what types of devices you can pass through, and a couple of use-case examples.

To start off, what is device passthrough? Essentially, it's taking physical hardware attached to your server and allowing a virtual machine to access it directly. I think we'll start with some use cases, as you might be wondering why you'd want to do something like this. As we covered in the first video, a virtual machine is a combination of real and emulated hardware, allowing a host to run multiple types of virtual PCs with different hardware configurations, operating systems, and the like. But most virtual machines, at their basic level, rely on emulated I/O devices. This means things like your CPU and memory are directly accessed by the VM, but devices like your storage controller, network adapter, graphics card, even USB devices like keyboards and mice, are all emulated. While it's sometimes possible to configure all available resources directly on your host hardware and then emulate hardware access for guest systems, there are some situations where giving direct physical access to real hardware is a far better option.

As a quick refresher, the server we're going to be working with is my $1,000 8-bay DIY server, powered by an ERYING i9-11900H motherboard-and-CPU combo with eight cores and 16 threads, along with 32 GB of DDR4. For storage we've got a pair of 1 TB NVMe drives, which are responsible for Proxmox as well as our virtual machine disks. On the front here we've also got eight 6 TB hard drives connected to an ASMedia 1064 PCI Express SATA controller. And here's where our first use case comes into play: how would we go about giving a virtual machine access to all eight of these hard drives? Sure, it is just Linux that we're running on, after all, so inside Proxmox we could set up an NFS or an iSCSI share via a terminal, then configure the virtual machine to automatically connect to those drives at boot. But then we're running all of our storage over network protocols rather than having direct access to it. That adds needless complexity and actually limits the speed of the overall setup. Instead, we can give a virtual machine direct physical control over the PCI Express storage controller, which basically is like plugging the PCI Express card directly into the virtual machine. Because the storage controller is passed through, the virtual machine will have direct bare-metal access to each hard drive over the PCI Express bus. This kind of access is vital for systems like TrueNAS, which runs the ZFS file system and needs physical access to the hard drives for it to work properly.
Other instances where PCI Express passthrough is needed are virtualized gaming, where you want to pass through a graphics card to a Windows virtual machine. The same goes for media servers like Plex or Jellyfin, where a graphics card might be used for video encoding. Network cards can be passed through for dedicated and isolated access, such as a pfSense server running as a virtual machine. Anywhere a PC requires, or would benefit from, direct access to hardware, passthrough makes it all possible. There are also some instances where hardware can be shared between a host and multiple virtual machines. While I've shown this off in my cloud gaming series of videos, we're not going to get into any proprietary configurations like NVIDIA vGPU, or Intel's SR-IOV sharing of their integrated graphics, today.

PCI Express passthrough requires CPU and motherboard support for it to work. Now, while almost every modern CPU supports this feature, motherboard support, especially with consumer boards, can be a bit hit or miss. It also doesn't help that PCI Express passthrough has about a dozen different names, and each OEM and vendor uses those names interchangeably. The basic technology at play here is called IOMMU, or input/output memory management unit. It's a fantastically catchy name, I know. It's a CPU function that essentially allows a virtual machine direct access to specific memory address spaces. This is going to get a little complicated, so I'm going to simplify it as much as I can. Traditionally, a virtual machine is presented with a virtual memory address space, which basically means there's no way for a virtual machine to access the specific memory addresses that are required to interact with hardware directly. The IOMMU allows a CPU to pass memory address locations through directly to a virtual machine, and a VM with access to real memory address locations can utilize anything in that space. That means bare-metal access to hardware, so long as the CPU has granted access to that particular space.

As I mentioned, each OEM and vendor calls the technology something unique to them. AMD refers to it as AMD-Vi; Intel calls it Virtualization Technology for Directed I/O, or Intel VT-d. Motherboard OEMs can use their own naming conventions as well. Both AMD and Intel also have a branch technology called Single Root I/O Virtualization, or SR-IOV. Sometimes you'll see options for both IOMMU and SR-IOV in your motherboard BIOS; sometimes just one or the other will enable both features. In short, you want to make sure that IOMMU and/or SR-IOV is enabled in your motherboard's BIOS. It can be called by dozens of different names, but the root technology at play here is called IOMMU, which should give you a good term to start Googling if you can't find the answer in your particular product's manual. On the ERYING motherboard that we're using today, the option for IOMMU was located under Chipset > System Agent > VT-d. Again, your particular motherboard is going to be different.

Make sure you have IOMMU enabled, and we can finally get into Proxmox. Unfortunately, inside Proxmox there is no GUI option for enabling IOMMU support. There are two different methods for enabling IOMMU, depending on whether your system is booting via legacy BIOS or via EFI. For this video I'm going to assume that Proxmox is booting via EFI, but my written documentation covers both methods; click the link in the description for that file.
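As a rough sanity check, you can also grep the Proxmox kernel log for IOMMU messages; on Intel boards these typically show up as DMAR entries and on AMD as AMD-Vi entries, though the exact wording varies by platform and kernel, so treat a quiet result as a prompt to recheck the BIOS rather than proof of failure:

# Look for IOMMU-related firmware messages (exact wording varies by platform)
dmesg | grep -i -e DMAR -e IOMMU -e AMD-Vi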
To start off, you'll want to SSH into your Proxmox server, or open up the shell window from the web GUI. Once you have a terminal open, type in nano /etc/kernel/cmdline. This file configures kernel parameters for Proxmox, and in my case defines the ZFS pool used for storage by the OS, as well as booting Proxmox itself. This file loads parameters on a single line, so any changes we make need to be appended at the end of line one. Since my server is running an Intel CPU, we're going to add intel_iommu=on at the end of this line; for AMD-based systems you want to enter amd_iommu=on instead. With that done, press Ctrl+X to exit, Y to save the file, and Enter to confirm the file name.

Along with IOMMU, we'll need to load some additional kernel modules into Proxmox to support hardware passthrough. Go ahead and type in nano /etc/modules. In this file you'll want to copy and paste the four modules listed in the VFIO section of my written instructions: those are vfio, vfio_iommu_type1, vfio_pci, and vfio_virqfd. Once again, press Ctrl+X to exit, Y to save, and Enter to confirm the file name.

To apply these changes, there are two commands you need to know. First, for the kernel parameters, type in proxmox-boot-tool refresh and press Enter. Then, for the kernel modules file, type in update-initramfs -u -k all. Once both commands are finished, go ahead and reboot your system, hopefully now with IOMMU support.
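Condensed into one place, here's the whole sequence as a sketch for an EFI-booted system (Intel shown; on AMD, substitute amd_iommu=on):

nano /etc/kernel/cmdline         # append to the end of the single line:
                                 #   intel_iommu=on

nano /etc/modules                # add the four VFIO modules, one per line:
                                 #   vfio
                                 #   vfio_iommu_type1
                                 #   vfio_pci
                                 #   vfio_virqfd   (built into vfio on Proxmox 8's
                                 #   newer kernels, so it may warn or can be omitted)

proxmox-boot-tool refresh        # apply the kernel command-line change
update-initramfs -u -k all       # bake the new modules into the initramfs
reboot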
Once you have IOMMU up and running, many devices can be passed through with no additional configuration. Things like storage controllers, network cards, and the like can be used by a virtual machine without any other settings required. Graphics cards often require a bit more special sauce to make work, and we'll cover that later in the video.

To start off, we're going to pass through my ASMedia 1064 storage controller, which will give a virtual machine direct access to not only the storage controller itself, but every hard drive attached to it. Passing through a device like this is basically like unplugging it from the Proxmox server and connecting it directly to a virtual machine: Proxmox will no longer be able to use the hardware directly, with the VM having complete control. For creating a TrueNAS virtual machine, passing through the storage controller is the best way to handle it, as ZFS requires bare-metal access to the disks in a pool. And for those who are about to say you shouldn't virtualize TrueNAS: that advice is more than a decade out of date, from long before direct memory access virtualization and hardware passthrough became commonplace in server hardware, so please stop repeating it.

To get started, I'm going to create a very quick virtual machine for TrueNAS, with the current version being 13.0-U5.3. Since we intend to pass through a device, you want to make sure the machine type matches your host type, that is, legacy versus UEFI; I covered this in my previous video's section on the System tab if you need a quick refresher. Again, my system is an EFI-based system, so we'll use a q35 machine type with an OVMF BIOS. For CPU I'm going to allocate four cores, and leave the CPU type at the default. For memory we're going to allocate 16 GB of RAM, but since we're passing through hardware, there's an extra consideration here: because hardware passthrough is entirely memory-address based, you'll need to disable memory ballooning for it to work. The two technologies just aren't able to work together, as memory address locations need to be hardcoded for hardware access, and ballooning makes addresses dynamic inside virtual memory.

With the virtual machine created, I'm actually going to boot it up and install TrueNAS before passing through my storage controller. No particular reason for this; it's just the order I prefer to do this operation in. Once you have TrueNAS installed on the virtual machine, shut down the VM and head back to the Proxmox web GUI. Click on the TrueNAS VM, go to the Hardware tab, and then click on Add > PCI Device. There are two main sections of this window: one for Mapped Devices, meaning devices that have sharable resources, like certain video cards and network cards, and one for Raw Devices, meaning a PCI Express device as a whole. Click on the Raw Devices bubble, and then in the pull-down menu find your storage controller. Again, I'm using an ASMedia controller, which is PCI device 05:00 in this system. Once you've selected your device to pass through, go ahead and click Add.

To verify this is all working, I'm going to open the TrueNAS web GUI. Once I've logged in, I can click on the Storage tab and then on Disks, and if the gods are smiling on me, there should be eight 6 TB drives visible. Another surefire way to tell this is working is to open a shell and type in lspci to list all PCI devices present. Sure enough, down near the bottom is the ASMedia 1064 controller. With that, the TrueNAS VM has 100% control and direct bare-metal access to the drive controller and all of the attached hard disks, meaning TrueNAS and ZFS have everything they need to work perfectly inside this virtual machine.

With that all up and running, let's switch gears and talk about graphics cards, specifically passing through a graphics card as a whole, because this can add a little wrinkle to the equation. As graphics devices can be used by even the Linux shell, they often have kernel modules loaded and active which prevent a virtual machine from taking control. While I've covered partitioning your NVIDIA graphics card inside of Proxmox in great depth before, today we're only going to focus on single-device passthrough, and the same instructions can work for NVIDIA, AMD, or Intel Arc based GPUs. For today's example, I'm going to be passing through an NVIDIA Quadro P1000 to an Ubuntu VM, a very common setup for media servers like Plex or Jellyfin for media encoding.

For starters, you cannot pass through your primary display source, that is, the video source you're using with your Proxmox server. In my case, the Intel i9-11900H has integrated graphics, so I need to set the integrated GPU as the primary video source, not the NVIDIA graphics card. Once you have that configured, as well as your graphics card installed, go ahead and boot up into Proxmox, and we can make sure it's configured properly to be passed through to a VM.

Open up the shell and type in lspci to list all PCI devices on your system, then locate the graphics card you want to pass through, in my case, again, the NVIDIA Quadro P1000, listed as device 01:00. We're going to need a couple pieces of information for the next steps, starting with the exact PCI ID of the graphics card, so we can disable its features on the host. This is a hexadecimal value and is often unique to specific models of graphics cards. Our GPU is on PCI slot 01:00, so enter the command lspci -n -s, followed by the device ID, then -v. This will display in detail everything Proxmox knows about this device. Starting at the top, you can find the hexadecimal identification for the card; in my case it's 10de:1cb1. Also take note of the kernel modules that are in use here: in this case there's nvidiafb along with nouveau, and we'll deal with those momentarily.

Now, do keep in mind: if you had, for example, two identical graphics cards, in this case two Quadro P1000s, and you wanted the host to use one of the cards and a virtual machine to use the other via passthrough, that's actually not possible, as you need to disable video output from all matching PCI IDs. It's one of the only gotchas with this method, and it shouldn't be an issue for most people, but it is something to keep in mind.

To prevent the card from producing any display output in Proxmox, we need to add the hex ID of the card to modprobe with the flag disable_vga. That can be done by copying and pasting the line from my written instructions and replacing the pound signs with your card's hex ID. To finish up, we also need to make sure there are no active kernel modules trying to use the card for anything on the host system. My P1000 is using nouveau right now, which is an open-source community replacement for the NVIDIA driver built into the Linux kernel, as well as nvidiafb. Adding both nouveau and nvidia to modprobe with the blacklist flag should prevent them from loading, and free up our GPU to enable passthrough. Once you've entered all of these lines of text, you'll want to run update-initramfs again and then reboot to apply your changes.
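Condensed, the host-side isolation looks something like the sketch below. The identification commands and the disable_vga/blacklist steps are the ones described above; the exact file names under /etc/modprobe.d/ are my own choice (any .conf file there works), and 10de:1cb1 is the P1000's ID from this system:

lspci                                    # locate the GPU; mine is at 01:00.0
lspci -n -s 01:00 -v                     # note the vendor:device ID (here 10de:1cb1)
                                         # and the "Kernel modules:" line (nouveau, nvidiafb)

# Have vfio-pci claim the card and disable its VGA output on the host
echo "options vfio-pci ids=10de:1cb1 disable_vga=1" > /etc/modprobe.d/vfio.conf

# Keep the host drivers away from the card
echo "blacklist nouveau"  >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidiafb" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia"   >> /etc/modprobe.d/blacklist.conf

update-initramfs -u -k all               # rebuild the initramfs
reboot

lspci -n -s 01:00 -v                     # after reboot: "Kernel driver in use: vfio-pci"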
Once Proxmox is back up, you can verify everything is working as expected by again typing in lspci -n -s, followed by your device ID, then -v, to see your graphics card; but now it should say vfio-pci as the active kernel driver. If this is what you see, you should be able to pass through your GPU. Just like before, click on the virtual machine you'd like to add the card to, go down to the Hardware tab, and click Add > PCI Device. In this case we're going to select the NVIDIA P1000, but once you have it selected, make sure you check the All Functions box under the device pull-down. Graphics cards often have multiple unique devices on them in addition to the graphics processor itself; attached sub-devices like the NVIDIA audio controller or the USB-C controller need to be passed through as well to avoid conflicts with the host system.

Now, unfortunately, there's a bit of bad news to all this. You may have done everything correctly in this section. You can spend hours troubleshooting, double-checking, going line by line, character by character over your blacklist, VFIO, and command-line files, only for everything to be 100% correct, and this still might not work for you. On my ERYING motherboard, while I am able to pass through the ASMedia 1064 card in the x1 slot, and even some x4 and x8 PCI Express cards in the top slot, the Quadro P1000, or any other graphics card that I plugged in, simply will not work for me. Seriously, I tried a P1000, I tried my P400, my P620, I tried an M2000, I even went as far as installing a Radeon Pro WX 2100, which is an x4 graphics card, still with absolutely no success. GPU passthrough has so many possible ways it can fail, and as I mentioned, each motherboard can implement features in different ways. This can be their arrangement of PCI Express lanes, grouping of devices and ports, or any one of a thousand hardware combinations that can cause the system to just not work as expected. As for this PCI Express passthrough solution in Proxmox 8.0, it works on three out of the four PCs that I've tried it on, with the one failure being this ERYING motherboard. But I've also read some forum posts of others getting this exact same board working with GPU passthrough. It might be that I'm utilizing both NVMe slots for storage and that's causing it to fail, but at the end of the day, it fails all the same. For my use case it's not a huge deal, as I hadn't planned on running a graphics card in here.
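For troubleshooting that kind of grouping issue, one common diagnostic (a generic script, not something shown in the video) is to list every IOMMU group and the devices inside it; a device can generally only be passed through cleanly if everything sharing its group goes with it:

#!/bin/bash
# List each IOMMU group and the devices inside it
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done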
Instead, I plan on running a 10-gigabit network card in that slot long-term, an upgrade I'll likely do as soon as this tutorial video is done.

PCI Express passthrough via IOMMU can transform a useful server into an essential one, bringing more functionality and power to virtual machines through the use of real hardware. Rather than running a virtual machine host on one system and a NAS server full of drives on another, we've now got the functionality of both in this tiny little desktop unit, with TrueNAS accessing all the drives directly inside of a virtual machine, and Proxmox able to manage all the virtual machines themselves.

Now, one question I get asked in every single video about either Proxmox or TrueNAS comes from people trying to use one or the other as a magic bullet, a be-all end-all solution for all of your server needs. If I install TrueNAS on a virtual machine, I'll get the "why did you even bother installing Proxmox, can't you just run all your virtual machines with TrueNAS?" Or, conversely, "why did you bother installing TrueNAS in a VM, can't you just run a file server off Proxmox, since it's just Linux and already has ZFS on it?" They are both great operating systems, but my answer for why I haven't just chosen a camp and stuck with it, instead of installing both, is pretty simple. Proxmox is built from the ground up to be a hypervisor. It doesn't have any GUI or features built around ZFS pool creation, file system management, user management, permission allocation, sync tasks, backups, or anything else that I'd use a file server for. It is just Linux, and it does have ZFS, but it's not the right tool for the job unless you like managing your server entirely by hand through the terminal. Likewise, while TrueNAS does have some virtualization features, and they're getting better with every single release, that's not the main focus of TrueNAS either. It doesn't have multi-server management for easy migrations, fine-tuned resource allocation, virtual switch management, or anything else that I'd use a virtual machine host for. TrueNAS is a great file server, but it's just not robust enough to be a one-stop shop for all of your server and file storage needs. Besides, running a home lab is so we can experiment and learn, and part of the fun of running multiple complex systems like this is making everything work for you.

Hopefully, after watching this video, even with the minor failure of my hardware, you'll have everything you need to pass through PCI devices to virtual machines inside Proxmox. If you want to know more about my specific setup, or think you might have a solution to why my GPU passthrough is failing on the ERYING 11900H motherboard, I'll post my dmesg output and error messages down in the video description; they'll be right below the links to the written documentation to follow for this video. On your way down there, make sure to drop this video a like, and subscribe to Craft Computing if you haven't done so already. As always, if you're interested in any of the hardware that I used in this server, I'll have all the links down in the video description, as well as where you can find the awesome glassware that I use in this video, available on craftcomputing.store.
That's going to do it for me in this one. Thank you all so much for watching, and as always, I will see you in the next video. Cheers, guys.

Beer for today is a West Coast classic: it is Sierra Nevada, out of Chico, California, the Atomic Torpedo Imperial IPA at 9.2%. Now, Sierra Nevada has a number of different Torpedo variants; this is the Imperial, it is the heavy hitter, it is the "you want a West Coast IPA, we'll give you a West Coast IPA." I love the Torpedo series of IPAs. They're West Coast, but instead of being the very dry, hop-forward form of an IPA, they're a little bit more wet; they've got a little bit more body to them. And especially the Imperial IPA, it kind of leans on some more lager and pilsner roots, rather than trying to be this massive hop bomb. It actually reminds me a lot of, like, sourdough French bread; it's got that kind of complexity to it. It's just good. It's just a good, fantastic, classic example of a West Coast IPA. Cheers.
Info
Channel: Craft Computing
Views: 81,766
Id: _hOBAGKLQkI
Length: 22min 31sec (1351 seconds)
Published: Fri Oct 06 2023