Unraid 6.12 - All the new features including ZFS and much more

Captions
Hi guys, welcome to another video. This one has been a long time coming: today we're going to be looking at Unraid 6.12. As of making this video the stable version hasn't quite been released yet — I wouldn't be surprised if by the time I upload this that's no longer the case and the stable version is available. I'm making this video using version 6.12.0-rc5, and I've waited a while to make it because additional features often get added during the RC development, so I wanted to record as close to the stable version as possible. To be honest I'm glad I did, because some quite cool things came into rc5 that weren't in previous versions.

OK, so the first thing to do, I think, is to go across to my test server and take a look at the update process from Unraid 6.11 to Unraid 6.12. The first thing I like to do before any update is to update everything that's already on the server. Although it's not strictly necessary, I like to update the Docker containers first, so I go across to the Docker tab, click Check for Updates, and if any updates are available (for me there aren't) I update all of the containers. More important is to update the plugins before we do the OS update. On the Plugins tab, at the top we can click Check for Updates, and we can see two of my plugins require updates, so I'm going to click Update All Plugins.

With all of the plugins now updated, I always like to do a bit of housecleaning at this point and remove any plugins I no longer need. I don't have an AMD GPU in this server anymore, so I may as well remove this, and I don't need the Wake-on-LAN plugin, so I'm going to remove that as well. As you can see at the bottom here, I've got two ZFS plugins already installed. As you know, Unraid 6.12 has full ZFS support, so I'm not going to need these plugins anymore. You might be tempted to remove any ZFS plugins before updating, but I really recommend not to, because if you've installed ZFS plugins on your server they're probably in use right now, and the update process will remove any incompatible ZFS plugins anyway — so let's let it do its job. There's another plugin that will be removed automatically too: after updating, you'll notice this plugin here will have gone, because it's only for Unraid 6.11. It fixed an issue where, on some Docker containers, you sometimes couldn't see whether an update was available. So why am I mentioning it if it's only needed in 6.11 and it's removed automatically? Well, some people may not have used the patch because they manually set up a script to do what this patch does — so if you're one of those people, please make sure to remove the script before updating.

OK, so with that done, let's go across to the Main tab, click onto Flash, and click Flash Backup to make a complete backup of the Unraid flash drive as it is now. This can take a little while, so just be patient.
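As a side note, the flash drive is mounted at /boot, so if you ever want a quick extra copy alongside the zip that Flash Backup gives you, you can also copy it to a share from the terminal. This is just a rough manual alternative, and the destination share here is only an example — adjust the path to whatever you actually have:

# Rough manual copy of the flash drive; "backups" is an example share name
rsync -a /boot/ /mnt/user/backups/flash-$(date +%Y%m%d)/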
Once you've downloaded your flash backup, we can go across and update the OS. To do that, go to Tools and click Update OS. On the update page, if it doesn't say there's a newer version available, just click Check for Updates and it will do a check. At the moment, on what's called the Stable branch, Unraid 6.12 isn't available, but if you want to try out Unraid 6.12 before it goes stable, just switch the branch here from Stable to Next, and we can see that Unraid 6.12.0-rc5 is available to install. Please do remember that this isn't the stable version, so if your server is mission critical I'd recommend not installing Unraid 6.12 until it's available here on the Stable branch. For me, I'm quite happy to install rc5, so I'm going to click Install. You'll notice at the bottom there's a button that says Close — make sure not to click it; we need to wait for the whole process to finish, after which the button will change to Done. OK, the button says Done, so let's click that now. If I click back onto the Main tab, we can see it tells us to reboot to upgrade the OS. Some of you might see a little pop-up window on the right-hand side saying there's a process running in the background — if you do, make sure to wait until it says it's safe to reboot.

I'm not actually going to reboot right now. First I'm going to look at a few things in Unraid 6.11.5 so we can compare them when we reboot into 6.12. The first thing I'll do is go to the dashboard. You can see there are little arrows where I can minimise the tiles, and I can move things up and down, but I can't move them left or right — so when we boot into Unraid 6.12 we'll see some nice changes on the dashboard. Let's go and have a look at Shares now. If we look here under Cache, we can see "prefer" for this appdata share, meaning it will live on the cache drive whenever it can. The data share here is set to "cache: yes", which means new files are put onto the cache and then mover moves them onto the array. Looking at the settings here, this is the structure in 6.11 for how files are actually placed in Unraid — whether they go to the cache first or straight to the array, and basically how mover works. In 6.12 all of this has changed, so we'll be looking at that in a moment. Also on Shares, if we look on the right-hand side, the View button for browsing a share's files is right over here, and I've always found it awkward to reach, especially for a share halfway down the list — this, we'll find, has also changed in 6.12.

OK, I'm almost ready to reboot, but first let's see what the kernel version is in 6.11: it's Linux 5.19.17. While I'm up here — I don't have any notifications at the moment, but the way notifications are handled in 6.12 is slightly different, so just remember what this area looks like. Let's quickly check what Docker version we're running in 6.11: it's 20.10.21. And in the VM manager, the libvirt version is 8.7.0 and QEMU is 7.1.0. The QEMU version is what we see in the machine type when we create a VM — the 7.1 after i440fx or the 7.1 after Q35. And whilst we're in the VM section, if we scroll down and look at the network models available in 6.11, there are only four to select from — in 6.12 there's one more. OK, so I think that's enough looking at 6.11.
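As a quick aside, if you prefer the terminal, these standard commands should give you the same version information before you upgrade:

uname -r                                        # Linux kernel version
docker version --format '{{.Server.Version}}'   # Docker engine version
virsh version                                    # libvirt and QEMU versions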
Let's reboot and go into 6.12. OK, so let's log back in — this server isn't set to start up again automatically, so I'm going to start it up now. The first thing I want to bring to your attention: if you remember, I said notifications are handled slightly differently. If we look in the top right-hand corner we've now got this little bell, and if I click on it I can see the alerts, warnings and notices. Under notices I'm going to click View, and we can see there's a new version of NerdTools available, which actually came out after I updated all of the plugins before doing the OS update. If I go to alerts and click View, we can see that the ZFS Companion plugin was removed as it's incompatible with this version of Unraid — so that's just telling us the plugin was removed due to incompatibility. Let's go to the Plugins tab again; I'll quickly update this plugin, and scrolling down to the bottom we can see that, yes, the ZFS Companion plugin has been removed, and the ZFS plugin itself was removed as well.

OK, so let's go back to the Main tab. I'm sure everyone's wanting to see the native ZFS in Unraid, but let's wait a moment. First, let's go to the dashboard. If you remember, before we could just come here and drag things around; now what we have to do is click this little padlock at the top to unlock the dashboard, and that then lets us drag things and put them where we want — so it's really nice that we can fully customise the dashboard now. For example, if I don't want this tile, I can just close it and get rid of it entirely, and once I've got everything how I want it, I click the lock again so everything's locked in and we can't accidentally move things around. If I go to the Docker page, the padlock is there as well, so if I unlock it I can move the containers into the order I want, and it's the same on the VMs tab — I don't have any VMs installed on this server, but the process would be the same: unlock it and move them into the order you want.

OK, now let's have a look at the Linux kernel version: it's now Linux 6.1.27. I'm sure people who have Arc GPUs are wishing we had the 6.2 kernel, which gives official support for Arc GPUs for things like transcoding in Unraid, but remember, in this video I'm using rc5, not the final stable version. Maybe 6.2 will come in the stable release — I don't know for sure, but I think there's definitely a chance, because 6.2 is now officially stable (even though it's not an LTS kernel), and I think last month, in April, ZFS gained support for kernel 6.2 as well — which obviously matters, because Unraid can now only use kernels that have full ZFS support. So you never know, in the first stable version we might have full support for Arc GPUs — let's hope so.

Let's go across to Settings, then Docker, and we can see Docker is 20.10.23. For VMs it's still libvirt 8.7.0 and QEMU 7.1.0, so no changes there. But if you remember the network models: scrolling down, we've now got virtio-net, which is the default, virtio, e1000, then this new one here, RTL8139 — an older network card, but great if you want to run things like I do (I know it's a little bit sad), such as Windows 98 and Windows 95 — and obviously we've still got vmxnet3. So that's one extra virtual network card. There have also been some changes under the hood: there's been an update to the memory backing handling for virtiofs, so there's no editing of templates needed if you've got an existing VM — that's all handled automatically now, which is nice. And looking at what else is on this list — oh yes, we can now add serial numbers to vdisks, which is quite cool. Why would we want a serial number? Well, the vdisk can then appear as if it has a real serial number, just like a physical disk would.
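Just to illustrate — this is a rough sketch rather than something pulled from a real template, and the path and serial string here are placeholders — the serial simply ends up as one extra element in the disk section of the VM's XML:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/domains/MyVM/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
  <serial>VDISK-0001</serial>
</disk>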
Being able to do this in the template rather than having to edit the XML is a really nice touch. It would certainly have made it easier for me to make my April Fools' Day video a couple of years ago, where I created a VM of an Unraid server with a two-petabyte array: "OK, so as Phil doesn't take very long to have his lunch, let's add a bunch of them to the array now... I've got to be quick... well, there's the first ten — that's brought it to a petabyte." It was fun making that video, but it would certainly have been easier if I could have added them straight into the template. We talked earlier about the lock for sorting VMs into whatever order you want, but what I haven't mentioned is that copy and paste is now an option for virtual consoles, so we can enable that for VNC and so on. Otherwise there's nothing else very interesting in the VM section: we've got OVMF stable 202302, and everything else is really just bug fixes.

OK, so moving on, let's click onto the Main tab. I think now is the perfect time to take a look at ZFS in Unraid. We can see here I've only got one disk in the main array, I've got a btrfs drive here which is my cache, and I've got some unassigned disks which we can use to make some ZFS pools. One thing I've just noticed: I pre-cleared this disk earlier (it's the one I'm going to put into the array), but it doesn't actually say it's been pre-cleared. So just a little tip: if you click Start Preclear and you have a disk that's already been pre-cleared, under Operation just click Verify Signature and click Start, and after a couple of seconds, if the signature is found on the disk, it will report the disk as pre-cleared. Great.

What I'm going to do now is stop the array, and with the array stopped we can make changes to the configuration and create our first Z pool. You can see I've got three 4TB drives here which I'm going to make a Z pool with, so I'm going to go up here and click Add Pool. We need to say how many drives we want in the pool — I've got those three 4TB drives, so obviously I'm going to choose three — and now, very importantly, you've got to give it a cool name; you can't just have a boring name. I think I'll call this one "cyberflux". OK, so now we assign the drives. With a Z pool it's important that all the drives are the same size — well, they don't actually have to be, but the problem with mixing sizes is that the pool is limited by the size of the smallest disk. What I mean is: at the moment I've got three 4TB drives, so my smallest disk is 4TB, and in ZFS the usable-space formula for single parity, or RAID-Z1, is the smallest drive size multiplied by the number of drives minus one. For me the smallest drive is 4TB and I've got three drives; three minus one is two, and two times 4TB gives me 8TB of usable space. Now look at what would happen if I added a 1TB drive to this pool: the smallest drive would then be 1TB, I'd have four drives, minus one for RAID-Z1 gives three, and three times 1TB means I'd only have 3TB of usable space. Right, so with the drives assigned, I'm going to click on the name of the pool, and for the file system I'm going to set this to ZFS.
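For what it's worth, everything the GUI is about to do maps onto an ordinary ZFS pool. A minimal hand-rolled sketch of a three-drive RAID-Z1 pool like the one I'll choose in a moment would look something like this — the device names are placeholders, and on Unraid you'd let the GUI handle the partitioning and creation for you:

# Placeholder device names - don't run this against disks you care about
zpool create cyberflux raidz1 /dev/sdb /dev/sdc /dev/sdd
# RAID-Z1 usable space = (number of drives - 1) x smallest drive:
#   3 x 4TB drives             -> (3 - 1) x 4TB = 8TB usable
#   add a 1TB drive (4 total)  -> (4 - 1) x 1TB = 3TB usable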
Please note that "ZFS - encrypted" is also an option: it creates encrypted LUKS disks and then puts the ZFS file system on top. I'm going to use regular ZFS. With ZFS chosen you can see we've got a few options. The first is what type of Z pool we want. RAID 0, in my opinion, is not a good idea: it stripes the data across all the disks with no redundancy, so you'd only want that for something you really didn't care about at all. We could choose mirror, which would mirror the data across those three disks, but what I'm going to do is use a RAID-Z pool. With RAID-Z, one drive's worth of capacity goes to parity and the rest holds data, but it's slightly different from what we're used to in Unraid with a single parity disk: yes, one disk's worth of parity is used up, but the parity is striped across all three of the drives.

Once we've decided the type of Z pool we want, we can choose whether we want compression or not — it can simply be turned on or off. By default, when it's on, it uses the ZFS default compression algorithm, LZ4, and in future versions of Unraid we'll be able to choose additional compression algorithms in the GUI as well. A question I hear asked a lot is: does enabling compression slow down reads and writes on the server? Well, compression can actually speed up data access and writes, for this reason: there's reduced disk I/O when data is compressed, because it takes up less physical space on the disk. For example, if you had some data that was 180GB uncompressed and, say, 120GB compressed, it's physically taking up less space on the disk, so reading and writing that compressed data can significantly speed up disk I/O — this is especially true on slower mechanical hard drives. However, there are use cases where compression may not be a good idea. If the data is already highly compressed — think of something like H.265 video — the additional compression isn't really going to yield any extra space savings, so it may in fact slow down data access and writes, because of the time spent trying to compress and decompress data while gaining no reduction in I/O, since the data was already compressed. So when choosing whether to compress a pool or not, think about what kind of data you're going to put on there, and use what works best for your use case. Anyway, today I'm going to enable compression.

The next setting here is auto trim, which by default is set to on. Auto trim really only matters when we're using SSDs, so I'm going to set it to off. And "enable user share assignment" is basically asking: do we want to be able to create shares on this Z pool? Most of the time you probably will, so with that done I'm going to click Apply and Done, and that Z pool is now prepared.

So I've got this last disk here, and one thing I think is quite overlooked in Unraid is that you can actually add disks to the existing Unraid array and format them as ZFS as well. When we're adding a single ZFS disk, whether to the array or to a pool, there's obviously no option for what type of Z pool we're creating — it's basically just a one-disk pool — and we choose whether we want compression. Again I'm going to set this to yes, but please think about what you're going to put on the disk: if you're going to have movies on there, like I say, don't bother trying to compress things that are already compressed — that's my opinion, anyway.
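Incidentally — and this is just an aside, on the assumption that these GUI toggles correspond to the standard ZFS properties — you can check or change compression and trim behaviour from the terminal as well:

zfs set compression=lz4 cyberflux        # or compression=off
zfs get compression,compressratio cyberflux
zpool set autotrim=off cyberflux         # auto trim really only matters for SSDs
zpool get autotrim cyberflux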
With that done, we can click Apply. Now, while I'm here, one thing I want to show you: there's an additional feature for btrfs as well. If I click onto the cache drive, we can see we can now use compression on btrfs drives too, which is pretty cool. One thing I should point out, though: if you turn compression on for an existing btrfs drive, it's not going to recompress all the data that's already there — it will only compress new data written from that point forward. File system auto trim has been added for btrfs as well; I'll talk more about the difference between file system trim and OS trim when we look at the XFS improvements later on, because it's been added there too.

OK, so I've created those Z pools, so now let's start the array and format the drives. We can see the drives are formatted now: I've got the single ZFS disk in the Unraid array — so now I've got an XFS disk and a ZFS disk — and here's the RAID-Z1 pool that I created, with a total size of 7.8TB. If I click onto the name of the pool and scroll down, we can see three settings at the bottom: the critical disk threshold, which defaults to 90%, the warning threshold, set to 70%, and a minimum free space setting, which by default isn't set to anything. When a Z pool reaches around 80% capacity it can start to suffer from reduced performance due to things like fragmentation, and I think that's why we get the warning at 70% and critical at 90%. What I recommend is to set the minimum free space to 10%: that means Unraid shares won't be able to write to this pool unless it has more than 10% free space, which I think is a sensible safety measure.

Scrolling down here, we can scrub the ZFS pool. You may have seen the scrub feature on btrfs pools in Unraid in the past. Data scrubbing is basically a proactive, preventative maintenance event for the integrity of the data — think of it as a deep clean, or a full health check, of the entire Z pool. A scrub checks every single piece of data to make sure it's not corrupted, and it does this by comparing the data to a checksum that was created when the data was first written. If it finds anything wrong, and the pool has redundancy — like mine does here, because it's RAID-Z1 — then ZFS will automatically repair the corrupted data using the redundant copies. Scrubs can take quite a long time, especially on a large pool, and you can either schedule them to run automatically or manually initiate one by clicking this button. It's worth mentioning that ZFS also repairs data reactively: it checks data integrity whenever it reads data, just as you access your normal files, and if there's a problem and the checksum doesn't line up, it can automatically heal the file system. This, in my opinion, is one of the great advantages of ZFS — so for those of you who worry about bit rot, ZFS is the perfect file system. Personally, I'm going to leave the schedule disabled and manually run a scrub from time to time when I see fit.
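For reference, the same thing can be kicked off and watched from the terminal — this is roughly what the button does, using my pool name from earlier:

zpool scrub cyberflux
zpool status -v cyberflux    # shows scrub progress and any errors found or repaired
zpool scrub -s cyberflux     # cancels a running scrub if you need to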
Just scrolling back up, we can actually turn compression on and off here, but we can't do it while the array is started — we'd have to stop the array to turn compression off on the pool, and again it would only affect data written from that point forward; it wouldn't decompress the data that had already been written.

OK, to see the next improvements in 6.12, let's go across to the Shares tab. We can see here, under Storage, "Cache" and "Array", with the arrow pointing from the array to the cache. If I click onto the appdata share, we can see there's a conceptual change in Unraid for how cache pools are used and how storage is managed. Here we set the primary storage — basically where all new files are written. For my appdata it's set to the cache, but obviously I could change it to the other pool I've just created, or just to the array. Rather than having options like "use cache: only / prefer / yes / no", we now have this option called secondary storage, which can be set to Array or None. If I set it to None, we've only got primary storage, so this is the same as "use cache: only" in previous versions of Unraid — you can see mover doesn't do anything. If I set the secondary storage back to Array, the mover action is currently set to go from the array to the cache. What this means is that my primary storage is the cache, and only if the cache fills up will new files be written to the array; mover would then move those files from the array back to the cache if free space became available on the cache drive — the same as "use cache: prefer" in earlier versions of Unraid. We can also set the mover action to be from cache to array. I wouldn't use that for appdata, but what it does is: new files are written to the cache, and when mover runs it moves them from the cache onto the array, which is my secondary storage. One thing I'd love to see, to be honest, is for secondary storage to also include other pools — maybe that's something we'll see in the future; I really hope so. Anyway, I'm going to change this back so the mover action is array to cache, and click Apply. Going back to the Shares tab, we can see we can now view a share's files by clicking the view icon, which is now on the left, next to the share name. That's much better — when it used to be on the right-hand side it was very easy to get confused and click the wrong thing, so this makes it much easier, I think.

OK, so enough on this server — let's go across to my main server and have a look there. Here we are on my main server, and you can see I've got a few more drives. I've got one ZFS drive in my Unraid array. So why do I like this? Well, one, I like having compression — yes, I could have that with a btrfs drive — but what I really like about having a ZFS drive in my array is that I can use ZFS send to and from the main array, and to me that's really useful. What I can do is keep my appdata on a single NVMe drive here, so it's nice and fast, and although I don't have enough NVMe drives to make it a mirror, I can use ZFS send to replicate it onto disk 5.
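A minimal sketch of that replication idea — the pool and dataset names here are just examples, not necessarily what your pools and datasets are actually called — looks like this:

# Snapshot the appdata dataset on the NVMe pool, then send it to the array disk
zfs snapshot nvme/appdata@rep-2023-05-17
zfs send nvme/appdata@rep-2023-05-17 | zfs recv -F disk5/appdata
# Later runs can send just the changes between two snapshots:
#   zfs send -i @rep-2023-05-17 nvme/appdata@rep-2023-05-18 | zfs recv -F disk5/appdata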
And because this ZFS disk is inside the main Unraid array, it's protected by the parity of the array, and should it fail, Unraid would be able to rebuild the missing drive — so my data is protected. Obviously I don't get the speed advantage we have in a regular Z pool, like this RAID-Z1 pool I've got here made of regular SATA SSDs: with all of those working together, data is written to and read from them at the same time, so it's really fast. So what I'm going to do now is a quick test to show the difference in speed between reading and writing to this Z pool of three SATA SSDs compared to this single SATA SSD here, which is an XFS disk.

Talking of XFS, if I click onto this, there's been an improvement made to XFS in Unraid as well: for single-drive pools we've now got auto trim, which is enabled by default, so that's nice to have. You may be wondering what the point of auto trim is when the OS supports trim commands anyway. Well, as of Linux kernel 5.10, XFS includes built-in support for automatic trim operations, so enabling automatic trim in XFS allows the file system to manage trim operations independently. With this feature enabled, XFS can issue trim commands based on its internal policies, such as when files are deleted or when space is reclaimed, theoretically improving the SSD's performance and lifespan — so it's definitely worth enabling.

OK, so let's get back on track and do our speed tests. I'm going to go to the Docker tab, and I've got this container here which I've mapped to the Prometheus pool (that's the Z pool) and to the single-drive SATA SSD. It's just a simple Debian container with fio (or "fee-oh" — I'm not sure how it's pronounced) installed, which lets us run various tests. I'm going to start it up and open a console window, and first I'm going to test the XFS location. This first test simulates the workload of reading, say, video files from that location. OK, so that test is done and we can see the results: I'm getting a read speed of about 540MB/s, which is about the maximum I'm going to get out of a single SATA SSD. Now I'm going to do the same, but this time on the Z pool. OK, on the read test I'm getting a much better speed — almost double what I got from the single drive, around 900MB/s. But then, we all knew that reading from three drives was going to be faster.
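This isn't the exact job I ran, but a simple sequential read test with fio looks something like this — the directory paths are just whatever you've mapped into the container:

fio --name=seqread --directory=/mnt/prometheus/test --rw=read --bs=1M --size=4G \
    --numjobs=1 --ioengine=libaio --direct=1 --group_reporting
# Then run it again with --directory pointed at the single XFS SSD and compare
# the reported read bandwidth.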
Anyway, the last thing we're going to look at in this video is how to import an existing Z pool that was made elsewhere. I've got a Z pool that I had on TrueNAS, so I'm going to stop the array. OK, so with the array stopped, what I'm going to do now is add four 8TB drives to my Unraid server that used to be in TrueNAS. It's really best practice to shut the server down, but I know I can hot-swap on this machine. OK, so the hard drives are connected. We can tell these came from a TrueNAS machine because of how the drives are partitioned — if we quickly take a look at, say, this 12TB drive here: if I unassign it, it pops down as an unassigned device, which is this one here, and we can see that the Z pools Unraid creates only have a single partition. Anyway, I'd better pop that back in. So how do we add this existing Z pool? One thing I've got to be careful of is that these are all 8TB Toshiba drives, but this one here is an XFS disk I record CCTV footage on, so I'm going to be careful not to accidentally add that. All we need to do is click Add Pool, give it a name — I'm going to call it Battlestar, because that's what it was called before — and there are four drives, so I'm going to choose four slots and click Add. Now I just need to assign the drives, so let's quickly check which one the CCTV drive is — OK, it's sdi, so I just won't assign that one. So those are the four correct drives assigned. Now all we have to do is leave the file system type set to auto, and that's it — with that done, we just start up the array, and we can see the pool has been imported. If I click onto it and scroll down, we can see that it's RAID-Z1, and it really is just that easy.

Anyway guys, I'm going to wrap up this video here. The next video is going to show how to easily convert your existing appdata so each folder is a dataset, and then how to use snapshots on those datasets, so should a container go wrong it's really easy to roll back. It's getting late here, so it's time for me to go, but I really hope you enjoyed the video. If you did, please share it with anyone you think will find it interesting, and if you liked it, well, hit the like button — as all YouTubers say, it helps the algorithm, apparently. Whatever you're up to for the rest of the day, I hope it's good, and I'll catch you in that next video.
Info
Channel: Spaceinvader One
Views: 78,581
Id: rEAfX75nReg
Length: 32min 26sec (1946 seconds)
Published: Wed May 17 2023