BTRFS Features EVERY SYNOLOGY should be using

Captions
All right, how's it going, y'all? BTRFS is one of the best features of a Synology NAS, and these are some of the best features you really want to use any time you're setting up a Synology NAS. Not all of these features will apply to everybody, but I can guarantee that every single NAS user will benefit from at least one of them if they have a BTRFS file system on a Synology NAS, because of how powerful the BTRFS file system is and how well it's integrated into Synology DSM.

Before we delve into all these great features Synology has with BTRFS, we first need a very brief primer on what BTRFS is. BTRFS is a file system, and that's pretty simple: it takes blocks of data from a hard drive or RAID and combines them into a system that a computer can read and write files from. But what makes BTRFS special is that it's what's known as a copy-on-write file system, or "CoW" as some people say. What a copy-on-write file system does is this: whenever you're saving data and you change a bit of a file, instead of actually deleting that section of data on the disk and then rewriting over it, it just stops caring about the old data, writes the change to a new section, and then updates the pointer to where that data is. Now, that does not seem that interesting, but it allows you to do insanely powerful things, and it's the basis for what both ZFS and BTRFS do. There are a bunch of other benefits, but two foundational pieces explain why copy-on-write is so great.

The first, by far the most important, and the reason these file systems were developed, is data consistency. In a traditional file system, if you have an unexpected crash or power loss as a file is being saved, then you have a portion of that file that was deleted but has not been updated to the new part, and so that means you have now
corrupted that file. That can be horrible: think about a database holding credit card records, or anything like that. It can be crucial to make sure your data is consistent, because you would much rather have the file as it was half a second before you changed it than a corrupted file. A copy-on-write file system that runs into this is still consistent, because of the way it works: a new block of data is written, and until that new block is complete, the old block is still the one being referenced. So if you're writing that new portion and the hard drive crashes, it will just boot up and never know about the new file; the new file will simply never have existed, and the drive will act as if the old file never got changed. That's what makes it consistent. Only once the data has been fully written does it update the pointer, and only then does it say, "okay, we're good, we know this is done," so the next time you want that file, it's actually using the new block. That pointer update is almost instantaneous; theoretically it could crash during that window, but that's very rare, and there are safeguards on top to make sure it does not happen. That failure mode is known as a write hole, but it's outside the scope of this and far less likely than the scenario I described. That's why BTRFS and ZFS are used where consistent data matters. They also have a ton of features around this, because they are designed for safe data rather than for being the fastest file system: both are designed to be very safe with your data and to ensure files are not corrupted.

The second reason copy-on-write file systems are awesome is that you can have a thing called snapshots. Snapshots are an insanely powerful feature of a copy-on-write file system that allows you to recreate your entire file system
exactly how it was, without taking up much space, and to do it in line with the current file system. You can look at your files exactly how they were a week ago, yesterday, and today, all within the exact same file system, without storing duplicated data. I want to talk very quickly about how this works. The way it works is that the file system just keeps the old pointers, and the old data they reference, off to the side, and those pointers take up next to no space at all. Effectively, if you have a thousand snapshots but the files haven't changed, they will not take up any extra data; the only time they take up space is when there are changes. Think about the example I just gave, where you change a single part of a file: the old part just stays on disk, and you keep a reference to all the pointers that were there at that time. So if I want to see exactly how a file looked at a specific moment, all the file system has to do is store the pointers, which are very small, and make sure not to free any data blocks that are still being referenced. That is why snapshots can be so lightweight with BTRFS: you can take snapshots every single day, and if the files don't change, your space usage does not increase at all, yet you're able to roll back to any earlier state very, very easily. It is such a powerful tool, I highly recommend it, and it's going to be on this list.

All right, so that's the super brief primer on BTRFS. One other thing I did want to mention: if you read the BTRFS docs, they say not to use RAID 5 or 6 with BTRFS. That is because BTRFS actually has its own volume manager as well as its own file system, much the same way ZFS works, except in BTRFS it's optional. This is a super high-level run over the topic, but essentially there's a write hole in the current BTRFS RAID 5/6 implementation. So a lot of people ask, why would you use BTRFS and RAID on Synology? The way Synology has gotten around this is that Synology does not use BTRFS RAID.
Instead, Synology uses the much more common Linux mdadm RAID and presents that to BTRFS. Because of that, the issue with BTRFS RAID does not happen at all: Synology is not using BTRFS RAID, only the BTRFS file system. I've glossed over that before and had that comment a few different times, so I just want to clarify it quickly, because you will talk to people who go, "no, no, don't use BTRFS RAID, what are you doing?" But it is in fact safe, because mdadm is a very tried-and-true RAID implementation and it's been tested very well: no issues, no write holes there. Technically there is always a write hole, but it's very, very well prevented in 99.99% of cases. I digress.

All right, so now, finally, on to the list everybody's been waiting for and that you clicked on this video for: the best BTRFS features that pretty much every single Synology user should be using. The first ones everybody should use, and after that it gets more case-specific. Number zero is just BTRFS itself. Just by selecting BTRFS as your file system, you get that consistency I was talking about earlier, which makes it so powerful and so much less likely to corrupt. Even if you don't use any of the other features, BTRFS will be working behind the scenes to keep your data consistent and non-corrupted. It is great for that, so even if you don't do anything else, it's there working for you, and you should use it every time. The exception is if you've got a ton of security cameras and that's your only use case, in which case you can actually use two different volumes: one of them BTRFS for all your files, and a smaller ext4 one for your security cameras. Other than that, everybody should be using BTRFS, and though it's not available on all NASes, I do recommend purchasing a NAS with BTRFS on it; it's that powerful. All right, so given you're already using BTRFS, great: now let's talk about number one.
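The copy-on-write and snapshot mechanics from the primer above can be sketched in a few lines. This is a toy model for illustration only, not BTRFS's real on-disk format; all the names and block ids here are invented:

```python
# Toy model of copy-on-write plus snapshots (illustration only; not how
# BTRFS actually lays data out on disk).

class CowFs:
    def __init__(self):
        self.blocks = {}     # block id -> data actually "on disk"
        self.files = {}      # filename -> list of block ids (the pointers)
        self.snapshots = []  # a snapshot is just a saved copy of the pointers
        self._next_id = 0

    def _write_block(self, data):
        self._next_id += 1
        self.blocks[self._next_id] = data
        return self._next_id

    def create(self, name, chunks):
        self.files[name] = [self._write_block(c) for c in chunks]

    def snapshot(self):
        # Near-zero cost: only pointers are copied, never the data.
        self.snapshots.append({n: list(p) for n, p in self.files.items()})

    def overwrite_chunk(self, name, index, data):
        # CoW: write to a fresh block, then flip the pointer. The old block
        # survives for as long as any snapshot still references it.
        self.files[name][index] = self._write_block(data)
        self._reclaim()

    def _reclaim(self):
        # Space reclamation: free blocks that nothing references any more.
        live = {b for ptrs in self.files.values() for b in ptrs}
        for snap in self.snapshots:
            live |= {b for ptrs in snap.values() for b in ptrs}
        self.blocks = {i: d for i, d in self.blocks.items() if i in live}

fs = CowFs()
fs.create("doc", ["aaaa", "bbbb", "cccc"])  # 3 data blocks on disk
fs.snapshot()
fs.snapshot()
print(len(fs.blocks))             # -> 3: two snapshots cost no data blocks
fs.overwrite_chunk("doc", 1, "BBBB")
print(len(fs.blocks))             # -> 4: space grows only when data changes
print(fs.snapshots[0]["doc"])     # -> [1, 2, 3]: old pointers still valid
```

Deleting the snapshots and running reclamation in this toy model frees the old block immediately, which is exactly the "remove the snapshots and get your storage back" behavior described for the real thing.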
Number one: BTRFS snapshots. BTRFS snapshots are built-in ransomware protection. It's not a backup, but it is the ability to roll back from issues. BTRFS snapshots come through a package you can download from the Package Center called Snapshot Replication. I've done a video on Snapshot Replication, and I'll probably do a much more in-depth one, but it is such a great package. What you want to do is go into Snapshots and, on all the crucial folders where you keep files, set up snapshot schedules. Snapshot schedules will create snapshots, and any snapshot you keep, you can roll back to that specific time. So right here, "homes" has 19 snapshots, and if I just list them, I could roll back to any one of them. This gives me so much flexibility: you just hit Browse and look, here's the home shared folder exactly how it was at this time. Literally every single file is there exactly how it was when that snapshot was taken, and it takes up very little space; it only takes up space when files are changed. For most people, I would recommend setting up snapshots and keeping them for 30 days, and you will just have such a great time. The only downside is that from the moment you delete a file to the moment you get the space back will be 30 days, but for most people that's well worth it. And if you need to clean up space, say you've already deleted a bunch of files and you're thinking, "okay, I know the data is consistent, I know I've got everything, I'm okay deleting the snapshots because I just need the space back," just go in, select all of them, and hit Remove, and you'll get your storage back immediately. So that's number one, super powerful; we'll be doing more videos on it, but I would highly, highly, highly recommend setting it up, and that goes for everybody; the only thing that should change is how often you do it. So now, on to number two, and this is another one that every single BTRFS
user should be using, and it's found under Storage Manager. This is something I have no idea why Synology does not set up by default, but it needs to be: data scrubbing. BTRFS has checksums that allow it to recover from bits being lost. Say something on your hard drive mechanically changes a one to a zero: on other file systems that would go unnoticed, and you'd have a corrupted file that may never be detected and could give unknown results. With BTRFS, it can not only detect that change, but as long as there's just one error per section and you have a redundant RAID, it can actually correct it automatically without ever telling you. So what you need to do is check all of your files every once in a while to make sure you don't accumulate multiple errors, and that is what a data scrub is. A data scrub goes through the entire file system, effectively reads every single file, and checks that the checksum matches the data on the disk. If it finds somewhere a one flipped to a zero, it will just go "oh, that's wrong" and fix it automatically. This way, even if you never open a file for 15 years, it's still getting checked and self-healed before it accumulates more bad bits than it can recover from. So schedule data scrubbing: just enable it on all storage pools. I can't do it right now because this box only has one disk in it; I ripped out the other one to put in my main build, and this is just my test bench. But you would want to enable it, and I recommend every three months. If you have a large file system, especially at an office, you can set it to run only during the weekend when nobody's at the office. It's totally fine: it's resumable, you can pause it, and it just runs in the background and checks your whole file system. So that's number two; once again, everybody should set that up, because it's essentially free.
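Here's a rough sketch of what a scrub does over a mirrored pair, using SHA-256 in place of BTRFS's real checksums; the data and layout are made up for the demo:

```python
# Toy scrub over a two-disk mirror: every block has a checksum recorded at
# write time; scrub re-reads each copy and heals the one that no longer
# matches its checksum from the copy that still does.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

blocks = [b"hello", b"world", b"btrfs"]
mirror_a = list(blocks)               # copy on disk 1
mirror_b = list(blocks)               # copy on disk 2
sums = [checksum(b) for b in blocks]  # recorded when the data was written

# Silent bit rot: one bit flips on disk 1 and nobody ever opens the file.
mirror_a[1] = bytes([mirror_a[1][0] ^ 0x01]) + mirror_a[1][1:]

def scrub(a, b, sums):
    repaired = 0
    for i, s in enumerate(sums):
        if checksum(a[i]) != s and checksum(b[i]) == s:
            a[i] = b[i]               # disk 1 is wrong; heal it from disk 2
            repaired += 1
        elif checksum(b[i]) != s and checksum(a[i]) == s:
            b[i] = a[i]               # disk 2 is wrong; heal it from disk 1
            repaired += 1
    return repaired

print(scrub(mirror_a, mirror_b, sums))  # -> 1 block repaired
print(mirror_a[1])                      # -> b'world', healed automatically
```

This is also why the redundancy matters: with a single disk, the scrub could still detect the bad checksum, but there would be no healthy copy to heal from.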
Now, on to number three, which applies when you're creating a shared folder: the advanced data checksum. It's not strictly required, and if you need the highest performance possible, it may not be something you want; think workloads with very small reads and writes, since that's where the cost shows up. So once again, if you have something like a security camera setup where you really don't care if a single frame gets a little corrupted, and you'd rather it run fast, that's where you should not turn this on. Otherwise, for a general file system, always turn it on. You have to do it when the shared folder is created; I think they're working on letting you add it after the fact, but right now it cannot be modified after the shared folder is created. So whenever you're creating a shared folder, you say Create Shared Folder, call it "test", it needs to be on a BTRFS volume obviously, and then you go into the option right here and select "Enable data checksum for advanced data integrity." That is what you need to do, and it will store extra bits that make your data much less likely to be silently corrupted when you're reading it. This does put a toll on performance; I believe it sometimes also reads from all of your disks in a RAID group to make sure the checksum always works out. The other thing here, which is kind of an extra option, is file compression. It's not nearly as good as ZFS's file compression, which is just turned on by default, but if you have very compressible data, think a ton of text documents saved on here, and you want to get some space back, compression can work, though it does have a non-zero overhead. The advanced data checksum I've found to be blazing fast, so I always enable it; the compression I would not use unless there's a very good reason to add it. So always turn the checksum on. That is number three.
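The compression trade-off is easy to measure for yourself. Zlib is one of the algorithms BTRFS supports, and the contrast between repetitive text and already-random data (which stands in for photos, video, or encrypted files) shows when it's worth enabling:

```python
# Compressible vs incompressible data: repetitive text shrinks dramatically,
# while random bytes don't shrink at all, so compressing them is pure CPU
# overhead for no space savings.
import os
import zlib

text = b"the quick brown fox jumps over the lazy dog\n" * 1000
noise = os.urandom(len(text))   # stands in for media or encrypted data

print(len(zlib.compress(text)) / len(text))    # far below 1.0: a big win
print(len(zlib.compress(noise)) / len(noise))  # about 1.0: no win at all
```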
And those are the ones pretty much everybody should be using. All right, now there are a couple of just kind of fun ones. Number four is super useful, and I have no idea why it's not enabled by default: under File Services, Advanced, it's fast clone. Fast clone is absolutely awesome, because on your SMB file server (or the other protocols listed there), when a user does copy and paste, it will instantly copy and paste without duplicating the data. So if I took a one-terabyte folder, copied it, and pasted it somewhere else on the same volume, I'd now have both folders; they'd each say they're one terabyte, but in reality they're only taking up a total of one terabyte, because it's all the exact same data. Then, once either one of them is changed, they become their own objects, but the data that was identical when they were both created will still be deduplicated, without having to do any kind of advanced deduplication. It's basically free, and for the vast majority of people, that's how data gets duplicated, so it can be pretty much free deduplication for your file system, especially if people are always copying and pasting things into their own user folders. On a busy file system, this can save tons of space and tons of time. I would highly recommend enabling it; I have no idea why it's not enabled by default, but that is number four. I always enable it, because it's such a great feature, it's so fast, and it's really useful whenever you can use it. Synology actually wrote a special Samba module to make that happen; ZFS could do it too, they'd just need to write that Samba module. All right, so now for number five, and this one is really only necessary for larger businesses with really, really active file systems. That copy-on-write process has to have its stale data cleaned up at some point, which is called space reclamation, and that's pretty processor-intensive and disk
intensive, deleting all that data and marking it as writable again. So what you can do is set a space reclamation schedule so it only runs at night and doesn't run during the workday; that way you don't have that overhead while people are working. This lets the file system get all that space back at night when nobody's using it, and if you're an office, you're probably not relying on reclaiming that space instantly. So this can be useful for some people. I don't really use it myself, because my snapshots delete at midnight anyway, so that's when space reclamation runs for me and it's not a big deal, but it can be nice, because reclamation really is fairly processor- and disk-intensive.

And finally, the last one, which is very much a business feature: we're going back into Snapshot Replication. We talked about snapshots earlier, but now we're going to talk about replication. The replication side of Snapshot Replication is insanely useful for businesses. What replication allows you to do is take a folder on one NAS and have it replicated to a secondary NAS in a primary/secondary setup; the replicated folder is read-only until it's failed over. This means you get almost all the benefits of high availability, with maybe five minutes of downtime should the primary NAS fail. It is insanely useful in a 3-2-1 backup solution for a business where downtime costs tons and tons of money: you can have two entire NASes, exactly identical, always in sync, and as soon as one of them goes down, you get the second one up and running immediately. Then, when you do get the main one back up, you can just reverse the process and make it the primary again. It is insanely powerful, and I set it up for a lot of businesses as part of a 3-2-1 backup solution. I'm planning on doing a video on that if people are interested; I'm
just waiting to get some more NASes. So if you're Synology and you want to send me a couple of beater NASes just to set this up, I'd be very appreciative. All right, well, that's going to be it for this list. I hope this was helpful. These are some awesome features you should really be checking out, because BTRFS is such a cool file system: copy-on-write is insanely powerful, and it can really take your NAS to the next level and make it so much safer. Everybody should, at minimum, have snapshots enabled and scrubbing done every three months; the rest are all great, but not as crucial as those first two. That's going to be it for this video. If you'd like to hire me for a project, there's a link in the description below. Have a good one, bye!
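A footnote to the fast-clone section above: on the command line the same mechanism shows up as `cp --reflink` on BTRFS, and conceptually a reflink copy is just a pointer copy, like a snapshot of a single file. A toy sketch (block ids and sizes invented for the demo):

```python
# Toy reflink "fast clone": the copy duplicates only the pointer list, so
# both files share the same data blocks until one of them is written to.
blocks = {1: b"A" * 4096, 2: b"B" * 4096}  # 8 KiB actually stored
files = {"report": [1, 2]}

# Fast clone: instant, and no new data blocks are allocated.
files["report-copy"] = list(files["report"])
print(len(blocks))                 # -> 2: still only 8 KiB on disk

# Writing to the clone copy-on-writes a fresh block; the original is untouched.
blocks[3] = b"C" * 4096
files["report-copy"][0] = 3
print(len(blocks))                 # -> 3: they diverge only where changed
print(files["report"])             # -> [1, 2]
```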
Info
Channel: SpaceRex
Views: 36,909
Id: pa5wKC4FFFQ
Length: 20min 27sec (1227 seconds)
Published: Wed Jun 22 2022