Testing Synology and TrueNAS NFS vs iSCSI

Captions
Tom here from Lawrence Systems, and we're going to talk about NFS versus iSCSI. I just did a video on benchmarking, because I like to show the methodologies and tools I use to come to these conclusions, but benchmarking is only part of the story. It's not just about the speed test; it's about things like thin or thick provisioning, and that is part of the equation you need to consider when deciding whether to choose iSCSI or NFS.

The other part of the equation I found interesting was using Synology. I ran these tests on a Synology, and I also ran the benchmarks on TrueNAS. This is not a comparison of Synology versus TrueNAS, because the machines are not equally specced, so I can't put the two machines in a showdown and say which one is faster. The idea is to take the same machine and run the tests on iSCSI and NFS: run both on the Synology, then run both on the TrueNAS. What was interesting, and of course all the results are linked down below if you don't feel like watching the whole video, is that the results are also a little bit weird: NFS was always slower on the Synology, but not always slower on TrueNAS. This is one of those factors that comes down to the design of the storage server and who manufactured and put it all together, because the product itself has some effect. It's not necessarily a protocol speed issue; it could also be the system that is handling the protocol. I wanted to touch on some of those topics, but also talk about the results and how we came to these conclusions.

Before we dive into the details: if you'd like to learn more about me and my company, head over to lawrencesystems.com. If you'd like to hire us for a project, there's a Hire Us button right at the top, and that includes a lot of storage consulting. If you want to support this channel in other ways, there are affiliate links down below to get deals and discounts on products and services we talk about on this channel. Everything is time indexed down below along with the links, so if you just want to jump ahead, feel free.

First, the context and scope of this project: I only had time to test this with XCP-ng. The results may be different elsewhere; maybe in a future video I'll redo these tests with VMware. I just didn't have time to set up VMware and compare, although I'm curious whether VMware handles iSCSI and NFS differently than XCP-ng.

Now, iSCSI and NFS are shared network storage tools. What this allows you to do, for example, is create an XCP-ng pool with three hypervisors in it; they connect to a switch, and that switch connects to your NAS or SAN, depending on what you want to call it. Essentially you have shared storage where the VM storage lives, and that way you're able to easily pass a running VM between the different hypervisors, because the storage has a common place to live. Whether you choose Synology iSCSI and NFS setups or TrueNAS iSCSI and NFS setups, either way this is a common layout.

To get more specific, though, what is the difference between iSCSI and NFS? With iSCSI, the NAS server cannot see the files, as it is just hosting the blocks of data, and all file-level functions are being handled by the hypervisor. This is because iSCSI presents over the network as a block device. To oversimplify it a bit, picture attaching a network cable to a hard drive. The NAS system may host the iSCSI protocol and functionally transport the data, but it's a block device, and the NAS basically has no insight into what file system is being used or how it's formatted; it's just handling the bits and blocks on the back end.
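To make that concrete, here is a rough sketch of how each storage type gets attached as a storage repository in XCP-ng from the CLI. The IP address, export path, IQN, and SCSI ID below are hypothetical placeholders, not values from the video:

```
# Shared NFS SR: VDIs live as thin-provisioned VHD files on the NAS export.
xe sr-create type=nfs shared=true content-type=user \
  name-label="NAS NFS VM Storage" \
  device-config:server=192.168.1.50 \
  device-config:serverpath=/mnt/tank/vm-storage

# Shared iSCSI SR (LVM over iSCSI): the NAS only sees opaque blocks, and
# VDIs end up as thick-provisioned LVM volumes inside the LUN.
xe sr-create type=lvmoiscsi shared=true content-type=user \
  name-label="NAS iSCSI VM Storage" \
  device-config:target=192.168.1.50 \
  device-config:targetIQN=iqn.2021-08.com.example:vm-storage \
  device-config:SCSIid=3600a0b80001234560000abcd
```

The SR type itself is what drives the thin-versus-thick behavior discussed next: the NFS type stores VHD files the NAS can see, while lvmoiscsi carves the LUN into volumes the NAS cannot interpret.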
This means all the VMs are stored in whatever format the hypervisor has chosen. There are probably some tools you can load on some NASes to mount it, provided you mount it in whatever format your hypervisor of choice has formatted the iSCSI LUN, but as I stated, it's a block device. That carries the downside of being thick provisioned, because the NAS doesn't understand the files well enough to do thin provisioning. I won't go too far into it, but thick versus thin provisioning is basically an over-provisioning method: thin provisioning lets you say, I can create a 60 gig drive for a particular VM, but maybe it's only using 10 gig, so on the NAS itself it only consumes 10 gig and can expand later. That's an advantage you get when you thin provision, but with iSCSI, because the NAS doesn't understand what's going on inside the LUN, it doesn't really have that opportunity.

Back over to NFS: the NAS server handles the files, and each VM and its snapshots are in VHD format, so each can be viewed as a file on the NAS; we're going to demonstrate that in a moment. An NFS share is just a standard file share, very similar to the way a Windows-style file share works. By the way, Windows file sharing, Server Message Block (SMB), is not the best way to connect VMs, for those of you wondering; the two popular options are NFS and iSCSI, which is why I'm talking about them. With NFS, the NAS uses whatever file system it runs: TrueNAS uses ZFS, while Synology can use either ext4 or Btrfs, along with Synology's own way of handling the RAID on the back end. Either way, these systems can actually see the files.

All right, let's do a quick example of thin provisioning and how the snapshots work. Here we have the W11 Xen speed test VM, running on my TrueNAS Mini lab unit with thin provisioning, because it's done over NFS, and I think there's only one disk in here. If we look, there's the 160 gig disk, but what does that actually look like to the file system? Logged into the TrueNAS, it's only 36 gigs, because it's thin provisioned. Now, if we go back over here, we'll take a snapshot of this virtual machine; one snapshot is probably fine. Looking again, it actually created a couple of them, because it has to track the differential between them, but they're only consuming 128K, because nothing has changed; the VM's not doing anything right now, so not much has happened. Let's make the VM do something. I have some random benchmarks we can run; it's just going to read and write a bunch of little files. Okay, it's busy doing stuff now, so we go back over here, and we see 2.1 gig worth of changes, not another 36 gig. As the file changes with those reads and writes, the NAS keeps the differentials; it's thin provisioned, so it uses very little space.

Now let's look at the example on this W lab VM that's running on iSCSI. If we go over here and create a snapshot for this one, we have a snapshot created on the iSCSI side. We can click on this, refresh, and we see quite a bit more data used. Even though the only disks on here are this one and one snapshot, it's now using 115 gig, as opposed to the few kilobytes it should be, because we aren't even actively doing anything with that VM either.
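If you want to reproduce that space check yourself from a TrueNAS shell, here's a minimal sketch; the dataset name and paths are hypothetical, not the ones from the video:

```
# Snapshot the dataset backing the NFS share.
zfs snapshot tank/vm-storage@demo

# A fresh snapshot of an idle VM consumes almost nothing (on the order of
# the 128K seen in the video); USED grows only as live data diverges.
zfs list -t snapshot -o name,used,referenced

# See how much space the VHD files actually consume on the pool, regardless
# of the virtual disk size presented to the VM.
du -h /mnt/tank/vm-storage/*.vhd
```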
This is that thick versus thin provisioning, and it's a big consideration beyond just speed, because the next problem you're going to have is waiting for a VDI to coalesce. I've talked a little bit about this in the past, but it can be a real challenge when you're thinking, oh, I only snapshot temporarily for a backup or for some other reason and then delete them later. This is where people have gotten themselves in trouble building out these systems thinking they have enough space. We'll destroy this VDI; actually, I probably should have done it with this snapshot here, so we'll destroy this one since we already destroyed the VDI. Now we'll do another snapshot, and maybe we can even break this system: do another snapshot, then go ahead and delete these snapshots. What's going to happen? If we go back over here, we have VDIs to coalesce. They need enough space to get rid of the differentials, because even though everything happens in real time on the back end, the way a virtualization system works, you have to give it time to coalesce all the data and clean these up. This happens with NFS too, but with NFS being thin provisioned it's not as big of a deal, because it will coalesce over time, and each snapshot was thin provisioned at only a few kilobytes, so getting rid of it is cheap. Because these iSCSI snapshots are thick provisioned, it can take a bit more time, and in the meantime you can run out of space waiting for things to coalesce, because each one had to be fully allocated. Those are a couple of factors you really need to consider before you decide whether you want something set up with iSCSI or NFS: it's not just about speed, it's about this factor right here, being able to say, all right, are these going to coalesce, and, when you have a lot of backup jobs, is it going to finish before the next backup jobs run?
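For what it's worth, on an XCP-ng host you can nudge and watch that coalesce process from the command line. A rough sketch, with the SR UUID as a placeholder:

```
# Rescan the SR; this also kicks the garbage collector that coalesces the
# leftover differencing disks from deleted snapshots.
xe sr-scan uuid=<sr-uuid>

# Coalesce progress is logged by the storage manager on the host.
grep -i coalesce /var/log/SMlog | tail -n 20
```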
All right, now on to the test results themselves. First up is the Synology: this is Debian on XCP-ng backed by the Synology RS3621xs+. A review of this device is coming up, and part of reviewing it has been beating it up with a lot of tests, over and over again. There were a couple of times when the Phoronix Test Suite didn't want to run for whatever reason; when I put the plus at the end of the model name it kept getting stuck, so that's why there's one extra column on here, but it's the same system, still the plus model, it was just being silly. Looking at the results, a couple of tests didn't run, for reasons I'm also puzzled by, so you'll see a couple of places where there's no comparison, but we'll jump right to the comparisons. In every category here, represented by anything green or blue at the top, it was faster to run things on iSCSI, even when I ran the SQL test. But how much faster? Well, this is 10 seconds versus 9 seconds, 38 versus 33 seconds, 80 versus 65, and on the SQLite 128 benchmark we start seeing iSCSI take a lot less time on some of these. When it comes to small writes, lots of small writes, we notice an advantage, with concurrent small writes being a little bit faster.

One of the other things I did here, and I especially made sure I did this on TrueNAS because it's an important part of thick versus thin provisioning, is run the tests twice, giving time for the VM to expand. There is a slight performance hit you take when something is thin provisioned: because it has to expand as it's written, you should run the test at least a couple of times so the VM can grow to its larger size. If you copy a VM over and it has a 60 gig allocation but only 32 gig used under thin provisioning, you want to run the test at least once so it expands to whatever it's going to expand to. This is not an issue when you thick provision something, so there you can just run the test once, but it's also what causes some of these anomalies. The TrueNAS run did the same thing; it shows some of the tests run twice, and there's not much variation between runs on the TrueNAS system.

But let's talk about the numbers there, because they're a little bit different: NFS versus iSCSI on TrueNAS. TrueNAS gives you a lot more options than Synology for fine tuning, but I left everything at default, so the standard default dataset record size of 128K. This is where you could actually dive in and do some fine tuning, on either iSCSI or NFS; as I talked about in the benchmarking video, you fine tune in order to optimize for your workload. This is a generic baseline: set it in the middle at a 128K block size. Raising or lowering the block size can cause variations in speed, but like I said, everything was left at default. Also, asynchronous writes were turned on for NFS on both the TrueNAS and the Synology; that's an important tuning factor you may want, your mileage may vary, but it's a factor I want to make sure is documented here.

On TrueNAS the SQLite results were only barely different, but once we get up to the larger tests, much like we saw on Synology, NFS fell behind a little. This is where things get a little strange, though: NFS was able to perform better in a few categories. Let's scroll down and look at where NFS won; I believe it was all in the streaming categories, which would have been the larger block size tests. Right here, the Flexible IO Tester random read favored NFS; flexible random read at 256 kilobytes and 1 megabyte both favored NFS; same with some of the 16 kilobyte tests. Sequential writing was actually faster, which I thought was interesting, and we got about a 21 percent speed difference on the Flexible IO Tester for sequential write. Some of these, as I said, were repeated, so this is just a duplicate of the 256K and 1 meg results again. So there are some cases where NFS was a little faster, but like I said, there's a lot more to it than just the speed.

The speed is obviously an issue if you're optimizing, but the second part, the thin versus thick provisioning, may be more of a deciding factor than a five or six percent speed boost. Obviously we've seen some of these as high as 80 percent, but that's also because the workload may simply be better matched to the default iSCSI settings than the NFS settings; if I were to tune for the settings that gave iSCSI its speed advantage, it would probably lose its advantage at the larger file sizes. Ultimately it comes down to which one works for you based on the workload you plan to run, and if you plan to run a very mixed workload, no problem: just leave it in the middle and do the best you can. Yeah, the middle never really helps much, but it's one of those things with a mixed workload. This is where the opportunity comes in, though: you can set up different iSCSI extents for each workload and different NFS shares for each workload, creating a dataset for each optimized workload instead of just one large NFS pool, and that's kind of a big advantage.

The other one is snapshotting. In TrueNAS, the ZFS snapshots are wonderful, but they create kind of a problem for iSCSI because of the way iSCSI stores all of the VMs in a single LUN. That may be a problem for you: what if there are eight VMs in there and you want to restore one of them by rolling back to a snapshot? You can't; you have to restore all of them. With NFS, the snapshots go back to being just a file system: you can snapshot it, fork the snapshot, go grab that one VHD file, copy it back over to the pool, and you're done. With iSCSI, because the NAS itself is unaware of the file structures within, unless you're using some other tool to mount it, creating a snapshot, forking it, and attaching to that particular iSCSI target is a lot more work, and you can't manipulate the files natively on the NAS, so now you've got to figure out how to extract that VM out of there. It's not an impossible task; it's just far more labor intensive to set up and carry out.
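Here's a hedged sketch of both of those ideas on the TrueNAS side: per-workload datasets with different record sizes, and pulling a single VHD back out of a snapshot by cloning it. All dataset, snapshot, and file names are made up for illustration:

```
# One dataset per workload, tuned, instead of one big share:
zfs create -o recordsize=16K tank/vm-db        # small random I/O (databases)
zfs create -o recordsize=1M  tank/vm-media     # large sequential streams
zfs create -o recordsize=128K tank/vm-general  # the generic 128K baseline

# Because NFS-backed VMs are just VHD files, restoring one VM from a
# snapshot doesn't require rolling back the whole dataset:
zfs snapshot tank/vm-general@nightly                 # point-in-time snapshot
zfs clone tank/vm-general@nightly tank/restore-tmp   # fork it, browsable as files
cp /mnt/tank/restore-tmp/<vdi-uuid>.vhd /mnt/tank/vm-general/
zfs destroy tank/restore-tmp                         # clean up the clone
```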
Finally, I will do an upcoming video on storage design, because there is another factor when you're doing this, and the answer might be: why not both? You may want to set up your NAS with NFS for the VMs but iSCSI for things like Windows. This is a common storage design where the VM itself runs on NFS, and that's where the Windows VM lives, but do you store everything within that Windows VM? Not really; it's not a great idea. You may want to present iSCSI to Windows, so you have a block device presented to Windows with the full set of features that come with Windows and NTFS over iSCSI as the storage device, and that way your VM backups can be kept separate. I'm going to do a future video on storage design, because a lot of people just want to stuff everything into the VM, and that's not the best place for it. You either want to use a separate NAS for your file shares, or, when you have to mount some type of data store because you're running a database and it's more practical to do so, you may want the VM to boot and then mount the NAS separately in a different way. Like I said, with Windows, iSCSI is a popular way to mount block devices, but if you have Linux servers, they can mount iSCSI or NFS at startup too, and you get the best of both worlds: the virtual machine itself running from NFS is fine, but for performance reasons the actual data store may live on an iSCSI extent mounted by the VM, as opposed to trying to stuff everything into the VM itself.

This also gives you the advantage of being able to back up a VM easily, because the VM itself stays small. Graylog would be a great example; I've talked about it on this channel before and have a video on it. Graylog needs a large data store, and stuffing all of that into a VM is not a great idea, because if you want to back up your Graylog VM, you end up backing up all that data too. Creating a small VM and then mounting the storage separately is probably a better idea. I'll probably do some videos on that in the future.
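As a sketch of that separate-mount pattern, this is roughly how a Linux VM, say a hypothetical Graylog box, could attach an iSCSI extent at boot using open-iscsi; the IP, IQN, and mount point are placeholders:

```
# Discover the targets the NAS is offering.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the data LUN for this VM.
iscsiadm -m node -T iqn.2021-08.com.example:graylog-data -p 192.168.1.50 --login

# Reconnect automatically at boot.
iscsiadm -m node -T iqn.2021-08.com.example:graylog-data -p 192.168.1.50 \
  --op update -n node.startup -v automatic

# /etc/fstab entry: _netdev delays mounting until the network is up.
# /dev/disk/by-path/<target-path>-lun-0  /var/lib/graylog  xfs  _netdev  0 0
```

The VM's boot disk stays a small VHD on NFS that backs up quickly, while the bulky data lives on the iSCSI extent and is handled separately.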
Leave some comments below so I can get an idea of what some of the gaps are; a lot of my videos are driven by audience feedback or by knowledge gaps people point out, like, hey, it'd be great if you could explain how this is set up. That's also a great discussion to have on the forums, where all of this will be linked. Links to everything are down below, including links to some of the previous videos, and if you want to interact directly with me and dive deeper into this topic, the forums are a great place to engage, or of course hit me up on Twitter.

Thank you for making it to the end of this video. If you enjoyed this content, please give it a thumbs up. If you'd like to see more content from this channel, hit the subscribe button and the bell icon. To hire us for a project, head over to lawrencesystems.com and click on the Hire Us button right at the top. To help this channel out in other ways, there's a Join button here for YouTube and a Patreon page where your support is greatly appreciated. For deals, discounts, and offers, check out our affiliate links in the descriptions of all of our videos, including a link to our shirt store, where we have a wide variety of shirts and new designs come out, well, randomly, so check back frequently. And finally, our forums at forums.lawrencesystems.com are where you can have a more in-depth discussion about this video and other tech topics covered on this channel. Thank you again; we look forward to hearing from you, and in the meantime, check out some of our other videos.
Info
Channel: Lawrence Systems
Views: 20,077
Keywords: LawrenceSystems, nfs vs iscsi performance, nfs vs iscsi, nfs vs iscsi vs smb, nfs vs iscsi reliability, thin vs thick provisioning, thin vs thick provisioning performance, thin vs thick xcp-ng, nfs vs iscsi xcp-ng, nfs vs iscsi xenserver, synology server, xen server, network attached storage
Id: 2HfckwJOy7A
Length: 18min 22sec (1102 seconds)
Published: Thu Aug 26 2021