Tuesday Tech Tip - Demystifying Benchmarking for your Workload on ZFS

Video Statistics and Information

Captions
Hey everybody, welcome back to another weekly tech tip here at 45Drives. Today's video is geared towards demystifying benchmarking, and specifically benchmarking for your use case or your workload. So with that being said, let's jump into it.

All right, before we get started I definitely recommend watching Tom's video over at Lawrence Systems, because he does a great intro into fio and how it all works. He also talks about IOPS versus throughput and does a great job showing different ways to benchmark. Today, however, I want to go a little deeper on benchmarking specifically for your workloads. One thing I also wanted to touch on at the beginning, and something Tom touched on as well, is that in a vacuum, when you have nothing to compare it to, a benchmark really doesn't mean all that much. While it's fun to try to get the biggest number you can, that isn't necessarily helpful when you're trying to understand how your drives will perform in very specific workloads. It's very easy to skew your benchmarks to make your numbers look a certain way if you have a specific goal in mind, and as Tom also noted, marketing just loves to take those really big numbers and shout them from the rooftops.

One thing you'll notice, however, is that 45Drives doesn't have a ton of published benchmarks that we parade around like some other vendors might. Our philosophy has always been geared towards benchmarking for specific workloads and applications as we get more familiar with our customers' use cases. That said, for a storage vendor like us there is a real, tangible benefit to taking the "dyno approach," so to speak, where you put your car on the dyno and push it to its limits. What that ends up looking like for us is benchmarking our servers without file systems or RAID arrays and really getting to the core of what our servers are capable of pushing when all the disks are maxed at 100%, with as few bottlenecks as possible. We will actually be publishing some really great material showing those kinds of numbers, so look out for that in the future. While those numbers certainly have their place, they may not be what is most important to you depending on your workload, and that is exactly what I want to explore in this video.

So let's look at why benchmarking storage seems really simple from the outside, but gets really complex really quickly once you start digging. Starting at the disk level is great and all, but that's not really how we use our storage. There are many layers that come into play before we actually get to put data on the storage itself, and every layer you add further complicates the chain. For a single server, for example, think of adding your ZFS array and everything that comes with it: your redundancy profile (RAIDZ1, RAIDZ2), your vdev configuration, your record size. Then there's how the data will actually be accessed by clients — is it over the network, and if so, should we be doing our benchmarks over the network? Finally we get to the actual workload: what type of I/O pattern does it follow? Is it sequential or random? Is it mostly reads or writes? What block size does the workload favor? Does it have an I/O pattern we can queue up to a good I/O depth, or is the queue depth very low, where you can't have that queued up?
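Those workload questions map almost one-to-one onto fio parameters, which is what makes fio such a useful characterization tool. The following is only a rough sketch — the job name, target directory, read/write mix and queue depth are placeholders, not numbers from the video:

    # Hypothetical profile: mostly-random, 70% read workload issuing 8k I/Os at a moderate queue depth
    fio --name=workload-profile --directory=/tank/testdir \
        --ioengine=posixaio --rw=randrw --rwmixread=70 \
        --bs=8k --iodepth=8 --numjobs=1 \
        --size=4G --runtime=60 --time_based

Swap --rw between read, write, randread, randwrite and randrw, and adjust --bs and --iodepth, until the job looks like the application you actually run.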
And then for Ceph it gets even more problematic, because you have client access via RBD for block, CephFS for file, RGW for object, and then you've also got replication versus erasure coding for your pool types, and all of these have different performance characteristics. However, don't lose hope yet. What I really want to do today is demystify this black magic we call benchmarking and help you learn how to benchmark for your workload, instead of just trying to get the biggest, most impressive number you can — which might not mean anything for your end use case. The end result is that you'll not only get a better understanding of your workload, you can also better prepare your storage array for it, so you actually end up with a better built and better configured zpool (or storage array in general). So let's get started.

All right, for the demo here, as you can see I have an agenda laid out for us. At first glance it definitely seems quite daunting — there's a lot here — but if you want to know how to benchmark for your workload, or understand how ZFS works so you can build your ZFS pool better for that workload, there's going to be a lot of really good information. If you're only interested in certain parts of the agenda, I'll leave timestamps down below for every section so you can skip around the demo. Without further ado, let's go through the entire agenda so you're prepared and know what it's going to entail.

To start, I guess the real name of this is "benchmarking for your workload, to help inform your decision on the best ZFS configuration you need." When I started, I thought it should just be benchmarking for your workload, but the more I thought about it, the more I realized that benchmarking your workload on ZFS is one thing, but really understanding how ZFS works underneath — to guide you to the best type of pool for your performance needs — is also very important. The two are really intertwined.

First we'll talk about storage array caching, read and write, and whether it's proper to disable it for benchmarking. There's a bit of debate around that, but — spoiler — we recommend taking both into account, especially depending on your use case; there are lots of reasons for that and we'll get into them. I'm going to talk about the ARC, the L2ARC, the ZIL and SLOGs, and what they actually are, because they sound pretty crazy at first glance; we'll get into how ZFS uses them, and I have a link with some really good diagrams that I'll show to help drive it home. Then we'll jump into synchronous (unbuffered) writes versus asynchronous (buffered) writes. I'll talk about the fsync call when doing fio benchmarking, and about I/O depth, or queue depth, and how it affects performance during these benchmarks. So we'll run a benchmark that is essentially a buffered random write to our array — asynchronous writes — and we'll show that the performance is great.
Then we'll do an unbuffered write, which actually has to go into the array before the acknowledgement, and you'll see the drastic difference in performance there; then we'll look at queue depth and how that can affect it as well. Then we'll add a SLOG — a ZFS intent log that is separate from the one normally on the pool, placed on an SSD — to show how that can drastically increase your unbuffered write performance. After we add that separate ZFS intent log, we'll rerun the synchronous write test and see just how much of a performance benefit we get by moving it off the zpool and onto a dedicated SSD, and I'll explain why that happens and what exactly is going on under the hood.

Then we'll take a look at some example workloads you may have in your environment and how they use storage. We'll look at things like virtual machines and databases, which are typically synchronous-type workloads, and then things like video editing or backups — specifically Veeam, because I love Veeam and have done a lot of work with it — and we'll look at how they commit writes, which (spoiler alert) is more of the asynchronous type.

Then we'll go into read caching and how it plays into ZFS, because it is very hard to benchmark ZFS without the ARC having an effect, and we'll explain why that is. We'll also explain how you can sort of get around the ARC when benchmarking — but because the ARC is such a core part of ZFS, what you should really do is benchmark with the ARC and take it into account. Try to understand your workload and how large your active data set is, then build your ARC around that, so your active workload never (or very rarely) exceeds your ARC. I'm also going to show you a really cool benchmark that I ran prior to this, because it takes a while to run, and we'll look at how those results end up.

Then, towards the end, we'll talk about the best configuration settings for your storage to maximize performance for your workloads. We'll take the understanding of how these workloads read and write that we've built up to that point and design your storage pool around it: things like the best RAID profile for a given workload — mirrored vdevs for random I/O, or RAIDZ for sequential-type workloads — and the best block size for a given workload. That one is very important, and we'll show it with some benchmarks as well. We'll find the block size of your workload — for example, databases often use 16-kilobyte block sizes, and Veeam backups can vary depending on how your backups are set up (NAS-based versus direct-attached-storage-based, and so on) — but you can still find it. The cool thing with ZFS especially is that you can have your dataset match that block size to really give you a boost in performance, and we're going to run some benchmarks to show just how much that can help. And since ZFS lets you have multiple datasets with different record (block) sizes, you can have all these different workloads with different performance characteristics for your needs, which is really cool. Then we'll wrap up by talking about sizing your ARC, your L2ARC, and your SLOG for a given workload.
And the very last agenda item is one I actually want to put to you guys in the community, because it's one I personally don't have a lot of experience with — it's one of the newer features of ZFS — and that's the special vdev. What it does is let you offload your metadata to a special vdev, which could be high-performance NVMe, for example. I've done some work with it, though nowhere near as much as with the rest of ZFS, so I'd love to hear if anyone has experience with it and what that experience has been. Is it great? Has it helped or not? I imagine that if you have a very dense ZFS file system with lots of files, it could be a pretty big game changer. So with that being said, that's the agenda — let's actually jump into it.

All right, let's talk about read and write caching and how that works with ZFS. First let's take a look at this link so we can talk through it. To start, the way ZFS writes is, like I alluded to, you have asynchronous writes for most workloads — things like a Samba share where you're just writing video files, or regular file sharing; anything like that is typically done with asynchronous writes. How that works is the client contacts the storage array with a bunch of writes ready to go, and essentially they go straight into RAM on the ZFS server; the client is immediately okay with that and keeps sending more writes, so ZFS can continuously buffer those writes in RAM and then flush them to the zpool in transactions, because ZFS is a transactional file system. That's great, because it lets the client keep issuing write requests as fast as it can until the RAM or the zpool slows things down. A lot of the time, if you've got small, bursty writes, it all happens very quickly and the client sees a very fast-performing array.

Now, synchronous writes are very different. While they do still go into RAM, for these workloads it's very important that the data actually gets flushed to long-term storage before an acknowledgement happens. This is very common for databases and especially for VMs — if you run hypervisors and virtual machines, that's typically all sync writes — and the reason is that with an async write, if you lost power to your array, you could potentially come back up with some data loss, and for workloads like databases that's not acceptable. So what happens with ZFS is that the sync write goes to the ZFS intent log (ZIL), which is where the write first gets committed. Once it's committed there, ZFS can go back to the client and say: it's written, it's at least in the ZFS intent log, here's your acknowledgement, keep going. By default the ZIL lives on the same disks as the actual data in the zpool, where the write will also be committed again after it goes through the intent log. So what you end up with for a sync write, if you don't have a separate ZFS intent log, is a double write penalty: you get the ZIL commit, and then you also get another write to the actual pool where the final write is committed. And that is not great for performance.
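As an aside — this isn't shown in the video — the way a dataset honors sync requests is controlled by the ZFS sync property, which can be handy when you want to force every write in a test down the ZIL path (pool and dataset names below are placeholders):

    zfs get sync tank/mydataset          # standard (honor the app's sync calls), always, or disabled
    zfs set sync=always tank/mydataset   # treat every write as synchronous: worst-case for testing
    zfs inherit sync tank/mydataset      # return to the inherited default afterwards

Leaving sync=disabled on a workload that expects durable commits defeats the whole point of the ZIL, so it's really only the get/always pair that's useful for benchmarking.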
So, believe it or not, if you take a separate device — it could even be a spinning device — and put a separate ZFS intent log on it, you will actually see performance benefits, because it removes that double write penalty. We wouldn't really recommend a spinning disk for this; if you're going to use a separate ZFS intent log, it's best to put it on very fast flash. But the point is it stops ZFS from having to do that double write on the same disks that are doing all the actual writes. So when we have workloads like virtual machines and databases, we definitely recommend, if you can, a high-performance NVMe or SSD device — typically mirrored, because you need some safety there as well — and putting the ZIL, the separate intent log, on that. You will see a good performance benefit, and we're going to show that right now. I will be talking about the ARC and L2ARC as well, but let's stop there and do some benchmarks to see how this works.

We have our zpool set up, so let's take a look at it. If we do a zpool status, we can see we have a RAIDZ2 with a number of spinning disks. Okay, great. Let's pop back to our agenda and look at the tests we're going to run. The first is a random write test using the posixaio engine — fio has lots of engines, but I like this one on Linux for file systems. We're doing a random write with a block size of 4k, we are not using the fsync flag, the file is four gigabytes in size, with a single job and a single queue depth, and we're going to run for 60 seconds. This represents your asynchronous-type workload: regular workloads that don't have to wait for that acknowledgement and can go much quicker. Let's run it and see what we get.

Here we go — we've got our test results back, and we see an IOPS number of 344 and a bandwidth of 1377 KiB/s. That's with the asynchronous writes we talked about: the write comes into RAM, the client is okay with it and continues writing. It doesn't have the ZFS intent log penalty — in fact, async writes don't touch the ZIL at all.

Okay, now let's switch it up and make sure every single write has to be synced before the next one begins. We'll run the exact same benchmark, but this time with fsync=1, so this will show the double write penalty. And we've got the numbers back: considerably slower. We dropped from 344 IOPS all the way down to 86, and our bandwidth is only 344 KiB/s. I want to note that even the first number obviously isn't anything fantastic, but remember this is a spinning-disk array using RAIDZ2 — not mirrored vdevs or RAID 10, which is what you'd want for high IOPS, especially random IOPS. We'll get to that as we move on. But we can see that with the double write penalty we were considerably slower.
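For reference, the two runs described above correspond roughly to fio invocations like the following. The block size, engine, file size, job count, queue depth, runtime and fsync flag are the parameters stated in the video; the job names and target directory are placeholders, and --time_based is my assumption about how the 60-second runtime was applied:

    # Asynchronous (buffered) 4k random write, single job, queue depth 1
    fio --name=async-randwrite --directory=/tank/testdir \
        --ioengine=posixaio --rw=randwrite --bs=4k \
        --size=4G --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based

    # Same test, but fsync after every write so each I/O must be committed (via the ZIL) before the next
    fio --name=sync-randwrite --directory=/tank/testdir \
        --ioengine=posixaio --rw=randwrite --bs=4k \
        --size=4G --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --fsync=1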
So now let's throw an SSD SLOG into the mix and see how the performance changes. Back at our agenda: we've run our buffered and unbuffered writes. We could also run one with a higher queue depth — I'll skip it for now, but it's really interesting, so let me quickly explain why it's on the list. With synchronous writes at a single queue depth, only one 4k write gets sent and then it has to be committed. With a high I/O depth workload, the client can send, in this case, 32 of those 4k writes all at once, and they can be committed transactionally at the same time. So you will actually see a benefit even without a SLOG if your workload has a higher I/O depth, because it ends up a little more sequential: ZFS can take all 32 of those 4k writes and commit them together, rather than write one, commit, go back, write another, commit, and so on.

Okay, moving on: we want to take our zpool and add a separate ZFS intent log to it. Let's list our block devices — I have quite a few SSDs here, but I believe sdp, yep, is an SSD, so that will work. I'm just going to copy and paste this command, and it adds a separate intent log to the array. There we go. Now if we do another zpool status, we can see our RAIDZ2, and then our SLOG here. Very quickly: I would not recommend a single disk for a separate intent log, because it's very important not to lose the data on it — a mirror is much better. Okay, now that it's done, let's rerun our test and see exactly what we get.

We're back, we've run the test, so let's take a look. This time we have 540 IOPS and just over two megabytes per second — 2163 KiB/s. That is considerably faster than having your ZFS intent log contained on the same zpool; as you can see, that single SSD has improved this workload considerably. So what is that telling us? If we have a workload that requires synchronous writes, we should definitely consider a separate ZFS intent log for that array, especially if we need to use spinning disks for our writes. With SSDs you can get away with a lot more, obviously, because they have far more random I/O performance than spinning disks. And the great part is that the ZFS intent log does not have to be a massive SSD — you can get away with some of the smallest SSDs, which is totally fine.

So we got through that, and we have some good conclusions about sync writes and what can improve them. Again, we haven't gone in depth on different types of arrays yet, but we can see that RAIDZ2 is not fantastic for any type of random I/O to begin with. So at this stage, before you actually put your workload into production, you might also want to take your disks and benchmark them in a few different configurations for your workload. Mirrored vdevs — RAID 10, as you'd call it in traditional RAID — is definitely the best way to get the highest amount of random IOPS. Believe it or not, a RAIDZ2 (RAID 6 in this instance), once you're actually writing at the disk level without any caching helping you, only gets the random IOPS of roughly a single disk, and that's why the numbers we're getting here are so small. So definitely mess around with the different RAID array types while you're doing this testing, and you'll really get to understand the best array types for your workload.
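For anyone following along at the command line, the SLOG-add step above looks roughly like this — pool and device names are placeholders, and in production you'd want the mirrored form rather than the single SSD used in the demo:

    lsblk                                        # identify a suitable SSD or NVMe device
    zpool add tank log sdp                       # single-device SLOG, as in the demo
    # zpool add tank log mirror nvme0n1 nvme1n1  # mirrored SLOG, the safer choice for production
    zpool status tank                            # the pool now shows a separate "logs" vdev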
Okay, let's move on. Now we're going to look at example workloads — I'll run through this quickly and then we'll get into some other things. We already mentioned that virtual machines and databases lean toward sync writes, while video editing, backups, and standard day-to-day stuff really uses async. Anything at the file-system level, if you're just using it as a shared drive or the like, you really have nothing to worry about — you'll be able to take advantage of asynchronous writes.

So what about read caching? This one is actually really interesting when it comes to ZFS specifically, because the ARC — the adaptive replacement cache, which is the read cache in ZFS — is so core to what ZFS is. What I mean is that a traditional file system will normally use your operating system's page cache to cache reads and writes, whereas ZFS has its own adaptive replacement cache, with a much different algorithm and a different way of working than a regular page cache. Let's take a look at this link and I can talk more about it.

Here we have how a traditional file system cache normally works: it's called least-recently-used (LRU) caching. The most recent data that comes into the cache goes to the front of the line, and whatever can't remain — the oldest stuff — gets kicked out, and it continues that cycle. That's fine for some workloads, but ZFS has something much better, especially for things that aren't just a standard file system. If you've got a lot of different workloads running on a ZFS array — which you certainly can, because it's built for that — the ARC algorithm will benefit you considerably more. It not only tracks least-recently-used data but also most-frequently-used data, which is more important: as a piece of data in your data set gets used more often, it gets a higher weight within the adaptive replacement cache over time. And particularly with ZFS you can have a very large ARC — you can have, say, 256 gigabytes of RAM dedicated just to your ARC — which allows a lot of your active data set to remain cached and ready to be accessed by your workloads.

So when people say you should take the cache out of the picture when benchmarking because "it's not real world" — well, in this case it kind of is the real world. Not only that, it's actually really hard to benchmark ZFS and have the ARC not play a part in your results. There are some ways, and I'll show one, but what I would really say is: benchmark with the adaptive replacement cache in mind, but also keep in mind the size of your active data set and try using a data set that large in your benchmarks. It might take a lot longer — say you actively use upwards of 156 gigabytes a day between all the different users — but benchmarking with a data set of that size gives you much more real-world statistics.
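If you want to watch what the ARC is actually doing while you benchmark — not something covered in the video — OpenZFS on Linux ships a couple of handy tools, and the raw counters live under /proc. The 64 GiB figure below is just an illustrative value:

    arc_summary                      # ARC size, target size, and hit/miss breakdown
    arcstat 5                        # rolling ARC hit-rate statistics every 5 seconds
    awk '/^(size|c_max|hits|misses)/' /proc/spl/kstat/zfs/arcstats   # raw kstat counters
    # Cap (or raise) the ARC target size at runtime, in bytes — e.g. 64 GiB:
    echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max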
So that being said, that's really all I wanted to get at: the ARC is very different from a standard file system cache, and it plays a much more central role in ZFS as a whole. That's why we like to benchmark our read performance with the adaptive replacement cache in mind — while also keeping in mind that if your workload moves outside the ARC, performance will degrade. The idea is that when you're building for your workload, try to size your ARC correctly, and if you have to, you can add an L2ARC as well — a level-2 ARC on more traditional flash or NVMe. But my recommendation is to first go with as much RAM as you possibly can, and only when you can't go any further start looking at an L2ARC.

Okay, with that said, let's head back to our agenda and talk about testing with data sets that exceed your ARC, which can show you the worst case. Like I said, benchmarking ZFS without the ARC playing a part is really hard. In fio, for example, you have a direct flag that is supposed to bypass the cache completely. For the longest time ZFS didn't even support that — it does now — but because the ARC is so core to how ZFS works, the performance isn't much different whether that flag is on or off: run a benchmark in fio and you're still going to see read caching from the ARC play a part. The way I've been able to show the worst-case results, with the ARC failing to help, is by benchmarking with a massive data set. I have an example here — it took a really long time, so I'm not going to run it live — so let's look at the result.

You can see I ran a random read benchmark on the same array using just a four-gigabyte data set, and we get some incredible IOPS: this is spinning-disk storage, but we're getting 24,000 IOPS, which doesn't normally compute — except that the ARC is doing its job and caching a lot of that data set. What I did then, to try to trick or bypass the ARC, was first to push a whole lot of garbage data onto the array very quickly, flushing out some of the cached data, and then run another benchmark with a very large data set of 100 gigabytes. You can see the read IOPS dropped very low, very quickly — we went from twenty-some thousand down to 189. Again, that's because these are random read IOPS, and once you're down at the disk level we're on spinning disks in a worst-case scenario for random I/O, which is why it drops off so significantly. But it is important to know this, because otherwise you could build a RAIDZ1 or RAIDZ2 zpool for a workload that needs random IOPS and not understand why you're getting such poor performance. This shows exactly what happens when your ARC is no longer able to help you. And again, if you are going to do these benchmarks for your workloads, try some different zpool configurations: run the same ARC-busting test, but this time with a bunch of mirrored vdevs for your spinning storage, and you will get considerably more IOPS than this.
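The ARC-busting comparison described above would look something like this in fio. The random read pattern and the 4G versus 100G data set sizes come from the video; the block size, job names, target directory, job count, queue depth and runtime are placeholders I've assumed:

    # Small data set: largely served from the ARC, so IOPS look unrealistically high
    fio --name=arc-warm-randread --directory=/tank/testdir \
        --ioengine=posixaio --rw=randread --bs=4k \
        --size=4G --numjobs=1 --iodepth=1 --runtime=60 --time_based

    # Data set far larger than the ARC: reads fall through to the spinning disks
    fio --name=arc-cold-randread --directory=/tank/testdir \
        --ioengine=posixaio --rw=randread --bs=4k \
        --size=100G --numjobs=1 --iodepth=1 --runtime=60 --time_based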
Remember I said that in a RAIDZ2 you only end up with about the random performance of a single drive? Well, that 189 IOPS pretty much covers it — obviously there's a little bit of extra overhead, and a spinning disk can usually do between 250 and 500 random IOPS, so we're pretty much on the money here.

All right, let's move on and talk about the best configuration settings for your storage to maximize performance for your workloads. I've done a lot of talking about ZFS, how it works, and benchmarking to show what's happening under the hood, but we haven't really gotten to the point of benchmarking best for your workload. One of the really good steps, as I've mentioned a few times already, is figuring out the RAID profile best suited to your workload: find out whether your workload is sequential or random. If you're doing video editing, a lot of the time that's very sequential, though you'll get some random I/O if you're scrubbing through your timeline quickly, so you need a bit of both — and that's where the ARC comes in, giving you the ability to scrub through that timeline with a lot of it sitting in your cache.

But here's the one I really want to show off, and we're going to run some benchmarks to do it: the best block size for your given workload. Find out what block size your workload uses. As I said earlier, many databases use 16k, but a particular database may use a different size — that's definitely something you can find out. Veeam backups can vary, and video streaming is a little different again: if you're streaming or editing video, you might want larger record sizes. I wouldn't go to one meg or four meg like you might see in some benchmarks — you'll sometimes see people benchmark with a 1M or even 4M block size, which gets you very large streaming numbers, but very few workloads actually stream large four-meg blocks of data in and out. For that reason, the largest record size I'd ever recommend on a ZFS dataset is about 512 kilobytes, because even if your needs are very sequential, there's always going to be some random work somewhere in your workflow.

Okay, so let's create a new dataset and have it match the block size of our workload. Our workload in this instance is going to be 256 kilobytes, so we'll create a new dataset that matches. Let's pop over to our Houston UI, into the ZFS module. We have our smb test zpool — there's also a zvol here, but we don't need to talk about that — so let's create a new dataset, a new file system, and call it "256k record," because that's what it's going to be. We can add or remove options, but we're going to keep everything exactly the same as the parent pool except the record size, because we want a good apples-to-apples comparison. So there we go: a 256-kilobyte record size for our dataset — and create. Perfect. All right, let's pop back into our terminal.
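If you're doing this from the shell rather than the Houston UI, creating a dataset with a matching record size is a one-liner — the pool and dataset names below are placeholders, not the exact ones from the demo:

    zfs create -o recordsize=256K tank/records-256k   # new file system with 256 KiB records
    zfs get recordsize tank tank/records-256k         # compare the pool default (128K) with the new dataset

recordsize only affects newly written blocks, so set it before you load data (or before you run the benchmark, as here).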
First we're going to run the benchmark on the pool itself, which is using the default 128-kilobyte record size. Let's grab that benchmark — this again is a random write performance benchmark, this time with a block size of 256 kilobytes, a single job and a single I/O depth, and asynchronous, of course. Let's run it right now.

Okay, we've got our numbers back, and these are not numbers to sneeze at: with the 256k block size we've got 3756 IOPS and a bandwidth of almost a full gibibyte per second — 939 MiB/s. That's great, and that was on the 128-kilobyte record size dataset. So now let's pop into the one we just created — we're in our new dataset, the 256k record file system on the smb test pool — and run the exact same test, now that our block size and our record size match. And we've got the numbers back: a nice uplift in performance to 4521 IOPS, and we've now broken the one-gibibyte-per-second barrier at 1130 MiB/s. So what's different here? Absolutely nothing — the array, the zpool, everything is exactly the same. You're getting free performance because you matched your record size to the workload you're running, which is such an important thing. And you might say, "well, my workloads are very mixed, I've got ten different things running" — that's the amazing thing about ZFS: you just carve off a new dataset. You can have one dataset using this record size and another using a different one, and each gets the performance characteristics it needs.

All right, we're almost ready to wrap up — we've gone through quite a lot and I don't want to drag this out too long — but before we do: sizing your ARC, your L2ARC, and your SLOG for a given workload. For the ARC and the L2ARC especially, my recommendation, once you have an idea of how your workload will be used, is to size the ARC as large as you possibly can before you go looking at an L2ARC. A couple of reasons for that: one, you're never going to see the same performance benefit from an L2ARC that you get from increasing your ARC size; and two, adding an L2ARC actually consumes some of the RAM you could otherwise be using for the ARC itself, so you end up with less ARC and a noticeably slower L2ARC. That's the big reason I say max out your RAM first, then go looking at an L2ARC.

The SLOG is much simpler: you don't need a lot of capacity. ZFS (sorry, I'm so used to saying Ceph) is a transactional file system, so you're never going to have a terabyte of data sitting in your ZIL waiting to be flushed. You really only need, at most — and that's even pushing it — something like a 128-gigabyte NVMe or SATA SSD. What I would recommend is going with mirrored devices at the very least, to make sure you have redundancy in place. If anyone has other questions about workloads, what you're using your ZFS pool for, or what you want to do with it, please leave them down below and we'll definitely answer.

One last thing, though: the special vdev and metadata. Like I said, I want to put this one to you — I'm sure we have some ZFS aficionados out there.
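For context — this isn't demonstrated in the video — attaching a special allocation class vdev to an existing pool looks roughly like this. Device, pool, and dataset names are placeholders, and because losing the special vdev means losing the pool, it should always be mirrored:

    zpool add tank special mirror nvme0n1 nvme1n1    # mirrored special vdev for metadata
    zfs set special_small_blocks=64K tank/mydataset  # optionally send small records (64K and under) there too
    zpool status tank                                # the new vdev appears under a "special" section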
A special vdev is really cool — it's fairly new, especially on ZFS on Linux — and while I've played with it and done some work with it, and I think it has some really big benefits, it's nowhere near the depth I've gone to with the rest of ZFS. So I'd love to hear your experiences: have you seen a big benefit from it, and if so, how have you sized and built your special vdevs? That would be awesome.

Okay, that about concludes this demo, and that just about concludes today's tech tip — hopefully you enjoyed it. Before I finish up: if anyone actually takes our suggestions here, or some of the benchmarks we ran, and tries to tune their own workload, we'd love to hear about it. Let us know what you tried, whether it worked, or whether it maybe didn't help that much — we'd love any and all feedback. Also, if this video gets a hundred likes, I will do a follow-up video geared more towards Ceph clustering, and maybe block storage and file system storage on Ceph specifically. So yeah, we've thrown down the gauntlet: if we get 100 likes, we'll do it. And finally, I just have to shout out Tom Lawrence from Lawrence Systems again, because his channel is great, I love his content, and I'm always learning from him. If you like open source and you love Linux, definitely check him out, because he does some really great stuff. And with that being said, we'll see you on the next one.
Info
Channel: 45Drives
Views: 988
Rating: 5 out of 5
Keywords: 45drives, storinator, stornado, storage server, server storage, storage nas, nas storage, network attached storage, storage benchmarking, FIO, IOPS Throughput, ceph storage, ceph clustering, storage clustering, tom lawrence, lawrence systems
Id: X1wY7D9meo0
Length: 38min 3sec (2283 seconds)
Published: Tue Sep 21 2021