Part 2: DIY AMD NAS with Unraid & ZFS Software Setup, ft. Level1Techs

Reddit Comments

The problem with server videos like this is that in the "real" world, you just wouldn't do this.

It's not this video, it's all the server videos done by YouTubers where they build a custom solution. They create problems for themselves where none need to exist. If they wanted a VM server and a storage server, they should have put in a super-low-power system to act as the storage controller and file server, running ZFS on an OS that properly supports it, and then built a separate system to run the VMs. They could have used Proxmox or ESXi for the VM server and stored all of their files on the ZFS-based server.

I have no issue with people doing unsupported things on unsupported platforms for the sake of learning; my problem is that this presents solutions created by people with huge technical know-how, to people with little to none, as a good way to run things at home. Or, even worse, at work.

It's the same with all of LTT's server videos: if the company were paying real money for the storage and support contracts, rather than using sponsored products and having a personal relationship with the companies supplying them, I very much doubt they would approach things the same way. Don't get me wrong, they are entertaining to watch and I love content like this, but it bugs me a little when you inevitably get people trying to replicate those solutions for their own projects when there are much better ways to achieve the same results.

👍 32 · u/jkirkcaldy · Nov 11 2019

I like Marshmallows

I also like Cumin as a spice.

Combining two things one likes is not always the best idea, and leads to horrible, painful, time-wasting efforts.

👍 3 · u/cyrixdx4 · Nov 11 2019

I'll try to watch this later, I guess. If I were going to set this up, I suppose the simplest way would be to run FreeNAS in a VM with passed-through disks under Unraid. I wouldn't do it, though; I ran Unraid as a VM under ESXi for a time and it seemed somewhat cumbersome to me.

I can understand the appeal, I guess. Unraid's VM manager isn't perfect, but it's almost a turnkey solution in this space, which doesn't exist anywhere else.

👍 2 · u/AnyCauliflower7 · Nov 11 2019

Unraid is a decent hypervisor and Docker host, but for the money I'd have gone with basically any Linux distro install for a ZFS setup like this.

The value of Unraid is the ease of use and mixed-drive compatibility, which they are simply wasting in this setup.

ZFS on unraid is cool, but I'd probably use it as part of a tiered system, with large bulk "warm" storage on the unraid array (slow, but disks can spin down, and it can be made of any mixed disks), hot bulk storage on ZFS (fast, disks are spun up all the time, and requires matching disk sizes in each vdev), then SSD cache.

👍 2 · u/faceman2k12 · Nov 13 2019
Captions
Hey everyone, I am joined by Wendell from Level1Techs, and we are embarking on part two of our journey to build the server. It's pretty cool stuff. We have basically a host box — a computer that we just assembled in the CS381 from SilverStone. It had a lot of small issues along the way; we talked about that in part one, where we ran into some challenges. We have a couple of other highlights: a cool motherboard that's got an interface we can access from other computers, even for BIOS, and you also brought with you a disk shelf that we can hook into the whole setup. So, part two: what we're doing right now is defining the problems we're trying to solve and setting up the software side of things — Unraid, in this case. It's really a coming-of-age story, a server bar mitzvah if you will, and that will start us off for part two here.

Before that, this video is brought to you by Linode cloud computing. We've trusted Linode as our web host since 2012 and recommend it for excellent technical and customer support, reliable uptime, and a clean interface. Aside from cloud hosting, Linode.com recently added GPU hosting plans for machine learning and neural net use, built with RTX 6000 GPUs and 10-gigabit-per-second network speeds. They're also starting to deploy EPYC CPUs in their servers. Sign up for Linode.com cloud computing with code GAMERSNEXUS20 for a $20 credit, or click the link in the description below to visit linode.com/gamersnexus.

So let's start this one. Well, define the things you're trying to solve — what's your workflow? I'll go through the workflow briefly. Not just the workflow you have now, which is just a big old stack of hard drives plugged into the network. Yes — that's not optimal. No, it's not. We film a video; all the video goes from the SD cards straight through a computer, like an editing computer, onto the NAS — the Synology NAS — and then that project is edited off the NAS, and eventually those files are compressed if they're B-roll; if they're A-roll, they're deleted. And then, for other use cases, we store test data on there, so I've got an isolated testing folder and an isolated folder for media production.

So, tell me about — there was a mouse button that was mapped to accidentally do... Yes. Andrew used to have mouse 4 mapped to delete. You still have something mapped to delete? Okay. So now — it's not only him working from here — but the first time I realized that, or one of the first times I remember, was when I was browsing the server folders, went into the wrong one, and clicked mouse 4 to go back, because that's the hotkey for back — and it started deleting. That was in 2018. So snapshots are going to be important for the solution that we deploy. Yeah. What I'm doing now in that instance: one, you go in and click cancel really fast, and two, I've got a recycle bin set up on the Synology side, so I can restore stuff if that happens. And we've got isolated user accounts, so certain permissions for testing versus media; that way there are no accidents that are, like, company-wide — just one group. So some kind of recycle-bin equivalent would be good, just for mistakes like that.

Also, the other nice thing about having a server, especially when you're sort of graduating into the server being able to do stuff, is that it can do double duty. Right now I think you've got a dedicated machine for transcoding. Yeah, and it's low on space. You'll do transcoding, but also things like a Steam cache for games. For your workload, that might not help much, but maybe the internet cache does.
But you can also run full virtual machines on your server system, if it's powerful enough. The Synology stuff can do that as well, but it's not really super powerful; if you're really pushing it to the ragged edge, asking it to do all those things, there are a lot of limitations — stuff like memory capacity — and also whatever CPU is in there is what you get; it's not like it's socketed. So yes, there are a lot of options.

The Steam cache is a cool idea too, like I was saying off-camera, but we don't really download games that much; we kind of pull all the ones we need for testing and keep them on dedicated test drives. But if a new game comes out, we probably need to download it two to four times. This way you can ensure that you only download it once, and then everything else happens at wire speed — and with the 10-gig interfaces, multiple machines can be pulling from that, and honestly the bottleneck is probably going to be decompression rather than network speed. Yeah.

The virtualization also means that if a workload comes along and you want to experiment with it, you could experiment with it in a virtual machine on one of your workstations, or just create a new virtual machine on the server, experiment with it there, and if it pans out, great — and if not, just wipe the virtual machine.

Oh, there's one more that just reminded me of: image hosting. Oh yeah. For pulling down images for test systems, right now we just clone drives on a machine. What I used when I worked at Dell — we used Ghost32, the Norton product, and it works great. You boot into PXE boot, and then you navigate to the server — you'd hit some server in a closet somewhere, pull down the image that you built maybe the previous couple of days, and just load it onto ten different laptops that are all the same SKU. Yep. So it would be nice to have an image store. Yeah, we can create a virtual machine to do that and to do the PXE boot. There are a couple of little things we'll have to reconfigure on the router, but once that's set, you will be able to boot off the network. It's pretty neat. Very cool. So we've got the use cases defined; if you didn't know some of that stuff was really possible, or hadn't thought of it, that gives you some ideas.

Let's get started with setup, I guess. We previously just put Unraid on a USB key; it's in the server now, and you said, "I'm going to need that." I guess that key stays in there permanently? Yep. Okay — why is that? That's what Unraid actually runs from. Okay. So Unraid will boot and run from the USB key, and that's to tolerate any kind of changes that you might want to make to the disks. And that chassis has four empty slots for mechanical drives, one empty slot for NVMe — but the disk shelf also has twelve empty slots. Okay, so we have expandability with the shelf. Yeah — if you want to add ten more drives, you can, or two, or one, or whatever. Okay, cool.

So the first thing: Unraid comes up in trial mode — it is paid software. I tested Unraid and ZFS; I'm going to have a separate video on that, about why, and about six different systems. But really, Unraid versus FreeNAS: FreeNAS was problematic for me booting on Threadripper with the newest AGESA — on every Threadripper system I had, it hung, and that was disconcerting — and it doesn't do hardware passthrough. I really like hardware passthrough; I do a lot of content on it. I'm going to sequester Windows to a VM, and now I trust it to run.
Then I can come out here in the real world, with Linux doing actual work, and Windows can have its little sandbox where it's eating paste and installing malware in the background. I don't know why Candy Crush keeps coming back; why does this keep happening? So, we're going to do "Get Trial Key," because we don't care; it's going to hit the internet, and that's basically it. I should mention we don't have the disk shelf connected right now — just the 14-terabyte disks. Although I don't see the NVMe; maybe we need to go toggle something on, or we bumped it or something earlier. Maybe the NVMe would show in a different area? No, that's not it.

If you want it all as part of the same pool — well, it won't let you do anything until you start an array, and the ZFS pool doesn't count as an array, so we end up using the NVMe with ZFS. A ZFS pool can be made of one or more vdevs, and each vdev is responsible for its own redundancy. If you lose one vdev, you lose the entire pool. Okay. So the 14-terabyte disks would be one vdev with one drive of redundancy; then we have twelve drives, so I'd probably put those into two vdevs of six drives, with one or two drives of redundancy each. So we'll have three vdevs, and as ZFS uses the disks, it runs performance counters, so it'll figure out how fast each vdev is. Okay. And based on how fast each vdev is, it will balance the read and write loads across them.

And then, from the user interface, how is that interpreted, as far as folder structure and file structure? You get to pick how you want things. It sounds like you need at least two shares, but you could even create individual user shares and give them a quota — so every user here could have their own username and password, and you could give them, say, a terabyte of space, and then it's like, "you need to clean up your stuff, because this doesn't work." We've also got flash and spinning rust, so we'll probably split the flash into its own storage, and that can be a work area, always guaranteed to be fast; later, we can add flash devices to use as caching if we want.

And then — this can be cut if we need to for time — for that working area, if we have active projects, is this an issue where we just need to get into the habit of manually moving the project over? Because, to automate it all, I was going to ask: is there, like, a cron cleanup weekly or something to move it? Yeah — I think, because your storage pool is so large compared to the amount of flash storage that you have, we can just set it up nightly, at, like, two or three in the morning, low priority, doing an rsync to a folder on spinning rust. Okay. So then, if something does happen to the flash, you've got last night's copy or whatever — but generally you work from the flash, and you would only go to that sequestered area if something really bad happened. Right. And then, I guess, delete it once the project is done? Well, that can be scheduled as well, because if you remove it, we can enforce deletes: if somebody removes it here, then the following night it will also be removed from the spinning-rust area. Okay. Cool.
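A minimal sketch of what that nightly sync could look like — the share names, mount points, and schedule here are assumptions for illustration, not the exact ones from the video or Wendell's guide:

```bash
#!/bin/bash
# nightly-mirror.sh -- hypothetical nightly copy of the fast NVMe share
# to a folder on the spinning-rust pool. All paths are placeholders.
SRC=/mnt/warp/              # NVMe working share ("warp")
DST=/mnt/tank/warm-mirror/  # mirror folder on the ZFS pool

# nice/ionice keep the job low priority so it doesn't fight real work;
# --delete enforces removals, so a project deleted on flash also
# disappears from the mirror on the next run.
nice -n 19 ionice -c 3 rsync -a --delete "$SRC" "$DST"
```

A crontab entry like `0 3 * * * /boot/scripts/nightly-mirror.sh` would fire it at 3 a.m., matching the "two or three in the morning" plan.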
So, we took a quick intermission to get the drive working; we had to switch drives — the other one was a SATA drive — so we swapped it out for NVMe, and now we're back on this page. Basically, this array is going to be throwaway, so we're just going to create it. Okay.

Unraid is weird, because of what it eschews — and I get it, because Btrfs was not trustworthy there for a while. It's gotten better, but some of the other file systems are a little sketch. The approach that the Synology takes is to use Linux md plus LVM: it creates a zillion LVM slices on md, and then, when you add another disk to the mix, it can sort of rebalance how things are laid out. Can I ask a question? What is LVM? Oh — it's the Logical Volume Manager. Okay, got it. It's like a partition manager on steroids. Yeah.

Unraid takes a different approach: it wants you to explicitly set parity drives — one or two parity drives — and then your data drives, and there's a little program that watches everything you do and makes sure that there are multiple copies of your files. We were talking about snapshots earlier; on the Unraid forums it's like, "oh, if you use Btrfs, make sure that all of the drives in your pool use snapshotting, otherwise some of your files will be snapshotted and some of them won't." That is a very odd way to approach that problem — speaking as someone who is, perhaps, an outsider to the Unraid community. Philosophically, I have a lot of problems with that choice, but it ticks all the boxes, so we're going to use it.

Okay, the next step is really to just start the array — and it's just the one thing participating in it — but now we've activated all the other functionality. The interface looks nice. Yeah, it's pretty clean. It's a little weird, because under VMs — well, we'll probably have to turn something on — but under VMs and some of the other pages it has some really esoteric features; it's really well built out. But then other things, like "I would like to run a task on a schedule" — no, you don't get a GUI for that. Okay, so you just build a cron file for that? Yeah. Okay — I'm used to that anyway. It's annoying, but why? Why do you do this?
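For reference, "build a cron file" on the Unraid of this era commonly meant dropping a file where the dynamix scheduler picks it up — the drop-in path and the reload helper below are my assumptions about that mechanism, and the jobs are just examples:

```bash
# Hypothetical hand-rolled schedule: monthly scrub, daily health check.
cat > /boot/config/plugins/dynamix/zfs-maintenance.cron <<'EOF'
# m h dom mon dow command
0 2 1 * * /usr/sbin/zpool scrub tank
0 8 * * * /usr/sbin/zpool status -x | logger -t zfs-health
EOF
update_cron   # dynamix helper that merges *.cron drop-ins (name assumed)
```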
All right, we're waiting on Docker and some other stuff to start, but that's fine. So this is running now, and by default the flash device is shared on the network; this is so you can manipulate the configuration over the network, or back up your configuration. Once we get this set up, you can literally copy-paste it, and if something happens to the USB drive, you just make a new one and paste the config back in, and you're good to go. Perfect. And we can see that we've got our drives here, with the temperatures reported — 35, 37, 38, 36 — and how many reads, writes, and errors have occurred, and things like that. Cool. And that will persist; they'll always show as unassigned, but they're actually going to be part of a ZFS pool. Okay.

I think the next step is probably to start following the tutorial that's on the Level1 website. Okay, so — did you write that? I did. Okay. Do I trust you? I think the tutorial's okay. Is it going to be like, "the next step is rm -rf, just type it into the terminal, and then 1-4-8-7-6-4"? It's freaky, though, that I responded to that one. I'm bad with sequences of numbers, as it turns out; with maintaining the order of sequences of letters in acronyms you're quite good, but numbers? No, that's questionable.

"ZFS on Unraid: here's how" — with the Shadow Copy setup. Shadow Copy is this really cool thing in Windows. I'm familiar with the name. If we go to — this is just the flash, so it doesn't count, but it works to show — the Previous Versions tab: this is built into Windows, this is great, and every NAS on planet Earth should use it. Samba has supported this forever on Linux, and Samba on FreeBSD — Samba on literally everything; the Samba that Macs use has a binary extension for handling this. The problem is that just about everything except FreeNAS skips it — and FreeNAS gets an asterisk there, because it's not really super idiot-proof out of the box — which is annoying, because this is a basic feature of a NAS that should be on all NASes. I think the Synology might do this out of the box, but in terms of the free software? No. And what we'll do is, once this is set up, as it creates snapshots, those are going to show up here. So we'll come back to that.

Okay, the first thing we need is the Unraid plugin — and this is the... yeah, let's paste untrusted code into the thing; it's how I built my first web server. Install plugin — you know what could go wrong? Just paste this link, it's totally fine. Right — earlier we were making jokes about the NSA and Russia and China having backdoors into the NAS that we just built, but actually, I think Steini1984 — a community developer — has a backdoor, because it's githubusercontent.com slash Steini1984. So, good job, 1984. Not ironically. The next plugin is github-user-slash-George-Orwell. And then this is the auto-snapshot script, although we're going to modify it a little bit.

All right — what do you want to call your storage pool? You can call it "tank"; that's pretty common. I called the test one that I brought "dumpster." A pool can be called anything, really. Where is it seen? Just on the command line, and when you get status messages, it'll be like, "storage pool whatever is currently experiencing an issue." Okay. How does it show — is it like sda, except for the whole thing? Yeah — or, more accurately, it'd be more like md0. Okay. It's a synthesis of all of your other devices.
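The guide's exact commands aren't shown on camera, but creating and inspecting a pool like this is standard `zpool` usage. A sketch, assuming the pool ends up named "tank" as it does below, with placeholder device names (a real setup would use /dev/disk/by-id or multipath names, as discussed shortly):

```bash
# First vdev: the 14 TB disks in a raidz1 (one drive of redundancy).
# Device names and disk count are placeholders.
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Later, when the disk shelf is attached, extend the pool a vdev at a
# time, e.g. six more drives with two-drive redundancy:
#   zpool add tank raidz2 /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk

zpool status tank   # shows the pool, its vdevs, and per-disk state
```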
"Tank" is fine, unless you want something different. No, it's fine. I don't a hundred percent remember the syntax for that — it's probably fine. Yeah, okay. So `zpool status` will show you: pool tank, current status online, scan: none requested, config — and then there's all your stuff. Okay. So this is raidz1, and then you've got all the disks. Yep. And the SSD is not in there yet? No — the SSD is going to be part of the Unraid system, so it's a different thing. Okay, and these are just the disks.

And we'll add more — that could be its own video — so the next vdev that we add will probably be raidz1 or z2, and then it'll list out all the other disks, and they'll have different letters. Okay. Actually, when we hook the disk shelf up, it's going to be multipath, so it's probably not going to be named a, b, c, d; we'll get these really long disk names, because each disk has a globally unique identifier, like a MAC address, and because there's more than one way for the system to get at them, that's handled on the Linux software side through something called multipath. Okay. ZFS is smart enough to know — it's like, "I just need this disk; I don't care how we get there." It's like, "I need to take a flight to LA, but I don't care if we go through Chicago or Austin, or whatever." Yeah — I'd prefer not to go through Chicago. Okay, great.

All right, what's the next step here? Now we need to do testing. I was using FIO for the testing, and for that you create a dataset. Your ZFS pool is itself a dataset, but you can create more; you'd normally think of them as directories, but they're directories on steroids, because you can tell ZFS, "I plan to put incompressible stuff in here, don't bother compressing it," or, "I plan to put this kind of data in there," and it will optimize how it stores and handles it. It also has to do with things like write synchronization. When you're using FIO, you create a test folder that tells the disks to be as conservative as possible, because otherwise ZFS is so sophisticated that it'll see FIO is trying to do a benchmark and it will optimize away the benchmark, so you get false numbers. I see — it's like, "you're running a benchmark, you're just handing me drivel; the correct answer is: this is really fast," and it caches everything. Right — you turn all that off, and then you run the tests.

Once we do all that, we set up ZFS automatic snapshots. This will be a cron job, and it's the first ingredient for mapping the snapshots to shadow copies on the Windows side. So even though Shadow Copy is a Microsoft technology and the snapshot is a ZFS thing, we're going to use this script to create probably a couple of snapshots a day, and then map those snapshots to the Shadow Copy interface on Windows, to make it easy. Okay. This is a very well-written guide, with screenshots — yeah, and that's what it will look like. You should post a reply: "thanks." That's good. It's all well formatted and everything, so we'll link it in the description below; I guess you'll send me a link when it's ready. If you want to follow this at home — we're probably going to cut steps in the video, because otherwise it'll be too long — it'll all be in his guide. Okay, cool. The next thing we do, once we get ZFS going, is start adding virtual machines and Docker containers.
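That snapshot-to-Previous-Versions mapping goes through Samba's shadow_copy2 VFS module. A sketch of the relevant share config, written as a shell append — the share name and the snapshot-name format are assumptions, and the format must match whatever names the auto-snapshot script actually produces:

```bash
# Expose ZFS snapshots as Windows "Previous Versions" on one share.
cat >> /etc/samba/smb.conf <<'EOF'
[holodeck]
   path = /mnt/tank/holodeck
   vfs objects = shadow_copy2
   shadow:snapdir = .zfs/snapshot
   shadow:sort = desc
   # Must match the snapshot names the cron script creates; this
   # pattern assumes zfs-auto-snapshot-style hourly names.
   shadow:format = zfs-auto-snap_hourly-%Y-%m-%d-%H%M
EOF
```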
Okay — so we, and by "we" I mean Wendell, did some work on the server, and we'll have an intermission in here again; you can follow his guide if you want the steps that we've skipped in the video. But you did some additional setup, and the real reason — you'll now understand — that I brought specifically this man out here to help is that I knew I wouldn't have to give him Star Trek terms for my folder names. Right — I walked out of the room, came back, and he's like, "Look, I made holodeck and engineering and warp." This was perfect. Yeah — you mentioned you needed two different security levels, plus we've also got the NVMe, so of course the NVMe share is "warp," and then "holodeck" and "engineering" are isolated security groups, so the researchers can't bungle the video and the video can't bungle the research. Right.

But we've also got extra safeties, which is that Shadow Copy stuff I was mentioning. You've got a demo of this already, I think? Yeah — so we go into the holodeck share; we copied the Windows ISO there, so we can set up a Windows VM — we'll get to that in a minute — and it's like, "oh no, it's been deleted!" Oh no! So we get the shadow copy from — let's see — we open it... this one's not far enough back in time, this is from, like, eleven o'clock. Go farther back in time and — there it is. Yeah. Then we can even open it and show everything's still there. It's still there; we saved it. Now, we're still setting up the time zones, so the times are a little funky, but we actually want to use local time, not UTC or California time — your network hands out California time for whatever reason. Because obviously we're in California; all tech YouTubers are. All roads lead to Los Angeles. Yes — all roads lead to Jay's Two Cents.

You know, this is really cool, because it's a simple Windows interface; you don't even have to browse to a web page or do anything — it's literally built into Windows — and this is what all NAS devices should do, even all the free distributions. And Unraid's not free; I was really surprised it doesn't easily do this out of the box, even if you're not running ZFS. I think ZFS's snapshotting capability here is far superior to even Btrfs — "butter FS," oh, engagement challenge — but I'm surprised it doesn't do this somehow out of the box.

Btrfs, again — why did you relate that to Facebook yesterday? Yeah — they helped fund a lot of the early development and use it internally, because it has some of the features of ZFS without all of ZFS's overhead and complexity. Okay — because if you're Facebook, and it's like, "man, I need a file system that does all this," ZFS ticks the boxes, but then you see that Oracle is involved — I mean, Oracle acquired the intellectual property after ZFS was open-sourced — and Oracle leaves a bad taste in everybody's mouth. Not the best situation, so Btrfs was more attainable. I feel like if Facebook is judging there, they're maybe not in a position to do that — but I understand. And ZFS on Linux is now a very mature project, so this is pretty decent.
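The same recovery Wendell demos through Explorer can also be done from the shell, since every ZFS snapshot is browsable read-only under the hidden .zfs directory — a sketch with made-up snapshot and file names:

```bash
zfs list -t snapshot -r tank/holodeck   # list the point-in-time copies
ls /mnt/tank/holodeck/.zfs/snapshot/    # each snapshot is a read-only tree

# Pull one file back without rolling the whole dataset back:
cp /mnt/tank/holodeck/.zfs/snapshot/zfs-auto-snap_hourly-2019-11-10-1100/windows.iso \
   /mnt/tank/holodeck/
```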
So, again, the interface — you did show me some... what did you call it? Oh — you showed me some aspects where the interface is a bit bipolar about what it offers you. Oh yeah — like the VMs. Check it out: we're going to add VMs, and you've got all this stuff, and it's crazy. I'm always talking about IOMMU and hardware passthrough, and we set that up, and there's a great interface for it; all you do is, I guess, load the ISO onto the server or something. Yeah, and it will let you get right at it.

A lot of other people could learn from Unraid's user interface here, because — I'm going to hit edit — this is our Windows VM that's set up, and we can access it through VNC, but we've also got hardware passthrough for the Navi GPU. Now, we still get the reset bug, so if anything goes wrong, we have to turn it off and turn it back on, because that's just Navi, right? This is a form GUI, and GUIs for manipulating VMs are not great sometimes, but this interface is like, "all right, just let me switch over and deal with this at the XML level." That's great; a lot of people could learn from this. But then it's like: I want to set up a scheduled task to check the ZFS health, or a scheduled task to restart the Docker containers, or a scheduled task to do whatever — and there's no interface for setting up scheduled tasks. So it's a little... it's like, this part is really sophisticated, and this part is rock-dumb simple. Just — yeah, why?

So, I learned a couple of things from you off-camera that we should go over — some of the specific naming terminology, the containerization of things. To recap — I don't remember how much of this we've covered at this point — but "containerization," as I'll call it, and you can correct me if that's not the right word: we've got a pool, and then there are vdevs under that. What does that stand for, again? It's a virtual device — really just the building blocks that make up a storage pool, and with ZFS you add storage a vdev at a time. And, if I remember this correctly: if I go to add, say, four more drives, then that becomes a vdev, and that vdev is responsible for its own redundancy. Yep. So if a drive fails on vdev one, and one fails on vdev two, there's no relationship between those failures — other than maybe you got a bad batch, in which case, yeah, you'd better get replacements — but in terms of how they're contained, you can replace one drive in each. And if we have one disk failure on vdev one and one disk failure on vdev two, that doesn't mean we have a two-disk failure. Yep. And you can mix one- and two-drive redundancy: with the external disk shelf, we can run raidz2 and tolerate two failures in any of the vdevs that are implemented with raidz2. You can mix and match, and it's basically okay — it's usually better to do some planning for performance reasons, but generally, the more vdevs you have, the more performance you get.

I think there was also — a "dataset," was that the other term? Yeah. With ZFS, the file system is ZFS, but you can create a ZFS dataset under the file system, and the dataset can have a bunch of tunables. So we've got a dataset for your Docker containers — Docker containers are like super-lightweight virtual machines, and we're using them for Steam caching, which we'll talk more about in a minute — and that file system is case-sensitive, because it's more like Linux. Whereas the engineering share and the work share are case-insensitive, because you're going to be mostly using Windows clients to access them, and while SMB will preserve case, it's supposed to be case-insensitive. Having those datasets means you don't need different partitions with different parameters: I can say this dataset is case-sensitive, this dataset is case-insensitive, and they all share the same pool of storage.
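A sketch of how that per-dataset split could be expressed — the dataset names mirror the video's shares, and `casesensitivity` is a real ZFS property (settable only at creation time), though these exact choices are my assumption:

```bash
# Windows-facing shares: case-insensitive, as SMB clients expect.
zfs create -o casesensitivity=insensitive tank/engineering
zfs create -o casesensitivity=insensitive tank/holodeck

# Container data: case-sensitive, Linux-style.
zfs create -o casesensitivity=sensitive tank/docker

zfs get -r casesensitivity tank   # verify the per-dataset settings
```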
And so this is the Docker functionality on Unraid, which is pretty good, although it's a little idiosyncratic. I wasn't planning on doing a tutorial on setting up, like, a LAN cache — which is like the Steam cache, but it's a Steam cache that also supports Origin and Battle.net and Windows Update and a whole bunch of other things. It supports caching Windows updates and propagating them to every device on your network. Yeah — I guess that's fine, but it does support it.

So, to step back a second: caching things like downloads from Steam is useful because then, like you were saying earlier, you're operating at wire speed rather than whatever comes out of the wall. Yeah — I think Borderlands 3 had, like, a 20- or 25-gig update, and if you're on a LAN with three or four other computers, downloading that Borderlands 3 update on a bunch of computers is just going to kill the connection, no matter what kind of connection you have. Yeah. So even if you're at home, or if you have people over to play games LAN-party style, it could be useful. I think PDX LAN may actually do something like this — yes, a Steam cache — specifically for that reason, because they have however many hundreds of people there. Yeah — Valve has actually been really good about supporting this kind of technology; they have a thing called a site license, where, if you're worthy, Steam will give you a site license, and it'll actually let you cache even games you don't have. Oh — so it'll just download the entire Steam catalog? That makes sense for a LAN event, because the LAN event organizers don't have all the games, but all the people there will. Cool.

And this is nice compared to a full virtual machine, too, because the operating system and the containers are in close communication about what resources they need. So instead of pre-allocating, say, eight gigabytes of memory the way we have for the Windows VM, these just use what they need, and when they're not busy, they use less memory — which frees memory for the host to use for file-system caching or whatever.

There was something you mentioned yesterday — you spent some time troubleshooting something and said, "I might update the tutorial for this, because nobody should have to go through this." Do you remember what that was? So, nginx — which serves this web interface — listens on all IP addresses by default, and in the Unraid GUI here I've got this LAN cache container, and the LAN cache container actually wants to listen on port 443, which is encrypted, via the SNI proxy. With encrypted traffic, what we're doing here with the caching doesn't apply: if Steam or somebody elects to use an encrypted connection, we can't actually cache that, but we need a mechanism to send the encrypted traffic on to where it needs to go — straight to the computer — without messing with it.
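A rough sketch of the three-container arrangement being described, using the lancache project's images — the image names, environment variables, and IPs below are my assumptions, not taken from the video. Giving the cache and the SNI proxy their own host IP is the workaround for the port-443 collision with Unraid's nginx discussed next:

```bash
# DNS: answers game-CDN hostnames with the cache's IP.
docker run -d --name lancache-dns \
  -p 192.168.1.50:53:53/udp \
  -e USE_GENERIC_CACHE=true -e LANCACHE_IP=192.168.1.51 \
  lancachenet/lancache-dns

# The cache itself: stores downloaded chunks on the ZFS pool.
docker run -d --name lancache \
  -v /mnt/tank/docker/lancache:/data/cache \
  -p 192.168.1.51:80:80 \
  lancachenet/monolithic

# SNI proxy: passes encrypted (443) traffic straight through, uncached.
docker run -d --name sniproxy \
  -p 192.168.1.51:443:443 \
  lancachenet/sniproxy
```

Clients then need to receive 192.168.1.50 as their DNS server — the DHCP reconfiguration mentioned a little later.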
For some reason, in the Docker GUI here — even though you can specify by IP address — you can't share an IP address between containers; from the Docker command line, you can. And one of the problems I ran into is that the nginx proxy was listening on these ports, and adding another IP address to the machine meant that this web GUI also used that IP address. There's a workaround for that in the web GUI, but it means you still can't share IP addresses between containers, which is a mis-implementation, I think, on Unraid's part. I'll update the tutorial to include that, if you're going to do this — because we've got all three containers running here, and otherwise it's like: you go to do this, you just drag and drop, and it won't do it; it'll say, "I can't start this one because that one's running," or vice versa. Yeah. Right.

And what else do we want to show in the interface? The VM, I guess; we can show that if it's still there. Oh yeah — so you've got a Windows VM set up, and this is going to run your transcoder — right now with the 5700 XT, maybe eventually a 10-series NVIDIA card with the Code 43 workaround, because NVIDIA is all anti-VM, but you can work around it with this setup. So that's nice. Yeah — so we can continue using our existing scripts and not rewrite them, and one machine is now doing the duty of what was previously two machines. So the electricity savings alone... well, yeah, and the space savings, as always, are my biggest thing: if we can compress multiple machines into one box, that's always better.

The really awesome thing about this 5700 XT is that it can do multiple H.265 streams in real time — probably two 60 fps streams. Okay. I was doing some testing with Adam, and I think we got two 60 fps H.265 streams, so it's like Navi's encoder is unlocked. It's a little sketchy with the H.264 encoder — they've been working on fixing some bugs around H.264 — but mostly what you're doing with the script looks like H.264 to H.265, so that should be a good use case, if HandBrake can use the hardware in Navi. Yeah, right — and if not HandBrake, something can; there are options, and we can rewrite it if necessary. HandBrake was just what we happened to use, because it was accessible and I was familiar with the interface, so it was easy to set the flags and then copy-paste them.

A VM that we plan to add here is a Linux VM running FOG, which will enable PXE boot, and with PXE boot you'll be able to boot from the network and do imaging and that kind of thing. Yeah — so, being able to boot over Ethernet as one of the boot devices in the BIOS, and then go grab images off the server, which we talked about previously. Yep. Some of the magic here also requires a little bit of reconfiguration on your DHCP server, to use this box for DNS so the caching works — but, again, tutorial. Right.

So this is the first run on downloading a Steam game, and what it's showing us, basically, is: is it a hit or a miss, and what chunk are we downloading. As we go through and download whatever game this is from Steam, ideally we see "hit," because that means it's coming from local storage instead of from over the internet. Right — and these chunks are being cached locally on the NAS as well as being forwarded to this Windows client, so the next machine can pull them from the NAS.
Which is cool, because we're in an unfortunate era where some people — not us, thankfully — have bandwidth caps from their ISPs, and if you have multiple people in the house playing the same game, or multiple machines in the office, and you're limited on bandwidth, this could be useful there as well. So it slices, it dices, it makes julienne fries — it's the network-attached server; it does everything.

Very nerdy info: do you, or someone you know, use the Btrfs file system? I'm so sorry — you need to stage an intervention, call a helpline. No, actually — I did test Btrfs a lot for this, doing my homework for this collaboration, and Btrfs is actually pretty good. I was a little scared off by the whole Unraid "let's DIY the cache" thing. ZFS — I trust ZFS. Yeah.

So that's the server setup. I think we'll cap this at two parts for the series. Pretty straightforward; there's additional stuff I still have to do after you leave to finish configuring it — and, yeah, a tutorial. I got it usable for us, and it's pretty much there. And then we have a separate video we've already filmed — not sure what order they're uploading in — and that is the EPYC server video. Yeah, and you should definitely check that video out; it's really cool stuff, and the fans spin fast — it's a cool demo of linear-feet-per-minute airflow. I had a blast putting this together, so thank you for having me out. Very gracious host; it's been awesome. You did all the work — appreciate it. Thank you. Check out Level1Techs — subscribe, and there's a link to them in the description below — and we'll see you all next time. [Music]
Info
Channel: Gamers Nexus
Views: 284,163
Keywords: gamersnexus, gamers nexus, computer hardware, level1techs, wendell, diy server, diy nas, diy nas build, diy nas build guide, diy raid, unraid, unraid tutorial, zfs tutorial, unraid zfs, unraid with zfs, set up unraid, ryzen server build, diy home server, home media server, unraid install, configure unraid, zfs on unraid, steamcache
Id: SqaAmVN4J4A
Length: 35min 41sec (2141 seconds)
Published: Sun Nov 10 2019