Exploring MAAS with LXD

Captions
Okay, looks like I'm live. So today the plan is to look at LXD and how it integrates with MAAS. MAAS is Metal as a Service, another Canonical product. It's mostly used for bare metal machine provisioning and management, but it can also interact with LXD and with virtual machines, so we're going to take a bit of a look at that.

First things first, let's jump into a terminal here and make it a bit bigger. It's the same system I used to demo running Ubuntu desktop inside of virtual machines, so we still have those virtual machines and containers left around. Let's just clean things up quickly, because we don't really need those right now and they're just making things slower. There we go, that cleans all of that up.

Now that everything is nice and clean, let's launch a new container from the official images, Ubuntu 20.04, and call that container "maas". MAAS itself runs just fine inside an LXD container; there's no real reason to use a virtual machine for it, so that's what we're going to do. I just need to download that image since apparently I didn't have it already. It's going slightly slower than I'd like; Ubuntu 20.10 was released yesterday, so I can expect some of those mirrors to be a bit busy right now. Actually slow enough that I'm just going to give up on this particular one and use the other image server, which has slightly different images but is quite a bit faster. There we go. The most important difference between the two is that those images are not official Ubuntu images, they're not tested by Canonical before being published, and they have some slight differences, like snapd not being installed inside them. That's actually slightly inconvenient in this case, since we want to install MAAS from the MAAS snap, so we need to fix that by installing snapd in there. There we go, snapd is installed, and now we can snap install maas. It's installing the core snap and installing MAAS itself, and we should be good to go.

MAAS has a web interface, and that's how most people interact with it, so let's open Firefox and check. MAAS is now installed; let's get the IP address of that container and go look at it. I believe it's port 5240, /MAAS. Hmm, maybe not. Let's see what's going on in there. Okay, we've got the MAAS supervisor running; I'd bet the rest of MAAS starts too, that would be nice. Let's see if anything failed. Nope, which is a good start. So what's going on with MAAS? The service is still active, which should be a good sign, but we mostly just see the supervisor running. I should probably just look at the install instructions, frankly, because I've not installed MAAS from scratch from the snap in a while, so it's quite possible there are some extra steps I completely forgot.

Okay, "install the snap"... oh yeah, that's what I forgot: it got installed but never actually got initialized. What does it say? Run "maas init" in the specified mode. We're not going to split things up into a region controller managing multiple racks with separate rack controllers; it's all going to run on one machine, so we want region+rack. And it says that for a non-production environment on this machine you can just snap install maas-test-db, otherwise MAAS needs to connect to an external database. So we either need to install PostgreSQL ourselves, or we can just do what it says and install the maas-test-db snap.
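For reference, the install steps up to this point look roughly like this from the host's shell. This is a sketch rather than exact output from the video, and the maas init flag names vary a little between MAAS releases:

    # launch a container for MAAS (the community image server, no snapd preinstalled)
    lxc launch images:ubuntu/20.04 maas

    # install snapd, then MAAS and the small test database inside the container
    lxc exec maas -- apt-get update
    lxc exec maas -- apt-get install -y snapd
    lxc exec maas -- snap install maas
    lxc exec maas -- snap install maas-test-db

    # initialize MAAS as a combined region+rack controller using the test database
    # (exact flags depend on the MAAS version; this is the 2.9-era form)
    lxc exec maas -- maas init region+rack --database-uri maas-test-db:///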
The MAAS I usually interact with is the one we use for LXD development, and that one has a proper PostgreSQL database alongside it; it started as a deb install that later transitioned to the snap, which is why I'm not particularly familiar with this from-scratch setup. It's literally a copy-paste of the init command, so that's easy enough. It actually prints the URL we need to open in Firefox, which I think is exactly what I opened earlier, except MAAS wasn't running yet. Okay, let's try anyway. Oh, it decided to open Chrome for some reason; nothing special. So we're just waiting for the database to be initialized at this point, which I'm hoping won't take super long. Once that's done I'd expect the web interface to start working, which it isn't yet. So it looks like maas-test-db is a snap that bundles PostgreSQL and just gets you a basic single-database deployment.

All right, MAAS is now installed. We're going to do a local admin setup, so we just need the "maas createadmin" command. The other options are external authentication, or Candid with RBAC, which is definitely not something we want for a simple local setup. It can also import SSH keys for us, which is nice; it's not going to do me much good in this case because this virtual machine doesn't actually have an SSH key, but otherwise it would have imported everything. All right, port 5240... close enough. And here we go, this is MAAS and we can now log in. We can already see that my user has an imported SSH key. What we should probably do is create a new SSH key, just with the defaults, and import that into MAAS, so that when it deploys systems for us afterwards we can SSH straight into them. Okay, the SSH key is all done.

Now, on to the dashboard, and we've got a nice and empty MAAS. Name of the region: don't really care. Connectivity: we could set something different here if we wanted different DNS or archive servers; because I'm in North America, let's use the us. mirrors for both, which tends to be a fair bit faster around here. Images we wish to sync: let's do 18.04 and 20.04, and we'll only do Intel 64-bit. Update selection, and that should start downloading images for us. Okay, that's all good, it's done loading; continue setup. And here we go, we're on the main view of MAAS that shows all the machines we have, which is currently none at all. That's perfectly fine.

The other thing we need to do now is create a network for MAAS to operate on. If we get out of the MAAS container and look at the network list, we see we're currently on lxdbr0, the default bridge we got out of "lxd init" the last time I set up this VM. That bridge is perfectly fine: it has DHCP, and router advertisements for IPv6, with both IPv4 and IPv6 configured. But we don't want MAAS to run directly on it, because MAAS will operate its own DHCP server and that would conflict with what we already have. Instead, we're going to create a new bridge, let's call it maasbr0, and in this instance we'll still let LXD set up IPv4 and IPv6 for us. Now if we list the networks we see it picked a new subnet; that's fine, but we don't want a DHCP server on there, otherwise we're going to have problems, so what we'll do is set ipv4.dhcp and ipv6.dhcp to false on maasbr0.
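A sketch of that bridge setup; the subnet LXD picks will differ from run to run, 10.56.61.0/24 is just what it happened to choose here:

    # create a second bridge for MAAS, letting LXD pick the subnets
    lxc network create maasbr0

    # turn off LXD's own DHCP on it, since MAAS will be the DHCP server
    lxc network set maasbr0 ipv4.dhcp false
    lxc network set maasbr0 ipv6.dhcp false

    lxc network list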
And if I could type properly, that would work a lot better... there we go. At that point, if we were to launch a container on there, it wouldn't get an IPv4 address; there's no DHCP running anymore. But what we need to do now is attach the MAAS container onto that network too. Let's do that with a device add: we add a device to the instance called maas, call the device eth1, it's a nic, we want it connected to maasbr0, and we want the interface name inside to be eth1 as well. I'm really bad at typos today: it's "maas", not "mass". There we go.

Let's go back inside our MAAS container. Is net-tools installed? It's not; I'm still used to using ifconfig instead of ip, just an old habit. Okay, so eth0 is up and eth1 is there but not up at all. What we're going to do is add a basic netplan configuration for eth1. As I said, there's no DHCP, so there's no point in trying that; we're going to set a static address. I can't remember if the key is "address" or "addresses"; I think it's "addresses". And we need the right subnet, which I forgot... okay, it's 10.56.61.0/24, so let's give the .10 to MAAS, with a /24. We don't actually need to specify a gateway in this case, because we'll keep eth0 and use that as the gateway. So let's save that and do "netplan apply". Okay, we've got an IP on there now; if I refresh "lxc list", we'll see an IP on both eth0 and eth1, and that should be fine. I'm not completely sure whether MAAS will just notice the new interface, so to be safe let's restart the container. Now, back to the web interface; MAAS usually notices reasonably quickly when things are back online, so we can just keep refreshing while it starts up. Yep, and here we go.
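The equivalent commands, roughly; the 10.56.61.0/24 subnet and the .10 address are just what was picked in this run, and the netplan file name is arbitrary:

    # attach a second NIC to the MAAS container, connected to maasbr0
    lxc config device add maas eth1 nic network=maasbr0 name=eth1

    # inside the container, give eth1 a static address with netplan
    lxc exec maas -- sh -c 'cat > /etc/netplan/99-maas.yaml <<EOF
    network:
      version: 2
      ethernets:
        eth1:
          addresses:
            - 10.56.61.10/24
    EOF
    netplan apply'

    # restart so MAAS definitely notices the new interface
    lxc restart maas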
So now we're going to configure some networks. First of all, let's look at Controllers. The controllers are effectively the MAAS systems themselves; we only have one, a single region+rack controller, because it's all running on one machine, and that's fine. Here we can see what's in there: some hardware information, the CPUs, the amount of memory and so on. We haven't actually set any limits on that particular container, so I guess we could do that now: limits.cpu, let's give it two CPUs; memory, let's give it four gigs; and to avoid issues, let's allow it up to 5000 processes. Let's see how quickly that refreshes, whether it pulls the information again... no, it doesn't, so I guess we'd need to restart MAAS if we really wanted it refreshed. It's just a detail, though.

We can also see what the controller is connected to. There we go: we see the 10.x subnet, which is lxdbr0, and we see 10.56.61.0/24, which is maasbr0. We don't want MAAS to mess with the first one; it can mess as much as it wants with the second one. So let's see if we can configure that. Let's rename the fabric and actually call it maasbr0; it'll be easier for us to recognise. Then we see the different subnets on that fabric, the 10.56.61.0/24 here, and let's edit it. Let's call it maas-ipv4. We could have MAAS automatically scan for whatever is already on it, but it doesn't really matter; there isn't going to be anything on there other than what MAAS deploys. "Managed allocation" means that MAAS will take control of the subnet, which is exactly what we want. It does ask for the gateway IP, because MAAS doesn't have an address on that subnet itself, so it doesn't actually know what the gateway is; we'll just specify it. I think we can keep DNS empty and it should work; if it doesn't, we'll go back and configure something. Save that. Let's do IPv6 as well: same deal, maas-ipv6, managed, and everything is good. The gateway is going to be the first address on the subnet, and again we won't touch DNS. Save that.

Now, back on the fabric view, we see maasbr0 with our two subnets, and they have names, maas-ipv4 and maas-ipv6. It still shows DHCP as disabled, which is a bit odd... is that set on a per-VLAN basis? Yes, it is. So we're going to enable DHCP, provided directly from our rack controller. The default ranges look fine; enable. I wonder if we need to do it again for IPv6 or if it handles that itself... okay, so I guess we "reserve a dynamic range", which seems a bit more manual than I would like, but fine, let's do the equivalent for v6 and see if that works. It failed, so I guess it doesn't work that way. What does the other option do? Okay, that one is for reserving addresses outside of MAAS's management. Anyway, we've got DHCP running here; there's nothing else to configure, and reconfiguring DHCP over and over doesn't do much good, so let's leave it at that.

Now if we go back to the controller, we should see that DHCP is MAAS-provided... and we actually do see that DHCPv6 is enabled too, so it was just the UI being a bit confusing earlier. So that's perfectly fine: we've got v4 and v6 with MAAS-provided DHCP on maasbr0, and absolutely nothing running on our traditional lxdbr0 bridge. We could look at logs and events and some other things here, but there shouldn't be anything else to touch at this point. You can also go to Subnets directly to see maasbr0 with the VLAN we've configured and the current allocations, but that's all fine. We're not going to mess with availability zones, since we have a single rack anyway. The images should have synced by now... yep, 18.04 and 20.04 have been imported, all good. And on to DNS: this is where you can set up the domain; right now it's set to .maas, which is probably good enough.
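Everything above was done through the web UI, which is what the video uses. The same DHCP setup can also be driven from the MAAS CLI from inside the MAAS container; this is an untested sketch, where $FABRIC_ID and $RACK_SYSTEM_ID are placeholders you'd look up from the read commands, and the dynamic range is just an example:

    # log the CLI in with the admin user's API key
    maas login admin http://localhost:5240/MAAS/api/2.0 "$(maas apikey --username admin)"

    # look up the fabric ID of maasbr0 and the rack controller's system ID
    maas admin subnets read
    maas admin rack-controllers read

    # DHCP needs a dynamic range, then it can be turned on for the untagged VLAN
    maas admin ipranges create type=dynamic start_ip=10.56.61.100 end_ip=10.56.61.200
    maas admin vlan update $FABRIC_ID untagged dhcp_on=True primary_rack=$RACK_SYSTEM_ID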
So anyway, let's get to the interesting part where we'll actually add some machines. MAAS is effectively set up at this point: we've got a working network that it provides DHCP on, and we've got LXD that can run containers and VMs. MAAS doesn't really know how to provision containers, because a container isn't something you can PXE-boot, but it definitely works with VMs, so let's set some of those up. We could say vm01, a virtual machine, and we want it to be empty... actually, let's do something slightly different first: let's create a new profile called "maas". On that profile we add the root device, that's the root disk where all the data is stored; the pool for it is going to be the default pool, and we need path=/ to have it act as the root disk. There we go.

Then we add a network card, so a nic device named eth0 on the maas profile; the network is maasbr0 and we'll call the interface eth0 inside as well. Now, if I look at the two profiles we have: the default profile from the original lxd init has an eth0 device attached to lxdbr0 and the root disk; the maas profile is very similar, with eth0 attached to maasbr0 and the same root disk. The main difference is just that we attach to a different bridge. The other thing to do, let's edit that profile, is that for virtual machines, unless you specify otherwise, the root disk is going to be 10 GiB. That's not a lot, so we're going to bump it up a bit and go with 20 GiB.

Okay, now we can create a new VM; might as well launch it directly. So we'll launch a VM called vm01... sorry, I forgot a tiny detail: we want to fix the boot order, so we want boot.priority on the NIC set high. That ensures that even if there's something bootable on disk, we still boot from the NIC first. That's important because MAAS needs a way to redeploy a virtual machine, and it can't do that if the machine doesn't network boot. So, forgot that detail, fixing it now. For anyone who wants to see the whole thing, that's what the profile looks like. And now we can finally launch vm01: it's empty, so there's no source image, it's a virtual machine, and it uses the maas profile. A full sketch of these commands follows below.

There's a question about whether MAAS can deploy containers as well as virtual machines. MAAS can only deploy virtual machines, but it can understand containers and manage IP addresses and DNS for them; we'll show that a bit later. For now, that virtual machine has been created and started, so let's look at what it does. We can see that it has effectively PXE-booted and is showing "booting under MAAS direction". The first time a VM, or a physical machine, boots against MAAS, it does a very quick initial run which makes it show up in the machine list; that just takes a little while. Once it's in that list you can run commissioning, which starts it back up and goes through checking the hardware: pulling the storage and network information, validating that the network is the one MAAS thinks it is, and once that's all good, the machine is marked as available inside MAAS. Then you can configure the storage and network layout you want and deploy whatever you like on it. Right now the VM has just booted a basic ephemeral Ubuntu from MAAS, running completely off the network with nothing on disk, just to make sure it can talk to MAAS; then it will shut down, and once it has, we'll have an entry in MAAS that we can edit and use to deploy whatever we want. That shouldn't take very long. It does all of this through cloud-init, so cloud-init is running in the background, running a bunch of checks, and right now updating the package lists so it can grab some extra bits if it needs to.

Let's see if MAAS has started showing the entry yet... no, it hasn't. While that's going on, let's add a few more, because one VM is fun but it would be better to have a handful. Let's do five: I'm just going to copy-paste the command for vm02, then 03, 04 and 05. I'm a bit lazy, so I'll fire them all off and let LXD sort it out.
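Put together, the profile and VM creation look roughly like this; the 10 for boot.priority is arbitrary, it just needs to beat the disk:

    # a profile for MAAS-managed VMs: root disk on the default pool, NIC on maasbr0
    lxc profile create maas
    lxc profile device add maas root disk pool=default path=/
    lxc profile device add maas eth0 nic network=maasbr0 name=eth0

    # always try network boot first so MAAS can redeploy the VM later
    lxc profile device set maas eth0 boot.priority 10

    # VMs default to a 10GiB root disk; bump it to 20GiB
    lxc profile device set maas root size 20GiB

    # create empty VMs using only that profile, then start them
    for vm in vm01 vm02 vm03 vm04 vm05; do
        lxc init "$vm" --empty --vm -p maas
        lxc start "$vm"
    done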
That first virtual machine is now pretty much done with its initial run and we're seeing it show up here. MAAS doesn't know what it's called, so it appears with a random name. It's best to rename it to its actual LXD name quickly so you have an idea of what it is; it gets very confusing very quickly if you don't rename things fast enough to remember which is which. You can always line them up by MAC address, but that's always a bit more annoying than it needs to be. Is it still running its commissioning run? Right now it's analyzing the network to see what the switches are; since there are no actual switches here, just a virtual bridge on Linux, it's not going to find much, but it still works.

There's a question about LXD profiles and where they come from. Profiles live in the LXD database; they're just a bunch of YAML, and you can totally distribute them in source control if you want. For example, I could write my maas profile out with "lxc profile show maas > maas.yaml"; then create a new profile, "lxc profile create blah", and load the YAML back in with "lxc profile edit blah < maas.yaml", and now if I look at blah, it looks just like the maas profile. That's how you'd store and restore profiles across different systems. There's also a way to copy profiles between LXD remotes directly, with "lxc profile copy".

So vm01 is now ready: it's in MAAS, it has been commissioned, and MAAS detected one CPU, one gig of RAM, which is the LXD default, and 20 gigs of storage. What MAAS doesn't know at this point is how to manage its power; there's no way for MAAS to actually turn it on. If you deploy it now, it will just be in manual power mode and ask you to do it yourself, which isn't very convenient. There's a way around that, but it requires LXD to listen on the network and to have a trust password, and the current LXD has no configuration for that. So we're going to configure LXD to listen on port 8443 on every address and set the trust password; let's go with "blah" as an easy one. Now in MAAS you can choose the power type "LXD (virtual systems)", then the IP address of the LXD host, in this case 10.56.61.1; the instance name, which here is the same as the name in MAAS, so vm01; and the password we used, "blah". And we're done.

Now, this machine currently has one CPU and one gig of RAM, which isn't a lot, so let's change that. We'll modify the profile: limits.cpu, let's be generous and give it 4 CPUs, and 8 gigs of RAM, so every VM using that profile now has four CPUs and eight gigs of RAM. MAAS won't know about that; the only way it finds out is if you commission the machine again, which is fine, we were going to do that anyway. Since it's a virtual machine, MAAS can run a number of tests during commissioning, but there's not much point in validating that the hard drive is in good shape, so you can skip that; on a virtual machine you're never going to get a useful result from it.
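The LXD side of that power setup, plus the resource bump, as commands; the trust password is obviously just a throwaway value for the demo:

    # let MAAS reach the LXD API over the network and authenticate with a trust password
    lxc config set core.https_address "[::]:8443"
    lxc config set core.trust_password blah

    # give every VM using the profile more resources
    lxc profile set maas limits.cpu 4
    lxc profile set maas limits.memory 8GiB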
Okay, and we've got our four other VMs showing up here. As I mentioned, it's a bit annoying because you get random names and don't really know which is which. I'm lucky in that I dumped most of the IPs earlier, so I know that .192 is vm02; we can rename that one to vm02. Then .193 is vm03 and .194 is vm04, so it works out that they came up in the right order, which is nice and sequential. Let's just rename them all: vm03, vm04, and vm05.

Now, for all of them, we need to configure that power control bit, so let's open them in different tabs and head into the configuration for each. This one is 05, so for the IP address let's copy-paste that, instance name vm05, and we shouldn't need to enter the password anymore, because MAAS will have already added its TLS certificate to LXD's trust store; you only need the password the first time, after that anything else you add doesn't need it. Then down to vm03, save, and the last one.

Now I'll select all the new ones. Just to check: at this point only vm01 should be running... yep, all the others are stopped. We select them all and commission them in one shot, and again, let's skip that disk test script and just run. They're all moving on to commissioning now, with vm01 quite a bit further along already. You can actually attach to the machine and get a live view of what's going on. There we go: it's currently running a test that takes about 50 seconds, which just waits on the network, so we've got another 20 seconds or so on that one; the last test after it is pretty quick, and then the VM will be ready.

There's another question about access control and whether we can go through some of that. We can definitely show something: Candid can be used for external authentication and a basic amount of access control. It's effectively server-wide: you can control which group from which authentication provider in Candid can talk to a given LXD system, but you don't get much more granularity than that. We could also show Canonical RBAC, an additional service that comes with Ubuntu Advantage, which gives you per-project granularity for access to LXD. So we can definitely go through that and show how it works at some point.

Okay, for now that first VM is ready, and we should see it update... yep, four cores now, eight gigs of RAM, still one 20-gig disk. So let's deploy something. We'll just hit deploy, and fine, we can do 18.04 on this one, and we'll do 20.04 on whichever one is ready next. If you want to keep an eye on what's going on you can always attach the console. Right now they're all running; vm01 is about to get started by MAAS, we can see it powering it on, and there we go, it's powered on. If you attach with the VGA console, you can watch it start PXE over IPv4, pull the bootloader, then the kernel and the initramfs, and it will eventually show more as the deployment goes on. Deployment takes a bit longer than commissioning did, because it actually needs to write things to disk. The other machines are almost done with their commissioning, at which point we'll deploy 20.04 on one of them, and do some slightly more advanced config on one of the others, to show how we can easily run containers inside those virtual machines and have those containers also integrate with MAAS, and then it will probably be a wrap.
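Watching a VM boot from a terminal is just a matter of attaching to its console; the VGA option opens a graphical viewer, which assumes a local SPICE client such as remote-viewer is installed:

    # text (serial) console
    lxc console vm01

    # graphical console, useful for watching the PXE boot and the installer
    lxc console vm01 --type=vga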
Let's see how far vm02 is with its commissioning; I suspect it's probably on that last test, but maybe not, I'm not sure what it's on. If you want to check in more detail, you can always go into the machine's history, and it shows everything from the time the machine was turned on: getting files over TFTP, the PXE boot itself pulling the kernel, and then, once the system is booted, cloud-init running all of the different tests it needs to run. So you can always do that to get more detail. Oh, there we go, this one is now actually commissioning, and we get a bit of a sense of how long things take: most of the scripts are pretty much instant, some can take up to five seconds, like the one currently running. There we go, the commissioning run itself is quick, and now we're on the 50-second network one.

On the machine we're deploying, it looks like it has booted into the ephemeral environment that's used for the installer, and it's now running cloud-init, which talks to MAAS and downloads the installation code to then dump an image onto the system, effectively. While that's running, one thing that's always kind of fun is other operating systems, so let's sync a CentOS 7 image as well, so it's ready when some of the other VMs are. Did it save the selection? Yeah, it's queued for download, so that's good to go.

Back on the machines list, we see vm03 is done with its commissioning; again, the number of CPUs and everything has been properly updated, and the other two should be almost ready. So let's deploy 20.04 on vm02: deploy, Ubuntu 20.04, apply. Our 18.04 one is still busy installing; the first thing on screen here is the installer doing the partitioning, and you get the live state showing up directly in MAAS too. We saw it setting up storage, and now it shows "installing OS". You can get the same view if you open the machine and go into its events: it's currently installing the OS, and if you open the details you can see even more. It's writing the install source to disk; earlier it was doing the formatting and some other bits, and here we can see exactly what it's doing: running rsync to copy the root filesystem onto the target drive. Once that part is done there's still a bit more to go, because it needs to set up a bootloader, create the account, prepare the initramfs and quite a few more steps, but it's usually pretty quick at installing machines that way. Especially when, unlike in this case, you're not dealing with a nested virtual machine, which is not the fastest environment. In the MAAS that we run for the LXD team, we've got physical systems that deploy onto very fast NVMe SSDs and take just a couple of minutes to install, and virtual machines on very fast storage that also take just a couple of minutes to deploy and be ready.

But yeah, here we see it's now installing the kernel, so if we go back to that page and refresh we should see some movement... yep, "installing kernel". This actually gives you pretty good timing for how long the installer takes: in this case, let's see exactly when the VM was booted... "node powered on" was at 7:35 UTC and we're now at 7:40.
So we're just five minutes later and it's doing the kernel install; it looks like it's pretty much done with it, actually. Right, so now it's unmounting everything, and it looks like it's about to clean things up and reboot, and we'll be done. One thing that's maybe worth noticing is that when the virtual machine you're attached to reboots, LXD will actually disconnect you; you'll never see an actual reboot sequence while attached to the VGA console. To apply VM config changes, LXD always wants a rebooting VM to fully stop, so QEMU exits, which lets LXD reconfigure it if needed, and then it starts back up. That makes it slightly annoying to stay attached, because you can't remain connected across the reboot; the VM still reboots and comes back online, you just need to reconnect. Okay, it's doing the unmounting and cleanup, so it's probably... let's see what it logs... yeah, it's configuring apt and a few bits of the system before the reboot.

Let's see what the other ones are doing. Okay, we've got vm01 and vm02; on vm03 we'll do some special config, and let's see if we can do CentOS on vm04 already or if the image is still downloading. Operating system: CentOS, CentOS 7, okay, fine, deploy. So that one is going to be deploying; in the meantime, let's go and edit vm03.

You can change quite a lot inside MAAS about how you want a system deployed. For example, under storage we see it's just going to do basic partitioning with an EFI scheme: the 400 MB EFI partition followed by roughly 20 gigs of ext4. But you could totally change that; there's really nothing preventing you from doing whatever you want. So you can go in and say, okay, I want to remove that partition (not the special EFI one) and instead create a new partition here, whatever size and whatever filesystem you want. In this case I don't have a reason to do anything special, but we could do, say, 15 gigs of ext4 on /, followed by another partition using the rest of the space as, I don't know, XFS mounted on /data, just to show you can do that kind of thing. The part that's more interesting to us for LXD is the networking side. Actually, on storage, I don't really want that /data filesystem; I'll keep it as a partition, so the partition exists but stays unformatted. That way we can feed it to LXD as the backing device for a storage pool. So in this case we have a partition, sda3, that's six gigs large with nothing on it; that's fine.

On the network front, we've got a single NIC here, called enp5s0. For LXD that's not super convenient: we could do macvlan on it, but it's not the best setup; a bridge would be a lot more convenient for us. So let's see if we can do that. You take the device and do "Create bridge"; we'll call it br0, the default VLAN is the only one we have, and we want to attach it to the IPv4 subnet with auto-assign, then save. And so far we've not done any IPv6, so let's change that: the way it's done is you add an alias, an alias interface on top of that device, and then pick the IPv6 subnet with auto-assign.
You end up with both addresses on the one br0 device, and now we can hit deploy, with 20.04 on this one too. Okay, it's deploying, so we'll give that a moment. vm01 is now rebooting; as we can see, we got dropped from our console, and MAAS must be doing the power cycle right now. This all goes quite a bit faster when you're not in a nested VM and recording at the same time; QEMU is quite a bit slower here than usual. I'm actually kind of surprised that vm01 is not starting back up yet; not sure what's going on. Let's see if QEMU is still around... right, it's not. Okay, well, maybe it needs a bit of help; okay, that worked, so let's attach back onto it. This time the VM should be fully functional Ubuntu 18.04. We see it still boots over the network, but MAAS should just tell it to continue booting locally instead... yep, there we go, booting from the local disk, and then on to GRUB... "bootloader has not verified image", and it halts. Huh. Okay, so it's a Secure Boot issue; that's why the VM didn't start up. That's interesting, I would have expected 18.04 to do Secure Boot properly, but apparently not. So we're going to disable Secure Boot on that VM and start it back up. It'll be interesting to see whether 20.04 and CentOS work properly or whether they also need Secure Boot disabled; if they do, it may be the way MAAS sets things up that makes Secure Boot misbehave. If that's the case we can just turn it off in the profile and it will apply to all the VMs, but as much as possible I'll try to keep Secure Boot enabled; it's always good to have your bootloader and kernel validated at boot time. In this case it didn't seem very happy about it, though.

All right, that should have fixed that particular issue... yeah, it's not failing this time, that's good, so we're booting up. What's the state of things? vm02 is also rebooting; let's see if it did the same thing and just stopped... yeah, it looks like it did. Okay, so it's probably some issue between MAAS and Secure Boot; let's turn it off globally in the profile. Once you do that, you don't actually need it set on vm01 anymore, because the profile itself has it, so you might as well unset it there. And I'm just going to start vm02.

If we look here, does vm01 have a login prompt? It does. So now it's got its IPv4 address; vm01 actually got two IPs somehow. MAAS uses the ubuntu user, and because our SSH key was imported, we should be able to just SSH in... yep, here we go, we're inside that VM: Ubuntu 18.04 LTS with 8 gigs of RAM and 4 vCPUs, so that worked properly. I would also expect it to use MAAS as its DNS server: from outside we can't resolve those names, because we're not using MAAS as our DNS server, but inside the VM it should work just fine, and we should be able to ping the others by name... yep, except that vm02 isn't actually up yet, but it will be soon enough. And because that VM started properly, MAAS should now show it as Deployed. Yep.
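The Secure Boot knobs used here, as shell commands; first the per-VM setting, then the profile-wide one once it was clear every image hit the same problem:

    # turn Secure Boot off for a single VM (takes effect on its next start)
    lxc config set vm01 security.secureboot false

    # or turn it off for every VM using the profile, and drop the per-VM override
    lxc profile set maas security.secureboot false
    lxc config unset vm01 security.secureboot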
We can go look at what's going on with vm02, which should be pretty much the same thing but on 20.04 instead. "Instance is not running"; it was the last time I checked. You can do "lxc config show --expanded" to see the config with everything applied, and we see that Secure Boot is disabled, so that shouldn't be the problem. Well, actually, it could be: the way we handle Secure Boot in LXD, we can't easily go and modify the firmware NVRAM, so what we do is reset it, and if we reset it at the wrong time, it could actually have written it back to being on. So what we'll do is temporarily set security.secureboot to true on the VM and then unset that key, which forces it back off. Okay, that should have properly reset its NVRAM this time around, and let's attach the console directly during start, so we can see whether it boots properly now. It most likely just got caught in a race where its Secure Boot config got cleared at the wrong time and the VM wrote it back to enabled right afterwards. I expect the other VMs will probably hit the same problem, so we'll need to go and fix them too.
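That reset dance, as commands; the workaround used here is simply to set the key and immediately unset it, which gets LXD to reset the VM's firmware variables on the next start (vm02 in this case, same for the others that hit it):

    # toggle the key so LXD resets the VM's firmware NVRAM, then start it with the console attached
    lxc config set vm02 security.secureboot true
    lxc config unset vm02 security.secureboot
    lxc start vm02 --console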
Okay, so it's back to starting up: PXE over IPv4, getting onto the bootloader, so far identical to what we saw with 18.04. It's not very verbose, just "booting Ubuntu", but we'll get there. Yeah, here we go, the kernel has booted properly; we should hit cloud-init soon, which will then tell MAAS that the VM has been deployed properly, and it will show as Deployed. The interesting thing with 20.04 is that the LXD agent is pre-installed, so you can actually exec into those VMs as well as SSH into them, which is kind of interesting. Okay, so now it's talking back to MAAS and vm02 is marked as Deployed.

Let's play with it. On vm02 we can see the IP that MAAS assigned it, and from it we can ping vm01 just fine. But as I mentioned, the interesting thing is that with 20.04 the agent is pre-installed. If you look at vm01, we only have the records that LXD knows about to some extent: the IP shown is the one LXD was expecting to be assigned by DHCP but which was never actually applied, and the interface name is wrong, because LXD can't run anything inside that VM; if we try an exec, it won't work. With 20.04 we've got an agent, which is why we see enp5s0, why we see only the correct addresses rather than any of the temporary ones, and exec works: again, the right config, and it's running 20.04.

The really interesting one for us is going to be vm03, so let's see how far those got. Okay, they're both rebooting; let's take a peek at vm03 and see if it manages to boot or if it's just going to fail. If it fails, I'll have to go and clear its Secure Boot state too... yeah, same verification failure, so it's the same issue. All right, vm03 is stopped; set security.secureboot to true and unset it again. It would have been a lot smoother if I had set this directly in the maas profile right from the start, but I was hopeful. So that's vm03, and we might as well fix vm04 immediately, since it's apparently about to hit the same issue. Although vm04 is, I think, the CentOS one, so maybe it will behave differently... oh yeah, it does behave differently, in that it went straight into the firmware. That's a different way to fail, I guess: the drive drops straight back into the firmware as not bootable. Let's see if it's a general issue with the way MAAS deployed CentOS or if it's also Secure Boot causing problems here. So on vm04, reset security.secureboot the same way and start it back up.

Okay, vm03 is starting up and vm04 is booting now; let's see what they do. vm03 is definitely booting fine now, Ubuntu 20.04, so it will be perfectly functional. vm04 does the network boot, gets told to boot locally, and CentOS is booting, so it was also Secure Boot, just with slightly different symptoms.

All right, vm03 is the one that's really interesting to us, so let's take a look at it. Just to check, MAAS should now show pretty much everything as Deployed... okay, vm04 still needs to talk back to it, but the rest is good. So we're going to SSH into vm03. It should just show one address now... okay, it shows br0, which is good, and it also shows we've got the static IPv6 this time around; it's the only one configured for that, so that works. And now we're in vm03; we should have the LXD snap pre-installed, and we do, LXD is pre-installed there. That's also the one with the slightly different partitioning scheme, so we should see... yeah, sda3 here is six gigs large, which is what we configured earlier in the web interface.

So let's configure LXD. It's going to be a slightly different "lxd init" than usual. We still don't do any clustering. For storage, we're going to use that sda3, because that's what we created it for: so we create a new pool and use an existing disk or partition, which is /dev/sda3.
Would you like to connect to a MAAS server? Yes, we would. What's the name of this host in MAAS? It's vm03, which is correct. And MAAS is at... let's copy-paste that address; that's the URL to connect to MAAS. Now we need an API key for the MAAS integration. API keys are, I believe, over in the web UI under the account preferences; there's already one generated, which is nice, so let's use that one. Would we like to create a new network bridge? No, because we already have one; we'd like to use an existing bridge on this host, yes, and its name is br0, as configured in MAAS. Is this interface connected to your MAAS server? Yes, it is. And now we need to enter the names of the IPv4 and IPv6 subnets: if we look in MAAS, they're called maas-ipv4 and maas-ipv6. No need to make LXD available over the network, and the rest of the defaults are fine.

All right, so now we've got what feels like a normal LXD. Let's create a 20.04 container, call it c1, and see what's different about this whole thing. Okay, give it a few seconds to get an IP. So we've got a container called c1 with an IPv4 address and an IPv6 address, and the interesting thing, as you'll probably notice, is that it's in 10.56.61.0/24: that's the MAAS network subnet, which makes sense since it's bridged. But you can also resolve it by name: it actually has a DNS record that's managed by MAAS, and if we had configured IPv6 properly, which we currently haven't, we'd have a working IPv6 record too. And the best part of the integration is that if you go into MAAS and click on vm03, there's a new tab there that doesn't show up on the others, called Instances; click on it and you see c1, that is, MAAS shows the MAC address of the container and its IP address. And if we launch a c2 and go look at what's going on... yep, there we go, the containers appear directly; they're visible in MAAS and they pull their IP addresses straight from the MAAS subnets. So if we now look at our maas-ipv4 subnet, we see the addresses used by the virtual machines and the addresses used by the containers, and the latter are accurately reported as being containers.
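From inside vm03, the MAAS-integrated LXD behaves like any other, except that containers get their addresses and DNS records from MAAS. A quick way to see it, assuming the default .maas domain that showed up on the DNS page earlier:

    # launch a couple of containers on the MAAS-managed bridge
    lxc launch ubuntu:20.04 c1
    lxc launch ubuntu:20.04 c2
    lxc list

    # inside the VM, MAAS is the DNS server, so the records it creates resolve
    ping -c1 c1.maas
    ping -c1 vm01.maas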
And the last thing: let's check that CentOS VM. I believe that for CentOS you just SSH in as the "centos" user instead of "ubuntu"; that would make sense, and here we go, we've got a CentOS virtual machine, and checking the release file shows CentOS 7.

Once you're done with everything, you can go into MAAS, select everything, and say you don't want those anymore: release them all. That wipes everything and shuts it all down; all the instances here are stopped, the DNS records and everything else that was created for the VMs and the containers are gone, and those machines are ready to deploy whatever you want at a later point. MAAS also has a CLI and an API, so you can use it to deploy things directly if you want, and the interesting part is that it can drive hardware: you can add servers with IPMI or whatever you want, and manage a fleet of virtual machines hosted by LXD and physical servers, all at the same time. And there are a lot more options: MAAS is meant to go to data center scale, where you can track multiple racks, do availability zones, group machines into different resource pools, and then give different people access to different pools. It's quite powerful for machine management, and it's quite nice that it can be used to drive LXD virtual machines as well.

All right, that's all I've got for today. Thanks for following along; I hope you learned something and will maybe start using MAAS with LXD, or, if you're already familiar with MAAS, maybe consider starting to use LXD with it. Thanks, and talk to you later.
Info
Channel: LXD
Views: 4,096
Id: b_bdVfG47G4
Length: 63min 14sec (3794 seconds)
Published: Wed Oct 28 2020