Hands-on Introduction to K0s | Rawkode Live

Captions
[Music] Hello and welcome to today's episode of Rawkode Live at the Rawkode Academy. I'm your host, David Flanagan, although you'll know me across the internet as Rawkode, and I realize I've now said "Rawkode" three times in about five seconds. Today we have an awesome episode: we're going to take a look at the k0s project, a Kubernetes distribution full of useful goodies.

Before we dive into that, a little bit of housekeeping. Please subscribe to the YouTube channel; the button's right below my face right now, so click that and tick the bell so you get notifications for all new episodes of Rawkode Live. I'm going to do my best to explore the vast cloud native landscape and produce more videos and more materials, so that we can all learn this crazy mess we're in together. If you want to come and chat cloud native, Kubernetes, and pretty much everything in between, there's a Discord server available at rawkode.chat; come and say hello, I look forward to meeting you. We've also been kicking off the Rawkode Academy courses this month, taking a look at a complete guide to InfluxDB, so take a look at the membership options: you can support this channel for 99 cents per month, or become an incubating member and join the courses, and there are more guest lectures and other cool stuff coming very soon.

All right, back on track for today's session. We're taking a look at k0s, and I'm joined by my guest from the Mirantis team, who works on k0s. Hello, how are you?

Hey, I'm excited to join the show, and it's a pleasure to see you.

Pleasure to have you here. We're really looking forward to this. I think there's a lot of interest in k0s and people want to see more, so I'm sure we can give them lots more detail and explore it in the hands-on way that we do on this show. Before we dive into that, could you do us a favor and introduce yourself, tell us a little about you?

All right. I've been working with containers and Kubernetes for quite a few years. A bit of historical background: at one of the previous companies I worked for, we actually went to production with Docker version 0.6, and if I really had a time machine I would go back to that day and tell myself, "please don't do it." But I'm in general a stubborn person, so I'm still working with containers and Kubernetes and whatnot. I definitely enjoy the whole cloud native landscape and all the possibilities it provides, and, well, all the trouble it also gives us some days.

To be in the container ecosystem that early and still be in it today, you must have the patience of a saint.

No, I said I'm stubborn; I'm just stubborn. I still enjoy working on projects like k0s, which make these technologies more accessible and easier to use, because as we all know, Kubernetes and basically all of the building blocks that we have in the cloud native landscape are not trivial to grasp.

All right. We already have our first question in the chat, which I think we'll get to later, but we do see it and we'll do our best to answer it at some point.
To kick things off, you're going to guide us through a few slides and tell us a little about k0s, and then we're going to get hands-on. I'm going to throw your slides up; they're now live, so take it away.

All right. The whole point of these Rawkode streams is to go hands-on, so I'll go through this couple of slides quickly. Where the name comes from is basically a play on "zero friction, zero dependencies", and of course, as with any open source, "zero cost"; well, whether open source is ever really fully zero cost is another discussion. But anyway, we try to make k0s one of the easiest options for booting up a Kubernetes cluster, so you don't really have to be a seasoned expert and industry veteran to get a production-grade cluster up and running. One of the main drivers for everything we do is zero dependencies: we try to make everything standalone, so you only really need the k0s binary, and that includes everything you need in order to run Kubernetes successfully on basically any Linux node. That's where the "zero" in the name really comes from.

Of course there are quite a few Kubernetes distros out there already, and it's not the first distro I've worked on either. One of the main reasons we started to work on k0s, about a year ago, was that there wasn't a distro versatile enough to fit everything from the basic cloud use case, where you have VMs in a cloud, all the way to environments where you have industrial PCs and a lot of network segmentation, say on a factory floor. That took us to one of the main distinguishing features of k0s: full control plane isolation. What it means is that the controller nodes, by default, are not really part of the cluster from the networking point of view or the pod-scheduling point of view. We don't run kubelet or containerd or anything like that on the controller nodes, so it's fully impossible to schedule workloads onto the controllers, either on purpose or by accident.

Also on zero dependencies: we didn't want to get into a working mode where you have to maintain a lot of Debian packages and RPMs and deal with the dependencies between packages, because we've done that in the past and it was a sort of nightmare to manage. That's something we definitely wanted to avoid, so we went for the one-binary approach. That's not a new thing, k3s has been doing it already, but technically we do it slightly differently than k3s does. And then of course we wanted a pure upstream distro. What I mean by that is that we don't maintain a fork of Kubernetes; we just take the upstream components and compile them as static binaries, and that's it. What you get is pure vanilla upstream Kubernetes.

Then, batteries included. What we mean by that is that we bundle everything you need for a Kubernetes cluster to actually work: containerd for the runtime, kube-router or Calico as the CNI, and of course etcd as the data store for the control plane state. We've also adopted kine from the k3s side, so you can actually use SQLite or even MySQL as the data store. But everything can be swapped for your favorite solution if you really need to: you can bring your own container runtime if you have a good reason to use, say, Docker, or something else, and you can bring your own CNI implementation if you need or want to use Weave or Cilium or something else.

One of the design drivers we have is to keep the core of k0s bare-bones; one way to describe it is to keep it un-opinionated. That's one of the reasons we don't bundle a lot of different things like ingress controllers or service meshes into core k0s: at that level there are way too many opinions, and we're not really in a position to make that selection for the majority of users and say everybody should use this ingress. We do of course have ways to extend core k0s. There are built-in mechanisms where you can basically drop a set of YAML manifests into a certain place on disk on a controller node and they'll be applied automatically, or you can deploy Helm charts via the k0s configuration directly; there's a sort of reconciler available for that as well. And of course, in the end it's just Kubernetes, so you can extend it in many, many different ways.
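To make those two extension mechanisms concrete, here is a minimal sketch of both, pieced together from the k0s documentation rather than from the stream itself; the stack name, chart, repository, and chart version are illustrative assumptions, and the config path and field names should be double-checked against the docs for your k0s version.

    # 1) Manifest deployer: drop plain Kubernetes YAML under /var/lib/k0s/manifests/<stack>/
    #    on a controller node and k0s applies and reconciles it automatically.
    sudo mkdir -p /var/lib/k0s/manifests/monitoring        # "monitoring" is a made-up stack name
    sudo tee /var/lib/k0s/manifests/monitoring/namespace.yaml <<'EOF'
    apiVersion: v1
    kind: Namespace
    metadata:
      name: monitoring
    EOF

    # 2) Helm extensions: declare repositories and charts in the k0s configuration
    #    the controller starts with, and the built-in reconciler installs them.
    sudo tee /etc/k0s/k0s.yaml <<'EOF'
    apiVersion: k0s.k0sproject.io/v1beta1
    kind: ClusterConfig
    metadata:
      name: k0s
    spec:
      extensions:
        helm:
          repositories:
            - name: prometheus-community
              url: https://prometheus-community.github.io/helm-charts
          charts:
            - name: kube-prometheus-stack
              chartname: prometheus-community/kube-prometheus-stack
              version: "14.6.0"          # illustrative version only
              namespace: monitoring
    EOF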
I already mentioned that we ship k0s as a single binary. One way to look at it is as a self-extracting binary: we have the k0s binary itself, which contains all the logic for setting up, running, and configuring the different components on the Kubernetes side, and then we basically append the other binaries to the end of the k0s binary file. At compile time we calculate the offsets of the different files, and when we run k0s we extract those binaries at runtime, on the fly, and then we can boot up kube-apiserver, containerd, and all the other needed processes as separate binaries. This is something we implemented quite differently from k3s: k3s basically compiles everything into a single binary and runs the different things, kube-apiserver and the others, as goroutines at runtime. In our case, kube-apiserver and everything else are real, separate processes on the host.

Can I ask a question? Does that mean k0s is really just a supervisor for these other processes?

Yes, it's a glorified process supervisor.

I like that approach. Very cool.

Now, about the control plane isolation, because that really is different compared to all the other distros that I know of, at least; somebody may correct me if there's some other distro that implements a similar thing. As I mentioned, the kubelet, for example, is not running on the controller nodes, so we don't really need to play with taints and tolerations and whatnot to isolate the control plane nodes from the cluster. If you consider your typical cluster set up with, for example, kubeadm, you get this master node which has taints, and then you have to play with a lot of tolerations on your workloads when you're deciding where things get scheduled. We wanted to get away from that paradigm. There are also no standard RBAC controls that would let you say "this user is not allowed to deploy anything on the controller nodes"; you don't really have those sorts of standard controls in Kubernetes. Of course you could do your own admission controllers or OPA policies or whatnot, but we wanted to have full isolation, where things like scheduling onto a controller node are just not technically possible. It sort of simplifies things quite a bit.

What it also allows us to do is have a lot of versatility in the deployment architectures. We use the Konnectivity component to enable the communication between the controllers and the worker nodes: the Konnectivity agent running on each worker node opens up a tunnel to the controller nodes, and whenever the API server needs to call the kubelet, to exec into a pod, get the logs of a pod, do port-forwards and whatnot, the connection actually goes through this tunnel. You can think of it as a sort of reverse SSH tunnel, in a way. What this really allows us to do is deployment architectures where your controllers can be running in, say, a public cloud, and your workers run in your private data center with no direct access from the internet at all. I actually have a setup on my desk here where I have a couple of industrial PCs running worker nodes, and the controller for those is in a cloud; my home connection is connected to the internet, of course, but not reachable from the internet, I don't have any port-forwards to anything. It's a sort of neat way of enabling this needed communication while still having a lot of versatility around network segmentation and whatnot.

All right, I think that's about it for the slide intro. We can of course talk about a lot of these points while we go through the hands-on parts.

Great, and thank you for that; it answered a lot of the questions I had in my head coming into this. The firewall one and the way the communication works is fantastic. That was always one of my favorite things about SaltStack: the workers, or the minions as they call them, only had to be able to open a connection to the Salt master, and then ZeroMQ messaging handled all the traffic back and forward, so it was always the minion, the worker, that was in control of the communication. It just makes the firewall rules so simple, because they only have to be able to speak to this one thing over there, and with the tunnel approach you don't have to worry about anything else. Very nice.

Exactly. And there are surprisingly many features, some of them even tested in the conformance program, where the API server really must be able to open connections to the kubelet, for example.

Okay, we've got a question in the chat from Ty, and we have a small follow-up from Daniel. Ty is asking if we can talk about a comparison with k3s and minikube, and Daniel has added MicroK8s. I think we kind of covered that as a result of your slides, but is there anything you want to add, just for a bit of extra flavor?

Well, as we've learned from the get-go, most of the comparisons we get are with k3s, and I think we partially threw ourselves into that discussion with the naming, but that's another conversation. I think the main technical difference is that we have this true control plane isolation from day one. It's not really hardcoded; nothing prevents you from running the worker parts on the same nodes too, but then you have to take care of the taints and tolerations again, so it's not the default way of deploying k0s. Maybe one other real differentiator is how we run kube-apiserver, containerd, and the rest: in k0s they really run as separate processes, rather than everything being embedded into a single process. Of course, as with any solution, there are pros and cons to each of these, but at least in how we've been working with k0s for the past year or so, having vanilla upstream Kubernetes binaries actually lets us move fast. Say there's a new patch release of Kubernetes: it's basically hours before we can ship a new k0s version, because we don't have to deal with the Go module dependencies and whatnot, which is always a nightmare with Kubernetes.

Definitely. Okay, we're going to get hands-on in just a minute, but there's one more question in the chat, from Russell, who is asking: can you balance the control plane across local and cloud servers? Assuming you wanted a highly available control plane, can you distribute them?

Yes, you can, but of course if you have multiple controllers you have to have some sort of load balancer that balances the load between those servers.

Yeah, and be careful of cloud providers' ingress and egress costs, Russell. Very good. All right, let me get my screen shared. Keep the questions coming
and we'll do our best to answer them as we go. We're going to install k0s now on a couple of machines. I've got the home page available, I have the documentation, and I have my Equinix Metal servers; I've got four of them, and I don't know exactly what we're going to do with all of them. I assume one will get a manual installation and then we'll look at other options for the other three.

Sounds good.

Okay, so we'll just use the getting-started guide here. It seems to be the popular pattern these days, but curl piped to bash is the installation method of choice. I guess that's there as a convenience for people who want to experiment and have a play?

Yes, absolutely. I mean, of course nobody should curl-pipe-bash in production or any real environment, but it's of course convenient to have these sorts of scripts for purposes like today, for example.

Yeah, definitely. So this is an Ubuntu 20.04 machine and I don't need to do anything else; I can just literally run this and it's going to work. And you mentioned on your slides that it's OS-agnostic, right? It can really just run anywhere, and I guess that's the beauty of everything being statically compiled.

Yep, it'll run almost anywhere.

Do you see people using k0s for, I mean, this is a sizable machine, but I guess it runs on IoT devices, single-board computers, Raspberry Pis, all that kind of stuff as well?

At least the worker part, yeah, because in the end we have to remember that the control plane is running stuff like the Kubernetes API server and etcd, which as we know are quite resource hungry, so they do take at least a gigabyte of RAM to really run the control plane. But the worker plane is actually quite slim: on a worker, the k0s process itself isn't really doing much more than being that glorified process supervisor, and I think it's nowadays roughly 200 megs of RAM that it uses.

Right, okay, I can get on board with that. How big is the binary?

About 200 megs.

175, it seems. All right, nice.

That's just because we embed everything into the same binary.

So this is your k0s extractor and supervisor: you've got containerd, you've got kube-apiserver, and I guess kube-scheduler and kube-controller-manager; you've got all of these things stuffed in there. I really do like the supervisor approach rather than the goroutine approach; I think that's pretty nice.

Yep.

Okay, so we can use the k0s binary; it's obviously got some helper subcommands here, and this allows me to install k0s as a service on my machine. Let's see what we've got.

There are quite a few different commands and helper functionalities on the binary, yeah.

I can see we've got an airgap setup, we've got the controller and API commands, oh, we've got backup stuff. What does that do?

It basically takes a snapshot of the state of the control plane and spits out a tarball.

There we go. We've got ctr, etcd, install, controller, start, status... there's a fair bit going on there. Let's run the install. So this is going to install the controller; does that just mean control plane?

Yeah, but if you add the --single option it'll basically be a special configuration where the controller also spins up the worker parts, because, well, it's a single-node setup, a single-node cluster. It's mainly intended for developer use cases where you just want a quick single-node setup to test something, test your application and whatnot. And it also actually disables some of the components which we don't really need in this single-node use case.

Okay, well, that's it.

That's it.

And then we have a start command, right? k0s start. So that's just going to run all of my components for me?

Yep, and it'll actually just call systemctl to start the k0s service.

All right, let's talk about that, but I'm really curious to see if this get-nodes is going to work. It did, right?

Yeah, it did.

Okay, so I'm curious now: when I do `k0s install controller`, is it creating systemd services for each of the components?

No, it creates a single systemd service for k0s, and the process that that systemd service manages is then the glorified process supervisor for the other needed Kubernetes components, like the API server, controller manager, and whatnot.

Oh yeah, so here's our process tree here.

Yep.

All right, so we've got our k0s, which is running kine with SQLite, and we've got the API server, the scheduler, the controller manager, containerd, the kubelet, and kube-proxy there. Okay, cool, I understand that. Perfect, I like that.

So basically the install and start commands are just helper utility functions to get your systemd setup done more easily, rather than having to write systemd units yourself, because that's painful.

All right. My next question was going to be why get-nodes returned nothing earlier but is returning something now. I guess we were a bit too early in the API server and kubelet spinning up for it to respond?

Yeah, and also, as I mentioned on the slides, it works as a self-extracting binary, so when it boots up the first time it says "okay, I haven't extracted the binaries yet", and that takes a little extra time on the first boot.

Okay, got it. Awesome, we now have a k0s cluster. We've also got access to a status command; let's see what that does. Okay, it gives us the version, process ID, parent process, and a few other things. Okay, nice. And I've already done that part, so I'm not going to uninstall it, but it's nice that it's there. So, pretty painless. I guess that's the developer experience you're going for, right? You just want it to get out of the way and just work.

Yep, that's the zero friction we aim for.
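For reference, here's the single-node flow we just walked through as a handful of commands, a sketch based on the k0s getting-started docs as I remember them; verify the download URL and flags against the current documentation before relying on it.

    # Fetch the k0s binary (convenience script; avoid curl|bash outside of experiments)
    curl -sSLf https://get.k0s.sh | sudo sh

    # Install a controller in single-node mode, so it also runs the worker components
    sudo k0s install controller --single

    # Start the single systemd service k0s created, then check on it
    sudo k0s start
    sudo k0s status

    # k0s ships an embedded kubectl; the node can take a few seconds to appear
    # while the binaries are extracted and the kubelet registers on first boot
    sudo k0s kubectl get nodes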
And this is, I mean, it's not just that it's compliant, it really is upstream Kubernetes, which is what you were saying as well.

Yep. It can pass the conformance tests, but it really is upstream Kubernetes. We actually do run the conformance test for basically every single release we do, which brings up one of my favorite things to nag about: the flakiness of the conformance suite. Sometimes it's annoying to get it to pass, but we do run it for every single release.

I think "sometimes it's annoying" sums up Kubernetes in general for me, to be fair.

True, true.

So is there a 1.22 release of k0s?

It's in the works, so hopefully within the next few weeks we'll ship it out.

And what's involved in that process for you on your side? What are you looking for, what are you testing, before you adopt a new upstream version?

We do the full conformance testing, of course, and we also do a bit of stability testing and make sure that everything works together nicely. Technically, from the k0s developer point of view, for somebody on my team the actual Kubernetes version change is just changing the version number in a couple of files, and that's technically it. But then of course we have to make sure everything still works, especially now in 1.22, because there's API stuff that is finally being removed, so we have to make sure everything is still clean. Hopefully in the next couple of weeks we'll be able to do the release, and we of course always want to bundle in some bug fixes for k0s itself, and maybe some cool new features and whatnot.

Cool. Do I get to select the version of k0s when I install it? I'll ask the real question: could we maybe run 1.21.2, or 1.20.0, and then do an upgrade of the cluster? The first question we got at the start was from a viewer who was curious how you manage cluster upgrades, especially in production. Is that something you think we could run through?

Sure. Technically, upgrading is just getting the new version of the k0s binary itself and restarting the systemd unit, and that's it. But of course when we're talking about production services you have to do it in a controlled way, and I think that's where this k0sctl helper tool comes into play. k0sctl can actually do it in a rolling manner: it first goes through the controllers one by one, always waiting for the previous one to come back online, and then it moves on to the worker nodes and does the normal upgrade dance, what we call drain, upgrade, uncordon.

Okay, maybe that links us into another question from Ty, who is asking if there is something similar to k3sup ("ketchup", I think it's supposed to be pronounced), or is this where Pharos comes in? Is k0sctl like k3sup?

Yeah, k0sctl is quite similar to k3sup.

There are too many numbers in these acronyms.

Try to say "k3s" and "k0s" and "k3sup" in the same sentence and your tongue gets twisted.

Yeah, I think I'll have a strong drink before I try that. And what's Pharos? Is that something I should be familiar with?

No, Pharos is actually something I worked on in the past; it was a Kubernetes distro we did at another company.

Oh okay, got it.

So if you see a resemblance between k0s and Pharos, it's mainly because some of the same people are behind both of these. But I think, from a technical point of view, Pharos and k0s are actually completely different.

Okay, got it. Nice. So should we take a look at k0sctl, or "k0s control", or "k0s cattle", whatever your preference is?

We don't have a preference, so I always say the "ctl", like kubectl.

And I guess that's one of the most important battles in the cloud native ecosystem, whether it's kube-c-t-l or kube-cuddle, or really technology in general; we all pronounce things differently. Sometimes I say sequel-lite, sometimes S-Q-L-ite; sometimes my-sequel, sometimes my-S-Q-L. I've given up trying to find any rhyme or reason to it.

Yeah, I'm pretty much the same.

Okay, so we're going to install k0sctl, and I'm purposely going to say it a different way every single time. Is there a brew tap, or would you suggest I just grab the release from GitHub?

I don't think there is a brew tap, so probably either `go get` it or just download it.

Okay, Darwin, amd64... "k0s cuddle", there's a definition for you. [Laughter] There we go, we now have k0sctl installed on our machine. So do you want to give us the high-level overview: what is this tool for, and when should people reach for it?

Yeah, so k0sctl is basically designed as a special-purpose helper tool to set up k0s over multiple hosts. As we saw when you did that setup for a single node, it's super easy, but imagine you have hundreds of nodes in your cluster, or even more. I, at least, am going to be bored after the second one, and when people are bored they make mistakes. So it's mainly an automation tool, in the sense that it automates the setup over multiple hosts, and then, as I already mentioned, it provides those day-two operational benefits like the seamless upgrades of the cluster and whatnot.

Okay, perfect. We've got a question from Alex in the chat, who is wondering if there is a reason for the change in the default CNI from Calico to kube-router.

Right, the main reasoning was resource usage. A lot of the use cases we saw early on where k0s is being used are cases where the infrastructure has fewer resources, like those industrial PCs and sort of edge computing use cases, whatever "edge" means for people, but those sorts of use cases where you really want to save as many resources as possible. That was one of the main drivers. And actually adjacent to that is the fact that Calico at that time, and I haven't checked the latest versions, didn't support 32-bit ARM at all, which kube-router does. So those were the main two drivers.

Perfect. Okay, so let's jump back over to the documentation. We have the tool, and we can use an init command to generate, I guess, a default configuration?

Yep.

Okay, so this is... is it an actual CRD, or just made to look like one? Like, can I apply this to a Kubernetes cluster with some definitions?

Not yet; at least currently you can't. But it's definitely something we had in mind from day one: let's make this look like a Kubernetes resource, so that if we ever want to do that, we can.

A Crossplane provider that could use this resource would be really sweet; just have it go out and do all my upgrades and stuff.

Yep.

Okay, so this config just expects me to have some hosts.
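The generated file looks roughly like the sketch below; this is reconstructed from the k0sctl docs rather than copied from the stream, so the addresses, user, and key path are placeholders, and the version string format should be checked against the k0s releases you're targeting.

    # k0sctl init > k0sctl.yaml, then fill in your hosts; an edited example might look like:
    cat k0sctl.yaml
    apiVersion: k0sctl.k0sproject.io/v1beta1
    kind: Cluster
    metadata:
      name: k0s-cluster
    spec:
      hosts:
        - role: controller
          ssh:
            address: 10.0.0.1          # placeholder IP
            user: root
            keyPath: ~/.ssh/id_rsa     # omit to fall back to your local SSH agent
        - role: worker
          ssh:
            address: 10.0.0.2          # placeholder IP
            user: root
        - role: worker
          ssh:
            address: 10.0.0.3          # placeholder IP
            user: root
      k0s:
        version: 1.21.3+k0s.0          # the stream installs 1.21.3; suffix follows k0s release naming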
Fortunately, hosts are something I have. There's a user, a key path, and then I tell it the version of k0s that I want. So this is really a kind of orchestration tool: I'm assuming I'm going to run some sort of `k0sctl apply`, or converge, or whatever the subcommand is, it's going to read this file, SSH onto all of these machines, and give me back a multi-node cluster?

Yes, exactly.

All right, so let's drop in some IP addresses here from my Equinix console. We've already burned machine one, so we'll jump to machine two. If I leave out the key path, will it use my host's default SSH agent?

Yes.

Good, because, well, we're about to find out; I've got a bit of a weird SSH setup, so I might have to quickly generate some keys, but we'll see what happens. I need to copy these five lines and add one more IP address, and we're going to install 1.21.3. Save, and apply. Awesome. What do you think the chances of this working with my SSH setup are? Are we feeling confident?

I'm feeling fairly confident, yes. I just remember we've had some problems with these agent-based setups, but I think it worked.

My ad blocker is clearly blocking the telemetry, though, but other than that I think we're okay.

I think there's a flag or an environment variable to just disable the metrics stuff completely.

When I first brought this ad blocker into my home network I actually blocked the traffic for the streaming software, so that was a fun couple of days for me, trying to debug that. So I guess it's just doing its thing. How long does that take? I guess it's just going onto each machine. Does it do them concurrently, at the same time, as multiple processes, or does it do the control plane first?

It does the control plane first, of course; there's no point going to the worker nodes if we don't get the control plane working first.

Makes sense, I guess.

Oh, it's annoying to see these errors, actually; I really want to get rid of that. If sending the metrics errors out, you as a user shouldn't really have to care about that fact, so yeah, I guess I need to do something about that.

I'm definitely supportive of open source software having this. One of the most difficult things in the world is trying to understand what versions people are using and whether they're still actively using it, because you need to know where to apply effort and maintenance and stuff like that.

Exactly.

If projects are doing this now, I think it's better for the longer term. And if I could enable it just for this software, I would, I promise. So, what I see here is that this is going to download the kubeconfig to my local directory?

It'll actually spit it out on the screen, so you'll probably have to pipe it to something.

Yeah. We didn't get any error messages, which is okay. I wonder if there is a flag... must be an environment variable... oh, and it said that it might be a hidden option. Okay, I'm going to need `k0sctl kubeconfig`, point my KUBECONFIG at that, and, look at that: everything's up and running, fine and dandy. But here you actually see, and remember you had three hosts in the YAML, that the control plane's not listed, right? Because it's not running a kubelet or even containerd or anything like that.

Yep, exactly.

When I first saw two nodes I was like, oh, we're still waiting on one, and then I realized, oh no, that's the control plane.

That's probably the most-asked question we see on Slack or even in GitHub issues: "why don't I see my node here?" You shouldn't. It basically works exactly like getting your Kubernetes cluster from Amazon, say: you don't see the controller nodes, you just get the API address and that's it. So it's pretty similar here.

Nice, well, that's pretty neat. I like that tool, and it's pretty straightforward: just add the IP addresses of each of my machines. And I liked that I could just use the SSH agent; I was worried that wasn't going to work, but it just worked. So yeah, pretty solid, I like that a lot. Is there anything else with k0sctl we should take a look at?

I think that covers the basic stuff.

I'm going to run the backup, of course, now that I've seen it.

Well, it should actually dump a tarball into your current directory.

Yeah, it did. So what does that back up? Is that the kine database?

Yeah, it's the kine database, the CA certificates, and basically the needed state of the control plane. And of course we have to get the CA into a safe backup place, because if you have to change the CA on a cluster, well, that's a slightly more difficult exercise.

Yes, definitely. Let's tackle a couple of questions and then we'll see if there's anything else we want to run over. We've got one more from Alex, a follow-up to the Calico and kube-router question: Alex is asking whether it makes sense if you want to run on-prem with medium-sized virtual machines, and whether you can still use Calico. I think the answer is that it's all swappable, right?

Yes, it's of course all swappable, but we do include Calico within k0s itself, so k0s itself has the capability to run either kube-router or Calico. And then of course there's a third option, bring your own: you can deploy k0s without any CNI and then it's up to you to configure whatever CNI you want to use. But both are supported out of the box.

All right, thank you. Another viewer is asking: how do I debug the control plane? I guess if something goes wrong, what are the options? Is it SSH onto the machine, usually?

Usually, yeah, or use whatever mechanisms you normally use to connect to the nodes and whatnot.

So I guess, when you're deploying k0s in control plane mode, you'll probably want to stick some monitoring on that machine and collect logs and a few other bits and pieces.

Yeah, absolutely.

Okay, I don't think Alex has a question there; I think Alex is just agreeing about the control plane not showing up in the list, so we're all good. All right, awesome. A really cool project. I like that YAML format, and just spinning up and deploying the machines like that will save me a lot of time and a lot of manual steps.
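Putting the multi-node walkthrough together, the commands on my side boil down to something like the following sketch; it assumes k0sctl.yaml sits in the current directory (k0sctl's default), and the exact kubeconfig and backup behavior should be checked against your k0sctl version.

    # Provision the cluster described in k0sctl.yaml over SSH (controllers first, then workers)
    k0sctl apply --config k0sctl.yaml

    # The admin kubeconfig is printed to stdout, so capture it and point kubectl at it
    k0sctl kubeconfig > kubeconfig
    export KUBECONFIG=$PWD/kubeconfig

    # Only worker nodes are listed; the controllers run no kubelet, so they never register
    kubectl get nodes

    # Snapshot the control plane state (kine/etcd data, CA certificates) into a local tarball
    k0sctl backup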
Is there anything else with k0s or k0sctl that you think we should cover before we finish up today?

I wasn't really planning to, but I am prepared to show a demo with k0sctl where I've actually integrated it with Terraform, if that's something people would be interested to see.

I think we would love to see that, if you're happy to share.

Absolutely.

All right, your screen is up; we can see VS Code and your terminal. Take it away.

Excellent, let me bump up the font a bit.

Sounds good.

So basically I'm using a smaller European cloud provider called Hetzner for this demo, mainly because they are super fast at spinning up all the needed infrastructure for demo purposes. It's a simple three-plus-three case: I spin up three controllers and three workers, with this CX31 instance type, which, if I remember correctly, is four CPUs and four gigs of RAM. But it's a typical three-plus-three case, and as we learned from the slides and the discussion, whenever I run an HA control plane I of course have to have a load balancer in front, so my Terraform also deploys this Hetzner load balancer thing and connects it to the controllers, and the usual cloud stuff. Let's call it cloud stuff.

People sometimes get annoyed when I refer to complex technical things as "stuff".

I'm okay with it.

All right. One of the neat tricks we have in some of the examples and documentation for k0sctl is this use of Terraform output variables. Basically I have the hosts, like you had in the YAML: I concatenate the list of controllers and workers, so I have the same structure you had in YAML but as a Terraform output, and then I define the output as a YAML-encoded value of those Terraform variables. So what I actually get out in the end is pretty much the same thing you wrote manually. What that allows me to do is stuff like `terraform apply`... yeah, I trust what I'm doing. I don't really, but let's assume that I do. And you can see the main reason why I use Hetzner in many of the demos: it took about 20 seconds to boot up six VMs and a load balancer connected to them.

Yeah, that was pretty neat.

So what I'll do is take the output as raw, because that'll be the YAML-encoded value, and then I can pipe it to k0sctl apply, and we don't need a separate config file: I'm basically saying, okay, apply whatever you get from standard input. This is a sort of neat pattern you can use with k0sctl, too: imagine you could have a CI/CD sort of pipeline for your infrastructure and your Kubernetes clusters with this.

Yeah, I can see a GitHub Action that runs a terraform apply, passes the output forward, and then k0sctl kicks in and goes off and does its thing.
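The Terraform trick he's describing comes down to emitting a k0sctl cluster spec as a Terraform output and piping it straight into k0sctl; a rough sketch of the idea is below, with `k0s_cluster` as an assumed output name (the real example lives in the k0sctl repository's examples, so treat names and flags here as placeholders to verify).

    # In Terraform, define an output whose value is yamlencode() of a map mirroring
    # k0sctl.yaml: apiVersion, kind, and spec.hosts built from the controller and
    # worker instances, plus spec.k0s.version. Then:

    terraform apply -auto-approve                      # six VMs plus a load balancer in the demo

    # Print the output unquoted and feed it to k0sctl; "--config -" reads the spec from stdin
    terraform output -raw k0s_cluster | k0sctl apply --config -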
Exactly. Or maybe in the future you could actually dump the k0sctl YAML into a Kubernetes API and then some magic somewhere kicks in.

I'm looking forward to seeing that Crossplane provider; you'll have that ready for me next week, right?

Nope, no promises. But to be honest, it's not the first time I've heard the idea.

I think it would work really well, because you could have a k0s single-node setup that runs Crossplane, apply your other k0sctl YAMLs to it, and then have the controller, Crossplane or otherwise, go and create more virtual machines and deploy clusters. That would be a pretty nice setup; I like that.

Yep. Or, what this true control plane isolation actually allows us to do is that you could basically run the k0s controllers in a pod, because there are no requirements for a kubelet or anything else; it's just a set of normal processes, so what's preventing you from running it in a pod?

Yeah, definitely. A sort of mothership type of pattern, right? Looks like it's done.

Yep, it's done. I've got my three-plus-three setup done, and the kubeconfig; let's dump it out, export KUBECONFIG, and there we go: one three-node k0s cluster.

Yep, with an HA control plane, with load balancers and everything, and it took a few minutes even with my typing speed. Very nice, I like that.

Of course it doesn't really matter how people set up their k0s clusters, whether you use Chef or Ansible or SaltStack or whatever tools; it doesn't really matter. We just wanted to build this sort of special-purpose tool to help with the day-two operations, like upgrades and everything, so it'll make life easier in many cases.

Awesome. Maybe we can actually try an upgrade. Where did I have the version? It's a variable, yes, in the vars file... and the vars file actually contains a secret too, so I'll switch. You don't see my screen now, right?

No, but I think we did see the vars file at the start.

Oh, I showed the example.

Ah, the example. Right, okay, gotcha. I really should learn to do these sorts of things properly.

That's right. I flash my secrets on this show a couple of times a week; people have been embarrassed on my behalf, but nobody has hacked me yet, so thanks.

All right. So what I did, because we saw we actually have the 1.21.2 version running, is I'm going to bump it up to 1.21.3 now. I have to do the terraform apply first; you see in the Terraform output that the version actually changes and nothing else changes, then I'll do that output-and-apply thing and we'll see what the upgrade process actually looks like. Let me clear that a bit.

Based on what you said earlier, what k0sctl is going to do is SSH onto each machine, pull the new binary, and then basically just flip them over?

Yep.

Okay. We've got a question in the chat from Nuno. How's it going, Nuno? He asks: are Windows worker nodes something that has been thought of or worked on?

We do have experimental support for Windows workers, too, so on the k0s releases download page you'll actually see a k0s.exe already existing.

There you go, Nuno: try it out and let us know how you get on. It does have an experimental label, so be wary, although Nuno is not shy of experimental labels, as I've learned.

Yeah, but I think that's the whole cloud native world. What I've basically been telling everybody, whenever you're working with Kubernetes or anything: if you see something labelled v1beta1, well, just use it, it's stable enough.

Yeah, v1beta1 is too mature for me.

Beta is the new stable in the cloud native world.

Well, it was only 1.16 where we got rid of v1beta1 for Deployments and everything else.

Yeah, if it's v1 then it's almost legacy already, right?

All right. So what you see actually happening is that it upgrades the controllers basically one by one, and then it moves on to the workers. Because I have only three workers we can't really show the parallel upgrades that well, but by default k0sctl takes ten percent of your nodes and runs the upgrade in parallel for that ten percent at a time.
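For a cluster managed with a plain k0sctl.yaml rather than Terraform, the same rolling upgrade is just a version bump and a re-apply; a small sketch, assuming the version string format from the earlier example.

    # Bump spec.k0s.version (1.21.2 -> 1.21.3 in the demo), by hand or with sed
    sed -i 's/1.21.2+k0s.0/1.21.3+k0s.0/' k0sctl.yaml

    # Re-apply: controllers are upgraded one by one, waiting for each to come back,
    # then workers are drained, upgraded, and uncordoned in batches (~10% at a time)
    k0sctl apply --config k0sctl.yaml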
So if you have 20 nodes, it'll actually update two nodes at a time, and so on. And it does this typical upgrade dance, drain, update, uncordon, for each of the nodes, and at every step it waits for everything to become ready again and whatnot.

Awesome, very cool. I'm glad we stuck around for that extra bit of demo; the upgrade was nice, and we got some love in the chat for the upgrade as well. Very cool. All right, well, that is k0s, everyone. I hope you liked that, and you've got five seconds to get any more questions into the chat before we say goodbye and I let my guest get back to his day. If you have any questions, drop them in there; meanwhile I'll finish with a question that I ask quite often: is k0s finished, complete? Are you just tracking upstream now, or do you have any new shiny stuff coming down the line?

We do have some new shiny stuff coming up. One of the things we're working on, quite early in the process, is a feature we call autopilot. What it'll do is let the cluster handle all the upgrades and updates by itself: we'll basically bundle this upgrade logic and all the control mechanisms, the node draining and uncordoning and whatnot, into the control plane itself. So basically it's a cluster that's on autopilot, in a way.

Nice, sign me up for that. We've got one more question that has snuck in: can the ten percent be controlled? I assume this is in relation to that upgrade we just did.

I would suppose that it is; if it's not, then we have to make it an argument or a parameter of some sort somewhere.

Pull requests welcome, right?

Yeah, it's just changing one magic number to something else.

There's maybe a good idea for a nice simple contribution, so if it isn't configurable already, feel free to give that a go.

Yeah, absolutely.

All right, well, thank you so much for joining me today. Really good to see that our demo, well, I'll say your demo, went off without a hitch; it was really nice. I hadn't actually used Hetzner Cloud before, but the speed of spinning that up was pretty impressive, so maybe I need to check that out at some point as well. Alex is sneaking in with one final question; we can do it, we've got a little bit of time. Alex says: I tried once to deploy Portworx to k0s but I could not get it to work, any ideas? Does it have something to do with the locations where it stores files or configurations?

I do remember reading about that issue, but I can't remember the details of where we landed on it, and to be honest I don't really know Portworx at all, how it works and what it does. It probably has something to do with where k0s puts its files and sockets and everything; that's my best guess based on the information I have and know about Portworx.

All right, no worries. Okay, we've got a thank-you from Ty in the chat, so we're going to finish up. Thank you again; really good, loved the demos, very cool. Any last words before I let you go?

Well, as with any open source project, we always appreciate feedback, both bug reports and feature requests and everything in between, and even better if you can put up a PR. But yeah, thanks for having me join the session; I really enjoyed the format of your sessions here. Awesome hands-on stuff, not afraid of demo effects.

No, I look silly enough on a stream regularly that I'm not fazed by it anymore. But yeah, thank you, it was great fun, and I'll hopefully speak to you again soon. Have a great day. All right, bye. [Music]
Info
Channel: Rawkode Academy
Views: 413
Id: pXbJwlUDnUI
Length: 65min 13sec (3913 seconds)
Published: Wed Aug 11 2021