DigitalOcean Kubernetes GitOps walkthrough: DevOps and Docker Live Show (Ep 152)

Captions
Hello, merry Christmas, happy new year, happy holidays, all those things. I'm glad you're here. This is the last show of the year, obviously (we've only got one more day), and I thought I'd totally change it up. We're still doing the regular Q&A, and I'm here to answer all your questions, but I have no guest. If you were here last week, we talked about the idea of going through the DigitalOcean Kubernetes Challenge, which is going to be kind of hard to finish in an hour unless I did something super simple like install a registry.

On this channel, this year and last year, I'm an ops guy: I focus on DevOps stuff, which typically means helping with some part of the pipeline between a developer's commit and running that version in production. There's all this stuff in the middle, and the simplest Kubernetes setup nowadays involves a bunch of different tools and a bunch of automation. If we truly want that GitOps-style approach, we're letting the machines do most of the work: we edit YAML, edit code, submit PRs, approve PRs, and that's really all we do. We're not spending our lives in a shell manually typing commands to deploy code. Doing it by hand is fine until you want a more automated approach, and then you really want the tools to do it for you.

No promises today, because I did not do a dry run of this. You're not going to watch me demo a perfect deployment of tools, and it's going to take more than an hour; I'll probably spend an hour just dealing with secrets. So this will be a little different in style. I'm here to talk, have conversations, and answer questions, but it's going to take me a while to get through the plumbing of setting all this up, and maybe we can chat about the products themselves while I'm doing it. Hopefully this will be interesting and I won't make it too boring. Let's get into it.

If you haven't been here before: thank you so much to the patrons (applause for all of you). If you want updates on who's going to be on the show, our weekly live streams, special events every month, and new content (I just released a new blog post, plus Udemy and other stuff), it's all on our Patreon page. You don't have to give me any money; you can simply click the follow button and you'll get the updates. If you want some of the extra benefits, or buying me a coffee feels like a nice thank-you, you can subscribe to one of the membership plans. The High Fivers get a monthly call where we hang out, talk about our DevOps projects, and have conversations about the better ways to do things (there's never a perfect way). The next one is January 19th, so if you subscribe before then you can show up; it happens in our Discord server over at devops.fan, which has a special channel for patrons as well as for the High Fivers monthly meetup. The next Udemy Q&A, which I now do monthly for those in the courses, is January 20th (I did one last week). And on January 6th, next week, my guest is Jake Warner, founder of cycle.io; go check out Cycle, they're a new startup in the cloud native space, and we'll be talking about their product.

Django, what's up: "I bought your course today." Thank you so much for buying the course, I really appreciate it. Hello everyone out there; a lot of people here today, so hopefully we all got the day off, or at least a partial day, with tomorrow being New Year's Eve. Some of us are lucky enough to work from home, so I'm just sitting here in my office.

All right, let's do this, and thank you everyone for showing up. I meant to draw this out but didn't get the chance, so I'll write it as a list instead. The typical GitOps workflow:

1. A dev changes code.
2. The dev submits a PR for the code, and it gets approved.
3. A Docker image (an OCI image) gets built and pushed to a registry.
4. We update the GitOps repo with the new image version, the image tag.
5. Argo CD, running in the cluster, detects the change and deploys it.

Just because an image is pushed to our registry doesn't mean we always want to deploy it. The most aggressive form of DevOps is: as soon as an image hits the registry, it automatically gets deployed to production. On your way to fully automated DevOps you'll probably have a middle ground, and that's what we're doing today with Argo CD: update the GitOps repo with the new image tag, and let Argo CD detect and deploy the change.

That's a pretty simple linear path, but it requires a lot: a Kubernetes cluster, installing Argo CD, a place to store images, and at least a couple of repos. If we were home-brewing this, building purely on our own, we'd have to worry about which Kubernetes distribution to deploy, which image registry to use (maybe running it in the cluster itself), how to build those images in-cluster, and which logging and monitoring solutions to run in-cluster. We're skipping a lot of that by using cloud services as much as we can, so we can focus on the GitOps pipeline itself. This isn't about building everything on top of each other on a single server in your closet; that's also a fun project, but it would take eight hours, and most of us are probably going to end up deploying this on some form of cloud infrastructure anyway.

As for the repos: usually we need one with the manifests for our app, and then likely a separate Argo CD repo that overrides them. If the app is a Helm chart, that repo overrides it with a different values file carrying the production values of our manifests, and that repo is watched, via webhooks, by Argo CD running in our Kubernetes cluster.
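To make that last piece concrete: Argo CD's unit of deployment is an Application object that points at a Git repo and a destination cluster and namespace. Here's a minimal sketch of what ours could eventually look like; the repo URL, path, and names are hypothetical placeholders, not what we'll actually build on stream:

```yaml
# Hypothetical Argo CD Application: watch a GitOps repo, sync it into the cluster
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: voting-app                  # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/argocd-repo.git  # hypothetical GitOps repo
    targetRevision: main
    path: voting-app                # folder holding the rendered manifests
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: vote
  syncPolicy:
    automated:
      prune: true                   # delete resources removed from Git
      selfHeal: true                # revert manual drift back to the Git state
```

With automated sync on, merging the PR that bumps the image tag in that repo is the deployment.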
So we'll lean on cloud services, and since the Kubernetes Challenge comes from DigitalOcean, we're obviously using their products.

If you didn't catch this last week, the way it works is you sign up on the DigitalOcean website through a web form, and you have until tomorrow to do it. You can start with me, though unless you've got the rest of the day clear you likely won't finish with me; you have today and tomorrow to do a few things, and then you get some pretty sweet swag in January.

Step one is to pick a challenge from their list. Well, I guess step one is to have a DigitalOcean account, and if you need one, I have a coupon that will get you free credits; let me dig up that code (I probably should have it handy, huh). You can also get free credits as part of this challenge, so you could basically double up on free credits. While I'm hunting for the code, any questions?

"Where would testing happen in that list?" Great question: that's part of building the image, the CI portion. Building the image, testing it, scanning it for vulnerabilities, Super-Linter, all of that happens in GitHub Actions. Let me go back and map the products onto the workflow. We're using GitHub for this; if you use GitLab or Bitbucket it doesn't really matter, they all do more or less the same things, you're just clicking in different places. The Docker image gets built and pushed to the registry in GitHub Actions, which I've been talking about for the last year, and I'll use a lot of my templates to do it. Then we update the GitOps repo for Argo CD. GitHub Actions is essentially your general automation tool; Argo provides a little automation too, but it's specific to the cluster. It's not doing general-purpose work, it's very much just applying Kubernetes YAML based on a set of requirements and parameters. The rest is standard GitHub workflow stuff: we could put test, lint, and scan in there, and then we push to the registry. We'll use GitHub Container Registry; we could use Docker Hub, but this isn't a demo of Docker Hub or a bunch of other tools, and I'm trying to keep the scope of this demo narrow, so we'll use everything GitHub provides as much as possible. Our three main tools: GitHub, Argo, and DigitalOcean. Detect changes, and deploy them to a DigitalOcean Kubernetes (DOKS) cluster. Hopefully that makes sense.

Hey Barry: yes, the challenge ends tomorrow at midnight, basically once your calendar flips to 2022 in the Pacific time zone (UTC-8). If you're east of Pacific, which I imagine a lot of us are, we get a little extra time, because the cutoff is midnight tomorrow night in California.

Let's go through the requirements. You pick a challenge; they have a whole list, and we're doing the CI/CD one: "Deploy a GitOps CI/CD implementation." You could walk through their write-up yourself pretty quickly, a couple of hours at least, but you have to set up a Kubernetes cluster, and you fill out a couple of forms if you want the free credits (you don't have to if you already have credits or won't keep the cluster up long). Technically you're supposed to keep it up until they've checked it out, but I'm not going to do that; I'm not leaving my Kubernetes cluster running for days or weeks while the people reviewing this are on vacation and not getting to our PRs. As long as we have the PR submitted and their form filled out by tomorrow night at midnight, we're good to go.

They have an example of deploying a GitOps solution using Tekton, which is fine, but I'm not going to do that. I want to use GitHub, because if that's where my code lives, GitHub is now my default tool: I'll do as much of the automation, image building, and scanning there as I can, use GitHub's container registry, and put everything right next to the code. A demo of one of the other open CI/CD or image-building solutions would be great some other time, but I didn't want to muddy the waters and have us learning three or four new things all at once.

So we pick the CI/CD challenge, and then it says to create a GitHub project, which I did. I have one over here that's completely empty; I'm not even going to share the link until there's something in it, later in this live stream. This repo will hold the walkthrough; I'm not sure what else yet. I have my app code and my app manifests (whether we use Kustomize or Helm is something I haven't decided), and then we'll have an Argo repo. That may be the same repo or a different one; I personally like them all as separate repos, because I can control permissions more easily. If I know who has authority to write to a repo, I know who has authority to deploy my code. The app repo itself will have the GitHub Actions in it. The manifests are probably going to be a Helm chart (if you're a big enough team you'll make Helm charts of your things; Kustomize is fine too), and they probably shouldn't live in your code repo; you'll probably have a separate one. You could technically put the Argo config in that repo or in the app repo, but in my mind there's not a lot of benefit to mono-repoing all of this, unless you're just one person and want the simplest possible setup on GitHub.

I will mention that shoving these things into a mono repo sometimes makes life more complicated: you have to make sure Argo only watches the files it cares about, and that GitHub Actions only rebuilds images when the app code changes, not the DevOps code. You don't necessarily want a Helm chart change to cause an image rebuild; there's all this stuff that happens when everything is in one repo. So I tend toward three repos: app code, manifests, Argo. That's just me; these tools are all flexible enough to work the way you want.

I've got my base repo, and I'll be adding to the README, which is part of the requirement: you have to do this in a way that helps others. That's really what DigitalOcean is doing here, spurring on useful open source tutorials by giving you this huge list of possible topics that you then go create. Then we fill out the code challenge form. Let me give you the challenge link, in case you're interested in spending the next 36 to 48 hours on this.

What do we get? At the bottom: we get to use DigitalOcean's money to donate $150 to an Open Collective project, we get a $100 gift card to the DigitalOcean swag store (they've got some neat shark swag; it's Sammy the Shark, thanks Paul), and we get $50 to the CNCF swag store, which is pretty neat because that store covers all of the CNCF projects, so you could buy a lot of different things there.

We also have to fill out the form to get our credits, which can be retroactive; they're based on your billing date. Before that, here's the sign-up coupon from my courses: a 60-day, $100 credit. I'm putting both links on a shared Notion page ("share to web," copy, done). If you haven't used Notion, it's another really cool app I've been using for years; they're not specifically a developer-oriented product, but since so many developers use it, they've been adding features for us, like code snippets and GitHub repo embeds. Then the form: link to your project (you must create a GitHub or GitLab repo for it; I'll copy the URL of my empty repo, and they say that's OK), choose "deploy a GitOps CI/CD solution," and submit. The whole point is that they eventually give me $120 worth of credits in January, so I don't have to pay anything to do this.

But again, if you read the docs, my intention is not to leave the cluster running, so you need to take screenshots to prove you did all the things; if you tear down the cluster, there's no evidence you actually went through your walkthrough. I'll take screenshots along the way, store them in my repo, and put them in the README. One of the requirements for the final prize is that you create a blog article, or at least fill out the README with a step-by-step of what you did: "write about what you've built and share it on a blog or in your project README." Obviously DigitalOcean wants a little credit in exchange for the free swag, so we'll do that.

I also already signed up for the DigitalOcean Deploy Discord, which is just a help channel; you don't have to. You can obviously ask for help in our server at devops.fan, in the Kubernetes room, but they have a dedicated kubernetes-challenge room where you can ask questions and talk to other people.

Once I've finished the challenge and filled out the README (which I won't do live, because who wants to watch me write a README?), we make a pull request against their main repo. That's the finish line: a PR with one file, named with your GitHub username plus "kubernetes-challenge," containing the minimum links. A link to your GitHub repo, a link to where you documented or wrote it up, and maybe your contact info (it's kind of weird that they have you put your email address in there, but whatever). They've said in the Discord that they're on vacation and will be back next week, and there are 95 pull requests queued, so it'll probably take them a week or two to accept yours, and then they still have to issue the credits and send the swag coupons; expect all that later in January. Once your PR is submitted, you then fill out the final form telling them what you did, where it lives, and to go check out your PR. I think that's how they prevent spam: they're not going to review every PR unless someone bothered to fill out the form.

So let's get to it. The general milestones of the project, in order:

1. Create the DigitalOcean Kubernetes cluster.
2. Create your app. We're using dogs-versus-cats, the voting app you've seen in my courses and conference talks: a simple distributed app with, I think, five containers. Its Kubernetes manifests already exist, so we won't necessarily have to change them, and I won't turn them into a Helm chart today, but we may need to template some values; if we want to avoid storing secrets in GitHub, we'll probably need Helm or Kustomize. So: create the app manifests, really just YAML.
3. Deploy Argo to the cluster (there's a bunch of sub-steps underneath each of these; GitHub Secrets is where we'll store the sensitive values).
4. Create the app repo. You'll fork this app, because you have to build the images and store them in your own registry, and then add GitHub Actions for image building and pushing to GHCR, so that when the app changes, new images land in GitHub Container Registry.
5. Create the Argo YAML repo, store secrets in GitHub Secrets, and configure the YAML for our app and image URLs.

For the cluster itself: if we were doing this end to end, I wouldn't click through the GUI; I'd use DigitalOcean's command-line tool, doctl. Because I didn't plan that far ahead, I'm not going to know off-hand the very long command that's probably necessary to create a cluster, but we'll use doctl at some point today anyway, for example to pull down the cluster's kubeconfig (basically the credentials) so we can connect from our local kubectl.
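For reference, the CLI equivalent of the GUI clicks coming up would look roughly like this; the name, region, and node sizing match the values I'll pick in the dashboard, and the exact version slug is an assumption:

```bash
# Hypothetical doctl equivalent of the cluster we're about to create in the GUI
doctl kubernetes cluster create k8s-challenge-gitops \
  --region nyc1 \
  --version 1.21.5-do.0 \
  --count 3 \
  --size s-2vcpu-4gb
```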
So: spin up the cluster. We basically don't want to SSH into our servers. In this case I don't actually know if you even can (I'm sure you can, since you do manage the nodes a little in DigitalOcean Kubernetes), but I always do everything locally, so we'll use our local kubectl. You'll need kubectl installed; I'm assuming you'll have all these tools on your local machine already, and we'll use VS Code to edit everything, maybe Codespaces. That's the setup.

That takes us back to the real production workflow, where ideally it all just works: a code change lands in our code repo (maybe we change the color of one of the websites), the image gets built, and then, manually today, we tell Argo to use a different image tag by making a PR to the Argo repo. That PR essentially kicks off the deployment. There's another way to use Argo and Flux and these GitOps tools, where you tell them to watch a specific set of image tags; maybe you automate your tags to something like SemVer or date-based tags, and when it sees one, it deploys automatically. I'm not doing that today, for various reasons; this way should be faster to get set up. But that extra step toward full automation requires you to come up with a standard schema for naming your images, and then implement it inside Argo so that, for example, you never accidentally deploy test PR code to a production server. You want your image naming to be a standard so you can't deploy an image you didn't mean to. That takes careful consideration, and we're not doing it today.

Barry says he uses a habit tracker, yeah, in Notion. Hey Anton, what's up. "The write-up and blog post needs to be done by tomorrow night?" Yes, Barry, that's true. If you want to push the envelope, you could submit the PR and the form by tomorrow night and then finish the write-up over the weekend, since nobody's working on this stuff right now, and have it published before they start checking next week. As long as the PR is in and the work is done, the write-up can happen over the weekend, because a good one that steps people through everything takes time.

First thing: set up the cluster. I'm logged into one of my DigitalOcean accounts, and under Create Kubernetes Cluster we get a cluster-as-a-service: they take care of upgrades, I can control the version or let them do automated upgrades, and it saves me a lot of time. This is always how I recommend doing it on cloud providers: they have entire teams of people who design, secure, and manage this stuff, so why take that work on yourself? For learning, great, but I always recommend a cloud-hosted Kubernetes cluster if at all possible, because it's one less thing to manage and one less thing to worry about accidentally implementing insecurely or leaving open holes in.

I'm not going to put this in Amsterdam (not sure why it's recommending that to me); I'll do mine in New York, with the recommended version, 1.21.5. There are obviously newer versions of Kubernetes, but that's the one they recommend, so we'll stick with it. Like most clusters you'd create in the cloud, it creates the nodes for you, but you decide the size, the cost, and how many, and we'll leave it as default as much as possible. These nodes have 2.5 GB of usable RAM, and it's pretty neat that they actually tell you that: you've got the kubelet running on every machine, DNS stuff in the cluster, probably networking pods on every machine depending on which networking provider you use, all running in the background, so if you get a 4 GB server you're only going to have about two and a half gigs left for your apps. I'm doing three nodes.

This is not, by the way, me creating the control plane; the control plane is managed for me, and there's a checkbox right there for a fully redundant control plane. If you didn't realize it, a lot of these cloud providers give you a very cheap or sometimes free control plane, but it's often not redundant, not highly available. If the control plane goes down, in most cases your apps will probably stop working; it doesn't mean your nodes stop, but once you have a relatively complex setup, the control plane does things that require at least one instance running for your networking, policies, and everything else to operate correctly. In a production environment you'd probably check that box; it just adds two more nodes for a three-node control plane, separate from the worker nodes your apps run on. I'm only making one pool, leaving the default basic nodes, three of them, so this whole thing costs me 60 bucks a month, and I'll name it k8s-challenge-gitops. Now we wait a few minutes for it to spin up.

While that's happening, let's go get our app: the dogs-versus-cats voting repo. I have this example-voting-app under my account; it's a fork of the official one under dockersamples, and I've made some tweaks to mine (I don't remember if I fixed bugs or what exactly). What I'm going to do is just leave it there and use mine in place rather than making another copy; I may regret that. One thing it does have is the k8s manifests already. They're not custom, not templatable, but they exist, and the nice thing about this simple demo app is that it doesn't really require passwords; it kind of hard-codes them for simplicity's sake. That's not what you want in any real-world scenario, but it does make a day-one setup and example easy. So we've got that repo, and we'll try to deploy those specifications.

Now let me clone the repos locally. A little pro tip: if you're a GitHub person, there's the GitHub CLI binary, gh, which I love to use. I'm going to clone the Argo CD repo I just created, and it's so much easier this way. Granted, I could copy and paste a git clone command, but the gh command is really user friendly and I highly recommend you check it out. They have a whole website for it, cli.github.com; you can install it with brew on Mac, and I'm sure they have a Windows version and all that. I love that tool, I use it all the time, and I recommend it to everyone.
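If you want to follow along, the gh version of that clone is a one-liner; the repo name below is a placeholder, since my new Argo CD repo only has a license and README in it so far:

```bash
brew install gh                        # macOS; other installs at cli.github.com

gh repo clone <your-user>/argocd-repo  # hypothetical name for the new GitOps repo
# (we'll clone the voting-app fork the same way in a minute)
```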
Now that we're in there, let me change my screen. We have nothing in this repo yet; we just initialized it, and I gave it a license and a README.

And it looks like the cluster is finished. This little bar indicates the deployment is done; there's still a spinner just because we don't have a health status yet, but the thing is deployed. Let's walk through their getting-started steps (it's nice that this tutorial goes step by step). First we have to get authenticated to our cluster. I have kubectl on my local machine already, and notice the version: that's a tricky thing with kubectl, because Kubernetes only supports a small version skew, I think one or two minor versions (somebody in chat will probably help me out). If you have an older cluster and a really new client, it's going to complain at you a lot, so make sure your client is within range of 1.21; if you have the latest version you're fine, but you may need to update your kubectl or run one in Docker if you're not on a newer release.

Then there's a shortcut: if you have doctl, which again you can get with brew or from their website, you can paste their one command, and it saves you the step of manually editing your local Kubernetes config file. I just pasted that command into my CLI, and you can see it saying it's adding the cluster credentials to the kubeconfig found at ~/.kube/config. That file is where all of your connections to different Kubernetes clusters live, and normally you either need a utility or to edit it yourself, because it gets to be a pretty big file. It also automatically set the context for me, so if I run something like kubectl get all to see what resources I have, I'm talking to the cluster. Notice it was kind of slow; I'm talking to a cluster over the internet with all the nice security and encryption. There's nothing there yet, and we're in the default namespace.

A couple of other tools worth having here, by the way: kubectx and kubens. Those two CLI commands let you quickly change context, meaning which cluster you're talking to. You can see I have my built-in Docker Desktop one, a local MicroK8s, and Rancher Desktop installed; the highlighted one is the cluster I'm currently connected to, and you can just search the names. kubectx is a GitHub project; you can download the binary or install it with brew and all the things. kubens does the same for namespaces: if we wanted to use a custom namespace instead of default (we're not, today), we could flip back and forth quickly, instead of typing the long, tedious kubectl config set-context --current incantation just to change namespace.
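Here's that connection dance in one place, as I'd run it; the cluster name matches what I created earlier, and kubectx/kubens are the optional quality-of-life pieces:

```bash
# Pull the cluster credentials into ~/.kube/config and switch context to it
doctl kubernetes cluster kubeconfig save k8s-challenge-gitops

kubectl get all               # sanity check; goes over the internet, expect some lag

# Optional helpers (one project, two binaries): brew install kubectx
kubectx                       # list/switch contexts (which cluster you talk to)
kubens <namespace>            # switch your default namespace, instead of:
# kubectl config set-context --current --namespace=<namespace>
```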
By the way, the terminal moving into the editor: they did that earlier this year, and it basically means I don't need any other app; I can do my demos in this one window without constantly switching back and forth. They even have a web browser I can embed, but right now it kind of stinks, it's not great. I'd love to have one shared window and never switch, but anyway.

Let's see how our cluster's doing. Now that we're connected, we can continue. They give you a couple of other little choices, like automatically installing minor versions, which is fine, and when you want upgrades to happen; I'm not leaving this cluster around long, so it doesn't really matter, and they'll do minor versions for me while I control the timing. Then they have what I think is a really cool one-shot install option: I could say I want Loki for logging (a nice new way to consolidate your logs), maybe some monitoring stuff, maybe an extra ingress controller. I can use the load balancers that come from DigitalOcean, but I could also run another ingress inside my cluster. Basically I could install all these one-click apps myself anyway, but we're leaving it super vanilla for today. And then it's done. I can always go back to those things; there's lots in here to click on and explore, and it's a pretty nice setup for a product that's only been around maybe a year and a half. Not too complicated. I also get the default standard Kubernetes dashboard, already authenticated: my nodes, my total capacity, and the node list defined as a pool. You can have multiple pools and then target your resources to specific pools.

Now is where the rubber hits the road: we've got our cluster and we're talking to it. What I really need to do is see if my app will work on it before we do Argo or Helm or anything like that, so let's clone it: gh repo clone bretfisher/example-voting-app. In a perfect world my app fully works on Kubernetes and I can simply deploy it, so I'll go into the k8s directory (or "kates," depending on your pronunciation preference), back up a level, and do a kubectl apply on it. And... ah, this wants to run in the "vote" namespace, which is fine, so we just need to create that namespace first. Oh no, wait, it says "namespace/vote created." Let's just run the apply again, because it may have been an ordering thing. There it goes: the namespace didn't get created before the rest of the resources on the first pass, and the second apply sorted it out.
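Replayed as commands, with the manifests folder name as I have it in my fork (yours may differ):

```bash
cd example-voting-app           # the app fork we just cloned

kubectl apply -f k8s/           # first pass: namespace/vote gets created, but
                                # resources targeting it can fail if they apply
                                # before the namespace exists
kubectl apply -f k8s/           # second pass is idempotent; everything lands
```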
So now we've got all this stuff spinning up. These k8s files are hard-coded; again, this is plain Kubernetes manifests, not a template, so all the image names and everything else we might want to change are baked in. One of the things in my app repo is a set of workflows that build and push. (Oops, you weren't watching me, sorry.) Inside the app repo there's a .github directory with the workflows under it, and there's a bunch going on in there. There's a linter, already set up in my app: if you go back through this year's live shows I have a bunch of episodes about GitHub Actions, including one specifically about Super-Linter, which I'm a fan of and which you should implement in every repo that has code, or really anything in it, because it lints everything: Markdown, YAML, Kubernetes files, Dockerfiles, all of it.

Then there are workflows for each app. This app is a mono repo, meaning three different custom apps, each in a different language, doing different things. If you've never seen this app before, there's a diagram of the layout: a Postgres database, a Redis queue, a back-end worker container, a front-end website for voting, and another for viewing the votes. The vote site would be for your customers, and the result site is the back-end view for administrators watching the votes come in. We use it a lot in my courses and workshops.

Right now these workflows only build and push to Docker Hub, not GitHub Container Registry, so let's fix that first, since I want to do as much of this on GitHub as possible. If you scroll down, the workflow logs into Docker Hub and then builds and pushes there; what I want is to also push to GHCR, which in this case is actually quite easy because all the boilerplate is already here. We'll start with the result app first, in case you want to follow along.

Let me catch up on questions. Ben: "kubectl is supported with one minor version older or newer." Excellent, thank you, I'm glad you found that. "kubectx with fzf is so cool": I know about fzf, but I don't know that integration. "Have you used Visual Studio? After a year of VS Code, the actual Visual Studio for .NET Framework projects feels so bad": I haven't used actual Visual Studio in so long that I'd probably feel the same way. The plugin architecture is completely different, and I love the marketplace and all the integrations for VS Code. It's hard to go back.

All right, let's look at the app. kubectl get pods shows nothing in the default namespace, so let's look in the vote namespace and see how things are running. This is a pretty good sign: the pods are running and haven't restarted, except for the worker pod, and it says it's running now. Get them again: not continually restarting, good. That's one of the quick ways to say "at least my app pods are running," and then I could use kubectl logs to dig deeper. In this case I know the worker needs the database and Redis to be available; if they're not, it basically crashes, restarts, and tries again, and since those take a few seconds to spin up, the worker was probably just crash-recycling (basically restarting the pod) while it waited.
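The quick status loop, for reference, with the namespace flag since we haven't switched our default yet:

```bash
# Watch until the RESTARTS column stops climbing; a worker restart or two is
# normal while postgres and redis come up
kubectl get pods -n vote

kubectl logs -n vote deploy/worker   # next stop for a crash-looping pod
```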
So let's check that out. Oh, is it not... it should be a deployment... oh, because of the namespace again. I'm going to solve this once and for all: kubens vote. Now I'm working in that namespace and I'll stop making the same mistake, and since we've pinned ourselves to the vote namespace, we don't need the flag anymore. The worker log is very simple: "waiting for db," on and on forever. Let's look at the database; I think it's a deployment, not a... yeah, look at that. I have an authentication problem, which is probably an outdated manifest. I'm trying to think why a user would constantly be failing a Postgres login; it's probably a health check doing it, or the apps themselves failing to log in. Either way, we have an auth issue to fix first, because the reality is that Argo is not going to fix any of these problems. Argo just automates deploying updates, so we can't implement it until our app can simply be deployed to a Kubernetes cluster without any additional modification.

So back in the editor, I'll open a folder... and then add more folders to the workspace... oh no, I want to create a workspace, I'm choosing the wrong option. There we go: I have my two repos, the example app and the Argo one. Let's look at the deployment of the database. In the Postgres deployment we've hard-coded the password and the username right there, so again, not production quality. Ideally you'd create these as Secrets, and you can create them several different ways, but we're not getting too deep into secrets today, because I don't want this to be a four-hour live stream.

Paul: "brew install kubens gives an error, no available formula." Great question: when you install kubectx, it installs both binaries. kubens and kubectx are two binaries from the same repo, the same project, and the project is called kubectx, so install that one and you should be good. "Fuzzy-find integration with kubectx is pretty simple to set up": okay, nice, thanks for the tip. I have a couple of little tools like that, like the j project (jump, I think it's called, i.e. autojump), which is kind of a fuzzy finder for directories. Probably not the same thing you're talking about.

So: the Postgres user and password are in the manifest, we're on an older version of Postgres (I know the app's a little out of date), and we're mounting data, which for a demo app we don't really care much about. The service is up. Now the result deployment: see how the image URL is hard-coded? We'll eventually want to change those to GitHub Container Registry. And that's interesting: we're not pinning the password in the result deployment. The vote deployment doesn't have any password in it either. And the worker, let me guess: no password. So no Secrets are being created and no passwords are being stored in the manifests at all.
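For contrast, here's the anti-pattern we just found next to what you'd do instead. The literal values are illustrative (the manifest hard-codes something like this), and the Secret-based variant is a sketch we're deliberately not wiring up today:

```yaml
# What the demo's Postgres deployment does: credentials inline (demo-only!)
env:
  - name: POSTGRES_USER
    value: "postgres"          # illustrative value
  - name: POSTGRES_PASSWORD
    value: "postgres"          # never ship real credentials like this

# What you'd do instead: reference a Secret created out-of-band, e.g.
#   kubectl create secret generic db-creds --from-literal=password=s3cret
env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-creds         # hypothetical Secret name
        key: password
```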
Since this is a mono repo, everything's in the same place, so let me refresh my memory in the result app; I think the password might be hard-coded in the code. It's not in the Dockerfile (you should never hard-code passwords, but what are you going to do with demo apps). In server.js (no, I don't care about your error, it's trying to do JavaScript stuff in the background), let's look for the database... yeah, see, right here: this connection string hard-codes the password. This is exactly how you don't do it, but it should work. So the question is: what's producing that auth error?

Let's look at the health checks, or do we even have any? I only see container ports and a volume, nothing else, and nothing in the service either. We're not doing any sort of probes (which we would want to add), so that's not the cause. Back to the cluster: we have our ReplicaSets, which we knew, because the pods exist and they come from the Deployments, and then we have the services. What I can do now is at least bring the app up in a browser. We've done NodePorts here, and I'm not sure what the firewall situation is; I'm not even sure what my cluster's IP addresses are. I don't have a load balancer in front of this cluster, which I probably need, so for now the only thing I can do is talk to an individual node, which is not ideal. One of the things this app doesn't have: these are two different web services, so technically we should probably create Ingress resources (maybe NGINX, maybe an external load balancer) rather than raw NodePorts. But that's the example we have, and as a DevOps professional I don't always get to deploy other people's apps the way I want, so we'll stick with the NodePorts for now.

So we pick a node's IP address and see if we can get it to work. What was our port, 31001? Let me show you what I did: my pool of nodes for this Kubernetes cluster, just like EC2 instances on AWS, are just droplets in DigitalOcean, so these are my IP addresses and I picked one of them. Going to that NodePort on that IP: okay, this is the result app. And the vote app is one port lower, so 31000.
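If you're following along, the node IPs and NodePort mappings are easy to pull from the CLI instead of the dashboard (the service output in the comment is the general shape, not pasted output):

```bash
kubectl get nodes -o wide    # EXTERNAL-IP column: the droplets' public IPs
kubectl get svc              # NodePort mappings, e.g. "... 5000:31000/TCP ..."

# Then browse to http://<node-ip>:31000 (vote) and http://<node-ip>:31001 (result)
```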
And I can vote. Hello Captain Corsair, what's up Nuno, happy new year. "Hello from Tokyo at 4 a.m.": well, that's dedication; you get the dedication air horn.

Very cool. One thing we know isn't working, though: since I'm the only voter so far, this should be flipping between 100% and 0% as I change my vote, and the votes aren't getting tallied, which further points at that password problem. Let me go back and look at the logs across the different pods to figure out exactly what's wrong. The worker: yeah, see how it's still sitting at "waiting for db." There are sometimes bugs with this code (I didn't write it) where it doesn't recognize things, so I'm going to kill the worker pod: get the pods, delete the worker, and hopefully when it comes back up it connects properly, because I'd ideally rather not troubleshoot exactly why this pod isn't talking to the service and finding the database when it should totally work. (I noticed that, Paul, thank you; I don't know why they're not working, let me see if I can fix it. Probably my new machine; I'm on the M1, everything breaks.)

Okay, we've deleted it, the Deployment self-creates a new pod, and... same thing, the logs still say "waiting for db." So let's look at the worker Deployment: the image is dockersamples/example-voting-app worker, and that may not be the image I want. Let me check my repo, because my workflows push my images: under the worker workflow, remember we're pushing to Docker Hub (we haven't done the GHCR part yet), and my image lives under bretfisher. So I'll swap dockersamples for bretfisher on the worker image and see, because the worker in this particular example app is a little problematic; I've had trouble with it in the past. Save, apply the directory again, and you can see it reconfigured the worker and is pulling the new image. Seventeen seconds ago: scaled up the new ReplicaSet (it always does that first), then scaled down the old one. There we go.

So yeah, it was basically the code: I was using an old image from Docker's Docker Hub org. If you remember, Postgres a couple of years ago didn't require passwords, and at some point (2020, I think) it started forcing them. This app had to be changed, and I changed all of my fork, but I didn't have permission to push my images to the dockersamples repos, so those never got updated; only mine did. Now, back in the voting app, it says it's processing but still doesn't seem to be working. Back to the log files: we processed the vote, but maybe we didn't store it. Yeah, the worker is seeing votes coming through Redis, so what I probably need to do is fix all the apps to use my versions of the images; the code is probably old everywhere else too.
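The worker triage, condensed into commands; the pod name is whatever yours is, and the replacement image follows my fork's naming, so double-check yours before pasting:

```bash
kubectl logs deploy/worker          # stuck repeating "waiting for db"
kubectl delete pod <worker-pod>     # force a fresh pod; didn't help in this case

# The real fix: in the worker deployment YAML, swap the image from
# dockersamples/examplevotingapp_worker to bretfisher/examplevotingapp_worker,
# then re-apply and watch the rollout:
kubectl apply -f k8s/
kubectl get pods                    # new ReplicaSet scales up, old one scales down
```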
The result deployment first: I'm going to change all of these images. We can go back to the repo to confirm the names, since my GitHub Actions are what build my images; that's the worker one, and under result it's examplevotingapp_result, with an underscore. So I'll change that one, and the vote deployment as well; I suspect the code for all of them is old and needs updating, so none of them should use the dockersamples images. Then another apply. Note that if we had Argo in place here already, every one of these changes would instead be a commit to the repo, and Argo would see the repo change and apply it in the background.

So the result and vote pods are downloading and redeploying. Status check: these have been up for 35 seconds, which is good, that's what we want. Let's hope that worked... there we go: if I click dog, it goes to dogs; if I click cat, it goes to cat. Dog, cat. Very cool. Yay.

The next step is to change those workflows so they also push the images to GitHub Container Registry, up in the workflows directory. Because this is a mono repo, I'm building all three images separately, and you can see I have them scoped so the automation only runs when the path of that app, or its GitHub workflow file, changes. Each workflow's path filter is focused so you don't rebuild images when something unrelated to that app changes: ideally I don't want to constantly rebuild and push my result image if nothing in that app changed, but if something in another app changes, like the vote app, I want that one built and pushed. That's why each is scoped to its own directories (see the sketch below). Does that make sense? This is a key feature for a mono repo: if you can't filter based on file names or file paths, I would greatly discourage you from ever using mono repos, because it becomes very hard to limit the blast radius of pull requests and changes; you won't be able to act on just one app. In that case you'd do, I don't know, "mini repos," whatever that's called: the result app in its own repo, the vote app in its own, the worker in its own. And that's how most of you should be doing it anyway: separate repos. For a sample app like this, a mono repo is simpler and easier to explain and share, so just because it's laid out this way doesn't mean you should do it this way.
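Here's what that path scoping looks like at the top of each per-app workflow; this is a sketch patterned after the idea, with the result app's paths as the example, and the workflow filename is hypothetical:

```yaml
# Trigger this workflow only when the result app (or its own workflow file) changes
on:
  push:
    branches: [main]
    paths:
      - 'result/**'
      - '.github/workflows/build-result.yaml'   # hypothetical workflow filename
  pull_request:
    paths:
      - 'result/**'
```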
So let's go down and start with the result workflow, and we're basically going to copy and paste the same stuff across all three. If you've seen me talk about GitHub Actions on this show before, or if you browse my GitHub repos and search for "actions," you'll find a bunch of sample repos with various examples of things you can do. This workflow uses Docker's official actions inside GitHub Actions, basically all their build tools: the Docker Engine, BuildKit, and QEMU for multi-platform building, which is something I've talked about a lot this year and will be talking about a lot next year, so I can build ARM- and Intel-based images.

Most of this is boilerplate, but down here is the image name. If you remember your Docker commands: when you don't give a full URL, it assumes Docker Hub, so just putting my username in the image tag's path means it goes to Docker Hub. What I need to add is ghcr.io/bretfisher/ plus, ideally, the name of this repo. This is another problem with mono repos and GitHub Container Registry: GHCR likes to match packages to your repo, and since we have one repo with many apps, I'll have to use example-voting-app plus the app name (sorry, this is the result app, not vote). I don't love that method. There is a way on GitHub to push images under whatever names you want and associate them with repos later; if you go to my GitHub profile under Packages (that's where container images live in GHCR, though packages aren't just Docker images), you can see mine are all tied to a repo. But for simplicity today I'm not doing all that extra work; again, this isn't really meant to be a day of GitHub Actions, just enough to get us started. So I'll put that in, and I think that's the right URL; we'll find out in a minute if it isn't.

I also need to log in to GitHub Container Registry, so this workflow is going to log in to both registries. Notice I don't log in if it's a pull request, because on a pull request I don't necessarily want to push the image; pushing is what you do once the pull request has been merged. (And I know I'm using a very simplistic name on the tag, no SemVer or anything, just overwriting the same image; I get that, and we'll solve that problem a little later.) For the login I don't want a personal secret here, I want GITHUB_TOKEN; let me look up my notes for the right variables. So: log in to GitHub Container Registry with github.actor as the username and secrets.GITHUB_TOKEN as the password. I guessed that correctly. That's how I log in to both Docker Hub and GHCR, because down in the push step, now that the image is tagged with both names, it pushes to both.

The push step is where we're a little more advanced: it pushes only when it's not a pull request, as we covered, and it gets its tags from the metadata step. If I wanted to do things like put the pull request number or the date into the tags, I can do all of that with the docker/metadata-action, which is quite advanced; I definitely recommend reading up on it, because so far it has been able to meet my needs 100% of the time for how I need to label, tag, and name my images.
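Put together, the pieces I just described look roughly like this in a workflow. The GHCR image name follows my mono-repo naming above but is an assumption about the exact separator, the action version pins are assumptions too, and GITHUB_TOKEN comes free with every Actions run:

```yaml
- name: Login to GitHub Container Registry
  if: github.event_name != 'pull_request'    # never push images from PRs
  uses: docker/login-action@v2               # assumed version pin
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}    # ephemeral token, scoped to this repo

- name: Docker metadata
  id: meta
  uses: docker/metadata-action@v4            # generates tags + OCI labels
  with:
    images: |
      bretfisher/examplevotingapp_result
      ghcr.io/bretfisher/example-voting-app-result
    tags: |
      type=raw,value=latest                  # hard-coded on purpose for now

- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: ./result
    platforms: linux/amd64,linux/arm64,linux/arm/v7  # Intel, 64-bit ARM, Pi-era ARM
    push: ${{ github.event_name != 'pull_request' }}
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
```

Hard-coding latest is the blunt fix for the failed build we're about to hit; the metadata action can generate SemVer, date, or PR-number tags once you invest in a real tagging scheme.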
Down here in the push step, now that I've tagged the image with these two names, it pushes when it's not a pull request (we talked about that already), and it gets the tags from the metadata step. We're a little more advanced here: if I wanted to do things like add the pull request number or the date into the tag, I can do all of that with the docker metadata action, which is quite advanced. I definitely recommend you read up on it, because so far it has met my needs 100% of the time for how I need to label, tag, and name my images; all of that is done in one metadata step. It adds a bunch of labels by default, you can add your own, and you can see that's what we're doing down here in the push step: we're pulling in the outputs of the tags and labels, we're using the builder we created, and we're building this for three different architectures at the same time. We're building amd64, arm64, and then Arm v7, which is essentially 32-bit Arm, the older, commonly used Raspberry Pi Arm version. Then we push all of those to both registries, Docker Hub and the GitHub Container Registry. So I'm going to copy this to my vote workflow, if I go down and find that, and also to the worker (we'll change their names here in a second), and then copy that GitHub Container Registry login, which is essential (got to log in), and put that up here, and do the same thing up here. Okay, so this should be enough for us to push all three of these images on a new build. What I'm going to do, instead of a pull request... actually, let's just do it directly; it won't take long. In the example repo we're going to do this on the command line, because that'll be a little easier. I've got a bunch of changed files, but I kind of want to separate these out, because I like to keep pull requests as small as possible, so I'm going to add just these three workflow files. You'll now see the staged changes over here. You can do it either way, but I'll type in "adding ghcr", commit that, and push. So I didn't actually do a pull request; I feel like we can save that for the actual code changes, and that was just a GitHub workflow change. Now, over in Actions, we should see some stuff running: the linter, because I always have my linting running on all pull requests as a separate workflow, and then the three image builds. Again, those three images are going to be built and pushed to both Docker Hub and GitHub Container Registry, for Arm v7, Arm v8 (arm64), and amd64, and... oh, and then it errors out. Let's go see what that is; maybe we did it wrong. "buildx failed with: invalid tag: example voting app." Oh, okay, so it's adding the latest tag automatically; that's what I messed up. If I go back over here to my workflow: the metadata step is adding a tag of latest, but I don't want it doing that anymore, so I'm going to hard-code it, like this. This is not ideal. You're probably going to want more flexible tagging, which is its own art form that we talked about a little while ago, but I'm going to hard-code that I always want this to be latest, and then over here, because of the way GitHub Container Registry works with mono repos, this is a simple setup.
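Here's roughly what that metadata-plus-build-push pairing looks like in a workflow. This is a sketch, not my exact file: the image names, action versions, and context path are assumptions, and it presumes setup-qemu-action and setup-buildx-action steps ran earlier in the job:

```yaml
- name: Docker metadata
  id: docker_meta
  uses: docker/metadata-action@v4
  with:
    # one build, two image names: Docker Hub and GHCR
    images: |
      bretfisher/example-voting-app-vote
      ghcr.io/bretfisher/example-voting-app-vote

- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: ./vote
    push: ${{ github.event_name != 'pull_request' }}  # PRs build but don't push
    tags: ${{ steps.docker_meta.outputs.tags }}
    labels: ${{ steps.docker_meta.outputs.labels }}
    platforms: linux/amd64,linux/arm64,linux/arm/v7
```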
I'm just going to do it like this. I'll probably change this later, but the goal is to get it to build, and we can't have it adding the tag in a separate step there while I'm hard-coding latest. You'll see me do the same thing over here, where I hard-code latest so I can remove this. Someone in chat says a mono repo can look better and easier at first, but it's very much not once you have multiple people working on the same repo and you have CI/CD. Yeah, it becomes a mess. Robot asks: is GHCR free to use? It is, to a certain extent; it's part of your plan. I think you get a certain amount of storage for free. Just look up your plan options; it's like all the other storage on GitHub, where certain things are free and certain things are limited. On the free plan you also have limited GitHub Actions minutes per month. So yes, you can use it by default; it's just a question of how big your images are and how often you're running GitHub Actions, and that's all in your plan. I think I pay for the Pro plan, so I get more than the free tier, and there's probably a page on GitHub that tells you all of that. Okay, so you see me hard-coding these in here, and then lastly I need to do the worker. I hope this is actually the fix, because I changed all of them. All right, let's commit that change and push it, and over here we click on Actions again and see it spinning up. Also, our linting failed a while ago, so while that's building, let's go see what the linter said. The linter probably failed because... I didn't write this code. Oh, this looks like a new rule in actionlint. Sorry, let me show you: I'm looking at the lint errors, and it's saying there are errors from actionlint, which is the linter that scans your GitHub Actions YAML files and tells you whether or not you did it right. What it's saying is that it prefers I put squarely brackets, squiggly brackets, curly brackets, whatever you want to call them, around the github.event_name expression. Which line is that? Line 50 in the worker file: "this expression must be contained within curly brackets, since it contains an operator," and then there's a help link. In its example it shows squirrelly (I guess I'm calling them squirrelly today) curly brackets around the whole thing. So, to have the correct syntax, even though I know these work and I've been using them for a year, I'm going to change it so I can pass my linting, and we'll try again. Oops... I think that's all of them, yep. Then we'll do the same thing over here; I'm actually just going to copy this whole line. And we're adding those; I always just say "fixing lints" when I'm fixing a linter problem. All right, maybe this is going to be the lucky one.
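For reference, the shape of the change actionlint asked for is just wrapping the bare expression. A before/after sketch (the line number it flagged is from its output, not anything here):

```yaml
# before: actionlint flags this because the bare expression contains an operator
if: github.event_name != 'pull_request'

# after: the same condition wrapped in the ${{ }} expression syntax
if: ${{ github.event_name != 'pull_request' }}
```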
Let's go back up to the Actions list... those are still... looks like it failed again. All right, we're going to watch this one and see if it succeeds. All right: invalid tag again. It's the same problem: example-voting-app, result, latest, main. Ah, right, I don't want it to add main, so it must be adding that tag by default based on the branch. This is a case where I need to look up the syntax to make sure I'm doing it right. We know the hard-coded part works; we'll close that for now. So what's happening is that the automation in the GitHub metadata action is trying to add a tag based on the branch, and if I take my line out, it will put main there. I don't want main, I want latest, and I essentially want the tag hard-coded. Let me see... I'm pretty sure there's a way to turn that off, so I'm going to go look it up. If I just wanted to force them, I could essentially ignore the metadata for the image names and do it manually. So right here I'm going to comment that out, because I don't want to use that, and down here I can just list them, I think, like that. Hmm, I don't think I can do that; I'm going to have to go through the documentation. Usually what you want is for the tags to be consistent and named consistently. This case is a little unique, because the image name will be different since it's a different registry, but normally the tags should match. Because of the way this is going, why don't we just try this: example-voting-app, then vote, and then latest, that way. Rodrigo, yeah, that is Copilot trying to make suggestions. It's awesome, though it does lead me down the wrong path sometimes. I'm looking up what that line used to be before I just broke it... okay, yeah, it was type=raw. So one thing I'm going to try is something I was trying to avoid a while ago, but we're going to go ahead and do it anyway. If I do it like this, you see how it says ghcr.io/bretfisher/example-voting-app-vote? I think it will auto-create the package, but it won't associate it with a repo, and I kind of wanted it associated with the repo. I think we can do that later, so I'll go with it. We're basically going to hard-code the tag as latest, and we can also give it another tag if we want, something like this, where we say... no, we don't really want a prefix; we'll just say something like gha. What this does is give the image a tag of the GitHub Actions run ID, so that later, if we had a problem with a GitHub Action, we could associate it with the image from that run. This is just an example of how I can do multiple tags. These type=raw entries are not always necessary. If you go to the documentation (I wish I had the link in there; I should probably add it), the metadata-action docs are a fantastic resource. It'll create a bunch of different tags and a bunch of different labels on the same image, and it has all these tag options so that, without hard-coding like I'm doing with type=raw (which is very manual and tedious), you can have it do a lot of automated things based on the pull request, the pull request number, or whether you tag your commits in git, not to be confused with Docker tags.
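So the tags input I've ended up with looks roughly like this. A sketch of the idea, with a couple of the automated alternatives from the metadata-action docs commented out for comparison:

```yaml
tags: |
  # always overwrite :latest (good for humans, bad for machines)
  type=raw,value=latest
  # unique, never-reused tag from the Actions run ID (good for machines)
  type=raw,value=gha-${{ github.run_id }}
  # automated alternatives the action supports (not used here):
  # type=semver,pattern={{version}}
  # type=ref,event=pr
```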
There's tons of documentation there, and I refer to it often when I'm creating new workflows, because I'm often having to do things like a semver option for my tagging. In this case I'm just trying to keep it super simple for an example app, so we'll use those two; those should work. I did this for vote; if I go into result and try again, we'll call it like that, and then finally this last one, the worker. Okay, let's give that a shot. Hopefully this is it. This is basically my day, by the way; if you're wondering what it's like to be a DevOps consultant, this is kind of an average day. You just sit here fixing tags. We're not poppin' tags, we're fixing tags. All right, let's go back over. In case you didn't see what I was doing, because I wasn't showing it on screen, let me show you what I did while those are building. I added back the tags line, which means every time we push, we're pushing latest, which isn't ideal for machines. Then there's another tag it's going to add, and there are different types: you can add a prefix or a suffix and then the value, all this different stuff; again, the documentation will help you there. In this case I'm just tagging them with the run ID from GitHub. Nothing fancy. It's not a semver, it's not date-based, it's probably not what you'd want in production, but at least every time I push, there will be a latest that's overwritten every single time, which is good for humans but bad for machines, and then we get this run ID. It's basically an incremental number for each image built, and since this number will be unique every single time, we can use it in Argo when we want to promote. Sometimes this is called a promotion: we take our images and promote them to production by going into our Argo values file and telling it the image tag to use, and we'll use this unique run ID tag to do that, once we have Argo set up. All right, this is a good sign: the worker finished building and pushing. The other ones are still working, but one worked, so step one is almost complete. If you go back to our steps, our milestones, let's turn these into checkboxes: we have created the cluster, downloaded our config, we have our app repo, and we are adding GitHub Actions for image building and pushing to GitHub Container Registry. We have our app manifests, which are technically in the same repo as the app itself, so we're kind of consolidating those two. You could put a Helm chart (a very generic Helm chart) or Kustomize templates in there as well, and they could live in the app repo. I tend to like them in their own repo, but this is an existing example, so I'm just using the ones that come with it. It doesn't get much different if we split them out; it just means you're able to track the PRs about your deployment changes versus your code changes a little more easily. You can still do them in a mono repo like we're doing, but I find it easier to isolate those PRs, because you've got PRs happening on your app code every single day, but it's only, I don't know, once or twice a week that you might change something about the Kubernetes manifests. So it's a little easier, I find, and it's usually different people, so you might have different permissions on those repos for who can accept and merge PRs, and all that.
So the last step here is about Argo. This is, of course, assuming... if we go back to our Actions: yay. Yeah, Robot and Rodrigo, I've been using Copilot since the beta over the summer, and it's pretty great for the DevOps stuff. It fills in a lot of YAML; it helps me with everything from Terraform to Docker to Kubernetes manifests to Helm charts. All right, so now we have those images being built, and one of the things we'll notice is: look at that, they're automatically associated. I feel like that's an improvement; that didn't work before. My images are now showing up in the repo with the example app. That's perfect, exactly what I wanted. It must know, since I'm building them from that repo, to associate them with that repo. That's a super slick integration. So now you can go to each one of my images right inside of GitHub, and you can see that GitHub Actions run ID right there; that will change each time. You can also go down and see that there are two tags: latest, and then this run ID one. As I keep pushing, this number will change and there will be a history; older images will all overwrite each other on latest, so latest will always get you the newest one. But again, that's for humans; the robots prefer a hard-coded, never-reused, unique ID, and that's what I'm using the run ID for. If I technically wanted to go back, I could go back to my runs: into Actions, let's say into result, into this run, and the job ID. So this is the job, not the workflow, and I think the job IDs may be in here... trying to remember where I found it the first time I looked. It's somewhere in here... yep, I don't know where I put it. Actually, it's not so important. Wait, this is the worker; I was looking at the result, let me go back. If I just copy that number, go back over here, and search the logs... yeah, nothing. So I'm not sure where that number shows up in the default logs. In some other workflows, what I do is actually print it out: I add a step that echoes github.run_id, and that prints it. Not sure why it's not available here by default, but you can totally do that: add an extra step, echo out that value, and then you can find it there and associate builds with it. There's also, I believe, a way to get the run number; it just increments, starting with one, I think. So you could use that. I think there was a reason I never did: the number changes in length over time, or it just starts at one. I'm not sure what my reason was, but you could try that too. Basically, if we look at the image again, I'm using a GitHub context value right there as the tag, and there are lots of other things you could use, like the git commit, so you could make up your own workflow there.
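That echo trick is just one extra step in the workflow. A minimal sketch:

```yaml
# print the identifiers so they show up in the job logs
- name: Print run identifiers
  run: |
    echo "run_id=${{ github.run_id }}"
    echo "run_number=${{ github.run_number }}"
    echo "sha=${{ github.sha }}"
```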
All right, we're pretty far now. Let's close these, because I think we're done with the workflows. If we go over here and look up our pods: what we want to do is modify that spec to actually pull our images, with the proper tag. This is where we're going to want a separate repo specifically to deal with Argo, and with how we identify which image we want to run for each deployment. This is the beginning of that GitOps workflow. We have the images being built automatically and pushed automatically; presumably we'd add tests and have those also run inside the GitHub Actions, and we've already got the linting happening there. The cluster is there, so the next step is to connect them together, and that's what Argo CD is for. By the way, we've talked about Argo CD on this show several times before; if you go back through my archive of videos here on YouTube, you'll find it. Let me pull up some information on this. The nice thing is, this is going to be a little easier than your real-world production setup, because we're not going to need to deal with secrets; right now all those values are hard-coded. Maybe a separate exercise would be to add secrets to this repo and store them in a secret store. Basically, they'd be stored inside the Kubernetes cluster, but we'd need to feed them to Argo to store in the cluster, and that can come from GitHub secrets, since GitHub is our automation platform of choice right now. We don't have to go use Vault; we could also use the cloud's secret store, if that cloud has one. There are lots of ways we could do it, but for today these are all hard-coded in this example app, so we don't need it and can avoid that work. All right, let me pull up my notes on Argo so we know what we need to add. Someone asks: where did you get those fancy screen transitions? This is Ecamm Live, not OBS, so everything I've got in here is designed around that; it's a Mac-only app. Hey, thanks Rodrigo, thanks so much for the Super Chat, I really appreciate it. Happy new year to you, and I will totally be buying a coffee with that. Thank you so much. All right, what I'm doing right now is pulling up my notes and the Argo documentation, so we can start from nothing. In this case, we already have a repo for our app, right? This example repo already existed; that's the example app itself. And because this app is kind of a mono repo, building and pushing everything in one place, what I think I'm going to do, rather than having everything in separate repos, is use this repo: the one I created for this DigitalOcean Kubernetes challenge. For those of you just joining or who weren't here at the beginning, that's what we're doing today, and you can follow along and do it yourself over on the DigitalOcean Kubernetes challenge page; I'll go ahead and repost that in the chat. I'm doing the GitOps challenge, where we add automation to our app so it will build images and then deploy to the Kubernetes cluster running on DigitalOcean. And if I finish all of this and make a tutorial about it before tomorrow night at midnight, before the clock turns 2021... sorry, 2022... before the clock turns to 2022 in California, USA, then we can get some benefits.
This is the swag we get: we get to donate, we get swag from multiple stores; it's pretty great. So I'm doing that, and I'm going to use this repo for the Argo CD config. I did not prepare the holiday music for today; should have had background music. All right, we're going to use Kustomize so that we don't have to make a Helm chart. A Helm chart would require additional files, and then we'd have to take the files we actually have now and turn them into templates; if we do this with Kustomize, we shouldn't have to change much of anything. So we're going to do that. Now, on my new machine, my new Mac, I don't even think I have Argo installed. What I'm going to do is keep this as generic as possible, so I'm going to follow the Argo CD getting started guide. That way, hopefully, I won't have to write my own documentation on how to deploy and configure Argo, because that's just going to make my README longer. If I keep it as close as possible to the official documentation, hopefully we can save some time. And in case you're curious: because this is just a standard DigitalOcean Kubernetes cluster, this should work with any getting started guide. Over here we've got some basics. This is going to get our cluster set up for Argo, and then we're going to have to add the repo we're using, with the YAML inside it, and link it all together; probably some secrets are involved. Let's just start. We're going to use the stable version of Argo and install that. Again, this would be better if we weren't doing a one-off thing. This is one of the challenges of GitOps, too: how do you automate the install of the tool that automates your installs? So we're manually creating the namespace Argo is going to live in, and then we're manually going to install the deployments and whatever else Argo needs; it's going to be probably a service and some secrets and whatnot. But we're already talking to the cluster, so let's go ahead and do that. I'm going to be switching back and forth here. All right, so that gave us a generic boilerplate Argo install, and it also includes a web interface; we'll check that out in a minute. You've probably seen demos, or even used Argo yourself before. The docs talk a little about what to do if you don't want all of the extras. And on this new Mac, an M1, I don't have the CLI, so we're going to follow that step to get the argocd CLI, because we're going to want it for a couple of basic steps.
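From the getting started guide, the install boils down to a couple of commands. This is the guide's stable-manifest approach, plus the Homebrew CLI install for a Mac (your package manager may differ):

```bash
# create the namespace Argo CD lives in, then install the stable manifests
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# the argocd CLI, on macOS via Homebrew
brew install argocd
```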
Now, this is a little bit hacky: we're not using load balancers (we don't have a load balancer set up for our DigitalOcean cluster), and we're not using an ingress provider; we didn't install one. Those are all optional, and I was trying to limit the number of extra components we worry about, because this is focused on GitOps, so I don't technically need any of that stuff. Right now everything is set up for a NodePort. By default, the Argo CD API server is not exposed with an external IP; to access the API server, you choose one of a few techniques. On DigitalOcean, I like to use their load balancers, so optionally I'd install a load balancer and connect it to my cluster. It basically knows all the different nodes of my cluster, so it can point to all of them, round robin, and then I'd go in here and change the service. We could also do ingress, where we install an ingress provider and then add a new resource, an ingress resource, and that would let our Argo website be exposed. But I'm going to do the port forward, the cheesy one. It's a very temporary thing, and it also means only I can access it, which is a little more secure since we're streaming on the internet. So I can throw this command on my local machine, and I'm just going to do that in a separate window, and then I'm going to get the default password and use the login. Let's see if that works. Now, I technically should be able to bring up... I already forgot the IP address; where's my IP? Because I don't have a load balancer, I'd basically be using each node and talking to each node directly, which is kind of cheesy. If we had a load balancer, which you should always put in front of this, then I could point everything at the load balancer IP and hit it remotely; but because I have three nodes, I'd just use the node IP of one of them, and the networking would make sure I get to the pods and services I need. So I need to look up one of my node IPs; I believe I have that here, and we're port forwarding this on port 8080, so I should be able to bring that up. It's thinking about it... oh, sorry, that's dumb. It's not going to be on a node IP, because I'm port forwarding; it's going to be on 127.0.0.1. All right, and then I'm going to grab that password it created for me automatically the first time. Now, there are a lot of warnings in the guide, in the documentation, about passwords, obviously: make sure you're doing this securely, and you should delete the password inside the cluster once you have it documented. It actually creates a secret with a random password on startup; you can change that later and then delete the secret.
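Those steps from the guide, roughly:

```bash
# expose the API server/UI locally at https://127.0.0.1:8080
kubectl port-forward svc/argocd-server -n argocd 8080:443

# read the auto-generated admin password (delete this secret once you've saved it)
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d

# then log in with the CLI (the web UI is at the same address)
argocd login 127.0.0.1:8080 --username admin
```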
Hey, I got another Super Chat, thank you so much. Hello Mr 20, I appreciate you very much, thank you so much, that's great. All right, so I got the password, and we can do this multiple ways: in the user interface, pasting in that long password, or on the command line, which we'll probably need at some point anyway. All right, so we're logged into the site, and I'm logged in from the command line. The next step: let's go back to the documentation real quick, because I want to make sure we're following their guide. We've gotten through the authentication process, and now we need to tell Argo about our context, so the local command can talk to the server. Right here is where we tell it. Remember the kubectx command; we can use that, or we can use kubectl config get-contexts, to see what our local config is set to, and we can use this as a shorthand. This is the name of my cluster in my context. Then I'm going to use the argocd cluster add command and tell it which one, and you'll see it gives you a warning: it's basically going to give itself access to that cluster so it can do things, and it's going to give itself admin, root privileges. Then I have the repo; we're going to register that. Let's see, there's a pretty long command down here in the docs under adding an app. We're not going to use the guestbook; we're going to use our app. Someone in chat points out that the cluster add part is only needed when you deploy to other Kubernetes clusters. Thank you; yeah, because I already technically logged into the local cluster, so it is kind of redundant. So now we're creating, in the Argo database (or the Argo configuration that lives in-cluster), a record telling it about this app, and we're going to give it the repo, the destination namespace, and all this other stuff. We might need to delete the voting namespace we have there in order to create this. The problem with me is that I'm so used to the command line. The Argo CD web UI has actually changed and gotten a lot better in recent years, and I don't know it as well as the command line; that's one of my challenges. I could probably fill all this out in the UI, or I could just copy and paste commands. I probably should learn the UI better than I do, but it didn't used to have all these fancy features; in the early days, it was more for going in after I'd done something and looking at status. So we're going to follow the CLI command options, and I'll need to change some of these values. First, inside of our cluster, let's go back real quick. If we look at our namespaces, we created that voting namespace, but that was just us testing the app, making sure it works. So let's go ahead and delete it; that will delete all the resources we created as part of this app, and now we're going to expect Argo to create them for us. We're going to call the app vote, and I need to give it a directory of that repo, so I'm going to grab the URL of my example-voting-app and the path. We're going to try the straight-up manifests for now, and see if we can avoid the unnecessary templating that we don't need for this particular demo. I've never actually done a non-templated manifest deployment in Argo, but let's see, and then we're going to change the namespace to vote.
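Stitched together, those CLI steps look something like this. A sketch: the context name and repo URL are my stand-ins, and the manifest path assumes the example app's k8s-specifications folder:

```bash
# register the cluster (only needed for clusters other than the one Argo runs in)
argocd cluster add <your-kubectl-context-name>

# create the app from the plain manifests in the repo
argocd app create vote \
  --repo https://github.com/<you>/example-voting-app.git \
  --path k8s-specifications \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace vote
```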
All right, it looks like we deleted everything. We're still in the voting namespace context right now, so if I did a kubectl get pods, it's probably going to give me an error. Yeah, there's no namespace vote, so there are no resources in it. Let's do a get pods across everything and just see. It's kind of hard to read, but you can see all the ones Kubernetes automatically installed as part of our DigitalOcean cluster. We have Cilium, which is our networking; we have CoreDNS, which is of course required for DNS (it's technically optional, but everybody needs DNS inside their cluster, so it's not really optional); we have the node agent and kube-proxy (the node agent, I'm assuming, relates to the kubelet); and then the DigitalOcean CSI node driver for storage. Then Argo makes its own namespace and puts in its own pods. It has a Redis server, presumably to store work or some sort of configuration; I'm not really sure exactly what they put in the Redis. And we have the controllers, which are necessary for it to act on the resources we're going to create, because we're going to have custom resources for Argo and other stuff. All right, and the last thing I might change is the destination server... actually, I don't think I need to change that; I can leave it. So this is what the command ends up being; it's kind of small on the screen. We're creating an app in Argo called vote, we're pointing it to the repo, we're giving it the subdirectory of the manifest files, and then we're pointing it at the service API inside this cluster, and the namespace vote. I don't think I need to create the namespace beforehand to make it work, but we'll see. Then we can get some information on it. Here's what's interesting: we haven't actually updated the manifests in the repo to point to the GitHub Container Registry images yet, so this probably won't work on the first iteration. But what we're going to do is sync it; that's normal. If we look at the visualization over here: yeah, at this point we basically want to sync, and we can do that either through the button or through the command line. I'm going to stick with the command line, because that's where we've been most of this time. Let me make the font a little larger... oops, I'm not on Linux, and that shortcut doesn't work in this terminal either. So this is going to take a minute. These are all actually spun up and running, but we might end up with the same problem we had before, because these are pulling from GitHub now, not from my local clone of the repo. If we look at something like the worker deployment: see where it says dockersamples? That's the wrong image. Remember, we updated them locally, but I didn't save those settings for using GitHub Container Registry and store them up on GitHub, and Argo is designed to look at GitHub for its changes. So we need to change these files.
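For the record, the sync we just kicked off from the CLI is only a couple of commands:

```bash
# inspect the app's status, then trigger a sync
argocd app get vote
argocd app sync vote
```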
Under the deployments: we don't need to change the database (the database version is fine), but under here we're actually changing the image from Docker Hub to our GitHub Container Registry, and these are the images we're going to use for all three apps: the result, the vote, and then the worker. For now, we're just going to use latest; in fact, I'm going to hard-code that. Again, this is something you should never do in production, but we're going to change it and watch it change. Because these images are open source on GitHub Container Registry, I don't believe we need to authenticate, so we don't need to store image pull secrets in our cluster. I'm pretty sure I'm right on that; I could be wrong. So let's see if it can actually deploy them. Now I'm going to add these three changes to a commit, and I'm going to push. What's interesting, if you paid attention earlier to our example-voting-app: I pushed a commit directly to the main branch (I didn't do a pull request, to save time), and you'll notice it's not going to rebuild the images. That's because in those GitHub Actions I gave them file paths, to say don't rebuild the image unless I change code files in each app's paths (I'll sketch what that looks like in a second). Since we just pushed Kubernetes manifests, none of those images get rebuilt, which is what we want: we don't want images rebuilt unless something in them changes. Right now all it's doing is linting, and technically we don't really care about that at this moment; it's not going to block Argo from working. So we can go to the repo and see that I did a commit a minute ago, and over here we can see it says healthy and in sync... oh, sorry, I'm not sharing the right screen; let me back up a second. Inside the repo we have the commit one minute ago, and then inside Argo... now it's indicating out of sync. I was trying to get this said before that happened. Let's do a get pods: these have been running five minutes, so we know they haven't yet been replaced, at least the ones we care about, because we just changed the image paths for result, vote, and worker. One thing we didn't do here is put any Argo source code inside the repo, which is what I prefer. Infrastructure as code, right? Everything should go in the repo, and we should document it there. Thanks Zolt; I didn't actually know that interval was three minutes, I thought it was shorter, but I appreciate the backup. I've got Argo backup right in the room. So we're waiting on that update, but there are a lot of things we could be doing declaratively.
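While we wait: that path-filter trick on the build workflows looks roughly like this in the workflow trigger. A sketch; the exact paths in my workflows may differ:

```yaml
on:
  push:
    branches:
      - main
    paths:
      # only rebuild this image when the app's code (or its workflow) changes
      - 'vote/**'
      - '.github/workflows/build-vote.yaml'
```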
One of the things we want to do, and I'll bring this up in the documentation (this is the getting started guide we followed): the next thing we should immediately be looking at is the declarative setup, because we really want to configure and control Argo from inside the repo itself. We essentially have this argocd repo, but there's nothing really in it yet, and this is kind of what we want to go for; this is sort of version two, showing all the different places we need to configure things. If we scroll down, we can see how we do this: it's a new kind of resource type we're creating. You notice the apiVersion, and this is typically how I like to see it done: we create this stuff in a repo, we do very basic cluster bootstrapping manually one time, but after that, everything is changed through the same PR process, whether it's infrastructure, deployments, or code. Click the refresh button; let's go back and see what happens. Eight-hour live streams... yeah, I was joking. My point is that, in order to do all the things ideally, I'd have this cluster itself built, controlled, and configured through infrastructure as code. Maybe Terraform, using a DigitalOcean provider, could spin that up for me, and that would be in its own repo with a standard Terraform setup. Then, ideally, I'd have all this Argo CD stuff inside the repo we created, with the configuration for the app and any overrides we might need to apply, assuming we're using templating. That's more of the pieces of the puzzle we'd need to build out. All right, what do I need to do for auto update? Zolt, help me out. Okay, so it's not automated by default, it looks like. So we could do an argocd app set vote... actually, I'm curious; if we get the app, the sync policy shows none. What we really want is auto sync: argocd app set vote with the sync policy set to automated, and if we get it again... okay, so now it's doing it. Yeah, enable auto sync. And again, that's me being dumb about the GUI; there might be an auto sync option somewhere in the GUI that I've never seen or never looked at. You can also go into the app and see a lot more in there. Just a simple little app, but it has dozens of objects; that is the way of Kubernetes. All right, let's get back here. They're all showing healthy. If we do a kubectl get pods, we'll see these three pods are now correct, and if we do a kubectl describe on the deployment, we should see the new image name pulling from GitHub Container Registry. So luckily, I was right about not needing authentication for GitHub. Yay, even with open source you don't. Yeah, just like with Docker Hub... well, with Docker Hub you need to authenticate to get higher rate limits, but with GitHub right now, I think whatever limits they may have are nowhere near as low as Docker Hub's. So, app details, sorry. If we go back over here, I was just showing that the image tag has changed on the deployment, so we know we're using the right one, and all three pods have been replaced. We can see that from the Argo GUI as well, or the Argo command, where it says healthy; these are all synced and healthy.
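Putting the declarative idea together with the auto sync setting: instead of those CLI calls, you could commit an Application resource like this to the Argo repo. A sketch; names, paths, and the repo URL are my assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vote
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<you>/example-voting-app.git
    path: k8s-specifications
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: vote
  syncPolicy:
    automated: {}   # the declarative equivalent of setting --sync-policy automated
```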
So essentially, what we have right now is the beginning of an ideal GitOps workflow. When we submit a PR to the app, the images get automatically built (assuming we had tests, we'd run the tests too), and then pushed to both Docker Hub and GitHub Container Registry at the same time, for all three image architectures: amd64, arm64, and Arm v7 32-bit. Then nothing else changes; that's all that happens. If we went and submitted a pull request on the code and changed something (I'm not going to do that here), it would rebuild the images automatically, push them, and there would be a new run ID. Now, because we're not telling Argo CD to watch image tags yet, this whole pipeline isn't fully automated. Presumably there'd be some steps going on in between: once the images are pushed to a registry, we might push them to a staging cluster, or a test cluster of some sort, or a beta cluster, whatever you want to call it, where there'd be more involved testing and more humans involved. Or, if you're like me, you just throw everything into production. The one little piece left is choosing which image tag to deploy. Right now we're doing that inside the repo of the app, and I don't really recommend that. Ideally, we could turn these into Kustomize; we'd basically just need to add Kustomize files, and then in the Argo repo (that other repo up at the top here, the one that's empty) we could have the overrides. That would be where we set the image tag we want to deploy, for specific deployments, for specific Argo CD clusters. Again, that's going to be more work, and I've already been live for two and a half hours, so I'm not going to do all of it today. But I encourage you to get involved. You can do what I did with the GitOps challenge, but you do need to follow the official steps (I linked them earlier in chat), and you have to submit the final form, with some screenshots of what you did, by tomorrow night, midnight Pacific time in the US. So you've got somewhere around 36-plus hours, and I think what I did here would be very sufficient for that. We have deployed Argo CD to the cluster; that's our final little milestone there. Zolt says: by the way, you can use Argo CD the declarative way; you can set up everything using k8s manifests. Yeah, and that's what I totally recommend you do, because when you start using argocd commands on a regular basis, that is not a declarative GitOps approach. That's one admin talking to one cluster, and who knows what commands you typed, who knows what changes you made. In a GitOps world, all of our changes have to go through the git process, and then we see those changes in our git log and can reconcile them with what's in our cluster; we can sort of guarantee that our cluster is exactly what we set in our manifests. So the way we're going to do that now, in this very limited example, is: instead of doing things in here, or changing anything in Argo itself, we're going to deploy a new image version, and all I'm going to do is go into my repo and change the manifests. This is basically what we did before; we're going to do the same thing. Since this is not a template, I can't template this out yet; I'd need to turn it into a Kustomize template or a Helm chart before I can have overrides in the template values. But what I can do is go in here and change this tag. This is the manual approach.
Imagine if all of these YAML files were in their own repo. If they were in the Argo repo, then I'd be looking at these there; it would pull them in and look for that image tag. So what we're going to do is go over to my browser, where I have all these images now. Let me do vote first, because that's the file I have up. All I need to do is go get the new tag and copy it. Remember, this tag right here is generated fresh every single time the image is built, so it's a unique tag, which is what we want. Then, if I go back over, I can just add that right there. I need to do the same thing for the result app, and because each one of these is built in its own job, it's not going to be the same tag; each image has its own. So I delete out latest, and then the worker. If I did all this correctly, I save these, commit them ("moving to new version of image in production," or whatever your commit message is going to be), and push those changes. And since we've turned on auto sync... you'll also notice over here in the browser that we've got Actions running, but again, the only action is the linting, and it may have already happened. Yes, it already ran; I was too slow. So if we go back here, a simple check is just to look at the pods, and yep, 23 seconds old. If we describe the pod, we can see it's using that image tag. So all we did was commit on the main branch of that repo, and it changed. Now, where do you go from here with this basic setup? Well, the first thing, like Zolt and I were talking about in chat, is to use the Argo CD documentation to describe Argo declaratively and add the app that way, so that the repo for Argo has the configuration for Argo itself: you set up the auto sync, you set up anything else in Argo where you don't want the defaults, you point it to the app in there, and whatnot. The next thing you'd do is take the Kubernetes manifests we've been using and turn them into Kustomize templates or a Helm chart. You could leave them in the same repo; it's a mono repo, so we can put those in there. We'd leave the image names generic in those, maybe defaulting to latest or whatever, so it works, but it won't be the version you're controlling. Then what you do is, in the Argo CD repo, you override that templated image setting (depending on which templating engine you're using), so you can control it for that specific Argo CD deployment, in that deployment repo. That way you could start having multiple clusters: a staging cluster and a testing cluster, or a dev cluster, and you could override that image value for each one of those clusters, or any of the other settings you want to override. Whether you run Argo in multi-cluster mode or one Argo per cluster doesn't matter to me. The idea is that you're controlling Argo and the template overrides separately from the app itself; the app repo stays the generic manifests, Helm chart, or whatever you want to use.
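With Kustomize, that per-cluster override is basically one small file in the Argo/deployment repo. A sketch under my assumptions about names and layout (the run ID tag shown is a made-up example):

```yaml
# overlays/production/kustomization.yaml (hypothetical layout)
resources:
  - ../../base   # the generic manifests on the app side

images:
  # pin the exact image for this cluster without touching the base YAML
  - name: ghcr.io/<you>/example-voting-app-vote
    newTag: gha-1234567890   # hypothetical run ID tag from the build
```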
And then you're overriding it for the specific deployment, maybe using special secrets for that deployment. You get my idea. Okay, so those are the things I would do next. After that, if I wanted a fully automated pipeline, I might add more testing into the GitHub Actions, and I might also set up Argo to actually watch the image tags automatically, based on a semver, so that it automatically deploys new images without me having to do a PR. I personally like the PR workflow for the teams I work with, just because it gives DevOps a chance; unless you've got developers who are also Kubernetes manifest and Helm experts and Argo experts, it gives the DevOps or ops person a chance to review before that next app version gets deployed. And the reality here is, you can tell your app owners, the people developing the code, to go ahead and make the PR that changes the image version to the one they want to deploy in the cluster, so they make the PR with the actual change. Then you use an owners file in GitHub, or GitHub permissions; there are ways you can do this with branch-based protection, where you ensure that someone from the owners file in the repo (maybe that's your DevOps team people) has to be one of the people to review and approve that pull request before the image gets deployed to production. My point here is that you don't have to have it fully automated; that's the final, final step. You can get almost all of this automated and simply use pull requests as the workflow methodology for how you deploy new versions, rather than a whole bunch of other custom commands like we were using today. The argocd commands aren't the best way to do this; the way to do it is with YAML and GitHub. All right, any other questions before we wrap up this final show of the year? Oh, thanks Bram, by the way; I noticed I didn't have the terminal up there, so I pulled it up earlier. Sorry I missed your comment. Conrad asks: so we will have two repositories, app and deployment? Yeah. Honestly, if I were to do this from scratch, and this were a real-world app (it's not today), what I'd probably do is have the three apps in their own repos: a result repo, a voting repo, and a worker repo, because they're all in different languages, probably written by different teams, so they'd be in their own repos. Then I'd have a separate repo just for the Kustomize or the Helm; whether you do Kustomize or Helm is up to you, but that's a separate repo. Today those YAML manifests are in the example-voting-app repo, but I'd probably want to split them out, and that's what Argo is going to look at. Argo doesn't even care about the app itself, because the app itself is just images stored in registries. What Argo really pays attention to is your Helm, or your Kustomize, or your plain manifests; those are the three ways I see the most, and there are lots of other templating systems Argo supports. So there's that repo, and then you're going to have your Argo repo, the one I created that's still empty. That repo is going to have your Argo configuration, and then any overrides to your YAML templates.
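Going back to that owners-file gate for a second: it's just a CODEOWNERS file plus a branch protection rule requiring code-owner review. A minimal sketch, with a hypothetical team name and path:

```
# .github/CODEOWNERS (sketch)
# any PR touching the manifests needs a review from the DevOps team
/k8s-specifications/  @my-org/devops-team
```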
Now, if you're going to use Argo, there are multiple ways you can use it. You could have that one Argo repo cover many different clusters, with many different overrides and all of that, if you're using Argo CD in a multi-cluster scenario. It essentially becomes a mono repo of Argo stuff that controls different Kubernetes clusters, and you can override the settings for each one. You could also split it up even more and have one repo per environment, essentially per cluster, and then on each of those repos you can set finer, more granular permissions. That's a much-bigger-team thing; it doesn't always happen that way. But you might also find that you want to test Argo itself: you might have the production cluster on one version of Argo, using its own repo, and maybe a permanent staging cluster with a newer, testing version of Argo, and you might have those separated out, because you've got to have testing for your infrastructure as well. So at the end of the day, you might have multiple Argo repos just so you can control different versions of Argo, different test setups of Argo, and all that. That's kind of up to you, but I think this two-repo approach is the absolute minimum; I wouldn't want to do this in one repo. Ideally, I'm not putting any of my Argo stuff in with the app, because Argo is very environment specific. Argo is designed for you to override your generic settings with environment settings for a specific cluster or set of clusters, and ideally you don't want those things hard-coded in the app repo, because the app repo is very generic; it can be used in any number of clusters, in many different environments. So I think this two-repo design might be the simplest setup, unless you go to Helm or Kustomize, and then I might say you need a third one. You might end up with the app repo, let's say a Helm chart repo, and then the Argo repo; that might be the more normal approach. And if you're deploying your cluster with infrastructure as code, then there'd be an infrastructure-as-code repo too. So you might end up with a Terraform repo to build the cluster, your app repos, your Helm repo for the Helm chart (or Kustomize, one of the two), and then your Argo repo on top of that. When I work with teams, that's typically where they settle. There's one team I'm working with right now: dozens of people, dozens of apps, and they have that setup. They have an infrastructure repo with all the Terraform and Ansible in one repo, using subfolders, so it's kind of a mono repo approach. Then they have all their app repos, and those have no Kustomize, no Helm, no Kubernetes YAML in them whatsoever; they have Dockerfiles, they're building images, and they might have Docker Compose files in them. Then the next phase is your Helm or Kustomize; they're using Helm, so they have Helm repos, one Helm repo per chart, where a chart is a specific set of containers that all run the app. And then they have an Argo repo, separate from that, which overrides any of the generic settings from the Helm chart. The Helm chart has a bunch of generic settings in the templates and defaults that are sort of general; they might work for local dev, they might work for a test setup. And then, per Argo deployment, they have overrides to change things.
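Summarizing that layering as a repo layout, just a sketch of one reasonable arrangement, with made-up repo names:

```
infrastructure/        # Terraform (+ Ansible) that builds the cluster itself
vote/ result/ worker/  # one repo per app: code + Dockerfile, no k8s YAML
voting-app-chart/      # Helm chart (or Kustomize base) with generic defaults
argocd-config/         # Argo CD Applications + per-cluster/env overrides
```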
Maybe they change the resource requirements, maybe they change the names of the password secrets, because in a different cluster they might be named differently, or whatever. So that's kind of how they approach it. What are we writing tests for in Argo? That's a great question, Robot; I don't know exactly. The way I've approached testing with stuff like Argo and Helm is I write it in GitHub Actions. As part of any PR that happens on the Argo repo, I'd have GitHub Actions doing linting, at least a YAML linter. You could also see if you can find an Argo linter; I've never actually looked for one. I use Super-Linter, and that lints so many things: it lints the Helm chart, it lints Kubernetes and Kustomize, all that stuff. Then what I would do is have your GitHub Actions set up a kind cluster (kind: Kubernetes in Docker) and deploy Argo and the app into that cluster. You can have all of that spin up in minutes; kind is basically a single-node Kubernetes cluster running in Docker, and it spins up in GitHub Actions. I'm trying to think... there's a YouTube video that shows a little bit of this example, and I don't remember where it's at, but I remember seeing one. You can set up kind very easily in a GitHub Actions workflow, have it actually deploy to the cluster that spins up, and have it deploy Argo as well, and at that point you're really just typing out the commands in the GitHub Actions, step by step. If that workflow works, then at the end, what you're really looking for is that it successfully deployed Argo, synced, everything's green, and all your Kubernetes health checks are green; you want everything healthy. So at the end of that GitHub Action, you're going to want some commands that validate everything is healthy, and if it is, it tears that whole cluster down and your PR gets a green thumbs-up. That's the trust you'd have: you'd know it at least passed linting, it passed deployment, it successfully deployed the app, and the app's health checks pass. That's probably a 20 to 30 step GitHub workflow. I don't have all the pieces of that combined, so it's not something out of the box that I could give you, but maybe that's something we do for this example; this example of mine isn't going away, so maybe we use it again in a future video. If you like this video, thumbs up, and I will work more on it; if it gets enough attention, I'll go back and add more lessons, and we'll see over time how much we can make it work on Kubernetes. "Also check out the Argo CD app-of-apps pattern." Okay, definitely check that out. Do any of my courses cover testing DevOps? No, my courses don't cover that, but it's something I'd love to add, and it would probably go in a separate course specifically about GitHub Actions, because GitHub Actions is kind of my workflow engine of choice, and that's where you're going to be doing all this work. You get into GitHub Actions and you do things like super-linting, building the images, and testing the images with very basic test commands; maybe you do an npm test if it's a Node app, or whatever the testing platform of your choice is, and you do all of that in GitHub Actions.
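A skeleton of that kind-based PR test, assuming the helm/kind-action for cluster creation. Step names and the timeout are mine, and a real version would add the app deploy, sync, and health-check validations as more steps:

```yaml
jobs:
  test-argo-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Create a throwaway kind cluster
        uses: helm/kind-action@v1

      - name: Install Argo CD into it
        run: |
          kubectl create namespace argocd
          kubectl apply -n argocd \
            -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

      - name: Wait until Argo CD is healthy
        run: kubectl -n argocd rollout status deploy/argocd-server --timeout=180s
```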
"Also check out the Argo CD app-of-apps pattern." Okay — yes, definitely check that out.

"Do any of my courses cover testing DevOps?" No, my courses don't cover that, but it's something I'd love to add, and it would probably go in a separate course specifically about GitHub Actions, because GitHub Actions is my workflow engine of choice and that's where you'd be doing all this work. In GitHub Actions you do things like super-linting, building the images, and testing the images with very basic test commands — maybe an npm test if it's a Node app, whatever your testing platform of choice is. I intend to make something like that available sometime in the future. What I have today: if you go to my GitHub and look up "github actions" in my repo list, you'll find at least two or three example repos covering Super-Linter and Docker builds — the basics. From there you just keep adding steps to those workflows: some will run test commands, some might run a Docker Compose file, wait for the containers to come up, and make sure they're available — maybe a simple curl or a Docker health check. You might even do that before you ever move to Kubernetes testing. But no, I don't have a section in my courses on that yet. It's a great question.

Romghal asks: "Thanks for your course — what do you think about the Docker provider for Terraform? Do you see any use cases?" It's funny — I'm sure there is a Docker provider for Terraform, but I never use it, because nobody I work with nowadays is deploying Docker servers directly; they're deploying Kubernetes clusters that happen to run Docker images. So I don't write Terraform to install Docker or to deploy containers — that's the realm of Kubernetes itself, plus something like Argo or Flux (or Fleet if you're on a Rancher deployment). I try to keep my Terraform as minimal as possible. My point of view is that we have all these newer tools — Argo, Helm, Kubernetes — to deal with the app, and I want my Terraform to focus on the infrastructure itself: build out the cluster and all my cloud assets through the AWS API or the DigitalOcean API, and then I'm done with Terraform. I'm not trying to use Terraform for everything on top of that, mostly because each of these tools has a place where it's great, and that's the best tool for that job. What I don't believe in is taking one tool and doing everything with it — what's that saying? If all you have is a hammer, everything looks like a nail. So I still prefer Terraform for cloud infrastructure. I used to be a SaltStack fan, but Ansible is now the way to do things on individual nodes — although ideally you don't have to do that anymore, because we have managed services and Kubernetes clusters where you never touch the nodes. So hopefully I never need Ansible; I can avoid it that way: use Terraform to deploy the cluster, and after that the automation takes over — that's where Helm, Argo, and Kubernetes take over for me.

Conrad says he'd really like to see that course. Okay, I hear you — I'd love to make it. Right now we're focused on the Docker Mastery and Kubernetes Mastery updates and adding more videos, so you'll see stuff from us about those courses before you hear anything about new courses from me, because I want to make sure the current courses are current and up to date.
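To make the "just keep adding steps" idea from the testing answer above concrete, here's a minimal sketch of a PR workflow that lints, builds, and smoke-tests an image with Compose and curl before any Kubernetes testing — the image name, port, endpoint, and compose file are hypothetical, and action versions are examples:

```yaml
# Hypothetical PR workflow: lint, build, then smoke-test with Compose.
# Assumes a docker-compose.yml in the repo exposing the app on port 8080.
name: build-and-smoke-test
on:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # Super-Linter wants full history to diff changes

      # Lints YAML, Dockerfiles, Kubernetes manifests, and more
      - uses: github/super-linter/slim@v4
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Build image
        run: docker build -t my-app:ci .

      # Bring the app up with Compose and make sure it answers
      - name: Smoke test
        run: |
          docker compose up -d
          curl --fail --retry 5 --retry-delay 3 --retry-connrefused \
            http://localhost:8080/healthz
          docker compose down
```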
"What is the best way to install a Postgres cluster on Docker that can handle the list of tasks they posted?" Well, the best way to do that is to use the cloud — I'm being snarky, but I'd really recommend you don't run your own Postgres cluster. If you do, you're probably going to want to run it on a Kubernetes cluster with an operator, so go look up a Helm chart for Postgres, because that's going to cover more of the advanced stuff like HA and setting the HA up automatically. Docker doesn't really have anything built in to understand the relationships between containers and do the things a Kubernetes operator would do. Swarm has some basic features, but Swarm itself can't do this for a Postgres cluster, so if you're only going to be doing Docker, you'll have to find some third-party tool that manages it for you. Kubernetes solves this with the operator pattern: you can get custom kubectl add-on commands that let you control the database cluster — upgrade it, back it up, all of those things. With Docker you'd have to find someone who builds that tooling separately, and in the background that tool would just be running a bunch of Docker commands for you. I don't know of one, because I stopped building Postgres clusters in Docker three or more years ago. So I don't have anything for you there — sorry.
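If you do go the Helm route, here's a minimal sketch of what per-install values for a replicated Postgres chart could look like. The keys below reflect my understanding of the Bitnami postgresql chart, but treat them as assumptions and verify against the chart's own values.yaml:

```yaml
# Hypothetical values file for the Bitnami postgresql chart.
# Install with something like:
#   helm repo add bitnami https://charts.bitnami.com/bitnami
#   helm install mydb bitnami/postgresql -f values.yaml
architecture: replication       # primary plus streaming read replicas
readReplicas:
  replicaCount: 2
auth:
  existingSecret: mydb-credentials   # keep passwords out of the repo
primary:
  persistence:
    size: 20Gi
```

For the operator-style lifecycle features (failover, backups, upgrades), a dedicated Postgres operator is the stronger fit; the chart mainly handles install-time topology.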
Rodrigo asks: "Any tips for adding a Windows node to an on-prem Kubernetes cluster, other than 'don't do it'?" No, I don't really have tips — I mean, you can totally do Windows now; the last couple of years of Kubernetes versions are all supported on Windows Server. The biggest tip from me is to use the newest version of Windows possible. Do not use Windows Server 2016 — that's been replaced — so always use the latest official Windows Server release, because Windows functionality with Kubernetes is often tied to the version. Unless I'm a little outdated, they have the major releases like Windows Server 2019, and then they have the faster channel releases; if you're on-prem you probably can't use the fast channel, and you'll want upgrade functionality, so you'll need the full versions. Whatever the newest one is — I don't actually know whether Windows Server 2021 is out; the last one I looked into and played with was Windows Server 2019 — you can add it to a Kubernetes cluster. The documentation is right on the Kubernetes website. You'll be installing the kubelet on the node, and then you'll need to make sure your applications only land on the Windows server when they're Windows binaries — so you've got to use your selectors for that. Hopefully that helps.

"Kubegres is a Postgres operator for K8s — it's great." I have not used that one, but I will totally check it out. I mostly default to the Helm chart — let's see, hold on — I don't remember exactly which Helm chart I use, but I think it's the Bitnami one. I need to get into that operator, because I get this question more often than I have a good answer for it. Thanks — you've all been a tremendous help today, thank you so much.

All right everyone, Happy New Year! I'll see you next week on the show. Next week I'll have a guest from Cycle.io — we're going to learn about Cycle, a newer cloud-native startup, and see what they're all about. I hope you have a great rest of the year. Again, get that Kubernetes challenge done for DigitalOcean and you can get some swag. See you soon — ciao, everyone!
Info
Channel: Bret Fisher Docker and DevOps
Views: 68,117
Keywords: docker, iac, devops, kubernetes, developer, bret fisher, devops automation, cloud native, distributed computing, devops course, kubernetes tutorial, devops tutorial, argocd, gitops, digitalocean, argo, automation, github actions, github actions workflow
Id: h4FzcvPGVOo
Length: 187min 4sec (11224 seconds)
Published: Thu Dec 30 2021