Kustomize: Deploy Your App with Template Free YAML - Ryan Cox, Lyft

Video Statistics and Information

Captions
Welcome everyone, thanks for coming. My name is Ryan, I'm a software engineer at Lyft. I work at the office here in Seattle, so it's great to be at a local conference, and I'm surprised by this amount of people. I should maybe start with a disclaimer. You can see the talk title, you know what that is. I ran into a friend as I was coming in that I haven't seen in a couple of years, and he said, "Your talk looks interesting — does this mean I don't have to use YAML?" So I want to make a disclaimer: it's whatever the opposite of that is. If you need to get up and leave right now, no problem, no judgment. This talk is encouraging you to embrace the YAML, and we're going to talk about how you can deploy your applications more easily, and more customized, with this open-source tool.

The structure of our talk: we'll take a look at the landscape — as with most things Kubernetes, it's sometimes difficult to place this tool in the overall context of all the other tools out there, to understand what problems it solves and what problems it doesn't solve. We'll look at some features briefly, and then we'll dive into a demo, and we'll spend most of this talk doing a demo that is actually kind of non-trivial. We'll deploy a piece of software from an upstream, we'll start layering in various configuration, and we'll push it out to GKE and use a bunch of the cloud features there as well. So that should be good — hopefully the Wi-Fi holds.

There are lots and lots of different tools out there today for deploying our applications, and you've probably been to talks at this conference on one or more of them. Helm is probably the most popular. Maybe you've seen what the folks over at Pulumi are doing — a different take on things that's quite interesting as well. There's Ksonnet, and there are many, many more. When looking at them, it's useful to develop even just a coarse-grained taxonomy of what these tools offer. Some of them offer dependency management: they tell you that if you're going to install package A, you need to install packages B and C first. Some of them provide application descriptors, with URLs to upstreams and different metadata associated with the application. There are application discovery systems, things like ChartMuseum, where you can go out and actually search for an application. There are dashboards that live in your cluster itself to tell you the current state of applications and their versions. There are tools that let you manage the lifecycle — deploy specific versions, roll back, and control the lifecycle of your applications. And then there are capabilities that allow things to be customized, adapted, or tailored to your specific needs. That, in fact, is the domain of Kustomize. It is a very targeted tool; it does not try to accomplish all of those other things. It's really doing one thing: allowing you to tailor your YAML files to your specific environment. So it's very useful in many developer workflows, and it can find a very natural place in some CD pipelines as well.

This is the description from the README on the Kustomize GitHub page: it lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is. That's really a key idea that we'll come back to; we'll parse it and really understand what it means. A very simple invocation is shown below: you run the CLI tool from the command line, kustomize build — it looks in the directory at the contents of the files, pulls those in, processes them, and outputs the result to standard out. A very common flow is simply to run this tool and pipe it, as I'm doing in this case, directly to kubectl apply, and you will immediately have those changes applied to your cluster.

A new configuration file is introduced by the Kustomize tool: kustomization.yaml is basically what parameterizes the tool. You may have some existing YAML descriptors that are ready to be deployed into a Kubernetes cluster; you can drop a kustomization.yaml in adjacent to those other files and begin to define new behavior that will be performed during the build process. In this case it's pulling in two resources — this assumes there are a deployment and a service YAML adjacent to this file on the file system — so when build runs, it pulls those files in, stamps this kubecon namespace onto all the resources in those files, and emits the result to standard out. That's a very, very simple use case.
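To make that concrete, here is a minimal sketch of what such a kustomization.yaml might look like — the deployment.yaml and service.yaml file names are assumptions standing in for whatever descriptors actually sit in the directory:

# kustomization.yaml — minimal sketch; field names as they existed
# in Kustomize around the time of this talk
namespace: kubecon
resources:
- deployment.yaml
- service.yaml

# Typical usage from the same directory:
#   kustomize build . | kubectl apply -f -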
You may be thinking back to the 70s here, and the Unix philosophy of doing one thing well — that is very much the ethos of Kustomize. It's, again, a very targeted tool that allows you to manipulate YAML; and not just any YAML, it manipulates Kubernetes API YAML specifically.

Kustomize has its roots in a paper written by Brian Grant, "Declarative Application Management in Kubernetes." If this is a general area of interest to you, even if this particular tool is not, I would strongly advocate that you go read that paper; it's quite interesting. It's available on GitHub as well as in a Google Doc — just Google for it and you'll find it relatively quickly. It talks about the motivation, it talks about other tools in this space, and it talks about the general concept of splitting apart the functionality of these monolithic tools and exposing things like Kustomize as very purpose-built utilities.

Let's look at some features. Once again you see a kustomization.yaml file and a pod YAML. The pod YAML should look very familiar to everyone in the audience — it's a super simple pod spec, ready to be applied to a cluster; all it would do is spin up this busybox-based container. I'll point out that the pod is called snooze, if you look at metadata.name. The kustomization.yaml sits adjacent to it on the file system — you'll notice at the bottom, resources: pod.yaml — so it pulls that in, and then it will stamp onto it these custom labels and custom annotations. Further, it will apply this pre-production- prefix to each of the resources processed, and it will drop them into a customer-preview namespace. So you run kustomize build with those two files on the file system, and you'll see something that looks much like this: it's very similar to the original YAML, but it's been augmented with the annotations and labels; you'll see it's reached down into metadata.name and altered the name by prepending the pre-production- prefix; and it's placed the pod in that namespace.
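A sketch of that kustomization.yaml follows; the actual label and annotation values aren't visible in the recording, so the ones below are invented placeholders:

# kustomization.yaml — sketch; label/annotation values are placeholders
namePrefix: pre-production-
namespace: customer-preview
commonLabels:
  app: snooze
commonAnnotations:
  owner: example-team
resources:
- pod.yaml

After kustomize build, the pod comes out named pre-production-snooze, carrying those labels and annotations, in the customer-preview namespace.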
A super common workflow enabled by Kustomize is this notion of overlays. Perhaps a team within your company, or something open-source you've pulled down, or someone you're collaborating with provides you a set of descriptors. It's almost what you need, but you need to customize it — to tailor it for each of your various environments in some specific ways. Kustomize enables this via overlays: a base can contain most of your descriptors, and then in each environment you can specify the different changes you want to apply to it.

Let's break down what that looks like in practice. I'm showing an example here of three directories on a file system: one called base, another called production, another called staging. Base contains that same simple pod specifier along with a base kustomization.yaml, and then production and staging are where the per-environment changes are applied. What connects them is the bases specifier that you see in the lower pink box, where it says bases and refers back up the file system to base. When you run Kustomize on this, it does what you would expect: it pulls in that base and applies whatever changes you've defined — not just a prefix; you can apply anything Kustomize can do to whatever was contained in the base. And it has an interesting recursive property: you can take these simple primitives and compose very sophisticated capabilities and behaviors out of them.

You might also want to do a mixin configuration. At Lyft we have a sophisticated system that uses this mixin workflow, where basically you've got a bunch of different modules and many different deployment configurations that are supported. For example, what I'm showing here is a UI module and a backend module, and you might want to deploy a headless configuration, where you've got no UI, or maybe the complete configuration, where you're saying take all my modules and deploy them. Kustomize supports this. It's very similar to the other example we looked at — at the bottom this is again referencing bases — but you'll notice that you can pull in more than one. By allowing you to pull in more than one base, you get this interesting mixin behavior. In this example, I've got a base modules directory that contains all the modules; the complete configuration pulls in all of them, and headless pulls in just the backend. When you run kustomize build, it does what you would expect: it pulls all those things in, orders them together, and outputs the result.
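For illustration, an overlay kustomization in that layout might look roughly like this — using the bases field as it existed in Kustomize at the time of this talk (newer versions fold this into resources), with placeholder module directory names:

# production/kustomization.yaml — overlay sketch
bases:
- ../base
namePrefix: production-

# A mixin-style overlay simply lists more than one base, e.g.:
#
# complete/kustomization.yaml        # headless/kustomization.yaml
# bases:                             # bases:
# - ../modules/ui                    # - ../modules/backend
# - ../modules/backend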
When working with Kubernetes, especially early on, you were probably surprised when you made a change to a config map, pushed it out, and nothing happened. Kustomize can help with this as well. There's a notion of a configMapGenerator built into Kustomize that enables you to externalize the contents of your config maps, so they can sit outside of YAML files — for example, in files on the file system. Here, in the configMapGenerator within kustomization.yaml, I'm saying db-params is going to be defined by everything contained in this list of files, in this case db.toml. That's one flavor of configMapGenerator; the other is the specification of literals, where I've got literal key-value pairs specifying JAVA_HOME and JAVA_TOOL_OPTIONS. So what does that look like? You run kustomize build on that, and it outputs something like this: db.toml is pulled in, and you'll notice that the base name of the file, db.toml, becomes the key in the config map, and the value is the unstructured contents of the file. You'll notice down below that the literals are converted directly over to their corresponding key-value pairs.
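A sketch of that generator section — the second generator's name and the literal values are reconstructions, not verbatim from the slides:

# kustomization.yaml (excerpt) — configMapGenerator sketch
configMapGenerator:
- name: db-params
  files:
  - db.toml            # file basename becomes the key; file contents become the value
- name: java-env       # hypothetical name for the literals example
  literals:
  - JAVA_HOME=/opt/jdk
  - JAVA_TOOL_OPTIONS=-Xmx512m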
Then, maybe if you've got a sharp eye, you noticed that the name of the config map was altered. The idea is that a suffix is appended to the config map name based on a hash of the contents of the config map. Any change to the contents will rotate the name of the config map — and not only will Kustomize do that, it will find every other reference you have to that config map within your specifiers and update those linkages. The net effect is that when you change something in, say, db.toml, run kustomize build again, and push that through kubectl apply, it will trigger a redeploy for you. We'll look at examples of this, so if you're not totally grokking all of it as we go, don't worry.

Another super common use case is patching. Again, maybe you've been provided a set of descriptors and you want to reach down into the tree and add or change some content. Kustomize supports the ability to apply patches. On the right-hand side, in the kustomization.yaml, we have a list of patches — in this case just one entry, called request-limits. Recall that the pod was very simple: it had no resource specifiers; we were not specifying any resource requests or limits. What this does is provide a fragment — enough of a fragment to identify where in the target YAML you want to apply the patch — and then it is merged in via strategic merge. In the output you'll see the contents of that original pod spec unioned with those limits and requests. We'll see more examples of this as well.
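A sketch of what such a strategic-merge patch could look like for the snooze pod — the resource values and container name are placeholders:

# kustomization.yaml (excerpt)
patches:                   # later Kustomize versions call this patchesStrategicMerge
- request-limits.yaml

# request-limits.yaml — just enough to identify the target, plus the fields to merge in
apiVersion: v1
kind: Pod
metadata:
  name: snooze
spec:
  containers:
  - name: snooze           # container name assumed to match the pod name
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 128Mi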
Let's do a demo. The paper I referred to earlier by Brian Grant talks extensively about this fork/modify/rebase workflow — the concept that you can locate an upstream, find some interesting YAML descriptors located there, fork the repo, and then start dropping in kustomization.yaml, your patches, and your config maps alongside the original descriptors, without altering the original files, and then use Kustomize to deploy this out to your environment. That's exactly what we're going to do. We're going to use Gitea — it's kind of a GitHub clone that is actually surprisingly capable; it's an interesting product and I recommend you take a look at it. It's good for our purposes because they happen to ship a default descriptor. I forked the whole repo — if you go to my GitHub you'll see I've pinned that fork at the top — and I've created a couple of branches where I've started to layer in the various behaviors that we will compose and deploy out to GKE, and we'll see how this all works.

Ultimately we want to build up to a non-toy, non-trivial workload that looks something like this: we want to tap into some of the other cloud resources that are provided. Gitea supports a bunch of different backends. Our initial version will deploy with SQLite; a subsequent version will use Cloud SQL and a hosted Redis store. The first thing we'll do, though, is use persistent disks — the example that's provided by default actually wouldn't even work if we tried to apply it, because it's using some hostPaths that don't exist on my nodes. So that's our first goal: just get this thing working in GKE using persistent disks. Then we'll layer more complexity on top of that, and if we have time we'll show something a little fancier.

Let's talk about service meshes — I'll talk about Envoy; at Lyft we sprinkle Envoy on all our problems. I want to show an example of how you can use Kustomize to inject an Envoy sidecar into an existing pod spec. This is an interesting use case because maybe some of you are using mutating webhooks to do this sort of thing on pod deploy; this shows a different approach where, if you're doing GitOps, you could actually inject it into your YAML, check that in, and deploy it out. This is maybe an idea virus I'm hoping to put out into the universe and see fully formed by someone else. The general idea is we'll inject this thing and we'll access the app via port 9211, and see how that works.

OK, let's go to the demo. This demo is a little bit elaborate, so I'm hoping the demo gods are smiling on me. Let's take a look first at this repo. This is the repo for the application that we're going to deploy out to our GKE cluster, and you'll notice down here in contrib they've got this YAML file. It's pretty vanilla; nothing too surprising. They define a namespace and — hopefully you can see this — a deployment; that deployment has a pod spec with a single container in it and some volume mounts, root and data volumes. These will become important to us as we start to adapt this thing. They expose a couple of different ports. And here's the first problem: this hostPath usage. This is bad; it's going to cause us some problems. We don't want hostPaths, we want to use persistent disks. So that's what we're going to start with.

Let me show you my environment out in Google Cloud. I've got this kubecon cluster with three nodes on it — a few vCPUs and a few gigs of memory — and that's what we're going to deploy all this stuff onto. Let's go into the file system. I'm in the fork, on master right now; this looks just like what we were browsing through on GitHub. Now I'm going to check out demo-step-1, which has a few more files in this directory. Again — I've mentioned this twice, but I just want to reiterate — the original Gitea YAML file is untouched. We leave it there and drop other files alongside it on the file system to compose this behavior for Kustomize during the build phase. We never touch it, and in fact later on we might want to come back and rebase our fork to pull in upstream changes — and that has the interesting property of never producing a Git conflict, because we never touched the original descriptor. There's some real power in this model.

So we've dropped in a kustomization.yaml, and we're telling it: put everything in this namespace; pull in the original YAML descriptor alongside a new one that I've provided, which has some persistent volume claims that refer out to those disks; and apply two patches — we'll take a look at these in more detail in a second through a slightly easier-to-see interface. One patch uses strategic merge; the other — and this is a bit of a criticism of this tool — is rather confusingly named patchesJson6902. A very bad product name; I think there's been some discussion about renaming it. The idea is that it refers to RFC 6902, the RFC that tells you how you can apply a patch to JSON in very sophisticated ways. And we need those more sophisticated ways here — let me pop back to the diff to show you why. I'm showing you a delta between master and the demo-step-1 branch — the same thing that's on the file system, just isolating what I've added. There's the kustomization file we just looked at. This volume-mount patch is the piece that's needed to get a little bit fancier: instead of a strategic merge that just adds new volumes, think of it as reaching down into that original YAML and snipping off, with a pair of scissors, all the volumes that were specified, and replacing them with a new set of volume specifiers. We need that because we don't want to do this additively — we want to replace. The format supports providing an op type — we can do replace — and then this XPath-like path reaches down into the object graph, says operate on volumes, and replaces what's there with what's provided below. So again, the goal we're building toward is to use these persistent disks, and this patch references the PVCs that we've defined. We've made one other small change: another patch switches the original deployment to a strategy type of Recreate. I did this for demo purposes — I wanted the original persistent disks to stay put; I didn't want new pods spun up against fresh disks. I wanted to retain the disks, destroy the pods, and recreate them. Hopefully everybody's with me on this.
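A sketch of that wiring, with assumed file and resource names (the upstream deployment name, apiVersion, and volume names may differ from the real Gitea manifest):

# kustomization.yaml (excerpt)
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: gitea           # assumed name of the upstream deployment
  path: volume-patch.yaml

# volume-patch.yaml — RFC 6902 ops; replace the volumes list wholesale
- op: replace
  path: /spec/template/spec/volumes
  value:
  - name: root
    persistentVolumeClaim:
      claimName: root
  - name: data
    persistentVolumeClaim:
      claimName: data

# recreate-patch.yaml — strategic-merge patch for the rollout strategy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea
spec:
  strategy:
    type: Recreate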
Over on the file system: kustomize build, and everything goes to standard out. Let's make that look a little nicer so we can look through it and see what's going on. Here are our new persistent volume claims — nothing too surprising there. Here's our service definition that exposes the ports, SSH and HTTP. Here's our deployment specifier: you can see the Recreate patch has been applied into the original deployment, and here we know our fancy patch worked, because the volumes have been replaced with the persistent volume claims — there's no reference to the original hostPath. And then we had to do one other thing: there's a new StorageClass defined. This is because we're on GCP — we're saying use, where is it specified, yeah, here it is: use the GCE persistent disk provisioner, and retain the disks that are created. That's for demo purposes, so we don't blow away those disks; they'll just stay there rather than being recreated.

To kick us off, I actually have this working here. I'm just going to port-forward onto port 3000, and the way we'll know it works is if I hit refresh here and we see a screen that looks — good, huh — GitHub-like. There we are. OK, we're good: we can sign in and everything looks good; this checks out. That worked, that's fantastic — but let's convince ourselves it really did work. I'm going to pop back into the Google console and look at storage — I'll zoom in a little for you — and you'll see I've got data and root. Those are the persistent volume claims that were specified in our YAML, kicked out to the GCE provisioner, provisioned up, and mounted back into the pods that we deployed. We can take a closer look at the pod itself: clicking in, we've got a single deployment, that deployment has a single pod, and drilling into the pod you see some details — things look good and unsurprising here. Here are our services exporting those same ports. OK, good. I think that's good for step one.

But remember what we're navigating toward: we want to do fancier things. We want to use not SQLite but a real database, namely Postgres — so let's point it at Cloud SQL — and we want to use a hosted Redis instance as well, since Gitea supports using Redis as a caching layer. To store our credentials for the database, we want to use KMS on GCP. We do all those things in the next branch, so let's check out step two.

I've just checked out step two. It looks very similar, with a couple of extra files you might notice. Probably most important for this part of the demo is app.ini. Gitea allows you to configure it in very sophisticated ways with this initialization file — this has nothing to do with Kubernetes, nothing to do with Kustomize; it has everything to do with the application we are deploying and tailoring to our specific needs. So there's that, and then there's pgsql.secret: an encrypted binary file that is decrypted at kustomize build time, and I'll show you how that works. Again I'm going to pop out to a diff — this time diffing step one against step two, isolating what changed between the steps of our demo. Coming back down to the kustomization: what we're doing in this step is really mostly configuration. We're using the configMapGenerator that I referenced earlier, saying pull app.ini off the file system and map it into a config map called app-config. Then we're using something called secretGenerator, which I haven't talked about, but it's very similar to a configMapGenerator: it lets you run a command, take the standard output of that command, and load it into a Kubernetes secret. This one calls gcloud kms decrypt, and you can see pgsql.secret — the encrypted binary file sitting on my file system, checked into this branch — is the input; we specify the key ring and the key. Running this emits a secret, just a few characters, that is the password to the database, and it gets pulled into a secret key called password. We'll see that in a couple of different places; that's effectively what's going to happen when we deploy this.

This deployment patch specifies a couple of things — nothing too fancy. We're telling the Gitea app to go look for its custom app.ini in that directory, and we're saying inject PGPASSWORD into the environment from this secret ref, postgres-password. The Postgres library that this system uses happens to look for PGPASSWORD, so it's a convenient way to get credentials into the system without putting them in files in plaintext or anything like that — that just happened to work out easily here. And we're mounting that config map in as well.
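A sketch of those generator and patch pieces — note that the commands form of secretGenerator existed in Kustomize at the time of this talk but was removed in later versions, and the key ring and key names below are placeholders:

# kustomization.yaml (excerpt)
configMapGenerator:
- name: app-config
  files:
  - app.ini
secretGenerator:
- name: postgres-password
  commands:
    password: "gcloud kms decrypt --ciphertext-file=pgsql.secret --plaintext-file=- --location=global --keyring=demo-ring --key=demo-key"

# deployment-patch.yaml (excerpt) — strategic-merge sketch for the env injection
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea
spec:
  template:
    spec:
      containers:
      - name: gitea
        env:
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-password   # Kustomize rewrites this to the hashed name
              key: password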
About the app.ini — I'm not going to go through it in detail; I want to point out just two or three things. One is the default theme, arc-green. You'll notice the UI was white before, so if things work in a moment here, I'm going to apply this out to the cluster and hit refresh, and the way you'll know it worked is if the UI goes into this dark mode — cross your fingers. Then I specify Redis here: this is the address of a hosted Redis instance I've got in that GCP account. And if we scroll a little further, we see it's not using SQLite: we're saying use Postgres, and that's the IP for the Cloud SQL database I've got in my account.

So: kustomize build — this looks just like the example I gave — and we pipe it straight into kubectl apply. kubectl get pods: this is triggering a redeployment, so that pod is terminating and new ones are spinning up; it should say Running any second now... this is a dramatic pause... Running. OK, you didn't think it was going to work. I'm going to port-forward that, pop over here, and if I refresh, this white theme should go dark — that will tell us everything worked: the theme and our database connectivity. And there we are, in dark mode. Let's sign back in. This is slightly different because it's now actually pointing at the database, using state from my previous invocations, but we can convince ourselves it's working by looking at the application's configuration itself. Zooming in a little, you can see the application telling us, in its admin config page, that it's using Postgres — so we know the Postgres piece worked — and it tells us the Redis piece worked as well.

If we hop over to the cloud console, we can see some other interesting things. I believe I showed you these before — the PVCs that were provisioned up, the volumes. We can go back and look at the config maps: this stage of the demo actually went out and created them. Remember, the config maps get unique names — so what do you notice here? They're piling up, right? Because every time they change and get deployed, a new one gets created. There are a couple of ways of addressing this: you could build some sort of garbage-collection thing, or you could be brave and use kubectl apply --prune, if you're familiar with that — you give it a label and it will clean things up for you. But I'm letting them stack up, more for demo purposes than anything. I'll click into this Postgres secret, and we should be able to see something that says password here... there's password. Hopefully you can see how this secret threaded its way all the way through: it was pulled off the file system, gcloud kms decrypted it, it was pulled into the YAML descriptor, and when we deployed, it was pushed out and provisioned as this new Kubernetes secret, which is then mounted into the environment of the application and used to authenticate against the database. OK, that all looks good.

Looks like we have time, so let's do the third step. Demo-step-3 is again very similar. On the file system you'll notice the Envoy pieces I mentioned: an envoy patch — the piece that says add a new container to that pod spec — and an envoy.yaml, which is the configuration for Envoy itself; that one is not Kubernetes-related, not Kustomize-related, it's what Envoy needs in order to forward things around. Let's take a look at the deltas — this compares branches two and three. Coming up to the top: the envoy patch is the piece that actually injects the container. The patch says reach down into this deployment and add, via strategic merge, this Envoy container — you'll notice it's envoy-alpine — and expose port 9211. Remember from our diagram what the goal is: we come in on 9211, Envoy forwards us across, and we get all the benefits of Envoy. What are the benefits? This is the building block of a service mesh — if you want a service mesh, you're going to need something like this — but even independent of a service mesh you get some interesting properties. You get very rich, uniform observability by injecting sidecars like this: you could emit stats from a configuration like this to your observability systems, and similarly with logs. You even get interesting behavior like tapping into Envoy's rate-limiting or circuit-breaking capabilities. There's really rich potential in implementing this sort of thing.
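A sketch of what such a sidecar-injecting strategic-merge patch might look like — the deployment and container names, image tag, and config-mount wiring are all assumptions:

# envoy-patch.yaml — strategic-merge sketch adding a sidecar container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea
spec:
  template:
    spec:
      containers:
      - name: envoy
        image: envoyproxy/envoy-alpine:v1.8.0   # assumed tag; pin whatever you test with
        args: ["-c", "/etc/envoy/envoy.yaml"]
        ports:
        - containerPort: 9211
        volumeMounts:
        - name: envoy-config                    # e.g. generated from envoy.yaml via configMapGenerator
          mountPath: /etc/envoy
      volumes:
      - name: envoy-config
        configMap:
          name: envoy-config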
And then here's the envoy.yaml. This is about as simple as you could possibly get for an Envoy config — Envoy configurations can be quite complicated. This one basically says: listen on all IPs on port 9211, and forward those connections to port 3000 on localhost. Since these containers are in the same pod, localhost forwards over to the existing application container. Hopefully that makes sense.
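Envoy's configuration format has changed a lot between versions; in the v2-era static style of late 2018, a minimal listener doing this might look roughly like the sketch below — treat it as illustrative, not the file from the talk:

# envoy.yaml — minimal TCP proxy sketch: 0.0.0.0:9211 -> 127.0.0.1:3000
static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 9211 }
    filter_chains:
    - filters:
      - name: envoy.tcp_proxy
        config:
          stat_prefix: ingress_gitea
          cluster: local_app
  clusters:
  - name: local_app
    connect_timeout: 1s
    type: STATIC
    hosts:
    - socket_address: { address: 127.0.0.1, port_value: 3000 }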
There's a reference to the patch in the kustomization.yaml, so let's just apply it. I've applied it, and I'm watching the pods — terminating, creating. The way I'll know this worked is if I go back to that browser window and change the port from 3000 to 9211: if things work, we know we're transiting through the sidecar and hitting the original application. Down here I've been port-forwarding 3000, so I'm going to change that to 9211... OK, my port-forward is up, I can go over here, change this to 9211 — and the sidecar works. We know it's working, because otherwise we would just be getting errors here.

So hopefully that gives you a taste of some of the primitives that exist in Kustomize. There are lots of places you can go for additional info. The original GitHub repo is probably the canonical place to start. The declarative application management doc is really good. SIG CLI goes deep into Kustomize and its future; there are some KEPs on this. On the Kubernetes Slack there's a Kustomize channel, and Jeff, one of the key maintainers, has also recorded a talk on this that you can find on YouTube. There's interesting integration with other tools as well: there's a company called Replicated doing some cool stuff with this; Kubebuilder automatically kicks out support for Kustomize; and Skaffold — a development tool for your development workflow, your inner loop — has support for Kustomize deployments as well. So lots of other places to go.

Quickly, it's worth mentioning the road ahead: kubectl is getting support for Kustomize — basically, Kustomize is being merged into kubectl. You can read the KEP on this. I was literally just talking with the maintainers before I walked up on stage, and they were telling me there are PRs out there right now that will probably be merged in a few days' time, so this should land in Kubernetes v-next. And I'd encourage you to think about the primitives versus the interesting workflows, and to separate those things: Kustomize is a simple command-line tool, but it implements some very interesting primitives that can be composed into very sophisticated workflows. So go take a look at it. If you want to find out more or talk about it, I'm around the conference all week. Thank you very much. We have about three minutes for questions, if there are any. Yeah, go ahead.

So the question is: there are different schools of thought — there are charts and there's YAML — which do I use? My answer is: why not both? Google around for some examples people have created of taking Helm output and using Kustomize on it. The challenge you'll have is that to use Kustomize in the way I've just described, you have to find upstreams that publish YAML descriptors, and those are few and far between. What's much more common is charts that output YAML — so I'd encourage you to look at fusing those two things.

On ordering — yes, it does know about that. If you dig into the source code of Kustomize, it has special sorting that will create things in the order necessary. Other questions? Yes. So the question is: I made the statement that when you rebase your fork you won't get merge conflicts, which is true, but things might still not work — is that your point? It depends on when you run kustomize build: do you run it at deploy time, when some end user clicks deploy, or do you run it and check the results into Git? If you check the results into Git, then you can diff the results between invocations of kustomize build. Hopefully that answers your question.

On the secret: I didn't show it, but basically the Kubernetes secret was there unencrypted — it was just base64-encoded. So yes, valid point — secret handling is a larger topic. I suspect you'd use something like Vault if you're really doing this for real.

Is there a validation concept? It will error out if it can't resolve one of the dependencies, but again, this is meant to be composed with other tools, so I'd encourage you to look at something like kubeval to validate the actual output, or some generic YAML validator as well.

OK, looks like we're at time. You can find me down here if you want to talk more about it. Thank you very much. [Applause]
Info
Channel: CNCF [Cloud Native Computing Foundation]
Views: 22,662
Id: ahMIBxufNR0
Length: 35min 49sec (2149 seconds)
Published: Sat Dec 15 2018