Kubernetes Ingress: Your Router, Your Rules by Gerred Dillon, Deis

Captions
Okay, I think we're going to get started. Thanks, everyone, for coming. Today I'd like to talk to you a little bit about Ingress, and the title of this talk is "Ingress: Your Router, Your Rules." I'm Gerred Dillon, I'm here with Deis, and if you want to learn more about any of this, or about Helm, Steward, or Workflow, come talk to us at our booth. I spelled my Twitter handle wrong on the slide, so just drop off the last two letters.

What I really want to talk about today is not just Ingress; I want to talk about container networking at scale, and I'm going to break that down and get a little specific. By container networking I don't just mean your web services. We all think of ourselves as running websites, APIs, and all sorts of front-end services, and that's great, these are all very normal things. But I'm also talking about networking for different kinds of workloads. One organization out of Amsterdam regularly deploys hundreds of services for its students. I'm also talking about deploying games on top of Kubernetes, game servers, Minecraft servers; if you look at the Helm charts, we've got Minecraft, Factorio, a whole bunch of stuff. And I'm talking about games at scale: if you're familiar with Niantic's work with Pokémon GO, they're built on top of Google Cloud Platform and running at huge scale on workloads that aren't the traditional services we think of when we go to deploy on Kubernetes.

So what do I mean by scale? I mean a little more than "I have a billion pods across a thousand nodes." Being able to scale out is a very important Kubernetes goal, but that's mechanical scaling; especially for a stateless application it's pretty easy to do. I just scale my deployment up to a billion pods and I'm good to go, as long as I have the nodes for it. What I'm really talking about here is the logical scaling of all of our different services. It should be really easy to add a new service into production without facing a lot of pain in doing so, and we should be able to scale our services and deployments, our different notions of our application, just as easily as we scale our pods.

I want to take a break before we get deep into code and start tearing apart the whole point of this talk, which is ingress controllers, and talk a little bit about Kubernetes concepts and how you might see those concepts today. Who here has used Ingress at all? Great, a lot of you. Who here has written their own ingress controller? Okay, a couple of people; we'll get there. Who thinks very much in this model right now: if I need an L3/L4 load balancer, I'll stand up a Service and I'll get that, and if I need an L7 HTTP load balancer, or TLS, or virtual-host routing rules, all that, I'll reach for a level-seven one (this is probably a lot of Google Container Engine users). If you've used Kubernetes much, this is probably how you're thinking about the whole system at the moment, and I'd like to change that for the purposes of this talk.

So I want to revisit some of these concepts. I'm sure you're all sick of looking at manifests, and sick of seeing pods for the fifteenth time today, but I want you to look at these in a slightly different mindset than in a lot of the other talks you might go to. These rules expire at the end of this talk, so don't go home and start quoting what I tell you here.
So I want you to think of a pod as a single internal resource, and I think you can think of it that way no matter what: we have one static bundle for our site, we have one API server process, we have one Flash media server process representing a video stream, we have a single Redis process, we have a single Postgres instance. What we're really talking about for this talk is mapping a virtual IP to a singular resource, and it's important that this set of containers is entirely isolated as one unit; it really only thinks about itself, and it doesn't care how traffic gets to it. If something comes in on the IP and kube-proxy gets the traffic there, I don't care beyond that. And because you haven't looked at enough manifests today, here's a pod manifest, not fully filled out, but you get the gist: it's just a Redis pod, and it's pretty easy to fill in. That's not the interesting part of this talk.

Here's where things start to get weird. Who here just generally throws type LoadBalancer on their service and calls it a day? A couple, okay. Yeah, when I'm testing things I do too. But for what we're talking about today I really want to get away from the idea of "let's just slap a load balancer on it and be done." I want to start thinking of a Service as a semantically related set, because that's really what it is. If you think about how labels and selectors work, you have, you know, your phone application, your API server, whatever, and you have a virtual IP that's selecting over a set of pods matching a certain set of labels. And this is where things change in the Kubernetes world: a Service isn't necessarily concerned with routing. Some people use it that way, some people expose external traffic via a Service, and that's great, it has its uses, but a Service is really that semantic set, a singular grouping that's a representation of our resource, and it cares about every member of that set. And again, lots of manifests, so here's that service.
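The slides themselves aren't captured in the transcript, so as a rough stand-in, here is a minimal sketch of the kind of pod and service manifests being described, using an illustrative Redis example; the names, labels, and image tag are assumptions, not the speaker's actual slides:

```yaml
# A single Redis process represented as one pod: the "single internal resource".
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: redis
spec:
  containers:
  - name: redis
    image: redis:3.2          # illustrative image tag
    ports:
    - containerPort: 6379
---
# A Service as a semantic grouping: a virtual IP selecting every pod that
# carries the app=redis label. No type is set, so it defaults to ClusterIP
# and says nothing about external routing.
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
```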
Now we're at what I really want to talk about, and that is the Ingress resource. An Ingress is a mapping of external traffic to virtual resources; it is the rules by which we take inbound network traffic and map it onto a set of services. Here we're starting to talk about real networking primitives: virtual hosts, external IP addresses, domain names, load balancer rules. This could even be a physical load balancer, and we'll talk about that a little bit, but you have some sort of external identifier, potentially a physical one, mapping onto all of your virtual resources.

Before we get into the meat of that and talk in depth about what you can do with ingress controllers, and how to move towards writing your own, we're going to walk through the standard configurations on an Ingress resource. We'll return to this, but you can do a lot with them out of the box, without even writing your own config: things like TLS, path-based routing, and virtual-host routing based on domains; and with your own metadata you can come up with as many custom rules as you want.

Here's a quick TLS Ingress resource: we have a secret holding our certs, and you can also use something like kube-cert-manager by Kelsey Hightower to obtain those certificates from Let's Encrypt and store them as secrets. You create an Ingress, point it at the secret, point it at your backend, and now you have a router that ties your TLS certs to your application, and you're done. You can do much the same with virtual-host routing, and you can specify multiple rules inside a single Ingress. Here we have some sort of billing API, with a master or production host and a staging host, both backed by different services but semantically part of the same Ingress; this is captured by some upstream router that will take this manifest and expose it. We can do the same thing with path-based routing. It's all pretty clear; those of you on laptops are probably copying and pasting these off my screen.
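Again, the slide isn't in the transcript; here is a hedged reconstruction of the kind of Ingress being described, using the extensions/v1beta1 schema that was current at the time of the talk. The hostnames, secret name, and service names are illustrative, not the speaker's:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: billing-api
spec:
  tls:
  - hosts:
    - billing.example.com
    secretName: billing-tls            # secret holding the TLS cert and key
  rules:
  - host: billing.example.com          # production host
    http:
      paths:
      - path: /
        backend:
          serviceName: billing-api
          servicePort: 80
  - host: staging.billing.example.com  # staging host, same Ingress, different service
    http:
      paths:
      - path: /
        backend:
          serviceName: billing-api-staging
          servicePort: 80
```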
So you create the Ingress resource, and maybe you type it all in and run it, and great, I'm about to start sending traffic everywhere, this is awesome, I'm going to start creating Ingresses left and right, and... nothing happens. Unless you're on GKE; GKE users, none of this applies, you're going to get a load balancer and it's going to be great. But if you're on minikube, if you're running your own cluster and you haven't thought about this problem before and you just go create an Ingress, nothing happens. You're sitting there going, what the hell? Fine, I'll just go use a Service, that'll be fine, screw this thing, I'll type "type: LoadBalancer" and it'll work, I've got nginx already set up and going. Well, that's not why we're here, and there are problems with that. The problem isn't just that you created an Ingress and nothing happened, and it isn't that Kubernetes is broken.

So what is Ingress, really? Creating an Ingress resource is a little different from creating a Service resource and being done. Your Ingress resources really are just the rules for routing your inbound traffic towards your cluster resources, and this is really important: it says nothing about how those rules should be applied. This is a very declarative API; it's not imperative in any way. I've just set up the rules. Most of Kubernetes works that way, so it should be pretty familiar.

Before we get to the solution: why wouldn't I just use a Service? We talked about how easy that is, and you can use a Service, that's fine, but a lot of use cases bump up against the tattered edges of what a Service can do pretty quickly. That's because Services are pretty limited: they're tied to the controller manager's lifecycle, and their routing rules are attached to the lifecycle of the Service itself; I delete the Service, and that's gone. If I'm deploying a Helm chart, and who's using Helm here? A lot of you, okay; check out Helm when you're done, and you'll see some Helm later on. If I'm packaging up my entire application as a Helm chart and I have a Service resource in there, and I go to delete or migrate it, my networking is inherently tied to that chart. I change things around and I've now lost my elastic IP, I've lost my load balancer, I have to make DNS changes; I have all sorts of problems. Ingress rules stand to decouple those networking problems from your application.

Most importantly, we have what's called an ingress controller. The ingress controller is a control loop that manages these rules and applies them in certain ways depending on what you're doing, and you have to have this thing running in your cluster. Well, Services do that for you, sure, but Ingress has some really big advantages over them, and I think I've alluded to a couple of them. With Services you get a canned set of service types, LoadBalancer, NodePort, ClusterIP, and whatever else the Kubernetes authors add in future versions, and that's really primitive round-robin load balancing. A lot of people are probably okay with that, but there are other use cases, things like A/B deploys and bleeding traffic over, where you may want to work at the network level. You may not always want to do that via a rolling deploy, especially if you're testing out different types of applications, different features, flipping feature flags; you may not want to rely on the deployment mechanism to do that for you.

So we have our ingress controller, and this runs separately from your Kubernetes master; all of these advantages come from that, and you can deploy controllers separately. The big thing here, and I may have gotten myself out of order, is that it's bring-your-own-controller. If you stand up an Ingress resource and nothing happens, it's because you didn't have an ingress controller, and typically Kubernetes installers don't ship with one. Google Container Engine does, and there are a couple of third-party offerings that do, but typically you just won't have that out of the box. There's more upfront work, but you're able to write your own and define your own rules with it. So when we create an Ingress resource for the first couple of times and nothing happens, it's because we didn't have an ingress controller to apply our rules to the networking fabric above our cluster. And it's important to note that this runs separately from your Kubernetes master, meaning you can have one or a thousand different ingress controllers depending on what you're doing.
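To make "you have to have this thing running in your cluster" concrete, here is a hedged sketch of how an off-the-shelf nginx ingress controller might be run as an ordinary Deployment. The image name, tag, and default-backend wiring are illustrative, not something shown in the talk, so check your controller's own docs before relying on them:

```yaml
apiVersion: extensions/v1beta1       # Deployments lived here at the time of the talk
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      containers:
      - name: nginx-ingress-controller
        # illustrative image; substitute whatever controller build you actually use
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        args:
        - /nginx-ingress-controller
        # the contrib controller expects a default backend to serve 404s for
        # traffic that matches no Ingress rule
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
        - containerPort: 443
```

This sketch assumes a small default-http-backend Service already exists in the same namespace to catch unmatched traffic.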
So let's deploy something. We're going to deploy an application called Croc Hunter, shamelessly redistributed from a colleague of mine, Lachlan Evenson; you can talk to him about Kubernetes and about hunting crocs. It's built as a Helm chart; if you haven't checked out Helm, please do, because I use it heavily in the rest of this talk. We're going to play now and hope the live demo gods are happy with me. Is this all visible in the back? Great.

I'm going to do a helm install of our Croc Hunter chart, and one thing I'm going to do real quick before that is disable its ingress. Just to throw some charts at you: these are a little denser than a normal manifest because they're templated with Helm, but the important thing is that we have a standard Deployment, the kind you'd expect, and we have a Service. Nowhere in this Service do we say it's a type of anything, and nowhere do we expose it to the internet or on the node; we just create a Service that bundles up all of the replicas of this Deployment and represents them as that single semantic grouping I was talking about. So we do a helm install of the Croc Hunter chart, and in a moment we'll have it live on the cluster. If I do a kubectl get pods --watch, and type it right, we'll see that Helm gave the application a random name, a random slug, and gave us three pods under it. So we have a nice running application, and if we're inside the cluster we could hit it at its service DNS name, something like <service>.default.svc.cluster.local, and we'd get it.

But how do we get to it from the outside? We don't have an external IP, so we can't. We could expose it via type LoadBalancer, of course; this is a Google Container Engine cluster, so it would create a load balancer for us. But let's try it the Ingress way. Here's our Ingress resource for this, again fairly templated out, but the important thing is that we have our rules, and under them we just have a single path of /, and that points at our Service. Unlike a Service, we're not pointing at pods, we're not selecting pods; we specify directly the Service we're going to use; there's no selector and no set like there would be with a Service. So I'm going to turn this on in my values.yaml and do a helm upgrade on our release, and change the name real quick, so I'm not running the default GKE load balancer here; I'm actually using the ingress controller we're going to be creating in the next few minutes, an nginx-based load balancer that gives me different subdomain routing rules. So let's go to the Croc Hunter URL; I can select it, and it's live, it's up and running. You can all go there and spend the rest of this talk hunting crocodiles if you want, though you may want to watch what's happening instead; it's addicting, but I'll keep it up the rest of the day.

Okay, that was cool: we got traffic to it by using an Ingress, and more importantly, after we defined that rule, it was picked up by some sort of ingress controller.
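The chart itself isn't shown in the transcript. As a rough illustration of the pattern being described, an ingress toggled on from values.yaml with a single / path pointing at the chart's Service, here is a hedged sketch of what such a Helm template and values file could look like; the value names and template are assumptions, not the actual croc-hunter chart:

```yaml
# values.yaml (illustrative)
ingress:
  enabled: false                  # flipped to true for the demo
  host: crochunter.example.com    # hypothetical hostname

# templates/ingress.yaml (illustrative)
{{- if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-croc-hunter
spec:
  rules:
  - host: {{ .Values.ingress.host }}
    http:
      paths:
      - path: /
        backend:
          # points directly at the chart's Service, not at pods
          serviceName: {{ .Release.Name }}-croc-hunter
          servicePort: 80
{{- end }}
```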
So let's walk through writing a basic nginx ingress controller. If you're not familiar with Go, that's okay; I've kept this pretty simple and austere, and we'll quickly talk through everything; it reads well enough. The first thing we do, as I say "it reads well enough" and then throw a bunch of nginx on the screen, is define a quick nginx template that takes everything and wraps it into the result we want. From there, all we need to do is use a standard Kubernetes client to get a handle on our Ingress resources: make a client and start listing them for whatever namespaces we want. Notice, right above me on the right, we use api.NamespaceAll; you can actually use this to filter down by namespace if you only wanted to do this for a single namespace, or to do other interesting patterns. All right, we have the Kubernetes API, we've got a template; let's go start our nginx process. In this contrived world we're going to have a Dockerfile that contains both this binary and the nginx process, which we then deploy somewhere; so we'll assume we have a handle on nginx somewhere, solve that in the Dockerfile, and call it a day. Next we go ahead and watch for all of our Ingress resources; the rate limiter is there so we don't crush our Kubernetes API server, and it's a pretty standard pattern; I think it's actually part of the Kubernetes libraries. Then we list out all of our Ingresses, no matter the namespace, check them against the known list we have, and render an nginx configuration to match. This is about as simple as it gets; there are much more complicated ingress schemes and ingress controllers, Traefik is one of them, and there's a whole bunch of options, but we're keeping this very simple. Then we reload our config. That's the gist of the basic alpha demo nginx controller provided in the kubernetes/contrib repo: it's about 80 lines of code, and you have the very basics of a working ingress controller. From there you can spin it up to do as much as you want; the one Traefik provides is incredibly robust, there are also very robust ones for nginx, and because this is all driven by your own code, there's no reason you couldn't control a physical load balancer as well. That's really the beauty of Ingress: you define the rules of what your ingress controller looks like. So looking at what just happened: we create an Ingress resource, it gets picked up, we get a virtual host definition, and I can access my Croc Hunter service.

All right, so you might say: my infrastructure is a snowflake; we do things like per-branch deploys that give our QA team the chance to review changes in advance, and we have all sorts of interesting requirements; if you've read The Phoenix Project, we're way past that. Great; go ahead and embrace that, because you're able to. Building an ingress controller is really easy, everyone in this room could do it, and you don't even necessarily need to write it in Go; it can be in any language, whatever you want, and you can craft it to your own needs.

So here's the challenge, and because I didn't want to test the live demo gods too much, we have our ingress controller up and running already, but we want to deploy Croc Hunter in a slightly different way: we want to drive it from our CI pipeline, and really get to a setup where our developers can make as much progress as possible without being deterred, and then get those results to our QA engineers, again without being blocked. We'll start off with master on the root domain, and it's open; we'll ignore the auth rules for the moment. Staging is on a staging subdomain and is based on our staging branch. And every pull request we do, or really, for this demo, every feature branch we push up, will get a feature-branch deployment with its own domain, all controlled by CI, and the developers don't hate themselves at the end. So we need our ingress controller and an Ingress. For this I'm using Buildkite; we have a couple of other options we can show you, we have some good demos with Jenkins, and really this fits into whatever CI system you're using, but I've chosen Buildkite for the purposes of this demo.

So we're going to add some stuff to our Croc Hunter app. We have some typos down here and some issues, and we're on the master branch right now, so I'm going to check out a branch; we'll just call it add-kubecon, and I'll stash first. All right, we're on our feature branch now. I'm going to make some adjustments in our handler here, and just say "welcome to KubeCon." Okay, now I'll do a commit, we'll add everything first, and then we'll push. We immediately see it in our pipeline if we come back over, if I push to the right branch; so I'm going to push our remote branch, and there we go: our new remote branch has been stood up and our build is running right now. We'll go through the mechanics of this afterwards, but it's basically building our new container and pushing it up somewhere we can use it. So we have our pipeline and it's now going to build.

While that's building, let's talk a little bit about the structure of this. I have some steps here that leverage the upgradeability of Helm, combined with the ingress inside of Croc Hunter, to do an upgrade-and-install with the correct image tag and, potentially, the right subdomain. So for each of these branches I'm creating a new Ingress resource that the controller will pick up, and it deploys a new subdomain for you. You can expand on this and also add, say, a scheduled job to your Helm chart that would clean up that ingress after 24 hours or whatever amount of time you set. The nice thing is those options are up to you, and you can keep things simple through a combination of these resources.
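None of the actual pipeline files appear in the transcript, so here is a hedged sketch of what per-branch steps along these lines could look like in a Buildkite pipeline.yml. The registry, chart path, and values keys are hypothetical, and the helm flags are just the standard helm upgrade --install / --set usage, not the speaker's actual setup:

```yaml
# .buildkite/pipeline.yml (illustrative)
steps:
  - label: ":docker: build and push"
    command: |
      docker build -t registry.example.com/croc-hunter:${BUILDKITE_COMMIT} .
      docker push registry.example.com/croc-hunter:${BUILDKITE_COMMIT}

  - wait

  - label: ":kubernetes: deploy branch environment"
    command: |
      # one Helm release (and therefore one Ingress, one subdomain) per branch
      helm upgrade --install "croc-hunter-${BUILDKITE_BRANCH}" ./charts/croc-hunter \
        --set image.tag="${BUILDKITE_COMMIT}" \
        --set ingress.enabled=true \
        --set ingress.host="${BUILDKITE_BRANCH}.crochunter.example.com"
```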
So that's been pushed, and now it's deployed. If I go to the add-kubecon subdomain of Croc Hunter, we have our new deployment with that update. The nice thing here is that we were able to do that without building any custom systems, without doing any hard work; we just used the built-in resources that were available to us, and that made things really easy, compared to trying to build out these massive deployment pipelines that end up being brittle systems. Now we're working just with Kubernetes resources and keeping things much simpler.

As a recap: when we talk about a Service, we're really talking about defining our resources, and the challenge I'll leave you with is to not just reach, if you can help it, for type: LoadBalancer and expose that to the wider world. Rather, think of Services as that grouping, and pair them with your application. When you deploy an application, deploy it as a bundle, your Deployment, your Service, your ConfigMap, all of it, and think of that as one unit. Ingress can sit separately from that; Ingress doesn't have to be part of that unit; Ingress is really about how you expose and route traffic to it. And really, what I want to impress upon you is: match your routing to your needs. Don't let someone else decide what routing means for you, don't just pick something off the shelf, and don't be limited into creating these insane traffic patterns because you feel like you have to use a Service and attach a type of some sort to it. This is all built into Kubernetes for you now, and it really saves you a lot of trouble and time.

So thank you very much. My demo ran a little shorter than intended, or expected, but please come find me, I'll be at our booth; talk to me about Deis, Kubernetes, all those sorts of things. Again, this is "Ingress: Your Router, Your Rules," and I'm Gerred Dillon. Thank you, and I think we have time for questions if anyone wants them, or just come up afterwards, whatever works.

Yes? Let me repeat the question: if you want L7 load balancing between your pods, what do I recommend? It depends. If you're using Google Container Engine I would just recommend the built-in one. Are you talking about a single service with multiple pods, or even multiple services on the same route? Do you want to load balance between services or between pods? Between services on the same URL, you would probably have to set up an nginx config to cover that, and you could do it with nginx; just make sure you have some sort of metadata in there so they all get picked up into the same server block, the same set of backends, and that should cover it.
Could you hack on the nginx one? Yeah, you could absolutely fork one of the existing ones and do it, I imagine, pretty easily; you just have to adjust the template. I can walk through it with you afterwards.

That's a good question: how is the Deis router different? The Deis router currently is not an ingress controller; it just sits separately. I believe it's on the roadmap to enable it so that you could use it as an ingress controller if you want to, and there are a lot of plans around that, but right now it sits separately with a Service in front of it and routes from there. It works entirely on metadata: if you're using the Deis router, you use annotations on your Service, and that configures things like TLS and also the headers that get sent through. So it's more advanced in that sense, but right now it's not based on Ingress resources.

Yes? The question was: what's my approach to HA ingress controllers? Right now I prefer to run a single replica of a single ingress controller. One pattern I've had some success with is doing namespace filtering on those, so I'll set up one controller per namespace that I care about and then manage that deployment separately; I might have five ingress controllers for my five different environments.

Sure, that's a great question: how does Ingress handle non-HTTP traffic, such as SNI-based traffic, or telnet, or UDP, or any other protocol that's not the standard thing we're thinking about? That's dependent on what you're using for a controller. nginx supports UDP load balancing as well, and you could write something for it, but the resource doesn't change too much; you probably would not end up using much in the way of the standard rules, and you'd configure via metadata on your resource and then pick that resource up within whatever your controller software is. The configuration options right now are very much geared towards L7. Any other questions? All right, thanks everyone.
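To illustrate the per-namespace pattern mentioned in that answer, here is a hedged fragment of the kind of args you might add to the controller Deployment sketched earlier. The contrib and ingress-nginx controllers expose --watch-namespace for this, but treat the exact flag and values as something to verify against your controller's docs:

```yaml
# Excerpt from the controller Deployment's container spec (illustrative):
# one controller per namespace, each watching only its own namespace.
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --watch-namespace=$(POD_NAMESPACE)   # scope this controller to a single namespace
```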
Info
Channel: CNCF [Cloud Native Computing Foundation]
Views: 41,996
Rating: 4.6923075 out of 5
Keywords: CloudNativeCon 2016, KubeCon 2016
Id: Syw2PzRudIM
Length: 32min 9sec (1929 seconds)
Published: Wed Nov 16 2016