Kubernetes Deconstructed: Understanding Kubernetes by Breaking It Down - Carson Anderson, DOMO

Captions
Okay, let's get started. I have so much content. My name is Carson Anderson, I'm here from Domo, but I'm here to talk about Kubernetes, so don't worry, it's not a pitch for Domo at all. That's my GitHub handle, carsonoda; that's where you can contact me, and you can get the full source for this presentation. The whole thing is open source. To start off: who read the description for this session and thought to yourself, how is he going to fit all of that into thirty-five minutes? Yeah. I'd need an hour. The good news is I have a link at the end of the slides; I pre-recorded the whole thing unabridged, and you can go watch that version, where I go through things a little slower. I'm going to skip some stuff, and I'm going to tailor this to you. So raise your hand if you've used Docker before. All right, that's good, we can skip some stuff. Raise your hand if you've used Kubernetes before. Okay. Raise your hand if you really want to know the dirty work of Kubernetes. Oh good, okay, we are going into the dirty work of Kubernetes. I know the keynotes are all about magic, but I'm all about the real behind-the-scenes; we're going to pull apart the curtain. Before I do that: I've been thinking about how to describe this talk, and it's not a deep dive, it's whatever the inverse of a deep dive is. It's a low-altitude flight over the landscape of Kubernetes. We're going to breeze by everything, and I don't have time to focus on any one bit, but hopefully it gives you an idea of where everything fits together, and you can go look up the details later.

Let's get right into it. I'm going to start with the basic-user section, where we just make a really basic Kubernetes application so we can talk about the magic behind it in the deeper sections. As a basic user, I don't care about the details; I'm just going to really quickly make some stuff, and I'm going to start with containers. We're all Docker people, so I can skip most of this. So: say you have something
you want to containerize, like this application, this presentation. That's really just a filesystem. You tar that filesystem up, stick it inside a box, put some metadata on it so you know what kind of runtime information it needs, and stick some labels on it so you can identify it among all your other images. Awesome: you've got one container, one application, one process tree. You're doing containers right. But we all know that's not really enough. A lot of times you want to run things together as one application, and don't do init systems inside a container, that's a bad idea. You want something better. Maybe you want to run a Prometheus exporter as an extra application, tightly coupled; maybe you want an Envoy sidecar, you've heard that a lot; maybe even some volume data. Docker doesn't really provide the low-level tools for that, but Kubernetes does: it's the pod. We all know what a pod is, so I'm going to skip a lot of this. A pod is a container, multiple containers, a container and a volume, any combination of those things. It's what Kubernetes executes. You'll see "pod" and "container" used interchangeably, in fact the docs even do this, but really it's always pods. So we can make a pod; we can deploy this presentation with one pod. Awesome. But we want fault tolerance, we want load balancing, we want two pods. We could make two pods manually, and we all know that sucks. So we're going to make a deployment. A deployment is just everything Kubernetes needs to know about how to create the pod, the replicas, and how to label it. We make the deployment and it makes the pods. Quick note on the symbology here: everything in Kubernetes is going to look like this, with lots of properties underneath the name. I'm only going to point out the ones I care about at any point in time; there's way more than just a port defined in these pods. So we make the deployment, we've got our application running, and that's really cool. But we need to get to it, and in Kubernetes the way you get to it is the service.
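As a sketch, a minimal deployment like the one described above might look like this (the names, image, and port are placeholders, not the ones from the talk):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: presentation
spec:
  replicas: 2                 # fault tolerance: two pods instead of one
  selector:
    matchLabels:
      app: presentation
  template:                   # the pod template: how to create each pod
    metadata:
      labels:
        app: presentation     # labels let a service find these pods later
    spec:
      containers:
      - name: web
        image: nginx:1.25     # placeholder image
        ports:
        - containerPort: 8080
```

The deployment owns the pod template; change the image here and Kubernetes cycles the pods out for you.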
So we take our service, point it at our pods, and that's how, inside the cluster, you hit either the service IP or its name and get to your pods. Then you do cool things with the selectors, so that as things change, when we change the image in the deployment, the pods cycle out and the service keeps working. Again, I'm skipping ahead here; watch the video. So we just point the service and it's magic. I'll tell you how that happens later, but for now I just want to understand what we're creating as a basic user. Finally, even people who use Kubernetes a lot may not have used ingress yet. Ingress rules are the way we get from outside the cluster into the cluster. We create rules that say: hey, if you're coming into a predefined load balancer (again, as a basic user I don't care how it got there, I'm just going to use what's there; I'll break it down later) destined for that host, go to this service, and traverse all the way through. Awesome. We do that with master access: we have a place to go and credentials, and we're going to use kubectl (and no, I don't say "cube cuddle"). As a really basic user, we're going to use an imperative create for a namespace, and a declarative create for the things defined in this YAML. This is boilerplate, this is day-one Kubernetes stuff: we've got a deployment with all the selectors and the pod spec, and we've got the service using the load-balancing, discovery, and abstraction parts of a service. Notice the service listens on 80 but we're going to some weird port on the backend; that's fine. And we're creating that ingress rule, which again is just a rule that says: hey, if you're going to this hostname, anything under this path, go to this service and this service port, using the service abstraction. So take your credentials and your files, send them to Kubernetes, and Kubernetes spits out the things we asked for. That makes sense. It also spits out some things we didn't explicitly ask for, but that we expected.
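The service and ingress rule described above could be sketched like this; the hostname, ports, and names are illustrative, and I'm assuming the modern networking.k8s.io/v1 Ingress API rather than the beta API current at the time of the talk:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: presentation
spec:
  selector:
    app: presentation        # matches the deployment's pod labels
  ports:
  - port: 80                 # the service listens on 80...
    targetPort: 8080         # ...but targets a different port on the backend
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: presentation
spec:
  rules:
  - host: slides.example.com       # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: presentation     # hop to the service...
            port:
              number: 80           # ...using the service abstraction
```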
Also note these weird names; I'll tell you where those come from in a bit. Let's do it. While I'm doing this, you can start loading either of those URLs, and hopefully you'll stop getting 404s in a bit. The one at the bottom just redirects to the other. For the basic demo this is auto-typed, because I'm not going to fat-finger stuff live. You configure kubectl with the master address, then the credentials we knew we needed, then use a context to tie them together, and then use that context, so you can switch around between clusters. Awesome. Now let's be the basic user: I'm going to make things really simply, imperatively create the namespace, and then throw my YAML files at it one at a time. This is not advanced; I'm a basic user, this is simple. So there we go, we're making all our stuff. Someone raise your hand as soon as that loads; just start hitting refresh, and from now on, if you're looking at your laptops, I'm going to assume you're looking at my slides. This is how you can follow along if you can't see what I'm doing. This is not fancy. Anyone got the application up yet? Okay, it's up. Awesome. It's all the slides; it's just a web server serving files. So as a basic user, yeah, I've deployed my application. But you're here for the meat, so let's get into the meat.

As the cluster admin, I'm not satisfied. I want to point out these symbols; I've scattered them all over this section and a little in the rest. They are places where you can replace, extend, and heavily configure Kubernetes. I'm not pointing out all of them; there are extension points I don't even know about. Kubernetes is built to be extended, and I'll point out some big ones throughout the session. So: we sent our files to Kubernetes and got stuff back. But we know Kubernetes isn't just a nebulous thing, even though we've been calling it that the whole conference. It's actually a bunch of things.
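Back in the demo, those kubectl config steps roughly build up a kubeconfig like this; every name, address, and path here is made up for illustration:

```yaml
# ~/.kube/config, approximately what the kubectl config commands produce
apiVersion: v1
kind: Config
clusters:
- name: demo-cluster
  cluster:
    server: https://master.example.com:6443   # the master address
users:
- name: demo-user
  user:
    client-certificate: /path/to/cert.pem     # the credentials
    client-key: /path/to/key.pem
contexts:
- name: demo               # a context ties a cluster and a user together
  context:
    cluster: demo-cluster
    user: demo-user
current-context: demo      # "use that context" to switch between clusters
```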
It's even more than this; these are just the big ones I'll talk through. At the top are your master components; the bottom two are primarily node components, but they can run on the masters. I'll go through them one at a time. Also notice that basically everything is pluggable, replaceable, or configurable. Even the kubelet, I've learned, is replaceable; things like rktlet swap in under it. So really, basically everything in Kubernetes can be replaced or extended.

Starting with the API server: when we sent our files into the API and got stuff back, we could guess that was the API server; that makes a lot of sense. Notice we've got our first extension point here: custom resource definitions and API server extensions. They talked about this in the keynote; I don't have time for it, but they're really cool ways to let Kubernetes do the dirty work of adding stuff to the API. We sent things in and got stuff out, but those things don't live in the ether; these resources have to live somewhere, and you know this: it's etcd, or more commonly an etcd cluster. etcd is great, because a lot of the power and reactivity of Kubernetes is just etcd features surfaced up through the API; the developers didn't even write them themselves, they just use etcd. It's distributed, fault tolerant, all that good stuff. And that's why the API server isn't magic: it's just the heart and veins of Kubernetes, how everything connects up and talks to everything else. But it's not doing anything for you; the magic really starts when we move on to the scheduler. The best analogy I've heard is that the scheduler is the maitre d' of Kubernetes. It hooks up to the API server with a long watch. This is one of those etcd features that's really cool; it's kind of a pub/sub event model. Basically nothing in Kubernetes, if you're doing it right, is listing resources and checking for differences; it just says, hey, I'm subscribed to all of these, tell me about changes and I'll react instantly. Everything uses that.
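For illustration, a custom resource definition like the extension point mentioned above might be declared like this; the group and kind are invented for this example:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: slidedecks.talks.example.com   # must be <plural>.<group>
spec:
  group: talks.example.com             # invented API group
  scope: Namespaced
  names:
    plural: slidedecks
    singular: slidedeck
    kind: SlideDeck
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
```

Once this is applied, the API server serves `kubectl get slidedecks` like any built-in resource, with storage in etcd handled for you.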
So the scheduler is watching the API server, specifically watching for pods that need a place to live, like a maitre d' at a restaurant: I've got people coming in, I've got lots of tables, I'm going to decide who sits where. The scheduler says: okay, you live there, you live there. There are lots of ways to influence that decision; it's really highly configurable, and I don't have time to cover them all, but these are basically things you put on the pod or the node to give the scheduler information about what your service and your pods need, and it makes smart decisions about where to run them. If that's not good enough for you, replace the scheduler. Write your own custom scheduler; it's not that hard, I've seen it done in a few lines of bash. I've seen it done as simply as: for each pod that doesn't have a place to live, first pod goes to the first node, next pod, next node. It's not smart, but you can do it. A custom scheduler is really useful if you need to make scheduling decisions based on outside information the regular scheduler just doesn't know about. And if you're worried about going all-in on that, you can do both: run kube-scheduler and let it do the default for all your pods, and just per pod say, no, for this one, use that other scheduler to decide where it lives. That's really cool, and that's the scheduler, our maitre d'.

The controller manager: if the API server is the heart, this is the brain. It's the thing that made all this stuff. If we had time to poke around in the API, you'd see we made almost none of this ourselves; we made like two things, and tons of other stuff got created, and it's the controller manager doing that. But notice it's the controller manager, and just like a real-life manager, it does no work itself; it's in charge of the workers. Inside the controller manager (there are way more than the three I'm pointing out) are all these core control loops that sit there, watch, and make one specific thing, one bit of logic, happen for Kubernetes.
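Going back to the scheduler for a second: opting a single pod into a custom scheduler, as described above, is just a field in the pod spec (the scheduler name here is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: special-pod
spec:
  schedulerName: my-bash-scheduler   # hypothetical custom scheduler; every
                                     # other pod still uses the default
                                     # kube-scheduler
  containers:
  - name: app
    image: nginx:1.25                # placeholder image
```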
We've got the extension point here: you heard it this morning, this is the operator pattern, or custom controllers. It's actually even easier now with metacontroller, which I'm really excited about, and it's almost always paired with custom resource definitions. We're using this right now to add nodes to our cluster: we create an extra resource in Kubernetes that says "I want more nodes", and it makes them. All these controllers talk to a bunch of things in the API; it's not one-to-one, I'm just showing you there are lots and lots of these connections. All the controllers are watching and making things happen; these are the proactive parts of Kubernetes. For example: a user creates a namespace, a controller makes the service account, another controller makes a secret. A user makes a deployment, a controller makes a replica set, another controller makes the pods. A user makes a service, a controller makes an endpoints object and points it at the pods. I'm not going to cover endpoints; they're really just the actual routing meat behind services. I just want you to see that this is the reactive part: when you say Kubernetes quote-unquote "did something for me", it's a controller in the controller manager that did it. And with advanced logging and auditing, and using RBAC extensively, you can get really cool logs that say exactly which piece did what action to each of your resources. That's the controller manager.

So, the master components: we have the heart and veins, the maitre d', and the brain. These bottom two do sometimes run on the masters, and I'll cover that in the cloud section, but they're primarily node components. The kubelet's job is to hook up to the API server like everything else does, live on every single node, and make containers real. That's all it's there for. It watches and says: I'm on this node, you have this pod scheduled here, the maitre d' said go sit there, and somebody has to make it real. That's the kubelet. It talks to
your container runtime. There are actually way more than the two runtimes I'm going to talk about; anything that implements the standard container runtime interface can do this. The kubelet talks to the runtime, creates the pod, makes the containers real. It also does cool things like liveness probes (your process running doesn't mean you're alive) and readiness checks (are you ready to receive traffic?). It's constantly checking up on these containers and reporting information back up to the API server. It just makes containers real.

kube-proxy: I'm going to gloss way over this, because it's covered in real depth in the network section in a second, but basically its job is to talk to the API server, like everything does, and make services real. It's constantly watching all the services, and on every node, every single service is remade real, all the time, by kube-proxy. If something about the underlying service changes, a pod behind it changes or goes away, kube-proxy makes it real. A new one comes in, kube-proxy makes it real. That's its whole job. And you can completely swap out kube-proxy: entire network providers do this, load-balancing companies do this, and I've seen it done with just Linux kernel features. There are a lot of ways to do that. And that's it; that's the magic behind Kubernetes, quote-unquote: where those couple of things you made went, and how it all worked.

Let's talk about networking in more detail. As the network admin, I want to know how things actually talk to each other. I care; I know we're not supposed to, but I do. So we're going to take the basic stuff we made, starting with the pods, and explain how networking works in Kubernetes. I'm covering default Kubernetes networking; the provider you use may change a lot of this. A fundamental tenet: every pod in Kubernetes has an IP, and that IP is unique across the entire cluster. Pods live on nodes, because they're containers; they have to run somewhere.
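Back on the kubelet for a moment: the liveness and readiness checks it performs are declared on the container. A sketch, with placeholder paths and ports:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
  - name: web
    image: nginx:1.25           # placeholder image
    livenessProbe:              # "running" is not "alive": on failure,
      httpGet:                  # the kubelet restarts the container
        path: /healthz          # placeholder path
        port: 8080
      periodSeconds: 10
    readinessProbe:             # "alive" is not "ready for traffic": on
      httpGet:                  # failure, the pod is pulled from endpoints
        path: /ready            # placeholder path
        port: 8080
      initialDelaySeconds: 5
```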
Nodes have a unique IP across the entire cluster. They also have a CIDR range that says: here's the range any pod on me has to be in. That's mostly for routing reasons, and it can change entirely based on the network provider, but in the default I'm covering, nodes just get a big chunk of a larger range. So let's make some pods and put them on my nodes; all the IPs make sense, they line up like you'd expect. And they have to talk to each other. They talk via a network provider, and I'm going to go more in depth here, because this is arguably the most replaced part of Kubernetes. If you've ever gone through the install page and looked at all the tabs for network providers (and that's not even all of them), there are tons of ways to do this, and the reason is: it's not that hard. It really isn't. To be a network provider for Kubernetes you have to do three things; check these three boxes and you are a valid network provider. Technically you also have to ship CNI plugins, but I'm talking about functionality. First: all pods (the docs use "pods" and "containers" interchangeably; this is straight from the docs) communicate with all other pods without NAT. This is simple: you've got lots of pods, and it's flat, a very simple architecture; you just need to get from everything to everything directly. If this scares you as a security person, don't worry, I have some solutions in the power-user section, but this is the network architecture; it's very flat. Second rule: all nodes communicate with all containers (pods) without NAT. So nodes to pods: also flat. Everything just talks to everything. I drew it this way so it's easy to see, but it's actually like this; again, it's flat, everything needs to reach everything at the network layer. The third one is weird: the IP a container sees itself as is the same IP everyone else sees it as. This basically means don't munge IPs; don't let
me think I'm sitting somewhere while everyone else thinks I'm somewhere else; that's very confusing. Keep it simple. According to the docs, this is mostly to make it easier to move from a VM architecture to a pod architecture. So if you do those three things and provide a CNI plugin, you're a network provider, and there are so many ways to do it. We've covered the pods; let's move on to services and the details of how those work. When I make a service, it's got a selector that points it at a subset of pods. It's also got at least one port, hopefully. It can listen on one port and target a different one on the backend; they can be different, and that's the point: it's abstraction and load balancing and everything else. You can have multiple ports; they have to be unique on the listening side, but they can target the same backend port, and that's fine; you can use the abstraction that way. Services also have a type. The YAML I used in the basic section didn't specify a type, because there's a default: it's the bottom one here. There are actually four types, by the way; I'm just covering the three big ones, and I'll start from the bottom up, because they all build on each other. When we create a service, whether we give it a type explicitly or not, it's definitely going to get a cluster IP, and you'd expect that with the ClusterIP type: a controller in the controller manager says, here, let me give you a cluster IP. That makes a lot of sense. To illustrate what it's for, we'll make some webserver pods and a cache pod, and we need the webservers to use the cache. We could point them straight at the cache pod's IP; it's flat, they'd get there. But that sucks, because what happens if that pod goes away, or we want to scale up the number, or any of those other things? We'd have to reconfigure. So we use a service; that makes a lot of sense. With a ClusterIP service, we care about the cluster IP: we point our webserver pods at that, and they'll get to the cache behind it.
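The webserver-to-cache setup described above, sketched as a ClusterIP service; names and ports are made up for the example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cache
spec:
  type: ClusterIP          # the default type; could be omitted entirely
  selector:
    app: cache             # selects the cache pod(s), wherever they run
  ports:
  - port: 6379             # webservers hit the stable cluster IP (or the
    targetPort: 6379       # name "cache"); it survives pod churn and scaling
```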
We just point at that IP, and it never changes. I do want to point out real quick that obviously you can use names too. The DNS add-on in Kubernetes is optional but really highly recommended, so you can go to the service name, the name with the namespace, or the fully qualified cluster name; they're all valid ways to get to your service, and they never change. I'm going to talk in terms of IPs, because everything just resolves down to that; that's what it boils down to. So we point a webserver at the IP, get to the cache pod, and we never have to change anything; if pods go away and new ones come up behind it, we don't care, we just use the abstraction. Now let's talk about how this hop actually looks, because so far we've just said "that's magic", and I care about the magic. We'll put these pods on different nodes, because they're going to be on different nodes more often than not. Notice the pods live on nodes, but the service IP doesn't. You can go to every machine in your cluster and list all the addresses; you'll never find that IP anywhere. It's not on an interface; it's a target for iptables, with default networking at least. That's all it is: a rule that says, traffic destined for that IP means I need to randomly assign you to one of the pods that are active behind it. And the thing that makes those rules, the thing that makes services real, is kube-proxy, sitting there watching all the services and endpoints and making them real. It writes iptables rules: something changes in the API server, it makes it real; the service scales up, another change, it makes it real. That's kube-proxy's whole job, unless you've replaced it, in which case this might look a little different. So that's ClusterIP; let's move on to NodePort. ClusterIP is great inside the cluster, pod to pod, easy, but not all our communication is pod to pod; we need to get to it from the rest of our infrastructure, and that's where the NodePort service comes in. We make a NodePort service, and like I said, it builds on
ClusterIP: it still gets a cluster IP, but it also gets, ta-da, a node port. That makes sense. The node port comes from a weird high range. And "node port" means nodes: we take that node port and use iptables to make basically another target, another entry point into the same iptables rules that pod-to-pod communication uses, except the node port is reachable from anywhere that can reach the node. So you can take your clients, point them at that weird node port, and they get load balanced with the normal methods from there. That's awesome, but ugly, right? You're not using a URL with a weird node port in it to look at these slides right now, and I'm not going to give you one. So we need one more step: the LoadBalancer type service. This is really cloud-specific, by the way; the first two types are always there, but whether this one exists, and exactly how it works, depends on the cloud. So, assuming a supported cloud: make a LoadBalancer type service, and a controller in the controller manager (because that's where all the real brain action happens) sees it, talks to the cloud provider, makes a load balancer, and points it at the node port. Then you point your clients at that load balancer, and you traverse in through something normal that can be static, never change, and reach all the nodes as they scale up and down. Again, the details differ between clouds, but the general idea is that you let Kubernetes make and provision those load balancers for you. You can also just make NodePort services and build the load balancers yourself if you want. So that's how we get all the way through, and at a network layer, that's kind of the meat of Kubernetes functioning. Now we wander to the cloud layer. As a cloud admin, I care about execution; I care about where things live.
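The three service types build on each other, so the LoadBalancer service described above is just a type change on the same spec; everything here is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: presentation-public
spec:
  type: LoadBalancer       # the cloud controller provisions an external LB...
  selector:
    app: presentation
  ports:
  - port: 80
    targetPort: 8080
    # nodePort: 30080      # ...pointed at an auto-assigned port from the high
                           # range (30000-32767 by default) on every node;
                           # uncomment to pin it explicitly
```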
This is all well and good, but I've got to spin stuff up. In Kubernetes we make a big distinction between worker nodes and master nodes. They can be the same machines, but in big environments they're not, because we get a lot of security and safety from separating them (it's not elected like Swarm). So let's put our components somewhere. All of these represent pieces of Kubernetes code that need to execute; they need to live somewhere. I'm going to describe a kind of default HA master scenario; it may look different depending on what you're doing. We take the API server and run it everywhere; that's no surprise to anybody. We take the scheduler and put it everywhere too, kind of: the code is on every single master all the time, always running, but the schedulers talk amongst themselves through their local API servers and elect a leader: you, you're in charge. The one in charge is the maitre d' now, and the other two are just waiting for him to die. That's all; just waiting. If he doesn't check in, one of them takes over his job. We do the same thing with the controller manager: you don't want more than one brain, that would be really confusing (we're not stegosauruses, which supposedly had a second brain in the tail). So again, the controller manager runs on all the masters, but if one explodes, another takes over. The leaders don't have to be on the same master; notice here they're not; you just need one active at a time. I said the kubelet was a node component, but it really runs on all the masters too. We also know the API server has to talk to the data store, has to put data somewhere, and that's etcd. As a cloud admin, I could use an etcd service, just make etcd somewhere and let the cloud deal with it, and that's great. Or you do what we do and kind of hyper-converge: every master runs a copy of etcd, they all cluster amongst themselves, and each API server uses its own local copy. That way I have full control over etcd, and I reduce the
number of VMs that I need. We have our master access, backed by a cloud load balancer pointed at the API servers; we point our kubelets, all our node components, at it, and even our client libraries from outside the cluster. If you're inside the cluster, pod to API server, you actually go straight to the private IPs via an endpoint, a kind of normal service-discovery thing. But for your node components or an outside user, the load balancer is what you should be using, because you get all the cloud magic of normal load balancing and fault tolerance. Now, when I showed ingress rules, I didn't cover them in the network section, and if you noticed that, it's because ingress is really more of a cloud thing than a network thing. Ingress rules are just rules. We made one before, but we need something to make it real, and the thing Kubernetes uses to make it real is an ingress controller. You'll probably hear pitches for these all over the expo floor; there are tons of companies doing this right now, because it's open: Kubernetes doesn't provide one, it says go make one. So the rules have to be enforced; we make ingress controllers, and we also need a load balancer to get into them. I'm going to walk back through the ways we've done ingress, because we've learned some things as we went. We've got nodes; they're normal nodes, I'm just not showing the other components. At first, we set aside a subset of our nodes (notice these are called ingress nodes) and said: okay, these are ingress nodes. The reason is that we used a DaemonSet targeting just those nodes and created a host port mapping. I didn't cover host ports, because you generally shouldn't use them, but what we did there has one good property: it's very direct. It's like old-school Docker port mapping: this port on the host goes straight to this port on this container, which is direct and neat, and if you're really sensitive to extra hops, you might want it. But you get into port madness: you have to start managing what ports things are listening on, and no two things can listen on the same host port.
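The ingress-node pattern described above, sketched as a DaemonSet with a host port; the labels, image, and ports are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-controller
spec:
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      nodeSelector:
        role: ingress        # only the nodes we set aside as "ingress nodes"
      containers:
      - name: controller
        image: example/ingress-controller:latest   # placeholder image
        ports:
        - containerPort: 80
          hostPort: 80       # old-school direct mapping: node:80 -> container:80
```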
We did it this way for ingress because we had set aside some nodes, and it's a straight hop. But if you don't want to deal with that, you can just create a LoadBalancer type service: you get a node port, no surprise, you balance into that, and then you hop around from your service. The downside is you get balanced twice; it's a double hop. You get load balanced once at this end, and then the controller bounces you again, somewhere else potentially. For most workloads, honestly, it's fine, but it's something to be aware of: if you're not jumping straight in, you might get jumped twice. Then we take your clients, point them at that, and you're almost there: we've gotten to our controller, but it's enforcing rules that point at a service, so we have one last hop. We've got our service and our pods, maybe even on some more nodes; that's fine. Once you've hopped in from the load balancer to your ingress controller, it can just do normal, standard cluster load balancing to the backend services inside your cluster, and you've gotten all the way through. That's exactly what you're doing right now if you're looking at the slides on your phone.

So, on to the Linux section. I have to skip a lot of this because of time, but I'm going to cover one part of container essentials, because I think you need it to understand Docker and Kubernetes, and that is Linux kernel namespaces. I'm going to focus on just this one. This is not Kubernetes namespaces; this is Linux kernel namespaces, meaning ways of isolating processes in Linux. You take your application (your process, really, but we're going to talk in containers) and you can isolate it in a bunch of ways: you can stick it in its own process namespace, its own filesystem namespace, its own network namespace, and really all of them,
right? Really, when you make a Docker container, you're getting split off in all of those ways. But the part we really care about, the cool feature, is that it's not one-to-one. It's not one container, one namespace: you can join other containers or processes into an existing namespace. This is how you take that one container and make your pod: you join all these containers into an existing network namespace so they can share that one IP that's unique in the entire cluster. I'll go over how Docker does that; I have to skip the rest of this. Needless to say, cgroups are the Linux way you split up resources, and they also provide some built-in accounting and stats management; again, watch the video if you want more details. And overlay filesystems I just put in here because they're neat, the way they work and the way they save space; again, no time. I do want to break down the nodes, though. We covered kube-proxy in the network section, so here we care about the kubelet; we care about running things. It's watching the API server, checking: hey, do you have pods that you want on the node I'm on? Okay, you do, let me make them. We're going to make an interesting pod just for argument's sake: we'll put nginx in there, a Prometheus exporter, and Envoy. We're putting all those containers in there because I want to show you how they share an IP address. So the kubelet talks to Docker. Docker is the main Kubernetes runtime right now; most clusters, I think it's safe to say, are running Docker. Again, it's not the only one, there are lots of options, but I'll talk about how Docker does it. When the kubelet sees that you want that pod running, it talks to Docker, and first it makes this weird "infra" container, which I'll cover in a second. This is why, if you go on any Kubernetes node and do a docker ps, you see about twice as many containers as you thought you needed. It makes this weird infra container and joins all of the other containers into the network namespace of that infra container, which is how they all share one address.
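That example pod, three containers sharing one IP, sketched as a manifest; the images and ports are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-ip-demo
spec:
  containers:               # all three share one network namespace, so they
  - name: nginx             # reach each other on localhost and share the pod IP
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: exporter
    image: prom/node-exporter:v1.7.0     # placeholder exporter image
    ports:
    - containerPort: 9100
  - name: envoy
    image: envoyproxy/envoy:v1.28.0      # placeholder sidecar image
    ports:
    - containerPort: 10000
```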
That is how they all share one address. And the fun part of infra is that it's really just this tiny piece of code whose whole job is to be there and not die. That's it, just stay alive, so that even if the other containers have to restart or get recreated, the pod can always keep its address; you just join everybody else in. I don't have time for CNI; it's the standard way that the IP address gets discovered.

rkt is the other big alternative for Kubernetes, and there are some important and cool differences between using Docker and using rkt. It can do CNI the normal way; also, CNI came out of rkt, so you don't have to have the kubelet do it, it can be native. But more importantly, that whole song and dance with the infra container? It's gone. rkt is pod-native: rkt's lowest-level thing is already a pod, not a container. So when you want to create a pod in rkt, you just create a pod. rkt is also spun off per pod, meaning there's no long-running daemon, no dockerd-for-rkt thing just sitting there the whole time, because you kick it off and it's self-contained. If you want to make another pod, you spin off another rkt, with all its filesystems and whatnot, to actually manage it. And one last thing: you can do hypervisor isolation with rkt. You just annotate the pod to say, hey, your stage1 (which is a rkt term) is basically to run this thing inside of a mini, super-lightweight VM. So if you're scared of container isolation and you want some extra protection, there are limitations, but you can absolutely do it, and you can do it for just some pods.

I don't really have time for logging. We know we use kubectl logs mostly; I just want you to understand that it's connecting all the way through to stream your data directly. Also, basically any logging driver that supports Docker supports Kubernetes; you just hopefully grab some metadata on your way out so you can tag your messages and filter them. Let's get to the power user section.
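Before moving on, the rkt hypervisor-isolation annotation mentioned above looked roughly like this in the rkt-runtime era. The annotation key changed across Kubernetes versions and rkt support has since been retired, so treat this as an approximation for illustration, not a working spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: isolated-pod            # hypothetical name
  annotations:
    # Ask rkt to use its KVM stage1, i.e. run this one pod inside a
    # lightweight VM instead of plain container isolation.
    rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-kvm
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
```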
The power user section is kind of like this cool grab bag of fun Kubernetes features. I'm going to add a bunch of stuff on top of my normal deployment with no code change; I'm going to add more features without having to change anything in my image. So go ahead, if you want to reload that, or just add on the /markdown. I'm going to kick off the power demo, and it's doing an apply. I'm a power user, right? I'm not going to create things one at a time; I'm just going to apply the whole directory, which says create or update, just do what you have to do. And then, hopefully in about a minute or so, you should have that markdown page running, with some links that break down basically every feature I'm going to show you here in a second; each one is explicitly shown in the examples, and pretty well commented. So these are just fun features that I want you to be aware of.

Security context is a feature where you can basically define a bunch of execution rules about what the runtime behavior of your pods and containers should be, things they can and can't do. You create that and apply it at the pod layer, or the container layer, or both; they can even be different for different containers in the same pod. It just tells the kubelet extra security information about how you want to run your pod.

Network policy is for you if you're scared of the flat network architecture. It's the way that you create some rules that block access to different things, and it's just labels, it's just YAML; a policy is just rules. By the way, you need something on top that enforces network policy. There are a lot of options for that; you just need to go find one that will read those rules and make them happen. But you can define: pods with these labels can only talk to pods with these labels, on these ports, and you can start locking things down in a declarative way, where you don't care about the details, you just say make this happen. And that's explicitly decoupled from the network model.
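As a rough sketch of the two features just described (the names and labels are made up; the fields themselves are standard Kubernetes API):

```yaml
# Security context at both layers: pod-wide defaults, container overrides.
apiVersion: v1
kind: Pod
metadata:
  name: locked-down             # hypothetical name
spec:
  securityContext:              # applies to every container in the pod
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: app
    image: nginx                # placeholder image
    securityContext:            # container level, can differ per container
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
---
# Network policy: just labels and rules; something (Calico, Cilium, etc.)
# has to actually enforce it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-from-frontend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend              # pods this policy applies to
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend         # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080
```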
The downward API is really cool. It's a way of taking metadata about your running pods (and this is not all of it; there are labels, annotations, everything) and feeding it through to the containers at runtime. If you click either of the proxy links in the markdown page, they will spit back their pod information and their node information, because they're getting it from the downward API at runtime.

Config maps I didn't cover; they're just key-value data pairs. Config maps and secrets are handled a little differently, but that's all they are. You can mount them through to your containers as volumes, and it's even live, which is really cool. So the markdown you're looking at right now is from a config map, and if I go change that config map right now, in about a minute it will change, and I don't have to restart the pods, because it's refreshed live. You can also take those same key-values and put them in as environment variables, so your Docker images that expect configuration that way will work just fine.

Affinity is one of the things I wish I had time to cover. It's one of those cool advanced scheduling features that says: here's how these pods relate to each other. I want this pod to always be scheduled next to this pod, no matter where you put it; if one's there, put the other one there. Tightly coupled, put them right together all the time. It's just extra information for the scheduler to tell it what you want from it. It won't move them around, but it'll always make sure that if one's there, the other's there. You can also loosely couple and say: try to, but I don't care if you have to split them up; and you can even weight that. You can do the same thing for nodes: this pod has to be on a node like this, or try to put this pod on a node like this. Useful if you've got workloads that have to use GPUs, where you'd say you have to be on a node with a GPU or you're not going to work. Anti-affinities are the opposite of all those things I just said: hard and soft rules for things that don't like each other.
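To make the downward API and live ConfigMap behavior described above concrete, here is a sketch (the names are hypothetical; the fieldPath values and volume wiring are the standard API):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: page-config             # hypothetical name
data:
  page.md: "# hello"
---
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo           # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
    env:
    # Downward API: pod and node metadata injected at runtime, no code change.
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    volumeMounts:
    - name: page
      mountPath: /etc/page      # files here refresh when the ConfigMap changes
  volumes:
  - name: page
    configMap:
      name: page-config
```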
Usually they're actually paired, so you'll see anti-affinity a lot: if you have a GPU node, you'd want hard affinity for your GPU workloads and hard anti-affinity for your non-GPU workloads, so you just reserve it. And that is the power user section.

Moving on to the credits. I used Sozi as my presentation software. It's open source: you make an SVG, you can animate every layer independently, and it just spits out web server files. So if you like Prezi but you don't want to pay for it, because we're all open source here, or you just want to work really hard (you do make an SVG by hand), this is so cool, and it can be as hard or as easy as you want it to be. All the logos I used are property of their respective companies, and Open Clipart was immensely helpful for all of the art and the diagrams and everything that I made; I borrowed a lot of it and altered it, so I highly recommend you use that, or contribute to it.

And I do have some time, I think a few minutes, for questions. Here are all the links; I will keep this up as long as I can, and I'll probably move it to one of those free Red Hat clusters at some point, so why not. If you want the video, it may or may not be there yet; they told me they're trying to finish editing, so there's a placeholder, but you can always go to that link, and that link will always be the right one, even when it redirects you. So, anybody have any questions? Yes... so, um, I don't know, I'd have to double-check on that. It might; yeah, it might just be a detail I missed. So yeah, thanks for coming, feel free to come talk to me. [Applause]
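As a postscript, the GPU pairing described in the affinity section could be sketched like this (the "hardware" node label is made up; the nodeAffinity fields are the standard scheduling API):

```yaml
# GPU workload: HARD affinity, must land on a GPU node.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job                 # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: hardware       # hypothetical node label
            operator: In
            values: ["gpu"]
  containers:
  - name: train
    image: nginx                # placeholder image
---
# Everything else: hard "anti-affinity" to GPU nodes via NotIn,
# which keeps the GPU nodes reserved for GPU work.
apiVersion: v1
kind: Pod
metadata:
  name: ordinary-job            # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: hardware
            operator: NotIn
            values: ["gpu"]
  containers:
  - name: web
    image: nginx                # placeholder image
```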
Info
Channel: CNCF [Cloud Native Computing Foundation]
Views: 140,588
Rating: 4.8924136 out of 5
Id: 90kZRyPcRZw
Length: 33min 15sec (1995 seconds)
Published: Fri Dec 15 2017