Docker on OpenStack with Kubernetes

Captions
I think we're starting. Are we good? We're good to go. Hey, thank you for joining us today. We're going to talk about running Docker containers using Kubernetes on OpenStack; I hope that's the talk you're here for. My name is Craig Peters, I'm a product manager at Mirantis, and I'm really happy to welcome Kit Merker to join me.

Hi there, I'm Kit Merker. I'm a product manager at Google, and I work on Container Engine, Container Registry, and Kubernetes. I'm going to give a very brief introduction to Kubernetes, and then later I'll play around with a Kubernetes cluster running on OpenStack. Before we get started, a quick show of hands: how many people here have deployed Kubernetes in production? We've got a couple, three, nice. How many of you have used Kubernetes at all, built a cluster, played around with it? Nice. How many people here have heard of Kubernetes, before you walked into the room and we said the word? Look at all that. And does anybody here know the origin of the word Kubernetes, where the word comes from? Shout it out. There you go: the ancient Greek word for the helmsman of a ship.

Anyway, like I said, I'm going to give a very brief overview. Before we get into Kubernetes, though, you sort of have to back up and ask: why containers? Containers are the hip new hype in the technology space right now, but they actually provide a lot of benefits to the infrastructure you're running. First of all, performance: you can spin up a container much faster than a VM, so you can tear them up and down quickly. You can deploy containers repeatably: they're sealed from an image, and you can push that image repeatedly to different environments without worrying about installing bits that might fail midway through a deployment. You get isolation: if you have two containers running side by side and one of them is a noisy neighbor, the other one stays isolated.
They can't reach into each other's space or consume each other's resources. You get a consistent quality of service across your environment, because you have container runtimes working in the same repeatable way, and you get an accounting of what's actually running in your environment. If you think about it, you've got all this infrastructure running different applications; you can see everything that's running and exactly what version of what code is running, which gives you visibility into your environment as well.

But one of the most important features of containers is portability: being able to take code you wrote, package it into a container image, and move it between different environments, whether that's an OpenStack environment on premises or a cloud provider like Amazon, Google Cloud Platform, or DigitalOcean. You can move that same code and not worry about what specific machine or what specific infrastructure you're running on; you can just let it run. For enterprises and companies today, being able to move between different infrastructures is really important. Things change: people are migrating to the cloud, and people are taking things that are in the cloud and moving them on premises for performance reasons or security reasons. Having that choice and portability is really important. So running your code in containers is a fundamentally different way of building applications than a bare-metal or VM-only style environment.

Talking a little bit about the spectrum of tools: we think of the Docker project as really the packaging and runtime portion of the container. The Docker format is a great way to run a container: take your code, share a kernel with the host, share the image on Docker Hub, and get code from other people who have already pushed out images.
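The package-once, run-anywhere workflow just described boils down to a few Docker commands. This is a sketch; the image and organization names are placeholders, and it assumes a running Docker daemon and a registry account:

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myorg/content-server:1.0 .

# Run it locally to verify it behaves the same as it will anywhere else
docker run -d -p 8080:80 myorg/content-server:1.0

# Push it to a registry (Docker Hub here) so any environment can pull it
docker push myorg/content-server:1.0
```

The same image can then be pulled unchanged onto a laptop, an OpenStack VM, or a public cloud instance, which is the portability point being made above.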
So that's the Docker piece: they've really solved the image packaging and developer experience of getting a single container running, and you can deploy that onto your laptop, your server, or your cloud, and it runs consistently and repeatably. Kubernetes is the open source project that Google created last year; we're almost at our one-year anniversary. The idea behind it is cluster-oriented orchestration of containers, where you have multiple containers working together that can scale up and down, and you can easily update and deploy without worrying about your infrastructure. You really focus on your code; we focus on the operations of your infrastructure. It's also declaratively managed: you just define what you want, and Kubernetes tries to fulfill that desire. You're not giving it a series of instructions, "do this, do that, run this here, run this there"; you let the scheduler do the work. I'm going to show a little bit of that later. And because Kubernetes is open source and we designed it to run anywhere, it fits not just Google infrastructure but any public cloud, private cloud, on-premises environment, and so on. Google also offers Container Engine, the hosted version of Kubernetes that runs on Google infrastructure. Again, it's a cluster-oriented service that lets you run containers with the full power of Google's infrastructure and Google Cloud, and it's powered by Kubernetes.

And just by way of background, Google has been running containers for many, many years. Every single service Google runs, whether it's Gmail, Search, YouTube, Hangouts, and so on, runs inside containers. We recently shared some of the details of our internal infrastructure, called Borg; Borg is the container management infrastructure that inspired Kubernetes.
The same people who developed Borg, which runs all of these services at scale, also built and designed Kubernetes and are working on it today. They took the concepts from the massive learning Google had to go through to get to the scale we're currently at, and turned them into a streamlined open source project that anybody can run, even for smaller applications. Not everybody here is running Google-sized infrastructure, obviously, but those design principles give you and your customers a lot more power even at a smaller scale, along with all the benefits I talked about earlier. We launch two billion containers a week, which is an impressive number, so we say that a lot. Two billion. I should figure out what the per-decade number is; I think it would be a really big number.

To recap: Kubernetes is the Greek word for helmsman, and also the root of the word "governor". It does container orchestration and runs Docker containers; actually, we recently announced early support for rkt containers as well, because we really want to provide choice: any container runtime the community wants to contribute, we want to make run in Kubernetes. It supports multi-cloud and bare-metal configurations, it's inspired by our internal infrastructure, and it's written in Go. What it really comes down to is that we want you to manage your applications, not the machines; that's where Kubernetes' value comes in.

Let me give just a very brief overview of the concepts in Kubernetes. Many of you may have heard about these; I'm going to try to do it as eloquently as I can, but I'm also pressed for time, so I'll do my best here, and I'll take questions later too. We already talked about the container: that's the single unit of runtime. We also have this concept in Kubernetes called pods. When you have containers that work together very closely, that have shared fate and a shared lifecycle, and that can communicate with each other as if they're on the same machine, you put those containers into a pod.
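A minimal pod like the one just described can be written as a Kubernetes manifest. The names and images here are illustrative, not from the talk's actual demo:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: content-pod
  labels:
    app: content
spec:
  containers:
    # Serves static content over HTTP from a shared volume
    - name: content-server
      image: nginx:1.25
      ports:
        - containerPort: 80
      volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html
    # Sidecar that periodically pulls fresh content into the same volume;
    # the two containers share the pod's network namespace and IP, so they
    # can reach each other over localhost
    - name: content-syncer
      image: myorg/content-syncer:1.0   # hypothetical image
      volumeMounts:
        - name: content
          mountPath: /data
  volumes:
    - name: content
      emptyDir: {}
```

Swapping out either container leaves the other untouched, which is the separation-of-concerns point made in the talk.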
Containers in a pod share the same IP address, they reach each other over localhost, and they work together very closely. A lot of pods have just one container, and that's fine, but it's actually very powerful to have two, three, or four containers when you have reusable pieces or want to do application composition at runtime rather than earlier in your build process. One example we use: you have a content server that's serving static content, and maybe you have a syncer service that goes and grabs that content from some data store somewhere. You put those two containers together in a pod; if either piece of the application changes, you swap out that one container and don't have to rebuild the entire application. You get a nice separation of concerns and the ability to compose the application, while the pieces still run together closely with shared fate.

We have this other concept of a controller, the primary instance being the replication controller, and what a controller does is fulfill that declarative management. You define your desired state; maybe you say, "I want five of these containers running at any given time." The controller keeps checking: are there five? Are there five? If suddenly there are four, because a VM went down or a hard drive was lost and your infrastructure was impacted, it will go find a new place to run one and add it. Or maybe your desired state changed: instead of five you want ten, and it will find resources to spin up ten. There are also interesting corner cases: the VM goes away, new work is spun up, and then the VM comes back, so now there's more running than you wanted. Kubernetes will notice that state as well and spin down containers until you're back at the right number.
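The five-replica desired state from the example above looks roughly like this as a manifest (image name hypothetical):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: content-rc
spec:
  replicas: 5          # desired state: always five running pods
  selector:
    app: content       # the control loop counts pods carrying this label
  template:            # pod template used to create replacements
    metadata:
      labels:
        app: content
    spec:
      containers:
        - name: content-server
          image: myorg/content-server:1.0   # hypothetical image
          ports:
            - containerPort: 80
```

Changing `replicas` from 5 to 10 and re-applying the manifest is all it takes to express the "now I want ten" case; the controller does the rest.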
So the control loop is really about observing the truth, measuring it against the desired state, and then taking action to fulfill that. It takes a huge burden off your back as an application administrator, because you don't have to worry about implementing that yourself; you can just take advantage of it.

We also have this really unique word we invented called a "service", and it has exactly one meaning, of course. In our world, a service is basically a group of pods that you can address through one IP address, one handle. Think of it like this: I have all these different pods, and any one of them could do the work. I don't want to address an individual pod or an individual container; I just want to point at that group of pods over there, that herd. A service lets you do that; it acts as a load balancer in front of a set of pods that can all fulfill your work. So when you build stateless apps, you put them into containers, replicate one container many times, and address them all with a single pointer.

We have two more concepts, labels and selectors, which are very closely related. In microservice-style applications, hierarchy is bad: you want all these loosely coupled services that can talk to each other, each doing one job, working together to create the application. What we have in Kubernetes is this concept of labels, which lets you attach key-value pairs of whatever you want to the things in your app, and Kubernetes uses them internally to find and address different portions of the app. The replication controller, for example, uses labels in its control loop. You might say: this set of pods over here, these are all front-end pods; these over here are all back-end pods; or these are all part of this one application. You can use key-value pairs to label things in whatever way you want.
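A service selecting the labeled pods from the earlier examples might be sketched like this (names illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: content-svc
spec:
  selector:
    app: content       # the query: route to any pod labeled app=content
  ports:
    - port: 80         # the one stable handle in front of the whole herd
      targetPort: 80
```

Clients address `content-svc` and never an individual pod, so pods can come and go underneath without anything else noticing.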
You describe your environment and your application in a way the Kubernetes API can understand and use for addressing. Finally, the selector is basically the query you use against the labels. It's just a way of finding anything in your application by label, which means you don't have to worry about which machine it's running on or anything else; you just issue a query. So would a pod use the selector to find the right services it wants to consume, or would other pods use the selector? Either way, either way; yeah, that's right. All right, that was my brief overview of Kubernetes. It's not that complicated, it's not that hard. I'm going to hand it back to Craig now, and he's going to talk about OpenStack, Murano, and Kubernetes running on OpenStack.

Sure, here we go. I'm going to start by setting the stage for the little demo we're going to do, which actually shows Kubernetes running on OpenStack, and harken back to what Kit had to say about portability. Think about trying to use one of these systems. Here's a simple example: I want to set up a monitoring system using some simple components I can get off the shelf, nice open source tools like Grafana and InfluxDB, but I want to do it in an HA way. I want to wire them up, make sure they're always available, and take advantage of an orchestration engine like Kubernetes to make sure all of these connections are always available to the other parts of the service. That's a pretty common premise, probably for all of us; we all need to do something like this. So we have several choices. One choice is to read all the documentation for these tools, figure out how they should be connected, configure literally thousands of parameters after installing them, and then do lots of testing.
That's the left column on this slide. Another choice is to use somebody who has already packaged this kind of thing as a pre-configured app in a hosted environment, so I'm not even running it locally; I'm outsourcing all that infrastructure, and I can point and click and host it there. That's an awesome solution, but not for every scenario; sometimes you also need this stuff in-house. So what we're going to talk about here is another option: how you can do it on OpenStack using a technology called Murano. What Murano does, essentially, is a lot of the same kinds of things the hosted service providers do in packaging up their applications into a kind of marketplace, but in your on-prem cloud. So you have a much easier time integrating with your existing infrastructure, complying with regulatory requirements, or taking advantage of the flexibility you need in your ongoing infrastructure to serve an application-specific service level agreement.

I want to introduce the notion of Murano a little bit here, because it serves as the glue, the underpinnings, that makes it really easy to run Kubernetes and other orchestration engines, and that will become quite clear. OpenStack, as you know, is designed to run any kind of infrastructure, so it supports all kinds of PaaSes and other kinds of orchestration and containers, and we've been really lucky to collaborate with Google on creating an integration that shows how easily you can run Kubernetes on OpenStack. Murano is a way to do application management in the cloud. It's a way to package things up in a user-visible way and provide repeatability. It provides a list of applications, and it exposes a set of APIs that can then be consumed by automation infrastructure for things like CI/CD.
You can implement really interesting use cases: when tests fail, you can automatically take snapshots, and when the developer comes in in the morning they can recreate that environment and do their debugging in situ, instead of just looking at logs and trying to figure out what happened; you get the real picture of what happened in the cloud. The whole idea here is to give operators a way to create consistency in how their applications are run across tenants, and to have a degree of control. Say, for example, that whenever you deploy a certain application you always want to instrument it for monitoring in a certain way, and you want to automate how that monitoring is used for billing, showback, that kind of thing. Essentially, Murano does this by being an application abstraction. It presents a catalog, it has an application object model that keeps track of application state, and then there are actions, essentially events that occur around applications, which take advantage of that application state. Those are exposed in the UI, or you can consume them from any API endpoint; they just extend the OpenStack APIs. And the way you configure it is essentially a domain-specific language for those kinds of event-driven workflows. If we have time after the demo, we'll spend a little time digging into that; it's a very powerful concept.

So what I want to do is show you how you would provision a Kubernetes cluster, and this is actually kind of awkward; excuse me while I bend over my demo. Not exactly the ideal setup here. One of the things we introduced earlier this week: on Tuesday I was lucky enough to be invited up on stage to launch the app catalog. So what we'll do is go get a Dockerized application from the app catalog and configure it to be deployed in my OpenStack instance here. I can go to my packages and see that I have a bunch of tools.
These tools are already available to users of this tenant, but I want to add to them: I want to go get that Grafana tool I've been hearing about and see how I'd use it. So I go to "import package", find the repository, and browse the list of things I could use. I'll find a Murano package; let's search for Grafana and see if the search works. If this were a Mac I'd know the shortcut, but I'm actually running Oracle Linux here. Shift-Ctrl-plus, there we go: I found "run docker Grafana". Is that readable now? So in the community app catalog I did a search, I found the artifact I want, and I got a description of it. I already know what it is, so I'm going to use it. I can see who created this thing and get in touch with them, and interestingly, I can see what it depends on. In this case it's a Dockerized instance of Grafana, so it depends on either a Docker host or a Kubernetes pod, and obviously that's what I'm interested in showing here; but it also depends on a back-end database, and this packaging says it's going to use InfluxDB. So I'll go ahead and copy the package name into Horizon and paste it here, and assuming all the network setup magic that was happening while you came in works... it did. Now it has imported a bunch of packages into my environment here, and I'm going to categorize them. I wish I had a monitoring category by default, so I'm just going to call it "Databases" and create. What it's done is add additional packages to my list of packages. I'm looking at this as an administrator, and I've published them now, so users who come to the catalog see the list of tools available for them to self-deploy. So let's go see what it's like to configure an environment to deploy Grafana.
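The same import can be done outside Horizon with the python-muranoclient CLI. This is a sketch, not the exact commands from the demo; the package URL is a placeholder, and it assumes an OpenStack cloud with Murano installed and your credentials already sourced:

```shell
# Import a package from the community app catalog into this tenant,
# filing it under a category so users can find it
murano package-import \
    --categories Databases \
    http://apps.openstack.org/path/to/docker-grafana.zip   # placeholder URL

# List the packages now published for users of the tenant
murano package-list
```

Check `murano help package-import` on your client version before relying on the exact flags.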
I'm going to add this, and we'll quickly name it; let's call it "Docker Grafana". This is a Docker container, so the Murano packaging knows that a Docker container depends on a container host. Murano has this notion of dependencies, and abstract ways to satisfy those dependencies, and in this case I've got two ways to do that: I can either use the Docker VM host, which in this implementation is a VM that runs the Docker service, or I can run a Kubernetes pod. I'm going to choose the pod and give it a name; I'll just call this my "grafana" pod (you can see in a previous instance I misspelled it). And that Kubernetes pod, as Kit so quickly explained about pods, depends on the Kubernetes service itself to run, so I'm going to create a cluster for the Kubernetes service. I'm actually just going to pick the defaults. The Kubernetes cluster has a pretty sophisticated set of configurations to make sure the declarative state of the cluster is maintained; it essentially maintains its own high-availability infrastructure, it's got minions and things like that, and this packaging implements something called gateways, which provide easy access to the Internet, a place to create a public IP address so you can access that API endpoint. So I'll choose all the defaults there, then choose my flavor, make sure I can make an SSH connection to it, and I'm done with the cluster; now I'll finish the pod. Now I can deploy that. And... there was an error. Oh, that's a problem. Happily, this is a baking show: I'll pull one out of the oven that I baked yesterday afternoon. My server is right here, so maybe it was a little resource-constrained, and I think that's why I couldn't use it. So here I've actually got a cluster already running; let's take a look at that. In this case, there's one thing I went through really quickly.
It's what I had chosen to run. Oh, actually, that's the problem: there was some problem with the InfluxDB dependency, it didn't ask about that, and that was the error that came up. I'm not sure why; we'll find out why. But when I configured this one, I actually chose to have both InfluxDB and Grafana run in the same pod, so they have that shared fate, and I chose to have only one cluster of it because I'm running all of this on my little machine here. So what does that look like? Here I've got a topology that shows what I've got. I've got my Grafana Docker pod running, and this is actually the InfluxDB; I can see now exactly what happened, InfluxDB didn't come down for some reason. They're both dependent on this Kubernetes pod, which depends on the Kubernetes service, which has its various minions and gateways, and those can scale up and down dynamically in this infrastructure.

So what we can do now is actually look at what Kubernetes does for us. Should I drive, since I'm sitting down? Sure, why not. I'll just do a "sudo docker ps". So I've got some containers running here, and one of the things we talked about is that Kubernetes is focused on maintaining the state of these things, right? So let's kill one of these guys. Let me make the screen bigger. How do I make the font bigger on this thing? Ctrl-plus, there we go. Is that readable? Sorry, it's truncating here. Sudo, yes, there we go. The important thing here, though, is the created times; we're going to pay attention to those. You see one of them was created 24 minutes ago, just before the demo.
That's when it came to life. So Kubernetes is watching these containers, and what it's going to do is, well, let me show you. I'm just going to kill one of these and see what happens. Ctrl-Shift-C to copy, then "sudo docker kill" and Ctrl-Shift-V to paste. Okay, so that's the same ID as the one created 24 minutes ago; we're going to kill it, and then let's take a look at what we've got. It's not there; it's dead. I killed it. Oh wait, there it is, there it is. Okay, that was a little slow; that's demos for you, the server's under load. You'll notice it brought it right back to life. So Kubernetes is watching. I want to do this again; it was nearly instantaneous last time. Let's kill another one; I haven't killed the other one, and it's probably confused because I killed the same one before. Ready? Oh, there it is, one second ago. I had one other thing to show, the resize, but we're not going to do it; we messed up the network earlier, so the Kubernetes controllers had a bit of a networking failure.

So let's just take a minute to think about what it is we've seen. We have seen Kubernetes, which is a really sophisticated orchestration engine, running on top of IaaS, running on my stupid laptop simulating a cluster of machines. Essentially, what we've got inside OpenStack with the Murano project is a way to build infrastructure that mimics what Google Container Engine does on Google Compute. And I just wanted to give you a little insight into what that means. We talked a little bit about packaging applications, which allows you to have some control over who can do what.
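The self-healing demo above can be reproduced on any node whose containers are managed by Kubernetes. This is a sketch of the command sequence, assuming Docker CLI access on the node; the container ID is a placeholder:

```shell
# List running containers and note the CREATED column
sudo docker ps

# Kill one container by ID (taken from the listing above)
sudo docker kill 3f2a9c1d7e4b   # placeholder ID

# List again after a few seconds: the node agent notices the dead
# container and starts a replacement to restore the pod's desired
# state, so a new container appears with a fresh CREATED time
sleep 5 && sudo docker ps
```

The restart is driven by the reconciliation loop described earlier; nothing on the node needs to be told what happened.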
As a user you can create a package, and there are a whole bunch of packages now, as you saw, published up on apps.openstack.org, and we invite you all to contribute more. That's actually where I want to go with the rest of the talk: how we as a community can build a body of best practices around doing this kind of container management on all kinds of infrastructure, so we can make this portability a reality; Murano is just a tiny little piece of glue that makes these layers work really well together. From a user experience standpoint, I can do this drag-and-drop configuration of very sophisticated cluster infrastructures, and that works because of a little magic that happens behind the scenes. This is some of the MuranoPL; I've got the link here, it's out on GitHub, and you'll be able to see how it works.

I just want to walk really quickly through the thing that actually goes and does all this work to create a pod. It has two major parts. First there's the setup: there's a name, obviously, it's called createPod; it takes some arguments, and the arguments are a contract. It's got a version, for example, saying what kind of data we're dealing with, and metadata; that's the context passed in, and then it declares a few things about that context, so you've now got information about what's going on. That's what comes in. The next part says: let's check the state of the system, let's see whether it looks like what Murano thinks the system should look like, because someone could have gone and run commands against that cluster that Murano wouldn't have known about. So there's this "$.deploy()": it runs the deploy method, which is a separate method.
The deploy method has its own logic: it checks whether the cluster is in the same state Murano thinks it's in, because the binding isn't tight, it's loose. Once that's done, it knows the current state, and that's loaded into the environment. "Resources" is then a set of associated things that are part of this package, and this step just loads them in. For Kubernetes, for example, there's a whole bunch of scripts that come with the package to run all those kubectl commands you'd otherwise have to run yourself; it automates that for the user, or for whoever is calling the API. Then finally it does what is basically template search-and-replace: those scripts have variables in them, and you've got all this context about the current application state and the data that was passed in, so it substitutes the values and then executes the scripts. This is just one example; in this case it's just calling shell scripts to do work on the cluster. Murano is a very powerful infrastructure for doing all kinds of things in OpenStack: it leverages Heat, so you can have Heat templates, dynamically create networks, all kinds of stuff, and that's also done as part of the package that implements Kubernetes. So that's just a little peek in there. There's a lot more to learn, of course, but I wanted to give you a feeling for how straightforward it really is.

I want to take just a minute here to talk about future things. We have five minutes left, so we're wrapping up with the last two slides. One of the things we want to do is invite everybody to help us keep up with everything happening in Kubernetes. It's a really fast-moving project, on the same order as how fast OpenStack itself is moving, and that's a challenge. Our packages are open source, and we want to invite all of you to learn about them and help contribute to them.
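The createPod flow just described might look roughly like the following MuranoPL fragment. This is a simplified sketch, not the actual package source; the contract fields, template name, and agent call are illustrative:

```yaml
Methods:
  createPod:
    Arguments:
      # The contract: callers must pass a definition with these fields
      - definition:
          Contract:
            name: $.string().notNull()
            replicas: $.int()
    Body:
      # Reconcile Murano's view of the cluster with reality first
      - $.deploy()
      # Load the scripts shipped alongside this package
      - $resources: new(sys:Resources)
      # Template search-and-replace: bind runtime context into the script...
      - $script: $resources.string('createPod.sh').replace(dict(
            '%POD_NAME%' => $definition.name,
            '%REPLICAS%' => str($definition.replicas)))
      # ...then execute it on the cluster master via the Murano agent
      - $.masterNode.instance.agent.callRaw($script, $resources)
```

The real package on GitHub is considerably richer (error handling, Heat integration, gateway wiring), but this is the shape of the setup-then-reconcile-then-execute pattern the talk walks through.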
Some of the things the packages lack right now: really robust error handling. It's a first version, kind of a preview; I mean, Kubernetes is still in preview, and this package is in the same state. One of the things Kit talked about was services, and I think it would be really interesting, because if you think of Murano as an application catalog, it's a registry of services that are available to users of the cloud, and Kubernetes represents services that are available as microservices; cross-registration of those things, and understanding how that should work, is an interesting problem. How do you handle autoscaling of clusters from external events related to OpenStack? How do you deal with multi-tenancy and multiple regions? How do you deal with alternative overlay networks? And to me the most interesting thing, we did some experimentation and wanted to show it, but it wasn't quite ready to share yet: how do you, at the push of a button, say "export this whole configuration and application" and then run it on Google Container Engine? I think there's a lot of great work to be done in these areas to make this awesome for all of us, so I really want to invite all of you to participate in doing that with us. I've got some links here for the obvious next steps: how you can do this yourself in your own labs, and how you can contribute. I hope these links are useful. Do you have anything more to share before we ask for questions?

Yeah, I think we can take questions, if there are questions. Someone asked whether there is an open source web UI: yes, if you go to the Kubernetes project, it's in the "www" folder, I think; yeah, right there. The next question was how often the API changes, and it's a great one. While we've been pre-1.0 it has changed pretty rapidly, but we're putting in place a governance model and a deprecation policy; I believe it's either a one-year or 18-month deprecation policy for every feature of Google Cloud Platform.
So we're getting much more serious about making things reliable for the long term, so you can take good bets on the technology. But yeah, it's a great question. Over here: does one pod go to one host? Yes, one pod goes to one host. Anyone else? Right here; go ahead and talk into the microphone, since you're right there. The question: OpenStack has so far been IaaS only, and with this it's going to the PaaS layer as well, so is there going to be more investment there, with no more separation between IaaS and PaaS, and OpenStack being all about Murano? Well, it's not really a question about OpenStack, it's about Murano. Murano is a layer that facilitates integration between OpenStack and other kinds of services; it is not meant, in and of itself, to be a PaaS or a container orchestration engine. It's there to facilitate other tools doing that. Other questions? Yeah, go ahead. On what this demo was running: that was actually a preview of Mirantis OpenStack 6.1, which is still based on Juno; it will be out in a couple of weeks. At the microphone: Magnum can also deploy Kubernetes, so what's the difference between Magnum and Murano? There are lots of differences between Magnum and Murano; that's actually a much more complicated subject than we can cover in one question, so you can reach out to me afterwards and we can chat if you want. I don't have all the answers. In short, thank you. And why didn't Google just open-source Borg? There is a really simple answer: Borg has lots and lots of features, and if you read the paper you'll see there's lots of stuff in it that wouldn't make sense in the context of an open source project. To keep it clean and easy for people to contribute to, we wrote Kubernetes in Go and simplified and streamlined a lot of the concepts.
It was just easier to start from scratch. There were also likely licensing and legal issues and things like that, but it's the same people who built Borg; never mind that it takes six months to learn how to use Borg. Anyway, any other questions? Great, thank you all. Thanks so much.
Info
Channel: Open Infrastructure Foundation
Views: 13,421
Keywords: Craig Peters, Kit Merker, Cloud Applications, OpenStack Vancouver 2015 Summit Session
Id: ao8UAShNBW8
Length: 35min 4sec (2104 seconds)
Published: Thu May 21 2015