Docker & Google Container Engine - Webinar

Captions
Hello everyone, and welcome to this webinar on Docker and Container Engine. I'm Ben Lambert, a cloud technology instructor for Cloud Academy, and you can reach me at @sowhelmed on Twitter should you want to reach out. Today's agenda: we're going to talk about Docker and Container Engine. For Docker we'll cover what it is and why it's useful, we'll create a container image, and then we'll create multiple containers. Docker is in effect useless unless we can get our containers into production, so after that we'll talk about Container Engine: what it is, why it's useful, and how to get started using it.

So we'll start with the first question: what is Docker? Docker is a containerization technology. Containers allow us to wrap up any code we write, along with any system libraries, into one package, and we call that package a container. This container will run the same way everywhere we run it, because it has all of the files it needs contained inside. From Docker's website: Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries, anything that can be installed on a server.

A lot of people ask how containers are different from virtual machines. Virtual machines run on top of a hypervisor, and the hypervisor emulates a computer's hardware, so every virtual machine has to include a full operating system, because it's going to run on that emulated hardware, and that can take up a lot of space. Containers, on the other hand, share the kernel of the host operating system they run on. That makes them smaller, because we no longer need to ship an entire operating system, and it makes them more efficient, because they don't need to run all of the extra things an operating system runs. So containers become a much more efficient mechanism for bundling things up. Here it is side by side: you'll notice there's a layer missing. We don't have the guest operating system. It looks similar, but it's different. Docker bundles up all the files we need; it has all of the binaries and libraries that make up the personality of the operating system, and instead of running on a hypervisor, which emulates hardware, we run on the Docker Engine, which shares the kernel of the underlying OS.

So, won't building the containers take a while, since they need to include all of those files? The answer is yes and no, but more no than yes. Let me explain. Docker uses this concept of image layers, and it's a really cool way to do things. When you use a Dockerfile (and we're going to talk about Dockerfiles later), every command basically gets turned into an image layer, so any files that get installed by that command become a layer. If we read this top to bottom, we have FROM ubuntu:latest, and that gets turned into what is basically a layer comprising all of the files needed to be Ubuntu. Then the next command runs and installs everything needed for Python, so the next layer is all of the files that comprise Python, and it sits on top of the Ubuntu image. Then we come down here and we have apt-get install apache2, so again we create an image layer, and this one is our Apache layer. When you start stacking these on top of each other, any files that change between them are accumulated.
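For reference, the layering idea being described looks roughly like this as a Dockerfile; the exact package names here are illustrative rather than the file from the demo:

```dockerfile
# Layer 1: the Ubuntu base image
FROM ubuntu:latest

# Layer 2: the files installed for Python become their own layer
RUN apt-get update && apt-get install -y python

# Layer 3: the files added by installing Apache form another layer
RUN apt-get install -y apache2
```

If only the Apache instruction changes, the Ubuntu and Python layers can be reused from cache and only the last layer is rebuilt.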
This allows us to build on top of it, and if a layer doesn't change (the Ubuntu layer, for example, is only going to change if the underlying Ubuntu image changes), we don't need to rebuild it every time. So layers make containers fast to build: we don't rebuild the layers that haven't changed, we basically only rebuild the code layers each time, and that makes it really quick. Because of that, sharing these containers becomes faster too, because we only need to share the layers that have changed. Let me give an example. Say we're developing a website, we change some files in our application, we build our container, and we push it to a repo, say the official Docker repo. Then some other developer wants to do some work. The first time they download this container it's going to take a little while, because they're downloading all of the image layers, but subsequent downloads are going to be faster: once they have the base layers they don't need to download them every time, they only download what has changed. That makes sharing images, or sharing containers, much faster.

All right, why use containers? Containers are a very hot topic right now, and people are trying to containerize all the things. Not everything needs to be put into a container, so what's a good use case? In my opinion, containers are a great way to implement a standardized environment. It really doesn't matter which tools and technologies the developers select; as long as we wrap the final project in a container, we can deploy it the same way. Here's an example: let's say we have a Python web application and everything we do is in Python, we're a Python development shop. At some point we decide we'd like to switch to Go for a particular thing. We don't have to rethink how we're going to deploy that Go process. If we have a process in place to deploy containers, all we have to do is make sure our Go applications are wrapped in a container, and they're deployed in exactly the same way. We get to standardize on our deployment mechanism, and there's a lot of value in that.

OK, I think we've talked about it enough, so let's see this in action. I just want to say that Docker, and containers in general, are a broad topic. There's an awful lot that could be said about containers, so we're not going to cover everything; we just don't have the time to hit all of the topics. But we'll do a nice broad pass at creating images and using containers with Docker, we'll talk about how to actually get some containers into production, and then future courses that are coming out will fill in the gaps later.

OK, so let's close down PowerPoint, open our text editor, and get a shell. Here's what I want to do: I want to show you that I'm not cheating, so we're going to go to localhost:5000. "Site can't be reached": good, that's what I wanted to show you; there's nothing running on our local host on port 5000. Now I want to go back, and what I want to do is build this, and we're going to tag it v1. I'll go back and explain what's actually happening with these commands; we're going to start at the final result and then work our way backwards, so I just want to build this real quick. OK, and now run it. What actually happened? It spit out a giant hash and returned us to the console, so it doesn't look like anything happened.
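For reference, the build and run steps just shown look roughly like this; the image name, tag, and port follow the demo, but treat the exact invocation as a sketch rather than a copy of the terminal:

```sh
# Build an image from the Dockerfile in the current directory and tag it webinar-flask:v1
docker build -t webinar-flask:v1 .

# Run it detached (-d) and publish container port 5000 on host port 5000 (-p)
docker run -d -p 5000:5000 webinar-flask:v1
```

The long hash that gets printed is the ID of the newly started container.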
But if we go back and reload, and this might take a little bit the first time because it's going to initialize some things, the first load can take a little longer... OK. So what did we actually do? We started up a container that's running an application. Let's see what's going on with the actual application. What we have is a basic Flask application; there's nothing too fancy about it. We expose a route here for the index page, we pass in some hard-coded data, we have a couple of other routes we'll talk about later, and then we run it with the development server. This is not how we'd run it in production, and it's by no means an example of a production app. Just to be clear, in production we'd have it behind a WSGI server and then expose that through something like nginx, but it works as a demo.

All right, we built our container with the build command, and then we ran it. What we did here is say docker build, and we tag it with this -t flag as webinar-flask, and then this colon indicates the version, so we said :v1. This period here says look in the current directory for the Dockerfile, and the Dockerfile is what's responsible for telling Docker how to build our image. So let's check out the Dockerfile. We start, as we said, with FROM ubuntu:latest. Where does that come from, since we never referenced Ubuntu anywhere else? Docker has a registry where it keeps base images, and we can see that if we go over to Docker Hub and explore. We used the Ubuntu image, the official Ubuntu repository, and this is what's referred to as a trusted repo. Trusted is contextual: it doesn't mean these are all going to be secure, but it does mean they're community images anyone can use. So we said FROM ubuntu:latest, and if we go in here you can see exactly what that means: we basically get an Ubuntu server. We can look at the tags; latest is what we installed, but we could have installed any one of these, 14.04, 12.04, the long-term support releases, any version we want to use. This is a really nice way to build a container on a base such as Ubuntu or Debian or whatever the case may be.

If we go back you'll see we have a lot of options, and it's not just operating systems. We have things like this Redis image, which comes with Redis pre-loaded: it uses some other operating system as a base and then installs its own files. That's where image layers come in handy, because we could take, say, Ubuntu, install our own stuff on top, give that a name, and push it out to the community, and the community could use our container as a base for their own work. That's what we're seeing here with all these different repositories, so we have a lot of options.

Let's go back and figure out what else we have. We say FROM ubuntu, meaning we want this to basically be an Ubuntu container, and then we start running commands to install the files we need. Now, we could have used the Python container as a base, but I wanted you to understand this holistically, so I wanted to run these commands and create these layers ourselves.
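Before going further into the Dockerfile, here's a minimal sketch of the kind of Flask app being described; the structure is assumed from the description and is not the exact file from the webinar:

```python
# app.py - assumed shape of the demo application
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # The real demo renders an index.html template with some hard-coded data;
    # a plain string keeps this sketch self-contained.
    return 'Hello from the webinar demo'

if __name__ == '__main__':
    # Flask's development server only; in production this would sit behind a
    # WSGI server fronted by something like nginx.
    app.run(host='0.0.0.0', port=5000)
```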
When we run this apt-get update, it updates the apt package index, and once that command runs it creates an image layer containing all the files that got updated. Since that isn't going to change all that frequently, those files won't change very often. We do it again here: we install Python, and that becomes our Python image layer. Then we want to copy our code into the image, so we say copy the app directory (I don't know if you can see my mouse wiggling here) into the /app folder in the container. Then we set our working directory to /app, which means all subsequent commands take place in that directory; it's like doing a cd, a change of directory. Then we install our requirements. We're running a Python app, so we're using pip to install the package requirements. If you're familiar with package managers, pip works the same way: the requirements file has a listing of all the modules we want to install and their versions, and we're telling pip to install everything from that file.

Now, we also have an entry point. By default the entry point is a shell (not a bash shell, just a shell), and that means anything that gets run runs in the context of a shell. We could omit this, but then we would have to specify python and our app, to say run the python command and pass it app.py as a parameter. Instead we override the entry point: I'm saying use the python command as our entry point, so anything passed in as the command is expected to be something we can hand to the Python executable as a parameter. So this is saying run python, and then app.py, which is our Flask application.

So when we ran that simple little docker build, it did an awful lot: it went out and built all of these different layers. Now let's run it again. Notice that it went really quickly this time, and you see this "Using cache". Those layers didn't change, so it didn't need to rebuild them; it just went through and said yep, same hash, same hash, and used the cached versions. All of the subsequent builds become faster, because eventually the only thing changing is our code, and we'll be able to build really quickly.

Now, we also ran with this command (let's see if we can make this a little bit bigger). We ran with -d, which means detached, so it runs in the background, and we used lowercase -p to bind the local port to our container port. It says we want whatever is running in the container on port 5000 to be exposed on the host on port 5000, and that's why we're able to see it in the web browser on our host on port 5000. Then we tell it which container image we want to run, and specifically which version.

Now this is cool. Let's do docker images, and we'll shrink this up; we're going to be tweaking the terminal size as we go for each command. Here's our v1: an image that was created eight minutes ago, with a tag of v1, and it's our webinar-flask. We could also do docker ps; it's a little small, hopefully you can read it.
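Putting that walkthrough together, the Dockerfile being described looks roughly like this; the exact package names and file layout are assumptions, but the instruction sequence matches the explanation above:

```dockerfile
FROM ubuntu:latest

# Each instruction below produces its own cacheable image layer
RUN apt-get update
RUN apt-get install -y python python-pip

# Copy the application code into the image and work from that directory
COPY app /app
WORKDIR /app

# Install the Python dependencies listed in requirements.txt
RUN pip install -r requirements.txt

# Override the default shell entry point so the command is handed to python
ENTRYPOINT ["python"]
CMD ["app.py"]
```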
What that docker ps output is saying is that we have one container process running. It's built on our webinar-flask:v1 image, it's running the command python app.py, and it shows the status, when it was created, how long it's been running, which host ports are mapped to the container, and a name. Now if we go back to our web browser and reload, it loads. Let's kill that container; I'm going to do docker kill (rest assured, this is a perfectly humane way to kill a container). Notice there are now no containers running. We still have a Docker image built; images are pre-built containers, but we're not running any of them. The docker ps command shows the container processes that are running, while docker images shows the containers that have been built. So now if we reload the page, we get what we'd expect: there's no container listening anymore. We can go back to our run command and re-run this container, because we've already built it and nothing has changed, and if we go back to the browser, there it is. So we can build a container image, start containers from it, stop them whenever we want, kill them, and always start a new instance based on the image we've created.

OK, this is all well and good; we have a container, which is useful. However, as developers, inevitably we're going to work on some sort of microservice that needs multiple containers, because we'll need something like a database, or maybe a separate service that feeds into ours. So how do we deal with multiple containers? First let's kill this with a handy command; I'm going to copy and paste, because why not. What it does is run a subshell that lists the IDs of all running containers and passes them into docker stop, so it stops everything that's running.

Now, like I said, inevitably we're going to have multiple containers, so how do we deal with that? Let's do docker-compose up web. Now we have some output, and notice what it says: creating container ca_webinar_db_1, and ca_webinar_web_1. We told it to create web, so where did this db come from? We'll get to that in a minute. It started up our web container, but it also started up a separate database container (and if we close out of here it's going to shut down the web one), so we need to see why. Let's look at our docker-compose file. Docker Compose lets us create YAML files that specify multiple containers and how we want to handle them. We start off by setting the version of the file format, so that Docker Compose knows what kind of attributes and properties to expect, and then we specify the services we want to create. We say we want a db, which will be the service name, and it's going to be based on the redis image. Where do we get Redis? From right here on Docker Hub: we specify the name, and if we don't have the image locally, it's fetched for us, which is cool. Then we expose the default Redis port, which is 6379. Next we want a service for our Flask application. We say build, and we pass in a period, which just means use the Dockerfile in the current directory to build it; it's the same Dockerfile we used when we created the image before. The command is going to be app.py, then we specify volumes (we mount ./app to /app), we map port 5000 in the container to port 5000 on our host, and then we say depends_on. This is really cool: any time we start up a web instance, we're saying it needs to have a database, and that ensures we always have the dependencies we need.
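A docker-compose.yml along the lines just described would look roughly like this; the service layout follows the walkthrough, while the format version and exact paths are assumptions:

```yaml
version: '2'
services:
  db:
    image: redis
    expose:
      - "6379"         # default Redis port
  web:
    build: .            # build from the Dockerfile in the current directory
    command: app.py     # handed to the python ENTRYPOINT defined in the Dockerfile
    volumes:
      - ./app:/app
    ports:
      - "5000:5000"     # host:container
    depends_on:
      - db              # starting web always brings the database up too
```

Running `docker-compose up web` with a file like this is what produced both the web and db containers above.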
So when we started it up, Compose created the database for us. If we do docker ps, we can see that it started the database container and it's still running. Let's do a docker stop on everything (copy and paste again), and if we restart our web service, it starts them both. Now reload: awesome. This becomes a really handy piece of functionality; like I said, inevitably as developers we're going to have to run multiple containers, and Docker Compose makes that nice and easy.

Let's just show that Redis works. We have a route here, something like /redis/random/test, that passes in a key and a value. Let's use greeting as the key name and hello as the value. Give that a second to run (sometimes on these webinars live code doesn't go as expected), and when it's done it sets that value: greeting, hello. Reload to see if it's happy; it's happy. Now we'll list off the values, and there it is: greeting hello, random test. Let's look at the code that populates that, so you know there's no voodoo going on. In our app.py we have this route: we pass in a key and a value, it instantiates a Redis client, sets the value, and returns it. Then we have this get-all route that just iterates through all the keys. Notice the host it uses up here: we check for an environment variable, and if it doesn't exist we use db as the hostname. Where does db come from? It's the service name, which becomes the hostname for our database, just as web is the hostname for our web server. The host entries get set up by Docker Compose automatically so we can reference our services by their names, and that makes it really convenient in code.
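A sketch of what those Redis routes probably look like in app.py; the route paths, the environment variable name, and the helper are assumptions based on the description rather than the exact code:

```python
# Assumed sketch of the Redis routes described above
import os

import redis
from flask import Flask

app = Flask(__name__)

# Fall back to the 'db' hostname that Docker Compose provides for the database
# service; the environment variable name here is an assumption.
REDIS_HOST = os.environ.get('REDIS_HOST', 'db')

def redis_client():
    return redis.StrictRedis(host=REDIS_HOST, port=6379, decode_responses=True)

@app.route('/redis/<key>/<value>')
def set_value(key, value):
    r = redis_client()
    r.set(key, value)
    return '{}: {}'.format(key, value)

@app.route('/redis')
def get_all():
    r = redis_client()
    return ', '.join('{}: {}'.format(k, r.get(k)) for k in r.keys())
```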
All right, we've talked about using a single container image and multiple container images, but basically all we've done is develop some code locally. That's fun, but it's not useful in a production environment. Containers, fun as they may be, are practically worthless unless we can get them running in production. The point is to have our environments be the same in development as they're going to be in staging and production; we want them to be exactly the same, so getting a container into production is key. When it comes to that, we have some options. We can use something like Mesos, an Apache project, with Mesosphere on top of it. We can use Docker Swarm; Swarm is still in its infancy, it's young, but it's definitely getting there. My familiarity and preference is for Kubernetes, so I naturally gravitate towards Container Engine, which lets us use Kubernetes, an open source orchestration tool, to deploy our containers on Google Cloud. And we can also deploy them anywhere Kubernetes will run, which is basically everywhere. There are a lot of demos out there that will show you how to get a single container into any old production environment, but let's do multiple containers, make a change, and then actually update those containers, because I think that's valuable to see. If you can't get the containers into production and you can't update them, all of this is for nothing; it's just a fun exercise. So let's try this out.

We have Container Engine here, and I've already created a container cluster. We can set our cluster size, and in this case we have three servers running; they're going to serve as the cluster of servers that will house our containers. The way Kubernetes refers to a grouping of one or more containers is a pod, and a pod will have some number of containers that represent a particular application. In our case that's our Redis and our Flask containers, and then we have a service, which exposes those pods to us and allows us to browse to them. I also want to show the Container Registry. Remember we looked at the official Docker registry; you can push things there, but you can also use something like this, the private container registry for Google Cloud. It lets us store our container images without having to worry about other people seeing or using them, so if we have containers and apps we want to keep private, we can use this.

All right, let's see a demo. I want to switch back to the command line. We're going to close out of this, kill our running containers, and remove those images. So now if we do docker ps there's nothing running, and with -a you can see there's nothing there either. I want to make sure we start fresh so we can see how everything works. Like I said, we're going to use Kubernetes, and I've already done some of the setup. If you were going to set this up yourself on Google Cloud, follow the instructions for getting things like the Google Cloud SDK installed, which gives you access to the gcloud commands and the Kubernetes commands.

OK, so we need an image. We're going to use v8; it's already built and already pushed. We'll do another example later where we push one to the Container Registry, but we'll use this for now. Now, before we can get started we need to create some things: we need a replication controller, and then we need a service to expose our pods to the world. Let's create the replication controller: kubectl create -f with our replication controller YAML file. All right, it says the replication controller was created.
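For reference, the cleanup and creation steps just run look roughly like this; the image tag and the rc.yaml file name are stand-ins for whatever the demo actually used:

```sh
# Start fresh locally: stop and remove any running containers, then remove old demo images
docker stop $(docker ps -q)
docker rm $(docker ps -a -q)
docker rmi webinar-flask:v1    # example tag from earlier; repeat for other local images

# Create the replication controller in the cluster from its YAML definition
kubectl create -f rc.yaml      # rc.yaml stands in for the demo's RC file name
```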
So what did we just do, and why? Let's look at the replication controller file. It's another YAML file, and it specifies a replication controller that will start up some pods. I want you to think of this like an auto scaling group, if you're familiar with that terminology from something like AWS or Google Cloud: it's going to ensure there are always three replicas of our pods. We have an apiVersion of v1, a kind of ReplicationController, and we name the replication controller webinar-flask-rc. We want three replicas, and this selector here says: I need to know how to find the pods that I'll be controlling, so we're looking for webinar-flask, version v8, a name and a version. Then here we have our pod template, and that's what these match up with: this name here has to match this name here, and this version has to match this version. The spec for these pods, in this template, says we're going to have some containers. One container is going to be db, it uses the redis image, and it exposes container port 6379. Then we have our web server, which uses the image we have stored in the Google Cloud registry, the v8 version, and again exposes its port. It should look very familiar; it's very similar to what we did in Docker Compose. There we called them services, with our db and our web, and here in the replication controller we have a template for our pods. Remember, a pod is just a grouping of one or more containers. So this replication controller will ensure we always have three pods up and running.

Now let's go back and say kubectl get rc, for replication controller: we desired three, we currently have three. You're going to get used to these kubectl commands eventually; I call it "cube control", which may not be the official pronunciation, but it kind of flows. kubectl has some subcommands, and once you get used to them the pattern stays the same: you use get for different things, so we can do get pods, get services (we don't have any yet, other than the one left over from a previous example), and so on. The same is true for delete: we're not going to delete anything, but you'd say delete and then the replication controller and its name, or delete some service. So the pattern is familiar: kubectl get, set, delete, describe. describe is useful: we can do describe rc and give it our name, and it gives us details about this replication controller, which helps us debug and find out what's going on. That's a really useful thing.

OK, we have a replication controller, which is like an auto scaling group, and it auto-created pods for us; those pods are where our containers live. Now we need a service. Let's create it with the same syntax: kubectl create -f with the service YAML file. All right, it created the service. What does that service file look like? We have an apiVersion, we have a kind, this time Service, and we have our metadata, which names the service. Now here under the spec, check it out: type LoadBalancer. This is important. What it does is expose our running pods to the world with the following information: it says look for port 5000 on the container and map it to port 80 on the load balancer. That lets us say that whatever port our web server is running on inside the container gets presented to the world on the port we choose, in this case 80. Now this is another important part: the selector says name webinar-flask. What is that actually looking for? Go back to our RC file and notice that our template has a name, webinar-flask. That's what the pods are going to be named, so we're looking for any pods named webinar-flask, and those are what we expose to the world. All right, so we have a service. Now we can use kubectl get services, which just lists them all, and notice we have an external IP address.
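Pieced together from that walkthrough, the two files look roughly like this; the names and the registry path follow the demo as best they can be reconstructed, and details like the label keys are assumptions:

```yaml
# rc.yaml - replication controller (sketch)
apiVersion: v1
kind: ReplicationController
metadata:
  name: webinar-flask-rc
spec:
  replicas: 3
  selector:
    name: webinar-flask
    version: v8
  template:
    metadata:
      labels:
        name: webinar-flask
        version: v8
    spec:
      containers:
        - name: db
          image: redis
          ports:
            - containerPort: 6379
        - name: web
          image: gcr.io/ca-webinar/webinar-flask:v8
          ports:
            - containerPort: 5000
---
# service.yaml - load balancer service (sketch)
apiVersion: v1
kind: Service
metadata:
  name: webinar-flask
spec:
  type: LoadBalancer
  ports:
    - port: 80          # exposed by the load balancer
      targetPort: 5000  # port the Flask app listens on inside the pod
  selector:
    name: webinar-flask
```

In the demo these live in two separate files; they're shown together here only for readability.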
It took about a minute for that external IP to be created; the typical range is somewhere between 45 seconds and a minute after you create a service. So now, in theory, we copy this IP, and there we go: we have an application running, and it's running here in Container Engine. Let's check this out: here's our cluster, and we have some information about it. We're using Container Engine to spin up some instances for us, and then we're using those instances to house our containers in pods (that's the Kubernetes terminology). We have three sets of pods running to form this, and that let us set up a container and deploy it to the world relatively quickly, as you saw.

So we're running version 8. What happens if we want to run version 9? Notice here, and I want to make note of it so you can see it after it changes: underneath the logo there's no text except for these three links. Once we deploy version 9 it's going to say something there, and I'll let you see what it says, but the words will change. So let's go back. How do we actually deploy our new version? We use the kubectl rolling-update command. I've got it pasted here because it's a bit of a long command and I don't want to make you watch me type it. So we have kubectl rolling-update webinar-flask-rc. Let's not run this command just yet; first let's do get rc. That name there is the name of our replication controller, and we're basically going to replace it. So let's paste the command back in: we're saying rolling-update, look for the webinar-flask replication controller, and then replace it with whatever is in this replication controller file. Let's check which file with ls... ah, I have the v8 file in there, but we're already running v8; my apologies, it should be the v9 file. There we go, let's give that a go.

OK, it created a new replication controller, and it's scaling up v9 from 0 to 3: we want three replicas, and right now there are zero. Then it's going to scale down the existing replication controller from 3 to 0, so it inverts the two. Let's go into the file so you can actually see what's happening and hopefully it starts to make sense. This is a replication controller that looks just like the one before, with a couple of exceptions. We've changed the version that we want to deploy: these were all v8, the original version, and now the new replication controller has the same name with a v9 tag on the end, we want to find the version 9 template for the pods, and that template uses the v9 image. So this upgrades us from v8 to v9, and remember, those images are already stored in the Container Registry; if we go here, there's v8, v9, and all these previous versions from playing around, but these are the two we care about.

OK, let's look at the progress. v9 is up to one replica, and the original RC is down to two, and it's going to keep staggering them. Now if we reload the actual site, depending on which pod we hit through the load balancer, we may or may not see a change in the text underneath the logo. Let's try it... there it is. It says V2, which is a little confusing: why does it say V2 when technically we deployed v9? It's because this is an application I repurposed from a different project, and the application itself was at version 2; that's why it says V2.
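The rolling update being run here is roughly this; the controller and file names are reconstructions of what's shown in the demo (kubectl rolling-update against replication controllers was the mechanism at the time of this webinar):

```sh
# Check the name of the current replication controller
kubectl get rc

# Replace the running webinar-flask-rc with the definition in the v9 RC file,
# scaling the new pods up and the old pods down a few at a time
kubectl rolling-update webinar-flask-rc -f rc-v9.yaml
```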
But we can change that text in just a minute and then deploy again, so we can see our new version. If we reload, notice we're back to nothing under the logo, and that's again because the load balancer is bouncing between the different versions. Do it again, and it's gone again. OK, the images in this app are exceptionally large, so sometimes it's a bit slow. Like I said, I repurposed this app and cut it together so it would work for this demo, but these images are not the thumbnails the real app uses, so they're exceptionally large. OK, it's still staggering the two. I don't know if you can hear it, but my laptop is sounding like it's about to take off, so if it flies off into outer space you'll know why. All right, we're almost there: we're up to three replicas for the new replication controller, and we just have to get rid of one more from the original. You can see it does take a little while; it's not an immediate cutover, because there are three groupings it has to work through. OK, it looks like it's almost done... all right, cool: the webinar-flask replication controller has been rolling-updated to the v9 RC. Let's go back, and now if we load our app we should never see the previous version; it will always be V2, unless we see a cached version of the web page. V2, V2, V2; all right, it's all V2. That's great: it's rolled out, it's deployed.

This is a really nice and powerful way to deploy multiple containers, and again, those multiple containers come from our RC file. If we go back to it, we're saying we want multiple containers here. We could specify just one if we wanted to, but there are different ways of deploying and updating single containers; it's a little different doing a single container versus multiple. You can do it the same way as multiple, but there's no need, because there's a different mechanism for that.

So let's build one now. We're going to tag the update as v11, because we already have a v10, then we'll build it and deploy it, so you can see how to actually build, push to Container Engine's registry, and then deploy. Let's make a change to our text. Go back into the text editor and into the templates, into the base template, and let's say "Greetings" (I can't spell today) "Greetings from a webinar", and a smiley face, because who's not happy when we get to play with new technology? Cool, so we have a change in our code. What we need to do is build an image that we can deploy to Container Engine. You saw us build an image before with the build command, and this is going to be similar; the difference is we're going to call it v11, and you'll notice this gcr.io prefix, which is the Google Container Registry, followed by ca-webinar. Where does this ca-webinar come from? If you want to build this yourself you need to know: it's the project name for the Google Cloud project, in this case ca-webinar, and it has to be there so the registry knows which project to map to. Then it's the name of our container and the version, and again this period just says look in the current folder for our Dockerfile. So we've tagged it, and we build. Notice "Using cache", "Using cache": we already have those layers and there are no changes to them, so it didn't have to do much, but we hadn't yet built an image with these changes, so we had to run a few commands, nothing too major. And now we've built our container.
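The registry-tagged build looks roughly like this; gcr.io/ca-webinar/webinar-flask is my reconstruction of the project and image names used in the demo:

```sh
# Tag the image with the Google Container Registry path for the ca-webinar project
docker build -t gcr.io/ca-webinar/webinar-flask:v11 .
```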
Let's check docker images and shrink this down. Here we have our container image and our version; it has its own image ID, its own hash that represents this particular image, created 11 seconds ago. All right, let me zoom back in so it's easier to see. Now we need to push this to the registry, and to do that, instead of just using the docker commands, we're going to use the gcloud command. gcloud wraps the docker command in this instance, providing the credentials we need to push our image. Those credentials are obtained through the console: let's go back to our clusters and click on this one. If we want to connect, notice it gives us this handy command; to connect to our cluster we need the credentials for that cluster, and this lets us set the project, get the credentials, and all that good stuff. OK, close out of that. All right, where were we? We're going to run the gcloud docker command: we want to push this image, tagged as v11, and send it out into the world. Now notice here, this is what we were talking about earlier: these layers were already built and already pushed, and nothing has changed in them. It uses these hashes to determine whether anything changed, and since nothing has, it's able to push just the layer that changed. Again, this makes it efficient to share our changes and push them out, because we don't need to send out massive images all the time. And in one second it will be done. Let's jump into the Container Registry and drill in here: all right, we have version 11. Now let's go back. OK, there have been some updates to gcloud since earlier; that's fine, we'll deal with that later.

OK, now we want to actually deploy this with a rolling deployment. It's going to be similar to before, except we don't yet have the file needed to deploy; before, the v8 and v9 files were already created. So we need something for our version 11. Let's copy the v9 file, paste it, rename it to the v11 YAML, and now just do a find and replace of v9 with v11. I'll skim through to make sure it has everything we need. All right, this looks good: we map the selector, webinar-flask v11, to the template, webinar-flask v11, and we want the same two containers, only this time the Flask one uses the v11 image. OK, save that. Now let's do kubectl get rc; that's our replication controller, and its name ends in v9. So we need to replace the v9 controller with whatever is in our v11 file. Let me just grab that command from my command-line history (where is it... I'm just looking for the update command so we have it handy). OK, rolling update: let's do kubectl rolling-update, then the name of our current replication controller, then -f and our v11 RC file.
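Roughly, those push and deploy steps look like this; the file name rc-v11.yaml and the registry path are reconstructions, and `gcloud docker push` was the wrapper command at the time of this webinar:

```sh
# Push the tagged image to the project's private registry, letting gcloud supply credentials
gcloud docker push gcr.io/ca-webinar/webinar-flask:v11

# Roll the running controller (currently the v9 one) over to the v11 definition
kubectl rolling-update webinar-flask-rc-v9 -f rc-v11.yaml
```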
OK, now again it's going to take a little while; you saw how long it took before, so it's going to take something like five minutes to swap over, because it staggers the change so that we always have a certain number of pods available to serve users. It does that progressively, and we can just keep checking the site. Eventually it will say something other than V2; it will say what we told it to say, I think "Greetings from a webinar". There it is: we have one of the new pods introduced to the farm, and then we're back to the other, so again it's going to stagger back and forth until we hit one hundred percent deployment, and then it will only serve our latest pods. So this is really useful: we have the ability not only to create a container, create multiple containers, and deploy multiple containers, but also to update them in a way that doesn't cause downtime, and reducing downtime is an important part of operations.

So that pretty much covers it; actually, you know what, let's wait for this to finish, because there are two more things I think would be cool to see. The first is a visualizer for Docker image layers, which I think is a neat thing. Remember we talked about layers and how they work: every command basically creates all of the files needed for that layer, and then the layers get stacked on top of each other, which is what this little icon here signifies. So if we check out, say, the Python image from the Docker registry, give it a second to populate (let me make sure I pick the right one, Python, there we go), and look at this onbuild tag, this is cool: it shows us what all of these layers actually do. It adds these different files, it runs the command /bin/bash, and so on. Each layer does things that add new files or change files on the filesystem, and then they're stacked on top of each other to form the final image that comprises our Docker container. I thought that was really cool and worth sharing, so you can see and understand how Docker image layers work just a bit better.

All right, this is just about done, and then we'll run one more command; hopefully it'll work since I'm already authenticated. I want to show you how to use the Kubernetes user interface, and we use the kubectl proxy command for that. OK, the old controller is down to zero, so the rolling update should be wrapping up any second... there it is. So let's run the proxy and then browse to /ui. This is really cool: we have our replication controllers; you can see this one here, three minutes old, it tells us what images it's comprised of, and there are actions we can take: we can scale, we can view details. It's a nice, handy user interface over what's going on. We have our three pods; remember our replication controller said we want three replicas, and if we go back to that file, we wanted three replicas of our pods, and that's what we got. They're all running, and we can see the memory usage and all that good stuff, which I think is valuable. We could also scale; we're not going to add any more, but you can look at the events to see what's going on. Let's go back to the home page: from here we can see our pods, we can deploy an app, we can upload some YAML. This makes it nice and easy to do some of this from a user interface. We can also do it all from the command line, because that's useful for scripting things out.
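The proxy step used to reach that UI is roughly this; the /ui path is how the dashboard was reached through the proxy at the time of this webinar:

```sh
# Start a local proxy to the cluster's API server (defaults to 127.0.0.1:8001)
kubectl proxy

# Then browse to the dashboard through the proxy, for example:
#   http://localhost:8001/ui
```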
But should we want to, we have this ability through the UI as well, and that's kind of a cool thing. And now all of these pods are deployed, so they should all say "Greetings from a webinar". OK, that's pretty much all I have planned. Thanks everyone for joining; this has been a lot of fun for me, and hopefully it's been useful to you to get a feeling for what Docker is, at least at a high level, and how to get an application into production. We have a course coming out on Docker; it's going to be an introductory course that fills in some of the gaps I've had to skip over to get through this in the time we had, and then we'll be building on that as we build out the Docker learning path, so that will be coming out within maybe a week or two. Look forward to that, and if you want to create a demo account to try this stuff out, you can create a seven-day trial account and play around with some of the courses. OK, thanks everyone, and I look forward to seeing you at the next webinar. Thanks.
Info
Channel: Cloud Academy
Views: 42,809
Keywords: Docker, DevOps, cloud, cloud computing, cloud training, cloud certifications, Google Cloud, GCE
Id: WAPXaDpkytw
Length: 57min 4sec (3424 seconds)
Published: Fri Aug 26 2016