Containers for Beginners

Captions
Welcome, everybody, to Containers for Beginners. I'll be your MC for this talk. My name is David Butler; I'm in product marketing at Docker, but it's a great pleasure to be able to introduce Michael Irwin from Virginia Tech. Michael is an application architect, so he's pretty familiar with containers and the IT organization of Virginia Tech, but he's also a Docker Captain, he's also a Docker community leader, and I just told him he's also been named professor of containers. So lots to learn here. Welcome, Michael.

Thanks, David, and good morning, everyone. I fully recognize that I stand between you and lunch, and I don't take that responsibility lightly, so hopefully we'll have a good time together. I'm excited to be here. Just curious, do we have any Hokie alumni here? All right, there's a couple in the back, awesome. That's a Virginia Tech alum.

A couple of disclaimers. First off, this talk is going to be a technical 101, getting-started talk, so we're diving into what containers are, what images are, how we should think about them, etc. And disclaimer number two, which I added since the keynote: I can't explain "sprinkle pods" either. Just setting that disclaimer out there.

To get started, I want to start off with a little bit of history, and I know Docker CEO Steve Singh talked about this a little in the keynote, but it helps set the stage and the context, so I'll breeze through it. If we think about the history of shipping: a long time ago, if I was a goods producer and I wanted to ship goods, I just bundled them in whatever I had, bags, barrels, crates, whatever. I'd take my goods down to the dock, they'd load them up on a ship, and the ship would carry them across the sea to the next market. Funny enough, ships actually ended up being in dock more often, and for longer, than they were actually out at sea shipping their goods, because of the process of taking goods off the ship, loading things in, shoving them into every nook and cranny to maximize the space on the ship, getting it out to sea, and then taking everything off and moving things around all over again. The chance of loss and theft was really, really high.

Then the Industrial Revolution came along, and now we've got rail moving things much faster across continents and countries. While it worked out really well, we started to see the inefficiencies of moving things from one shipping method to another to another, and people started to notice them. So eventually somebody said, why don't we just standardize around a box? With this box, you throw whatever you want into it, and as a shipping provider I can standardize around it: I know it's going to be a certain size, it can carry a certain weight capacity, I can stack them so high, etc., and I don't have to worry about what's inside the box. And if I'm a goods producer, I just throw stuff in the box, hand it to the shipping company, and it ends up where it needs to go. This completely revolutionized shipping, as we saw Steve talk about earlier today.

So then the question is: how is software like shipping? It's very analogous, and it's not by accident that we have the name "containers." A long time ago, if I wanted to build an app, I had to talk to my sysadmins and say, hey, we've got a new app coming up, and they'd say, great, come back in six weeks; we'll have racked and stacked the new server, set it up, installed all the stuff you need, and then we'll give you access to it.
And for a long time that was just accepted; that was just the cost of business. We would build around those schedules and we would deal with it. But now we're in a much different environment, where it's "how many hours has it been since our last deploy?" We want to respond to user feedback as quickly as possible, adjust software, fix bugs, whatever, and get it out the door. Now, if we're still using old processes and just automating those processes, we're still gaining a lot, I'm sure, but we still see a lot of the inefficiencies in the system. That's where Docker and containers really come in, because now we're able to standardize around this box, this container, and by being standardized, now we can build tools and everything on top of that abstraction.

So I'm going to present two scenarios here, and depending on whether you're dev or ops, I'll be able to figure out really quickly, based on who's laughing more, which of these two scenarios sounds familiar to you. For this first one I'll give you a second to read it. I'm hearing a couple of chuckles, so yeah: as a developer, there have been many times where it's "hey, here's the repo, I hope the docs are still up to date," you know, install everything on your machine, and as your first commit, go ahead and update the wiki. Good job, you've contributed for the day. It's tough and it's painful. If you couldn't relate to that one, maybe this one: "it works fine in dev." That one got a much better response.

If we think about this, what's the problem here? As a developer, when I develop an application, I'm developing with a particular runtime in mind, whether it's Java or Node or PHP or whatever. I've set up that runtime on my machine with all the dependencies, modules, libraries, whatever. But what do I commit to my repo? Only the source code. I leave that environment behind, and I expect that somehow, magically, when I put this on another developer's machine or put it in production, that environment has been replicated in the other place. We need some way to move the environment around, and that's, again, what containers bring.

So I want to play an "imagine if" game, and I'm going to do a demo here. How many people have heard of chroot? Okay, so there's a couple. I'm going to do a chroot demo here, and I think it's really going to help set the stage for what we're going to do going forward. What I've got here (I'm actually running this in an Alpine container, but it doesn't really matter) is just an app, and what I'm going to do is make a new folder that's going to basically be a new root filesystem for a custom shell I'm going to make. Inside this directory I'm going to copy the main bin directory and the main lib directory, so I have all the normal binaries and libraries, and I'm going to echo "hello there" into a dockercon.txt file, so I've just made a file here. Then I'm going to modify this: I'm going to remove the rm command and the mv command, and we'll see why in a second. So I've got this directory with custom binaries and libraries, again just copied from the default collection, and then I'm going to do a chroot and say: now I want to start a new shell, and instead of using the root of the operating system's filesystem, I want this directory to be my root.

At this point, if I do an ls, well, that's the root that I see; I can no longer see outside of this directory. And since I removed the rm command, if I try to remove dockercon.txt... I'm going to do a quick vote here. How many think this will work? Not a single... okay, there's one courageous person up front. Who thinks this won't work? And how many people are just undecided, they don't know? We have a problem here: the sum of those three groups didn't equal the whole. All right, let's run this, and yeah, it doesn't work: "rm: not found," because this little custom filesystem doesn't have the remove command.

Now, that's a pretty lame demo, I'll admit, so let's try something else. In this other tab here, I'm in that same container, so I see that new shell, and in this app directory I've got a Node tar.gz. What I'm going to do is explode that archive into that new-shell directory, and it adds a bunch of files. Now let me go back to my custom shell, and just by expanding that tarball, now I have Node installed. So I can run node, I can say console.log(2 + 2), and yeah, math works, that's good. I've got Node in this custom shell, and all I did was expand a tar file.

So if I now were to exit out of this and say, let me tar up this entire new-shell directory and share it with you: guess what, that's really what an image is. We'll talk about it a little more, but if I can take this custom root filesystem I just made and share it with you, now you have the exact same environment that I just built, and your apps will run the same way as they did here, because we're sharing this environment. That's what an image is.
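For reference, here is a minimal sketch of what that chroot demo might look like on the command line; the directory and file names are assumptions, not the exact ones used on stage, and this assumes root privileges on a Linux (e.g. Alpine) system:

    # create a directory that will become the new root filesystem
    mkdir new-root
    cp -r /bin /lib new-root/
    echo "hello there" > new-root/dockercon.txt

    # drop the rm and mv binaries from the copy
    rm new-root/bin/rm new-root/bin/mv

    # start a shell that sees new-root as its root
    chroot new-root /bin/sh
    ls /                  # only new-root's contents are visible
    rm /dockercon.txt     # fails: "rm: not found"

    # meanwhile, from another shell, unpack Node into the new root
    tar -xzf node.tar.gz -C new-root/
    # back inside the chroot:
    node -e "console.log(2 + 2)"   # prints 4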
So, creating images. Images, again, are really, you can think of them as, portable filesystems. Just like that demo: if I were to tar up that new-shell directory and share it with you, think of it that way. We'll dive a little deeper in a few minutes. The best practice is to use a Dockerfile, and a Dockerfile is simply a text file that contains various instructions. I'm not going to go deep into those; there are other talks on building images and whatnot. But the cool thing about it being a text file is that I can keep it in my version control system, so it can be version controlled, I can share it easily, etc., and we build it using docker build. This example Dockerfile is building a Node app: I'm starting from node, I'm copying in my package.json, which defines my app's dependencies, and the yarn.lock file, which pins those dependencies; then I'm doing a yarn install to actually install those dependencies; I copy in my source code; and then I specify the command, to say: whenever you run a container using this image, this portable filesystem we just made, here's the default command to run. We'll see this in just a second.
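A sketch of what that Dockerfile might look like, reconstructed from the description; the exact base tag, file layout, and entry script are assumptions:

    # Dockerfile -- build a Node app image
    FROM node:12-alpine
    WORKDIR /app

    # install dependencies first, so this layer caches well
    COPY package.json yarn.lock ./
    RUN yarn install

    # then bring in the source code
    COPY src ./src

    # default command when a container is started from this image
    CMD ["node", "src/index.js"]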
So once I've built an image, how do I share it? Again, conceptually: if I had this magical tar file containing this filesystem, how do I share it with the world? The way you do that is with registries. The default registry is Docker Hub, but you can run your own registry, you can use Docker Trusted Registry, and there are lots of third-party offerings. The screenshot on the right of this slide shows all the current registries that are part of the CNCF landscape, the Cloud Native Computing Foundation, so there are lots of offerings; if one's not working for you, try another one. Once the image is in a registry, other people can pull it, explode it onto their machine, run it as a container, and take advantage of that environment.

So let's build a quick image. In this tab over here, I've got my Dockerfile, the exact same one that was on the slide, and I'm going to build it and tag it as mikesir87/my-first-node-image, and I'll put dc19, for DockerCon '19, as the tag. This gives the new image its name; tagging is really just giving it a name. And don't forget the period; it's really easy to forget. It basically says: here's the location of the Dockerfile, and here's where all the other files that we're going to include, like the COPY sources, are pulled from. Honestly, I don't know why it doesn't default to the period; a lot of times I forget to put it there, and it'd be nice if it just did that. So I run this; I did the build earlier, so it's already cached and it goes really quickly. Now let me push that image, and since I already pushed it earlier, we see it went pretty quickly there too; it's all cached.

Let me go over to Play with Docker now, and make the font size a little bigger so you can see in the back. So: docker container run, and I'll explain the flags in just a second; this app runs on port 3000; mikesir87/my-first-node-image:dc19. Since this is a new Play with Docker instance, it doesn't have that image, so it's pulling down the image, and we'll talk about what all of these different pieces are in just a minute. To explain the command real quick: --rm is going to clean up my local machine when the container exits, and we'll talk about what it's cleaning up in just a minute; -t and -i put me in interactive mode inside the container, so if I were running bash, it would put me at that bash prompt inside the container; and -p exposes the port onto the host machine; I'll talk about why that's important in a minute. It's not giving me my port badge up there; let's see if we can figure this out, see if I get lucky. There it is. So I get my "Hello world" there. And hey, this Play with Docker instance, if I go back to it, doesn't have Node installed, but I just ran a Node app here, because, again, I just built this little custom filesystem that we're calling an image, I pushed it to a registry, and then I pulled it down and said, hey, run a container using that as the root filesystem for this process. It worked, and I got my demo-god appeasement out of the way, so let's hope it keeps working for us.
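Put together, the commands from that demo look roughly like this; the image name matches the talk, while the port mapping follows the on-screen description:

    # build the image from the Dockerfile in the current directory (note the period)
    docker build -t mikesir87/my-first-node-image:dc19 .

    # push it to the default registry (Docker Hub)
    docker push mikesir87/my-first-node-image:dc19

    # on any other machine, pull and run it in one step
    docker container run --rm -ti -p 3000:3000 mikesir87/my-first-node-image:dc19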
So what's a container, then? If anybody tells you a container is like a VM, just walk away, especially if it's a vendor. A container is not a VM. At the end of the day, a container is just another process on the machine, and I'm going to repeat that: a container is just another process on the machine. But it's a process whose view of the world has been altered, and the way that alteration occurs is by using namespaces and control groups. Some of the namespaces I have up here: network, user, process (PID), IPC, mount, etc. I think all of these namespaces have actually existed for a long, long time, much longer than Docker itself; what Docker did with the CLI was make it really, really easy to use these things. So, again, the idea of containers has been around for a long time. To run a container, we use docker container run, which we've seen lots of examples of; I just did one as well.

So what's actually going on? We've probably all seen pictures like this before, and I've made a slightly modified version. Traditionally, the way we've isolated applications is like this: we have the infrastructure, the host OS, and a hypervisor; sure, some of those can be swapped around or merged into a single box, but at the end of the day, if we wanted to isolate applications, because of version differences or whatever, we'd say, well, let's just spin up another VM. We got really good at that; there's lots of automation around it. But look at it now: I've got three guest OSes for three apps, three apps that may only be hitting 10% utilization, and I have to manage three kernels, three memory managers, three systems that have to be patched. That's a lot of overhead just to isolate three apps. What containers allow us to do, again, since they're just processes on the machine, is basically bin-pack more onto each machine, and namespaces are what give us the boundaries.

Now, if you search around the internet, and I argue with people all the time about this, there are versions of this diagram, you can see my mouse there, that have another layer on top of the operating system that many people label as "container engine," and I think that's a terrible diagram, because it gives the perception that the container engine sits between the app and the underlying infrastructure. That's not true. The container engine is not doing any kind of syscall translation or anything like that; once the container starts, it's, again, just another process on the machine. The Docker daemon is yet another process on the machine; when you tell it to run a container, it starts the process and sets up the namespaces and everything, but then it's pretty much out of the way.

Some caveats with this: it means that all the containers you have on the same machine are, yes, sharing the same kernel. So if you have a kernel-level vulnerability, a break-in in one container that exploits it could gain access to other containers on the same machine, depending on what the vulnerability is. Just be aware of that. For the most part there have been very, very few container-escape mechanisms, even in the many years that containers have been around, but it's still something to be aware of.

I put this up here: my wife actually made this for me a week ago, and she knows nothing about containers, but she took this picture, this is my daughter, and she made this. I'm like, that's gold; you actually do listen to some of the nerd stuff I talk about. So she made it, and I had to fit it into the slide. But containers, again: it's this idea that I'm just creating a portable filesystem, shipping it around, and sharing it with people.
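A quick way to convince yourself of the "just another process" point, sketched here under the assumption that you're on a Linux host where the daemon runs natively (on Docker Desktop the containers live inside a VM, so they won't show up in the host's process list):

    # start a long-running container in the background
    docker container run -d --name sleeper alpine sleep 1000

    # the host sees the container's process like any other
    ps -ef | grep 'sleep 1000'

    # inside the container's PID namespace, the same process thinks it is PID 1
    docker container exec sleeper ps

    # clean up
    docker container rm -f sleeper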
You probably saw, when I pushed the image and when I pulled it onto Play with Docker, let's see if I still have it, that it's pulling all these different layers. What are those layers? Each layer simply represents a set of filesystem changes, changes relative to the parent layer, and each layer's filesystem changes are represented as a single tarball; I'll show you that in just a minute. Every command in a Dockerfile produces another layer, so you need to be careful about the commands you put in, and we'll see an example of that in a minute. One cool utility you can use is the docker image history command, which tells you all the layers in an image, plus the command that was used to create each specific layer.

When those layers are put together to run in a container, Docker uses what's called a union filesystem. Has anybody heard of union filesystems before? I guess there's maybe five hands, and honestly, I hadn't heard of it much either; again, it's another construct that's been around for a long, long time, and containers brought another valid use case to it. If I have all these layers, the container runs in the merged layer: the union filesystem takes all the layers and merges them together, and higher layers replace files found in lower layers. In this example, layer 2 has file2 and file5; well, layer 1 also had file2, but since layer 2 is a higher layer, the version of file2 found in layer 1 isn't seen in the container. Again, higher layers replace lower layers.

So then the question is: how do deleted files get represented in this structure? If I had another command in my Dockerfile that said, hey, remove this file, how does that get represented? It works very much the same way we do it on paper: it uses what's called a whiteout file. If I write something on paper with a pen and realize I didn't mean to, it's the wrong date, I get out some whiteout and just cover it up. It's basically the same idea here. In layer 3, if I were removing file4, and I actually looked at that layer's tar file, I would see a zero-length file named .wh.file4; that's a whiteout file. When the union filesystem puts these all together, the final container doesn't see that file4 ever existed. But what's interesting to note is that file4 is still in layer 1. Once a layer is created, it can't be modified; that's why we create another layer with the whiteout file. But that means I'm still shipping around file4 even though I'm not using it at all. That's an important thing to remember. In fact, warning: be careful what you put in your images. Deleted files aren't actually gone.
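To make that concrete, here is a hypothetical two-instruction experiment (not from the talk) showing that a deletion only adds a whiteout layer on top rather than shrinking the image:

    # each RUN below creates its own layer
    cat > Dockerfile.layers <<'EOF'
    FROM alpine:3.10
    RUN echo "top secret" > /secret.txt
    RUN rm /secret.txt
    EOF

    docker build -t layer-demo -f Dockerfile.layers .
    docker image history layer-demo   # one layer per instruction, including the rm

    # the rm layer contains just a whiteout entry; /secret.txt still
    # sits intact inside the earlier layer's tarball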
So I'm going to do a demo here; let's get to this tab. I have an image that I just called mystery-image, and if we look at the image history, we can see, all right, hey, there's a Node version and the command is node, so this is some Node-based application, and it's using apk, so it's probably Alpine-based. And, interesting, I see in this layer here that it's doing an npm install and removing some file, but it's getting cut off. If you want to see the full command, you use the --no-trunc flag, and I find it completely ironic that they truncate the no-trunc flag, but okay, whatever. If we run that, it gets really long, because some of the commands are much longer and it's word-wrapping them all, but if we look at it, we can see there's this command doing an npm install and removing app/src/settings.js. If I actually run the mystery-image container, just start a shell, and look in the source directory, I don't see that settings.js file, so it is indeed gone from my container.

But what can I do? There's another command, docker image save mystery-image, and what this does is create a tar file that has all the layers bundled in it, and it streams it to standard out, so you can save it to a file or whatever. What I'm going to do is just pipe it to tar and immediately explode it back out; this will take a second to run. The use case for this might be: if I create an image and I want to take it onto an air-gapped network or something like that, I can put it onto a USB drive, carry it over to the other machine, and load the image from there. But if I look at this now, I see a bunch of directories, and I see this manifest.json. If I look at the manifest.json, it actually tells me: here are the repo tags, here are all the layers in this image, and so on. Now, this is getting super nerdy, I know, but I can find the layer that removed that file, that was in the last command, and go into the previous layer, where that file was added. Let's actually go in there and unpack that layer's tar file: app/src, and now I've got that settings.js file. Here's the file that got deleted in my image. And all right: now I've got settings with DB user equals root and DB pass equals secret, which shouldn't be seen.

So some developer was like, oops, I didn't mean to copy that in there; let me add another RUN command to delete that file; I don't see it in the final container; must be good enough; ship it. No. You never, ever, ever want to bake secrets, or any files that you don't want in your final image, into previous layers, because they're not actually gone; they're just being whited out. And I'm quite confident there are bots just crawling Docker Hub, pulling images, and looking through all the layers for AWS secrets, TLS certificates, and database credentials like these. So you never, ever want to bake your secrets into files, even if you remove them afterwards. Be aware of that.
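A sketch of that recovery, with the layer id left as a placeholder since the real ids aren't quoted in the talk; the file path follows the on-screen demo:

    # dump the image, layer tarballs and all, into a directory
    mkdir dump
    docker image save mystery-image | tar -x -C dump

    # manifest.json lists the layer tarballs in order
    cat dump/manifest.json

    # unpack the layer *before* the one that ran the rm
    tar -xf dump/<layer-id>/layer.tar app/src/settings.js
    cat app/src/settings.js    # the "deleted" credentials are right here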
So, best practices, real quick. Clean up as you go. As I talked about earlier, every command in a Dockerfile creates another layer. In this example, we're creating an image that provides the AWS CLI, pretty basic: I've got four commands, one that fetches all the repo indexes from apt, one that installs Python and pip, one that installs the AWS CLI using pip, and then, since I don't need pip in my final image, one that removes it. But what we just learned is that when I remove pip, all I'm doing is creating a bunch of whiteout files; I'm still shipping pip even though I'm not actually using it in the final container. The way to fix this is to chain all of this into a single RUN that says: do my update, install Python and pip, install my AWS CLI, and clean things up, where the last command actually cleans up those repository indexes. Just this simple change, and my container looks exactly the same, but it reduces my image size from 512 MB to 183 MB, a 64% reduction. So again, it's thinking about: what am I actually putting into each of my layers, and do I really want it there? In fact, I looked at the layer where I'm uninstalling pip, and there's 1.2 megabytes of just zero-length files. That's a lot of whiteout files for, really, nothing. I didn't need to ship that in the first place. So clean up as you go.

Another best practice: keep your images tight and focused; only install the things that you need. It's really easy, when you're getting started with containers, to say, I'm used to general-purpose VMs, so I'm going to make a general-purpose container that can do everything. That's not the point. Remember, the point is to make a filesystem that's specifically tailored to run a specific app, a specific process. In this particular example, I've got a React front end, and in order to build this React app I need Node, so I'm going to use a Node container: I'm installing all my dependencies and running a yarn build. For those of you not familiar with React, the end result is just static HTML, JavaScript, and CSS. I don't need Node to actually serve that content, so why don't I use an application that's designed to serve static content, like nginx? This is an example of a multi-stage build, where I use Node to do my build, but then I use nginx to actually serve my app, and I pull the contents of the build into that other container. You can do the same thing with Java: in my first stage, I use an image that has a JDK in it, with Maven or Gradle or whatever you're using, and then in my second stage, well, I don't need a JDK in production, I just need a JRE and Tomcat or WildFly or whatever, so I pull the WAR file that was produced in the first stage into my final image. Again, this allows your final images to be tight and focused: reduced attack surface, reduced maintenance on your image. So use multi-stage builds to separate build-time and runtime dependencies, and there are other talks here at DockerCon that dive deeper into that as well, especially BuildKit, so that's something to look at.
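Two sketches of those patterns, reconstructed from the description rather than copied from the slides. First, the chained-RUN cleanup; the base image and package names are assumptions:

    FROM ubuntu:18.04
    # one layer: fetch indexes, install, and clean up, so nothing
    # deleted here survives in a lower layer
    RUN apt-get update && \
        apt-get install -y python-pip && \
        pip install awscli && \
        apt-get remove -y python-pip && \
        rm -rf /var/lib/apt/lists/*

And a multi-stage build for the React example, where only the static output makes it into the final image; paths and tags are assumptions:

    # stage 1: build the React app with Node
    FROM node:12 AS build
    WORKDIR /app
    COPY package.json yarn.lock ./
    RUN yarn install
    COPY . .
    RUN yarn build

    # stage 2: serve the static output with nginx; no Node in the final image
    FROM nginx:alpine
    COPY --from=build /app/build /usr/share/nginx/html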
So how do you persist data? That's a question I get quite often: I start this container, I spin it up, I do something, I tear it down, I start it again, and now it's starting from scratch; how do I persist data? The answer is to use volumes. Volumes provide the ability to persist or supply data into containers. If we go back to the filesystem idea we were talking about earlier: the root filesystem is coming from the image, but then I can augment it and say, all right, here are other sources of data, coming from the host or a network store or whatever. There are only two types of mounts built into Docker (technically there's a third, but not many people use it anymore): bind mounts, where I say I want to bind this specific host directory to this specific spot in the filesystem, and named volumes, where I say, I don't really care where the data is stored, just make sure it's stored. Think of a named volume as a bucket: as long as I use that same volume name, it refers to the same bucket. And again, those are the only two built in, but there are lots of volume drivers: NetApp, even SFTP, though I don't know why you'd necessarily want to use that in a container, but you can. At Virginia Tech, we've got a NetApp storage array, so for our production clusters we've got a volume driver that lets me say, I want to create a volume that's backed by NetApp, and it just works. It's awesome.

So I'm going to do just a quick demo on volumes here, and where's my mouse... Up top, I'm going to run an Ubuntu container, and I'm going to mount my home data directory into /data in the container. If I echo "hi there" into a dockercon.txt file, and then go onto my host machine, in that data directory, which I guess I didn't finish cleaning up from when I was practicing earlier, I still see that dockercon.txt file; it's there. And now if I remove the hi.txt left over from practicing, I'll see that it's been removed; it's a bind mount, so changes in one place are reflected in the other. Now, if I exit this container and restart the container, I won't make you all vote, because it didn't work last time, but will the file still be there? The answer is yes, because it's being persisted on the host. So, again, you want to use volumes to persist data. And for cluster environments, remember that the defaults are local volumes: if I'm running a container that's using a local volume, that volume is local to that node. So if you're running in a cluster environment, you definitely want to figure out what network storage provider you should be using to get those off the node, so all the nodes can share the same volumes.
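The two built-in mount types from that demo, sketched as commands; the host paths and file names are assumptions:

    # bind mount: a specific host directory shows up at /data in the container
    docker container run -ti -v "$HOME/data:/data" ubuntu bash
    # inside the container:
    echo "hi there" > /data/dockercon.txt
    # ...the file appears in $HOME/data on the host, and outlives the container

    # named volume: Docker decides where the data lives; the name is the handle
    docker volume create app-data
    docker container run -ti -v app-data:/data ubuntu bash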
So, yeah, good luck remembering all the options; there are lots of flags, lots of everything, and to tell somebody, hey, here's a command, go run it... good luck, that's a lot to remember. So Docker came up with Docker Compose. It's a YAML file structure that makes it easy to define these applications and share them with others. Typically you'll see it in a project's source repo, at the root of the repo, so a lot of times I'll clone a repo, I'll see a compose file, and I can just use docker-compose to spin everything up. The tool is bundled with Docker Desktop for Mac and Windows; if you're using Docker on Linux environments, you have to install it separately, but it's not too hard to do.

One thing to mention: Docker Compose relies heavily on networking. I'm not going to go into networking, there are other talks on it, but think of networking in terms of communication boundaries and isolation. If two containers are on the same network, they can talk to each other; if they're not on the same network, they can't. Unless you're a network engineer and you want to learn about veth pairs and all that kind of stuff, I won't get into it; just remember, same network, they can talk to each other. And one of the really cool things is that Docker basically inserts itself as the DNS resolver for the containers. So if I have two containers, an app and a database, and I've specified that my database has an alias of db, my app simply has to say, I want to connect to the hostname db, and Docker says, great, I have a service on that same network named db, let me resolve the IP address of that container, and it just does it. My app doesn't have to figure out how to find it; all it needs to know is the hostname. And as apps and containers come and go, Docker's DNS keeps all of that up to date, which is really awesome.

All right, so I'm going to do a quick Compose demo here. Let me go back to, sorry, Play with Docker, and I'm going to clone a repo that I have here and spin it up, and while it's doing that, I'll show you what it's actually doing. I have a Docker Compose file with two things in it. There's an app; it's PHP, this is just a simple LAMP stack; it's going to build a container to use for the app image, expose the port, and mount my source directory into the web root. And then my database: it's going to build a database image, really just using an upstream MySQL image and adding a schema file to it, so it'll automatically create the schema. That's really it. At this point, let me go back here, go to port 80, and now I've got my grocery list application; I was tired of making to-do lists, so I made a grocery list. I can add milk, and it stores it in the database. Now, since I have my source directory mounted in, what I can do is make changes to it. Say this submit button's not exciting enough, so let's make it super exciting; if I save that file, go over here, and just refresh, now my submit button's super exciting. Again, I'm just mounting the source code in there, and as a developer, I don't have to know how to set up my environment; I just run docker-compose up and it just works, and I can start contributing on day one.

On our team, when we would hire new developers in the past, we fully expected the first week to be just setting up your machine and getting everything going. Now we're at the point where, hey, you got in today, you're committing code by the end of the day. Sure, it may not be real good code yet, because they don't know the application architecture and everything, but they at least have the ability to do so, because all they have to do is spin it up and go. So Docker and Compose: fantastic for developers.
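A hedged sketch of what that LAMP-stack compose file might look like; the two service names follow the talk, while the images, paths, and credentials are assumptions:

    # docker-compose.yml
    version: "3.7"
    services:
      app:
        build: ./app             # PHP app image
        ports:
          - "80:80"
        volumes:
          - ./src:/var/www/html  # mount source into the web root for live edits
      db:
        image: mysql:5.7         # upstream MySQL...
        environment:
          MYSQL_ROOT_PASSWORD: secret
        volumes:
          # ...plus a schema file the image runs at first startup
          - ./db/schema.sql:/docker-entrypoint-initdb.d/schema.sql

    # then, from the repo root: docker-compose up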
So let's talk about orchestration for a minute; I've just got a couple more minutes left. Orchestration, if we go back to the idea of standardizing around shipping containers, is another one of the tools that builds on top of this container standard. Think of it kind of as a traffic controller: I have a fleet of machines, and I can simply say, hey, I want this to run, and it says, all right, cool, I'll make it happen. The way it works, and pretty much every orchestration framework works this way, is that you define the expected state, the desired state you want, and then the system works really hard to make the actual state reflect it. So if I say I want three replicas of this, it'll say, great, let me figure out where I can put those across the cluster, and I don't have to go to each individual machine and start things up myself.

As for the actors in orchestration, there are typically only two types: the managers, who are the brains of the operation, the ones I tell the expected state to, and the workers, who actually go and do the work. In some orchestration frameworks, managers can be workers too, so recognize that as well. Managers in some systems may be called masters, and workers may be called agents or nodes, but it's pretty much the same types of actors everywhere. Really high level, these are the three frameworks I'm going to hit on. Docker Swarm, which we saw some examples of this morning: it ships with the container engine, so you install Docker and you get the ability to run Swarm; it's super user-friendly, it's easy to get up and going, and it satisfies most needs. Kubernetes is super extensible, but it is super complicated, and that's why there are so many managed services out there for it, because it's hard to do, but you can do a lot of really cool stuff with it. And Amazon ECS, the Elastic Container Service: we use that quite a bit, and it has a lot of deep integrations with AWS, as does EKS, but we do a lot with ECS as well.

So let's actually spin up a quick swarm. Going back to Play with Docker, and increasing my font size again: docker swarm init, and because this is on Play with Docker, I have to specify an advertise address. Then I'm just going to copy this join command... what is it actually copying here? Sometimes the copy-and-paste part gets a little funky. All right, copy that, paste there, and at this point I've got a three-node cluster, and all I did was run three commands. Now all I have to do is create a service. I'm going to name it cats, with three replicas, and I want it to run mikesir87/cats:1.0. So I'm just defining this expected state, and the cluster is going to figure out where to run things. And for kicks, I'm going to install a Docker App, which we saw quite a few demos of this morning, that will show me, once it starts up, where things are running across the cluster. So I can see that it started up this app across the cluster, and if I open up... oops, I forgot to publish the port, but anyway: I can define the expected state and say, I want this running, and it says, great, let me figure it out. Now if I do a docker service update, publish port 5000, and also change the image to version 2.0, we'll see it roll out the update. And once it starts rolling out, my cats image version 2.0 is actually dogs, because dogs are better. So there you go; it just displays a random image there, but we can see it rolling out across the cluster. So again, Swarm and orchestration really give me the ability to traffic-control across my cluster.
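The swarm demo as a command sketch; the service name and images match the talk, while the node IP placeholder and the exact port mapping are assumptions:

    # on the first node: make it a swarm manager
    docker swarm init --advertise-addr <node-ip>
    # run the printed "docker swarm join ..." command on the other two nodes

    # declare the desired state: three replicas of the cats app
    docker service create --name cats --replicas 3 mikesir87/cats:1.0

    # publish the port and roll out a new image version across the cluster
    docker service update --publish-add 5000:5000 --image mikesir87/cats:2.0 cats
    docker service ps cats   # watch the rolling update happen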
So, to recap: containers and images are just the standard way to package up an application. I no longer need to do a lot of host config; I just install the engine. Docker Compose and orchestration, again, are tools that build upon this standardization. We need to be mindful of how we build our images, and volumes allow us to persist our data. One thing I'll end on: containers are not a silver bullet to change your company culture. Going back to the Industrial Revolution: that happened because people wanted to make goods faster, produce them faster, ship them faster. If you're struggling just to be agile and respond to user feedback, containers aren't going to fix that. There's a whole cultural adoption that has to occur along with them, and that's honestly the hardest part. So work on your culture, work on the ability to move quickly. And with that, I thank you. I encourage you to rate the session and give me feedback on what worked and what didn't, and I'd be happy to respond from there.

Excellent, what a great overview. In the app on the phone you can rate the presentation; that would be great. I think Michael may hang around, he's going to take a selfie, for any quick questions, but we are definitely out of time, so thanks, everybody, for coming; enjoy lunch, and come around for Michael. [Applause]
Info
Channel: Docker
Views: 19,533
Rating: 4.9607844 out of 5
Keywords: Docker for Devs
Id: 6gJs0F8V3tM
Length: 41min 54sec (2514 seconds)
Published: Mon May 13 2019