Docker - Complete Tutorial [Docker For Everyone In 2 Hours]

so let's start with the most important question right away what exactly is docker well docker in the end is a container technology it's a tool for creating and managing containers okay so that clearly is a nice sentence but what exactly does it mean what's a container in software development and why might we want to use it well a container in software development is a standardized unit of software which basically means it's a package of code and that's important the dependencies and tools required to run that code so for example if you're building a node.js application node.js is a javascript runtime which could be used to execute javascript code on a server if you had such an application with a container built with docker you could have your application source code in that container as well as the node.js runtime and any other tools that might be needed to run that code and the advantage is that the same container with the same node.js code and the same node.js tool so the same node.js runtime with always the same version will always and that's the key thing will always give you the exact same behavior and result there are no surprises because it's all baked into the container it's always the same but maybe it's even easier to understand this concept if we take a step back think about a picnic basket a picnic basket contains everything you need to have well a picnic dinner in the park something like this it contains the food and it contains the dishes which you need and therefore you can take that basket and take it everywhere where you want to have that picnic and you're ready to go you have it all in that basket you can also share that basket with a friend and that friend can have that same picnic you would have had otherwise there are no surprises you have the dishes and the food you don't have to look for dishes at the place where you want to eat and maybe then you have soup with you might be strange with a picnic but whatever and then you don't have the dishes for eating soup if you pack your own picnic basket it's all in there and that's the idea behind a picnic basket it's the same idea behind containers and docker which is just a tool for creating and managing these containers now the term container might be strange at first because if you hear about a container you might think about something like this at least that is what i know as a container but actually it's not a bad comparison this is a container we would load onto ships or trucks to move goods around well and it's the same idea as with docker containers still we have standardized containers here like this and we can fit various goods into these containers but they are then self-contained and isolated the goods in one container don't get mixed with goods from another container if you need something like cooling it can be built into the container and therefore the container works stand alone and it can be put onto any ship or any truck which is able to handle containers and that's exactly the same with docker containers we have our units of software our packages with code and with the dependencies to run this code and we can then take it anywhere where docker runs and we will then be able to run exactly the same application with the same environment wherever that is we don't need to worry about installing any extra tools in that place where we want to run our application because it's all in the container that is what containers are and that is what docker is about because docker is just the tool for building these 
containers now the good thing is that support for containers is built into modern operating systems or at least there it's easy to get started with them and docker can be installed on all modern operating systems to then work with it there and docker then in the end is a tool that simplifies the creation and management process of these containers you wouldn't need it to create containers but it's the de facto standard for doing that since it makes that task so super simple so we now know what containers are and that docker helps us create these containers i also already outlined some advantages but still you might wonder why exactly do we need containers in software development maybe the example made sense for a picnic basket and to some extent it makes sense for software development but is it really that important and useful yes it is we should just ask ourselves one simple question why would we want independent standardized application packages in software development well for one one very good and typical example and one of the main use cases of docker i would say we of course often have different development and production environments and here's a simple example let's say you created a node.js application and you wrote some code there which requires node.js version 14.3 to run successfully and this is not some made up example the example you see here is node.js code which uses a feature called top level await now you don't need to know node.js to follow along with this course for the moment it's enough to know that this is a feature which would not work in older versions of node.js we need node.js 14.3 or higher to execute this code successfully the problem is that we might have that version installed on our local environment on our development environment on our local machine but if we then take this application and we deploy it onto some remote machine onto a server where it should be hosted so that the entire world can reach it then on that remote machine we might have an older version of node.js installed maybe 14.1 or 12 or 8 whatever and all of a sudden the code which worked locally on our machine doesn't work there anymore and depending on what's going wrong it can take quite some time to figure out what the problem was so having the same and with that i really mean the exact same development environment as we have it in production can be worth a lot and that is something where docker and containers can help you you can lock a specific node version into your docker container and therefore ensure that your code is always executed with that exact version and all of a sudden that potential problem is gone and can't occur anymore because your application runs in that container which brings its own node.js version so that's one example kind of related to that would be another example different development environments within a team or company let's say we're in a big team i'm working there and you are working there and now we're working on the same project for example that same node.js application again now because you haven't worked with node for some time or because you didn't need to update you still have an older version of node.js installed on your system and i have the latest version so i wrote this code with top level await and when i share the code with you it doesn't work for you now obviously you might say updating node.js is no problem i can easily update it on my system and we're on the same page again yes but for one it's just one example
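For reference, a minimal runnable sketch of the top-level await feature mentioned above (the file name and the dummy promise are illustrative, not the exact code shown on screen):

```js
// demo.mjs - run with: node demo.mjs
// top-level await only works in es modules on recent node.js versions
// (the course says 14.3 or higher); older versions throw a syntax error
const value = await Promise.resolve(42); // no wrapping async function needed
console.log(value); // prints 42
```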
there could be more complex projects with more complex dependencies which you need to manage and install and in addition even in that one example it's still annoying that we're not able to work on the same code base together with some guarantee that it will always work and that we always use the same environment we want that reproducibility in software development and therefore even here if we're not deploying it at the moment having that locked in environment with everything the code needs in the container can be worth a lot and now last but not least even if you're working on your own then still docker and containers could be very useful because if you have multiple projects on which you're working then you might have clashing versions let's say one project uses python version 2 still for whatever reason another project uses the latest version of python the same maybe with node.js or php or whatever kinds of projects there you would have clashing versions and that means that whenever you switch from project a to project b you have to uninstall the wrong version and install the right version for example if one project needs node version 12 and the other project needs node version 14 then if you switch projects you have to uninstall version 14 install version 12 instead and vice versa so switching between projects becomes a big hassle and is no fun at all and that again is something where docker and containers can help us we lock our versions into the containers and every project has its own container and therefore if we then switch projects it'll just work like that we don't need to uninstall and reinstall anything because it's all in the container not globally on our host machine and therefore switching projects becomes as easy as launching a different container and these are all things we're going to see throughout this course and you're going to learn everything you need to solve these problems throughout the course and that's why we might want to consider working with containers and why docker is quite a helpful tool
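A quick sketch of that idea using the official node images (the tags are just examples):

```sh
# each project can use its own node version via a container tag,
# with no uninstall/reinstall when switching projects
docker run --rm node:12 node --version   # prints a v12.x.x version
docker run --rm node:14 node --version   # prints a v14.x.x version
```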
so now we know what docker is and what containers are and why we might want to use it we will learn what exactly kubernetes is a little bit later in the course by the way but let's stick to the basics for now containers and docker the problems solved by docker and containers make sense hopefully and it makes sense that we might be looking for a solution like docker and containers but if you've been in the software development area for some time you might also think to yourself why docker and why containers isn't that reproducible environment thing a problem which we can solve with virtual machines so machines running on our machines virtual machines with virtual operating systems encapsulated in their own shell independent from our host operating system isn't that a solution well kind of with virtual machines we have our host operating system windows or mac os or linux and then on top of that we install the virtual machine so a computer inside of our computer so to say this virtual machine has its own operating system the virtual operating system which runs inside of that virtual machine let's say linux then in this virtual machine since it is like a computer it's just virtually emulated you could say but inside of that virtual machine we can then install extra tools we can install whatever we want in there because it is just another machine even though it just exists virtually but we can install all the libraries dependencies and tools which we need and then also move our source code there and hence since it's an encapsulated virtual machine with everything our program needs and all the tools being installed there we kind of have the same result as with docker and containers we have an encapsulated environment where everything is locked in and we could then have multiple such environments for different projects or we could share our virtual machine configuration with a colleague to ensure that we're working in the same environment well this works but there are a couple of problems one of the biggest problems is the virtual operating system and in general the overhead we have with multiple virtual machines every virtual machine is really like a stand-alone computer a stand-alone machine running on top of our machine and therefore if we have multiple such machines especially we have a lot of wasted space and resources because every time a brand new computer has to be set up inside of our machine and that of course eats up memory cpu and of course also space on our hard drive and that really can become a problem if you have more and more virtual machines on your system because you have a lot of things which are always the same and still duplicated especially the operating system you might be using linux in all your virtual machines and still it's installed separately in every machine and that of course wastes a lot of space in addition you might have a lot of other tools installed in every virtual machine which your application doesn't need directly but which still are set up as a default and that can be a problem so to sum it up virtual machines have a couple of pros and cons they do allow us to create separated environments and we can have environment specific configurations in them and we can share and reproduce everything reliably but we do have this redundant duplication that wasted space performance can be bad since we have an extra machine running on top of our host system and especially if we have multiple such machines performance can really degrade and in addition even though it is reproducible and shareable that also can be tricky because you still have to set up that virtual machine on every system where you want it and you have to then configure it in exactly the same way there is no single config file you can necessarily share if you want to deploy your application so from development to production you also have to ensure that you configure your production machine in the same way as your virtual machine alternatively you run your virtual machine on your production machine but given that wasted performance that's something you might not want to do in production so it solves the problem but not in a perfect way and you might guess that's why we have docker and containers and actually it's really important to understand that containers is the key concept here docker is just the de facto standard tool for creating and managing them so why does docker help us with that how do containers solve that problem in a better way than virtual machines do with containers we still have our host operating system windows mac os linux whatever it is but then we don't install a couple of machines in the machine instead we utilize built-in container support which our operating system has or emulated container support and docker will take care that this works and then we run a tool called the docker engine on top of that and that will all be set up by docker when we install it by the way and then based on that docker engine which now runs on our system which
is just one tool one lightweight small tool being installed there we can spin up containers and these containers contain our code and the crucial tools and runtimes our code needs like node.js for example but they don't contain a bloated operating system tons of extra tools or anything like that they might have a small operating system layer inside of the container but even that will be a very lightweight version of an operating system much smaller than anything you would install in a virtual machine and you're going to see how containers work exactly and how you create them throughout this course of course now the other great thing about containers is that you can configure and describe them with a configuration file and you can then share that file with others so that they can recreate the container or you can also build the container into something which is called an image which you need to do anyways and then you can share that image with others to ensure that everyone's able to launch that same container which you have on your system on their systems and obviously we're going to dive in detail into images and containers throughout this course so you are going to learn all about that step by step so if we compare containers to virtual machines we have a couple of advantages on the container side they have a low impact on our operating system and machine they're very fast and they use minimal disk space sharing rebuilding and distributing them is very very easy because we have these images and these configuration files and we still have encapsulated apps and environments with everything our app needs but nothing more which is perfect which is exactly what we want virtual machines on the other hand have that bigger impact on the operating system they tend to be slower and they also tend to eat up more of our disk space in addition as i said sharing rebuilding and distribution that all can be done but also may be trickier than with containers and docker still of course we encapsulate our environments but we don't just encapsulate our app and what it needs to run but entire computers it's like having a totally separate machine which in rare examples or in rare use cases could be an advantage but often it's just a bloated thing which you don't necessarily need so by now we have a first good idea of what docker is what containers are and why it might be interesting to work with these things now of course to work with docker we need to install it so in order to create these containers and work with them we need to install docker and when it comes to installing docker the exact steps depend on the operating system you're using and then also on certain system requirements which you need to meet and for that it's best if you visit docker.com there if you go to developers docs and then to download and install you can choose your operating system and you will find the system requirements there and for example you see for mac you need hardware newer than 2010 and a macos version which is 10.14 or newer at least at the point of time of recording this for windows you also got certain requirements for example you need windows 10 pro enterprise education or home and for linux you actually don't have any specific requirements because there you'll be able to use docker basically everywhere now i will come back to the different platforms and the exact installation steps in a second but for the moment you should just check whether you meet these requirements or not for mac os if the requirements are
met you can install a tool named docker desktop which is the recommended tool you should use when working with docker if the requirements are not met then you can install an alternative tool called docker toolbox docker desktop is the recommended tool but if there is no way of getting that to run use docker toolbox instead i will show you both by the way i will show you the setup steps for both tools over the next lectures for windows it's the same if the requirements are not met you should install docker toolbox if they are met you should use docker desktop just as on mac os and for linux it will be a bit easier there you have no docker toolbox or docker desktop tool instead linux natively supports the docker engine and therefore you can directly install that engine on linux docker toolbox and docker desktop are basically just tools that bring docker to life on non-linux operating systems you could say because the linux operating system natively supports containers and the technology docker uses you could say so therefore for linux if you want to install docker you can use the attached resources which in the end also just guide you through some setup steps for linux where you install docker just like this on linux it'll be really straightforward and once you're done with that you will be able to run the docker command now for mac os and windows as i mentioned and explained you can't install docker just like that instead you need the recommended docker desktop tool or if not supported the older docker toolbox tool and now over the next lectures i will show you installation on both mac os and windows and i will show both docker desktop and docker toolbox we start with mac os in the next lecture windows thereafter and thereafter docker toolbox for both operating systems therefore choose the lecture which makes sense for you so which matches your operating system and your answer to the question whether you meet the requirements i showed you a couple of minutes ago or not and thereafter we'll all have the same setup where you can use docker so no matter how you install it in the end you will be able to use docker and in this course i will use mac os for my videos but what i show you there and what i teach you there applies in the same way on windows you will execute the exact same commands on windows so with that let's dive into the different installation steps in the next course lecture now in this lecture we're going to install docker for mac os and therefore on this get started page on docker.com here under docker desktop we're going to download the version for mac it should automatically pre-select the right one if it didn't you can simply switch to it so click this and download this to some folder of your choice this is just an installer basically so it's really up to you where you want to store it and i will be back once this download finished to then show you the very very very complicated (not!) installation process and to then also show you how to finish up the installation and what's important to keep in mind when it comes to working with docker so download finished now we can execute this file here on mac os and let it open the docker dmg file and now the installation really is just that you take this docker app which for you might have a real docker icon and not this placeholder which for whatever reason i have here and drag it to the applications folder that really is all this will now copy over docker and once this is done you can close this and simply run docker you will now
have the docker app available as an application you can run so i'm running it here and you now see this whale here in the status bar on top now here you can go to preferences to bring up this graphical user interface here and there you can set up some general things you can for example control whether docker desktop should start when your mac starts you can set up automatic updating and you can also configure other things some of these things or settings are topics i'll come back to later in the course for example file sharing generally you should be fine with all the defaults here only change things if you know what you're doing besides the startup thing here which of course is up to you whether you want to have this changed or not now one important note is that on the bottom left corner here you will see that docker is currently running and it will start whenever you start docker desktop so if you don't automatically start it with startup of your system you should start it manually by running this docker app before you try to run any docker commands against it because all those commands which you'll see throughout the course which we execute on the command line will only work if docker is running so you should always make sure that you started docker for example by starting the docker desktop app in order to run commands thereafter now with that that's it for the moment you can close this window it's still running here as long as you have this icon here and you can always shut docker down with this menu here but for the moment i'll keep it up and running and again make sure it is up and running when you plan on working with it and that's already it with the docker for mac os setup for older versions of mac os check out this extra lecture which comes after the next lecture and for windows also check out that next lecture and with that you'll then be ready to dive into docker and start writing some commands to learn how docker really works and what you can do with it now in this lecture we are going to install docker on windows so obviously you can skip it if you're on mac os or linux now we can click on get started to be taken to the page where we can download docker desktop for windows and we will use that but before we start doing that actually go to developers and then docs here in the menu and then there click on download and install and then docker desktop for windows so that you can learn about the system requirements which you have to meet in order to be able to install it and you need either windows 10 enterprise pro or education or windows 10 home to use docker desktop if you have an older version of windows windows 8 or 7 the next lecture is for you because there i show you an alternative way of installing docker but if you have any chance of using docker desktop which we're going to install in this lecture definitely use that chance so if you for example are able to upgrade to windows 10 you might want to consider doing that now if you got windows 10 pro enterprise or education you need to have the hyper-v and containers features enabled and in order to enable both of that you can search for windows 10 enable hyper-v to find certain instructions on how to do that you best follow the approach described here on the official microsoft web pages and in the end you just need to run one command to enable hyper-v you need to do that in powershell which is a default tool that ships with your windows installation which you have to run as administrator now once you are running this as
administrator make sure you copy and paste this command into powershell and simply execute it and this will enable a functionality in your operating system which docker desktop needs to in the end run these docker containers on your system so make sure you enable this and thereafter in the same administrator started powershell session run another command which you find in an attached file actually in that attached file you find both commands but now we need this enable containers command here and run this in your powershell as well with that you enable the two features which you need to enable if you are on windows 10 home then you got a separate page here with more steps you need to follow in order to install docker desktop so click on that install docker desktop on windows home link then and make sure you meet all these requirements here specifically you need to enable the wsl2 feature on windows and you got another link to the microsoft documentation which informs you how you can do that it's again in the end very simple you first of all need to enable the wsl feature by running this command again in powershell start it as an administrator you don't need to do this on windows 10 pro but you need to do this on windows 10 home and then execute this command here in powershell and once you did that you need to update to wsl2 by following the steps outlined here thereafter make sure you enable the virtual machine feature by running this command here in powershell as administrator obviously you can also leave powershell up and running so hit enter and now with that you need to download a linux kernel update package because wsl2 in the end is a linux installation inside of your windows installation and that linux installation will be used by docker desktop and by docker itself that's why we needed to enable and update it now we need to download the linux kernel update package and that again can be done with the link you find here it simply downloads a file which you can execute walk through that installer which should be super quick and which in the end then just updates your linux kernel inside of that linux system which you have inside of your windows system then execute this command here in powershell as administrator to set version two of this linux inside windows solution here as the default version and then install an actual linux distribution of your choice so that will then be the linux operating system running inside of your windows system in addition to windows of course it does not replace windows it just runs in addition now here you can really follow these instructions and pick any distribution you want ubuntu is a very typical choice here and simply then walk through the installer here and follow the other steps which are outlined in this article you don't need to install the windows terminal here if you don't want to you can but you don't have to and you're therefore done now if you're facing any problems also have a look at the troubleshooting guidelines here at the bottom of this microsoft article which you of course also find attached to this lecture and once you did all of that you're also prepared to install docker desktop on your windows 10 home installation again this was not required as i mentioned before if you are on windows 10 pro or enterprise or education of course you also must meet the hardware requirements outlined here and outlined here which are the same requirements
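For reference, the commands on those microsoft pages are, at the time of writing, along these lines (run in an administrator powershell; always check the linked docs and the attached files for the exact current versions):

```powershell
# windows 10 pro / enterprise / education: enable the hyper-v and containers features
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
Enable-WindowsOptionalFeature -Online -FeatureName Containers -All

# windows 10 home: enable wsl and the virtual machine platform, then
# (after installing the linux kernel update package) default to wsl 2
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
wsl --set-default-version 2
```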
and once all of that is set up you can use the installer which you find on the getting started page so simply download docker desktop for windows and then wait for this installer to be downloaded which can take a while of course and once it is downloaded walk through the installer now we're going to do this together so let me just wait until this is finished and i'll be back thereafter now once this was downloaded simply double click on the downloaded executable to start the installer you might be prompted to enable wsl 2 windows features especially on windows 10 home and you should then check this if you don't have this option you can also proceed without it though if you're prompted to enable hyper-v then check this so long story short whichever check boxes you see here just tick them of course with the exception of the add shortcut to desktop checkbox that is up to you whether you want to do that then click ok and this will now install docker desktop and therefore the docker tool on your system now this again can take a while so let's wait for this installation to finish and eventually this should be done now once this is done you might be prompted to restart your system if you are of course do so and thereafter after your system restarted docker might have started up automatically otherwise simply execute docker desktop this tool which we just installed make sure you execute and start it and once you do start it you should see this prompt here or this screen here you also should have that whale in your system tray here which proves that docker is up and running here you can also always dive into the settings of docker if you need to configure something though for the most part you shouldn't need to do too much here however you should ensure that docker is always up and running if you plan on working with it so if there is no whale down there you will not be able to execute docker commands so throughout this course ensure that docker is up and running if you want to work with it here in this graphical user interface you can click on this gear icon to be taken to the settings and here you can control whether docker desktop should start up when your system starts or not for convenience you might want to leave this turned on but it's of course up to you you also might have the option to choose whether wsl2 should be used on windows 10 home you should do that and on windows 10 pro you can do it since as you see it provides better performance but you can also use just this hyper-v tool which we enabled now you don't need to do too much about the other settings you see here you can ignore them for now i will come back to them if we should need them throughout this course and with that we got docker installed and we're ready to use it you can now open up your regular command prompt you don't need to run it as administrator and there you should be able to enter docker like this and you should not get an error but instead a list of commands you could use and that proves that it works and that docker was installed now we're soon going to use it for a more useful command than just this dummy command but that's now the setup we need
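A quick sanity check once docker is up and running (this works the same on windows, mac os and linux):

```sh
docker --version        # prints the installed docker version
docker run hello-world  # pulls a tiny test image and runs it as a container
```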
now docker desktop is the recommended way of running docker on mac os and windows but it's not available on every system if you go to the developer docs part here on the docker page so if you visit docs.docker.com you can click on download and install and there you see the requirements for using docker desktop both for windows as well as for mac now the good thing is for mac os most systems support docker desktop if you have mac hardware newer than 2010 and you fulfill all these requirements you will be fine for docker desktop for windows you need windows 10 essentially as you saw in the last lecture as well but if you don't fulfill these requirements there is an alternative available for you there is a tool named docker toolbox which you can then use instead of docker desktop if you google for docker toolbox you should find these toolbox installation instructions it's a legacy tool and it's not recommended that you use it anymore because it was replaced by docker desktop for mac and windows historically docker toolbox was the only solution for all platforms nowadays we have the more modern docker desktop solution but for older systems that's not available so therefore there you need to stick to docker toolbox now what is docker toolbox the docker tool runs natively on linux and to make it work on mac os or windows you in the end need a virtual machine so a machine simulated on your machine which holds a linux installation in which docker can run now docker desktop for both mac and windows uses built-in operating system features for that but older versions don't have these features that's why you then need to install a virtual machine manually and install docker inside of that machine and that's in the end what docker toolbox helps you with so how do you use docker toolbox then well first of all you might want to verify that you really can't use docker desktop but if you're at this point that you can't use it you can follow the instructions on this page and on this page you for example can find out how to check whether your system supports the virtualization which is needed for example how to check it on windows 8. if you don't fulfill the requirements listed here you unfortunately can't use docker on your system because you will not be able to install docker toolbox and that is the last possible way of getting docker to run so make sure you check these requirements on windows 8.
for windows 7 make sure you use a tool like speccy or the hardware assisted virtualization detection tool and once you know that you are able to run a virtual machine you can install docker toolbox on mac os by the way check the install toolbox on mac page here to find all the installation steps there though in general these are the same steps as shown on windows so i will show it on windows here but this would be your mac os steps if you need to install docker toolbox on mac os because you can't use docker desktop now how does this work first of all you need to go to the toolbox releases page and download the latest version there and simply download the executable for windows and the package file for mac os so here i'll go with the executable for windows and this will now install everything you need to install to get docker to run on your older windows machine or on your older mac os machine you find detailed installation instructions on that page we just visited by the way so definitely make sure you also check these steps here in case you should get stuck so let's wait for this download to complete and once it did complete we can execute this installer which we downloaded and in that installer simply walk through the different steps and pick a location where that should be installed on your system click next and then make sure you have all these things checked docker toolbox needs virtualbox for example which is a tool that creates a virtual machine on your system and you should also ensure that all the other tools here are being installed so click next then you can decide whether you want the desktop shortcut or not you should definitely keep the second option checked though that docker binaries are added to your path and you can in general go with the default settings here so let's click next and install and this will now install and set up all the tools docker needs to bring up that virtual machine in which docker then is able to run now eventually this will be installed and you can then click finish and it should open up this folder with some shortcuts you can use to basically start docker you can simply click the docker quick start terminal here and this will open up a terminal in which you then can run the docker command now you see for me i'm not able to start it because i have hyper-v enabled on my machine and therefore i should not use docker toolbox but if you can't use hyper-v and therefore can't use docker desktop you should instead see something like this and you will have an environment in which you can run docker commands so where you then for example can run docker to see a list of all commands or execute all the different docker commands you will see throughout this course and that is it this is then your environment in which you can work to run docker commands and to work with containers as shown throughout the course what you'll learn about docker will be the same as if you used docker desktop so the concepts are all the same it's really just the environment in which you run docker which is different and as mentioned multiple times if you got any chance of upgrading to docker desktop definitely do that it offers greater performance doesn't clutter your system as much and overall is the recommended environment for running docker so over the last lectures we installed docker and i find it important to understand what exactly we installed there and what we do with these tools in the end we installed the docker engine we installed that no matter if you installed docker
desktop if you installed it directly on linux or if you used the docker toolbox there this docker engine was simply set up in that virtual machine which hosts linux which is simply required to run docker the virtual machine by the way just to make this clear is really only there because your operating system doesn't natively support docker if it would we wouldn't need the virtual machine because the idea was to not use virtual machines but even with that it's just there to run docker and then your containers will run in that virtual machine so we still will be working with containers anyways we installed the docker engine in the end docker desktop installed it for us and made sure it works docker toolbox installed it for us and on linux we installed it just like that therefore docker desktop the tool which some of you at least installed over the last lectures is really just a tool that made sure the docker engine was installed and that it works it includes a so called daemon a process which keeps on running and ensures that docker works so to say the heart of docker and it contains a command line interface and you also got that with docker toolbox and if you just installed docker on linux the command line interface will play a crucial role because that is the tool we will use throughout this entire course to run commands to create images and containers and to work with docker so we got all of that installed by now throughout this course you will also learn about a service called docker hub which i want to mention right away we're not using it right now we also don't need to install anything for that but that will be a service which will allow us to host our images in the cloud in the web so that we can easily share them with other systems or with other people and we will also learn about a tool called docker compose later in a standalone section labeled docker compose this is a tool which kind of builds on top of docker you could say which makes managing more complex containers or multi-container projects easier last but not least this course is also about kubernetes and therefore that's another tool if you want to call it like this we're going to explore later it will also help us with managing complex containerized applications when we want to deploy them but we will learn more about that when the time is right so that's what we installed and some tools you should be aware of and with that let's now use the docker tool we installed to get our hands dirty and create and run our first real container now in this course we are primarily going to run a lot of commands in the command line but we're also going to run these commands on projects which are written with node.js or other programming languages so there will be code in this course and there will be configuration files like this therefore you will need a code editor to follow along and you can of course use any editor of your choice but i do recommend visual studio code especially if you don't know which editor to choose it's a free editor a free ide available for mac os windows and linux it's amazing and you can simply install it from code.visualstudio.com walk through that installer and then have that tool installed here which you see in the background you can then always open projects with file open and then opening the folder which contains your code and if you want to have the same look as i do you can go to the preferences and there on color theme pick the dark plus default dark theme that's the theme i'm using which will give you
that look in addition you can use the view menu here to adjust the appearance and show and hide side and status bars and you can also go to the extensions menu here to install certain extensions and here i can recommend the docker extension since that will help you with writing these configuration files it will actually make that a bit easier so that's something you might want to look into i can also recommend that you install the prettier extension because whilst you're not going to write a lot of code this can help you with code auto formatting to basically automatically clean up your code besides that that should be all and these are also just some ideas not something you have to use just the setup i will be using throughout this course and therefore with that setup let's now get our hands dirty and let's write some first docker code or bring up our first docker container so let's create our first container and let's get our hands dirty even though i will say right away that at this point we're not going to understand everything here we're just going to write some code and bring up a container and we're going to understand all the details a little bit later in the next course section i just want to immediately get something up and running we also have this example to validate that everything is working as it should so here's the example you find it attached it's a very simple node.js application and by the way i just want to highlight it's just an example you don't need to know node.js for this course you can follow along even if you don't know it because we're not going to write any node.js code you can use docker for any programming language and any project and application written with any technologies of your choice what you learn in this course will apply it's just an example because i have to pick some programming language for the example in the end so this is a node.js application with some basic dummy code and in the end this will start up a web server on port 3000 which will listen to get requests on no particular path and will then send back some dummy html i also got this dummy database connection code which doesn't really connect but just sets a timer of one second until the server is launched in the end and i'm having this here so that we have a top level await example which is a node.js feature for working with asynchronous code which only works if you use node.js version 14.3 or higher so this is the node.js code if we would want to run it locally without docker and without containers we would have to visit nodejs.org and then download that latest version of node.js whichever version that is when you're watching this walk through the installer it gives you and then open up a new terminal and then run this app.mjs file by running node app.mjs and actually before we do that we would have to run another command npm install which installs all dependencies listed here in package.json which are simply third-party packages this app.mjs file needs to work correctly again if that all doesn't tell you too much that's no problem here these are the steps we just would have to execute if we would want to run this code here locally on our machine
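The demo app described above might look roughly like this (a sketch: the use of express and the exact html are assumptions, only the port, the one second fake database delay and the top-level await are taken from the description):

```js
// app.mjs - sketch of the demo application (express is an assumed dependency)
import express from 'express';

// dummy "database connection": doesn't really connect, just waits one second
const connectToDatabase = () =>
  new Promise((resolve) => setTimeout(resolve, 1000));

await connectToDatabase(); // top-level await: needs node.js 14.3 or higher

const app = express();

// listen to get requests on no particular path and send back some dummy html
app.get('/', (req, res) => {
  res.send('<h1>Hi there!</h1>');
});

app.listen(3000); // the web server runs on port 3000
```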
however the idea with docker was a different one we want to run this code in a container and for that we first of all need to create a so-called image because containers are always based on images and you're going to learn more about that and this relation between image and container in the next course section for the moment to create such an image we simply create a docker file so a file which is simply named dockerfile without any extension and in here we now describe to docker how our container in the end should be set up and we do this by adding a couple of instructions now since we are going to dive into images and containers in greater detail in the next section i don't want to waste your time with a bunch of instructions which we don't fully understand yet so instead attached you find a finished docker file which in the end just has a couple of instructions that we want to use node.js as a base image so that we want to have node.js available inside of our container that we have a certain directory in the container file system every container has its own file system so that we want to have a special directory in there in which we want to work that we then copy the package.json file into our working directory then we run the npm install command to install all the dependencies our application needs then we copy the rest of the code here then we expose port 3000 to the outside world because that's the port our application is listening on and we want to be able to reach that port from outside the container not just from inside the container and then we execute app.mjs with the node command which is available because we're running in a node environment again this was the quick walkthrough the details will follow in the next course section
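Put together, that attached docker file looks essentially like this (the exact node base tag is an assumption):

```dockerfile
# use node.js as the base image
FROM node:14

# work in a dedicated directory inside the container's file system
WORKDIR /app

# copy package.json first, then install all dependencies
COPY package.json .
RUN npm install

# copy the rest of the code into the working directory
COPY . .

# the port our application listens on
EXPOSE 3000

# run app.mjs with node when a container based on this image starts
CMD ["node", "app.mjs"]
```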
now with that docker file created we open up a terminal for example here i'm using the terminal integrated into visual studio code the editor i'm using here and with the docker setup which we have at this point we run docker build dot and this builds the image from the docker file it finds in the directory in which you run this command therefore i'm using this integrated terminal here because this automatically launches in this project directory so any commands i execute here run inside of this directory so now this will build an image based on this docker file which is the first thing we need to do so simply hit enter here if you're getting an error at this point make sure you got docker installed and started start docker desktop if you installed that and make sure docker is running in the background and therefore at some point this should then work and what this now does is it grabs this node environment which already exists it downloads it from the cloud from docker hub to be precise but more on that later and then it will set up an image for a container to be launched with all these setup steps being executed inside of the image so it will give us an image which is prepared to be started as a container so this now walks through all these steps and after some time it should be done you get a successfully built message and then some id for this image output as a side note on windows your output looks slightly different there you find the id of the image which was built here now we can use this id here and then run a container based on this image with the docker run command like this docker run and then the image id however actually since our container here has a port to which we want to communicate we actually need to publish that port on the container which we want to run and we do this by adding the dash p flag on docker run here and we then publish port 3000 on port 3000 which means we can use our localhost on our local system to reach the application running on port 3000 inside of the container because by default there is no connection between container and our host operating system if we want to send http requests for example to an application running in a container we need to open up the port on the container to which we want to communicate otherwise it's a locked network in the container and we can't reach it from outside with that however we can hit enter and this now will run this container and you can tell that it's running by the fact that you now can't enter any more commands here but that instead this command is stuck it's stuck because we have a running web server now you can visit localhost 3000 and you should see "hi there" there and that's our first dockerized application even though of course at this point we haven't written the docker file ourselves and we don't fully understand what's happening there but we're going to learn all of that throughout the next sections to now stop this container we can open up a new terminal by clicking on this plus here and then you can run docker ps which will list all running containers and then grab this name of this container which was started it's an automatically assigned name and run docker stop and then this name and this will now stop this container and shut it down this can also take a couple of seconds but thereafter this container is not running anymore and therefore once this did stop if you reload localhost 3000 you can't reach the site anymore also in that terminal where you ran docker run you now are out of this running process again and you can enter more commands and that's it that's this very first basic example not too fancy and of course not a lot of work from our side but it shows us that docker was installed successfully that it works and that we kind of did create such a containerized application because we definitely did not install node.js on our system we did not run npm install in this project folder to install all third-party dependencies and still we were able to bring up that web server and visit it on localhost 3000 and that was possible because of docker
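The whole first-container workflow from this lecture in one place (the image id and container name are placeholders for the values docker prints on your machine):

```sh
docker build .                        # build the image from the docker file
docker run -p 3000:3000 <image-id>    # run a container and publish port 3000
# ...in a second terminal:
docker ps                             # list running containers
docker stop <container-name>          # stop the container by its assigned name
```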
so now we know what docker is what containers are why it's awesome we set everything up and we also got our hands dirty already now we definitely want to dive in deeper and therefore i just want to let you know what to expect from this course and what will be inside of this course we're pretty much done getting started and having this overview over what docker and containers are and therefore next we're going to dive into a block of sections which i call the foundation sections so we're going to dive into a couple of sections which lay out a very important foundation which you need to work with docker we're going to dive into images and containers and we're going to learn in detail what that is how you can build your own images how you can use existing images how you can run and configure containers based on images and how these two pieces work together thereafter we're going to dive into data and a concept called volumes you're going to learn how you can manage data in containers and how you can ensure that data persists so that it's not lost if a container is shut down and restarted and removed in between for example thereafter we're going to conclude the foundation block with a section on containers and networking so you're going to learn how multiple containers can talk to each other because you might be building an application where you have a node rest api in one container and where you then for example have a react js front end in another container and having these containers communicate with each other that's something which matters and which you will learn in this module thereafter we got a very solid foundation which does not mean that we're done yet but that you understand the core concepts hence the next step is to dive into the real life part of the course as i like to call it you could also say a bit more advanced concepts we're going to have a closer look at multi-container projects and what could be tricky there and how to manage these projects we're thereafter going to dive into a tool called docker compose which makes managing containerized applications much much easier as you will see i'm then going to explore utility containers as i like to call them together with you and you're going to see what that is and why it might be interesting and we're then also going to deploy containers and containerized applications and we're going to do this with aws as an example now after this part we'll be done with all the important docker and container concepts and basics hence the next step is to dive into the other big part of this course kubernetes i haven't talked much about kubernetes in this section because we first of all need to understand docker and containers in detail before we can understand which problem kubernetes solves well by the point we reach this part of course you will be in the position to understand it and hence i will introduce you to kubernetes and we're going to dive into all the basics you gotta know about kubernetes step by step after having these basics we are going to explore how we work with data and volumes in a kubernetes world we learned that earlier already at this point when we talked about data and volumes with just docker now we're going to see how that translates to kubernetes and we'll have the same thing for networking and for deployment because we will also learn how to deploy a kubernetes cluster and how to deploy containers with kubernetes instead of just standalone without kubernetes and that is the course it's a huge course with plenty of examples and plenty of concepts and therefore let's continue and let me show you how you can get the most out of this course now your success matters to me i want to ensure that you get the most out of this course and for that there are a couple of things to keep in mind and to be aware of most importantly of course it's a video on demand course you should watch the videos but watch them at your own pace at your own speed use these video player controls which you find in the video player to speed me up if i'm going too slow or to slow me down if i'm going too fast really make it your course and watch it at your pace you can also pause a video from time to time or rewind if something isn't clear immediately docker can be a complex topic and therefore it's perfectly normal that you might have to do something like that in addition i can recommend that you code along we're not going to write that much code to be honest but we're going to write configuration files and we're going to run hundreds of containers and i can only recommend that you follow along with these examples that you write the same configuration files i do and that you also try running these containers if i'm going too fast as i mentioned pause the video to then code along and then unpause once you tried something out maybe sometimes even pause ahead of time and try something we're about to do together on your own first this might be a great practice as well and i can only encourage you to do that from time to time to ensure that you really get the most out of this course in addition of course also
make sure you repeat concepts from time to time it's a huge course with a lot of sections and after a couple of hours it's easy to forget something which you learned earlier so don't feel bad if you want to dive back into an earlier section or maybe into some lectures of that earlier section once you advance a bit more this is something which can make sense to ensure that you really don't forget what we learned and that you can always apply all the key concepts when you need them of course sometimes you also might be stuck and we do have a q and a section which you can also use to ask if you're facing any errors or anything like that i also however recommend that you do use google and stack overflow because often you can get a quick answer there if something is unclear or if you're facing some error and that might simply be faster than waiting for a response here and you might also find other examples which help you understand a certain concept even better last but not least as i mentioned there is a q a section and you can always ask there if you're stuck but i do encourage you that you don't just ask there but that you also answer and help your fellow students because you will learn way more if you do that if you help others you are challenged to think about a certain problem and to solve it and that is where you learn the most so i can absolutely recommend that you do this as well and if you keep all these things in mind and try using these different things throughout this course you will get the most out of this course and with that let's now dive right into docker and let's start digging a bit deeper into images and containers so by now we got docker installed and we're all set to dive into docker and that's exactly what we're going to do in this module this module is about the two core concepts the two fundamental concepts you gotta know and you got to understand if you learn docker and if you want to work with it and that will be images and containers now we already heard about containers in the first course module in this course module we will also learn about images and we will explore how images are related to containers and how you need both to work with docker in this module we will learn how we can use pre-built and custom images and what the difference will be and most importantly of course we will learn how we can create run and manage docker containers so with that let's dive right in let's understand what images and containers are and how we can work with these concepts to ultimately use docker so as mentioned we don't just have containers we also have images when working with docker now what is the difference and why do we need both we already heard about containers in the first course module you learned that containers in the end are small packages you could say that contain both your application your website your node server whatever it is and also and that's important the entire environment to run that application so a container is this running unit of software it is the thing which you run in the end but when working with docker we also need this other concept called images because images will be the templates the blueprints for containers it's actually the image which will contain the code and the required tools to execute the code and it's the container that then runs and executes the code and we have this split this separation here so that we can create an image with all these setup instructions and all our code once but then we can use this image to create multiple
containers based on that image so that for example if we talk about a node.js web server application we can define it once but run it multiple times on different machines and different servers and the image is that shareable package with all the setup instructions and all the code and the container will be the concrete running instance of such an image so we run containers which are based on images that is the core fundamental concept docker is all about in the end images and containers where images are the blueprints the templates which contain the code and the application and containers are then the running application and this will become clearer throughout this module once we start working with images and containers and actually let's start working with that right now let's see how we can work with images and containers when using docker i mentioned that containers are based on images and actually there will be two ways of creating or getting an image so that we then can run a container the first way is that we use an already existing image for example because a colleague of ours already built it or also very common because we use one of the official pre-built images or one of the images shared by the community and a great source for that would be docker hub which you can simply google to find hub.docker.com and there you can log in but you don't need to do this right now instead here in the search bar we can for example search for node and what we'll find is the official node docker image which we could use to build a node application container a container which will later run a node application now this node image which we find on docker hub can be used by anyone and it is distributed and created and maintained by the official node team now we will use such official images a lot throughout this course and in general when working with docker but we can especially use it right away to get started with images and containers here and all we need to do for that is open up the command prompt or terminal on your system then navigate into any folder of your choice and run docker run node this command here will use this node image which we find on docker hub and it'll utilize it to create a so-called container based on this image because as i mentioned containers are really just the running instances of images images contain the setup code the environment in this case the node image contains the node installation and then we can run the image to run the application or in this case to simply run the node interactive shell now if you hit enter this will give you an error that it doesn't find this image locally which makes sense because it's on docker hub and then it'll automatically pull it from docker hub so now this downloads the latest node image from docker hub and once it downloaded it locally onto our machine it will run this image as a container so let's wait for this download to finish here and thereafter you will see nothing special here it's done and we can enter more commands so did we now run the container did we now create a container based on an image well yes we did but this container isn't really doing much node is of course just software you could say and indeed we can execute node to get an interactive shell where we can insert commands but by default and that's important a container is isolated from the surrounding environment and just because there might be some interactive shell running inside of a container does not mean that this shell is exposed to us as a user
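for reference, the command just described looks like this (the pull output and the downloaded node version vary depending on when and where you run it):

    # pulls the node image from docker hub if it is not cached locally,
    # then runs a new container based on it
    docker run node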
nonetheless this container was created and you can tell by running docker ps -a ps stands for processes and with the -a flag this will show you all the processes all the containers docker created for us and if you hit enter you will see that a minute ago i actually created a container and to make this a bit easier to read i'll zoom out a bit and rerun this and you'll see we created a container with this id the image was the node image it was created two minutes ago it exited so it's not running anymore and it also received an automatically generated name we'll dive into names and into configuring containers in general in greater detail later but what we see is that something happened but that it's not running anymore because as i said a container runs in isolation and even though we executed node as an image or as a container based on the node image this alone doesn't do much because the interactive shell exposed by node is not automatically exposed by the container to us that is something we can change though if we repeat the command from before docker run node and we now add an extra flag in front of node the -it flag then we will actually tell docker that we want to expose an interactive session from inside the container to our hosting machine and hence if we now hit enter we actually are in that interactive node terminal where we can run basic node commands for example one plus one but we could also use node apis in here but that's of course not the focus of this session now the important thing about this here is that node here is now running inside of that created container and it's just exposed to us by adding this extra flag so we can interact with that container and with node running in the container node is not running on our machine here and i can prove this please note that here we're interacting with node 14.9 which is the version that at the moment i recorded this was pulled into this image and therefore is being used in this container now if i quit this process with ctrl c pressed twice the container will shut down and if i then run node -v here like this on my system so not inside of the container i see 14.7 as a version so locally on my system i got a different version installed than the version we interacted with here which proves that the version we did interact with must be the version from inside the container and we don't need node to be installed on our system at all in order to be able to run this node container and interact with it and that is really how you work with containers at least these are some first steps of working with containers and this also shows us what images and containers are images are used behind the scenes to hold all the logic and all the code a container needs and then we create instances of an image with the run command this creates then the concrete containers which are based on an image and if we now run docker ps -a again we see that we got two containers now both are not running anymore they have been exited but we have more than one container based on the same image both containers are based on the same image and yes they are not running anymore but we could absolutely have two containers which are based on the same image up and running at the same point of time simply by opening up multiple terminals and then repeating the docker run command this would absolutely be possible
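for reference, the commands from this lecture roughly as described (ids, names and node versions will differ on your machine):

    # list all containers, including stopped ones (ps = processes, -a = all)
    docker ps -a

    # run a new container and expose node's interactive shell to our terminal
    docker run -it node

    # compare with the node version installed locally, if any
    node -v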
and that's the idea behind images and containers images contain the code the setup the meat you could say and containers are then the running instances of those images now in the vast majority of use cases you don't just want to download and run an image that gives you an interactive shell like the node image does at the moment this is nice to get started and to have our first experience with containers and images but it's not all we want to do instead typically you build up on those base images to then build your own images like for example here we could build up on the node image to then execute certain node.js code with that image and of course node is just one example here an example i will use a lot throughout the course but the same would be true for php for go for python whichever programming language you're using whichever application you might be building typically you would pull in the official base image and then add your code on top of that to execute your code with that image and of course you want to do all of that inside of a container and that is a scenario where you need to build your own image because your exact application with your code does not exist on docker hub unless you shared it there so therefore now we'll build our own image building up on this node image though and for that i prepared a very simple node.js dummy project and you don't need to know node to follow along you can simply download this dummy project you find it attached and it contains four files here a server.js file which contains our main node application code in case you know node you can quickly see what this does in case you don't know it this starts a web server with node.js listening on port 80 and we handle incoming requests to two urls to our domain slash nothing if it's a get http request in which case this html code will be returned and we handle post requests to slash store-goal where we then try to retrieve a goal key a goal value from the incoming request body we then log this goal to the console and we set some user goal variable equal to that extracted goal and then we redirect to the slash route again so to this route and here we then render html code where we utilize this user goal variable which has a default value which is eventually overwritten by the entered goal value and this variable is output here this is javascript code executed with node to bring up such a node web server but again you don't need to know node this course is not about node this is just one example application which we want to dockerize which we want to run in a docker container now two other key things belong to this application one key thing is the public folder which has a css file for some styling this file is automatically loaded in the end by telling the node server to well just redirect requests for files to the public folder and then look for such files there which is why a request to styles css will load styles css which is in the public folder but then probably the more important other key thing we have here is the package.json file here we describe this node application and that is a node exclusive concept it has nothing to do with docker this package.json file but here we tell node which dependencies we have which other third-party packages we need to run this node application in this case that's the express framework which is a node.js package we can use in our node.js applications and the body parser package
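the exact file is only described here not shown, so the following is a hypothetical minimal server.js matching that description (a web server on port 80, a get route rendering html with a user goal, a post route to slash store-goal, static files served from the public folder); the file shipped with the course may differ in detail:

    // hypothetical reconstruction of the described server.js
    const express = require('express');
    const bodyParser = require('body-parser');

    const app = express();
    app.use(bodyParser.urlencoded({ extended: false }));
    app.use(express.static('public')); // serves styles.css from the public folder

    let userGoal = 'Learn Docker!'; // default value, overwritten by submitted goals

    app.get('/', (req, res) => {
      res.send(`
        <link rel="stylesheet" href="styles.css">
        <form action="/store-goal" method="POST">
          <input type="text" name="goal">
          <button>Set Goal</button>
        </form>
        <h2>My course goal: ${userGoal}</h2>
      `);
    });

    app.post('/store-goal', (req, res) => {
      userGoal = req.body.goal; // extract the goal value from the request body
      console.log(userGoal);    // log the entered goal to the console
      res.redirect('/');        // redirect back to the / route
    });

    app.listen(80); // the web server listens on port 80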
now if you would want to run this project locally without docker you would need to install node.js from nodejs.org for example the latest version and then thereafter in your project folder in a terminal opened in that folder and here this is my default terminal just integrated in visual studio code the ide i'm using here automatically navigated into this project folder and in this terminal you would need to run npm install npm is another tool shipping together with node.js so being installed automatically when you install node and npm install will now download and install all the dependencies we need and then thereafter once you did this you could execute the server.js file with the node command and this would then start this server and on localhost port 80 you could visit this application set your goal and see this in action now this is the node application running locally without docker and you don't need to install node to follow along this was just an example of how we would run it without docker now of course this course is about docker though so let's quit this with ctrl c let's clear this console and let's actually delete the node modules folder which was created after running npm install and let's also delete the package-lock.json file here so that we're back to the initial files the .gitignore file by the way is optional it's just important if you're using git for version control so this is now a node application how can we now build our own image that contains this application and that then also utilizes the node docker hub image to run that code to build this custom image we need to go to the folder that contains our code and in there we need to create a new file a file named docker file this is a special name which will be identified by docker now to get the best possible support for writing such docker files i recommend that in visual studio code you go to the extensions area and there you search for docker and you install this docker extension which i already have here make sure you install this docker extension because this will help you with writing docker code docker instructions it's not a must-have but it will make your life easier and thereafter you can go back to the explorer view so now we got this docker file and what do we put into this file now this file will contain the instructions for docker that we want to execute when we build our own image so it contains the setup instructions for our own image and typically here you start with the from instruction all caps from this allows you to build your image up on another base image and this is what you typically do theoretically of course you could build a docker image from scratch but you always want some kind of operating system layer in there or some kind of other tool which your code needs so therefore here i want to build up on the node image and i can do this with from node just entering the image name of an image which either exists on your system or under that name on docker hub and this image now exists on docker hub and actually since we already executed a container based on this image at the moment it also exists on our local machine because when we ran a container based on this docker hub image for the first time this image was downloaded and cached locally so now this is basically both a local and a docker hub image the most important thing is though that it will be recognized that node is a name docker will be able to find that there will be an image named node so now we're telling docker hey in my own image i want to start by pulling in that node image and then i want to continue so that's now the first step
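for reference, after this first step the docker file contains a single instruction:

    # first instruction: build on top of the official node base image
    FROM node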
as a next step we want to tell docker which files that live here on our local machine should go into the image and for that we got the copy instruction and here a very simple instruction we could execute is copy dot dot now what does this mean you basically specify two paths here the first path is the path outside of the container outside of the image where the files live that should be copied into the image and if you just specify a dot here that basically tells docker that it's the same folder that contains the docker file excluding the docker file though so in this case this first dot would tell docker that all the folders subfolders and files here in this project should be copied into the image and now the second dot is the path inside of the image where those files should be stored every image and therefore also every container created based on an image has its own internal file system which is totally detached from your file system on your machine it's hidden away inside of the docker container and actually here it is a good idea to not use the root folder the root entry in your docker container but some subfolder which is totally up to you you can name it however you want and here i will name this slash app now all the files here in the same folder as the docker file and all the subfolders there as well will be copied into an app folder inside of the container and this folder will simply be created in the image and container if it doesn't exist yet so that's one key step now as a next step we need to run npm install right because that is what we had to do outside of the container as well for node applications we had to run npm install in order to install all the dependencies of our node application and you also have an instruction for that which you can give to docker you can tell it that after copying all the local files into the image you want to run a command in the image in this case npm install however there is a gotcha here by default all those commands will be executed in the working directory of your docker container and image and by default that working directory is the root folder in that container file system since i'm copying my code into the app folder here i actually want to run npm install inside of the app folder as well and a convenient way of telling docker that all commands should be executed in that folder is that you set another instruction here before you copy everything and that's the workdir instruction for setting the working directory of the docker container and setting this to slash app and this tells docker that all the subsequent commands will be executed from inside that folder which makes sense because that is where we will have our code later and now as a side note given the fact that we now did set the working directory to slash app we could also change copy to copy everything from the path the docker file is in to just dot or dot slash which basically means to the current working directory inside of our docker container since we changed that working directory to slash app not just run but also copy will execute relative to this working directory so now inside of the container internal file system this relative path now points at slash app but we can also be more explicit here and set this to the absolute slash app path like this and i'm a fan of doing that since this makes it very clear where we're going to copy our files and we don't have to guess or look into that file to see what the current working directory is
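for reference, the docker file as built up so far, using the explicit absolute path as just discussed:

    FROM node          # base image
    WORKDIR /app       # all subsequent instructions run inside /app
    COPY . /app        # copy everything next to the docker file into /app
    RUN npm install    # install the dependencies at image build time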
of course that is a simple file it's easy to see what the workdir is but if this were a more complex file it could be harder and therefore i personally prefer setting this to slash app here as well now with that we have a lot of important setup instructions the last instruction is that when all of that is done we want to start our server for that we could add run node server.js but this would actually be incorrect because this would be executed whenever this image is being built all these here are instructions to docker for setting up the image now keep in mind the image should be the template for the container the image is not what you run in the end you run a container based on an image and therefore with this command we would try to start the server in the image so in the template but that's not what we want we want to install all the dependencies there yes we want to have all the code in there yes but we only want to start a server if we start a container based on an image so that if we start multiple containers based on one and the same image we also start multiple node servers so therefore here we have another instruction and that's the cmd instruction which stands for command the difference to run is that this will now not be executed when the image is created but when a container is started based on the image and that's what we want then we want to run our node server however for cmd the syntax is a bit different here we want to pass an array you could say and then we have two strings in there where we split our command like this so that's how we now tell docker that whenever a container is created based on that image we use the node command which exists inside of that container to run the server.js file
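for reference, the cmd instruction in the array syntax just described:

    # executed when a container is started from the image, not when the image is built
    CMD ["node", "server.js"]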
now if we tried to run it like this we would not be able to see our application though for one key reason this node web server listens on port 80 and i mentioned and emphasized multiple times already that a docker container is isolated it's isolated from our local environment and as a result it also has its own internal network and when we listen to port 80 in the node application inside of our container the container does not expose that port to our local machine so we won't be able to listen on that port just because something's listening inside of a container therefore in the docker file after setting everything up before specifying the command which should always be the last instruction in your docker file we can add the expose instruction to let docker know that when this container is started we want to expose a certain port to our local system so to our machine here which will run this container and then we'll be able to run the container such that we listen on this port now with that we finished our docker file with all the setup instructions for a docker image now let's see how we can utilize this custom image build it and how we can run it so we got this docker file how can we now turn this into an image and then ultimately into a container well we need to open up the terminal and again here i will simply use the terminal integrated into my ide which is the default system terminal though and here we can now run docker again but not docker run this time it's docker build because now i don't want to run an image at least not yet but first of all i want to create an image i want to create an image based on the instructions in this docker file and that's what the build command does it tells docker to build a new custom image based on a docker file and here we now need to specify the path where docker is able to find the docker file and if we just type a dot here we tell docker that the docker file will be in the same folder as we're running this command in and since i'm using the integrated terminal this is already navigated into this project folder hence the docker file is in that folder hence if i hit enter this now creates this image and you see it executes a couple of steps here it executes the from command to take the node image sets the working directory copies our code runs npm install we can ignore the warnings here then exposes the port and recognizes this command you could say and at the end it's done and it built an image with this name here with this id now keep in mind as mentioned in the first course module on windows the output looks a bit different here you find your image id here you can also assign custom names to images but for the moment we can go with that so copy the id that was generated and then you can run docker run and use that id here and if you hit enter this will now start this container and you will see it doesn't finish it keeps on running the reason for that is that the command we executed here starts a node server which is an ongoing process that doesn't finish and therefore the container also keeps on running because the command that was executed when the container is started is a command that doesn't finish and hence the container keeps on going but you will notice that if you visit localhost you won't see the website even though we exposed the port here in our docker file so why is that not working and what can we do here well we are almost there first of all let's shut this container down because clearly it's not working as intended to do that you can open up a new terminal and there run docker ps to see all the processes and if you just run docker ps without -a you see only the running processes
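for reference, the finished docker file and the commands from this lecture (the placeholder image id stands for whatever docker generated on your machine):

    FROM node
    WORKDIR /app
    COPY . /app
    RUN npm install
    EXPOSE 80                  # documents the port the app listens on
    CMD ["node", "server.js"]

and in the terminal:

    # build an image from the docker file in the current folder
    docker build .

    # run a container based on the built image (use your own generated id)
    docker run <image-id>

    # in a second terminal: list only the running containers
    docker ps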
and here we see one container based on this image we created and it's still up and running here because as i said it doesn't quit automatically now we can stop this container manually by running docker stop and then using the container name which was automatically assigned here if i now hit enter this takes a short while and it will shut down this container and therefore also the node server running inside of the container once this has finished if you run docker ps again you see no running container anymore instead now you would have to enter docker ps -a to see your container there it now exited okay so now we shut this down what went wrong though why were we not able to listen on port 80 for this custom container because one step is missing here yes we have this expose 80 instruction in our docker file but actually this instruction is only added for documentation purposes it doesn't really do anything it is a best practice to add it and you should add it to clearly document which ports will be exposed by your container but you need to do more you could actually also remove this expose 80 instruction and still do what i'm about to show you and it would still work so this instruction is really 100% optional yet as mentioned it's recommended that you add it but what really matters is that when you run the container with docker run you then add a special option so therefore what we need to do is we need to run this container but we need to add an extra flag here and that's the -p flag in front of the image name which stands for publish and this allows us to tell docker under which local port so under which port on our machine here this internal docker container specific port should be accessible and the syntax here is as follows you have -p and then you specify your local port under which you want to access this application in this case for example 3000 this is up to you and then a colon and then the internal docker container exposed port in this case 80 since we're exposing 80 in this container and with this now if we hit enter this again starts but now if i reload localhost 3000 i see my application and i can learn docker in depth because now this is published under the local port 3000 and this is now our first custom image based on the default node image with our own instructions and our own node app and i'm pretty sure that at this point there still are many question marks in your mind and in your eyes and i will get to all of them but i hope this general idea here is clear that we have a custom image based on an existing image the node image where we then have a couple of instructions for example to copy our code and install all dependencies and then we create this image with the docker build command and we then run a container based on that created image with the docker run command and then here again this container is still up and running if we want to close it if we want to stop it we again have to first of all find the container name with docker ps here and then we can run docker stop container name and of course there are also ways of assigning your own names and so on and we'll get to these ways and these features as well but for now these are the basics you should be aware of how to build your own image how to run that image as a container how to stop that container and most importantly you should understand how these core concepts work together so we created our first docker file therefore we then also built our first custom image and we ran the first container based on a custom image
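for reference, the stop and publish commands just shown (container name and image id are placeholders for the values generated on your machine):

    # stop the running container by its auto-generated name
    docker stop <container-name>

    # publish the container-internal port 80 on local port 3000
    docker run -p 3000:80 <image-id>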
now there is way more which we can and will learn about this docker file and also about the docker command and all the sub commands we can run there all the different ways of configuring things we are going to explore that but before we explore anything related to that it is important that we fully understand how an image and a container work and how they work together and for that let's say in our node application code we want to change that code and say my course goal with an exclamation mark here for example this is a tiny change made to this html code which now should be reflected in our running application so if i rerun this command to restart this container based on this image which we built the application the node application starts again and if i now reload localhost 3000 i see this application again but we also see one problem my little change in code isn't reflected here keep in mind i added an exclamation mark after my course goal here we still see my course goal like this without the exclamation mark so why is this change not reflected i mean i even restarted the container in the meantime in case you didn't do that by the way you learned that with docker ps you can see running containers then you can pick the name and run docker stop to stop it and then restart it but again that restarting didn't do anything here i don't see that change in my web application because we have to understand how images work keep in mind that this is part of our source code our node application code what are we doing with that code well in the docker file we instruct docker to in the end copy everything inside of this project folder including the server.js file which holds my source code into the container file system to be precise into the app folder then we run npm install tell docker that it should open up port 80 and start the server when the container is launched now therefore we do one important thing we copy our source code into the image and we basically take a snapshot of our source code at the point of time we copy it if i thereafter edit my source code as i did here when i added this exclamation mark this change is not part of the source code in the image we need to rebuild our image to copy our updated source code into a new image and that is really crucial now if this sounds cumbersome and strange that we have to rebuild our image every time we change our code i have a good message for you we will find a more elegant and faster way of picking up changes in our code later but the core takeaway here is important and will always be true images are basically locked and finished once you build them everything in the image is read only then and you can't edit it from the outside by simply updating your code just because you copied that code in in the past the image doesn't care about the past once this copy operation is done you can change your outside code however you want you can even delete it and the image will not be affected you need to rebuild to pick up external changes and basically copy all the updated code into the image so therefore what we need to do here is we need to run docker build dot again to rebuild this image and therefore to build a new image in the end so let's wait for this operation to finish and once we get this new image we also got a new image name because it now really is a totally new image it's almost the same as the previous one but technically it's totally different it has totally different code inside of it
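for reference, the rebuild workflow just described (the image id is a placeholder):

    # after editing the source code, rebuild to bake the new code into a new image
    docker build .

    # then run a container based on the newly generated image id
    docker run -p 3000:80 <new-image-id>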
and we can now docker run this new image by using this new image name and if we do this now and we reload localhost 3000 you now see the exclamation mark is there and this change has now been picked up because we rebuilt a new image with our new code inside of it and i'm putting so much emphasis on that because it is super crucial to understand that an image is really a closed template in the end these instructions are executed to create an image and thereafter it's locked it's finished and if you change something which you copied into your image thereafter that has no impact so with that i'll also stop this newly created docker container here and we can now dive a bit deeper into images and also containers because there is more which we need to explore here so an image is closed once we build it once these instructions were executed that's why we have to rebuild it if we need to update something in there for example when our code changed and we want to copy the new code into a new image that's what we covered in the previous lecture building up on that there is another important concept related to images which you also should be aware of they are layer based now what do i mean by that with that i mean that when you build an image or when you rebuild it only the instructions where something changed and all the instructions thereafter are re-evaluated keep in mind that i changed the code and then i rebuilt this image we did this in the last lecture now i did not change the code again since then so if i now rebuild this image again by running docker build dot you see this is super fast it finished in like a quarter of a second it was super fast because we see all these using cache messages here because docker basically recognized that for all these instructions the result when the instructions are executed again will be the same as before we have the same working directory the code i copy has not changed at all there is no new file no file has changed and therefore docker is able to infer that it doesn't really need to go through that instruction again instead whenever you build an image docker caches every instruction result and when you then rebuild an image it will use these cached results if there is no need to run an instruction again and this is called a layer-based architecture every instruction represents a layer in your docker file and an image is simply built up from multiple layers based on these different instructions in addition an image is read only which means once an instruction has been executed and once the image is built the image is locked in and code in there can't change unless you rebuild the image which technically means you create a new image that's what i covered before but let's come back to these layers an image is layer based every instruction creates a layer and these layers are cached if you then run a container based on an image that container basically adds a new extra layer on top of the image which is that running application that running code basically the result of executing the command which you specified in your docker file this adds the final layer which only becomes active once you run the image as a container all the instructions before that final instruction are already part of the image though as separate layers and when nothing changes all these layers can be used from cache now if i do change something in code if i add more exclamation marks here or anything else no matter what you change if i now build this again by repeating docker build dot you will see that now it takes
longer because it only uses some results from cache it used the work directory instruction result from the cache but it noticed that for the copy instruction it needs to run it again because it scans the files which it should copy in and docker detects that one file changed and hence it copies in all files again now here's the thing whenever one layer changes i said that all other layers are also rebuilt docker is not able to tell whether npm install would now yield the same result as before after all we copied in our files again and docker does not do a deep analysis of which file changed where and if this could affect npm install so whenever one layer changed all subsequent layers are also re-executed which is why here npm install ran again so i hope this layer-based architecture makes sense and is clear it exists to speed up the creation of images since docker only rebuilds and re-executes what needs to be re-executed and that's of course a very useful mechanism now it also means that at the moment whenever we change anything in our code we also run npm install again even though we as developers know that this is unnecessary unless we changed something in package.json which manages the dependencies of our project there is no need to run npm install again ever because if we just changed something in our source code this has no impact on the dependencies this project needs and therefore in node's world npm install does not need to be re-executed and here we have our first tiny bit of optimization potential for this docker file instead of copying everything like this and then running npm install it would be better if we copied everything only after npm install but before we run npm install we first copy just the package.json file and we copy that into the app folder with that we would pick up this package.json file copy that into the app folder then run npm install and then copy over our other code with this we would ensure that this layer the npm install layer comes before we copy our source code so in the future whenever we change our source code these layers in front of the copy source code command will not be invalidated and npm install will not run again just because we copied in our source code again so now only these layers would run again and that will be more performant than running npm install again which simply takes a certain amount of time to finish i hope this makes sense so if i now build this again for the first time it will run npm install and copy and everything but then here we got our image name and if we now use that to run our container and we reload we see this change in source code of course which i made before but if we now stop this container first of all with docker stop and then go to server.js and update the source code again to remove all the exclamation marks again you will notice that if i rebuild the image with docker build dot it's now again super fast because it was able to use the cached result from npm install because the steps prior to npm install didn't change because docker sees that the package.json file did not change it's the same as before and therefore there was no need to copy that again and to run npm install again the only change happened in this step but that comes after npm install so that's a first small optimization but more important than that optimization is that you understand why we are doing it and that you understand this layer based approach this layer based architecture it's really important because it's a core concept in docker and docker images and it exists for the reasons outlined in the last minutes
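for reference, the optimized docker file as just described, with the dependency installation moved in front of the source code copy step:

    FROM node
    WORKDIR /app
    COPY package.json /app     # copy only the dependency manifest first
    RUN npm install            # this cached layer is reused as long as package.json is unchanged
    COPY . /app                # source code changes only invalidate the layers from here on
    EXPOSE 80
    CMD ["node", "server.js"]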
by now we explored the core concepts about images and containers and i just want to provide a first summary on those concepts to ensure we're all on the same page with docker it's all about our code in the end our application we're building for example our web application we put that code that makes up our application into a so-called image and we don't just put our code in there but also the environment the tools we need to execute that code you learned that you can create such an image by creating such a docker file where you provide detailed instructions on what should go into the image which base image you might be using which code and which dependencies should be copied in there if maybe some setup step like npm install is required and if you then want to open up some internal port so that you can listen to that from outside of the image and therefore ultimately outside the container in the end and that's of course important docker ultimately is about containers not images but images are an important building block they are the template the blueprint for your containers you can then instantiate run multiple containers based on an image the image is the thing that contains your code and so on the container as you learned is just an extra thin layer on top of the image but still the container in the end is your running application based on an image but then once it is running standalone and independent from other containers that might be running i wanna emphasize though that a container does not copy over the code and the environment from the image into a new container into a new file that is not what's happening a container will use the environment stored in an image and then just add this extra layer on top of it this running node server process for example and allocate resources memory and so on to run the application but it will not copy that code so our code and the node environment is not getting copied three times here if we have one image and two containers it exists only once in the image and the containers then utilize that image and the code in it this is how docker manages this and that's of course very efficient and that's the core idea behind docker having those isolated environments that contain your app and everything that is required to run that app all the environment all the tools like node.js and having all of that inside of this isolated container that's what docker is about and that's what we covered thus far over the last lectures now i want to take the time and have a look at how we can configure and manage our images and containers because thus far we saw how we can build an image and how we can run a container how we can stop a container but there is more we can do and more we should be aware of and in general one very important hint or note you should be aware of is that on any docker command you can add --help to see all available options now there will be tons of options which you never or only rarely need but you will see all available ways of running a certain command or of configuring a certain command by adding --help that's an important hint
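for reference:

    # append --help to any docker command to list its options
    docker --help
    docker ps --help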
in the next lectures we'll focus on a couple of core configuration options and core features built into docker we will learn how we can tag images which basically means name them we will learn how we can list the images we created in the past how we can analyze and inspect images and also how we can remove and clear images if we no longer need them we will also have a closer look at containers and learn how we can name containers how we can configure them in detail how we can control them in detail and we'll use help for that to see various options there we'll have another look at listing containers running and stopped containers we'll also see how we can restart containers that we stopped in the past and we will also learn how we can remove containers after they have been stopped when we don't need them anymore so that's a broad variety of things we can do with images and containers let's dive in now i'm back in the demo project we worked on before and i'll bring up that built-in terminal again but you can of course also open up a default terminal outside of visual studio code and there we got various commands which we can run with help of the docker command and as i mentioned if you for example run docker --help you get a list of the built-in main commands you can run with docker and you see there are quite a lot of commands there well good news is a lot of these commands will not really matter to you in the vast majority of cases there also are some commands from the past where we nowadays have better ways of achieving something but still you see we have quite a lot of commands here you can obviously read these descriptions to see what these commands do but it will come down to a couple of core commands you should be aware of in addition there also are some commands which could be replaced by other commands executed differently in general with docker for some operations you have multiple ways of performing that operation now i want to start with managing images and containers since that is what docker is all about and we already saw for example that you can list all containers by running docker ps and this shows you all running containers by default now if you add -a you see all containers you had in the past including the stopped containers which are not running anymore as a side note if you run docker ps --help you see all available config options for docker ps and here's the -a flag which we used to show all containers so docker ps -a shows us all containers including the containers we stopped and one important thing you can also do with docker is you can restart a stopped container you don't always need to docker run a new container because that's important with docker run you create a new container based on an image and that new container is started thereafter sometimes that is what you want but sometimes it's not if nothing changed about our application about the dependencies and our source code and our image didn't change there is no need to create a brand new container we can just restart an existing container and for that we can just search for stopped containers with docker ps -a now you might see a different output than i do here because i'm also working with these containers off-screen deleting some which we'll learn later and adding new ones but here you see my history of docker containers i worked with recently and if nothing changed about our code and about our docker file we can just grab let's say this one the most recent one which exited and restart that with docker start if we run docker start and then the container id or name this will bring the container back up and now you see it starts the container in a different mode it's not blocking the terminal as it did with docker run but i can tell you this container is up and running
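for reference, restarting an existing container as just shown (the name is whatever docker generated for your container):

    # find the stopped container's name or id
    docker ps -a

    # bring it back up; docker start uses detached mode by default
    docker start <container-name>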
and you can verify by running docker ps without the -a flag to only see the running containers and you should see this container here and we can also visit localhost 3000 and reload there and there our application is up and running thanks to this restarted container but again as i mentioned this restarted container is in this strange mode where yes it did start but no we're not really able to interact with it or to see the logs instead it started this container and then returned to the terminal so that we could enter more commands and as i mentioned that's different to what docker run did well that's something we're going to explore next now there is more we can do with the docker command and with containers and images we already saw a couple of different commands in action and i want to come back to this mode which we have here when we restarted a container and the mode which we had when we initially started a container here when we're restarting it the process in our terminal here finished immediately we're not attached to this running docker container in the terminal anymore nonetheless it clearly is still running as we can see with docker ps so this container is running it's just not blocking us here in the terminal and that's different compared to the docker run command we executed before if i run this again binding to a different local port since 3000 is already used by the already running container but if i use a different port here i'm starting a new additional container based on the same image so therefore with docker ps in a different terminal here we see we got two containers up and running now but you see that for docker run we're stuck in this process i can't enter more commands here i mean i can type but i can't commit them i can't confirm them with enter instead this process is blocking this terminal and with docker start that was not the case now this is not a bug or anything you have to accept this is something you can configure if you want to be in this attached mode or in detached mode and it's just the case that for starting with docker start the detached mode is the default for running with docker run the attached mode is the default and now the question of course is which mode do you want and why does this even matter well in this example application i am logging something to the console whenever we set a new goal so here if i add learn docker in depth as a goal on localhost 3000 that's my started container which was detached we see nothing in the console anywhere on the other hand if i go to localhost 8000 which is that new container we ran with docker run and i set my goal here you will see that it also shows up in the terminal here because here we're attached to this running container and attached simply means that we're listening to the output of that container for example to what's being printed to the console and this might or might not be what we want we can also run a container in detached mode if we want to we don't have to accept that default of being attached for this i'll quickly stop this more recent container which i started with docker stop which will take a couple of seconds so that's the container we started with docker run and once this is stopped i will rerun it but now in detached mode by adding -d as an extra flag in front of the image id here if i now hit enter you see now i just get the id the automatically generated id of the new container you see with docker ps that we have this container up and running that's the shortened id of that up and running container by the way
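for reference, starting a new container in detached mode as just shown (the image id is a placeholder):

    # -d detaches the terminal from the container's output,
    # -p publishes internal port 80 on local port 8000
    docker run -p 8000:80 -d <image-id>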
and we see that container is running and therefore visiting localhost 8000 also works but now if i enter learn docker in depth for example here on localhost 8000 we don't see that log in the terminal where we executed docker run because now we're not attached to the output since we used the -d flag to start in detached mode the advantage is that we now can use one and the same terminal to also do other things if you still want to look into your running container you got two ways of doing that for one we can see with docker --help that there is an attach command so we can attach ourselves again to a running container we can simply find the container and use its name or its id both work and then run docker attach and that name and we are attached again and therefore now if we change it again and set a new goal we do see this output again so that's one way of doing it but actually let me stop this container again and let me restart it thereafter now in detached mode automatically since that is the default for docker start and let me show you another way of getting access to for example the log messages that are printed inside of a container this is running again if i now type learn docker in depth here and commit this so i saved this and set this goal we don't see the goal because we are detached and we could attach again to see future log messages or another thing we can do another useful command is the docker logs command which fetches the logs that were printed by a container and here we can first of all find the container eloquent brown is the name of the container i'm interested in here and then we can run docker logs on that container and we see the past logs that were printed by that container we can also if we inspect the options of docker logs enter follow mode by adding -f to keep on listening so basically we're now attaching ourselves again so if i add -f here we again get this attached process where we see future logs future output by that container again so these are the different options we have for that and i'm spending a lot of time on that because it is important that you understand the difference between a running and a stopped container and an attached and a detached container no matter if it's attached or detached it's still up and running but if you then need information from inside the container you can use docker logs or attach yourself to the container again to get that extra information now with that i will again docker stop eloquent brown to bring down that container and detach myself again here now one last note about all of that if you would want to restart a stopped container like eloquent brown here in attached mode right from the start you can do so by adding the -a flag to the start command now you start this in attached mode
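for reference, the commands for looking into a detached container as described above (container names are placeholders):

    # re-attach to a running detached container
    docker attach <container-name>

    # or fetch the output the container printed so far
    docker logs <container-name>

    # follow mode: keep listening for future output
    docker logs -f <container-name>

    # restart a stopped container in attached mode right away
    docker start -a <container-name>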
so now i hope you liked this video you enjoyed the tutorial and you have a first good understanding of what docker is and how you can use it to build images and run containers if you want to learn way more about docker how to work with data there how to persist data in containers and synchronize data with data on your host machine if you want to learn how to connect multiple containers how to set up a container network if you want to learn how you can leverage kubernetes to deploy containers in a scalable and production ready way if that's all something which sounds interesting to you you also might want to check out my complete docker and kubernetes guide of course you'll find a link below the video with a nice discount and i would love to welcome you on board there so hopefully see you there otherwise hopefully i see you in another video
Info
Channel: Academind
Views: 66,739
Keywords: docker, kubernetes, containers, docker containers, docker tutorial, docker course, docker full course, docker image, docker container, docker kubernetes, maximilian schwarzmĂĽller, maximilian schwarzmuller, maximilian schwarzmueller, dockerfile, what is docker, docker introduction, docker vs vm
Id: d-PPOS-VsC8
Length: 136min 31sec (8191 seconds)
Published: Wed Nov 11 2020