Docker Tutorial for Beginners | Docker Full Course | Access to FREE LABS [No Ads]

Captions
Hello and welcome to the Docker for Beginners course. My name is Mumshad Mannambeth and I will be your instructor for this course. I'm a DevOps and cloud trainer at kodekloud.com, an interactive, hands-on online learning platform. I've been working in the industry as a consultant for over thirteen years and have helped hundreds of thousands of students learn technology in a fun and interactive way. In this course you will learn Docker through a series of lectures that use animation, illustration and some fun analogies to simplify complex concepts. We have demos that show you how to install and get started with Docker, and most importantly we have hands-on labs that you can access right in your browser. I will explain more about that in a bit, but first let's look at the objectives of this course.

In this course we first try to understand what containers are, what Docker is, why you might need it and what it can do for you. We will see how to run a Docker container, how to build your own Docker image, how networking works in Docker, and how to use Docker Compose. We will see what a Docker registry is and how to deploy your own private registry. We then look at some of these concepts in depth and try to understand how Docker really works under the hood. We look at Docker for Windows and Mac, before finally getting a basic introduction to container orchestration tools like Docker Swarm and Kubernetes.

Here's a quick note about the hands-on labs. To complete this course you don't have to set up your own lab environment. You may set one up if you wish, and we have a demo for that as well, but as part of this course we provide real labs that you can access right in your browser, anywhere, anytime, as many times as you want. The labs give you instant access to a terminal on a Docker host and an accompanying quiz portal. The quiz portal asks a set of questions, such as exploring the environment and gathering information, or asks you to perform an action such as running a Docker container. The quiz portal then validates your work and gives you feedback instantly. Every lecture in this course is accompanied by such challenging interactive quizzes, which makes learning Docker a fun activity. So I hope you're as thrilled as I am to get started, so let us begin.

We're going to start by looking at a high-level overview of why you need Docker and what it can do for you. Let me start by sharing how I got introduced to Docker. In one of my previous projects I had a requirement to set up an end-to-end application stack involving several different technologies: a web server using Node.js, a database such as MongoDB, a messaging system like Redis, and an orchestration tool like Ansible. We had a lot of issues developing this application stack with all these different components. First of all, their compatibility with the underlying OS was an issue. We had to ensure that all these different services were compatible with the version of the OS we were planning to use. There were times when certain versions of these services were not compatible with the OS, and we had to go back and look for a different OS that was compatible with all of them. Secondly, we had to check the compatibility between these services and the libraries and dependencies on the OS. We had issues where one service required one version of a dependent library whereas another service required a different version.
The architecture of our application changed over time. We had to upgrade to newer versions of these components or change the database, and every time something changed we had to go through the same process of checking compatibility between the various components and the underlying infrastructure. This compatibility-matrix problem is usually referred to as the matrix from hell.

Next, every time we had a new developer on board, we found it really difficult to set up a new environment. The new developers had to follow a large set of instructions and run hundreds of commands to finally get their environment ready. We had to make sure they were using the right operating system and the right versions of each of these components, and each developer had to set all of that up by himself each time. We also had different development, test and production environments. One developer may be comfortable using one OS and others may be comfortable with another, so we couldn't guarantee that the application we were building would run the same way in different environments. All of this made developing, building and shipping the application really difficult.

So I needed something that could help us with the compatibility issue, something that would allow us to modify or change these components without affecting the other components, and even modify the underlying operating system as required. That search landed me on Docker. With Docker I was able to run each component in a separate container, with its own dependencies and its own libraries, all on the same VM and OS but within separate environments or containers. We just had to build the Docker configuration once, and all our developers could then get started with a simple docker run command, irrespective of the underlying operating system they run. All they needed to do was make sure they had Docker installed on their systems.

So what are containers? Containers are completely isolated environments: they can have their own processes or services, their own network interfaces, their own mounts, just like virtual machines, except they all share the same OS kernel. We will look at what that means in a bit. It's also important to note that containers are not new with Docker; containers have existed for about ten years now, and some of the different container technologies are LXC, LXD, LXCFS, etc. Docker utilizes LXC containers. Setting up these container environments by hand is hard as they are very low level, and that is where Docker offers a high-level tool with several powerful functionalities, making it really easy for end users like us.

To understand how Docker works, let us revisit some basic concepts of operating systems. If you look at operating systems like Ubuntu, Fedora, SUSE or CentOS, they all consist of two things: an OS kernel and a set of software. The OS kernel is responsible for interacting with the underlying hardware. While the OS kernel remains the same, which is Linux in this case, it's the software above it that makes these operating systems different. This software may consist of a different user interface, drivers, compilers, file managers, developer tools, etc. So you have a common Linux kernel shared across all of these distributions, and some custom software that differentiates the operating systems from each other.

We said earlier that Docker containers share the underlying kernel. What does that actually mean, sharing the kernel? Let's say we have a system with an Ubuntu OS with Docker installed on it. Docker can run any flavor of OS on top of it, as long as they are all based on the same kernel, in this case Linux.
If the underlying OS is Ubuntu, Docker can run a container based on another distribution like Debian, Fedora, SUSE or CentOS. Each Docker container only has the additional software, which we just talked about in the previous slide, that makes these operating systems different, and Docker utilizes the underlying kernel of the Docker host, which works with all of the OSs above. So what is an OS that does not share the same kernel? Windows. You won't be able to run a Windows-based container on a Docker host with Linux on it; for that you would require Docker on a Windows server. It is when I say this that most of my students go, "hey, hold on, that's not true", install Docker on Windows, run a container based on Linux and say "see, it's possible". Well, when you install Docker on Windows and run a Linux container, you're not really running a Linux container on Windows: Windows runs the Linux container on a Linux virtual machine under the hood. So it's really a Linux container on a Linux virtual machine on Windows. We discuss more about this in the Docker on Windows and Mac sections later in this course.

Now you might ask, isn't that a disadvantage then, not being able to run another kernel on the OS? The answer is no, because unlike hypervisors, Docker is not meant to virtualize and run different operating systems and kernels on the same hardware. The main purpose of Docker is to package and containerize applications, to ship them, and to run them anywhere, any time, as many times as you want.

That brings us to the differences between virtual machines and containers, a comparison we tend to make, especially those of us from a virtualization background. As you can see on the right, in the case of Docker we have the underlying hardware infrastructure, then the OS, and then Docker installed on the OS. Docker then manages the containers, which run with just libraries and dependencies. In the case of virtual machines we have the hypervisor, like ESX, on the hardware and then the virtual machines on it. Each virtual machine has its own OS inside it, then the dependencies, and then the application. That overhead causes higher utilization of the underlying resources, as there are multiple virtual operating systems and kernels running. The virtual machines also consume more disk space, as each VM is heavy and is usually gigabytes in size, whereas Docker containers are lightweight and are usually megabytes in size. This allows Docker containers to boot up faster, usually in a matter of seconds, whereas VMs, as we know, take minutes to boot up, as they need to boot an entire operating system. It is also important to note that Docker has less isolation, as more resources, like the kernel, are shared between containers, whereas VMs have complete isolation from each other. Since VMs don't rely on the underlying OS or kernel, you can run different types of applications, built on different OSs such as Linux-based or Windows-based apps, on the same hypervisor. So those are some differences between the two.

Now having said that, it's not an either-containers-or-virtual-machines situation: it's containers and virtual machines. When you have large environments with thousands of application containers running on thousands of Docker hosts, you will often see containers provisioned on virtual Docker hosts. That way we can utilize the advantages of both technologies: we can use the benefits of virtualization to easily provision or decommission Docker hosts as required, and at the same time make use of the benefits of Docker to easily provision applications and quickly scale them as required.
But remember that in this case we will not be provisioning as many virtual machines as we used to, because earlier we provisioned a virtual machine for each application, whereas now you might provision one virtual machine for hundreds or thousands of containers.

So how is it done? There are lots of containerized versions of applications readily available today. Most organizations have their products containerized and available in a public Docker registry called Docker Hub, or Docker Store. For example, you can find images of most common operating systems, databases, and other services and tools. Once you identify the images you need and you install Docker on your host, bringing up an application is as easy as running a docker run command with the name of the image. In this case, running a docker run ansible command will run an instance of Ansible on the Docker host. Similarly, run an instance of MongoDB, Redis and Node.js using the docker run command. If we need to run multiple instances of the web service, simply add as many instances as you need and configure a load balancer of some kind in front. In case one of the instances were to fail, simply destroy that instance and launch a new one. There are other solutions available for handling such cases that we will look at later in this course. For now, don't focus too much on the commands; we will get to that in a bit.

We've been talking about images and containers; let's understand the difference between the two. An image is a package or a template, just like a VM template that you might have worked with in the virtualization world. It is used to create one or more containers. Containers are running instances of images that are isolated and have their own environments and sets of processes. As we've seen before, a lot of products have been dockerized already. In case you cannot find what you're looking for, you could create your own image and push it to the Docker Hub repository, making it available to the public.

If you look at it traditionally, developers developed applications and then handed them over to an ops team to deploy and manage in production environments. They did that by providing a set of instructions, such as information about how the host must be set up, what prerequisites are to be installed on the host, how the dependencies are to be configured, and so on. Since the ops team did not develop the application themselves, they struggled with setting it up, and when they hit an issue they had to work with the developers to resolve it. With Docker, the developers and operations teams work hand in hand to transform that guide into a Dockerfile with both of their requirements. This Dockerfile is then used to create an image for the application. This image can now run on any host with Docker installed on it and is guaranteed to run the same way everywhere. The ops team can now simply use the image to deploy the application. Since the image was already working when the developer built it, and operations have not modified it, it continues to work the same way when deployed in production. That's one example of how a tool like Docker contributes to the DevOps culture. That's it for now; in the upcoming lecture we will look at how to get started with Docker.
We will now see how to get started with Docker. Docker has two editions: the Community Edition and the Enterprise Edition. The Community Edition is the set of free Docker products. The Enterprise Edition is the certified and supported container platform that comes with enterprise add-ons like image management, image security, and Universal Control Plane for managing and orchestrating container runtimes, but of course these come with a price. We will discuss more about container orchestration later in this course, along with some alternatives. For now we will go ahead with the Community Edition. The Community Edition is available on Linux, Mac, Windows, and on cloud platforms like AWS and Azure. In the upcoming demo we will take a look at how to install and get started with Docker on a Linux system. If you are on Mac or Windows, you have two options: either install a Linux VM using VirtualBox or some other virtualization platform and follow along with the upcoming demo, which is really the easiest way to get started with Docker, or install Docker Desktop for Mac or Docker Desktop for Windows, which are native applications. If that is what you want, check out the Docker for Mac and Windows sections towards the end of this course and then head back here once you are all set up. We will now head over to a demo and take a look at how to install Docker on a Linux machine.

In this demo we look at how to install and get started with Docker. First of all, identify a system, a physical or virtual machine or a laptop, that has a supported operating system; in my case I have an Ubuntu VM. Go to docker.com and click on Get Docker. You will be taken to the Docker Engine Community Edition page; that is the free version that we are after. From the left-hand menu select your system type, Linux in my case, and then select your OS flavor, Ubuntu in my case. Read through the prerequisites and requirements: your Ubuntu system must be 64-bit and one of the supported versions like Disco, Cosmic, Bionic or Xenial. In my case I have the Bionic version; to confirm, view the /etc/*release* file. Next, uninstall any older version if one exists, so let's just make sure there is none on my host. I'll just copy and paste that command, and I confirm that no older version exists on my system. The next step is to set up the repository and install the software. There are two ways to go about this. The first is using the package manager: first update the repository using the apt-get update command, then install the prerequisite packages, then add Docker's official GPG keys, and then install Docker. But I'm not going to go that route; there is an easier way. If you scroll all the way to the bottom you will find instructions to install Docker using the convenience script. It's a script that automates the entire installation process and works on most operating systems. Run the first command to download a copy of the script, and then run the second command to execute the script and install Docker automatically. Give it a few minutes to complete the installation. The installation is now successful. Let us now check the version of Docker using the docker version command; we've installed version 19.03.1.
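As a rough sketch, the convenience-script installation described above looks something like this (the exact script output and the version you get will differ from what is shown in the video):

    # download the convenience script and run it (assumes curl is installed)
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    # verify the installation by checking the client and server versions
    sudo docker version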
We will now run a simple container to ensure everything is working as expected. For this, head over to Docker Hub at hub.docker.com. Here you will find a list of the most popular Docker images like nginx, MongoDB, Alpine, Node.js, Redis, etc. Let's search for a fun image called whalesay. Whalesay is Docker's version of cowsay, a simple application that prints a picture of a cow saying something; in this case it happens to be a whale. Copy the docker run command given there, remember to add a sudo, and change the message to hello world. On running this command, Docker pulls the image of the whalesay application from Docker Hub and runs it, and we have our whale saying hello. Great, we're all set. Remember, for the purpose of this course you don't really need to set up a Docker system on your own; we provide hands-on labs that you will get access to. But if you wish to experiment on your own and follow along, feel free to do so.

We now look at some of the Docker commands. At the end of this lecture you will go through a hands-on quiz where you will practice working with these commands. Let's start by looking at the docker run command. The docker run command is used to run a container from an image. Running the docker run nginx command will run an instance of the nginx application on the Docker host, if the image already exists on the host. If the image is not present on the host, it will go out to Docker Hub and pull the image down, but this is only done the first time; for subsequent executions the same image will be reused.

The docker ps command lists all running containers and some basic information about them, such as the container ID, the name of the image used to run the container, the current status, and the name of the container. Each container automatically gets a random ID and name created for it by Docker, which in this case is silly_sammet. To see all containers, running or not, use the -a option. This outputs all running as well as previously stopped or exited containers. We will talk about the COMMAND and PORTS fields shown in this output later in the course; for now let's just focus on the basic commands.

To stop a running container, use the docker stop command, but you must provide either the container ID or the container name in the stop command. If you're not sure of the name, run the docker ps command to get it. On success you will see the name printed out, and running docker ps again will show no running containers. Running docker ps -a, however, shows the container silly_sammet in an exited state a few seconds ago. Now what if we don't want this container lying around consuming space, and we want to get rid of it for good? Use the docker rm command to remove a stopped or exited container permanently. If it prints the name back, we're good. Run the docker ps -a command again to verify that it's no longer present.

But what about the nginx image that was downloaded at first? We're not using it anymore, so how do we get rid of that image? First, how do we see a list of images present on our host? Run the docker images command to see a list of available images and their sizes. On our host we have four images: nginx, Redis, Ubuntu and Alpine. We will talk about tags later in this course when we discuss images. To remove an image that you no longer plan to use, run the docker rmi command. Remember, you must ensure that no containers are running off of that image before attempting to remove it; you must stop and delete all dependent containers to be able to delete an image.

When we ran the docker run command earlier, it downloaded the Ubuntu image as it couldn't find one locally. What if we simply want to download the image and keep it, so that when we run the docker run command we don't have to wait for the download? Use the docker pull command to only pull the image and not run a container. So in this case the docker pull ubuntu command pulls the Ubuntu image and stores it on our host.
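Put together, the basic workflow just described looks roughly like this (container names and IDs are randomly generated on your own host, so silly_sammet is just the example name from the lecture):

    docker run nginx             # run a container from the nginx image (pulls it the first time)
    docker ps                    # list running containers
    docker ps -a                 # list all containers, including stopped or exited ones
    docker stop silly_sammet     # stop a container by name (or ID)
    docker rm silly_sammet       # remove the stopped container permanently
    docker images                # list images present on the host
    docker rmi nginx             # remove an image (stop and delete dependent containers first)
    docker pull ubuntu           # pull an image without running a container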
Let's look at another example. Say you were to run a Docker container from an Ubuntu image. When you run the docker run ubuntu command, it runs an instance of the Ubuntu image and exits immediately. If you were to list the running containers, you wouldn't see the container running. If you list all containers, including those that are stopped, you will see that the new container you ran is in an exited state. Now why is that? Unlike virtual machines, containers are not meant to host an operating system. Containers are meant to run a specific task or process, such as hosting an instance of a web server, application server or database, or simply carrying out some kind of computation or analysis task. Once the task is complete, the container exits. A container only lives as long as the process inside it is alive. If the web service inside the container is stopped or crashes, the container exits. This is why a container run from an Ubuntu image stops immediately: Ubuntu is just an image of an operating system that is used as the base image for other applications, and there is no process or application running in it by default. If the image isn't running any service, as is the case with Ubuntu, you could instruct Docker to run a process with the docker run command, for example a sleep command with a duration of 5 seconds. When the container starts, it runs the sleep command and sleeps for 5 seconds, after which the sleep command exits and the container stops.

What we just saw was executing a command when we run the container. But what if we would like to execute a command on a running container? For example, when I run the docker ps command I can see that there is a running container which uses the Ubuntu image and sleeps for 100 seconds. Say I would like to see the contents of a file inside this particular container: I could use the docker exec command to execute a command on my Docker container, in this case to print the contents of the /etc/hosts file.

Finally, let's look at one more option before we head over to the practice exercises. I'm now going to run a Docker image I developed for a simple web application. The repository is called kodekloud/simple-webapp; it runs a simple web server that listens on port 8080. When you run a docker run command like this, it runs in the foreground, or attached mode, meaning you are attached to the console, or the standard out, of the Docker container, and you see the output of the web service on your screen. You won't be able to do anything else on this console other than view the output until the Docker container stops; it won't respond to your inputs. Press the Ctrl+C combination to stop the container; the application hosted on the container exits and you get back to your prompt. Another option is to run the Docker container in detached mode by providing the -d option. This runs the container in the background and you are back to your prompt immediately, while the container continues to run. Run the docker ps command to view the running container. If you would like to attach back to the running container later, run the docker attach command and specify the name or ID of the container. Remember, if you are specifying the ID of a container in any Docker command, you can simply provide the first few characters alone, just enough so it is distinct from the other container IDs on the host. Don't worry about accessing the UI of the web server for now; we will look more into that in the upcoming lectures. For now let's just understand the basic commands.
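As a hedged sketch, those last few examples could look like this (the image name kodekloud/simple-webapp and the 100-second sleep are the values used in the lecture; your container names and IDs will differ, so the placeholders in angle brackets stand for whatever docker ps shows you):

    docker run ubuntu sleep 100                    # run a container that executes a given command
    docker exec <container-name> cat /etc/hosts    # execute a command inside a running container

    docker run kodekloud/simple-webapp             # attached mode: output streams to your terminal
    docker run -d kodekloud/simple-webapp          # detached mode: container runs in the background
    docker ps                                      # find the container's ID or name
    docker attach <container-id>                   # re-attach; the first few characters of the ID suffice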
We'll now get our hands dirty with the Docker CLI, so let's take a look at how to access the practice lab environments. Let me walk you through the hands-on lab practice environment. The links to access the labs associated with this course are available at kodekloud.com/p/docker-labs; this link is also given in the description of this video. Once you're on that page, use the links given there to access the labs associated with your lecture. Each lecture has its own lab, so remember to choose the right lab for your lecture. The labs open up right in your browser; I would recommend using Google Chrome while working with the labs.

The interface consists of two parts: a terminal on the left and a quiz portal on the right. The quiz portal gives you challenges to solve. Follow the quiz, try to answer the questions asked, and complete the tasks given to you. Each scenario consists of anywhere from 10 to 20 questions that need to be answered within 30 minutes to an hour. At the top you have the question numbers, below that is the remaining time for your lab, and below that is the question. If you're not able to solve a challenge, look for hints in the hints section. You may skip a question by hitting the skip button in the top right corner, but remember that you will not be able to go back to a previous question once you have skipped it. If the quiz portal gets stuck for some reason, click on the quiz portal tab at the top to open it in a separate window. The terminal gives you access to a real system running Docker. You can run any Docker command here and run your own containers or applications; you would typically be running commands to solve the tasks assigned in the quiz portal. You may play around and experiment with this environment, but make sure you do that after you've gone through the quiz, so that your work does not interfere with the tasks provided by the quiz.

Let me walk you through a few questions. There are two types of questions. Each lab scenario starts with a set of exploratory multiple-choice questions, where you're asked to explore and find information in the given environment and select the right answer; this is to get you familiarized with the setup. You are then asked to perform tasks, like run a container, stop it, delete it, build your own image, etc. Here the first question asks us to find the version of the Docker server engine running on the host, so run the docker version command in the terminal, identify the right version, and select the appropriate option from the given choices. Another example is the fourth question, which asks you to run a container using the Redis image. If you're not sure of the command, click on hints and it will show you a hint. We now run a Redis container using the docker run redis command, wait for the container to run, and once done click on check to check your work. We have now successfully completed the task. Similarly, follow along and complete all tasks. Once the lab exercise is completed, remember to leave feedback and let us know how it went.

A few things to note: these are publicly accessible labs that anyone can access, so if you find yourself logged out during a peak hour, please wait for some time and try again. Also remember not to store any private or confidential data on these systems. This environment is for learning purposes only and is only alive for an hour, after which the lab is destroyed along with all your work, but you may start over and access these labs as many times as you want until you feel confident. I will also post solutions to these lab quizzes, so if you run into issues you may refer to those.
That's it for now; head over to the first challenge and I will see you on the other side.

We will now look at some of the other docker run commands. At the end of this lecture you will go through a hands-on quiz where you will practice working with these commands. We learned that we could use the docker run redis command to run a container running a Redis service, in this case the latest version of Redis, which happens to be 5.0.5 as of today. But what if we want to run another version of Redis, for example an older version, say 4.0? Then you specify the version separated by a colon; this is called a tag. In that case Docker pulls an image of the 4.0 version of Redis and runs that. Also notice that if you don't specify any tag, as in the first command, Docker will consider the default tag to be latest. latest is a tag associated with the latest version of that software, which is governed by the authors of that software. So as a user, how do you find information about these versions and what the latest version is? At hub.docker.com, look up an image and you will find all the supported tags in its description. Each version of the software can have multiple short and long tags associated with it, as seen here; in this case version 5.0.5 also has the latest tag on it.

Let's now look at inputs. I have a simple prompt application that, when run, asks for my name, and on entering my name prints a welcome message. If I were to dockerize this application and run it as a Docker container like this, it wouldn't wait at the prompt; it just prints whatever the application is supposed to print on standard out. That is because, by default, a Docker container does not listen to standard input. Even though you are attached to its console, it is not able to read any input from you; it doesn't have a terminal to read inputs from, and it runs in a non-interactive mode. If you would like to provide input, you must map the standard input of your host to the Docker container using the -i parameter. The -i parameter is for interactive mode, and when I input my name it prints the expected output. But there is still something missing: the prompt. When we ran the app at first, it asked us for our name, but when dockerized, that prompt is missing, even though it seems to have accepted my input. That is because the application prompts on the terminal, and we have not attached to the container's terminal. For this, use the -t option as well; the -t stands for a pseudo-terminal. So with the combination of -it we are now attached to the terminal as well as in interactive mode on the container.
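A quick sketch of those two ideas, using a hypothetical image name simple-prompt to stand in for the prompt application from the lecture:

    docker run redis                 # no tag specified: Docker assumes redis:latest
    docker run redis:4.0             # explicit tag: pulls and runs version 4.0

    docker run -i simple-prompt      # -i maps your standard input into the container
    docker run -it simple-prompt     # -t also attaches a pseudo-terminal, so the prompt is shown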
We will now look at port mapping, or port publishing, on containers. Let's go back to the example where we run a simple web application in a Docker container on my Docker host. Remember, the underlying host where Docker is installed is called the Docker host or Docker engine. When we run a containerized web application, it runs and we are able to see that the server is running, but how does a user access my application? As you can see, my application is listening on port 5000, so I could access it using port 5000, but what IP do I use to access it from a web browser? There are two options. One is to use the IP of the Docker container. Every Docker container gets an IP assigned by default; in this case it is 172.17.0.2. But remember that this is an internal IP and is only accessible within the Docker host, so if you open a browser from within the Docker host, you can go to http://172.17.0.2:5000 to access the application. Since this is an internal IP, users outside of the Docker host cannot access it. For that we could use the IP of the Docker host, which is 192.168.1.5, but for that to work you must have mapped the port inside the Docker container to a free port on the Docker host. For example, if I want users to access my application through port 80 on my Docker host, I could map port 80 of the host to port 5000 on the Docker container using the -p parameter in my run command, like this. The user can then access my application by going to the URL http://192.168.1.5:80, and all traffic on port 80 of my Docker host will get routed to port 5000 inside the Docker container. This way you can run multiple instances of your application and map them to different ports on the Docker host, or run instances of different applications on different ports. For example, in this case I am running an instance of MySQL that runs a database on my host and listens on the default MySQL port, which happens to be 3306, and another instance of MySQL on another port, 8306. You can run as many applications like this and map them to as many ports as you want, and of course you cannot map to the same port on the Docker host more than once. We will discuss more about port mapping and networking of containers in the networking lecture later on.

Let's now look at how data is persisted in a Docker container. For example, let's say you were to run a MySQL container. When databases and tables are created, the data files are stored in the location /var/lib/mysql inside the Docker container. Remember, the Docker container has its own isolated filesystem, and any changes to any files happen within the container. Let's assume you dump a lot of data into the database. What happens if you were to delete the MySQL container? As soon as you do that, the container along with all the data inside it gets blown away, meaning all your data is gone. If you would like to persist data, you would want to map a directory outside the container, on the Docker host, to a directory inside the container. In this case I create a directory called /opt/datadir and map it to /var/lib/mysql inside the Docker container, using the -v option and specifying the directory on the Docker host followed by a colon and the directory inside the container. This way, when the Docker container runs, it implicitly mounts the external directory to the folder inside the container, and all your data is now stored in the external volume at /opt/datadir and thus remains even if you delete the Docker container.

The docker ps command is good enough to get basic details about containers, like their names and IDs, but if you would like to see additional details about a specific container, use the docker inspect command and provide the container name or ID. It returns all details of the container in JSON format, such as the state, mounts, configuration data, network settings, etc. Remember to use it when you're required to find details about a container. And finally, how do we see the logs of a container we ran in the background? For example, I ran my simple web application using the -d parameter and it ran the container in detached mode. How do I view the logs, which happen to be the contents written to the standard out of that container? Use the docker logs command and specify the container ID or name. That's it for this lecture; head over to the challenges and practice working with Docker commands.
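Roughly, the commands behind those examples would look like this (the web-app image name is carried over from the earlier lecture and is an assumption; host paths and port numbers follow the examples above):

    docker run -p 80:5000 kodekloud/simple-webapp    # host port 80 routed to container port 5000
    docker run -p 3306:3306 mysql                    # MySQL on its default port
    docker run -p 8306:3306 mysql                    # a second MySQL instance on another host port

    docker run -v /opt/datadir:/var/lib/mysql mysql  # persist database files outside the container

    docker inspect <container-name>                  # full container details in JSON
    docker logs <container-name>                     # standard output of a detached container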
Let's start with a simple web application written in Python. This piece of code creates a web application that displays a web page with a background color. If you look closely at the application code, you will see a line that sets the background color to red. That works just fine; however, if you decide to change the color in the future, you will have to change the application code. It is a best practice to move such information out of the application code and into, say, an environment variable called APP_COLOR. The next time you run the application, set the APP_COLOR environment variable to a desired value, and the application now has a new color. Once your application gets packaged into a Docker image, you would run it with the docker run command followed by the name of the image. If you wish to pass the environment variable as we did before, you would now use the docker run command's -e option to set an environment variable within the container. To deploy multiple containers with different colors, you would run the docker run command multiple times and set a different value for the environment variable each time. So how do you find the environment variables set on a container that's already running? Use the docker inspect command to inspect the properties of the running container; under the Config section you will find the list of environment variables set on the container. Well, that's it for this lecture on configuring environment variables in Docker.
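As a small sketch, assuming the web app from this lecture is published under a name like kodekloud/simple-webapp-color (the exact image name is an assumption for illustration):

    docker run -e APP_COLOR=blue kodekloud/simple-webapp-color    # set an env variable in the container
    docker run -e APP_COLOR=green kodekloud/simple-webapp-color   # another instance, different color

    docker inspect <container-name>    # look under "Config" -> "Env" for the variables set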
Hello and welcome to this lecture on Docker images. In this lecture we're going to see how to create your own image. Before that, why would you need to create your own image? It could be because you cannot find a component or a service that you want to use as part of your application on Docker Hub already, or because you and your team decided that the application you're developing will be dockerized for ease of shipping and deployment. In this case I'm going to containerize a simple web application that I have built using the Python Flask framework.

First we need to understand what we are containerizing, or what application we are creating an image for, and how the application is built. So start by thinking about what you would do if you wanted to deploy the application manually, and write down the steps required in the right order. If I were to set up this simple web application manually, I would start with an operating system like Ubuntu, then update the package repositories using the apt command, then install dependencies using the apt command, then install the Python dependencies using the pip command, then copy over the source code of my application to a location like /opt, and finally run the web server using the flask command.

Now that I have the instructions, I create a Dockerfile from them. Here's a quick overview of the process of creating your own image. First, create a file named Dockerfile and write down the instructions for setting up your application in it, such as installing dependencies, where to copy the source code from and to, and what the entrypoint of the application is. Once done, build your image using the docker build command, specifying the Dockerfile as input as well as a tag name for the image. This creates an image locally on your system. To make it available on the public Docker Hub registry, run the docker push command and specify the name of the image you just created; in this case the name of the image is my account name, mmumshad, followed by the image name, my-custom-app.

Now let's take a closer look at that Dockerfile. A Dockerfile is a text file written in a specific format that Docker can understand: an instruction-and-arguments format. In this Dockerfile, everything on the left in capitals is an instruction; FROM, RUN, COPY and ENTRYPOINT are all instructions, and each of them instructs Docker to perform a specific action while creating the image. Everything on the right is an argument to those instructions. The first line, FROM ubuntu, defines what the base OS should be for this container. Every Docker image must be based on another image: either an OS, or another image that was created before based on an OS. You can find official releases of all operating systems on Docker Hub. It's important to note that every Dockerfile must start with a FROM instruction. The RUN instruction instructs Docker to run a particular command on the base image, so at this point Docker runs the apt-get update command to fetch the updated packages and installs the required dependencies on the image. The COPY instruction copies files from the local system onto the Docker image; in this case the source code of our application is in the current folder, and I am copying it over to the location /opt/source-code inside the Docker image. Finally, ENTRYPOINT allows us to specify the command that will be run when the image is run as a container.

When Docker builds an image, it builds it in a layered architecture. Each line of instruction creates a new layer in the Docker image, with just the changes from the previous layer. For example, the first layer is the base Ubuntu OS, the second instruction creates a second layer which installs all the apt packages, the third instruction creates a third layer with the Python packages, the fourth layer copies the source code over, and the final layer updates the entrypoint of the image. Since each layer only stores the changes from the previous layer, this is reflected in the size as well: the base Ubuntu image is around 120 MB, the apt packages I install are around 300 MB, and the remaining layers are small. You can see this information if you run the docker history command followed by the image name. When you run the docker build command, you can see the various steps involved and the result of each task. All the layers built are cached by Docker, so the layered architecture helps you restart docker build from a particular step in case it fails, or add new steps in the build process without having to start all over again. So in case a particular step was to fail, for example step three in this case, and you were to fix the issue and rerun docker build, it will reuse the previous layers from cache and continue to build the remaining layers. The same is true if you were to add additional steps to the Dockerfile. This way rebuilding your image is faster and you don't have to wait for Docker to rebuild the entire image each time. This is helpful especially when you update the source code of your application, as it may change more frequently; only the layers above the updated layer need to be rebuilt.
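A rough sketch of the Dockerfile described in this lecture might look like the following; the package names, paths and flask invocation are approximations of what the video shows, not an exact copy:

    FROM ubuntu

    RUN apt-get update && apt-get install -y python python-pip   # OS-level packages
    RUN pip install flask                                         # Python dependencies

    COPY . /opt/source-code                                       # copy application source into the image

    ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run        # command run when a container starts

It would then be built and pushed roughly like this, using the account and image names mentioned above:

    docker build -t mmumshad/my-custom-app .
    docker push mmumshad/my-custom-app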
We just saw a number of products containerized, such as databases, development tools, operating systems, etc. But that's not all: you can containerize almost every application, even simple ones like browsers, utilities like curl, or applications like Spotify, Skype, etc. Basically you can containerize everything, and going forward I see that that's how everyone is going to run applications: nobody is going to install anything anymore; instead they're just going to run it using Docker, and when they don't need it anymore, get rid of it easily without having to clean up too much.

In this lecture we will look at commands, arguments and entrypoints in Docker. Let's start with a simple scenario. Say you were to run a Docker container from an Ubuntu image. When you run the docker run ubuntu command, it runs an instance of the Ubuntu image and exits immediately. If you were to list the running containers, you wouldn't see the container running. If you list all containers, including those that are stopped, you will see that the new container you ran is in an exited state. Now why is that? Unlike virtual machines, containers are not meant to host an operating system. Containers are meant to run a specific task or process, such as hosting an instance of a web server, application server or database, or simply carrying out some kind of computation or analysis. Once the task is complete, the container exits; a container only lives as long as the process inside it is alive. If the web service inside the container is stopped or crashes, the container exits.

So who defines what process is run within the container? If you look at the Dockerfile for popular Docker images like nginx, you will see an instruction called CMD, which stands for command, that defines the program that will be run within the container when it starts. For the nginx image it is the nginx command; for the MySQL image it is the mysqld command. What we tried to do earlier was to run a container with a plain Ubuntu operating system. If you look at the Dockerfile for that image, you will see that it uses bash as the default command. Now bash is not really a process like a web server or database server; it is a shell that listens for inputs from a terminal, and if it cannot find a terminal it exits. When we ran the Ubuntu container earlier, Docker created a container from the Ubuntu image and launched the bash program. By default Docker does not attach a terminal to a container when it is run, so the bash program does not find a terminal and exits. Since the process that was started when the container was created has finished, the container exits as well.

So how do you specify a different command to start the container? One option is to append a command to the docker run command; that way it overrides the default command specified within the image. In this case I run the docker run ubuntu command with sleep 5 appended, so when the container starts it runs the sleep program, waits for 5 seconds, and then exits. But how do you make that change permanent? Say you want the image to always run the sleep command when it starts: you would then create your own image from the base Ubuntu image and specify a new command. There are different ways of specifying the command: either simply as is, in shell form, or in a JSON array format. But remember, when you use the JSON array format, the first element in the array should be the executable, in this case the sleep program. Do not specify the command and its parameters together as one element: the command and its parameters should be separate elements in the list. I now build my new image using the docker build command and name it ubuntu-sleeper. I can now simply run the docker run ubuntu-sleeper command and get the same result: it always sleeps for 5 seconds and exits.
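A minimal sketch of that ubuntu-sleeper image, assuming the same 5-second sleep used in the lecture:

    FROM ubuntu

    CMD ["sleep", "5"]    # JSON array form: executable and parameter as separate elements

It would then be built and run like this:

    docker build -t ubuntu-sleeper .
    docker run ubuntu-sleeper        # sleeps for 5 seconds and exits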
But what if I wish to change the number of seconds it sleeps? Currently it is hard-coded to 5 seconds. As we learned before, one option is to run the docker run command with the new command appended to it, in this case sleep 10, so the command that runs at startup will be sleep 10. But that doesn't look very good: the name of the image, ubuntu-sleeper, in itself implies that the container will sleep, so we shouldn't have to specify the sleep command again. Instead, we would like it to work like this: docker run ubuntu-sleeper 10. We only want to pass in the number of seconds the container should sleep, and the sleep command should be invoked automatically. That is where the ENTRYPOINT instruction comes into play. The ENTRYPOINT instruction is like the CMD instruction, in that you can specify the program that will be run when the container starts, but whatever you specify on the command line, in this case 10, gets appended to the entrypoint. So the command that runs when the container starts is sleep 10. That's the difference between the two: in the case of the CMD instruction, the command-line parameters passed replace it entirely, whereas in the case of ENTRYPOINT the command-line parameters get appended.

Now, in the second case, what if I run the ubuntu-sleeper image without appending the number of seconds? Then the command at startup will be just sleep, and you get an error that the operand is missing. So how do you configure a default value for the command, if one was not specified on the command line? That's where you would use both ENTRYPOINT and CMD. In this case the CMD instruction is appended to the ENTRYPOINT instruction, so at startup the command will be sleep 5 if you don't specify any parameters on the command line; if you do, that overrides the CMD instruction. And remember, for this to work you should always specify the ENTRYPOINT and CMD instructions in the JSON format.

Finally, what if you really want to modify the entrypoint at runtime, say from sleep to an imaginary sleep2.0 command? In that case you can override it using the --entrypoint option in the docker run command. The final command at startup would then be sleep2.0 10. Well, that's it for this lecture and I will see you in the next.
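Putting the pieces together, a sketch of the final ubuntu-sleeper Dockerfile and the behaviors just described:

    FROM ubuntu

    ENTRYPOINT ["sleep"]    # program that always runs at startup
    CMD ["5"]               # default parameter, used if none is given on the command line

Usage would then look like this:

    docker run ubuntu-sleeper                              # runs "sleep 5"
    docker run ubuntu-sleeper 10                           # runs "sleep 10" (CMD is overridden)
    docker run --entrypoint sleep2.0 ubuntu-sleeper 10     # runs the imaginary "sleep2.0 10"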
We now look at networking in Docker. When you install Docker, it creates three networks automatically: bridge, none and host. bridge is the default network a container gets attached to. If you would like to associate the container with any other network, specify the network using the --network command-line parameter. Let's look at each of these networks. The bridge network is a private internal network created by Docker on the host. All containers attach to this network by default, and they get an internal IP address, usually in the 172.17.x.x range. The containers can access each other using this internal IP if required. To access any of these containers from the outside world, map the ports of these containers to ports on the Docker host, as we have seen before. Another way to access the containers externally is to associate the container with the host network. This removes any network isolation between the Docker host and the Docker container, meaning that if you were to run a web server on port 5000 in a web-app container, it is automatically accessible on the same port externally, without requiring any port mapping, as the web container uses the host's network. This also means that, unlike before, you cannot run multiple web containers on the same host on the same port, as the ports are now common to all containers in the host network. With the none network, the containers are not attached to any network and don't have any access to the external network or to other containers; they run in an isolated network.

So we just saw the default bridge network, with the network address 172.17.0.1. All containers associated with this default network are able to communicate with each other. But what if we wish to isolate the containers within the Docker host, for example the first two web containers on one internal network in the 172 series and the next two containers on a different internal network, like 182? By default Docker only creates one internal bridge network, but we can create our own internal network using the docker network create command, specifying the driver, which is bridge in this case, and the subnet for that network, followed by the custom isolated network name. Run the docker network ls command to list all networks.

So how do we see the network settings and the IP address assigned to an existing container? Run the docker inspect command with the ID or name of the container, and you will find a section on network settings. There you can see the type of network the container is attached to, its internal IP address, MAC address and other settings.

Containers can reach each other using their names. For example, in this case I have a web server and a MySQL database container running on the same node. How can I get my web server to access the database on the database container? One thing I could do is use the internal IP address assigned to the MySQL container, which in this case is 172.17.0.3, but that is not ideal, because it is not guaranteed that the container will get the same IP when the system reboots. The right way to do it is to use the container name: all containers in a Docker host can resolve each other by the name of the container. Docker has a built-in DNS server that helps the containers resolve each other using the container name. Note that the built-in DNS server always runs at the address 127.0.0.11.

So how does Docker implement networking? What's the technology behind it, and how are the containers isolated within the host? Docker uses network namespaces, which create a separate namespace for each container, and then uses virtual Ethernet pairs to connect the containers together. That's all we can say about it for now; these are advanced concepts discussed in the advanced Docker course on KodeKloud. That's all for this lecture on networking; head over to the practice tests and practice working with networking in Docker. I will see you in the next lecture.
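As a hedged sketch, the commands referenced in this lecture look roughly like this (the subnet and network name are illustrative values, not anything fixed by Docker):

    docker run --network=host nginx      # use the host's network stack directly, no port mapping needed
    docker run --network=none ubuntu     # fully isolated, no network access at all

    # create a custom internal bridge network
    docker network create --driver bridge --subnet 182.18.0.0/16 custom-isolated-network
    docker network ls                    # list all networks

    docker inspect <container-name>      # look under "NetworkSettings" for IP and MAC details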
Hello and welcome to this lecture, where we are learning advanced Docker concepts. In this lecture we're going to talk about Docker storage drivers and file systems: where and how Docker stores data and how it manages the file systems of containers. Let us start with how Docker stores data on the local file system. When you install Docker on a system, it creates a folder structure at /var/lib/docker. You have multiple folders under it, called aufs, containers, image, volumes, etc. This is where Docker stores all its data by default; by data I mean files related to images and containers running on the Docker host. For example, all files related to containers are stored under the containers folder, and the files related to images are stored under the image folder. Any volumes created by Docker containers are created under the volumes folder. Don't worry about that for now; we will come back to it in a bit. For now, let's just understand where Docker stores its files and in what format.

So how exactly does Docker store the files of an image and a container? To understand that, we need to understand Docker's layered architecture. Let's quickly recap something we learned earlier: when Docker builds images, it builds them in a layered architecture. Each line of instruction in the Dockerfile creates a new layer in the Docker image, with just the changes from the previous layer. For example, the first layer is the base Ubuntu operating system, the second instruction creates a second layer which installs all the apt packages, the third instruction creates a third layer with the Python packages, the fourth layer copies the source code over, and finally the fifth layer updates the entrypoint of the image. Since each layer only stores the changes from the previous layer, this is reflected in the size as well: the base Ubuntu image is around 120 megabytes, the apt packages I install are around 300 MB, and the remaining layers are small.

To understand the advantages of this layered architecture, let's consider a second application. This application has a different Dockerfile, but it's very similar to our first application: it uses the same base Ubuntu image and the same Python and Flask dependencies, but uses different source code to create a different application, and so a different entrypoint as well. When I run the docker build command to build a new image for this application, since the first three layers of both applications are the same, Docker is not going to build the first three layers; instead it reuses the same three layers it built for the first application from the cache, and only creates the last two layers with the new sources and the new entrypoint. This way Docker builds images faster and efficiently saves disk space. This also applies when you update your application code: whenever you update your application code, such as app.py in this case, Docker simply reuses all the previous layers from cache and quickly rebuilds the application image by adding the latest source code, thus saving us a lot of time during rebuilds and updates.

Let's rearrange the layers bottom-up so we can understand it better. At the bottom we have the base Ubuntu layer, then the packages, then the dependencies, then the source code of the application, and then the entrypoint. All of these layers are created when we run the docker build command to form the final Docker image, so all of these are the Docker image layers. Once the build is complete, you cannot modify the contents of these layers: they are read-only, and you can only modify them by initiating a new build. When you run a container based on this image using the docker run command, Docker creates a container based on these layers and creates a new writable layer on top of the image layers. The writable layer is used to store data created by the container, such as log files written by the applications, any temporary files generated by the container, or just any file modified by the user on that container. The life of this layer, though, is only as long as the container is alive: when the container is destroyed, this layer and all of the changes stored in it are also destroyed.
Remember that the same image layer is shared by all containers created using this image. If I were to log in to the newly created container and, say, create a new file called temp.txt, it will create that file in the container layer, which is read-write. We just said that the files in the image layer are read-only, meaning you cannot edit anything in those layers. Let's take the example of our application code: since we bake our code into the image, the code is part of the image layer and as such is read-only. After running a container, what if I wish to modify the source code to, say, test a change? Remember, the same image layer may be shared between multiple containers created from this image, so does that mean I cannot modify this file inside the container? No, I can still modify this file, but before I save the modified file, Docker automatically creates a copy of the file in the read-write layer, and I will then be modifying a different version of the file in the read-write layer. All future modifications will be done on this copy of the file in the read-write layer. This is called the copy-on-write mechanism. The image layer being read-only just means that the files in these layers will not be modified in the image itself, so the image will remain the same all the time, until you rebuild it using the docker build command. What happens when we get rid of the container? All of the data that was stored in the container layer also gets deleted: the change we made to app.py and the new temp file we created will also be removed. So what if we wish to persist this data? For example, if we were working with a database, we would like to preserve the data created by the container. We could add a persistent volume to the container. To do this, first create a volume using the docker volume create command. When I run the docker volume create data_volume command, it creates a folder called data_volume under the /var/lib/docker/volumes directory. Then, when I run the container using the docker run command, I can mount this volume inside the container's read-write layer using the -v option, like this: docker run -v, then my newly created volume name, followed by a colon and the location inside the container, which is the default location where MySQL stores data, /var/lib/mysql, and then the image name, mysql. This will create a new container and mount the data volume we created into the /var/lib/mysql folder inside the container, so all data written by the database is in fact stored on the volume created on the Docker host. Even if the container is destroyed, the data remains. Now what if you didn't run the docker volume create command before the docker run command? For example, if I run the docker run command to create a new instance of the MySQL container with the volume data_volume2, which I have not created yet, Docker will automatically create a volume named data_volume2 and mount it to the container. You should be able to see all these volumes if you list the contents of the /var/lib/docker/volumes folder. This is called volume mounting, as we are mounting a volume created by Docker under the /var/lib/docker/volumes folder.
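A rough sketch of those commands follows; the volume and container names and the MySQL root password are placeholders for illustration:

    # create a named volume; it lives under /var/lib/docker/volumes/data_volume
    docker volume create data_volume
    # mount it at MySQL's default data directory inside the container
    docker run -d --name mysql-db -e MYSQL_ROOT_PASSWORD=db_pass123 \
        -v data_volume:/var/lib/mysql mysql
    # if the volume does not exist yet, docker run simply creates it for you
    docker run -d --name mysql-db2 -e MYSQL_ROOT_PASSWORD=db_pass123 \
        -v data_volume2:/var/lib/mysql mysql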
But what if we had our data already at another location? For example, let's say we have some external storage on the Docker host at /data, and we would like to store the database data on that storage and not in the default /var/lib/docker/volumes folder. In that case we would run the container using the command docker run -v, but this time we provide the complete path to the folder we would like to mount, that is /data/mysql, so it will create a container and mount that folder into the container. This is called bind mounting. So there are two types of mounts: a volume mount and a bind mount. A volume mount mounts a volume from the volumes directory, and a bind mount mounts a directory from any location on the Docker host. One final point to note before I let you go: the -v option is the old style; the new way is to use the --mount option. The --mount option is the preferred way, as it is more verbose: you have to specify each parameter in a key=value format. For example, the previous command can be written with the --mount option using the type, source and target options, where the type in this case is bind, the source is the location on my host, and the target is the location in my container.
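As a quick sketch of both styles (the host path is the /data/mysql example from above, and the container names and MySQL password are just placeholders):

    # old-style bind mount: host path, a colon, then the path inside the container
    docker run -d --name mysql-db -e MYSQL_ROOT_PASSWORD=db_pass123 \
        -v /data/mysql:/var/lib/mysql mysql
    # the equivalent, preferred --mount syntax with explicit key=value parameters
    docker run -d --name mysql-db2 -e MYSQL_ROOT_PASSWORD=db_pass123 \
        --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql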
So who is responsible for doing all of these operations: maintaining the layered architecture, creating a writable layer, moving files across layers to enable copy-on-write, and so on? It's the storage drivers. Docker uses storage drivers to enable the layered architecture. Some of the common storage drivers are AUFS, BTRFS, ZFS, Device Mapper, Overlay and Overlay2. The selection of the storage driver depends on the underlying OS: for example, with Ubuntu the default storage driver is AUFS, whereas this storage driver is not available on other operating systems like Fedora or CentOS, where Device Mapper may be a better option. Docker will choose the best storage driver available automatically, based on the operating system. The different storage drivers also provide different performance and stability characteristics, so you may want to choose one that fits the needs of your application and your organization. If you would like to read more on any of these storage drivers, please refer to the links in the attached documentation. For now, that is all on the Docker architecture concepts; see you in the next lecture. [Music] Hello and welcome to this lecture on Docker Compose. Going forward we will be working with configurations in YAML files, so it is important that you are comfortable with YAML. Let's recap a few things real quick. Earlier in the course we learned how to run a Docker container using the docker run command. If we needed to set up a complex application running multiple services, a better way to do it is to use Docker Compose. With Docker Compose we create a configuration file in YAML format, called docker-compose.yml, and put together the different services and the options specific to running them in this file. Then we simply run a docker compose up command to bring up the entire application stack. This is easier to implement, run and maintain, as all changes are always stored in the Docker Compose configuration file. However, this is all only applicable to running containers on a single Docker host. For now, don't worry about the YAML file; we will take a closer look at it in a bit and see how to put it together. That was a really simple application that I put together; let us look at a better example. I'm going to use the same sample application that everyone uses to demonstrate Docker: a simple yet comprehensive application developed by Docker to demonstrate the various features available in running an application stack on Docker. So let's first get familiarized with the application, because we will be working with the same application in different sections through the rest of this course. This is a sample voting application which provides an interface for a user to vote and another interface to show the results. The application consists of various components, such as the voting app, a web application developed in Python that provides the user with an interface to choose between two options, a cat and a dog. When you make a selection, the vote is stored in Redis; for those of you who are new to Redis, Redis in this case serves as an in-memory database. This vote is then processed by the worker, an application written in .NET, which takes the new vote and updates the persistent database, a PostgreSQL database in our case. The PostgreSQL database simply has a table with the number of votes for each category, cats and dogs; in this case it increments the number of votes for cats, as our vote was for cats. Finally, the result of the vote is displayed in a web interface, which is another web application, developed in Node.js. This result application reads the count of votes from the PostgreSQL database and displays it to the user. So that is the architecture and data flow of this simple voting application stack. As you can see, this sample application is built with a combination of different services, different development tools and multiple different development platforms, such as Python, Node.js, .NET, etc. This sample application will be used to showcase how easy it is to set up an entire application stack consisting of diverse components in Docker. Let us keep aside Docker Swarm, services and stacks for a minute, and see how we can put together this application stack on a single Docker engine, using first docker run commands and then Docker Compose. Let us assume that all images of the applications are already built and are available on a Docker repository. Let us start with the data layer first: we run the docker run redis command to start an instance of Redis; we add the -d parameter to run this container in the background, and we also name the container redis. Now, naming the containers is important. Why is that important? Hold that thought; we will come to it in a bit. Next we deploy the PostgreSQL database by running the docker run postgres command. This time too we add the -d option to run this in the background, and name this container db, for database. Next we start with the application services. We deploy a front-end app for the voting interface by running an instance of the voting-app image: run the docker run command and name the instance vote. Since this is a web server, it has a web UI running on port 80; we publish that port to 5000 on the host system, so we can access it from a browser. Next we deploy the result web application that shows the results to the user: for this we deploy a container using the result-app image and publish port 80 to port 5001 on the host. This way we can access the web UI of the result app in a browser. Finally, we deploy the worker by running an instance of the worker image.
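Put together, the commands look roughly like this; the voting-app, result-app and worker image names are placeholders for the application images we assumed were already built:

    docker run -d --name redis redis
    docker run -d --name db postgres
    docker run -d --name vote -p 5000:80 voting-app
    docker run -d --name result -p 5001:80 result-app
    docker run -d --name worker worker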
Okay, now this is all good, and we can see that all the instances are running on the host, but there is a problem: it just does not seem to work. The problem is that we have successfully run all the different containers, but we haven't actually linked them together. We haven't told the voting web application to use this particular Redis instance (there could be multiple Redis instances running), and we haven't told the worker and the result app to use this particular PostgreSQL database that we ran. So how do we do that? That is where we use links. --link is a command-line option which can be used to link two containers together. For example, the voting app web service is dependent on the Redis service: when the web server starts, as you can see in this piece of code on the web server, it looks for a Redis service running on host redis, but the voting app container cannot resolve a host by the name redis. To make the voting app aware of the Redis service, we add a link option while running the voting app container, to link it to the Redis container: add a --link option to the docker run command and specify the name of the Redis container, which in this case is redis, followed by a colon and the name of the host that the voting app is looking for, which is also redis in this case. Remember, this is why we named the container when we ran it the first time, so we could use its name while creating a link. What this in fact does is create an entry in the /etc/hosts file on the voting app container, adding an entry with the hostname redis and the internal IP of the Redis container. Similarly, we add a link for the result app to communicate with the database, by adding a link option to refer to the database by the name db; as you can see in the source code of the application, it makes an attempt to connect to a Postgres database on host db. Finally, the worker application requires access to both Redis and the Postgres database, so we add two links to the worker application: one link to Redis and the other to the Postgres database. Note that using links this way is deprecated, and the support may be removed from Docker in the future. This is because, as we will see in a while, advanced and newer concepts in Docker Swarm and networking support better ways of achieving what we just did here with links; but I wanted to mention it anyway, so you learn the concept from the very basics. Once we have the run commands tested and ready, it is easy to generate a Docker Compose file from them. We start by creating a dictionary of container names: we use the same names we used in the docker run commands, so we take all the names and create a key with each of them. Then under each item we specify which image to use: the key is image, and the value is the name of the image to use. Next, inspect the commands and see what other options were used. We published ports, so let's move those under the respective containers: create a property called ports and list all the ports that you would like to publish under it. Finally we are left with links: for whichever container requires a link, create a property under it called links and provide an array of links, such as redis or db. Note that you can also specify just the name of the link, without the colon and the target name, and it will create a link with the same name as the target: specifying db:db is the same as simply specifying db, and Docker will assume the same value to create the link. Now that we are all done with our Docker Compose file, bringing up the stack is really simple: run the docker compose up command to bring up the entire application stack.
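A sketch of that version-1 style docker-compose.yml, written out the way we just derived it from the run commands (again assuming the voting-app, result-app and worker images are already built):

    cat > docker-compose.yml <<'EOF'
    redis:
      image: redis
    db:
      image: postgres
    vote:
      image: voting-app
      ports:
        - 5000:80
      links:
        - redis
    result:
      image: result-app
      ports:
        - 5001:80
      links:
        - db
    worker:
      image: worker
      links:
        - redis
        - db
    EOF
    docker-compose up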
When we looked at the example of the sample voting application, we assumed that all images were already built. Of the five different components, two of them, the Redis and Postgres images, we know are already available on Docker Hub as official images, but the remaining three are our own applications, and it is not necessary that they are already built and available in a Docker registry. If we would like to instruct Docker Compose to run a docker build instead of trying to pull an image, we can replace the image line with a build line and specify the location of a directory which contains the application code and a Dockerfile with instructions to build the Docker image. In this example, for the voting app I have all the application code in a folder named vote, which contains the application code and a Dockerfile. This time, when you run the docker compose up command, it will first build the images, give them a temporary name, and then use those images to run containers using the options you specified before. Similarly, use the build option to build the two other services from their respective folders. We will now look at different versions of the Docker Compose file. This is important because you might see Docker Compose files in different formats at different places and wonder why some look different. Docker Compose evolved over time and now supports a lot more options than it did in the beginning. For example, this is a trimmed-down version of the Docker Compose file we used earlier; it is in fact the original version of the Docker Compose file, known as version 1. Version 1 had a number of limitations: for example, if you wanted to deploy containers on a network other than the default bridge network, there was no way of specifying that in this version of the file. Also, say you have a dependency or startup order of some kind, for example your database container must come up first and only then should the voting application be started; there was no way you could specify that in version 1 of the Docker Compose file. Support for these came in version 2. With version 2 and up, the format of the file also changed a little bit: you no longer specify your stack information directly as you did before; it is all encapsulated in a services section. So create a property called services at the root of the file and then move all the services underneath it. You will still use the same docker compose up command to bring up your application stack. But how does Docker Compose know what version of the file you're using? You're free to use version 1 or version 2 depending on your needs, so how does Docker Compose know what format you are using? For version 2 and up, you must specify the version of the Docker Compose file you intend to use, by specifying the version at the top of the file, in this case version: 2.
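Here is a minimal sketch of the same file converted to the version 2 format, trimmed to a few services; the vote folder is the application-code directory mentioned above:

    cat > docker-compose.yml <<'EOF'
    version: "2"
    services:
      redis:
        image: redis
      db:
        image: postgres
      vote:
        # build from the local ./vote directory instead of pulling a pre-built image
        build: ./vote
        ports:
          - 5000:80
    EOF
    docker-compose up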
Another difference is with networking. In version 1, Docker Compose attaches all the containers it runs to the default bridge network and then uses links to enable communication between the containers, as we did before. With version 2, Docker Compose automatically creates a dedicated bridge network for this application and then attaches all containers to that new network. All containers are then able to communicate with each other using each other's service names, so you basically don't need links in version 2 of Docker Compose; you can simply get rid of all the links you mentioned in version 1 when you convert a file from version 1 to version 2. Finally, version 2 also introduces a depends_on feature, in case you wish to specify a startup order. For instance, say the voting web application is dependent on the Redis service, so you need to ensure that the Redis container is started first and only then should the voting web application be started; we could add a depends_on property to the voting application and indicate that it is dependent on redis. Then comes version 3, which is the latest as of today. Version 3 is similar to version 2 in structure, meaning it has a version specification at the top and a services section under which you put all your services, just like in version 2; make sure to specify the version number as 3 at the top. Version 3 comes with support for Docker Swarm, which we will see later on. There are some options that were removed and added; to see details on those, refer to the documentation using the link in the reference page following this lecture. We will look at version 3 in much more detail later, when we discuss Docker stacks. Let us talk about networks in Docker Compose. Getting back to our application: so far we have just been deploying all containers on the default bridge network. Let us say we modify the architecture a little bit to contain the traffic from the different sources. For example, we would like to separate the user-generated traffic from the application's internal traffic, so we create a front-end network dedicated to traffic from users and a back-end network dedicated to traffic within the application. We then connect the user-facing applications, which are the voting app and the result app, to the front-end network, and all the components to an internal back-end network. Back in our Docker Compose file, note that I have actually stripped out the ports section for simplicity's sake; the ports are still there, they're just not shown here. The first thing we need to do, if we are going to use networks, is to define the networks we are going to use. In our case we have two networks, front-end and back-end, so create a new property called networks at the root level, adjacent to the services in the Docker Compose file, and add a map of the networks we are planning to use. Then, under each service, create a networks property and provide the list of networks that service must be attached to. In the case of Redis and the db it's only the back-end network; the front-end applications, the voting app and the result app, need to be attached to both the front-end and back-end networks. You must also add a section for the worker container to be attached to the back-end network; I have just omitted that in this slide due to space constraints.
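A sketch of the file with the networks and the depends_on option added (ports omitted for brevity, exactly as in the slide; the application image names are still placeholders):

    cat > docker-compose.yml <<'EOF'
    version: "2"
    services:
      redis:
        image: redis
        networks:
          - back-end
      db:
        image: postgres
        networks:
          - back-end
      vote:
        image: voting-app
        depends_on:
          - redis
        networks:
          - front-end
          - back-end
      result:
        image: result-app
        networks:
          - front-end
          - back-end
      worker:
        image: worker
        networks:
          - back-end
    networks:
      front-end:
      back-end:
    EOF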
Now that you have seen Docker Compose files, head over to the coding exercises and practice developing some Docker Compose files. That's it for this lecture, and I will see you in the next lecture. [Music] We will now look at the Docker registry. So what is a registry? If the containers were the rain, then they would be rain from the Docker registry, which would be the clouds; that's where Docker images are stored. It's a central repository of all Docker images. Let's look at a simple nginx container: we run the docker run nginx command to run an instance of the nginx image. Let's take a closer look at that image name. The name is nginx, but what is this image and where is it pulled from? The name follows Docker's image naming convention: nginx here is the image, or repository, name. When you say nginx, it's actually nginx/nginx: the first part stands for the user or account name, so if you don't provide an account or repository name, Docker assumes it is the same as the given name, which in this case is nginx. The user name is usually your Docker Hub account name or, if it is an organization, the name of the organization. If you were to create your own account and create your own repositories or images under it, you would use a similar pattern. Now, where are these images stored and pulled from? Since we have not specified a location to pull these images from, it is assumed to be Docker's default registry, Docker Hub, the DNS name for which is docker.io. The registry is where all the images are stored: whenever you create a new image or update an existing image, you push it to the registry, and every time anyone deploys this application, it is pulled from that registry. There are many other popular registries as well. For example, Google's registry is at gcr.io, where a lot of Kubernetes-related images are stored, like the ones used for performing end-to-end tests on the cluster. These are all publicly accessible registries that anyone can download images from. When you have applications built in-house that shouldn't be made available to the public, hosting an internal private registry may be a good solution. Many cloud service providers, such as AWS, Azure or GCP, provide a private registry by default when you open an account with them. On any of these solutions, be it Docker Hub, Google's registry or your internal private registry, you may choose to make a repository private, so that it can only be accessed using a set of credentials. From Docker's perspective, to run a container using an image from a private registry, you first log in to the private registry using the docker login command and input your credentials. Once successful, run the application using the private registry as part of the image name, like this. If you did not log in to the private registry, it will come back saying that the image cannot be found, so remember to always log in before pulling or pushing to a private registry. We said that cloud providers like AWS or GCP provide a private registry when you create an account with them, but what if you are running your application on premises and don't have a private registry? How do you deploy your own private registry within your organization? The Docker registry is itself another application and, of course, is available as a Docker image; the name of the image is registry, and it exposes its API on port 5000. Now that you have your custom registry running at port 5000 on this Docker host, how do you push your own image to it? Use the docker image tag command to tag the image with the private registry URL in it. In this case, since it's running on the same Docker host, I can use localhost:5000 followed by the image name. I can then push my image to my local private registry using the docker push command and the new image name with the registry information in it. From there on, I can pull my image from anywhere within this network, using either localhost if I'm on the same host, or the IP or domain name of my Docker host if I'm accessing it from another host in my environment.
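A rough sketch of those steps follows; the image name and the host IP are placeholders, and note that pulling over the network would additionally require the registry to be served over TLS or configured as an insecure registry on the client hosts:

    # run the registry application itself as a container, exposing its API on port 5000
    docker run -d -p 5000:5000 --name registry registry:2
    # tag an existing image with the registry address, then push it
    docker image tag my-image localhost:5000/my-image
    docker push localhost:5000/my-image
    # pull it back, either locally or from another host on the network
    docker pull localhost:5000/my-image
    docker pull 192.168.56.100:5000/my-image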
Well, that's it for this lecture. Head over to the practice test and practice working with private Docker registries. Welcome to this lecture on Docker Engine. In this lecture we will take a deeper look at Docker's architecture, how it actually runs applications in isolated containers, and how it works under the hood. Docker Engine, as we have learned before, simply refers to a host with Docker installed on it. When you install Docker on a Linux host, you're actually installing three different components: the Docker daemon, the REST API server and the Docker CLI. The Docker daemon is a background process that manages Docker objects such as images, containers, volumes and networks. The Docker REST API server is the API interface that programs can use to talk to the daemon and provide instructions; you could create your own tools using this REST API. And the Docker CLI is nothing but the command-line interface that we've been using until now to perform actions such as running a container, stopping containers, destroying images, etc.; it uses the REST API to interact with the Docker daemon. Something to note here is that the Docker CLI need not necessarily be on the same host: it could be on another system, like a laptop, and can still work with a remote Docker Engine. Simply use the -H option on the docker command and specify the remote Docker Engine address and port, as shown here. For example, to run a container based on nginx on a remote Docker host, run the command docker -H=10.123.2.1:2375 run nginx.
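As a sketch (this assumes the remote daemon has been configured to listen on TCP port 2375, which is unauthenticated and unencrypted, so it is only appropriate in a lab; the IP address is illustrative):

    # run an nginx container on a remote Docker Engine from the local CLI
    docker -H=10.123.2.1:2375 run nginx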
Now let's try to understand how exactly applications are containerized in Docker and how it works under the hood. Docker uses namespaces to isolate workspaces: process IDs, network, inter-process communication, mounts and Unix time-sharing systems are created in their own namespaces, thereby providing isolation between containers. Let's take a look at one of the namespace isolation techniques: process ID namespaces. Whenever a Linux system boots up, it starts with just one process, with a process ID of 1. This is the root process, and it kicks off all the other processes in the system; by the time the system boots up completely, we have a handful of processes running. This can be seen by running the ps command to list all the running processes. Process IDs are unique, and two processes cannot have the same process ID. Now, if we were to create a container, which is basically like a child system within the current system, the child system needs to think that it is an independent system on its own, with its own set of processes originating from a root process with a process ID of 1. But we know that there is no hard isolation between the containers and the underlying host, so the processes running inside the container are in fact processes running on the underlying host, and two processes cannot both have the process ID of 1. This is where namespaces come into play. With process ID namespaces, each process can have multiple process IDs associated with it. For example, when the processes start in the container, they are actually just another set of processes on the base Linux system, and they get the next available process IDs, in this case 5 and 6. However, they also get another set of process IDs, starting with PID 1, in the container namespace, which is only visible inside the container. So the container thinks that it has its own root process tree and that it is an independent system. So how does that relate to an actual system? How do you see this on a host? Let's say I were to run an nginx server as a container: we know that the nginx container runs an nginx service. If we were to list all the services inside the Docker container, we see the nginx service running with a process ID of 1; this is the process ID of the service inside the container namespace. If we list the services on the Docker host, we will see the same service, but with a different process ID. That indicates that all processes are in fact running on the same host, but separated into their own containers using namespaces. So, we learned that the underlying Docker host as well as the containers share the same system resources, such as CPU and memory. How much of the resources are dedicated to the host and the containers, and how does Docker manage and share the resources between the containers? By default there is no restriction on how much of a resource a container can use, and hence a container may end up utilizing all of the resources on the underlying host. But there is a way to restrict the amount of CPU or memory a container can use: Docker uses cgroups, or control groups, to restrict the amount of hardware resources allocated to each container. This can be done by providing the --cpus option to the docker run command; providing a value of 0.5 will ensure that the container does not take up more than 50% of the host CPU at any given time. The same goes for memory: setting a value of 100m with the --memory option limits the amount of memory the container can use to 100 megabytes. If you are interested in reading more on this topic, refer to the links I posted in the reference page.
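To see this for yourself, here is a small sketch; the Ubuntu sleep container is just a convenient stand-in, and the exact PIDs you observe will of course differ:

    # start a long-running process in a container
    docker run -d --name app ubuntu sleep 1000
    # inside the container's namespace, sleep shows up with PID 1
    docker exec app ps -ef
    # on the host, the very same process is visible with a different, higher PID
    ps -ef | grep 'sleep 1000'
    # restrict CPU and memory for a container using cgroups
    docker run -d --name limited --cpus=.5 --memory=100m ubuntu sleep 1000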
That's it for now on Docker Engine. Earlier in this course we learned that containers share the underlying OS kernel, and as a result we cannot have a Windows container running on a Linux host, or vice versa. We need to keep this in mind while going through this lecture, as it's a very important concept and most beginners tend to have an issue with it. So what are the options available for Docker on Windows? There are two: the first is Docker on Windows using Docker Toolbox, and the second is the Docker Desktop for Windows option. We will look at each of these now. Let's take a look at the first option, Docker Toolbox. This was the original support for Docker on Windows. Imagine that you have a Windows laptop and no access to any Linux system whatsoever, in the lab or in the cloud, but you would like to try Docker. What would you do? What I did was install a virtualization software on my Windows system, like Oracle VirtualBox or VMware Workstation, deploy a Linux VM on it such as Ubuntu or Debian, install Docker on the Linux VM, and then play around with it. That is what the first option really does. It doesn't really have anything to do with Windows: you cannot create Windows-based Docker images or run Windows-based Docker containers, and you obviously cannot run Linux containers directly on Windows either; you're just working with Docker on a Linux virtual machine on a Windows host. Docker, however, provides us with a set of tools to make this easy, called Docker Toolbox. Docker Toolbox contains tools like Oracle VirtualBox, Docker Engine, Docker Machine, Docker Compose and a user interface called Kitematic. It helps you get started by simply downloading and running the Docker Toolbox executable: it will install VirtualBox and deploy a lightweight VM called boot2docker, which has Docker already running in it, so that you are all set to start with Docker easily and within a short period of time. Now, what about requirements? You must ensure that your operating system is 64-bit Windows 7 or higher and that virtualization is enabled on the system. Remember, Docker Toolbox is a legacy solution for older Windows systems that do not meet the requirements for the newer Docker Desktop for Windows option. The second option is that newer option, Docker Desktop for Windows. In the previous option we had Oracle VirtualBox installed on Windows, then a Linux system, and then Docker on that Linux system. With Docker Desktop for Windows, we take out Oracle VirtualBox and use the native virtualization technology available with Windows, Microsoft Hyper-V. During the installation process, Docker Desktop for Windows will still automatically create a Linux system underneath, but this time it is created on Microsoft Hyper-V instead of Oracle VirtualBox, with Docker running on that system. Because of this dependency on Hyper-V, this option is only supported on Windows 10 Enterprise or Professional edition and on Windows Server 2016, as both of these operating systems come with Hyper-V support by default. Now, here is the most important point: so far, whatever we have been discussing about Docker's support for Windows is strictly about Linux containers, Linux applications packaged into Linux Docker images. We're not talking about Windows applications, Windows images or Windows containers; both of the options we just discussed will help you run a Linux container on a Windows host. With Windows Server 2016, Microsoft announced support for Windows containers for the first time: you can now package Windows applications into Windows Docker containers and run them on a Windows Docker host using Docker Desktop for Windows. When you install Docker Desktop for Windows, the default option is to work with Linux containers; if you would like to run Windows containers, you must explicitly configure Docker for Windows to switch to using Windows containers. In 2016, when Microsoft announced Windows containers, you could create Windows-based images and run Windows containers on a Windows server, just like you would run Linux containers on a Linux system; you can containerize your Windows applications, create Windows images, and share them through the Docker Store as well. Unlike in Linux, there are two types of containers in Windows. The first is a Windows Server container, which works exactly like Linux containers, with the OS kernel shared with the underlying operating system. To allow a stronger security boundary between containers, and to allow kernels with different versions and configurations to coexist, a second option was introduced, known as Hyper-V isolation: with Hyper-V isolation, each container runs within a highly optimized virtual machine, guaranteeing complete kernel isolation between the containers and the underlying host. Now, while in the Linux world you have a number of base images for a Linux system, such as Ubuntu, Debian, Fedora, Alpine, etc. (that is what you specify at the beginning of the Dockerfile), in the Windows world we have two options: Windows Server Core and Nano Server. Nano Server is a headless deployment option for Windows Server which runs at a fraction of the size of the full operating system; you can think of it like the Alpine image in Linux. The Windows Server Core image, though, is not as lightweight as you might expect it to be.
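If you have such a setup, running a Windows container looks roughly like this; this is only a sketch, assuming Docker Desktop on a Windows host has been switched to Windows containers, and the image tag is illustrative and must match a Windows version your host supports:

    # run a command in a Windows Server (process-isolated) container
    docker run mcr.microsoft.com/windows/nanoserver:1809 cmd /c echo hello
    # run the same thing with Hyper-V isolation instead
    docker run --isolation=hyperv mcr.microsoft.com/windows/nanoserver:1809 cmd /c echo hello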
Finally, Windows containers are supported on Windows Server 2016, Nano Server, and Windows 10 Professional and Enterprise edition. Remember that Windows 10 Professional and Enterprise edition only supports Hyper-V isolated containers, meaning, as we just discussed, every container deployed is deployed on a highly optimized virtual machine. Well, that's it about Docker on Windows. Before I finish, I want to point out one important fact: we saw two ways of running a Docker container, using VirtualBox or Hyper-V, but remember that VirtualBox and Hyper-V cannot coexist on the same Windows host. So if you started off with Docker Toolbox with VirtualBox and you plan to migrate to Hyper-V, remember you cannot have both solutions at the same time. There is a migration guide available on the Docker documentation page on how to migrate from VirtualBox to Hyper-V. That's it for now; thank you, and I will see you in the next lecture. [Music] We now look at Docker on Mac. Docker on Mac is similar to Docker on Windows: there are two options to get started, Docker on Mac using Docker Toolbox, or the Docker Desktop for Mac option. Let's look at the first option, Docker Toolbox. This was the original support for Docker on Mac: it is Docker on a Linux VM created using VirtualBox on the Mac. As with Windows, it has nothing to do with Mac applications, Mac-based images or Mac containers; it purely runs Linux containers on a macOS host. Docker Toolbox contains tools like Oracle VirtualBox, Docker Engine, Docker Machine, Docker Compose and a user interface called Kitematic. When you download and install the Docker Toolbox executable, it installs VirtualBox and deploys a lightweight VM called boot2docker, which has Docker already running in it. This requires macOS 10.8 or newer. The second option is the newer option, Docker Desktop for Mac. With Docker Desktop for Mac, we take out Oracle VirtualBox and use the HyperKit virtualization technology. During the installation process, Docker Desktop for Mac will still automatically create a Linux system underneath, but this time it is created on HyperKit instead of Oracle VirtualBox, with Docker running on that system. This requires macOS Sierra 10.12 or newer, and the Mac hardware must be a 2010 or newer model. Finally, remember that all of this is to be able to run Linux containers on a Mac; as of this recording there are no Mac-based images or containers. Well, that's Docker on Mac for now. We will now try to understand what container orchestration is. So far in this course we've seen that with Docker you can run a single instance of an application with a simple docker run command: in this case, to run a Node.js-based application, you run the docker run nodejs command. But that's just one instance of your application on one Docker host. What happens when the number of users increases and that instance is no longer able to handle the load? You deploy additional instances of your application by running the docker run command multiple times. So that's something you have to do yourself: you have to keep a close watch on the load and performance of your application and deploy additional instances yourself. And not just that: you have to keep a close watch on the health of these applications, and if a container were to fail, you should be able to detect that and run the docker run command again to deploy another instance of that application. What about the health of the Docker host itself? What if the host crashes and is inaccessible? The containers hosted on that host become inaccessible too. So what do you do? In order to solve these issues, you would need a dedicated engineer who can sit and monitor the state, performance and
health of the containers and take the necessary actions to remediate the situation. But when you have large applications deployed with tens of thousands of containers, that's not a practical approach. You could build your own scripts, and that would help you tackle these issues to some extent. Container orchestration is a solution for exactly that: it is a solution that consists of a set of tools and scripts that can help host containers in a production environment. Typically, a container orchestration solution consists of multiple Docker hosts that can host containers; that way, even if one fails, the application is still accessible through the others. A container orchestration solution allows you to deploy hundreds or thousands of instances of your application with a single command (this is a command used for Docker Swarm; we'll look at the command itself in a bit). Some orchestration solutions can help you automatically scale up the number of instances when users increase and scale down when demand decreases. Some solutions can even help you automatically add additional hosts to support the user load. And not just clustering and scaling: container orchestration solutions also provide support for advanced networking between containers across different hosts, as well as load balancing user requests across the different hosts. They also provide support for sharing storage between the hosts, as well as support for configuration management and security within the cluster. There are multiple container orchestration solutions available today: Docker has Docker Swarm, there is Kubernetes from Google, and Mesos from Apache. While Docker Swarm is really easy to set up and get started with, it lacks some of the advanced auto-scaling features required for complex production-grade applications. Mesos, on the other hand, is quite difficult to set up and get started with, but supports many advanced features. Kubernetes, arguably the most popular of them all, is a bit difficult to set up and get started with, but provides a lot of options to customize deployments and has support for many different vendors. Kubernetes is now supported on all public cloud service providers like GCP, Azure and AWS, and the Kubernetes project is one of the top-ranked projects on GitHub. In upcoming lectures we will take a quick look at Docker Swarm and Kubernetes. [Music] We will now get a quick introduction to Docker Swarm. Docker Swarm has a lot of concepts to cover and requires its own course, but we will take a quick look at some of the basic details so you can get a brief idea of what it is. With Docker Swarm you can combine multiple Docker machines together into a single cluster. Docker Swarm will take care of distributing your services, or your application instances, across separate hosts for high availability and for load balancing across different systems and hardware. To set up a Docker Swarm, you must first have multiple hosts with Docker installed on them. Then you must designate one host to be the manager, or the master, or the swarm manager as it is called, and the others as slaves, or workers. Once you're done with that, run the docker swarm init command on the swarm manager, and that will initialize the swarm manager. The output will also provide the command to be run on the workers, so copy that command and run it on the worker nodes to join the manager. After joining the swarm, the workers are also referred to as nodes, and you're now ready to create services and deploy them on the swarm cluster.
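Sketching that setup (the manager IP address is illustrative, and the join token is a placeholder; the real token is printed by the init command):

    # on the node chosen as the swarm manager
    docker swarm init --advertise-addr 192.168.1.12
    # the output prints a join command with a token; run it on each worker, for example:
    docker swarm join --token <worker-token> 192.168.1.12:2377
    # back on the manager, list the nodes that have joined the swarm
    docker node ls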
So let's get into some more details. As you already know, to run an instance of my web server I run the docker run command and specify the name of the image I wish to run. This creates a new container instance of my application and serves my web server. Now that we have learned how to create a swarm cluster, how do I utilize my cluster to run multiple instances of my web server? One way to do this would be to run the docker run command on each worker node, but that's not ideal: I might have to log in to each node and run this command, and there could be hundreds of nodes. I would have to set up load balancing myself, monitor the state of each instance myself, and if instances were to fail I would have to restart them myself; it would be an impossible task. That is where Docker Swarm orchestration comes in: the Docker Swarm orchestrator does all of this for us. So far we've only set up the swarm cluster, but we haven't seen orchestration in action. The key component of swarm orchestration is the Docker service. Docker services are one or more instances of a single application or service that run across the nodes in the swarm cluster. For example, in this case I could create a Docker service to run multiple instances of my web server application across the worker nodes in my swarm cluster. For this, run the docker service create command on the manager node, specify the image name, which is my-web-server in this case, and use the replicas option to specify the number of instances of my web server I would like to run across the cluster. Since I specified 3 replicas, I get 3 instances of my web server distributed across the different worker nodes. Remember, the docker service create command must be run on the manager node and not on a worker node. The docker service create command is similar to the docker run command in terms of the options passed, such as the -e option for environment variables, the -p option for publishing ports, the --network option to attach the container to a network, and so on.
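For example, a minimal sketch, where my-web-server stands in for whatever application image you want to run:

    # run on the manager node
    docker service create --replicas=3 -p 80:80 --name web-service my-web-server
    # scale the service later if the load changes
    docker service scale web-service=6
    # list services and see where their tasks are running
    docker service ls
    docker service ps web-service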
Well, that's a high-level introduction to Docker Swarm. There's a lot more to know, such as configuring multiple managers, overlay networks, etc.; as I mentioned, it requires its own separate course. That's it for now; in the next lecture we will look at Kubernetes at a high level. [Music] We will now get a brief introduction to basic Kubernetes concepts. Again, Kubernetes requires its own course, or really a few courses, at least five, but we will try to get a brief introduction to it here. With Docker, you were able to run a single instance of an application using the Docker CLI by running the docker run command, which is great; running an application has never been so easy before. With Kubernetes, using the Kubernetes CLI, known as kubectl, you can run a thousand instances of the same application with a single command. Kubernetes can scale it up to two thousand with another command. Kubernetes can even be configured to do this automatically, so that instances, and the infrastructure itself, can scale up and down based on user load. Kubernetes can upgrade these two thousand instances of the application in a rolling upgrade fashion, one at a time, with a single command, and if something goes wrong, it can help you roll back these images with a single command. Kubernetes can help you test new features of your application by upgrading only a percentage of the instances through A/B testing methods.
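As a sketch of what those commands look like (the image name is a placeholder; also note that on recent versions of kubectl these ideas are expressed through a Deployment rather than the older kubectl run --replicas form):

    # create a deployment and scale it out
    kubectl create deployment my-web-app --image=my-web-app
    kubectl scale deployment my-web-app --replicas=1000
    # rolling upgrade to a new image version, and roll back if something goes wrong
    kubectl set image deployment/my-web-app my-web-app=my-web-app:v2
    kubectl rollout undo deployment/my-web-app
    # cluster information and the nodes in the cluster
    kubectl cluster-info
    kubectl get nodes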
The Kubernetes open architecture provides support for many, many different network and storage vendors; any network or storage brand that you can think of has a plugin for Kubernetes. Kubernetes supports a variety of authentication and authorization mechanisms, and all major cloud service providers have native support for Kubernetes. So what's the relation between Docker and Kubernetes? Well, Kubernetes uses Docker hosts to host applications in the form of Docker containers; it need not be Docker all the time, though, as Kubernetes supports alternatives to Docker as well, such as rkt or CRI-O. Let's take a quick look at the Kubernetes architecture. A Kubernetes cluster consists of a set of nodes. Let us start with nodes: a node is a machine, physical or virtual, on which the Kubernetes software, a set of tools, is installed. A node is a worker machine, and that is where containers will be launched by Kubernetes. But what if the node on which the application is running fails? Well, obviously our application goes down, so you need to have more than one node. A cluster is a set of nodes grouped together; this way, even if one node fails, your application is still accessible from the other nodes. Now we have a cluster, but who is responsible for managing this cluster? Where is the information about the members of the cluster stored? How are the nodes monitored? When a node fails, how do you move the workload of the failed node to another worker node? That's where the master comes in. The master is a node with the Kubernetes control plane components installed. The master watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes. When you install Kubernetes on a system, you're actually installing the following components: an API server, an etcd server, a kubelet service, a container runtime engine like Docker, and a set of controllers and the scheduler. The API server acts as the front end for Kubernetes: the users, management devices and command-line interfaces all talk to the API server to interact with the Kubernetes cluster. Next is etcd, a key-value store: etcd is a distributed, reliable key-value store used by Kubernetes to store all the data used to manage the cluster. Think of it this way: when you have multiple nodes and multiple masters in your cluster, etcd stores all that information on all the nodes in the cluster in a distributed manner; etcd is also responsible for implementing locks within the cluster to ensure there are no conflicts between the masters. The scheduler is responsible for distributing work, or containers, across multiple nodes: it looks for newly created containers and assigns them to nodes. The controllers are the brains behind orchestration: they are responsible for noticing and responding when nodes, containers or endpoints go down, and they make decisions to bring up new containers in such cases. The container runtime is the underlying software used to run containers; in our case it happens to be Docker. And finally, the kubelet is the agent that runs on each node in the cluster; the agent is responsible for making sure that the containers are running on the nodes as expected. Finally, we also need to learn a little bit about one of the command-line utilities, known as the kube command-line tool, or kubectl, which you will also hear pronounced as kube control or kube cuddle. The kubectl tool is the Kubernetes CLI, used to deploy and manage applications on a Kubernetes cluster, to get cluster-related information, to get the status of the nodes in the cluster, and many other things. The kubectl run command is used to deploy an application on the cluster, the kubectl cluster-info command is used to view information about the cluster, and the kubectl get nodes command is used to list all the nodes that are part of the cluster. So, to run hundreds of instances of your application across hundreds of nodes, all I need is a single Kubernetes command like this. Well, that's all we have for now, a quick introduction to Kubernetes and its architecture. We currently have three courses on KodeKloud on Kubernetes that will take you from an absolute beginner to a certified expert, so have a look at them when you get a chance. [Music] So we're at the end of this beginners course on Docker. I hope you had a great learning experience; if so, please leave a comment below. If you liked my way of teaching, you will love my other courses hosted on my site at KodeKloud. We have courses for Docker Swarm and Kubernetes, advanced courses on Kubernetes certifications, as well as OpenShift; we have courses for automation tools like Ansible, Chef and Puppet, and many more on the way. Visit KodeKloud at www.kodekloud.com [Music]
Info
Channel: KodeKloud
Views: 947,566
Keywords: Docker Tutorial for Beginners, free docker training videos, docker training India, hands on docker course, Docker devops labs online, learn docker for free, free docker devops online, docker course for beginner to advanced, docker course training, docker for beginers online classes, learn docker online, docker containers for beginners, docker beginner tutorial, docker certification training, docker, Kodekloud, docker tutorial for beginners 2022, docker tutorial for beginners linux
Id: zJ6WbK9zFpI
Length: 129min 27sec (7767 seconds)
Published: Thu Aug 08 2019