Docker Tutorial | Docker Tutorial for Beginners | Docker

Captions
hello, welcome to the Docker Tutorial for DevOps: Run Docker Containers course. My name is James Lee. In the past I worked at many large companies such as Amazon and Google, and now I'm working at one of the top Silicon Valley-based startups specializing in big data analysis. In this introduction lecture we'll see what this course covers and what you will learn from it. In the first section we'll develop a conceptual understanding of virtualization technologies, hypervisors and Linux containers, then we'll see how Docker fits into the overall virtualization technology ecosystem. Then we'll get to know Docker's server and client architecture. Next we'll learn how to install Docker on your local computer; no matter whether you are using Windows, Mac or Linux, you'll be able to follow along. We'll develop our understanding of some of the most important Docker terminologies, such as containers, images, Docker registries and repositories, and we'll try out our first Docker workflow, where we'll pull an image from Docker Hub, then create and run a container from that image. Then we'll take a close look at some of the useful commands for working with Docker containers, such as docker ps and docker inspect. In section two we'll start by introducing an important Docker concept: image layers. Then we'll learn how to create our first Docker image using the docker commit command. Next we'll look at a more professional Docker workflow, which is to use a Dockerfile to build Docker images that we can run as containers. Then we'll deep dive into several important Dockerfile instructions, such as RUN, CMD and COPY, and once we create our own image we'll demo pushing it to Docker Hub, so we can pull that image from the online repository to run in another environment, such as staging or production.

In section three we'll apply the knowledge we have learned so far to dockerize a simple hello world web application. Next we'll extend our hello world web application into a key-value lookup service by incorporating the Redis Docker image; you'll find out how effective it is to use Docker to build up applications with the microservice approach. Then we'll see how to use container linking, which allows containers to discover each other and securely transfer information from one container to another, and we'll take a close look at how container linking works behind the scenes. We'll learn how to automate our Docker workflow with Docker Compose, then we'll cover more details about the Docker Compose workflow, such as docker-compose build and docker-compose ps. In section four we'll create some unit tests to test our dockerized application and run those tests inside the Docker container. Next we'll extend our Docker workflow: we'll set up a GitHub account and a CircleCI account to create a continuous integration pipeline in the cloud, so that any change pushed to our GitHub repository triggers a build on CircleCI; after the tests are green, the Docker image is automatically pushed to Docker Hub. In section five we'll start by learning some of the concerns about running Docker in production, then we'll see how to deploy our dockerized application to a production server running in the DigitalOcean cloud. Then we'll learn how to use Docker Swarm to scale our Docker workflow and deploy our Docker web application across multiple hosts in the cloud. At the end of this course I'm confident you will gain in-depth knowledge of Docker and general DevOps skills to help your company or your own project apply the right Docker workflow and continuously deliver better software. You will go from zero to Docker hero in four hours.

hello everyone, in this lecture we're going to talk about how you should take this course and how to get support. A lot of slides provide practical information on how to do things. The source code for this course is uploaded to GitHub; we'll also keep the
repository up to date with new and extra information. We have put most of the complicated commands used in this course into the text lecture right after each video lecture, so that you can copy and paste those commands and try them out on your own laptop. We also have a Facebook group called Learning DevOps and LevelUp; a Facebook group is a fantastic tool we can use in so many of the communities we're in, and we hope it will be a great way to extend this course and add more value to your learning. You can post your questions in the Facebook group and we'll try to get back to you as fast as we can; we also periodically share the latest trends in the DevOps world and some practical tricks we found useful. You can scan the following barcode, or use the link in the text lecture after this video lecture, to find the group.

Technically, Docker is one implementation of container-based virtualization technologies. Let's take a look at how virtualization technology has evolved over time. In the pre-virtualization days we were using big server racks: underneath we have the physical server, we install the desired operating system on it, then we run the application on top of the operating system, and each physical machine would only run one application. So what was the problem with this model? First of all, we have to purchase a physical machine in order to deploy each application, and those commercial servers can be very expensive; we might end up using only a fraction of the CPU or memory of the machine, the rest of the resources are simply wasted, but you have to pay for the whole hardware upfront. Secondly, deployment time is often slow: the process of purchasing and configuring new physical servers can take ages, especially in big organizations. Thirdly, it is painful to migrate applications to servers from a different vendor: let's say we installed our application on an IBM server, it would take us a lot of effort to migrate to Dell servers; a significant amount of configuration
change and manual intervention is required. To the rescue comes hypervisor virtualization technology. Let's take a look at this virtualization model: underneath we have the physical server, on which we install the desired operating system; on top of the operating system a hypervisor layer is introduced, which allows us to install multiple virtual machines on a single physical machine. Each VM can have a different operating system; for example, we have Ubuntu installed on one VM and Debian on another. In this way we can run multiple operating systems on a single physical machine, and each operating system can run a different application. This is the traditional model of virtualization, referred to as hypervisor-based virtualization; some of the popular hypervisor providers are VMware and VirtualBox. In the early days users would deploy VMs on their own physical servers, but nowadays more and more companies have shifted to deploying VMs in the cloud with providers such as AWS and Microsoft Azure, which means we don't even have to purchase physical machines up front. There are some huge benefits to this model. First of all, it is more cost effective: each physical machine is divided into multiple VMs, and each one only uses its own CPU, memory and storage resources; we pay only for the compute power, storage and other resources we use, with no upfront commitments, which is a typical pay-as-you-go model. Secondly, it's easy to scale: with VMs deployed in a cloud environment, if we want more instances of our application we don't need to go through the long process of ordering and configuring new physical servers, we can simply click the mouse and deploy more VMs in the cloud; the time taken to scale our application can be reduced from weeks to just minutes, which results in a dramatic increase in agility for the organization. This hypervisor-based virtualization model has obvious advantages over the one-application-per-server model, but it still has some limitations. First of all,
each virtual machine still needs to have an operating system installed; this is an entire guest operating system with its own memory management, device drivers, daemons and so on. When we're talking about a Linux operating system, we're talking about a kernel: for example, here we have three guest operating systems and three kernels, and even though they can be three different kernels, we're still replicating a lot of the core functionality of Linux. In the traditional hypervisor-based virtualization model we have to have an entire operating system there simply to run our application, which is still inefficient. Secondly, application portability is not guaranteed: even though some progress has been achieved in getting virtual machines to run across different types of hypervisors, there is still a lot of work to be done, and VM portability is still at an early stage. Finally, container-based virtualization technology comes along; Docker is one implementation of container-based virtualization. Let's take a look at a diagram: underneath we have our server, which can be either a physical machine or a virtual machine; then we install our operating system on the server; on top of the OS we install a container engine, which allows us to run multiple guest instances. Each guest instance is called a container; within each container we install the application and all the libraries the application depends on. The key to understanding the difference between the hypervisor-based and the container-based virtualization model is the replication of kernels: in the traditional model each application runs on its own copy of the kernel and the virtualization happens at the hardware level, while in the new model we have only one kernel, which supplies different binaries and runtimes to the applications running in isolated containers. The containers share the base running kernel through the container engine, so in the new model the virtualization happens at the
operating system level; containers share the host OS, so this is much more efficient and lightweight. You might want to ask: what do we gain by running those applications in different containers, why can't we just run all applications in a single VM? This comes down to the nature of isolation. As you know, most applications depend on various third-party libraries. Let's say we want to run two Java applications with two different JREs: this is going to be quite challenging if we want to run those two applications in the same VM without introducing any conflicts. By leveraging containers we can easily isolate the two runtime environments: let's say application A requires JRE 8, then we just install JRE 8 in the first container and run application A in the first container; application B requires JRE 7, so we install JRE 7 only in the second container and run application B inside it. In this way we have two containers on the same machine running two different applications, each with a different JRE version. This is what we call runtime isolation. Compared to hypervisor-based virtualization, container-based virtualization has some obvious benefits. Firstly, it is more cost-effective: container-based virtualization does not create an entire virtual operating system, instead only the required components are packed up inside the container with the application; containers consume less CPU, RAM and storage space than VMs, which means we can have more containers running on one physical machine than VMs. Secondly, faster deployment speed: containers house the minimal requirements for running the application and can start up as fast as a process; a container can be several times faster to boot than a VM. Thirdly, great portability: because containers are essentially independent, self-sufficient application bundles, they can be run across machines without compatibility issues. That's it for this lecture, I'll see you later.
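The runtime-isolation idea above can be sketched as two containers running side by side on one host. This is a minimal sketch, assuming the official `openjdk` JRE images from Docker Hub; the container names, jar files and paths are purely illustrative:

```shell
# Application A gets its own container with JRE 8
# (the app names and /app/*.jar paths are hypothetical)
docker run -d --name app-a openjdk:8-jre java -jar /app/a.jar

# Application B runs alongside with JRE 7 -- no library conflict,
# because each container carries its own isolated runtime
docker run -d --name app-b openjdk:7-jre java -jar /app/b.jar
```

Both containers share the host kernel, but each sees only its own JRE and filesystem.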
Let's talk a little bit about Docker's client and server architecture. Docker uses a client-server architecture, with the daemon being the server. The user does not directly interact with the daemon, but instead goes through the Docker client. The Docker client is the primary user interface to Docker: it accepts commands from the user and communicates back and forth with the Docker daemon. There are two types of Docker clients: the typical command-line client, and Kitematic, which is a Docker client with a graphical interface; so if you don't like working with commands, Kitematic is something you should check out. The daemon is the persistent process which does the heavy lifting of building, running and distributing your Docker containers; the Docker daemon is often referred to as the Docker engine or Docker server. On a typical Linux installation the Docker client, the Docker daemon and any containers run on the same host. You can also connect a Docker client to a remote Docker daemon; we'll cover more about this later. However, you cannot run Docker natively on OS X or Windows, because the Docker daemon uses Linux-specific kernel features; so on an OS X or Windows installation the Docker daemon runs inside a docker machine, which is a lightweight Linux VM made specially to run the Docker daemon on OS X or Windows.

Let's get started on installing Docker on our local machine. This lecture applies to you if you're using Linux, or you're using Mac and your Mac version is OS X 10.10.3 or newer, or you're using Windows and your Windows version is Windows 10 or newer; otherwise you can skip this lecture and follow the instructions of the next lecture to install Docker Toolbox on your local machine. Here we google "docker install"; the first entry is Docker's official installation page, just click the link to enter the installation page. The steps required to install Docker vary depending on the operating system you use: if you're using Linux, you can just follow the corresponding installation instructions on this page to install Docker on your operating system.
Since Docker is a technology built around Linux containers, people developing on non-Linux platforms need to use some form of virtualization to run Docker. If you're running Windows, just scroll down and click the Windows installation link. As you see, there are two options for installing Docker on Windows: Docker for Windows and Docker Toolbox. If you have Windows 10 or a newer version installed, you will be able to install Docker for Windows; Docker for Windows runs as a native Windows application and has a built-in virtual machine inside the app, which saves you tons of effort managing the VM yourself. Docker for Windows is the preferred way to run Docker on a Windows machine as long as you meet the minimum requirements; you can just click the "getting started with Docker for Windows" link to start the installation. Since I'm using a Mac, I'll be demoing how to install Docker for Mac on my Mac machine, but the steps should be very similar between Windows and Mac. Here I scroll down and click the Mac installation page; similar to Windows, you also have two options here: you can install either Docker for Mac or Docker Toolbox. If your Mac is a 2010 or newer model and the Mac version is OS X 10.10.3 Yosemite or newer, it is recommended to install Docker for Mac, which runs as a native Mac application. I'm running a relatively new version of Mac, so I just click "getting started with Docker for Mac". Let's download the stable version; as you see, it is downloading the installer, and I'm fast-forwarding the video until the download is complete. Now the installer is downloaded: let's click it, then drag the whale to the Applications folder, and type your password to proceed. Then open the Applications folder and double-click the whale icon, and click Next. Now the installer is asking for privileged access, as it needs to install its networking components; just click OK and type the password to proceed. Docker has been installed on my local box; as you see, we have a whale icon on the menu bar on top of
our desktop. This initialization phase might take a while to complete; I'm fast-forwarding the video until it's finished. This is what you would see if everything goes well: now Docker is starting, and finally we have Docker up and running. Let's open a command-line terminal, then type docker info; docker info displays system-wide information about Docker, and as you see, Docker is running inside a Linux virtual machine. You can also configure your Docker preferences by clicking the whale on the top menu bar and selecting Preferences. Under the General tab, Docker for Mac is set to automatically start when you log in; we'll leave this option checked so that Docker starts automatically when we log in to our desktop. You can also configure the number of CPU processors: you can increase processing power for the app by setting this to a higher number, or lower it to have Docker for Mac use fewer computing resources. By default Docker for Mac is set to use 2 GB of runtime memory, allocated from the total available memory on your Mac; you can increase the RAM for the app to get faster performance by setting this number higher. That's it for this lecture, I hope you've enjoyed it.

In this lecture we'll see how to install Docker Toolbox on your local box. This lecture applies to you if you're using Mac and your Mac version is older than OS X 10.10.3, or you're using Windows and your Windows version is older than Windows 10, or you want to install docker-machine or Kitematic instead of just the Docker engine; otherwise you can skip this lecture and follow the installation guide of the previous lecture. Here we google "docker install"; the first entry is Docker's official installation page, just click the link to enter the installation page. The steps required to install Docker vary depending on the operating system you use; if you're using Linux, you can just follow the corresponding installation instructions on this page to install Docker on your operating system.
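Whichever install path you end up following, a quick sanity check from a terminal confirms the client can reach the daemon. This is a sketch; the exact output varies by Docker version:

```shell
# Client and server (daemon) version blocks -- both must print
# for a healthy installation
docker version

# System-wide details: container and image counts, storage driver,
# and the OS the daemon runs on (a Linux VM on Mac or Windows)
docker info
```

If `docker info` hangs or errors, the daemon is not running yet.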
Since Docker is a technology built around Linux containers, people developing on non-Linux platforms will need to use some form of virtualization to run Docker. If you're using Windows, scroll down and click the Windows installation link, then click Docker Toolbox; you can then click "get Docker Toolbox" to download the Docker Toolbox installer for Windows. Since I'm using a Mac, I'll be demoing how to install Docker Toolbox on my Mac machine, but the installation steps should be very similar between Windows and Mac. Here we click the Mac OS X link to go to the installation guide for Mac, then click Docker Toolbox. As you see, we'll be installing Docker Toolbox, which will install all the components you need to run Docker on Mac OS; just be aware that you need OS X 10.8 Mountain Lion or a newer version to install Docker Toolbox. Here we click the "get Docker Toolbox" link, then download the installer for Mac. The download is going to take a while; I'm fast-forwarding the video until it is complete. Now the installer has been downloaded. Docker Toolbox includes a minimal boot2docker virtual machine which runs inside VirtualBox; if you have VirtualBox running, make sure to shut it down before running the installer, to avoid conflicts. Let's double-click the icon to run the installer and accept all the defaults. The installer is going to install the Docker client, docker-machine, Docker Compose, Kitematic and the Quickstart Terminal app. docker-machine is a tool that lets you install the Docker engine on virtual hosts and manage those hosts with docker-machine commands; Docker Compose is a tool for defining and running multi-container Docker applications; Kitematic is a graphical Docker client. If you don't know what they are, don't worry, we'll talk about them in more detail later. Type the password to proceed; now the installation is complete. Here we choose to start with the Docker Quickstart Terminal; now it is provisioning the docker machine. The first time you run it, this takes a while; we'll just fast-forward the video while the machine boots up.
This is what it looks like if everything works successfully. This terminal is effectively a Docker client, which takes the user input and sends it to the Docker daemon. Here we can run the docker-machine ls command, which lists all the docker virtual machines; as you see, we have our default docker machine up, and the Docker daemon is running inside that docker machine. If you run the docker version command, it prints the version of both the client and the Docker server; the Docker server here refers to the Docker daemon. If you launch VirtualBox, you'll find that the installer has installed and configured a boot2docker VM, and the Docker daemon is running inside that VM. Before I finish this lecture, let me show you how to run Docker next time: once you have Docker Toolbox installed, you can access Docker anytime by opening the Docker Quickstart Terminal again, and see, we get back our Docker client.

There are several important concepts we must understand before we start playing with Docker. The first two concepts are containers and images. Images are read-only templates used to create containers; images are created with the docker build command, either by us or by other Docker users. Because images can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network. Images are stored in a Docker registry such as Docker Hub; we'll talk about that in a minute. Next we'll talk about containers. To use a programming metaphor, if an image is a class, then a container is an instance of the class, a runtime object. Containers are hopefully why you're using Docker: they're lightweight and portable encapsulations of an environment in which to run applications. We create a container from an image and then run the container, and inside that container we have all the binaries and dependencies we need to run our application. Here are another two important Docker concepts: registries and repositories. A registry is where we store our images.
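The Toolbox sanity checks just described boil down to a few commands in the Quickstart Terminal. A sketch, assuming `default` is the VM name the installer created (which is the Toolbox convention):

```shell
# List the VMs managed by docker-machine; the installer creates "default"
docker-machine ls

# The VM's IP address -- useful later for reaching published ports
docker-machine ip default

# Prints the client version and the server (daemon) version;
# the server here is the daemon inside the boot2docker VM
docker version
```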
You can host your own registry, or you can use Docker's public registry, which is called Docker Hub. Inside a registry, images are stored in repositories: a Docker repository is a collection of different Docker images with the same name that have different tags, and each tag usually represents a different version of the image. Let's take a look at Docker Hub. Docker Hub is a public registry which contains a large number of images you can use. Here we google "docker hub"; the second entry is what we are looking for, just click the link. Here we click Browse to see what we can find; as we see, there are some popular official repositories listed here, such as nginx, Ubuntu and Redis. Official repositories are repositories certified by Docker. For each repository we can also see the number of stars and pulls, which indicate the popularity of each repository. You can also search for other repositories here: let's say we want to find some MySQL images; let's type mysql and hit enter to search. See, Docker Hub found some MySQL repositories; note that the first one is marked as official. New Docker users are encouraged to use the official repositories in their projects: these repositories have clear documentation, promote best practices and are designed for the most common use cases. Docker Inc., the company behind Docker, sponsors a dedicated team that is responsible for reviewing and publishing all official repository content; it also ensures that security updates are applied in a timely manner to official images. So when we get started with Docker, try to use official images, so that we can get the most support from the community. All the other repositories are also MySQL repositories, which presumably contain MySQL images; they are contributed by other users in the community. So how can we tell which one is an official image and which is not? First of all, as we mentioned before, official images usually come with an official mark; also, the name of an unofficial
image usually has a namespace before the actual image name, which is often the user name of the user who created the repository. Here let's click the link of the official MySQL repository; we can see the information about this repo and clear documentation about how to use this image. If I scroll up and click the Tags tab, we can see the repository has several tags; in most cases the tag means the version of the application or tool in the image. So an image is specified by its repository name and tag, and even the same image might have multiple tags. If you don't specify a tag, Docker will use the default tag, which is latest. We'll get into more details about this when we start playing with the docker run command in the next lecture.

In this lecture we'll learn how to create and run our first container. We're going to create a container from an image; the image we're going to use here is called busybox, so let's go check it out. Here we're at the Docker Hub website; just search for busybox and click the first result, which is the official busybox repository. We quickly scroll down the documentation; as you see, busybox is a tiny image, only about one megabyte. This is the main reason we choose busybox: because of its tiny size, it takes little time to download. Let's check out the Tags tab: busybox has various different tags, and here we pick tag 1.24 to run our container. Let's close the browser and open the Docker Quickstart Terminal; if you have installed Docker for Mac or Docker for Windows, or you use Linux, you can just open a normal terminal. When we use an image to create a container, Docker first looks through our local box to find the image: if Docker is able to find the image locally, it will use the local image to create the container; if Docker can't find a local copy of the image, it will download it from the remote registry. To find out what images we have on our local box, we can run the docker images command; see, we don't have any images on the local box yet. Now let's start running our container.
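The local-lookup-then-pull behaviour just described can be sketched as a short session (output descriptions are approximate):

```shell
docker images                               # empty: nothing cached locally yet

# repository:tag -- the tag defaults to "latest" when omitted
docker run busybox:1.24 echo "hello world"  # not found locally, so it is
                                            # pulled from Docker Hub, then run

docker images                               # busybox 1.24 now listed with an image ID
docker run busybox:1.24 echo "hello world"  # near-instant: the local copy is reused
```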
To run a container we use the docker run command. The docker run command creates a container using the image we specify on the command line, then spins up the container and runs it. As we talked about in the previous lecture, an image is specified by the repository name and tag, and we need to put a colon between the repository name and the tag; so let's use the busybox image with tag 1.24. After that we need to specify what command we would like to run in the container, and pass the arguments for that command. The command we're going to run is echo, and let's pass the argument "hello world", so Docker should output hello world after spinning up the container. Then hit enter. As you see, Docker goes ahead and downloads the image from the remote repository; that is because we don't have busybox 1.24 on our local box. After Docker downloads the image, it creates the container from the image and runs it; see, Docker outputs hello world, which is what we expected. Now if we run the docker images command again, as you see, we have one image, the busybox 1.24 image we just downloaded, and the image has a unique ID. When we run the container again,
Docker will use the local copy of the image to create and run the container; let's see it in action. Notice how fast the execution is: it prints out hello world right away. This is because we already have the busybox 1.24 image on our local box, so Docker just creates the container from the local image right away, without needing to download the image from the remote registry. Let's see another example: let's say we want to display all the contents in the root directory of the container; just do docker run busybox:1.24, then the ls command with a slash. There you go, Docker outputs all the contents under the root directory of the container. We can also run a container in interactive mode, so that we can go inside the container; we need another two options, -i and -t. The -i flag starts an interactive container, and the -t flag creates a pseudo-TTY that attaches standard input and output. Let's see this in action: here we do docker run -i -t, and we'll keep using busybox 1.24 as the image. Now we hit enter; there you go, it gets me right inside the container. We can run the ls command; see, it gives us all the contents at the root level. Let's say we want to create a new file here called a.txt; after that we do ls again, and we can see the file is created. Now we can type exit to exit the container; note that once we exit the container, Docker also shuts the container down. If I run the same docker run command again, Docker will spin up a container, but this time Docker starts a brand new container; so if I go check the file I created previously by running the ls command, you can see it is not there. As you see, when we run the docker run command it spins up a new container; the container we created previously with the text file has been shut down.

Hello and welcome back. In this lecture we're going to deep dive into Docker containers: we'll learn how to run containers in detached mode, how to specify the Docker container name, and how to use the docker ps and docker inspect commands. In the previous lecture we
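The interactive session above, and the point that each docker run starts a fresh container, can be summarized as a sketch:

```shell
# -i keeps STDIN open, -t allocates a pseudo-TTY: an interactive shell
docker run -i -t busybox:1.24
#   / # touch a.txt && ls    # a.txt exists in this container's filesystem
#   / # exit                 # leaving the shell stops the container

# A second run is a brand-new container from the same image,
# with a fresh filesystem:
docker run -i -t busybox:1.24 ls /   # no a.txt in this fresh container
```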
have seen how to run containers in the foreground, but in most cases containers actually run in the background. We can start a Docker container in detached mode with the -d option, so the container starts up and runs in the background; that means we can start the container and keep using the console for other commands after startup. Let's see this in action; we'll keep using our busybox image. In order to keep the container running in the background, we can run the Linux sleep command to suspend execution for a while: let's do docker run -d busybox:1.24 sleep 1000, then hit enter. See, Docker returns us the long container ID, and now the container should be running in the background. But how can we verify that? We can find all the running Docker containers on our local box by using the docker ps command; see, the container we just started is currently running. As you see, the docker ps command displays some container information, such as the container ID, the image name and the command we ran; the container ID displayed here is the short container ID, which is a prefix of the long container ID. What if we want to display all the containers on the local box, including ones which have stopped? We can add the -a option to the docker ps command; see, Docker also shows all the containers I have previously run. If we do not intend to keep the container, we can add the --rm option, so Docker will automatically remove the container when it exits.
Here we do docker run --rm busybox:1.24 sleep 1, so it sleeps for one second, and hit enter; the container runs for one second, then exits. Now we run docker ps -a to list all the containers; as you see, we don't see the sleep 1 container we just ran, because it has been removed by Docker as soon as it exited. Here is another useful option: we can also specify the name of the Docker container we want to run. Here we type docker run --name hello_world busybox:1.24; now let's run docker ps -a to list all the containers, and as you see, the new container is named hello_world. If we don't specify the container name, Docker automatically generates one when we run the docker run command; as you can see here, we got some funny auto-generated names, such as boring_rosalind and kickass_hopper. Before we finish this lecture, let me show you another handy Docker command: docker inspect. docker inspect displays low-level information on a container or image. Let's see this in action: let's start up a new container in detached mode; Docker returns us the container ID. Then let's run docker inspect, copy and paste the container ID, and hit enter. See, docker inspect renders the results as a JSON array; it displays the IP address and MAC address of this container, and if we scroll up, as you see, docker inspect also outputs some useful low-level information such as the image ID and the log path. I hope you've enjoyed this lecture, I'll see you later.

In this lecture we'll talk about Docker port mapping and the docker logs command. We'll be using a new Docker image, the Tomcat image; Tomcat is an open-source web server that executes Java servlets. Let's check it out: here we are at the Docker Hub page; we search Tomcat and enter the official Tomcat image page. Let's scroll down; as you see, Tomcat by default runs on port 8080. We can expose a port inside the container and map it to another port on the host with the -p option; in this way the Tomcat web server can be accessed by using the host URL
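The container-management commands from this lecture, gathered in one place (the `--format` selector is an extra convenience not shown in the video, but part of the docker inspect command):

```shell
docker run -d busybox:1.24 sleep 1000     # detached; prints the long container ID
docker ps                                 # running containers (short IDs)
docker ps -a                              # running and stopped containers

docker run --rm busybox:1.24 sleep 1      # auto-removed as soon as it exits
docker run -d --name hello_world busybox:1.24 sleep 1000   # explicit name

docker inspect hello_world                # full JSON: IP, MAC, image ID, log path
docker inspect --format '{{.NetworkSettings.IPAddress}}' hello_world
```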
If you're running Docker on Linux, the host here refers to localhost. If you're running Docker on Windows or Mac with Docker Machine, the host here refers to the Linux virtual machine running Docker. The format for the -p option is -p host_port:container_port. Let's see this in action. Here we open up the Docker Quickstart Terminal and make the font size larger as before. Let's do docker run -it -p 8888:8080, to expose container port 8080 on host port 8888, then tomcat:8.0, and hit enter. The Tomcat image is about 300 megabytes, so it takes quite a while to download; it is recommended to run this command where you have a good internet connection. I'll fast-forward the video until the download is done. Now the image is downloaded and the container is up and running. We can access the Tomcat server through a web browser. First we scroll up to find the Docker Machine IP and copy it. If you're running Docker on Linux, or Docker for Mac or Docker for Windows, the host IP is just localhost. Then we open our browser, paste the host IP, and go to port number 8888. See, we have opened the Tomcat console page. In most cases, especially in production, we would run containers in the background. Previously we learned that -d allows us to run containers in detached mode; let's try it out here. Then we check the container status by running docker ps -a. As you see, our previous Tomcat container has exited. Let's go back to our previous docker run command, add the -d option, and hit enter. See, Docker returns us the long container ID, and the container should be running in the background. We can check the logs of a running container with the docker logs command: just type docker logs and the container ID. Oops, it should be docker logs, not docker lock. Let's redo it. See, we are seeing the container logs.
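The port-mapping workflow above, condensed into a session sketch (host port 8888 forwards to Tomcat's 8080 inside the container; the container ID is a placeholder):

```shell
# map host port 8888 to container port 8080 and run detached
$ docker run -d -p 8888:8080 tomcat:8.0
6f8a93b2c1d4...

# confirm the mapping (PORTS column shows 0.0.0.0:8888->8080/tcp)
$ docker ps

# stream the container's stdout/stderr; -f follows like tail -f
$ docker logs -f 6f8a93b2c1d4
```

On Linux, Docker for Mac, or Docker for Windows, the server is then reachable at http://localhost:8888; with Docker Machine, substitute the VM's IP as reported by docker-machine ip.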
In this lecture we're going to talk about Docker image layers. A Docker image is made up of a list of read-only layers that represent filesystem differences. Image layers are stacked on top of each other to form the base for a container's filesystem. Take a look at this diagram: each image consists of multiple layers, and each layer is just another image. The image below is referred to as the parent image, and we call the image at the very bottom the base image. Docker pulls the image layer by layer. You can also check the full set of layers that make up an image by running the docker history command. As you see, the busybox image consists of two layers: the base layer adds a file, and the second layer runs the shell. When we create a new container, we add a new thin, writable layer on top of the underlying stack. This layer is often called the writable container layer. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer. The major difference between a container and an image is this top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted; the underlying image remains unchanged. Because each container has its own thin writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state. The diagram shows multiple containers sharing the same ubuntu:15.04 image. In this lecture we'll learn how to build Docker images. There are two ways to build a Docker image: we can either commit the changes made in a container to create a new Docker image, or we can write a Dockerfile to build an image. In this lecture we'll start by looking at the first approach. Here is what we'll do in this lecture: firstly, we'll spin up a container from a base image; secondly, we'll install the git package in the container; finally, we'll use the docker commit command to commit the changes made in the container to build a new image.
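The docker history inspection mentioned earlier in this lecture looks roughly like this for busybox (the layer IDs and sizes here are illustrative, not exact):

```shell
# list every layer that makes up the image, newest first
$ docker history busybox:1.24
IMAGE          CREATED BY                                    SIZE
47bcc53f74dc   /bin/sh -c #(nop) CMD ["sh"]                  0 B
<missing>      /bin/sh -c #(nop) ADD file:47ca6e777c36a4...  1.113 MB
```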
Let's see this in action. This time we'll be using the Debian image, which is one of the most popular Linux distributions. Here we are at the Docker Hub website; search for Debian. Let's pick the jessie tag. As you see, Debian is also a relatively small image, about 125 megabytes, and consists of two layers. Back in our terminal, type docker run -it debian:jessie. You see Docker can't find the image locally, so it is pulling it from Docker Hub. I'm fast-forwarding the video until the download is done. Now the image has been downloaded and we're right inside the shell of the container. We can run the ls command to show the root directory structure of the container; as you see, it is a typical Debian filesystem. What if we want to use the git command here? Just type git. Oops, looks like it is not installed in this container. Let's install it using the apt-get package management tool. Here we do apt-get update and apt-get install -y git. We put -y here so it automatically answers yes to prompts. Now, as you see, it is installing git. I'll fast-forward the video until it's done. Now it has installed the git package. Let's verify git was installed correctly: first we clear the screen and type git again. See, it shows the help message. What I will do now is exit the shell, and we're going to commit our container as a new Docker image. We have a new command to learn: docker commit. What the docker commit command does is save the changes we made to the container's filesystem into a new image. When running this command, we specify the ID of the container we're committing, and the repository and tag of the new image. Let me show you in a second. Here we do docker ps -a to get the ID of the container we just ran. Then we commit the container by running docker commit; just copy and paste the container ID here. Next we need to provide a repository name and an image tag. For the repository, I'll put my own Docker Hub repository name, which consists of my Docker Hub user ID, jameslee, plus a slash and the official Debian repository name.
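The commit workflow above, end to end (the container ID is a placeholder and the jameslee Docker Hub user ID is the instructor's example; substitute your own):

```shell
# run an interactive Debian container and modify its filesystem
$ docker run -it debian:jessie
root@3f2b1a...:/# apt-get update && apt-get install -y git
root@3f2b1a...:/# exit

# find the stopped container's ID
$ docker ps -a

# snapshot the container's filesystem as a new image
$ docker commit 3f2b1a... jameslee/debian:1.00
```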
We'll tag the image with 1.00, then hit enter. Docker commits our changes to a new image and returns the long ID of the new image. If we run docker images here, we can see our new image, jameslee/debian, tagged with 1.00. It's a little bigger than the official Debian image, because we extended the filesystem. As we learned previously, images are made of filesystem layers: the base layer of this new image is Debian, and we have extended that base image with a new layer, so it takes a bit more space on disk. Here we can spin up a container based on this new image. Now we're in the shell of the new container. We can run the ls command to show the file structure. Let's try the git command here. See, the git command is already installed: the changes we committed are persisted in the new image. I'll see you later. In this lecture we'll take a look at the second approach to building a Docker image: using a Dockerfile. A Dockerfile is a text document that contains all the instructions users provide to assemble an image. What is an instruction? It can be, for example, installing a program, adding some source code, or specifying the command to run after the container starts up, and so on. Docker can build images automatically by reading the instructions from a Dockerfile. Each instruction adds a new image layer to the image; basically, instructions specify what to do when building the image. Let's go ahead and create a Dockerfile. A Dockerfile must not have any extension and must be named Dockerfile, with a capital D. The Dockerfile gets created; let's open it up. We add our first instruction, the FROM instruction. Docker runs the instructions in a Dockerfile in order, and the first instruction must be FROM, to specify the base image from which you are building. Here we just use debian:jessie as our base image, so debian:jessie is the argument for the FROM instruction. Instructions are not case-sensitive; however, the convention is for them to be uppercase to distinguish them from arguments more easily.
Let's move to the next instruction, the RUN instruction. The RUN instruction specifies a command to execute; it can be any command you could run in a Linux terminal. Here we do apt-get update first, then apt-get install git. Make sure we put the -y option, because we won't be able to answer the prompt. Next, let's install vim as well. Now we have our Dockerfile ready; just save the file. It is time for us to start building the Docker image, and we have a new command to learn: docker build. docker build builds an image using the instructions given in the Dockerfile. Here we type docker build. docker build takes a -t option to tag the new image we're building; as I did in the previous lecture, I'll tag the image with my own repository name, jameslee/debian. The docker build command also requires a path, which is the path to the build context. The path specifies where to find the files for the context of the build on the Docker daemon. For example, if we would like to copy some source code from local disk into the container, those files must exist in the build context path. Remember, the daemon could be running on a remote machine, and no parsing of the Dockerfile happens on the client side. When the build process starts, the Docker client packs all the files in the build context into a tarball and then transfers the tarball to the Docker daemon. What's more, by default Docker searches for the Dockerfile in the root directory of the build context path. If your Dockerfile doesn't live in the build context path, no worries: you can tell Docker to use a different file by providing the -f option. Here my Dockerfile is in my current directory, so I can just use the current directory as the path. Now we hit enter to start the build process. Let's go through the build output to get a better understanding of how Docker builds the image. First it outputs "Sending build context to Docker daemon".
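Putting this lecture's Dockerfile together (jameslee is the instructor's example repository name):

```dockerfile
# Dockerfile — lives in (or is passed with -f from) the build context
FROM debian:jessie

RUN apt-get update
RUN apt-get install -y git
RUN apt-get install -y vim
```

It is then built with `docker build -t jameslee/debian .`, where the trailing "." makes the current directory the build context.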
As mentioned before, the Docker client is now transferring all the files inside the build context, which is my current folder, from the local machine to the Docker daemon. Step 1: FROM debian:jessie — that is the FROM instruction. As you can see, Docker is going through the instructions in the Dockerfile in order. Then step 2: RUN apt-get update — Docker is executing the RUN instruction. It says "Running in" followed by a container ID. What happens is that Docker starts a new container from the base Debian image and executes the apt-get command in that container. When step 2 is about to finish, it prints "Removing intermediate container" followed by a container ID. You might have already noticed that the two container IDs are exactly the same: Docker spins up a new container and afterwards just removes it. Let me explain what is happening here. The Docker daemon runs each instruction inside a container. A container is a writable process that writes filesystem changes, which are committed to an image; in our case it installs a program. Once Docker has written the changes and committed the image, Docker removes the container. So for each instruction, Docker creates a new container, runs the instruction, commits a new layer to the image, and removes the container. Basically, containers are ephemeral: we just use containers to write image layers, and once they're finished we get rid of them. Images are persistent and read-only. Let's continue digging through the output. Now it says step 3: RUN apt-get install -y git, followed by a container ID. What happens is that at the end of step 2, Docker committed our container as a new image, and it starts a new container from that image for the next instruction. Now it is installing git. When step 3 is about to finish, it repeats the same process: commit the intermediate container as a new image and remove the container. Let's move to step 4: it says RUN apt-get install -y vim, and Docker is executing the command in a new container created from the image it committed in the previous step.
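A trimmed, annotated version of that build output — the image and container IDs here are illustrative, and the exact wording varies slightly between Docker versions:

```shell
$ docker build -t jameslee/debian .
Sending build context to Docker daemon  2.048 kB
Step 1 : FROM debian:jessie
 ---> 23a275...
Step 2 : RUN apt-get update
 ---> Running in 9b12fe...        # temporary container for this instruction
 ---> f1a8c0...                   # committed image layer
Removing intermediate container 9b12fe...
Step 3 : RUN apt-get install -y git
Step 4 : RUN apt-get install -y vim
Successfully built 7d9e4f...
```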
Once all the steps are finished, the build completes successfully. Let's run docker images to make sure the new image was created successfully. As we can see, the image we just built is tagged with latest, because we didn't specify a tag when we ran the docker build command. The new image we created is about 250 megabytes, while the base Debian image is only about 125 megabytes; the new image is almost double the size of the base image, so the extra layers we added by installing git and vim are about 125 megabytes. That's all for this lecture; I'll see you later. Hello and welcome back. In the previous lecture we saw how to write a Dockerfile to build an image. In this lecture we'll learn a little more about the Dockerfile syntax and some best practices for writing Dockerfiles. First of all, we'll take a look at how to chain RUN instructions. One thing to keep in mind is that each RUN instruction executes its command on the top writable layer of a container and then commits the container as a new image, and that new image is used for the next step in the Dockerfile. So each RUN instruction creates a new image layer. It's recommended to chain the RUN instructions in a Dockerfile to reduce the number of image layers they create. I'll show you how to do this in a second. Here we modify the Dockerfile: instead of having three RUN instructions, we'll chain apt-get update with the apt-get installs of git and vim, aggregating those three instructions into one. Let's save the Dockerfile and rebuild the image. As you see, we have only two build steps instead of four, which means we are adding only one more layer on top of the base image instead of three. Another good practice when writing RUN instructions is to sort multi-line arguments alphanumerically; this will help you avoid duplicating packages and make the list much easier to update. Let's say we want to install the python package as well: we need to put python between git and vim so that they are sorted alphanumerically.
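The chained, alphanumerically sorted version described above might look like this:

```dockerfile
FROM debian:jessie

# one chained RUN = one image layer instead of three;
# packages sorted alphanumerically to ease future edits
RUN apt-get update && apt-get install -y \
    git \
    python \
    vim
```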
Now we'll move on to the CMD instruction. The CMD instruction specifies what command you want to run when the container starts up. If we don't specify a CMD instruction in the Dockerfile, Docker will use the default command defined in the base image; in our case that is debian:jessie, and the default command is bash. Unlike the RUN instruction, the CMD instruction doesn't run while building the image; it only runs when the container starts up. You can specify the command in either exec form, which is preferred, or in shell form. Let's see this in action. We modify the Dockerfile and add a CMD instruction after the RUN instruction; here we echo "hello world". Let's rebuild the image. The build completes successfully. Now let's start a container from this image; just copy the image ID here. See, it prints hello world. We can also override the CMD instruction at runtime: when you do docker run, you can specify a different command to run. Let's rerun the container from the image and override the CMD instruction with echo "hello docker". See, this time it prints hello docker instead. The next topic we're going to talk about is the Docker cache. You've probably already noticed that the last docker build was much faster than the first build. That is because of the Docker cache. Each time Docker executes an instruction, it builds a new image layer. The next time, if the instruction hasn't changed, Docker knows the image layer already exists, and rather than building it again, Docker simply reuses the existing layer. As you see, the last docker build doesn't redo step 2; it just reuses the image built previously. This helps make our builds much faster, and if you're building many containers it can greatly reduce build time. However, if the Docker cache is used too aggressively, it may cause issues. For example, say you have a Dockerfile like this. After building the image, all layers are in the Docker cache. Suppose you later modify the apt-get install line by adding the extra package curl. Docker sees that the first and second instructions are unchanged, and reuses the cache from previous steps.
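The CMD example in exec form (the preferred form), as described above:

```dockerfile
FROM debian:jessie

RUN apt-get update && apt-get install -y git vim

# exec form: runs when the container starts, not at build time
CMD ["echo", "hello world"]
```

Any arguments placed after the image name in docker run replace the CMD, so running `docker run <image-id> echo "hello docker"` prints hello docker instead.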
Because apt-get update is not rerun, you can potentially get out-of-date versions of the git and curl packages. The solution is to chain apt-get update and apt-get install as a single instruction, so that whenever the apt-get install line is modified, the whole instruction reruns, ensuring you get the latest version of the packages. You can also tell Docker to invalidate the cache by using the --no-cache flag when issuing the docker build command. Next, let's talk about the COPY instruction. The COPY instruction copies new files or directories from the build context and adds them to the filesystem of the container. Let's see this in action. Here we add an abc.txt file in my current directory. Let's modify the Dockerfile: remove the CMD instruction and add a COPY instruction to copy the abc.txt file into the container, then save the file. Let's rebuild the image and run the container. Let's enter the src directory. See, we have our abc.txt file. There is another instruction which is very similar to COPY: the ADD instruction. Those two instructions are quite similar; the difference is that ADD can do more magic. ADD allows you to download a file from the internet and copy it into the container. ADD also has the ability to automatically unpack compressed files: if the src argument is a local file in a recognized compression format, it is unpacked at the specified dest path in the container's filesystem. Generally speaking, COPY is preferred, because it's more transparent than ADD. COPY only supports the basic copying of local files into the container; COPY is really just a stripped-down version of ADD. Ultimately, the rule is this: use COPY unless you're absolutely sure you need ADD. I hope you have enjoyed this lecture; I'll see you later. Hello, welcome back. Previously we learned how to build a Docker image, either by manually committing the changes we made in a container or by writing a Dockerfile. In this lecture we'll see how to push our extended image to a Docker repository, so that other developers can use that image.
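A sketch of the COPY variant, with the extra ADD behaviors noted for contrast (the /src destination path is an assumption following the lecture's example):

```dockerfile
FROM debian:jessie

RUN apt-get update && apt-get install -y git vim

# COPY: plain, transparent copy from the build context
COPY abc.txt /src/abc.txt

# ADD could also fetch URLs or auto-unpack local archives, e.g.:
#   ADD https://example.com/file.txt /src/
#   ADD archive.tar.gz /src/        (unpacked automatically)
# but prefer COPY unless you really need that extra magic
```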
Or we can pull that image to our production environment and run it as a container. The easiest way to make your images available is to use Docker Hub, which provides free repositories for public images. First we need to create a Docker Hub account. Let's go to Docker Hub and sign up for an account: just type your Docker Hub ID, email address, and password. After you have signed up and logged into your account, it looks like this. I just created a Docker Hub account for this tutorial; jleetutorial is the Docker Hub ID. In order to push the image to the right repository, we need to associate the image with a Docker Hub account. The way to link the image with a Docker Hub account is to rename the image to something like docker_hub_id/repository_name, and the command to rename an image is docker tag. Let's see this in action. First we issue the docker images command, which displays all the current images in my local box. The image we're going to push is this extended Debian image. Now let's rename this image: just type docker tag, copy and paste the image ID, then the repository name, jleetutorial/debian. We also need to specify a tag for this image. If we leave it blank, Docker will just use the default latest tag, but try not to use the latest tag unless you have to. Let me explain why. Docker will use latest as a default tag when no tag is provided, but beyond that, the latest tag has no special meaning. A lot of repositories use it to tag the most up-to-date stable image; however, this is still only a convention and is entirely unenforced. Images tagged latest will not be updated automatically when a newer version of the image is pushed to the repository. If you're shipping Docker images to a production environment, you should just ignore the latest tag. Don't use it; don't be tempted by it. It's easy to look at it and think your deployment script should just pull latest, but since this is not enforced, it takes a lot of discipline to make that work. The safest way is to use a versioned tag every time.
Here we explicitly specify the tag 1.01, then hit enter. Now let's do docker images again. See, the image has been tagged with the new name. The first and the second image have the same image ID, because they are the same image with exactly the same content, just tagged with different names. The next step is to push the image to Docker Hub. In order to do this, we need to issue the docker login command and type our Docker Hub account credentials. Now we're logged in. Finally, we can do docker push followed by the repository name and tag, and hit enter. Now it is sending the image to Docker Hub. This is a 250-megabyte image, which might take a while to push; I'm fast-forwarding the video until the image has been pushed. This is what you see if everything worked out successfully. Now let's go back to our Docker Hub account and refresh the page. We can see the new image appears under my account. Now we can click the image and check the image details, such as the image tag. And that's it for this lecture; I hope you've enjoyed it.
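The full tag-and-push sequence, condensed (jleetutorial is the instructor's Docker Hub ID and the image ID is a placeholder; substitute your own):

```shell
# give the local image a name under your Docker Hub account,
# with an explicit version tag instead of :latest
$ docker tag a1b2c3d4e5f6 jleetutorial/debian:1.01

# authenticate against Docker Hub
$ docker login

# upload the image; only layers the registry doesn't already have are sent
$ docker push jleetutorial/debian:1.01
```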
Info
Channel: Level Up
Views: 62,541
Rating: 4.9414635 out of 5
Keywords: Docker, docker tutorial
Id: VlSW-tztsvM
Channel Id: undefined
Length: 71min 54sec (4314 seconds)
Published: Sat Jul 07 2018