Learning Docker // Getting started!

Captions
In professional IT, containers have undoubtedly become one of the most crucial technologies we use these days. No matter if you are a cloud engineer deploying highly available container orchestration, a software developer coding and shipping apps, or a sysadmin deploying and maintaining app services: containers are just everywhere, and I think this is such an important technology that everyone in IT should know about it.

Hey everybody, this is Christian, and today's video is all about Docker and containers. You probably know from my other videos here on YouTube that I often use Docker and other container orchestrators to deploy and maintain application services in my home lab, so most of you might be quite familiar with it. But I noticed that I haven't created a comprehensive tutorial series about this important topic yet, so that's why I want to start a brand-new video series, starting today, where we unveil all the magic behind Docker and containers. I've prepared an interesting presentation with a lot of background information so you understand the bigger picture, but also many, many practical examples and walkthroughs you can directly follow along with in your terminal. So no matter if you're just a beginner who is curious about Docker and wants to learn more, or you're somewhat advanced and experienced but want more background information, this is definitely the right video series for you. I hope you join me on this journey, and I hope it will be a lot of fun.

But before we start, I first want to show you Kestra, an absolutely incredible IT automation tool, and thanks, Kestra, by the way, for supporting this video. I've been playing around with this tool for quite a while in my home lab. It's mainly created for automating data operations and building reliable workflows. For example, I've used it to create automation workflows that run updates on my servers, perform container image build tasks, or run CI/CD pipelines to easily
deploy and update software. Now, what makes Kestra very interesting in comparison to other automation tools like Ansible or Terraform is that Kestra comes with a really nice web UI that is both developer- and non-developer-friendly. If you like to create automation pipelines, you just write your workflow logic in a declarative YAML-based language, but it also gives non-developers a visual representation of what's actually happening inside the workflows, with the ability to easily change values or optimize queries without having to write a single line of code. And because I know you guys love free and open-source tools: yes, it's fully open source, and you can easily deploy it in your own infrastructure using Docker, Kubernetes, or whatever you like. The free Community Edition gives you all the necessary core utilities to build, schedule, and run automation workflows, and if you'd like some additional enterprise features, just reach out to the Kestra team and book a demo. You will find a link to this tool in the description of this video. Just try it out, it's a really great tool.

Okay, before we start with the practical walkthrough where I show you how to deploy containers, I first want you to understand the bigger picture: why do we actually use containers in IT? Let me show you a quick example. If we want to install an application on our machine, we typically install it on an operating system, which might be Windows or a Linux server, it doesn't really matter. When we install an application, it usually requires a lot of things to be present on the operating system. For example, if application 1 is written in Java, it requires the Java Runtime Environment to be present on the system. Likewise, application 1 might have other requirements, such as an SQLite database to store some data for the app, and so on. That mostly works totally fine, as long as all the packages are installed on the operating system.
However, if you start installing other applications on the same system, they might have different requirements. If application 2 is also written in Java, its installer might decide: I need Java too, but in a slightly newer version, say version 17 LTS instead of version 16.2. You never really know whether application 1 also works with the latest Java 17. Maybe application 1 hasn't been updated by its developers to be compatible with it, or maybe there is an update but the system administrator hasn't applied it yet. So it starts crashing and you have a problem on your system. This is a very common issue we see in Linux distributions, for example, and it's the whole reason we have package managers: they try to solve all these dependency issues. You can definitely see that this is quite a huge challenge. For developers, because they need to make sure their application always works across all the different edge cases, with different systems, binaries, and libraries, which is a hell of a lot of work. And also for system administrators: you can imagine it's quite a heavy task to make sure all your apps keep functioning properly when you're doing updates. If you have ever worked as a sysadmin, you probably know what I'm talking about, and it only gets worse and more challenging the more applications you install on one operating system.

This is exactly the problem why we use containers. The idea behind containers is: if we have all these dependencies, why don't we create an isolated environment for each application and ship the application with all its required dependencies, libraries, binaries, and utilities in its own package, one that doesn't interfere with other containers on the same system? So when we install application 1, container 1, where this application is installed and deployed, also contains the Java Runtime Environment in version 16.2 and the SQLite
database. But if we create another container with application 2, that container also has a Java Runtime Environment present, just in a different version, and container 1 and container 2 don't know about each other. So we ship all of these libraries, dependencies, and utilities along with the application in its own container package, which is very lightweight and very efficient. This is what we call a container image, and from one container image we can create multiple copies, more than one container instance, all running in their own isolated and protected environments. That is the whole idea behind containers, and you can imagine it has a lot of great benefits and advantages for both software developers and system administrators. If a software developer needs to write new code for application 1, they don't have to make sure the app works with every Java version out there. They just create a container image, package the Java version their application is compatible with into it, and hand that to the system administrator. The system administrator doesn't need to care whether the server has the Java Runtime Environment installed in the correct version; they can just start the container and always be sure the container image contains the exact dependencies, binaries, and utilities the developers packaged with it.

For example, if I want to create a new container on my Mac and just run an application from a container image, I can use the docker run command to create and start a new container. I'm adding a couple of parameters you shouldn't worry about yet; I'll explain everything I'm doing here in much more detail in a few minutes, I just want to demonstrate how easy it is. Now we can pick a container image that we want a new container process to be
created from. For example, I've installed a Debian image, an Alpine Linux image, and some other testing applications that I'm running here, an nginx web server, a database, and so on. For now, let's just create a new container from a generic Debian image. Once I hit Enter, you can see it automatically starts a new shell. This shell is not running on my Mac system anymore; it's already running inside a Debian-based container. For example, when I cat the /etc/os-release file, you can see that this container thinks it runs on a Linux distribution, Debian 12 "bookworm". I can also easily create a second process: when I open another terminal window and execute the exact same command, I get a second shell with a different ID, because these are two separate containers and they don't know about each other. When I look at the file system, you can see that each container has its own. For example, when I create a test.txt file, it's present in container 1, but when I check the file system in container 2, the test.txt file is missing, because the file system of container 1 is different from the file system of container 2. The same is true for all the processes and things running inside the container, such as the apps, the binaries, and the libraries.

One thing is really important and I should mention it at this point, because you might think: this somehow looks like a virtual machine. But containers are not virtual machines. Containers don't come with a full-blown operating system; they only come with the necessary tools, binaries, and libraries. For example, on a Linux virtual machine you would be able to execute commands like ps; that's not present in this container image, because it was created with the idea in mind to be as small as possible and ship only the necessary tools.
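The isolation demo just described can be reproduced with a few commands. A quick sketch (it needs a running Docker daemon; the image tag and file name are just examples):

```shell
# Start an interactive Debian container (-i keeps STDIN open, -t allocates a TTY)
docker run -it debian

# Inside the container: this reports Debian, not the host operating system
cat /etc/os-release

# Create a file in this container's private filesystem
touch /test.txt

# In a SECOND terminal, run the same command again: you get a second,
# independent container, and /test.txt does not exist in that one
docker run -it debian
ls /test.txt
```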
If we take a look at the /bin folder, you can see that it contains a couple of generic Linux utilities and tools, but not as many as a full-blown operating system.

Now let's take a closer look at the architecture and how Docker approaches containers, to better understand how the different components interact with each other. If you want to create and run containers on your host operating system, you always need a container engine, and Docker's implementation requires a daemon. This is a service application that always runs on the host operating system, and it is responsible for managing containers and container images. If you want to instruct the Docker daemon to perform a certain action, for instance to build a new image or create a new container, you need a CLI tool. That is the Docker CLI, a terminal application you usually install on your client. Note that the client and the host don't necessarily need to be separate physical or virtual machines: you can install the Docker CLI on the same server where you're running the Docker host, but you could technically also install it on a different machine, or make API calls to the Docker daemon from a different client. That's why I always refer to it as a client.

In the CLI you execute the docker commands, so let me quickly explain some of the basics, the typical commands you will always need when working with containers. First, the docker build command: this builds a new container image. As I said, a container image is a lightweight package containing all the files, binaries, and tools, everything the container and its application need to run, and you build it with the docker build command. Once you've created a container image, you can start a new container instance, a new container process, from that image, and as I said, you can run multiple copies of the same
image on the system. For that you use the docker run command I just demonstrated. But you might also wonder: once you've built these images, how do you ship them, how do you share application images? That's where a third, very important component comes in: the container registry. A container registry is like a big cloud storage where developers and individual people can upload their container images. For example, there are container images based on a Linux distro, something like Ubuntu, and there are images packaged with fully preconfigured, pre-installed applications, for example an nginx web server, or other apps like Grafana, Drupal, or Teleport. There are millions of different container images uploaded to Docker Hub, which is one of the container registries out there. If you want to start a new container from a specific image because you want to run an application, you can use the docker pull command, which automatically downloads that container image, and with the docker run command you can then start new containers, including multiple copies of the same image, on your host system. That's basically the whole magic behind containers and how they work. I know there are many, many more details we could talk about, but I think this presentation was already long enough. I hope I didn't bore you, but it's really important to understand the bigger picture before we start playing around with containers, so you actually understand the reasoning and the whole architecture behind it.

Now, if you want to easily get started with containers, I would always recommend starting with Docker, because Docker is one of the earliest players in the container business, and they created a lot of great tools, utilities, and services around containers that make it very easy for you, as a developer or as an individual, to get started with containers completely for free.
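The three commands map onto a simple build/ship/run workflow. A minimal sketch (the image name myapp:1.0, the registry account, and the Dockerfile it assumes in the current directory are made up for illustration):

```shell
# Build an image from the Dockerfile in the current directory and tag it
docker build -t myapp:1.0 .

# Pull an existing image down from a registry such as Docker Hub
docker pull nginx

# Push your own image up to a registry (requires an account and docker login)
# docker push myuser/myapp:1.0

# Start containers from an image; running it twice gives two isolated copies
docker run myapp:1.0
docker run myapp:1.0
```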
What I would always recommend as a beginner: if you want to play around with containers on your personal workstation, install Docker Desktop. This is an application that comes with the Docker Engine, the Docker service daemon you need to run containers, but it also has a graphical user interface and some other interesting utilities and services around containers that help you easily maintain containers on your system. It's also pretty clever, because Docker Desktop allows you to run Linux-based containers on a Windows or Mac system as well. Since container images don't contain a guest operating system the way virtual machines do, you're normally not able to mix operating systems: you can't run Linux-based containers directly on a Windows or Mac machine. So Docker Desktop on a Mac or on Windows installs a small, lightweight virtual Linux machine in the background. You don't need to manage it, Docker Desktop does that entirely for you, but this small virtual machine runs a Linux-based operating system, and that is where the actual containers are executed. For example, here is Docker Desktop started on my Mac. You can give this virtual machine resource allocations, customizing how many CPU cores and how much memory the system reserves for the virtual machine that executes your containers. You can also easily customize the configuration, install software updates, there are even extensions, and you can easily manage all your container images and volumes. So this is really a great way to get started.

However, my home servers, like my Ubuntu servers, don't have a graphical user interface, so I don't need Docker Desktop there; I just need the Docker daemon, the engine. For that, you should go to the documentation, which is also a great place to learn more about Docker and use as a reference for all the CLI tools and manuals. Here in the manual
section you will find a guide to install Docker Desktop, but also a guide for the Docker Engine. If you want to install the pure Docker Engine, the Docker daemon, on a Linux operating system, you would typically pick one of the server packages, either in DEB or RPM format, depending on the type of Linux distribution you're using. You can also easily find instructions there: I'm mostly using Ubuntu Linux distros on my home lab servers, for example, and there are great instructions and terminal commands showing how to easily install the Docker Engine on your machine. Once you have done all of that, whether you're running on Windows, on a Mac, or in a virtual machine, and the Docker Engine is present, you can execute commands in the terminal.

By the way, there's one thing I should mention for the Windows users: Docker Desktop runs in combination with WSL 2, the Windows Subsystem for Linux. I've used that in some of my older videos when I was still using Windows. I don't have a Windows machine to show it to you anymore, but you can check out my older videos on YouTube where I explained this in a bit more detail, or go to the official Microsoft homepage and learn how to install Linux on Windows with WSL 2. Then you can install Docker Desktop on top of it and select the WSL 2 instance you want to use to run these Docker commands.

Okay, let's go back to the terminal. If you've got that installed on a server or on your workstation, you should be able to execute the docker command. This gives you a list of all the different subcommands you can use. Don't worry, we will go through most of these basic commands in this video series; I'm not going to explain everything in this one short video, that would fill multiple hours for sure. We will start with a very simple docker version. If you execute this, you should see some output, and
if that works, you can be sure the Docker Engine and the Docker CLI have been installed correctly on your workstation or server. Now again, to create and start a simple container, as I just explained, you can use the docker run command; this is probably the easiest way, so let me show a short example. There is a small container image maintained by Docker called hello-world, and if you start this container, it just prints some text to the terminal to demonstrate that Docker is working properly, and then you can see it immediately stops. But you can also start other containers with an interactive terminal; that is what I just used for the Debian container: interactive, with a TTY. For now, let's not start a Debian-based container image. I've also installed an Ubuntu-based container image, for example; you can see this one is even smaller, just 69 MB, and when I start it, it also opens a new shell. Maybe you want to try something else, for example CentOS. This container image exists, but it's not present on my system, because I haven't downloaded it with a docker pull command. If you try to create or start a new container from an image that is not present on your system, Docker automatically pulls it down from the container registry.
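The quick sanity check plus these first runs, as a sketch (image names as in the walkthrough; each command needs a running Docker daemon):

```shell
# Verify that the engine and the CLI are installed and can talk to each other
docker version

# Tiny smoke-test image maintained by Docker: prints a greeting and exits
docker run hello-world

# Interactive container from a locally cached Ubuntu image
docker run -it ubuntu

# An image that is not cached locally is pulled from the registry first
docker run -it centos
```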
Now, one question you might have: where do you actually pull these container images from, where do you find them? You can easily find them on the Docker Hub registry. You just need to go to hub.docker.com, and there you can search for specific container images. As I said, there are container images for Linux distributions, something like Debian or Ubuntu; these are usually small, lightweight core images that you can use as a template and extend with your own applications and deployments. But there are also others: what I often see, for example, is the Alpine Linux distribution, which is much more lightweight than Ubuntu or Debian and is very often used as a base image for new container deployments. You can also search for specific applications. For example, if you want to deploy an nginx web server, you can see that the developers of nginx maintain their own official build of nginx as a container image. These container images are based on those core images; the ones here are based on a Debian bookworm or an Alpine Linux distro, and you can select the container image tag you want to start your container from. Those tags are shipped with different versions of nginx, or they are based on different Linux distributions or core images. Let me demonstrate with this example: say we want to download and start the nginx web server in a new container. I search for the app and see, okay, this one is maintained by the official developers of nginx. Be careful when you download container images, by the way: you can see that some of the container maintainers or developers are verified publishers or Docker official images, but not all of them, because anyone could build their own container image, install an nginx web server in it, and upload it to Docker Hub. That's why you see so many different flavors of applications and container images. These here are sponsored OSS images, but you can also find container images from individual people. For example, this one is from someone who uploaded a container image called "nginx gate", and
that was last updated 6 years ago, so this is, for example, nothing I would want to deploy. You really need to be careful when using container images; the most secure way is to build your own container images, and we will probably do that in an upcoming video of this series. But for today, it's totally fine to just use one of the official images. This is, by the way, how most system administrators who are not software developers do it: they just run the official application container images, and they are totally fine with that.

For example, I just want to download this image, so I note down this tag and execute the docker pull command. If I just pull the nginx container image, it always uses the default tag, latest, and as you can see, this is up to date; I've already downloaded and installed this latest version. The latest tag is usually updated by the maintainer once a new version of the container image is released, but I'll also say: I don't know if the latest tag is the best way to deploy applications. Most of the time you will pick a specific version to make sure it's compatible with all the other systems in your infrastructure. So let's pull down this specific tag: when you write the name of a container image, the tag goes after a colon, so nginx:1.24.0, and let's pull this specific container image. Now that we have it, you can inspect which images you currently have downloaded on your system with the docker image ls command. There you can see I've downloaded a couple of images, and somewhere in there should be nginx. Here you can see I've downloaded nginx three times, in three different tags: here's version 1.24 that we just downloaded, but I also downloaded the tag 1.25, and the latest tag is separate as well. But you can see the image ID is the same for the latest tag and version 1.25.2, and that means that even though these are different image tags, they actually contain the same payload. That's how you can sometimes tell which one is actually the latest version. Of course you can also remove older images from your system: if you want to remove an image, you can execute docker image rm and give it the ID, or use the short command, docker rmi. For example, let me delete some of these test images that I've built, I don't really need them anymore, and now I've cleaned up this list a bit. Note that we will talk about Docker images, the different layers, and how to tag and build them in probably the next video of this series; for now I just want to teach you more about how to create and run containers.

For example, if we want to run a new container instance from a specific image, we again use the docker run command, give it the image, and then specify the tag, for example latest, or 1.25.2. Hit Enter, and we've just created a new container from an nginx image, so this is a web server that is currently running on our system. Now let me stop this. If you want to see what containers are currently running on your system, you use the docker ps command, but as you can see, there is nothing here, because there's currently not a single container running on my system. With the docker ps --all flag, you get a list of all containers that currently exist: if you create containers and stop them, they still exist on the system, just like this nginx web server that I started and stopped; it is still present. And if I executed docker run nginx again, it would not just start that same container; it would create a new instance with a different ID. If you want to start the same container that already exists, you need a different command: not docker run, but docker start.
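The image and container commands from this section, collected as a sketch (the tags follow the walkthrough and are examples; <container-id> stands for a real ID or name):

```shell
# Pull a specific tag; with no tag given, "latest" is the default
docker pull nginx:1.24.0

# List locally stored images; one image ID can carry several tags
docker image ls

# Remove an image (short form: docker rmi nginx:1.24.0)
docker image rm nginx:1.24.0

# Create and start a NEW container instance from an image
docker run nginx:1.25.2

# Running containers only, vs. all containers including stopped ones
docker ps
docker ps --all

# Stop a container, then re-start the SAME container later
docker stop <container-id>
docker start <container-id>
```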
Then you give it the name or the ID of the container, and with docker ps we can see that this container is now running. Of course, we can also stop the container again by giving it the ID, and then our container is stopped; you can also do things like restarting a container. And if you want to remove an existing container from the system, you use the docker rm command, just like with the images: give it the ID, and then your container is gone. Okay, so now we understand how to create and run new containers, how to stop running ones, and how to remove existing containers. Let's go into a bit more detail here, because this is probably the bread and butter of a system administrator: if you want to deploy and run containers in your home lab or in a production system, you should know how these containers function, how to name them properly, how to inspect their configurations, how to start them with flags, and how to deal with network access and persistent storage. I don't want to go into every detail, because I will probably create one video for each of these topics separately, but I want to get you started so you know enough to run it.

First, if you want to inspect the current configuration or details of an existing container, you use the docker inspect command. This is very useful to get more information about a specific container. For example, if I want more information about this nginx container, I can just execute the docker inspect command with the container's ID, and I get a JSON object listing all the different configurations and flags this container has been started with: you can see the container's name, the arguments, the status (it's currently not running), the SHA value of the specific image the container was started from, and many more configurations.
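A sketch of docker inspect, including the --format flag to pull out single fields instead of reading the raw JSON (the placeholder <container-id> stands for a real ID or name):

```shell
# Dump the full configuration of a container as a JSON document
docker inspect <container-id>

# Extract individual fields with a Go template
docker inspect --format '{{.State.Status}}' <container-id>   # e.g. running or exited
docker inspect --format '{{.Config.Image}}' <container-id>   # image it was created from
```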
You don't need to understand all of this yet, but it's important to know that you can find more information about your existing containers with that command. So let's go back to this list, and let's assume I want to run an nginx web server on my system using Docker, and I want this to be a production-ready setup. How would I approach this? There are some important flags you need to start your container with, concerning things like network access, persistent storage, and so on. For example, if I want to create a new nginx container, let's start with a very simple thing: I will use the 1.25.2 container image tag, but I don't want the container to be generated with a random name that says nothing about this web server. I want to specify the name, and in this case I use the --name parameter and give this web server container a unique name, for example nginx-production-1. Just pick a name that you like and that lets you recognize your existing containers. And because this container always runs in an isolated environment, we also need to allow access to it from the outside, so we need to publish the network ports. This is done with the flag --publish, or just -p, which is the same, and then you name the ports that you want to publish, meaning allow access to from the outside. As you should know, a web server like nginx usually uses port 80 for HTTP and port 443 for HTTPS, and as you can see, you just give it the port numbers: the colon separates the host port from the container's port. This is really important to understand, because the application process inside the container's file system will usually use port 80 for HTTP and 443 for HTTPS; that may differ per application, a database listens on a different service port, but you can also map the port to a different host port. So for example, inside the container nginx is listening on port 80, but you want to
publish it on a different port; then you can just change the mapping on the host system to a different port, something like 8080. You then map your host operating system's port 8080 to the internal container service port 80, and you can do the same for any other port you want to publish: port 443, for example, you can change to 8443. That is what you usually do when you have more than one container trying to listen on the same port: you change the port number on the host OS, without having to reconfigure the application inside the container. Very useful. And then there is the storage inside the container: you might have a container deployed in version 1.2.5, then you download a newer version of that container image and need to create a new container with the new tag. How do you transfer the data from the first container to the second? This is why we use persistent volumes. We use a Docker volume with the --volume parameter, and then you can use named volumes or mount points. I'm using mount points most of the time, but I will probably make a separate video about persistent volumes; I've also done some videos in the past, so if you're curious about the differences, or how to migrate data from one container to another, you can check out my older videos.

Here, and this is the same principle as with the port numbers, you map a specific file path on the host operating system, in my case on my Mac in my personal home folder, tutorials/tutorial-2, and let's pick the html directory, with a colon to the internal container file system. All the HTML pages inside the container file system are of course in a different location, but this is how you can easily map persistent data storage on your host OS to a specific path in the internal container's file system. For nginx we can use the /usr/share/nginx/html location, which is where the HTML files are stored. What we can also do is run this container in the background, because if I started this container in the foreground, it would stay attached to my terminal session, and if I closed that session, the container would stop. So if I want to run this container persistently in the background, I attach a -d parameter, for detach. Hit Enter, and you can see the container has been started. If I execute the docker ps command, and now I can use a simple ps because the container is currently up and running, you can see it has these port mappings: port 8080 is mapped to port 80 inside the container, and 8443 is mapped to 443, and the container is up and running. Now we can try to connect to this web server with a simple curl command on localhost, but we need to use port 8080, because that is the port we mapped to. You can see the response is 403 Forbidden, probably because there is not a single file in that folder yet. So let's quickly open this location and create a new file in here called index.html, the default HTML page; let's just add some random content, I think that should be totally fine. Back in the terminal, let's execute the curl command again, and you can see: that's our web page. This is how easily we deployed a web server in Docker. Now, if we want to stop this container, we can simply use the docker stop command, and this time we don't need the identifier, we can use the name we specified. So let's stop it, check docker ps --all, and we still have our container with the name nginx-production-1 in the list; it's currently stopped, and it was created from that image. Now, when I want to update this to a newer version, for example to the latest version, what you typically do is remove this existing container. Don't worry, our index.html file is still stored on the host.
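Putting the flags together, the whole deployment and the later update might look like this sketch (the container name, host path, and port numbers follow the walkthrough and are just examples; the target path assumes the official nginx image, which serves files from /usr/share/nginx/html):

```shell
# Create and start the web server: named, ports published, volume mounted,
# detached into the background
docker run -d --name nginx-production-1 \
  -p 8080:80 -p 8443:443 \
  -v ~/tutorials/tutorial-2/html:/usr/share/nginx/html \
  nginx:1.25.2

docker ps                      # shows the mappings 8080->80 and 8443->443
curl http://localhost:8080     # serves whatever is in the mounted html folder

# Update: stop and remove the old container, re-create from a newer tag.
# The website data survives, because it lives on the host, not in the container.
docker stop nginx-production-1
docker rm nginx-production-1
docker run -d --name nginx-production-1 \
  -p 8080:80 -p 8443:443 \
  -v ~/tutorials/tutorial-2/html:/usr/share/nginx/html \
  nginx:latest
```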
host operating system's file path. So when I create a new container, I can just change the image tag and update it, for example to version latest. I will use the exact same command, mount the same data that is persistently stored on the host into the new container's file system, and publish the same ports, nothing more. Let's just start it, and as you can see, the container is up. So this is how you would update your application: you stop the container, delete it, and re-create it with a new image version. This is much easier than having to go inside the container, update the packages, and make sure the migration has been done successfully. No, you just use the latest image tag, deployed and maintained by the software developers of nginx: just check whether there is a new version and then create a new container with the exact same command you've used before. So this is how you can easily get started with containers. I know this was a lot, and I hope you will try it out at home: just install Docker Desktop and start playing around with it, create a few containers, try to deploy a couple of different applications, and try to familiarize yourself with publishing ports and persistent volumes. Try to store data persistently in a specific location on the host system and mount that into a container; this is how you usually get started with it, and we've just scratched the surface here. So again, this was the first video of a brand-new series. We will talk about all of these things: building our own container images, persistent volumes, managing projects with Docker Compose, and we might even talk a bit about Docker security and vulnerability scanning, not to forget Docker Swarm. So there are a lot of great ideas I have in mind; you just need to be a bit patient with me creating content, because it's usually a lot of work. Anyway, if you want to support that work, if you enjoy
these free tutorials that I'm creating here, don't forget to hit the like button and subscribe, and if you want, you can show your appreciation by supporting me on Patreon. Thank you so much to all the supporters on Patreon; it's really helping me a lot. Thanks for watching, I'm happy that you're here, and I will catch you in the next episode of this series. Take care, bye-bye!
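The walkthrough from the video can be sketched as shell commands. Note that the container name (`nginx-production-1`), the host directory (`~/nginx-data`), and the exact image tags are illustrative assumptions; the video does not spell them all out verbatim, so adjust them to your own setup.

```shell
# Create a host directory and an index.html to bind-mount into the container
# (path and container name are placeholders, not taken verbatim from the video)
mkdir -p ~/nginx-data
echo '<h1>Hello from Docker</h1>' > ~/nginx-data/index.html

# Run nginx detached (-d), publishing host ports 8080 and 8443
# to ports 80 and 443 inside the container
docker run -d \
  --name nginx-production-1 \
  -p 8080:80 \
  -p 8443:443 \
  -v ~/nginx-data:/usr/share/nginx/html \
  nginx:1.25

docker ps                   # container is up; shows the port mappings
curl http://localhost:8080  # serves the index.html from the bind mount

# Update workflow: stop and remove the old container ...
docker stop nginx-production-1
docker rm nginx-production-1

# ... then re-create it with a newer tag; same ports, same bind mount,
# so the persisted index.html is served again immediately
docker run -d \
  --name nginx-production-1 \
  -p 8080:80 \
  -p 8443:443 \
  -v ~/nginx-data:/usr/share/nginx/html \
  nginx:latest
```

Because the website data lives on the host rather than inside the container, deleting the container never touches it; the new container simply mounts the same directory again.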
Info
Channel: Christian Lempa
Views: 94,390
Id: Nm1tfmZDqo8
Length: 35min 55sec (2155 seconds)
Published: Tue Oct 24 2023