Learning Docker // Build Container Images

Video Statistics and Information

Captions
Hey everybody, this is Christian, and welcome to the second part of my Docker tutorial, where we are learning all the magic behind Docker and containers. Today we will explore how to build our own container images: add packages and scripts to them, install requirements, and do all sorts of other customizations. As you can imagine, this is a fundamental skill you will need as a software developer, but it is also important as a sysadmin or network admin, because you will often work with containers in IT, and it's important to know how to customize them and install and configure certain aspects of these images. So let's get started.

First of all, let us understand why building our own container images is so important to learn. In a home lab, we most of the time just download and run third-party container images to deploy common applications like web servers, databases, and other tools and programs. They mostly already come as pre-built and pre-configured container images directly from the developers, or other people have already done that for us and shared their images on Docker Hub. So the question is: why do we actually need to build our own? The most obvious reason is if you'd like to containerize your own applications and scripts. Maybe you've written a small Python or Node.js program, or anything else that you have coded, and you want to ship it as a container image to deploy it on a container orchestration platform like Docker or Kubernetes; then you obviously should know how to build that. But even if you're not into programming yourself, perhaps you'd like to use third-party application container images, but there are things you need to further customize in order to fix security vulnerabilities, add users and files, or install other packages. Basically, anything you'd like to customize in a container image needs to be packaged into a new container image that you build yourself. Now, to demonstrate some of the use cases of building container images, let
us run a generic Debian image and try to ping the DNS server of Google. As you can see, this is not working, because the ping command does not exist in this running container's file system; it hasn't been built into and shipped with the Debian base image. If we want to solve this, we can of course just run an apt update and apt install, and install the right packages containing the ping utilities inside the container's file system, just like you would do on a regular server or virtual machine. And if you now execute the ping command, you can see it is working, because we have installed the correct tools inside the running container's file system.

Now, if we stop this container, you can see that it still exists on my hard drive, and if I start it again, attach a shell to it, and try to ping the DNS server of Google again, you can see it is still working, because the container hasn't been deleted; it is persistently stored on my hard drive. However, the problem is that this is not the way you would operate and work with containers, because containers are ephemeral: they are often just deleted and recreated. If there is a new version of the Debian Docker image pushed to the registry and you want to update your applications, you would usually stop the container, delete the old instance, pull the latest version of the base image from the registry, and then start a new instance of the container with the updated image version. And if you then try to ping the DNS server of Google again, the command does not exist, because the base image of Debian, again, hasn't been shipped with it. So if you want to solve this, and there is a new version of the Debian base image, you always need to pull the latest version and build your own custom Docker image that installs the ping utilities during the build process. And that is exactly what we're doing in this tutorial. But before we do that, I want to spend a minute and
show you an absolutely incredible IT automation tool that I've recently discovered. Thanks, by the way, to Kestra for supporting this video. Kestra helps you automate data operations and build reliable workflows that integrate seamlessly into your existing IT stack. For example, I've built some workflows in this tool to automate the build process of my container images and deploy them on my server. Now, what makes Kestra very interesting in comparison to other automation tools like Ansible or Terraform is that Kestra comes with a really nice web UI that is both developer and non-developer friendly. They recently added a very cool feature that runs a web-based VS Code instance directly inside the Kestra tool, where you can easily open or write your automation pipelines in a declarative YAML-based language. But it also allows non-developers to have a visual representation of what is actually happening inside the flow, with the ability to easily change values or optimize queries without having to write any line of code, and that is great for collaboration in your organization between the different teams, like developers, operations, or business people. And because I know you guys love free and open-source tools: yes, it is fully open source, and you can easily deploy it in your own self-hosted infrastructure using Docker, Kubernetes, or whatever you like. The free Community Edition gives you all the necessary utilities to build, schedule, and run automation workflows in your home lab. It also has built-in documentation and plugins to integrate with many third-party tools, and if you'd like to get additional enterprise features such as a secrets backend, user access control, SSO, and so on, just reach out to the Kestra team and book a demo. You will find a link to this tool in the description of the video, so just try it out; it's really great.

Now let's go back to building our own container images. How do we actually do this? First of all, we need to create a file that is called a Dockerfile,
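To make the following walkthrough concrete, here is a reconstruction of the Dockerfile built in this first example, including the CMD line that gets added a bit later in the video. Treat it as a sketch rather than a verbatim copy; the image name and tag in the build command below are the ones mentioned in the video.

```dockerfile
# Dockerfile — first example: Debian base image plus the ping utilities
FROM debian:latest

# Executed at build time: refresh the apt package sources and install ping
RUN apt update && apt install -y iputils-ping

# Executed at container start time: print a simple message
CMD ["echo", "hello world"]
```

From the project directory, you would build it with `docker build -t my-first-container:0.1 .` and run it with `docker run -it my-first-container:0.1`.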
and inside this file we will add our instructions, or maybe copy specific files containing our own programs and scripts. Anything that we want to add on top of the base layer of the container image we need to define in a build context, and this is what goes into our own container image that we can then ship or deploy anywhere we like. Now let's create this file. First of all, let's create a new directory as our project folder and cd into it. I will open this in VS Code, because it's easier to write code in this program, and now we need to create a new file in our project directory. We simply call it Dockerfile; it always needs to have this name, and it always needs to start with a FROM statement. The FROM statement defines the base image that you will use to build your customizations on top of. For this demonstration, I will just use the Debian image again, because it's the easiest way to demonstrate it, and you can also use a tag; for example, I will use the default tag, the latest version that I've already pulled down from the registry. Below that, we can add our instructions. First of all, I just want to show you one simple instruction, the RUN instruction, which executes a specific command at the time when you're building this container. For example, we want to update the apt package sources and install the ping utilities into the Debian container image. When you have this Dockerfile, you need to go back into the terminal and execute the docker build command. This build command requires at least one argument, which contains the path of your Docker project, so where the Dockerfile is located. But I also want to add another parameter, the -t, or tag, parameter. This is optional, and it is an easier way to give your Docker image a name and a tag, to distinguish the different versions from each other. For example, you can give it the tag version 0.1; then the next
version would be 0.2, and so on. This is how you can track the different versions. If you don't specify any tag, it will always use the latest tag, as you should know. Then you need to define the path: if you add a dot at the end, it just uses the current directory where you are located in your terminal. Because I am in my project directory, where the Dockerfile exists, I can just use the dot and hit enter, and now Docker will create and build a new image. As you can see, it executes two commands in the build process. It already finished, so you can see how fast it is: it uses the base image debian:latest and then just executed the RUN command that we defined in the file, exporting this to a new image and giving it a specific hash, which is how we can identify it. Now, if we execute docker image ls, you can see that we have a new Docker image on our hard drive, which is called my-first-container. We have one tag, the latest tag, and this is the unique identifier of that specific image version. This is what we've just built, and if we now want to run it, we again can do so with a simple command. I also want to open an interactive terminal and run my first container image. Let's hit enter, and now we have a new container instance created from our own custom image. As you can see, this is just like the Debian base image, but because we have installed the ping utilities inside our Docker image, we can now execute the command, and this will now always work.

Before we do further experimentation and add more instructions to this Dockerfile, I need to explain the different types of Docker image layers, because each Docker image consists of different types of layers. The first layer is always the base image layer, and this is the foundation for any container image. It contains the underlying operating system, the runtime, the essential software packages, everything that is required for the container to run. In our example, this is created with
the FROM statement, and the base image layer in our example comes from the debian:latest Docker image. Each instruction that we add to the Dockerfile creates another layer on top of this base image layer. So the next type of layers are the instruction layers: each line in the Dockerfile represents a new instruction layer, and these are stacked. For example, the first line, the RUN apt update and apt install iputils-ping instruction, is one layer, and if we added any other new instruction, it would be another layer on top of this. Those layers can be cached to further improve the performance and build time of the Docker image; I will show you that in a second. And the last layer would be the writable container layer.

For example, let's modify our Dockerfile and add another instruction. The next instruction I want to show you is the CMD instruction, and this also executes a command, similar to the RUN instruction. But while the RUN instruction executes a command at the time when you're building the container, the CMD instruction will execute a command at the time when you start a new container instance. This is where you can add things like a ping command, for example, or start an application when you run this container. In my example, I just want to execute an echo command, and with the comma you can separate and add specific parameters to this executable command, for example a "hello world", to print a simple hello world text when you start up this container. If I now go back to the terminal and build the container again, you can see that this executed much faster than the first build, because the second command was cached, and this really reduces the build time of any new container. Now, when you add an instruction in between those cached layers, you invalidate the cache of all the layers below it. I can demonstrate this with a COPY instruction. The COPY instruction would simply
copy a file or a folder from the source directory into the container image's file system, so this is how you can add custom code or custom scripts to your container image. With a dot dot, it simply copies everything; I will just use this to demonstrate how it affects the cached instruction for installing the package. Now, when I execute the build command again, you can see that the first line was cached, but the second was newly added, and the third one consequently also needed to be executed again. This is also the reason why you should put instructions that often change in the build process at the end of the Dockerfile, to benefit the most from the cached layers above them.

Now let's create a second container image. I want to demonstrate how you would write a simple Python application and create a Docker image for your custom program. Let's create a new project directory called my-second-container, cd into it, and open it in VS Code. Again, we will need to create a Dockerfile and start with a FROM statement. Now, if you're writing an application like a Python script, or maybe a Node.js application, you might want to start by using a Debian image or an Ubuntu image and then add instructions to install the npm packages for running Node.js, or the pip packages. However, what is much smarter is to use a base image that already comes with an environment for Python or Node.js or whatever you want to write in. For example, if we go to Docker Hub and search for Python, you can see that there is a Python Docker image, and this already comes with a lightweight Linux environment, but one that has the Python runtimes and everything already installed, so that you don't need to do that yourself. It also has different types of tags, where you can see that an image is based on a Debian bookworm image, on a bullseye image, or maybe on an Alpine image, and I want to build a new Python application based on an Alpine
image. So let's go back to VS Code and start with the first instruction. This is a FROM statement, and we will use the Python Docker image and specify the version 3.10-alpine. We will use this image as our base image, and now, if we want to add code or scripts or basically anything into the Docker image, I would always start with a WORKDIR instruction. This sets the working directory for subsequent instructions in the Dockerfile, so anything that is executed will be executed in that directory, and this is where I want to copy my code. Now, for any Python applications or Python scripts, you might know that you will sometimes need a requirements.txt file. This is where you define your Python libraries; for example, if you want to do something with web requests, you might want to use the requests library, and I want to pin it to a specific version here. So I add all my requirements and Python packages into this requirements.txt file, and of course, if I want to build my container image with my Python application, the Docker image should already install these requirements into the container's file system. This is what we can define in the Dockerfile. First of all, I want to run an apk update; this is very similar to an apt update in a Debian-based Linux distribution. And now I want to copy, with a COPY instruction, my requirements.txt from my source directory, so this file here, into the container's file system under the same name. Now let's write a simple Python script, app.py, and this application should just simply print something on the terminal: "hello from python". I'm not writing a complex program here in this tutorial, but anyway, it's enough for a short demonstration. Now let's go back to the Dockerfile. Of course, we also need to copy the app.py, so let's add another instruction to copy app.py, and then I want to run a command at build time of this container, and this will run a pip install on the requirements.txt,
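Putting the pieces of this second project side by side may help before moving on. This is a sketch assembled from the steps described so far; the pinned requests version in requirements.txt is an illustrative assumption, since the exact version isn't spelled out here.

```dockerfile
# Dockerfile — second example: a Python app on a lightweight Alpine base
#
# Alongside it in the project directory:
#   requirements.txt (one line):  requests==2.31.0    <- illustrative pin
#   app.py           (one line):  print("hello from python")

FROM python:3.10-alpine

# All subsequent instructions run relative to this directory
WORKDIR /app

# Refresh Alpine's package index (apk is Alpine's counterpart to apt)
RUN apk update

# Copy the dependency list and the script into the image
COPY requirements.txt requirements.txt
COPY app.py app.py

# Executed at build time: install the Python packages
RUN pip install -r requirements.txt

# Executed at container start time: run the script
CMD ["python", "app.py"]
```

Built with `docker build -t my-second-container:latest .` and started with `docker run my-second-container`, which prints the hello message.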
so this is basically just to install all the Python packages. And now, with a CMD instruction, we define what should be executed when the container is started. Of course, I want to start my application not when I'm building this container image, but when the container is started, so let's add the instruction for Python to start the app.py, and this should then execute my Python script. Very simple, but these are most of the common instructions you would define when you're packaging a Python application into a container image.

Let's now go back to the terminal. You can see the files are all in the current directory, so let's build my second container. I will give it the tag latest, and let's build this. Now you can see, because I haven't pulled down the base image python:3.10-alpine, it is pulling the image down; it's then adding all the instructions as new layers to that container image, copying the files, installing the Python requirements, and at the end, exporting this to a new image. You should also see it here: my-second-container. So let's try to run this. This should execute a simple Python script that prints something on the terminal, and as you can see, it is working: "hello from python". Simple as that. Now, of course, you can add more complex Python code to this Docker image to run specific applications; maybe I will do some tutorials on writing a simple API or something like that, who knows. But what I want to explain, based on this container, is how these different layers are stacked on top of each other, and that is what you can very easily inspect in Docker Desktop. When you go to Images, you can see my-first-container, and when we click on it, you can see in the first lines the image hierarchy. There are two lines here for the FROM statement, so this defines the base image, and you can see the different layers that are added to this Docker image coming from this base image, so basically those two lines, and
everything else that I've customized, those three instructions, comes from my own Dockerfile that I've created. Now, when I go to the second container, the Python application, you can see there are many, many more layers stacked on top of each other, and that is because the python-alpine image that I've used was itself created from a different base image. That adds those two layers; then somebody developed the python-alpine image on top of the Alpine image, adding all those instructions to the Dockerfile; and now I am coming with my second container, adding my own instructions on top of it. As you can see: setting the work directory, which is our first custom instruction in the Dockerfile, then the RUN apk update, then copy requirements, copy app, run pip install, and then the final instruction, the CMD instruction, which adds the last layer to execute my Python script. So this is how, when you're building your custom Docker images, those different types of layers stack on top of each other.

Now, of course, there are many, many other instructions you can add to a Dockerfile. For example, we discussed the FROM instruction, which defines a base image you can build your own instructions and layers on top of; the WORKDIR instruction, setting the working directory so that you don't need to define the path every time you add a new instruction; the COPY instruction, which copies a file from the source file system into the container's file system; and the RUN instruction, which executes a command at build time. There's also another instruction that I haven't explained before, the ENTRYPOINT instruction. This sets the primary command to be executed when the container is run from an image; for example, instead of defining the CMD instruction to execute our app.py, we could also have used the ENTRYPOINT instruction. You can also set environment variables with a key-value pair, you can add arguments, and you can define a user ID this container should be started with. So for
example, if you don't want to start it with a root process, you can define a user and a group ID. We will probably also discuss this in further episodes, when we talk about container security. You can find all these Dockerfile instructions in the Dockerfile reference in the official Docker documentation, so just go there; that is where you can look up all these things.

So that's it about building custom Docker images. I hope this was helpful for understanding the different types of layers that stack upon each other when you're building custom images, and for understanding some of the basic Dockerfile instructions you can add. In future episodes, we will spend much more time on building more Docker images and shipping them to a Docker registry, because we haven't really discussed how to share our custom Docker images and how to deploy them on remote servers. That will be the topic for the next video, because I thought it would need a separate explanation. If you enjoyed this episode, if you think it was valuable and helpful to you, then please give this video a like and subscribe, and check out my Patreon program if you want to support all these free tutorials and free resources that I'm creating for you. As always, thanks everybody for watching, and I will catch you in the next episode of the Docker tutorial. Take care, bye-bye!
Info
Channel: Christian Lempa
Views: 29,885
Id: JDw3ZdQcv2g
Length: 23min 2sec (1382 seconds)
Published: Tue Dec 19 2023