Docker Containers and Kubernetes Fundamentals – Full Hands-On Course

Guy Barrette teaches this Docker Containers and Kubernetes Fundamentals course for beginners. Guy is a developer and trainer with more than 25 years of experience. He is a Microsoft MVP, a frequent conference speaker, and was the leader of the Montreal .NET user group for more than 23 years. All this is to say, he is the perfect person to teach you about Docker and Kubernetes.

Welcome to this Docker Containers and Kubernetes Fundamentals training course. My name is Guy Barrette, and I'll be your host in your learning journey into the amazing world of containers. I'm a full-time trainer with a developer background, and I'm based in Montreal, Canada; that's where my strange accent comes from. I'm certified on Kubernetes, and also on Terraform, Azure, AWS, and Google Cloud. I'm very honored to be a Microsoft MVP in the Azure expertise and a DigitalOcean Navigator. You can reach me via the contact page and on Twitter.

What should you expect from this course? You will not become an expert just by taking this course. This is an entry-level course that will provide you with a strong containers and Kubernetes foundation, and you'll gain enough knowledge to make sound decisions at work or in your projects. Throughout this course you will find lots of hands-on activities to practice what you've learned. You will use Docker and Kubernetes locally on your PC or Mac, so there's no requirement to have an account with a cloud provider.

Are there any prerequisites for this course? Not really. If you're a developer, a DevOps specialist, an IT pro, or even a technical manager, that's totally fine; no previous Docker or Kubernetes knowledge is required. We will cover a lot of ground. You will learn about containers, Docker, and the Docker registry, and you'll learn about the Kubernetes objects, like pods, workloads, and services. That's a lot of material, and the goal here is to get you from zero knowledge to a Kubernetes ninja; well, at least to provide you with enough knowledge to aspire to being a Kubernetes ninja. I want to say a big thank
you for learning Docker and Kubernetes using this course. If you like the course, you can help me by making a small donation; this is the link to my Buy Me a Coffee page. You can, of course, buy one of my other courses, where you'll learn to run containers on different cloud providers' services and use a managed Kubernetes cluster in the cloud. And finally, I wish you all the best in your learning journey.

Let's see how to set up your laptop or PC for this course. You need a laptop, PC, or Mac with either Windows 10, macOS, or Linux. If you have a Mac with Apple silicon, most tools should run perfectly. I will use Visual Studio Code with the Docker extension to help build, create, and run containers; VS Code is a free IDE that runs on Windows, Mac, and Linux. On Windows and Mac, you'll need Docker Desktop with Kubernetes enabled; on Linux, refer to the documentation on how to install Docker and Kubernetes on your distro. You'll need a Docker Hub account and a few easy-to-install tools; refer to the setup instructions located below this video. The lab files are located in a Git repo on GitHub: simply open this URL in a browser and click on the Code button. If you have Git installed on your machine, you can type git clone with the link displayed here; and if you don't have Git, simply click on the Download ZIP button to download the code as a ZIP file.

Let's talk about the microservices concepts. If we head to Wikipedia and take a look at the definition that we find over there, it says that it's a variant of the service-oriented architecture, or SOA, structural style, and that it arranges an application as a collection of loosely coupled services. So instead of a large monolithic system, we have multiple smaller pieces. In a microservices architecture, services are fine-grained, meaning that each of them has its own responsibilities, and the protocols used are lightweight, like an API exposed over HTTP or gRPC, for example. If we look at the monolithic architecture, these systems
were usually built as one single unit: an IDE would group multiple projects, and we would compile the whole thing as one single unit. They were also deployed as a single unit, so we would need to copy everything, all the files, onto a server; and if we had to scale the system, we had to spin up a new VM and deploy the whole system on that VM, and the same for a third and a fourth server. An example of such a monolithic architecture is a three-tier application: even though the system was clearly separated into layers, it was all tightly coupled. From the web project, we had to make a reference to the business layer project, and the whole system would run in the same address space.

With microservices, we break our big system into smaller parts, each with its own responsibility. So let's say we have a class that deals with identity in our business layer: we can extract that code and place it in its own microservice. We can then scale each of these smaller pieces independently from each other. There's no strong binding, since we expose functionality through an API. They can be written by smaller teams, each team can use its own programming language, like Go, PHP, or C#, and domain-specific data can be stored in separate databases.

So the way we would deploy a monolithic system is by deploying everything on a server, all the DLLs and files needed to run the system; and if we had to scale, we deployed everything on more servers. Now let's compare that to microservices. Well, microservices are deployed independently, and each can scale independently. Also, need to scale back one service? No problem.

So if you have an existing monolithic system, how can you transform it into a microservices architecture? Well, you need to break it into small units, like the code that dealt with identity in our business layer. Martin Fowler, author of the Patterns of Enterprise Application Architecture book, documented the way to achieve such a transformation using the strangler pattern. Let's say our identity code is here in the legacy system; we
can place a facade to route the calls to it, migrate the code, and have the facade route the calls to the new microservice. As we go, we will end up with less code in our legacy system, and when the migration is done, we can get rid of the facade. This pattern is very useful, and you can learn more about it using the link in this slide. And this concludes this lecture on the microservices concepts.

Let's talk about microservices anti-patterns, because it's not all rosy. I know it's kind of strange to talk about what can go wrong right away, but I think it's very important. First of all, it's not some kind of magic pixie dust that you can sprinkle on top of an existing system and, boom, you get a beautiful microservices-architected system. It takes effort and maturity to achieve this. From one monolithic system, you'll end up with a bunch of smaller pieces, and that can add extra complexity. A change to a microservice can have a domino effect and take your system down. And what about securing all of these microservices? It's also essential to use or introduce new processes in the organization, like DevOps, CI/CD, and testing. But be careful, and don't try to implement everything at the same time; that's a recipe for disaster. Take it step by step, and make sure you have metrics in place to validate each of these steps. And this concludes this lecture on microservices anti-patterns.

Let's talk about the microservices benefits and drawbacks. Since each microservice runs in its own address space, there are fewer chances that if one of them goes down, it takes the whole system down with it. A microservice runs on open source technologies, so there's less vendor lock-in. Since they are smaller, in most cases they are easier to understand, and that makes them faster to deploy and also easier to scale. Like we saw in the anti-patterns section, there are some drawbacks: complexity is added in one area to resolve other complexity issues, so make sure your team is well trained.
Your team should also have made some proofs of concept, and make sure to start small, adding one piece at a time. Testing might appear simpler, since there is less functionality in a microservice to test, but make sure to test the whole system. Deployment may appear simpler, but one update can impact many microservices and have a terrible domino effect. Be ready to manage multiple databases. Calls between microservices will go through APIs, and this will add a bit of latency to all calls, so make sure you test for that. Transient errors will appear: you'll make a call and it will fail, but try again 50 milliseconds later and it will work; so make sure to implement some retry strategies in your code, or by using a service mesh. Instead of one big point of failure, you'll end up with multiple ones: can your system survive if one microservice goes down? And what about security? Are you okay with all of these microservices being able to see and talk to each other? So yes, complexity is introduced for solving complexity issues. And this concludes this lecture on the microservices benefits and drawbacks.

Let's now understand what is cloud native. You may have heard the term cloud native before, but what is it exactly? It's a way to architect and build complex systems, taking advantage of modern development practices and the use of cloud infrastructure. If we head to the Cloud Native Computing Foundation website and look at the definition, we see that it's quite a long one, so let's break it into smaller parts. Cloud native uses containers, service meshes, microservices, immutable infrastructure, and declarative APIs. We'll cover containers, service meshes, microservices, and the concept of immutability in this course, but not how to build APIs. Immutable infrastructure means that we usually never update something; we replace it with a newer version. Loosely coupled systems mean that the functionalities are exposed through APIs. Observable means the use of metrics. Creation and updates are automated.
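As an aside, here is what the retry strategy mentioned in the benefits-and-drawbacks lecture could look like. This is a minimal sketch of my own, not code from the course; real services would usually add jitter, cap the maximum delay, and often delegate this to a service mesh instead.

```shell
#!/bin/sh
# retry: run a command up to a given number of times, doubling the
# wait between attempts (exponential backoff).
retry() {
  attempts=$1
  shift
  delay=1
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0                       # the call finally succeeded
    fi
    echo "attempt $i failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))             # back off before the next try
    i=$((i + 1))
  done
  return 1                           # give up after all attempts
}
```

For example, `retry 5 curl -fsS http://identity-service:8080/health` (the host name is hypothetical) would give a flaky endpoint five chances before giving up.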
Instead of making changes once every six months, we deploy high-impact changes on a frequent basis. And finally, we use a series of open source projects to run our system. When the CNCF says to use open source projects, they are not kidding: this CNCF landscape graph shows a ton of open source projects that you can use. But don't worry, you don't have to use them all; the challenge, really, is to identify which ones to use in the context of what you want to achieve. And this concludes this lecture on cloud native; head to the CNCF website for more info.

Let's go deeper into the cloud native concepts. Cloud native is about speed and agility: the user wants new features right away, without any downtime, and the business wants faster releases of features to stay competitive. A cloud native application architecture starts with clean code, using domain-driven design techniques, microservices, and Kubernetes. This course is all about microservices and Kubernetes; feel free to explore the concepts of clean code and DDD on your own. With cloud native, we need to change mentalities. Infrastructure becomes immutable and disposable: it is provisioned in minutes and destroyed as fast, and it is never updated, but replaced with a newer version. Traditionally, we would care about our virtual machines: we would patch the OS and update the apps. With containers, we create newer versions with the software updates, destroy the previously running ones, and replace them with the newer ones. So the containers that you'll run will be more like cattle than pets. Of course, this cloud native thing is a lot easier when starting a new project, a blank page or greenfield; however, it's still possible with legacy projects.

I really like the Cloud Native Computing Foundation trail map, because it breaks the journey to cloud native into smaller, measurable objectives. You can set your own performance indicators to measure each step, to ensure a smooth journey. So let's take a look at the first steps. Your team must first learn how to containerize your application; the
developers and the IT pros must know how to deploy and monitor containers. You need to automate deployment through the use of continuous integration and continuous delivery techniques and tools. You need to use an orchestrator like Kubernetes, and maybe deploy your application using Helm charts. Then you need to add observability, so you can understand what's happening in your Kubernetes clusters and be reactive. Use tools like service meshes to provide more functionality inside your cluster, and implement security through policies. And wow, these were just the first six steps. Now, understand that you don't have to implement all of this, and especially not at the same time. I really like this trail map because it breaks the journey into smaller steps that management can understand and measure. And this concludes this lecture on the cloud native concepts.

Let's take a look at the CNCF website. The website is located at cncf.io, and from there you can take a look at the various projects maintained by the CNCF, information on how to get certified, and community information, like the conferences that the CNCF organizes each year. So let's go back to the Projects menu, and you'll notice that projects are categorized in three categories: sandbox, incubating, and graduated. Let's click here on the second menu and scroll down to the bottom, and here you get the information about the meaning of these three categories; basically, that's their maturity level. Sandbox projects are mostly newer projects, while graduated projects are projects that conservative enterprises are more likely to use. So let's take a look at the graduated ones: we find here Kubernetes, Helm, Jaeger. Let's click on Kubernetes, and basically, that will show you the project website. Let's take a look at the incubating ones: here we find Linkerd and gRPC. Let's click on gRPC; you can get more information about gRPC. All right, let's go back to this menu. Remember the trail map that I mentioned in the previous
lecture? Well, here it is: the cloud native trail map. Here's the nice diagram; you can get more information, and you can send it to friends and colleagues. And here's a link to the landscape diagram. Here it is; it's super huge. Let's click here on Kubernetes, and here we get very interesting information: you get the repository where the project is stored, you get the number of stars and the activity (the number of commits), you get the website address, and also the Twitter handle; you should follow the Twitter feed of the projects that you're using. So let's close this one and click here on Helm: again, the website, the repository, the number of stars, the activity, and the Twitter handle. And this concludes this look at the CNCF website.

Let's now talk about the containers concepts. Containers, containers, containers; they are everywhere. But what are they, exactly? A container is a unit of deployment. It contains everything needed for the code to run: the compiled code, the runtime, system libraries, and system tools. You take a container, push it onto a server, and it should run. Of course, it can have some external dependencies, like a database or a cache, but the code deployed in it should run as is.

So why use containers? Because it is faster to deploy something small than something big, like a complete monolithic system. They use fewer resources, they are smaller, and since they are smaller, you can fit more on the same server. When using CI/CD techniques, they are a lot faster to deploy. You can run them anywhere, and they are isolated from each other, meaning that if one fails, it will not take the whole system down with it.

So what exactly is virtualized? Let's compare virtual machines with containers. A VM runs on some kind of hardware where an OS is installed. The OS hypervisor will let you create a virtual machine where you will install an OS; so basically, the VM virtualizes the hardware. And what's happening when a VM starts? Well, you see the BIOS coming up, and then the OS boots up.
And what about the size of that VM? Let's say we have a Windows Server VM: it can take 12 gigabytes of RAM and 500 gigabytes of hard drive space. And how long does it take to boot? Well, depending on multiple factors, something like 5 to 10 minutes. Now let's compare that to containers. We still have the hardware and the OS, of course; there's a container runtime installed in the OS, and container images are run in memory. Now, compared to a VM, a container does not have to boot, because it uses the host OS kernel. This means that containers start in seconds. They also use a lot less memory and hard drive space, since there's no OS: a small container can take a hundred megabytes of hard drive space and run in 64 or 100 megabytes of RAM.

So, VMs and containers: virtual machines have a larger footprint, they are slower to boot, and they are ideal for long-running tasks. Containers are lightweight, they're quick to start since they don't have to boot, they're portable, and they're ideal for short-lived tasks, because you can spin one up super fast. So are containers replacing virtual machines? Are virtual machines obsolete? Absolutely not. Containers are just another tool in your toolbox, and you need to find the right use cases for them, and also for VMs. If you're old enough, you must remember what a telephone booth is; if not, well, before cell phones, we used to make phone calls in these booths by dropping in a dime or a quarter. Anyway, using the telephone booth analogy, you can pack more containers on the same server than what's possible with virtual machines.

Containers are made of layers: you start with the base OS, add customizations, and add your applications. Let's take a deeper look at this screenshot. The docker pull command retrieves and downloads a container image. As you can see, each layer is downloaded individually. Notice that each has a unique ID, and that for the first ones, Docker says that they already exist. Why so? Docker uses a local cache to store container images.
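You can see this cache at work yourself. Assuming Docker Desktop is running, pulling two tags that currently point at the same layers shows "Already exists" on the second pull; the tags below are just examples, and what they share depends on what they point to at the time you try this.

```shell
docker pull nginx:latest     # first pull: each layer is downloaded separately
docker pull nginx:mainline   # if it shares layers with latest, Docker reports
                             # "Already exists" instead of downloading again
docker images                # list the cached images, with their IDs and sizes
```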
If a layer already exists in the cache, it will not be downloaded again. The benefit is that if you pull version 2 of an image, Docker will only download the layers not present in its cache. One of the goals when creating container images is to create them with the smallest number of layers possible; later on, we'll see techniques on how to achieve that. Now, can you write on these layers? Well, no, except for the top one, because it is read-write; the lower ones are read-only.

Another concept is the container registry. It's a centralized repository where you deploy the container images you create; think GitHub, but for containers. Docker has one called Docker Hub that provides public and private repositories, and all major cloud providers have container registry services. The last container concept is the orchestrator. An orchestrator allows us to manage, scale, and monitor the containers that we run on our servers. You can install your own, or use a managed cluster offered by one of the cloud providers, like AWS, Azure, or Google Cloud. We will come back to the orchestrator concepts after we have a better knowledge of containers. And this concludes this lecture on the containers concepts.

Let's now understand what is Docker. So, what is Docker? That may seem like a simple question, but there's more to it: there's Docker the company, and Docker the platform. Docker maintains the Moby project, an open source container runtime that follows the specs from the Open Container Initiative. Docker sold its Docker Enterprise division in late 2019 to a company called Mirantis, so if you want to buy enterprise support or get certified with Docker, you have to go through Mirantis. Docker provides a container runtime that runs on Mac, Windows, and Linux, a command-line tool to create and manage containers, and a Dockerfile format for building containers. Interestingly, Windows lets you create both Windows and Linux containers.

If for some reason Docker doesn't seem to work on your machine, try restarting it by using the restart menu from the
system tray icon on Windows, or by clicking on the troubleshoot icon in Docker Desktop on Mac and Windows and clicking on Restart. Docker Desktop is very stable, but I had some issues when my laptop was coming back from hibernation; that was a long time ago, though, and I haven't had issues for a while. And this concludes this lecture on Docker.

The easiest way to run Docker on your machine is by running Docker Desktop. It's a free download available on docker.com: you click here on Get Started, and you download the version for your OS, so Windows, Mac, or Linux. Now, if you're running Windows, check the version of Windows that you're running. If you're running Windows 10 version 2004 or a later version, you can run what's called Windows Subsystem for Linux version 2, or WSL 2. Basically, it allows you to run a Linux distribution right inside your Windows installation. Docker Desktop can run its containers by installing a virtual machine inside Hyper-V; or, if you have WSL 2 installed, it will install that virtual machine inside the Linux distribution, and everything will be a lot faster, so that's the preferred way. Just to prove a point, here I'm going to launch my Hyper-V manager, and as you can see, I'm not running any virtual machine; so my Docker Desktop installation uses WSL 2. You'll find a link to the installation guide in the module's notes.

OK, so let's take a look at Docker Desktop. I'm running Windows, and as you can see, I can see my Docker Desktop system tray icon here; for Mac users, it'll be at the top of the screen. I can right-click on it, and let's select Dashboard. All right, here I can see a list of containers that are currently running; I can stop them, restart them, delete them. I can see a list of images that are installed on my machine here; we'll come back to that later on. There's a gear icon here, that's the Settings icon, and in the General section, you can see that WSL 2 is enabled. That's why Docker Desktop doesn't use a Hyper-V virtual machine here.
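As a quick aside, not from the course: on Windows, you can check the WSL side of this setup from any terminal with the standard WSL command below. The distribution names shown are what Docker Desktop typically creates, and may differ on your machine.

```shell
wsl --list --verbose   # lists installed distros and their WSL version (1 or 2)
# Docker Desktop usually adds entries such as docker-desktop
```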
It uses WSL 2 to run the VM instead. And there's the Kubernetes menu here: if I select it, I can see that Kubernetes is enabled. When you check that, Docker Desktop will download additional containers to run Kubernetes right inside Docker Desktop; so, very useful. There's a bug icon here, which is the Troubleshoot icon: if at some point you're issuing Docker commands and they don't work, or something's wrong when running Docker commands, you can click here on the Restart button. You can see my name here; it means that I'm currently logged in. And if I right-click on the system tray icon, you can see that I'm currently logged in and I have the option to sign out. So what username and password did I use to log in? When you downloaded Docker Desktop, you could have created a Docker account, or Docker Hub account; that's the same username and password. So if you go to hub.docker.com and you log in, well, that's the same account that is used here in Docker Desktop.

Let's take a look at your first Docker CLI commands. Throughout this course, I will introduce you to various commands. I will list them in what I call a cheat sheet list, like what you find on this slide: I will briefly explain what the commands are for, and that will be followed by a concrete, hands-on demonstration. When you installed Docker Desktop on your Mac or PC, it also installed the Docker CLI tool. Our first command is docker info; this will display some information about the Docker installation on your machine. docker version will display its version, and docker login will log you into a Docker registry; by default, this login command will log you into Docker Hub, the registry from Docker. And this concludes this lecture on the Docker CLI.

All right, we need to open a new terminal window in Visual Studio Code, so let's select Terminal, New Terminal, or you can use the shortcut Ctrl+Shift+backtick. This will open a terminal window, and let's take a
look at the commands that we will run. They're really basic; it's just for testing that our Docker installation is working correctly. So let's type docker info. This will give me some information about my current installation and what's happening: I have 47 containers, 3 stopped, 44 running, 25 images, and so on and so on; information that is quite useful for debugging purposes. I can see that the virtual machine running Docker Desktop is running with two CPUs and has two gigabytes of memory allocated. All right, sounds good. Let's now type docker version, and this will give me some information about the version numbers of the different parts of Docker Desktop; again, useful for debugging purposes. These are not commands that you will run on a day-to-day basis, but they're quite useful for troubleshooting. The last command is docker login. I'm just going to type docker login without any username or password, to see what's happening; and, well, the command says that I logged in successfully. Why so? Well, if I right-click here on my Docker Desktop icon, I can see that I'm already logged in; that's why I didn't have to type any username or password.

Let's now see how to run containers. The docker pull command lets you download an image from a registry. docker run will execute an image in memory as a container; if the image is not present in the Docker local cache, it will be downloaded automatically. Using run with the -d flag will run the container in the background, giving you your command prompt or terminal back. The start command will run a container that is in the stopped state. docker ps will list all the containers currently running; add the -a flag to also list all the stopped ones. docker stop will stop a running container, but the container will still be in memory; we will see how to remove them from memory in a few minutes. docker kill will, well, kill a container that might be stuck in memory; you usually don't use this command, but it's useful to
know. docker image inspect will give you some information about an image; very useful for debugging purposes.

You may notice that we have two parameters here: one is called image name, and the second one, container name. So what's the difference? The image name is the name of the image as you find it in the container registry, and the container name is the name of the running container. You run an image using its name, and then interact with it using the running instance's name. The run command has an optional flag called --name that lets you specify a name; if you don't specify one, Docker will auto-generate one for you. You can also set limits on the memory and the CPU that the container can use when using the run command.

So how do you run a container? Using the docker run command, you specify the image name as found in the container registry, you specify a name for the running instance, and with the publish flag, you map a port from your local OS to the port that the container is listening on. You can list the running containers using docker ps. Notice how we stop the container by using the running name, not the image name; then we remove it from memory using the remove, or rm, command.

Containers are not black boxes: you can attach a shell to your terminal and run commands that will execute right inside the running container, by using the docker container exec command with the -it switch and the name of the program you want to run. For a Windows container, well, you can run PowerShell. You can also attach to a running container; here's a screenshot showing the docker run command, and notice the terminal prompt changing when attached to a container.

So how do we clean up things? The remove command lets you remove a container from memory, but first it must be in the stopped state for the command to work. Here's the remove command getting a list of the stopped containers and removing them all. The images that you pull will be cached locally; you can get a list of these images using the
docker images command. Use the remove image, or rmi, command to delete an image from your machine. After a while, you may end up with a bunch of images; to do some spring cleaning, I use the system prune command. This will delete all the images currently not in use, so be careful using this command. And this concludes this lecture on the Docker CLI.

Let's now run our first container. We'll run an NGINX web server; something pretty basic that'll be perfect for this lab. All right, let's open a terminal: Terminal, New Terminal, or Ctrl+Shift+backtick. Perfect. We'll run this command, so let's take a look at the command first. I'm going to run a docker run, and let's go to the end of the command: this is the image that we'll run, an nginx image. We'll give it a name, so the name will be webserver; that will be the name of the running instance. We'll map the local host's port 8080 to the port that the container is listening on, so port 80; and -d for detached, so we can get our command prompt or terminal prompt back. So let's run this. Ha, something interesting is happening: "unable to find image nginx:latest", and you see that Docker has pulled all the different layers locally. Now, if I issue a docker ps, it will list the containers that are currently running. I have three containers running, including the Kubernetes dashboard and the metrics server; don't look at these, let's focus on this one. This is the container that we just launched, the nginx image: it started 30 seconds ago, we can see that port 8080 on localhost is mapped to port 80, and the name is webserver. So let's launch a web browser and type localhost:8080. The container is actually running; fantastic! All right, we can get a list of the images installed on the machine by using docker images. I have a bunch, and I may have a lot more than you, but let's focus on the last one here: nginx, the tag is latest, the image ID, and this is the size. All right, fantastic. Let's try to connect
to it. We'll issue a docker container exec, give it the name of the running instance, and the program we want to run. Look at the prompt now: root at some ID, and that ID is the actual container ID; so I'm logged in as root on that container. Now I can issue some commands: let's do an ls, and let's type ls bin to see what's in there; different commands. So I'm connected to that running container; that's pretty cool. I can issue commands, look at the logs if any, and do some troubleshooting; this is super useful for debugging purposes, and we'll use that a lot in the various steps that we'll do together. All right, let's get out of there by typing exit, and let me just clear my screen here.

So our container is running; how do we stop it? We use the docker stop command, but look at the parameter that we use: the name of the running instance, not the name of the image; that's pretty important here. docker stop webserver. All right, but the container is still in memory: if I do a docker ps, it's not listed anymore as a running container, but if I type docker ps -a, ha, for "all", I can see that my container is still in memory. So I need to remove it from memory; we'll use docker rm, for remove, and the name of the running instance, and now the container is no longer in memory. Awesome. Now, the image that was used to create the container is still on my machine: if I type docker images, you see it's still here, and it takes 133 megabytes of disk space. If I want to get rid of that, I use the rmi, remove image, command and the name, this time, of the image, not the name of the running instance, because none are running right now; and you see, all the layers have been deleted.
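For reference, the whole lifecycle we just walked through fits in a few commands. This assumes Docker Desktop is running; webserver is just the instance name we chose.

```shell
docker run --name webserver -d -p 8080:80 nginx   # pull if needed, run detached
docker ps                                  # the running container and its ports
docker container exec -it webserver bash   # open a shell inside the container
docker stop webserver                      # stop it (still in memory)
docker ps -a                               # -a also lists stopped containers
docker rm webserver                        # remove the stopped container
docker images                              # the nginx image is still cached
docker rmi nginx                           # delete the image and its layers
```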
Let's now see how we can build containers. docker build lets you create an image using a Dockerfile. If you run the command in the same folder where the Dockerfile is located, simply use a dot as the context; if the file is located in a different folder, specify the location using the -f flag. The tag command lets you assign a name to an image. This tagging has two parts, a name and a tag, and the tag is usually used to specify the version number. So what is a Dockerfile? Well, it's a text file listing the steps to build an image. Here's the simplest Dockerfile I can imagine — two lines. The FROM command specifies the base image: when building new images you always start from something already existing, in this case an image with the nginx web server, using the Alpine version. Then the COPY command copies everything from the current folder to a folder inside the container. Using the build command we create a new image, specifying the Dockerfile — remember to use the dot when the file is located in the same folder. Here's another one, this time a little more complex; it's used to create an image running a Node.js app. Let's take a look at it. The FROM command specifies the base image. Using the RUN command, we run the package manager inside the container to install Node.js. Next we copy all the local files into a folder named /src inside the container, and we use the RUN command again to do an npm install. We then add some metadata — in this case we tell the container to listen on port 8080 — and finally we tell the container what to run when starting. So as you can see, this Dockerfile contains the steps needed to run our Node.js app. We saw what tagging was a moment ago; let's explore it again. Using the docker tag command we name an image with a name and, optionally, a tag. If you don't specify a repository name, it defaults to Docker Hub. Later on we'll see how to push images to different repositories, and we'll have to specify the repository name when tagging our images. And this concludes this lecture on how to create Docker images.
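As a sketch, the Node.js Dockerfile described above might look like this — the exact base image and the entry file name, `server.js`, are assumptions, so check your own project:

```Dockerfile
FROM alpine                       # base image: always start from something existing
RUN apk add --update nodejs npm   # run the package manager to install Node.js
COPY . /src                       # copy the local files into /src inside the container
WORKDIR /src
RUN npm install                   # install the app's dependencies
EXPOSE 8080                       # metadata: the container listens on port 8080
CMD ["node", "server.js"]         # what to run when the container starts
```

You would build it from the folder containing the file with `docker build -t myapp:1.0 .` — the dot being the build context.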
Let's see how Visual Studio Code can help us build and run containers. So what is Visual Studio Code, and why talk about this tool in this course? Well, it's a text and code editor; it's free and open source; it runs on Windows, Mac, and Linux; and you can download it for free using this link. You will work a lot with text files — creating Dockerfiles and, later on, Docker Compose and YAML files — and a tool like VS Code will help you, because you can install plugins that make your life easier. Using a different text editor? No problem. In VS Code you install plugins by clicking on the Extensions icon in the left menu, then searching for Docker and installing the extension from Microsoft. The extension lets you add Docker files to your projects using the command palette: open it from the View menu or type Ctrl+Shift+P, type "Docker: Add" and select "Add Docker Files to Workspace". The extension will ask you a few questions and will create the Docker files for you. VS Code has a built-in terminal where you can type commands, or you can run commands using the command palette. Here's another example, running a container from the command palette — there's no magic there, the extension simply issues a command in the terminal, but sometimes it's a great way to learn. Creating Docker files is okay, but what I like most is the UI provided by the extension, helping me manage my containers. If you click on the Docker icon you can see the images installed on your computer, and you can even see the containers currently running; right-click on an image to manage it, and the same goes for running containers. Very, very useful. And this concludes this lecture on VS Code. Let's now use Visual Studio Code to containerize a Node.js Express application. I already installed the Docker extension in Visual Studio Code, but let's take a look at it: if I click here on Extensions and search for Docker, here it is — the one from Microsoft, with almost six million installs at the time of recording. All right, let's go back to our files.
I have my Node.js application here. If I click on package.json, the name of my application is my-express-app — that'll be important in a few seconds. All right, let's first add the Docker file to our project. We'll use the tooling provided by the extension to do that: go to the View menu, Command Palette — or use the shortcut Ctrl+Shift+P — and type "Docker: Add". There are two options here, Docker files or Docker Compose files; we haven't looked at Docker Compose files yet, so let's select the first one, Docker files. The extension is asking us about the application platform — it's a Node.js application, but it can also generate Docker files for .NET, Python, Java, C++, Go, and Ruby apps — so let's select Node.js. Next it's asking us where the package.json file is; it sits at the root of my application, so I'll select this one. The port the application is listening on: 3000, perfect. And do I need the optional Docker Compose files? No, not at this time. The extension quickly added the Dockerfile, a .dockerignore file, and a folder for VS Code. Let's take a look at the .dockerignore: basically it's a list of files not to deploy into our container. And the Dockerfile that was created for me uses a Node base image, copies everything into the container, and exposes port 3000. Perfect. Let's now build this image. We'll go back to the command palette — View, Command Palette, or Ctrl+Shift+P — and this time search for "Docker build". Here it is: Docker Images: Build Image. I just issue the command, and look at what's happening: the extension is not a black box issuing some crazy or strange commands in the background, it's just issuing a docker build command. And remember, a few seconds ago I mentioned that the name of the application in the package.json file is important — that's because the extension uses it
to generate the name for my image. It's just a docker build command, nothing fancy, but using this extension is a great way to learn about commands that you may want to type yourself later from the terminal prompt. "Terminal will be reused by tasks, press any key to close it" — let's press any key. All right, the image has been built; let's run it now. View, Command Palette, "Docker run" — we'll use the first one. There's a list of my images, and here is my-express-app. Perfect: run, select that image, tag latest. And look at what's happening: a docker run mapping port 3000 to the port the container is listening on. Nothing fancy here — it's a command you could type yourself, but the extension does it for you. If I start my browser and go to localhost, port 3000, there it is. That worked, perfect. Let's close this, and let's now use the UI provided by the extension. In the left menu, click on the Docker icon; from there you have a list of the running containers, the images, and the different registries you've connected to. All right, here's our Express app, and it's running — but the instance name is something like magical_dirac. Why? Well, when we issued the docker run command from the command palette, the extension didn't provide a name, so one was generated automatically by Docker. From there I can view the logs, attach a shell, inspect it, or open it in a browser — if I select that, there it is, my app listening on port 3000. I can stop it, restart it, remove it — so let's remove it here. "Are you sure you want to remove the container?" Yes. It's not in memory anymore, it's not running anymore, but if I look at the list of images, here it is: my-express-app. I can right-click on it and I have a few options: I can run it, inspect it, pull, push, tag, and even delete it. So let's run it, and notice the generated name again.
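There's no magic in the extension — under the hood it issues plain Docker commands, roughly these (the image name comes from package.json; `my-express-app` is illustrative):

```shell
docker build -t my-express-app:latest .        # what "Docker Images: Build Image" runs
docker run -p 3000:3000 my-express-app:latest  # what "Docker: Run" issues
```

Because no --name flag is passed, Docker assigns a random generated name to the running instance.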
My-express-app is running under a generated name like gifted_elbakyan — because if you inspect the docker run command issued by the extension, there's no name provided, so Docker generated one for us. All right, let's close this; we can stop it or remove it from memory, and if we no longer need the image, we can just remove it here. "Are you sure you want to remove the image" — blah blah blah, okay, let's remove it, and it's gone. What's happening in the background is that the extension is simply issuing a docker rmi — remove image — to remove the image from disk. Nothing really fancy: the extension is not a black box issuing strange commands, it's just regular Docker commands, but I really like the UI it provides. So instead of typing — okay, let's try this: let's clear this and type docker images. Here's a list of all my images, and here I have a nice UI listing the same images. So sometimes it's easier having a little UI to help you accomplish some tasks; other times it's easier issuing commands from the terminal or the command prompt. It's up to you — you can use the UI or the terminal. Let's talk about data persistence. Containers are ephemeral and stateless, so you don't usually store data in them. Of course you can write data in a container, but if you destroy one, or if it crashes, any data stored in it will be lost. So it's okay to write some log files or scratch data that you don't want to keep, as long as you understand that you will lose those files at some point in time. To persist data, you need to store it outside the container, in what we call a volume. A volume maps an external folder — or even a cloud storage service — to a local folder inside your container, so your app sees a volume just like any regular folder. The OS in this diagram represents the server or the virtual machine where the container is running, and as you can see, a local folder is mapped to the VM file system, so the files are stored in a
volume and will survive a container restart or crash. There's still a chance we lose the data if the VM itself crashes, so later on we'll see how to use some type of external storage provided by a cloud provider. But first things first: in the next lecture you'll see how to create a volume that maps to the VM file system. And this concludes this lecture on data persistence. Let's see how to create volumes. Here's a cheat sheet listing the Docker commands for managing volumes: docker volume create creates a new volume; ls lists all the volumes; volume inspect gets you information about a volume; rm deletes a volume, destroying all the files stored in it; and volume prune deletes all the volumes currently not mounted or not in use — so be super careful using this command. All right. You first need to create a volume using the docker volume create command. Then, when you run a container, you use the -v switch — or the --volume parameter — specifying the volume name, a colon, and the name of a folder. That folder will be a logical folder in your container, and your code will see it just like any regular folder. If you use the inspect command, you'll see the logical folder location in the VM. Instead of using a volume, you can also specify a local folder directly. This is great for testing purposes: let's say you've started developing your service and you want to test, on your dev machine, whether your code can read and write files correctly — you can use this kind of mapping, but don't use it in production. Using the inspect command, you can see the local folder path. And this concludes this lecture on volumes. Let's now store data outside of a running container. To do that, we'll use a volume. I'll open a terminal and use docker volume create and the name of the volume — this creates the volume. Perfect. Let's now list the volumes on my machine: there are a few here; the first four were created earlier, and this is the volume we just created. Now let's
run an nginx image and attach — or map — a folder to that volume. We'll use docker run, -d for detach, we'll name our instance voltest, and with -v for volume we'll use the volume name we created earlier and map it to a folder called /app in our nginx image. Okay, excellent, that worked. Now let's connect to our running instance: docker exec -it, the name of the instance, and we'll run bash. Perfect. Let's first do an ls to see if we see the app folder — there it is. That folder is mapped to the volume, so anything we store or write in it will be written externally, outside of the container, and will persist. Just for fun, instead of doing a cat, let's install nano inside the running instance. I'll first do an apt-get update — remember, this is running inside the container — and then apt-get install nano, which is a small editor. Perfect. Let's cd into the app folder and open nano, and we'll create a file called test.txt containing "hello volume". We'll use Ctrl+O and Enter to write to disk, and Ctrl+X to exit nano. Perfect. If I do an ls here I should see my file — there it is, test.txt. So let's exit the running instance, and then we'll stop it and remove it from memory: stop it here, and remove it with docker rm. Okay, so now the container — the instance — is gone. If we had stored some data inside that container, it would be lost at this point; but we used a volume, so the data was stored externally. Let's create a second instance using the same volume and exec into it. Perfect — let's see what's in the app folder. Here's our file, so let's do cat test.txt just to prove it: "hello volume". That worked. Let's exit here. This proves that by using a volume, your data survives a container restart or crash — the data is still there until I remove the volume.
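The persistence experiment above, condensed into commands (the volume name `myvol` is one I'm choosing for illustration; `voltest` matches the lab):

```shell
docker volume create myvol
docker run -d --name voltest -v myvol:/app nginx
docker exec -it voltest bash              # inside: write /app/test.txt, then exit
docker stop voltest && docker rm voltest  # destroy the first container
docker run -d --name voltest2 -v myvol:/app nginx
docker exec voltest2 cat /app/test.txt    # the file survived the first container
```

Trying `docker volume rm myvol` at this point fails, because voltest2 is still using the volume — remove the container first.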
So if I issue a docker volume rm with the name of the volume — you see, it doesn't work. Why? The error says that the volume is in use. Interesting. So I need to stop any container instance that is running, and remove it from memory, before deleting or removing a volume. Let's try again: docker volume rm and my volume — this time it worked. Now, another thing I want to show you: let me create the volume again, perfect, and let's switch to the Docker UI. If I click on the Docker icon, there's a section that lists the different volumes, and my volume is listed here. I can inspect it — I can't look at the files, but I can manage it — and I can click on Remove to delete the volume. Are you sure? Yes. Let's now see the YAML concepts. YAML stands for "YAML Ain't Markup Language". It's a way to serialize data so that it's readable by human beings, and it's the file format used by Docker Compose and Kubernetes. Here's a sample YAML key/value pair: you specify the key, a colon, a space, and a value. Don't forget the space — it's mandatory. Here are some nested values: you specify child values using two-space indentation, and quotes are not needed for string values. Here's a list: again, the child elements are indented with two spaces, and there's a space after the dash. This is what we call the block style. There's also a flow style that looks like JSON, so you may be tempted to use it if you're familiar with JSON — but don't; I never saw any sample or any documentation using the flow style. Since it's easy to forget a space, and you can spend quite some time figuring out why your YAML file doesn't work, you can use tools like the linter available on yamllint.com: it will parse your YAML and flag any errors. Very useful. And this concludes this lecture on the YAML concepts.
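Putting those rules together, a small made-up YAML file in block style looks like this:

```yaml
app: webserver       # key, colon, mandatory space, then the value
server:              # nested values: two-space indentation
  host: localhost    # quotes are not needed for strings
  port: 8080
ports:               # a list: indented items, space after each dash
  - 8080
  - 9090
```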
Let's now take a look at the Docker Compose concepts. Let's say your app is composed of multiple containers: you run the front-end container using a docker run command, docker run again for the backend container, and again for the Redis cache container — so you end up issuing multiple docker run commands to run your app. Wouldn't it be nice if you could deploy your app using one single command? Well, that's the Docker Compose goal: to define and run multi-container applications using a single YAML file. There's a Compose plugin that extends the Docker CLI and lets you run those Docker Compose files; the specifications are available here. If you've looked at Docker Compose before, you may have seen that sometimes the commands use a hyphen — docker-compose — and sometimes they do not. Why is this? At the DockerCon conference in 2022, Docker announced the general availability of Compose version 2. This means there was a V1 before: the V1 command-line tool was installed separately from the Docker CLI, it was built using Python — so you needed Python installed to run Compose V1 — and the syntax was docker-compose, with a hyphen. Compose V2 is a drop-in replacement, meaning all the V1 commands work as expected. It's installed as a Docker CLI plugin automatically by Docker Desktop, and to use it you type docker, space, compose — no hyphen needed. It's written in Go, so there's no need to have Python installed to run the command. In summary, it's simply a faster version of the Docker Compose tool, shipped as a Docker plugin instead of a Python application. Here's a Docker Compose file. There are three containers defined in it: webapi1, webapi2, and apigateway. The name that you use here defines the network hostname for that container; the code running inside your containers can use these hostnames to communicate with each other. For each of them you specify the image to run, and you set the internal and external ports the container will listen on. Note that the API version key is now optional, so it's okay to skip it.
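A sketch of that three-service Compose file — image names and port numbers here are illustrative:

```yaml
services:
  webapi1:
    image: webapi1
    ports:
      - "8081:80"    # external:internal
  webapi2:
    image: webapi2
    ports:
      - "8082:80"
  apigateway:
    image: apigateway
    ports:
      - "8080:80"
```

Each service name (webapi1, webapi2, apigateway) doubles as the hostname the other containers use to reach it.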
You may ask yourself: should I use Docker Compose or not? Docker Compose is perfect for small workloads that don't require a full orchestrator like Kubernetes, and perfect when developing and testing locally before deploying to Kubernetes. Some cloud providers offer services that support Docker Compose, like App Service on Azure and ECS on AWS, and of course you can simply use a virtual machine or a VPS — virtual private server — with DigitalOcean or Linode. And this concludes this lecture on the Docker Compose concepts. Let's now use Docker Compose. Here's a cheat sheet listing some of the Docker Compose commands. docker compose build lets you build the containers as defined in your Docker Compose YAML file; if the file is located in another folder, you can use the optional -f parameter and specify the file's location. start will start all the containers as defined in your YAML file; stop will stop them, but they'll remain in memory. up will do a build followed by a start — this is super handy; use the -d parameter to run the command in the background and take back your terminal prompt. ps will list what's running; remove, or rm, will remove the containers from memory; and down will do a stop followed by a remove — again, super handy. logs will display the logs for a container, and you can open a session inside a container by running docker compose exec with the container name and the program to run. The Docker Compose file is located inside a folder, and if you run docker compose up, this will launch your application. If you try to run docker compose up a second time from the same folder, nothing will happen, because the application is already running. Running a second instance of your application was impossible with Docker Compose V1, but with V2 you can use a project name to launch a second instance from the same folder. Here's the cheat sheet for some of the new commands: docker compose --project-name, followed by a project name, will run a second instance of the application as a project; the shortcut -p is much shorter to type than --project-name. You can list
the projects currently running by using docker compose ls. cp will let you copy files from the containers — super handy to retrieve, let's say, log files — and you can copy files to the container, from your machine, desktop, or laptop, by using docker compose cp with the source path, the container ID, and the destination path. Here's an example: imagine that the Docker Compose file is located in the same folder where you run these commands. You simply use the up command to build and run the containers, and to take them down, simply use the down command. And this concludes this lecture on Docker Compose. In this lab we will deploy a Docker Compose application. Let's take a look at our docker-compose.yaml file. We have one section called services, and under that section we define two services. The first one is called web-fe — fe for front end. It's a Python application, and instead of using an image from Docker Hub we'll be building that image using the build parameter; the dot means that the Dockerfile is at the same level as the Docker Compose file. So here it is — a simple Python application, just one file, app.py, plus requirements.txt, that we're copying onto the base image. It will be listening on port 5000. And this is our second service: the Redis cache, and this time we're using an image from Docker Hub. All right, let's open a terminal and build the image using docker compose build. Perfect, my image was built. Now I can launch the application using docker compose up with -d for detach. I could have skipped the build step, because up does a build first and then a start — it's super handy. So let's use docker compose up, and my application is up and running. I can test it: if I go to localhost:5000, "you visited me one time", and after a few refreshes, five times. Perfect. Okay, I can list the running containers using docker compose ps — I can also use docker ps, since it lists the Docker containers currently running — and I can look at the logs for my front-end
service using docker compose logs -f and the name of the service. If I move that a little and hit F5 a few times, you see new entries being logged. Perfect, that works. Let's do a Ctrl+C to terminate the log streaming, and we'll use docker compose ls to list the currently running projects. I have one project running, called l09-04-docker-compose — basically, when I used docker compose up I didn't specify a project name, so Docker Compose used the folder name as the project name. Now let's try to create a second instance of our application. If we use docker compose up -d again, well, Docker Compose tells me that the application is running, so it will not start a new one. We can try to deploy our second instance using a project name: docker compose -p — p for project name — we'll name it test, then up -d. Let's see what happens. Starting... oops, we have an error here. Hmm: "bind for localhost:5000 failed: port is already allocated". Of course — my local port 5000 is in use right now. So what I need to do is change that port — the localhost port. I'll use 5001, save the file, and use the same command: docker compose -p test up -d. And this time it worked. Awesome. Let's open our browser and go to port 5001 — yes, that worked; a few refreshes, ten times. Let's go back to port 5000, the first instance: fifteen times. All right, let's do a docker compose ls again to list our projects. Now we have two projects: the first one, deployed without specifying a project name, and the second one, with the project name test. Let's delete our first instance by using docker compose down — since I didn't specify a project name, Docker Compose uses the folder name as the project name. Now if I do docker compose ls, I should have only one project running: yes, it's test. I can delete that one using docker compose, specifying the project name, and down. Let's list the projects again: docker compose ls, nothing running; ps, listing the containers, nothing.
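The second-instance workflow from this lab, as commands (the project name `test` is the one used in the lab; remember to free the host port first, as we just saw):

```shell
docker compose up -d          # first instance; project name defaults to the folder name
docker compose ls             # list the running Compose projects
docker compose -p test up -d  # second instance under the project name "test" (V2 only)
docker compose down           # tear down the default-named project
docker compose -p test down   # tear down the "test" project
```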
And why not try docker ps? Nothing in memory either. In this lab we will deploy a Docker Compose sample application that is composed of three services: a web front end built with React, a Node.js backend, and a MariaDB database. Let's take a look at our Docker Compose file. This is more complex than what we've seen so far, so let's try to break it into smaller pieces. Here we have the definition of our three services: if we start from the left, we have the backend service, top right the database (db), and the front end. Now, if you look at the db service, you can see that we're referencing an image that we'll pull from Docker Hub; for the other two services we're using the build parameter, which means we will build these two images. Looking at the backend service, we can see that we specify build and context, with the context set to backend — backend is actually a subfolder where the Dockerfile is located. Next we're defining two networks, public and private. We can see that our front-end service uses the public network, the backend service uses both public and private, and the database service — the db service — uses only the private network. The front end, being only in the public network, cannot communicate directly with the db service; but the backend, being in both public and private, can communicate with both the db and front-end services. We're also defining two named volumes, backend-modules and db-data, and we can see, highlighted in yellow, that our db service uses db-data and our backend service uses backend-modules. Also highlighted in yellow, we see that we're using other volumes — these are scoped at the service level and are not shared between services. To create an instance of our Docker Compose application we simply use docker compose up, and to bring it down, docker compose down. In Visual Studio Code, let's take a look at the Docker Compose file — it's called compose.yaml. Here we have our services section; the networks section, where we define two networks; the named volumes, backend-modules and db-data;
and a secrets section, which we haven't seen yet, where we define one key/value pair: the key is db-password, and we get the secret from a file. The file is located in the folder called db, and the secret takes its value from a file called password.txt. If we open the db folder, here's the file with the secret value, which will be injected when we run docker compose up. Okay, let's go back to the services section; here we have our backend, db, and front-end services. Let's take a look at backend: we're using the build directive and setting the context to backend — it points to this backend folder here, and the Dockerfile is located inside that folder along with the application. Same thing with the front-end service: build, context front-end, and we're getting the Dockerfile and the application from the front-end folder. And what about the db service? Well, we're using an image that we'll pull from Docker Hub. Okay, let's build our two images: we'll open a terminal and run docker compose build. Our images were built, perfect. Now let's run the application using docker compose up -d. Perfect — the application is listening on port 3000, so let's open a browser and type localhost:3000. There's our React application — it's working, awesome. We can list the containers currently running by using docker compose ps; we should have three: backend, db, front end. Awesome. Let's take a look at the logs from the backend service: docker compose logs -f backend — and these are the logs for our backend service. We can type Ctrl+C to stop the log streaming, and we can take our application down by using docker compose down; this will stop and remove the containers from memory. Do we have anything else in memory? We should not — perfect — and even if I type docker ps, there's nothing. However, the volumes are still there: when you do a docker compose down, it removes the containers from memory but does not remove the volumes.
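The secrets wiring described above boils down to something like this fragment (service and file names as in the lab):

```yaml
services:
  db:
    image: mariadb         # pulled from Docker Hub
    secrets:
      - db-password        # injected into the container at run time
secrets:
  db-password:
    file: db/password.txt  # the secret's value is read from this file
```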
So if I open the Docker Desktop application and click on Volumes, I can see that I have three volumes that were created a few minutes ago, and I need to delete them manually. I can select them right here and click on Delete, or I can do the same thing by clicking on the Docker icon in VS Code, locating the volumes, and deleting them there. I find it easier to do this in Docker Desktop, because we can see when the volumes were created, so I'm pretty sure these three are the ones I need to delete. I'll click on the delete button and confirm. Let's take a look at some of the Compose file features. It's a good practice to set limits on the resources your containers will use. In this example, highlighted in yellow, we tell Docker to start the container with a quarter of a CPU and 20 megabytes of RAM; the green section shows the limits we're allowing — in this case, half a CPU and 150 megabytes of RAM. To set an environment variable that will be injected into the running instance, simply set the key/value pair in the environment section; those values can be overridden at the command line using the -e parameter. You can reference an environment variable using the dollar-and-curly-brackets syntax — this way you can set the variable on your machine or server and use it directly in the Compose file. You can also place the values in a file named .env, located in the same folder as the Compose file; the compose command will automatically read the values from that file. By default, all containers specified in a Compose file see each other using their service names. Here we have two services, web and db: the code running in the web service can communicate with the second one using db as the hostname, and vice versa. The web container is visible from outside the Docker network using the port number configured in the left portion of the ports value; web is listening inside the Docker network on port 80, and db can reach web on port 80.
Finally, db only exposes one port number — that's the internal port. web can reach db using port 5432, but db is not visible from outside the Docker network. If you have a Compose application with multiple containers, you can restrict who sees whom by configuring networks. In this example we're defining two networks, frontend and backend: proxy can see app because both are part of the frontend network, but proxy does not see db, because proxy is not part of the backend network. When using multiple containers, you may want to start some of them first and wait until they are running before starting the other ones — a typical use case is a database that you want running before starting the main application. Doing so is easy using the depends_on parameter, where you simply specify the name of the service that this service depends on. In this example, app depends on db, so Compose will first start db, and when db is running, Compose will then start app. You can declare volumes in the volumes section — these are called named volumes, and they can be used by all the services that you declare in the Compose file. To use a volume from a service, map it using the volume name, a colon, and the virtual path inside the container; optionally, you can make the mapping read-only by appending :ro to the mapping. You can also create a mapping without using a named volume, but that mapping can't be shared across services. It's also a good practice to set a restart policy. Let's say you deploy your Compose app in a VM, and at some point you need to install some OS patches and restart or reboot the server — what will happen to your Compose app? Well, if you don't specify a restart policy, the default is "no", meaning Compose will not restart containers that were shut down by the reboot. You can set the policy to always, which restarts the containers until their removal; on-failure restarts a container if the exit code indicates an error; and lastly, unless-stopped restarts the containers unless you stop or remove them.
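Here's a sketch combining several of these features in one Compose file — the numbers match the examples above, but the service and volume names are illustrative, not from the course:

```yaml
services:
  app:
    image: myapp
    depends_on:
      - db                       # start db first, then app
    environment:
      - LOG_LEVEL=${LOG_LEVEL}   # value taken from the shell or a .env file
    deploy:
      resources:
        reservations:
          cpus: "0.25"           # start with a quarter of a CPU...
          memory: 20M            # ...and 20 MB of RAM
        limits:
          cpus: "0.5"            # allow at most half a CPU...
          memory: 150M           # ...and 150 MB of RAM
    restart: unless-stopped      # restart after a reboot unless explicitly stopped
    volumes:
      - app-data:/data:ro        # named volume, mounted read-only
  db:
    image: mariadb
    expose:
      - "5432"                   # internal port only; invisible outside the network
volumes:
  app-data:                      # declared here so services can share it
```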
And this concludes this look at some of the Docker Compose features. Let's now talk about container registries. So what is a container registry? It's a central repository for container images: you build an image locally, then you push and store the binary — the different layers — to the repository. Registries can be private or public; the default one is Docker Hub, and Microsoft, AWS, and Google each offer container registries as a service. The benefit of using a registry from your cloud provider is that the images are located near your app, so there are no hops over the internet to retrieve them. Let's say we want to retrieve an image from Docker Hub: we issue a docker pull command, and Docker downloads the image layers and stores them in its local cache. And this concludes this lecture on container registries. Let's see how to push and pull images to Docker Hub. Make sure you are logged in with your Docker user account — to be sure, simply type docker login without a username and password: Docker will tell you if you're already logged in, and if not, enter your Docker username and password. You need to tag an image with the repository name — by default it's your username; if you have created an organization on Docker Hub, prefix it to the name of the image. In this example I want to push this image to my K8s Academy organization. Then use the push command, and don't forget to specify the organization name — it's part of the image name. To retrieve the image, we use the pull command with the image's full name. On Docker Hub, public images are available for download by anyone; if you don't want to share them, you need to create a private repository — later on we'll create one using our cloud provider. And this concludes this lecture on Docker Hub. Let's now push our first image to Docker Hub. First things first, let's make sure we can log in to Docker Hub: head to hub.docker.com and make sure you can log in. Also, if I right-click here on my
Docker Desktop icon, I can see that I'm logged in. Perfect. We will containerize a Node.js Express application, so we'll first add the Dockerfile. We'll use the tooling: View, Command Palette, and type Docker: Add Docker Files to Workspace. This is a Node.js application; the package.json file is located at the root, so that's the correct one. It's listening on port 3000, and we don't want Docker Compose files. Perfect. Now we need to build the image, so let me open a terminal. Perfect. And we need to issue a docker build command with the -t for tag parameter, but notice here we need to prefix the name of the image with our registry name. If I select this, I need to prefix that with my name here, the name of my registry. Let me run that. The image was built successfully. Now I need to use the push command to push that image onto Docker Hub. Again, I'll select this, and let's replace the registry name with mine; use your own. And see what's happening here: Docker is pushing each layer to my Docker Hub account. All right, no errors, things went fine. Let's go back here, I'm gonna refresh this page, and here it is, here's my image, my Express image. So I can click on this, I can edit the information, I can see that my tag is v1, right, I can get more information here, and I can click here on public view. So by default the repositories on Docker Hub are public: anyone can view and download your images. This is the view for someone who would look at my image, so with the pull command, docker pull. Let me go back here; I can see the tags and so on and so on. And if at some point you want to delete this, you go into the Settings tab here, you scroll down, and you can delete that repository here. All right, let's go back. Now let's try to pull that image to our computer. The first thing I will try to do is remove it from my computer; let me type this, rmi, so remove image. The image is gone completely, so now let's try to pull it from Docker Hub. Okay,
perfect, pull complete, my image is back here. Now you see that I used the v1 tag here to tag my image with the version number, so let's try to build a version 2 of that image. Let me copy that, and again let's replace this part, this placeholder, with my registry name. The image has been built, so let's now push it to Docker Hub, same thing as we did before, but this time we're pushing version 2. Okay, let's go back here, let's go back to my Express repo, the General tab. Look, here I have v1 and v2; if I click here on Tags, I can see when the image was pushed, the tags, and people can download v1 or v2. If I want to remove my images, I use the rmi command, so remove image: I'll remove v1, and I also remove v2. And on Docker Hub, if I no longer need this repository, I click here, Settings, scroll down a little bit, click on Delete Repository, and I need to enter the name of my repo just to make sure, and click on Delete, and now it's gone [Music] time to introduce Kubernetes. So Kubernetes is also known as K8s: the letter K, followed by the number 8 for the eight letters in between, and then the letter s, and it's pronounced "kates". So Kubernetes is a project that was created at Google; version 1 came in July 2015.
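The tag, push, and pull workflow from the Docker Hub lab above can be summarized as a short command sequence. This is only a sketch: the registry name myregistry and the image name express-app are placeholders, so substitute your own Docker Hub account (or organization) and image name.

```shell
# Build and tag v1; the image name must be prefixed with your
# Docker Hub username or organization ("myregistry" is a placeholder)
docker build -t myregistry/express-app:v1 .
docker push myregistry/express-app:v1

# Remove the local copy, then pull it back from Docker Hub
docker rmi myregistry/express-app:v1
docker pull myregistry/express-app:v1

# Build and push a second version under a new tag
docker build -t myregistry/express-app:v2 .
docker push myregistry/express-app:v2
```

As in the lab, both tags then show up under the repository's Tags page on Docker Hub, and anyone can pull v1 or v2 while the repository is public.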
It was the third generation of container scheduler from Google; previous projects were Borg and Omega, and Google donated Kubernetes to the CNCF, so now the development is supervised by the CNCF. It's currently the leading container orchestration tool. It's designed as a loosely coupled collection of components for deploying, managing and scaling containers. It's vendor neutral, so it's not attached to a single company, it runs on all cloud providers, and there's a huge community ecosystem around Kubernetes. So what can Kubernetes do? Service discovery and load balancing; it can bridge to the cloud providers' storage services; it can provide rollout and rollback capabilities; it can monitor the health of the containers; it can manage configuration and secrets; and the same API is available either in an on-premises installation or in every cloud provider. So what can't Kubernetes do? It can't deploy or build your code, and it does not provide application-level services like databases, service buses or caches. Here's a quick look at the Kubernetes architecture; this diagram was taken from the Kubernetes documentation. We'll take a closer look at each component, but for now let's just say that it's composed of a master node, also called the control plane, that's the portion to the left, and the control plane runs the Kubernetes services and controllers. And you have the worker nodes; these nodes run the containers that you'll deploy in the cluster. So a container will run in a pod, a pod runs on a node, and all the nodes form a cluster, and this concludes this intro to Kubernetes [Music] let's see how you can run Kubernetes locally. So do you need to install a Kubernetes cluster in the cloud, or ask your IT department to install one in your enterprise, so you can test locally? Absolutely not. There are many ways that you can run Kubernetes on a desktop or laptop: Docker Desktop lets you run Kubernetes; MicroK8s, from the makers of Ubuntu, and Minikube also let you run Kubernetes; all three require that virtualization is
enabled. kind runs on top of Docker Desktop and offers extra functionality. Docker Desktop is limited to one node, but that's usually not a problem; MicroK8s, kind and Minikube can emulate multiple worker nodes. On Windows, Docker Desktop lets you run both Linux and Windows containers, and you can't create and run Windows containers on Mac and Linux. It runs on Hyper-V or the Windows Subsystem for Linux; if you have Windows 10 version 2004 or later, that's the recommended way to run Docker Desktop. If Hyper-V is enabled on your laptop or desktop, you can't run another hypervisor at the same time, and Minikube uses VirtualBox by default, but it can also run on Hyper-V. You can install Docker Desktop on Windows using Hyper-V, and it will create a virtual machine named DockerDesktopVM. When you tick the Enable Kubernetes checkbox in Docker Desktop, it will download additional containers to run Kubernetes. Using Windows 10 version 2004 or later, if you have WSL 2 installed, you can tick "Use the WSL 2 based engine" and Docker Desktop will create its VM inside the Linux distro installed on your Windows machine; note that this is the recommended way to run Docker Desktop. On Mac, Docker Desktop uses the HyperKit lightweight hypervisor to run its VM. Minikube is another popular option: it does not require Docker Desktop, it runs on Linux, Mac and Windows, and it requires a hypervisor like VirtualBox. Here we can see Minikube running on a Mac and its virtual machine in VirtualBox. If you need to install Minikube on Windows but don't want to install VirtualBox, you can run Minikube on Hyper-V; you need to create a network switch and start Minikube with some extra parameters. kind stands for Kubernetes in Docker; because it runs on top of Docker Desktop, kind lets you emulate multiple control planes and multiple worker nodes. This is useful if you need to test node affinity, and this concludes this lecture on how to run Kubernetes locally [Music] and this will be a super quick lab, just to validate that
our Kubernetes installation is working locally. So here I'm on Windows and I'm using Docker Desktop, and I installed Kubernetes with Docker Desktop. In the system tray I can right-click on the Docker Desktop icon (on a Mac you can do that from the top of the screen) and I will select Dashboard, and from there we're going to click here on the gear icon, the Settings icon. I'm using WSL 2 to run Docker Desktop, so that's the recommended way, and here, if I click on Kubernetes, Enable Kubernetes is checked, so Kubernetes is installed. But is it running correctly? Let's find out. Let me go back here, let's open a terminal, and let's run this command, kubectl cluster-info; that should give us a little information about what's running. So "Kubernetes master", and it's in green, yay, it's working. So it's running at this address, and KubeDNS is running at this address. So by running kubectl cluster-info you get some information about the Kubernetes installation itself [Music] let's see how you can use the Kubernetes CLI. The Kubernetes API server is a service running on the master node; it exposes a REST API that is the only point of communication for Kubernetes clusters. You define the desired state in YAML files; let's say you want to run x number of instances of a container in the cluster. Using the Kubernetes CLI, you then send that desired state to the cluster via the REST API. Other applications, like a web dashboard, can also communicate with the REST API to display the cluster state. kubectl is the Kubernetes CLI and it runs on Mac, Linux and Windows, and you pick your choice of pronunciation: kube-control, kube-cuddle, kube-C-T-L, it doesn't matter. It communicates with the API server, and its connection information is stored in a config file under the .kube folder. Let's now see what a context is: it's a group of access parameters that let you connect to a Kubernetes cluster. It contains the cluster name, a user and a namespace. The current context is the cluster that Kubernetes commands will
run against. Let's say that you can connect to three clusters: cluster A, cluster B and cluster C. When you set the default context to cluster B, then all the kubectl commands that you run will run against cluster B. Here's a cheat sheet for context commands: kubectl config current-context will get you the current context; get-contexts will list all of them; kubectl config use-context and the context name will set the current context; and delete-context with the context name will delete the context from the config file. There's a large ecosystem of free, open source Kubernetes tools that you can use; kubectx is a good example. It's a shortcut for the kubectl config use-context command: you simply type kubectx followed by the context name to quickly switch contexts. It runs on Windows, Mac and Linux, very useful, and this concludes this lecture on the Kubernetes CLI and the concept of context [Music] a context contains connection information to a Kubernetes cluster, and you can have one or more than one context set on your machine. It's super important to know how to figure out which context you're currently in and how to change context. So first things first, let's figure out what context we're currently in. I'm going to use kubectl config current-context, and this will print the name of the context we're currently in. I'm currently in the docker-desktop context; that means that whenever I type kubectl commands, they'll be applied to that Kubernetes cluster. All right, I can have more than one context configured on my machine, or on any machine or server. To list them, we use kubectl config get-contexts, and here we can see that I have two contexts set on my machine: docker-desktop, which is the current context because it has that star in the CURRENT column, and a second one called demo; that one is a cluster that I created in the cloud. What if I want to change from docker-desktop to demo? Okay, let's use kubectl config use-context and the context name, demo, and now if I
print again the current context, there it is, demo. So whenever I type kubectl commands with YAML files, these commands will be sent to my demo cluster somewhere in the cloud. A cool tool is kubectx, because instead of typing kubectl config blah blah blah, well, basically it's a shortcut. If I simply type kubectx, that will print the contexts that I've configured on my machine; demo is green, which means that it's the current context, and I can quickly change context using kubectx and the context name, docker-desktop. You know, a few less keystrokes; anyway, it's a fun tool. Let's now rename a context. Let's say your cluster has a long, funky name and you want to rename it to make more sense; you can use kubectl config rename-context, old name, new name. So let's try to rename our demo cluster here: demo, let's say it's on Azure, so azure-demo. Okay, let's use kubectx to print the contexts, and there it is, azure-demo. Where is that context information stored? It's stored locally on your machine. So I'm on Windows; let's see where it is. It's on the C drive, Users, your username, under the .kube folder, and there it is: config. So if I right-click on it and select Open With, Code, we can see that it's a YAML file and it contains two entries here, two clusters, one is the Azure one and the second one is the Docker Desktop one, and these are the different contexts that I have. So here is my docker-desktop context, and here is my demo context; well, the cluster name is still demo, but I renamed the context name to azure-demo, there it is. All right, let's go back here, and let's say that I've deleted my cluster in the cloud, I don't need it anymore, and I want to get rid of the context information. I can use kubectl config delete-context and the context name, so let's try this: azure-demo. "Deleted context azure-demo from the config file." So let's take a look here, and you see that now I have only one context, and here's the current context in
this line here. But let's take a look here at the clusters list: my demo cluster is still there, right, so it's not deleted automatically. What you can do is simply edit that config file and remove the section that's no longer needed [Music] let's talk about the declarative and the imperative ways to create resources in Kubernetes. There are two ways that you can use when you want to create resources in Kubernetes: the declarative way and the imperative way. Using the imperative way, you use kubectl to issue a series of commands to create resources in the cluster; this is great for learning, testing and troubleshooting. Using the declarative way, you define the resources needed in YAML files and use kubectl to send the content of these files as the desired state to the cluster. Instead of a series of commands, this is reproducible, and you can even store these YAML files in a source control system. Here we can see a series of commands to create resources; that's the imperative way. You can create a pod using the run command, and create a deployment or service using the create command. Using the declarative way, you would use a YAML file to define the resource and then send the content of that file to the cluster to create these resources. So what's a YAML file? Well, it's a text file that contains properties that define the resource. It has some required properties, like the API version and the object kind that defines the type of object you want to create; we'll take a look at these later on. You can use the kubectl create command to send the information to the Kubernetes cluster. We will take a deeper look at YAML files in future lectures, but right now you may be wondering: do you need to type all that YAML manually? The answer, of course, is no. One way to get the correct syntax is to copy one from the official Kubernetes documentation at kubernetes.io/docs; you then search for the object you want to create and click on the copy icon. Another way is to use templates offered with an editor like Visual
Studio Code: let's say you create a new YAML file, then you type Ctrl+Space and select the template to generate the manifest that you can edit. Neat. You can also use the Kubernetes CLI to generate the YAML: add --dry-run=client and -o for output, specifying yaml, to output the YAML to the console. You can even send the output to a file using the greater-than sign and a file name, and this concludes this overview of the imperative and declarative ways to create resources in Kubernetes [Music] let's deploy an nginx container using both the imperative way and the declarative way. Using the imperative way, we're gonna type a command, kubectl create: we're going to create a deployment, we're going to name our deployment my-nginx1, and we're going to specify a parameter called image, where we specify the image we want to run. This will create a deployment, and we can get a list of the deployments using kubectl get deploy, and there it is. Now the second way is the declarative way: instead of typing a command with all the parameters at the command line, we're going to specify a YAML file where all the configuration options are stored. kubectl create -f for file and the name of the file; so let's run this. Okay, it was created. Let's again type kubectl get deploy to get both our deployments, and if we take a look at the YAML file, well, it's a YAML file: the type is Deployment, it has the my-nginx2 name, and it has a bunch of parameters that we'll come back to a little bit later, but basically all the configuration parameters are stored in that YAML file. What's cool with that concept is that you can put these files quite easily in a source control system. All right, let's do a little bit of cleanup. Let's delete our deployment: kubectl delete deployment my-nginx1. Okay, and I'm gonna use the same command but using a shortcut this time: kubectl delete deploy instead of deployment, and the name of our deployment. Okay, and if I type again kubectl get deploy, there's
no deployments currently in my namespace [Music] let's take a look at namespaces. So what is a namespace? It's a Kubernetes resource that allows you to group other resources. Let's say you need to deploy your application in multiple environments, like dev, test and prod; well, you can create a namespace for each of these environments. They are like logical folders in which you group resources. Kubernetes creates a default namespace called, well, default. Objects in one namespace can access objects in a different one; the Kubernetes internal DNS assigns network names to some resources. Deleting a namespace will delete all its child objects; this is super useful when doing tests: create a namespace in the morning, create resources under that namespace during the day, and at the end of the day simply delete that namespace. This command lists all the namespaces existing in a cluster; we'll take a look at this command in more detail. Okay, first you create a namespace: this YAML file defines a namespace called prod. Then you use that namespace when you create other resources: in the metadata section, you set the namespace key to the name of the namespace you want this resource to be created in, so namespace colon prod. You can assign network policies and limit the resources that you can create in a namespace using the ResourceQuota object. Here's a cheat sheet for the namespace commands: kubectl get namespace lists all the namespaces, and if you don't want to type namespace each time, you can use a shortcut, ns, so kubectl get ns is the same thing as kubectl get namespace. You can set the current context to use a namespace in the next commands that you'll type by using kubectl config set-context --current, which will use the current context, and then namespace equals the namespace that you want to use, so the next commands that you type will be under that namespace. kubectl create ns and the name of the namespace creates a namespace; you delete it using kubectl
delete ns and the namespace name, and you can also list all the pods, or other objects, from another namespace: kubectl get pods, or any object, and then you pass the flag --all-namespaces, and that will list all the objects in all the different namespaces, and this concludes this look at namespaces [Music] let's see how to list and switch between namespaces. You can get a list of the namespaces using the kubectl get namespaces command; let me copy this, and these are the namespaces currently created on my Kubernetes cluster: default, kube-node-lease, kube-public, kube-system. I can also use the shortcut for namespaces, ns, which is just a few letters and a lot faster to type, and it will give you the exact same result. Here, let me get a list of the pods running: kubectl get pods. "No resources found in default namespace": my context is currently using the namespace called default, right, and there are no pods running in there. So what if I want to list the pods that are in the kube-system namespace? Well, I can use the same command, kubectl get pods, specifying the namespace switch and the name of the namespace. Let me copy this, and there are many pods running here. I can also use the shortcut: instead of typing --namespace=, I can just use -n, so only one dash here, two dashes there, and the name of the namespace. So let's try this, and it works, perfect. Now what if I want to change from the namespace called default to the namespace called kube-system, and then apply all my commands there, so all the objects will be created in that namespace? Well, I can do that by using kubectl config set-context --current, so here we're modifying the context, and passing a switch called namespace and the name of the namespace we want to change to; instead of being in the default namespace all the time, we want to be in kube-system. Okay, and yeah, let's see, if we run kubectl
get pods, we should get some pods, of course, because we're currently in that kube-system namespace. Okay, now let's change back to the default namespace and let's get a list of the pods; there should be none. Perfect. Okay, so this is how you change namespaces. Perfect. Of course we can create new namespaces by using kubectl create ns, short for namespace, and a namespace name, so hello. Okay, kubectl get ns, and my namespace here was created seven seconds ago, and I can delete a namespace using kubectl delete ns and the namespace name, hello. Now a bit of a warning here: if you have resources under that namespace, these resources, the pods, the containers, whatever is running, will also be deleted, so use this command with caution. It can take a couple of seconds. Okay, let's get the namespaces, and our hello namespace is gone [Music] let's now look at the master node. A node is a physical or virtual machine, and a group of nodes forms a cluster. The master node is also called the control plane. The Kubernetes services and controllers are located on the control plane; they are also called the master components, and you usually don't run your application containers on the master node. etcd is the key-value data store where the state of the cluster is stored; the API server is the only component that communicates with etcd. Let's start with the API server: it exposes a REST interface, and client tools like the Kubernetes CLI communicate through that REST API. It saves the state in etcd, and all clients interact with the API server, never directly with the data store. etcd is the data store for storing the cluster state; it's not a database but a key-value store, and it's the single source of truth inside Kubernetes. The kube-controller-manager is the controller of controllers: its job is to run the other Kubernetes controllers. The cloud-controller-manager's job is to interact with the cloud providers: it checks if nodes were created or deleted, routes traffic, creates or deletes load balancers, and
interacts with the cloud providers' storage services. The kube-scheduler watches for pods that are not created yet and selects a node for them to run on; it checks various rules and then assigns the pod creation to a node. Finally, you can install various add-ons on the master node; these add-ons provide additional functionality in your Kubernetes cluster, and this concludes this look at the master node [Music] let's take a look at the worker nodes. A node is a physical or virtual machine, and a group of nodes forms a cluster. There's a special node called the master node, sometimes called the control plane, where the Kubernetes services are installed; the nodes running the containers are called the worker nodes. When a worker node is added to the cluster, some Kubernetes services are installed automatically: the container runtime, the kubelet and the kube-proxy. These are services necessary to run pods, and they are managed by the master components on the master node. The kubelet manages the pod's lifecycle and ensures that the containers described in the pod specifications are running and are healthy. The kube-proxy is a network proxy that manages network rules on nodes; all network traffic goes through the kube-proxy. On each node you will find a container runtime; Kubernetes supports several container runtimes that implement the Kubernetes Container Runtime Interface specification, or CRI. One thing to note is that for Kubernetes versions previous to 1.19, the Moby container runtime was installed and was receiving the Container Runtime Interface calls through a shim, because it did not fully implement the specification. This is not ideal, as it added an extra step. Starting with Kubernetes version 1.19, Moby is no longer installed. Oh, wait a minute, Moby not installed? That means that my Docker containers will no longer run if the Docker container runtime is not installed, right? Well, the short answer is that your Docker images will run as is, nothing to change, it's business as usual. What changed is what
you can do inside the cluster: since the Docker runtime is no longer installed, you can no longer access the Docker engine and issue Docker commands directly inside a node; you'll have to use another tool called crictl. But again, that's only if you SSH into a node and run commands directly on that node, something that you don't do usually. All right, a node pool is a group of virtual machines, all with the same size. A cluster can have many node pools, and each node pool can host virtual machines of different sizes. Let's say that we have two node pools in our cluster: the first one consists of VMs without GPUs, and the second one with GPUs. Remember that Docker Desktop is limited to one node, so basically you run the master components and all the application containers on the same node, and this concludes this look at the worker nodes [Music] so let's get some information about our nodes. I'm going to run kubectl get nodes, and since I'm running on Docker Desktop locally, here I have only one node: the name is docker-desktop, the status is Ready, the role is master, here's the version number, and it was installed 72 days ago. All right, I can get more information about the node; since I only have one node, I can skip the node name parameter. And here's a bunch of information: the name, the role, some labels, the creation date, the capacity, two CPUs, the maximum number of pods that I can run, 110, the OS, Linux, the architecture, AMD64, the pods running, and the CPU requests, CPU limits and memory. So all useful information when you need to troubleshoot and debug; that's pretty interesting. Now, I'm using Docker Desktop locally, so I'm limited to one node; let's switch to my demo cluster running in the cloud. Let's do the same thing, kubectl get nodes. I have more nodes now: I have a cluster in the cloud running three nodes in a node pool, so one, two, three, status Ready, role agent, and the version of
Kubernetes that is running. That's interesting. Let me grab kubectl describe node, and let's use this one, so I can get the same information that I was getting earlier, but this one is running in the cloud: the name, the role, agent, the agent pool, the different labels, annotations, the number of CPUs, memory, system info, it's running Ubuntu Linux, AMD64, and at the end, the different pods that are running on this node [Music] let's take a look at pods. So what are pods? A pod is the smallest unit of work in Kubernetes. It encapsulates an application container and represents a unit of deployment. Pods can run one or more containers. Inside a pod, containers share the same IP address space and volumes, and they communicate with each other using localhost inside the pod. Pods are ephemeral, and deploying a pod is an atomic operation, so it succeeds or not. If a pod crashes, it is replaced by a brand new one with a shiny new IP address. You don't update a pod that is currently running; you create an updated version, deleting the first one and deploying the new one. You scale by adding more pods, not more containers inside a pod. I used this analogy in a previous lecture: pods are like cattle, they are ephemeral and you just replace them. A node can run many pods, and a pod can run one or more containers. If a pod runs multiple containers, there's usually one that is the main worker, where your application logic is located, and the other ones are helper containers that provide services to the main worker; we will come back to this concept in another lecture, and this concludes this look at pods [Music] let's take a look at the pod lifecycle. We'll start with the creation lifecycle. When you issue a kubectl create command to deploy a pod in your cluster, the CLI sends the information to the API server, and that information will be written into etcd. The scheduler will watch for this type of information, look at the nodes and find one where to schedule the pod, and write that
information, of course, in etcd. The kubelet running on the node will watch for that information and issue a command to create an instance of the container inside a pod, and finally the status will be written in etcd. One thing to notice is that each time an operation takes place inside the cluster, the state is written in etcd; etcd is the single source of truth in the cluster. All right, let's take a look at the deletion lifecycle now. When you issue a kubectl delete command to delete a pod from your cluster, the CLI sends the information, of course, to the API server, and that information will be written in etcd; notice that a grace period of 30 seconds will be added. The kubelet picks up that information and sends a terminate signal to the container; if the container hangs, it is killed after the 30-second grace period, and finally the state is stored in etcd. The pod state will give you a high-level summary of where the pod is in its lifecycle. Pending means that the pod is scheduled for creation but is not created yet; if you run out of resources in your cluster, Kubernetes may not be able to create new pods, and if this happens, the pods will be in the Pending state. Running means that the pod is currently running. Succeeded means that the code exited without any errors. Failed means that the code inside the pod exited with a non-zero status, so some kind of error occurred. Unknown means that Kubernetes can't communicate with the pod. And finally, CrashLoopBackOff, oh, I love this state name. So CrashLoopBackOff means that the pod started, then crashed, Kubernetes started it again, and then the pod crashed again, so Kubernetes says, okay, hold on, I'm stopping here. We'll take a look in a later lab at where to look for these states, and this concludes this look at the pod lifecycle [Music] let's see how to define and run pods. To define a pod the declarative way, you create a YAML file specifying Pod as the kind, that's the type of resource you want to create. You
specify an image location; in this case the nginx image will be pulled from Docker Hub, that's the default container registry. You set the port that the container will listen on. You can add labels; they are used to identify, describe and group related sets of objects and resources. You can set environment variables; directly here might not be the best place for configuration values, and in a later lecture we'll see how we can externalize that with a ConfigMap object. You can even specify a command to run when the container starts. If you have created a YAML file with your pod definition, you use kubectl create -f, specifying the YAML file location and name, and this will create your pod the declarative way. Now, if you don't have a YAML file and you just want to run a pod the imperative way, you use kubectl run: you specify a name for your running pod, --image and the image name, and you can specify a program to run, in this case it's sh, and with -c you can specify a parameter that you want to pass to the program. kubectl get pods will list all the pods that are currently running; -o wide will get you the same information but with a few more columns. kubectl describe pod and the pod name will show the pod information. You can use kubectl get pod with the pod name, -o for output in YAML format, and you can pipe that to a file name; this is pretty cool because it will extract the pod definition in YAML and save it to a file, so in case you lost the YAML file that was used to create a pod, well, you can recreate it quite easily. kubectl exec -it, specifying the pod name and the program to run, will get you inside that pod in interactive mode. You delete a pod using kubectl delete -f, specifying the YAML file, or if you don't have the YAML file, simply use kubectl delete pod and the pod name; that will give the same result as the previous command, and this concludes this look at how to define and run pods [Music] in this lab we will run our first pods. We'll start by using
the imperative way we'll use the kubectl run command specifying the image that we want to run in our case we want to run an nginx web server and we'll give a name to the running instance my-nginx let's run this pod my-nginx created let's get a list of the running pods by using the kubectl get pods command I have one pod running my-nginx ready one out of one that is running it was created 11 seconds ago let's try to get more information by adding the -o wide parameter it's the same command but we'll get a little bit more information like the IP address of the pod and the node where the pod is currently running awesome if you want more information we'll use the kubectl describe command with the type of object and the name of the running instance so kubectl describe the type pod and the name my-nginx all right tons of cool information here the name of the object my-nginx the namespace where it's running the node where it's running start time any labels annotations the IP address information about the container any restarts that happened and any volumes here and at the end we get the list of events that happened when the pod was created it was first scheduled then the image was pulled then the image was successfully pulled and the container created and then started the kubectl describe command should be the first thing you try when doing some troubleshooting here you will likely find some very useful information in case a pod doesn't start maybe the image is not available maybe the image didn't start correctly so the kubectl describe command is very very useful let's now delete our pod by using kubectl delete the object type pod and the name of the running pod we should get our command line back in a few seconds all right let's now run a second pod this time a busybox image kubectl run the image busybox and the name of the running instance my-box but we're adding extra parameters here -it dash dash and the program we want to run this will open a
session inside our running pod and look at the command prompt it's changed to a pound sign this means that now I can type commands that run inside the pod ls for listing the folders and files I can run a base64 command here I can encode a string and that happened inside the running container inside the pod pretty cool to stop the session simply type exit and this ends the session okay let's do a little bit of cleanup in the busybox case it takes up to 30 seconds to delete so we have two options here we can run kubectl delete with the object type and the running instance and to get back our command prompt right away we can use the --wait=false parameter or if we simply want to kill the pod brutally we use --force with a grace period of zero seconds let's run this and this will kill the pod brutally all right let's now create a pod the declarative way by using a yaml file we have a yaml file here called myapp.yaml the kind is a pod we give it a name my-app-pod a few labels the image we want to run some limits here for the resources in CPU and memory the port that the container is listening on and an environment variable we're defining an environment variable called dbcon that will have a value of my connection string here awesome let's use kubectl create -f for file and the name of our yaml file pod my-app-pod created perfect let's run the kubectl get pods command to get a list of our pods it's running perfect we can also describe our pod again same information that we saw earlier right the name the namespace and here look we have our environment variable and the same events that we saw earlier previously we used the kubectl run command with the -it parameter to open a session to our busybox container now what if the pod is already running well we can use the kubectl exec command with the same -it parameter here we specify the pod name and the program we want to run that will open a
session to a pod that is currently running okay let's output the dbcon environment variable my connection string awesome that worked let's exit to stop our session and this time we'll use the kubectl delete command specifying our yaml file to delete our pod [Music] let's take a look at init containers let's say that your app has a dependency on something it can be a database an API or some config files you want to initialize or validate that these exist before launching the app but you don't want to clutter your main logic with this type of infrastructure code so what do you do you can use an init container that lets you initialize a pod before an application container runs let's say that for the app container to run it requires a series of configuration files in the pod definition you define a container that will run first this is the init container upon completion kubernetes will start the app container this is a great pattern for applications that have dependencies the init container job can be as simple as validating that a service or a database is up and running this keeps the infrastructure code out of the main logic init containers always run to completion you can have more than one and each must complete successfully before the next one starts if it fails the kubelet repeatedly restarts it until it succeeds unless its restart policy is set to never probes are not supported since init containers run to completion in this pod definition file we have the main application located in the containers section in green and the init containers in the initContainers section in yellow as you can see here we have two init containers they both watch for services to be up and running so the first one will run to completion then the second one and finally kubernetes will start the app container and this concludes this look at init containers [Music] in this lab we will use an init container to modify the home page of an nginx container let's take a look at our yaml file it's
a manifest for a pod we have two sections here containers and initContainers let's first take a look at initContainers we will use a busybox image and we'll run this wget command and we'll hit that website right here and that website is the home page of the first website which is pretty simple just a few lines of HTML all right and we will save that HTML into a file called index.html into a volume called workdir and the nginx image will mount that volume and will serve that index.html page as its default web page so basically we're initializing our nginx container by using an init container here awesome let's open a terminal and let's deploy our application kubectl apply -f and the name of our yaml file let's wait till the nginx image is up so if I do a docker ps yep my nginx container is up oh let's open a session right into that nginx container and let's try to hit the default web page curl localhost and yep that worked we're serving the default web page of the CERN website here pretty cool let's type exit and let's do our cleanup kubectl delete and our yaml file [Music] let's now look at selectors when defining kubernetes resources you can use labels these labels allow you to identify describe and group related sets of objects or resources they are simply key value pairs that you define yourself in this pod definition we see two labels app with a value of myapp and type with a value of front-end note that the app and type keys are not something defined by kubernetes they are defined by you for your application okay but what do labels have to do with selectors well selectors use labels to filter or select objects here we see a selector in this pod definition the selector type is nodeSelector and the key value pair is disktype equals superfast okay but how does that work here we have a pod definition with the nodeSelector set to disktype equals superfast we're telling kubernetes that we want to run this
pod on the node that has a label set to disktype equals superfast node A does have such a label so kubernetes will schedule the pod creation on that node the simplest way to picture what selectors do is by comparing them with a SQL query it would be something like select star from nodes where disktype equals superfast and this concludes this look at the selectors concept [Music] in this lab we will test the selector concept we have two yaml files here the first one contains the definition of a pod we will run an nginx web server listening on port 80 and for the selector concept what is interesting is this section the labels section so we have two labels defined here app set to myapp type set to front-end let's take a look at the second yaml file that's the service of name my-service it's listening on port 80 and targeting or redirecting the traffic to port 80 that the pod is listening on and let's look at this section selector app myapp type front-end so these are the same labels that we defined right here in our pod definition all right let's test this concept I'll open a terminal and we'll first deploy our pod by using kubectl apply -f and the name of the yaml file let's now deploy the service kubectl apply -f myservice.yaml all right how do we know that the service is connected to the pod how do we know that the selection was successfully made to find out let's get the pod IP address by using kubectl get pod -o wide and let's look at the IP address here 10.1.9.31 all right let's run this command kubectl get ep ep is short for endpoints and here's the name of our service my-service so let's run this all right you see in the endpoints column 10.1.9.31 that's the same IP address let's now try to port forward to that service okay we get an immediate result here and let's go to localhost port 8080 and that worked perfect let's go back here to our terminal and let's type Ctrl C to stop the port forward now let's try to break things a little
bit we'll open the myapp.yaml file and let's change one of the labels so we'll set app to myapp2 Ctrl S let's save that file and let's redeploy the pod again by using kubectl apply -f and the name of the yaml file all right let's check the endpoint again is it still working huh look at the endpoints column none and if we try to port forward again there's no immediate result so that doesn't work so this proved that both labels must match both labels in the selector must match the labels in the pod definition so that the selection can work all right let's type Ctrl C to stop this and let's do our little cleanup by deleting the service and then the pod thank you let's take a look at multi-container pods we saw in a previous lecture that pods can run one or more containers that there's always a main worker and that the other containers are providing services to that main worker like saving data to a database and writing log files there are scenarios where multi-container pods make sense and they are well documented in a series of patterns we will take a look at some of them in the next few slides with the sidecar pattern the helper container provides extra functionalities to the main worker let's say that our app writes some log files inside the pod the sidecar can copy these log files to some persistent storage offered by the cloud provider this way the application code located inside the main worker is not cluttered with infrastructure code that code is located in the helper container and if you move from one cloud provider to another one well you simply replace or update that helper code this keeps your application code super clean our next pattern is the adapter let's say that our main worker outputs some complex monitoring information that the monitoring service of our cloud provider cannot understand the adapter role would be to connect to the main worker and simplify the data for the monitoring service again the code
specific to the cloud provider service is located inside the helper container the ambassador pattern is another type of man-in-the-middle role let's say that our application code needs to write to some nosql database but the code has no clue how to do that no problem you can send that data to the ambassador which in turn will write the data to the nosql data store the code specific to the data store service is located inside the helper container if you're curious about these patterns I suggest you get a copy of the Designing Distributed Systems book the author is Brendan Burns Brendan worked at Google where he co-created kubernetes he now works at Microsoft so how can you define multi-container pods well if you remember the lecture about YAML files you saw that the containers section is actually a list this means that you can define multiple containers highlighted in yellow you see container number one and in green container number two and when you create a pod both containers will be created at the same time pretty cool here's a quick cheat sheet for multi-container pods so after you created your yaml file you simply use kubectl create -f specifying the yaml file name so same thing as creating a single container pod if you want to exec into one of the containers you simply use kubectl exec -it the pod name and specify -c and the container name this way you can jump into one of the containers running inside the pod you get the logs for a container using kubectl logs the pod name -c the container name and this concludes this look at multi-container pods let's take a look at some networking concepts kubernetes is what we call a flat network as a result most resources inside a cluster can see each other all containers within a pod can communicate with each other all pods can communicate with each other and all nodes can communicate with all pods pods are given an ephemeral IP address while services are given a persistent IP so this is
quite important we'll come back to that later let's illustrate the cluster network in blue here each pod gets an IP address and the containers inside a pod share the same address space the containers inside the same pod share the same IP address but each must be assigned a different port number they can communicate inside the pod using localhost and the port number they can also use shared volumes what about communication between pods can the container on the right talk to the container inside the pod on the left using localhost no they can't they need to go through a service that will front the network traffic for the pod for external access to the cluster traffic goes through a load balancer service offered by the cloud provider in future lectures we'll look at the different types of services that we can use in kubernetes and this concludes this quick look at networking concepts thank you let's create a multi-container pod using a yaml file let's take a look at the yaml file so the kind is pod the name will be two-containers and in the containers section we're defining two containers so the first one will use the nginx image and we'll name it my-nginx and that web server will listen on port 80 and we're defining a second container using the busybox image and we'll name it my-box this one will listen on port 81.
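The two-container pod walked through in this lab would look roughly like this minimal sketch (the field names follow the standard pod spec; the resource limits mentioned in the lab are omitted for brevity):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: my-nginx          # main container serving on port 80
    image: nginx
    ports:
    - containerPort: 80
  - name: my-box            # second container on port 81
    image: busybox
    ports:
    - containerPort: 81
    command: ['sh', '-c', 'sleep 3600']   # keep busybox alive for an hour
```

You would create it with kubectl create -f and the file name, exactly as in the single-container case.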
and for the busybox container to stay up and running we need to tell it to stay up by issuing a sleep command so it'll stay up for an hour what we'll try to achieve is open a session on the busybox container and try to hit the nginx container so the default web page served by the nginx container all right let's try to create our pod here using kubectl create and the name of our yaml file okay pod two-containers created so let's try to see if they're running all right two out of two because we have two containers running in that pod status running 10 seconds the IP address so there's one IP address assigned to the pod and the node okay let's try to get a little bit more information by using kubectl describe pod and the name of the pod let's scroll up a little bit so the name is two-containers the namespace is the default one the node where it's running the containers so the first one is my-nginx which is using the nginx image we have information about the limits that we set earlier in the yaml file my-box using the busybox image listening on port 81 and there's that sleep command so useful information here and also look at the events now so earlier when we had only one container inside a pod we would get just one set of events for pulling and creating the container now we have two the first one is the pod was scheduled and then on the second line the nginx image was in the pulling state it was pulled created started and then the busybox container was pulled and created okay perfect now let's try to open a session inside our busybox container so we'll use kubectl exec -it okay here's the trick now we need to specify the pod name and then the container name that we want to connect to and then the program that we want to run that's the trick when you have multiple containers you need to specify the pod name and then the container name all right let's try to do that looks good it worked okay
now we'll use wget to try to hit that default page served by the nginx web server wget with this flag and we'll call localhost all right welcome to nginx so that worked now I'll pause for a couple of seconds and ask you why did that work why did calling localhost work so that worked because let's go back to our yaml file the nginx container is listening on port 80 right so we don't have to specify a port number here if the nginx container had listened on something different we would have to specify here localhost colon and the port number all right let's exit this perfect let's now delete our pod using kubectl delete and the name of the yaml file and by using the force and grace period equals zero flags that will kill both containers immediately perfect [Music] this is a super short lecture just to introduce you to the concept of a workload so a workload is an application running on kubernetes all containers running in a kubernetes cluster must run in a workload the pod is the atomic workload it represents a set of running containers and all workloads will create pods the replica set and the deployment provide extra functionalities on top of the pod like the ability to define how many instances of a pod we want the stateful set and the daemon set are specialized workloads and finally the job and cron job offer tasks that run to completion these are short-lived tasks we will see each of these workloads in detail in the following lectures and this concludes this super short lecture on the workload concept [Music] let's take a look at replica sets the replica set's primary job is to manage the pod replicas making sure that the desired number of instances are running this provides the self-healing capabilities in kubernetes while you can create replica sets the recommended workload is the deployment we'll come back to that in a moment so let's say that you want three instances of a pod to run at all times you create a replica set and specify that
you want three replicas if for some reason one pod crashes kubernetes will replace it automatically without any human intervention pretty cool eh let's see how to define a replica set by starting with a pod we take the pod definition except for the apiVersion and kind and we place these values in the template section of the replica set the final result is a replica set yaml file so basically in the section highlighted in green you will find values specific to the replica set and under the template section the values that define the pod you want to run here you set the desired number of instances with the replicas property again while you can create replica sets the recommended workload to use is the deployment because it provides extra functionalities on top of the replica set so why bother learning about replica sets well in the deployment lecture you'll see that when you create a deployment it will also create a replica set in the background that's why it's important to learn about the replica set functionalities here's a cheat sheet for replica set commands you create one using kubectl apply -f and the yaml file you get a list of the replica sets by using kubectl get rs you get some information about the replica set by using kubectl describe rs and the replica set name you delete one using kubectl delete if you have the yaml file you specify -f and the yaml file name or if you don't have the yaml file simply by using the replica set name using kubectl delete rs and the replica set name and this concludes this look at replica sets [Music] let's create three instances of an nginx container using the replica set template so let's take a look at our yaml file the type of object we want to create is the replica set this will be the name of the replica set rs-example we want three replicas running at the same time and we want three replicas of this container the name will be nginx and the image nginx colon alpine so it's a smaller version and
we define the resources and also the port that each one is listening on all right let's try to create that so kubectl apply or create -f and the name of our yaml file okay replica set created let's take a look at our running pods and we have three pods running okay and look at the names that were assigned to each of these pods rs-example and then some kind of a unique suffix here each one must have a different or unique name so each one is ready and running and look here each one has a different IP address perfect so let's take a look at the replica sets that we've created so kubectl get rs so there's one rs-example three desired three current three ready everything is green everything's okay let's now describe our replica set so kubectl describe rs for replica set and the name of our replica set so let me paste that right let's scroll up a little bit so the name is rs-example it's running in the default namespace any labels or annotations are listed here the number of replicas that we set so three current three desired and the current pod status 3 running and 0 failed then the container information so nginx we want to run the nginx alpine image listening on port 80 the limits and so on and these are the events that were raised here so each pod was successfully created here all right so last thing we need to delete what we created using kubectl delete -f and the name of the yaml file thank you let's take a look at deployments we'll start by comparing pods and deployments pods don't self-heal meaning that if a pod dies kubernetes will not replace it automatically using a pod definition only one instance can be created and you can't update and roll back pods a deployment can a deployment manages a single pod template so you create one for each microservice you want to run when creating a deployment this will also create a replica set in the background but while you can see it you don't interact with the replica set
directly you let the deployment manage that for you to summarize the replica set provides the self-healing and scaling functionalities while the deployment provides the update and rollback functionalities let's take a look at the deployment definition you define the desired number of instances with the replicas property this will be used by the underlying replica set you set the number of revisions you need to keep using the revisionHistoryLimit property and you set the update strategy in the strategy section you can set the strategy type to rolling update this way kubernetes will cycle through the pods when updating them the other strategy is recreate kubernetes will take all existing pods down before creating the new ones we'll have a dedicated lecture on this topic later on like a replica set we start with a pod definition and we insert it in the template section of the deployment the final deployment definition looks like this highlighted in green we see the properties specific to the deployment and in yellow the ones defining the pod we want to run here's a cheat sheet for deployment commands so if you don't want to use a yaml file you can create a deployment using the imperative way so you use kubectl create deploy you specify a name then with the image property you specify the image with replicas the number of replicas you want to run and you can specify other properties like the port number that the pods will listen on if you have a yaml file well you simply use kubectl apply -f and the yaml file you get a list of the deployments using kubectl get deploy and you get the deployment info using kubectl describe deploy and the deployment name since a deployment will also create a replica set you can get the list of the replica sets using kubectl get rs and you delete a deployment by using a yaml file so kubectl delete -f and the name of the yaml file or if you don't have it simply use kubectl delete deploy and the deployment name and this
concludes this look at deployments thank you [Music] let's use the deployment template to create three instances of an nginx container so let's take a look at the yaml file this time we want to use the deployment kind the type of object we want to create is the deployment we name it deploy-example we want three instances three replicas and we want to keep three versions three replica set versions in the history inside kubernetes now if you scroll down a little bit we see that we want to run nginx the alpine version because it's a little bit smaller and we name it nginx we set some resource limits and each pod will listen on port 80 all right quite similar to the replica set template that we saw earlier except maybe for this parameter here all right now let's create our deployment using kubectl apply or create -f deploy-example and let's get a list of the pods that are currently running kubectl get pods -o wide and yes we have three pods three lines here so look at the names given to each pod so deploy-example that's the name of the deployment object the name of the deployment object dash something unique so to make sure that each pod has a unique name kubernetes adds a random suffix like this only one container is running inside each pod it's in running state each pod gets its own IP address so that's perfect okay let's now try to describe our pods so we can use kubectl describe pod and the deploy-example name but we just saw that the names work a little bit differently let's see if this works yep it worked it worked because the name is shared across these three pods and so we get information about each one of them let's say I just need the information about that particular pod I can use its unique name also so instead of deploy-example I'm going to use the full name and there it is the information about that pod just that pod all right let's now get some information about the deployment currently inside
my cluster so kubectl get deploy I have only one deploy-example three out of three ready up to date available so everything's looking good and I can describe my deployment using kubectl describe the type of the object and then its name all right so let's scroll a little bit we have the name the namespace where it's running any labels annotations number of replicas three desired three updated three total three available perfect strategy rolling update okay because we haven't set an update strategy that's the default one we'll come back to that later on the pod template nginx alpine listening on port 80 right and the events here all right since a deployment will automatically create a replica set let's take a look at the replica set that was created so kubectl get rs and for sure we have one replica set that was created by the deployment three desired three current three ready everything looks okay and we can also describe our replica set since we have only one we can use kubectl describe rs or we can use its full name we copy that and here I have the replica set description and finally we need to do a little bit of cleanup we delete our deployment using kubectl delete with the name of the yaml file thank you let's take a look at daemon sets the daemon set is a specialized workload its role is to ensure that an instance of a pod is running on all nodes the pods are scheduled by the scheduler controller and run by the daemon controller as nodes are added to the cluster the pods are added to them typical uses are running some kind of helper service like log collection or some kind of monitoring well let's illustrate that in this cluster we have two nodes and the daemon set workload ensures that an instance of a pod is running on each one of these nodes so you define a daemon set in a yaml file you can specify that you don't want to schedule a pod on the master node by using a toleration same thing if you want to run the
pod on specific node pools here's a cheat sheet for the daemon set commands so you create a daemon set using a yaml file with the kubectl apply -f command you get a list of daemon sets using kubectl get ds you get some information about the running daemon set using kubectl describe ds and the name of the daemon set and when you want to delete a daemon set you either use a yaml file using kubectl delete -f and the name of the yaml file or the name of the running daemon set by using kubectl delete ds and the daemon set name and this concludes this look at daemon sets [Music] let's run a busybox image as a daemon set to ensure that we have one instance of that container running on each node in our cluster so let's take a look at the yaml file all right the kind is daemon set we give it a name daemonset-example and the container that we'll run is a busybox and here in the tolerations section we specify that we don't want to schedule the daemon set on a node role that is master so we don't want to run that daemon set the busybox on the control plane on the master node all right let's deploy our daemon set so here I'll be deploying that daemon set on a cluster in the cloud that has three nodes because if I tried to do that on Docker Desktop with just one node it wouldn't be interesting so let's try to do that it doesn't matter what cloud provider I'm using right now okay let's do a kubectl get pods we'll come back to the rest just after that so here I have three pods daemonset-example with a unique name here let's add the -o wide flag and let's examine the node column so you can see that each pod is running on a unique node so 0 3 and 4 here and this selector is interesting because if you have multiple objects you can basically filter out or just select what you want so here the selector says app equals daemonset-example that's the name of our object here so that worked one instance of our busybox is running on each
node let me add a fourth node all right so my node count is now four let's go back to Visual Studio Code and let's rerun the get pods command and for sure I have a fourth instance of my busybox container running as a daemon set on the new node without me doing anything it was deployed automatically because I selected the daemon set object type now we simply need to delete that daemon set using kubectl delete [Music] let's take a look at stateful sets let's say that you run a database inside your kubernetes cluster traffic gets higher and you need to scale that database so you create more instances the main one can read and write while the other instances are in read-only mode and use replicas of the main database this is a complex scenario to implement the stateful set role is to help you solve this complex problem for pods that need to maintain state and unlike a deployment a stateful set maintains a sticky identity with a persistent identifier for each of the pods and if a pod dies it will be replaced by another one using the same identifier the stateful set will create the pods in sequence and delete them in reverse order typical uses are for network related services that maintain some state and also for databases each pod gets a unique identifier using the stateful set name plus a sequence number and if one dies it is replaced by a new one with the same identifier the pod creation is done in an ordered way meaning that the first one will be created then the second one and so on and when they are deleted they are deleted in the reverse order note that the persistent volumes are not automatically deleted for a stateful set you need to use a headless service you define one by setting the clusterIP value to none the stateful set will then refer to the headless service and you define cloud storage in the volumeClaimTemplates section let's represent this as a slide as you can see only the first pod lets you read and write so how can you reach it if you want to write to
the database? Well, you use the pod network name, in this case the instance name, mysql-0, plus the service name, so .mysql. When reading, simply use the service name; this will load balance the calls across all instances. A bit of warning here: containers are stateless by default, and StatefulSets offer a solution for a stateful scenario, but a lot of work has to be done on top of that. A better approach might be to use the cloud provider database services instead of trying to fit a stateful scenario inside your Kubernetes cluster. Lastly, deleting a StatefulSet will not delete the PVCs; you have to do this manually. Here's a cheat sheet for StatefulSet commands. You create one using kubectl apply -f and the YAML file. You get a list of StatefulSets using kubectl get sts. You describe one using kubectl describe sts and the StatefulSet name, and you delete a StatefulSet using the delete command, either with a YAML file or with the name of the StatefulSet. And this concludes this look at StatefulSets. Let's now create a StatefulSet. Let's take a look at the StatefulSet YAML file. There are two parts in this YAML file. The first part is the creation of the headless service: the kind is Service, but the trick here is to set clusterIP to None, like this; this will create a headless service. All right, the second part is to create our StatefulSet. So the kind is StatefulSet, we give it a name, nginx-sts, and here we're referencing our headless service: the name of our headless service is assigned to the serviceName property. We want three replicas of an nginx image, and we're creating a claim on a storage class called azurefile. The cloud provider here is not important; this will work on any cloud provider. So I already have a storage class called azurefile. We'll use the ReadWriteOnce access mode, we'll use one gigabyte of storage, and we'll mount this to a folder in /var/www. All right, so let's create our StatefulSet here: kubectl apply
-f statefulset.yaml. And quickly, let's do a kubectl get pods -o wide, and here you can see that the first instance is running, the second one is running, and the third one is pending. Each instance is created in sequence: the first one 0, the second one 1, the third one 2. You can see it by the age of each instance, 19 seconds, 14 and 9 seconds; they are created in sequence, and deleted in reverse order. Also, here we can see that there's an IP address assigned to each of these. Let's take a look at the PVCs; we should have a PVC for each of these instances, and yes, sts-0, 1 and 2. So we can do a one-to-one mapping here: for sts-0 we have a PVC also called sts-0. To prove that, let's describe the second one, say kubectl describe pod and the number two, and let's see what claim is assigned. So here the volume claim name is sts-2, awesome. Okay, what we'll do is create a file in that instance, instance number two, then we'll delete the pod; the pod will be recreated automatically, and we'll see if the file still exists. We'll also modify the default web page served by nginx and see how we can reach that default web page from another instance. So let's open a session on nginx-sts-2. Perfect. Let's cd into /var/www, our volume, and let's simply type echo hello and pipe that to a file called hello.txt. If I do an ls, my file is there; cat hello.txt, yeah, perfect. Okay, first step done, creating that file. Second step, modifying the default web page. Let's cd into the /usr/share/nginx/html folder and let's do an ls here. So here's the default page served by nginx, index.html. We will brutally replace that file by using cat, piping that to the file name, typing hello, then Enter and Ctrl+D to save the file. The file is there; if I simply do a cat index.html, yeah, okay, the file has been brutally replaced. Okay, we'll close our session on this instance, sts-2. Let's close our session and let's open a session on the
instance zero. Okay, awesome. Let's try to hit that default HTML page, but on instance number two, nginx-sts-2. To do that, we'll need to use the web address: nginx-sts-2, the name of the instance, dot the name of the headless service. Would the host name alone work? No, you need a combination of both the instance name and the headless service name. All right, let's exit our session here. Okay, let's try to delete instance number two. Okay, let's do a kubectl get pods: we have a new instance that was created seven seconds ago, but the name of the instance is still the same. So instead of a random suffix here, using a StatefulSet ensures that the names will stay the same. Okay, so let's open a session on instance number two, ls /var/www, and here's our file, awesome. Let's do our cleanup: we'll delete the StatefulSet, and we need to manually delete the PVCs, because simply deleting the StatefulSet will not delete the PVCs. So let's do that. Let's take a look at jobs. Jobs are for short-lived task workloads: you start a job, it executes, and it succeeds or fails. Jobs don't stay in memory; they don't wait for traffic. A job creates one or more pods and ensures that a specified number of them successfully terminate. The job tracks the successful pod completions and then marks the job as complete when the desired number of completions is reached. When using multiple pods, the job will create them one after the other; they can also run in parallel. This is a job definition: you define how many pods you want to run at the same time, you can set a deadline if needed, and the number of completions to reach to mark the job as complete, and you should set the restart policy to Never. So here's a cheat sheet for the job commands. You create a job the imperative way using kubectl create job, the job name, and the image name. Using the declarative way with a YAML file, you use kubectl apply -f and the YAML file name. You list the jobs by using kubectl
get job. You get some information by using kubectl describe job and the name of the job that is currently running, and you delete jobs with either a YAML file, using kubectl delete -f, or with the job name, kubectl delete job and the job name. And this concludes this look at jobs. In this lab we'll create a simple job. Let's take a look at the job YAML file. The kind is Job, that's the type of object we're creating, the name we're giving it is hello, and what we will run is a BusyBox container; when the container starts, it will echo "hello from the job". Perfect, something simple. Let's run this: kubectl apply with the name of our YAML file. Perfect, the job was created. Okay, let's get a list of the jobs: there's one, hello, completions one, so it ran, and the duration was two seconds. Okay, let's do a kubectl describe job; since we have only one job, that will do the work, if not we could type the job name, hello. Let's take a look: name hello, namespace default, annotations, and so on. It ran one time, one succeeded, so our container was successfully created and completed, perfect. Now we can get a list of the pods using kubectl get pods, and this is the pod that was created to run the job. It's still there; you see the status Completed, but it's still there, so we can examine the logs in case something went wrong. We can do kubectl logs and the name of the pod: "hello from the job", that worked. So let's do our cleanup, let's delete the job. The job is deleted; do we have any pods left? None. Any jobs left? None. Perfect. Let's take a look at cron jobs. A cron job is a workload that runs jobs on a schedule; it's an extension of the job that we saw in the previous lecture. The schedule is defined using a cron-like syntax, in UTC, and you can get more information about the cron syntax on its Wikipedia page. Here's a cron job definition: you set the schedule parameter to a cron schedule. So how do you know if a cron job ran successfully?
Well, you need to look at the job history. By default, the last three successful jobs and the last failed job are kept; the pods will be in a stopped state, and you'll be able to look at their logs. If you don't want to keep any history, you can set the successful jobs history limit to zero. So here's a cheat sheet for the cron job commands. You can create one the imperative way, and if you have a YAML file, you use kubectl apply -f and the name of the YAML file. You can get the list of the cron jobs currently running using kubectl get cj, you can get some information with kubectl describe cj, and you delete the cron job using its YAML file, with kubectl delete -f and the YAML file name, or if you don't have the YAML file, you can delete it using kubectl delete cj and the name of the cron job that is currently running. And this concludes this look at cron jobs. So let's create a cron job. We'll take a look at the YAML file. The kind is CronJob, I'm going to name it hello-cron, and we will give it a schedule of only stars, which means that it will run every 60 seconds, every minute; that's the default. We're going to run a BusyBox image, and it will echo this string. Okay, so let's create our job using kubectl apply, and we can get a list of the cron jobs using kubectl get cronjobs. Okay, so here we have the name, the schedule, whether it's suspended, whether it's active, and the last time it ran, and we can get some information using kubectl describe cronjob, and we pass its name, hello-cron. This is super useful for troubleshooting: the name, the namespace (it runs in the default namespace), we have the schedule here, how many jobs it keeps in its history, how many failed jobs are kept, and also the command that will run. We can get a list of the pods, one per run, so hello-cron with a unique suffix here. So it has completed. I'm going to pause to let it run a few times. All right, the job ran three times; well, the last one is
still running, container creating. Let me refresh that; now it's completed. Okay, now we can get the logs by using kubectl logs and the pod name: "hello from the cron job". So by default the last three runs of the job are kept in the history, and you can configure that in the YAML file. Let's delete our cron job here, perfect, and if we type kubectl get pods again, all the pods in the history are also deleted when you delete the cron job. Let's take a look at rolling updates. In a previous lecture we saw that using deployments, you can set the number of pod instances with the replicas parameter, and set the number of previous iterations of the deployment that Kubernetes keeps. We also saw that there are two update strategies: rolling update and recreate. Recreate is quite simple: Kubernetes will shut down all the running pods and create new ones after that. Rolling update will cycle through updating pods. All right, let's illustrate that. Using recreate, all previous pods are deleted, and the new pods are created after that; this means that there might be a small period of time where your microservice might not be responsive. Using the rolling update strategy, a pod is deleted and replaced by a new one, then the next one, and so on. There are two values that you can set to help you with this process. maxSurge will tell Kubernetes how many pods can be created over the desired number of pods; let's say that you want three instances in total, setting maxSurge to 1 will allow the creation of one additional pod on top of these three desired ones while the rolling update is running. maxUnavailable is the opposite: it's the maximum number of pods that can be unavailable during the update process. Note that if you don't specify an update strategy in the deployment manifest, Kubernetes will use a default strategy of rolling update with maxSurge and maxUnavailable both set to 25 percent. So let's say that we want three instances of a pod
and we set maxSurge and maxUnavailable to 1. We're telling Kubernetes that it's okay to create one additional pod on top of the three desired ones, and that it's okay to have one pod less than the three desired ones. When done, the previous ReplicaSet is kept; you set how many you want to keep with the revisionHistoryLimit property. Here's a cheat sheet for rolling updates. You create your deployment using kubectl apply -f and the name of your YAML file. You get the progress of the update using kubectl rollout status. You get the history of the deployment using kubectl rollout history deployment and the deployment name. You can roll back a deployment using kubectl rollout undo and the deployment name; that will roll back to the previous version, or if you want to roll back to a specific revision number, you add the --to-revision parameter. And this concludes this look at rolling updates. In this lab we will create a deployment and later on update it to a new version using a rolling update. So let's take a look at the YAML file. The kind of object we're using is a Deployment called hello-dep, we want three replicas, and we're using a rolling update strategy here, setting maxSurge to 1 and maxUnavailable to 1. We will deploy a container called hello-app, and that'll be version 1, and later on we will update it to version 2.
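Put together, the lab's deployment manifest might look roughly like this sketch; the exact names and the image tag are assumptions, since the course ships its own hello-app image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-dep
spec:
  replicas: 3
  revisionHistoryLimit: 10       # how many old ReplicaSets to keep
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                # one extra pod allowed during the update
      maxUnavailable: 1          # one pod may be down during the update
  selector:
    matchLabels:
      app: hello-dep
  template:
    metadata:
      labels:
        app: hello-dep
    spec:
      containers:
      - name: hello-app
        image: hello-app:1.0     # hypothetical tag; change to 2.0 and re-apply to trigger the rolling update
```

Changing the image tag and running kubectl apply -f again is what triggers the rolling update, and kubectl rollout undo deployment hello-dep brings back the previous revision.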
Okay, so let's create our deployment here. Okay, we can get the deployment status by using kubectl rollout status and the deployment name; the deployment was successfully rolled out. Let's take a look at our pods running: okay, so I have three instances, perfect, hello-dep three times, excellent. Let's describe the deployment, so kubectl describe deploy and the deployment name, and let's see if we can find some information about the strategy. Yeah, here it is: the strategy type is RollingUpdate, rolling update strategy, maxUnavailable 1, maxSurge 1, so you can get the information that we set earlier in the YAML file. Okay, let's now see if we have a ReplicaSet; here's our ReplicaSet. Let's now update our YAML file and change the version of hello-app to version 2, so just update it to 2 and save the file, and we will use kubectl apply with the same YAML file. Now, what I'm using right now is K9s, a dashboard in a terminal, to get a visual view of what will happen. So right now I have three pods in green; these are the ones that are deployed. Let's apply our new deployment. I'm going to switch quickly to K9s, and you can see, oh, it happens so fast, but you saw that the new pods were created and the old ones were shut down. All right, here I can get the deployment status; everything's fine. If the deployment were to take longer, we would have some information printed here. Okay, how many ReplicaSets do we have? We have two: the current one and the previous one. By default, ten revisions are kept in history. We can get the deployment history by using kubectl rollout history and the deployment name: revision one, revision two. Okay, we're at version two, and we want to roll back to the previous version. So you
can do that by using kubectl rollout undo and the deployment name; by default it will roll back to the previous version, or if you have multiple revisions and you know which one you want to roll back to, you can specify the --to-revision flag and the revision number, as we see here. Let's do this one; well, either one will do the same thing. Okay, let me switch to K9s and see what's happening: the version two pods are terminating while the new version is created, and we can get the deployment status here, everything successfully rolled out. Let's now take a look at our ReplicaSets: we still have two, and the current one is now the first one that we deployed. All right, and we can do our cleanup by deleting the deployment using the YAML file. Let's take a look at blue-green deployments. Let's say that version 2 of our microservice contains some breaking changes, like a different database schema. What do you do? Using the rolling update strategy, you'll have both version 1 and version 2 of your app running at the same time; that might not work at all. Using the blue-green deployment pattern might help solve that problem. Blue identifies what's in production, and green identifies a new version currently deployed but not yet in production. Notice that the pods' label contains the version number. When ready, simply update the service definition to point to the new version, and now green is in production: green becomes blue and blue becomes green. Great, so this pattern solves the new database schema problem? Well, not entirely: you may still have to deal with some downtime while you update your database. Another drawback is that since both versions of the microservice will be up and running at the same time, you need to have enough free resources in your cluster to make this possible. And this concludes this look at the blue-green deployment pattern. In this lab we'll create a blue-green
deployment. We have three YAML files here, so let's take a look at the first one, hello-dep-v1. It's a deployment, we want three replicas of a container, an image called hello-app version 1.0, and here we're setting a label of app: hello-v1. Now let's take a look at the second one; it's basically the same thing, so a deployment, three replicas, but it will use version 2 of our hello-app application, and we're setting this label here, app: hello-v2. All right, let's take a look at our cluster IP manifest file: the kind is Service, and here's the selector, app: hello-v1. We will deploy that, and later on we will change the cluster IP manifest to point to the newer version. Okay, so let's deploy version one of our pods, and let's also deploy our cluster IP service. Let's take a look at the list of the pods currently running: there are three pods, and also, if I type kubectl get svc, one cluster IP, as we see; that's the one I just deployed, perfect. So let's do a quick port forward to connect to our cluster IP service: we'll forward port 8080, which the cluster IP is listening on, to localhost 8080.
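The blue-green switch lives entirely in the service's selector; a minimal sketch, with the service name assumed and the labels taken from the lab's two deployments:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc        # hypothetical name for the lab's cluster IP service
spec:
  type: ClusterIP
  ports:
  - port: 8080           # port the service listens on
    targetPort: 8080     # port the hello-app pods listen on
  selector:
    app: hello-v1        # change to hello-v2 and re-apply to route traffic to green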
Here: kubectl port-forward service, the name of our service, 8080:8080, and let's hit localhost on port 8080 here. Okay, so here: hello world version 1.0, excellent. Okay, let's now deploy version two. In my terminal I'll hit Ctrl+C to break that and get my terminal back, perfect. Let's deploy version two, okay, and let's get a list of our pods currently running: there should be six, so I have both versions in memory at the same time. Okay, so that's one of the drawbacks of this blue-green deployment technique. Let's now edit the cluster IP manifest file: we will change the selector to select app: hello-v2, save the file, and update our cluster IP service by using kubectl apply and the name of the YAML file. Okay, let's port forward again, let's hit that localhost again, and there you go, V2, it worked. Let's do a little bit of cleanup: let's delete our first deployment, our second deployment, and also the cluster IP service; you can select the three lines at the same time. Let's take a look at the concept of services in Kubernetes. First, what problem do services try to solve? Well, if the pod in green needs to reach the pod in purple, it needs to use its IP address. The problem is that pods get ephemeral IP addresses: if the pod in purple dies, you replace it, and the new one will have a different IP address. So we need a way to make these calls between pods a lot more robust. So back to the service: what exactly is a service? Well, it's a Kubernetes object that you define in a YAML file. Unlike pods, which have ephemeral IP addresses, services get durable IP addresses, and they also get DNS names. They serve as a way to access pods, they're kind of a middleman, and they target pods using selectors. Here we have four pods and a service. The service selects the pods that have the zone label equal to prod and the version label equal to one. The first pod satisfies the selector, the second one also,
but not the third one, and not the last one, so only the first two are selected. So let's say we have two instances of a pod and we place a service in front of them: if another pod needs to reach these ones, it will go through the service, and the service will load balance the requests to the instances. In Kubernetes we can use these services: the cluster IP, the node port, and the load balancer. Note that the cluster IP is the default service. We will look at them in more detail in the next lectures, and this concludes this quick look at the concept of services in Kubernetes. Let's take a look at the cluster IP. So what is a cluster IP? Well, it's the default service in Kubernetes. Its visibility is cluster-internal; this means that it's not possible to use a cluster IP service to reach a microservice from the internet, from outside the cluster. In the cluster IP definition you can set two different ports: port is the port that the service will listen on, and targetPort is the port that the selected pods are listening on, so the cluster IP will route incoming traffic to that port. In this YAML file, the service listens on port 80 and routes the traffic to port 8080. Traffic is load balanced across the selected pods. So when do you use a cluster IP service? Well, to provide a durable way to communicate with pods, but from inside the cluster. So let's illustrate this: here we have a cluster IP service fronting three instances of a pod. It's impossible to reach it from outside the cluster, but that's okay, it's visible from inside the cluster. This cluster IP will listen on port 80 and select the pods using these two labels. Since the selected pods are listening on port 8080, the service targetPort must also be set to 8080. This way, the pods in green that want to communicate with the ones in purple go through the cluster IP service on port 80, and the service routes the traffic to port 8080.
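A sketch of what the slide's cluster IP definition might look like; the service name is made up, and the label keys follow the zone/version selector example from earlier:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip
spec:
  type: ClusterIP      # the default type; can be omitted
  ports:
  - port: 80           # port the service listens on, inside the cluster
    targetPort: 8080   # port the selected pods listen on
  selector:
    zone: prod
    version: v1
```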
So let's say you have multiple microservices: you place a cluster IP in front of each of them, because a cluster IP service's IP address is durable while the pods' ones are ephemeral. Here's a cheat sheet for cluster IP. The first two commands are imperative commands. Let's say you already have a pod running and you want to expose it using a cluster IP: you would use kubectl expose po, short for pod, the pod name, specifying the port and the target port, and you can also give a name to your service. If you have a deployment, you can also use kubectl expose deploy, the deployment name, specifying the port and the target port. Both commands are imperative commands. If you have a YAML file, you would use kubectl apply -f, specifying the YAML file name. You can get a list of the services running using kubectl get svc, and get a little bit more information by specifying the flag -o wide. You can also describe the service using kubectl describe svc and the service name, and you can delete the cluster IP service using the YAML file, with kubectl delete -f and the name of the YAML file, or with kubectl delete svc and the name of the service. And this concludes this look at the cluster IP. In this lab we will deploy an nginx container, front it with a cluster IP service, then deploy a BusyBox container, open a session on that BusyBox container, and try to hit the web page served by the nginx container, but through the cluster IP service. All right, let's take a look at our YAML files. We'll start with the deployment: the kind is Deployment, we want three instances of the pod, and we will run the nginx image, the Alpine version, and it will listen on port 80.
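The lab's deployment might be sketched like this; the exact label keys and resource names are assumptions about the course's files, but the two labels match what the cluster IP service selects on:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
      env: prod
  template:
    metadata:
      labels:
        app: example     # the cluster IP service selects on these two labels
        env: prod
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
```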
Now, we're setting two labels here, and the cluster IP service will select the pods using these two labels: app example and environment prod. All right, let's take a look at our cluster IP: kind Service, it's going to listen on port 8080, and it will redirect the traffic to port 80 on the nginx containers. The selector is here, app example, environment prod, so that will select the pods in our deployment, perfect. So let's do that: first let's deploy the service, and let's deploy the nginx containers. Let's also deploy the BusyBox; we can take a look at it, the kind is Pod, the name is mybox, and it'll run a BusyBox image. Okay, so now let's get a list of our pods currently running; we should have four: one, two, three, four. The first three are the deployment, the nginx images, and the fourth one is the BusyBox. All right, let's connect to the BusyBox container and open a session by using kubectl exec mybox -it and the name of the program we want to run. Perfect, that worked; I can type ls, yep. Okay, let's try to use the service to reach the nginx pods: wget http://svc-example:8080. So let's run that and see if it works. Okay, that worked. Why did it work? What is that name here? If we go back to the cluster IP definition, that's the name of our service, and it's listening on port 8080: the service name, colon, the port it is listening on, and that's it. We can now exit our session on the BusyBox, and we can delete our resources: the cluster IP, the deployment, and the pod. Let's take a look at the NodePort service. What is a node port? A NodePort extends the cluster IP service and provides extra functionality. Its visibility is internal, like a cluster IP, but also external to the cluster. You can set a third port using the nodePort property; this is the port that the service will listen on outside the cluster. Note that the port must be in a range between 30000 and 32767,
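With all three ports in play, a NodePort definition might be sketched like this; the name, port values, and selector label are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport
spec:
  type: NodePort
  ports:
  - nodePort: 32410   # external port; must be in the 30000-32767 range
    port: 8080        # cluster-internal port, like a cluster IP's port
    targetPort: 80    # port the selected pods listen on
  selector:
    app: example
```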
and if you don't specify a nodePort value, Kubernetes will assign one randomly. You then set the port and targetPort properties like you do with a cluster IP. This sounds like a good way to expose our microservices to external traffic, but this range between 30000 and 32767 is kind of annoying, because you can't set it to, let's say, port 80. And one requirement for using node ports is that nodes must have public IP addresses. To access your service, simply specify any node's IP address plus the node port, and the traffic will be routed to the right NodePort service inside the cluster. The way it works is that you set up the pods and the service just like you did before with the cluster IP, but this time you also specify a port number in the nodePort property. External communication uses the node IP address and the port set with the nodePort property; internal communication uses the port set in the port property, just like a cluster IP. Now let's take a look at our NodePort cheat sheet. If you already have a pod running in your cluster and you want to expose it using a NodePort service, simply use kubectl expose po, the pod name, specifying the port, the target port, and NodePort as the type. Now you may wonder, where do you specify the node port number? Well, you can't: there's no property letting you set that value between 30000 and 32767,
so Kubernetes will assign one randomly for you. Same thing for a deployment: let's say you have a deployment already running in your cluster and you want to expose it using a NodePort; you use kubectl expose deploy, the deployment name, the port, the target port, the type, which is NodePort, and you can specify a name also. You can define a NodePort in a YAML file and deploy it using kubectl apply -f and the name of the YAML file. You list the services using kubectl get svc, and get more info by adding -o wide. You can describe your service using kubectl describe svc and the service name. If you have a YAML file, you can delete the service using that file, with kubectl delete -f and the name of the YAML file, or you can delete your service using its name, with kubectl delete svc and the service name. And this concludes this look at the NodePort service. In this lab we will expose a deployment using a NodePort service. We have two YAML files; let's take a look at them. The first one is for the deployment: we will deploy an nginx image, the Alpine version, listening on port 80,
and we will need two replicas, perfect. Let's take a look at the NodePort YAML file: the kind is Service and the type is NodePort, the selector will select our deployment, and here we set our node port, 32410. Okay, now let's open a terminal. We'll start with the deployment, kubectl apply and our YAML file, then our service, kubectl apply nodeport.yaml, awesome. Let's make sure that our pods are running: kubectl get pods -o wide, awesome, two pods, two instances of our nginx container. All right, now, since we're using Docker Desktop, the Docker Desktop node is mapped to localhost, so to reach the NodePort service we need to use localhost plus the node port. Let's try that: localhost:32410, and it worked, awesome. Now, when using a cloud provider, you would need to use a node's IP address instead of localhost; you would get that IP address by using kubectl get nodes -o wide, and here, in the external IP column, you would find the external IP address of the node. Awesome, let's do our cleanup: let's delete our node port and our deployment. Let's take a look at the concept of services in Kubernetes. What problem do services try to solve? Well, if the pod in green needs to reach the purple one, it needs to use its IP address. The problem is that pods are ephemeral: if the purple one dies, you need to replace it, and the new one will have a different IP address. We need a way to make these calls between pods more robust. So what exactly is a service? A service is a Kubernetes object that you define in a YAML manifest. Unlike pods, which have ephemeral IP addresses, services get durable IP addresses and a DNS name. They serve as a way to access pods, and they target pods using selectors. Here we have four pods and a service. The service selects the pods that have the zone label equal to prod and the version label equal to v1. The first pod satisfies the selector, the second one also, but not the third one and the last one; only the first two are selected. Let's say that we have two
instances of a pod and we place a service in front of them: if another pod needs to reach these pods, it will go through the service, and the service will load balance the requests to these instances. In Kubernetes we can use these services: the cluster IP, the NodePort, the load balancer, and the Ingress. The cluster IP is the default service; its visibility is internal only. The NodePort can expose a pod outside the cluster. The load balancer and the Ingress are similar services: they let you expose applications outside of the cluster; one operates at layer 4 and the other at layer 7. L4, L7, what's that? The load balancer operates at layer 4, that's the TCP transport level, so that's very low in the transport stack; it means that the load balancer can do simple operations like round-robin routing. The Ingress operates at a higher level in the transport stack; think of protocols like HTTP or SMTP. It's more intelligent, so you can configure complex routing rules. No worries if this sounds complex for now; simply remember that an Ingress is like a load balancer, but more intelligent. And this concludes this look at the concept of services. In this lab we will create a load balancer service. But you may be asking yourself: we're not using a cloud provider right now, how can that work? Well, Docker Desktop is helping us: it will emulate the load balancer service, so we can test our load balancer locally, awesome. Let's take a look at the application: it's a simple deployment, we want two replicas, two instances of an nginx image, super simple. And the load balancer YAML file: the kind is Service, the type is LoadBalancer, it will listen on port 8080 and redirect traffic to the pod that is listening on port 80.
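The load balancer manifest just described can be sketched like this; the service name and selector label are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 8080       # port the load balancer listens on
    targetPort: 80   # port the nginx pods listen on
  selector:
    app: nginx
```

On Docker Desktop the external address resolves to localhost; on a cloud provider, the same manifest would provision a real layer-4 load balancer with a public IP.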
All right, so let's open a terminal and let's deploy the app and the load balancer. Perfect. Let's make sure that our pods are running by using kubectl get pods. Yes, my two pods are here, perfect. Now, to get the IP address of the load balancer we use kubectl get svc -o wide. Localhost is shown as the IP address; using a cloud provider load balancer service you would find here, instead of localhost, the public IP address of the load balancer. So let's test this: open a browser and type localhost:8080, and that works, we reach our nginx pod. Awesome, let's do our cleanup: let's delete our load balancer and our application. This is an introduction to the persistence concepts in Kubernetes. We saw this slide earlier: containers are ephemeral and stateless, and any data stored in them is deleted when the container is destroyed. So we need to find a way to store data outside the containers if we want to keep that data. Volumes let containers store data in external storage systems; these are storage services offered by the cloud providers. Vendors create plugins according to a specification called the Container Storage Interface, and there are two ways to use storage in the cloud: the static way and the dynamic way. We have separate lectures on these later on. All right, the cloud providers create plugins to expose their storage services as persistent volumes and storage classes; these two are Kubernetes objects. Next we will look at the static and dynamic ways. Let's see how to use the static way. Persistent volumes, or PVs, and persistent volume claims, or PVCs, are two Kubernetes objects that let you define and use external storage. A persistent volume represents a storage resource that is available cluster-wide and is provisioned by the cluster administrator. You then use a persistent volume claim to claim the persistent volume. A pod will then use the PVC to mount a local folder. PVCs can be used by multiple pods, and inside the pods all the containers are
sharing the same volume. There are many persistent volume plugins available; some are offered by the cloud providers. The one highlighted in yellow, called hostPath, is a special one: it's a plugin available with Kubernetes that allows you to do local testing, and it's not mapped to a cloud provider storage service. It will not work in a multi-node cluster, but it's super useful for local testing. Here's the main drawback of persistent volumes. Let's say that the cluster admin provisions 100 gigabytes of storage and that the pod only requires a small portion of this storage, say just one gigabyte of that 100 gigabytes. Well, too bad for the other pods, because the volume is used exclusively by the pod that has the claim on it. This can be a waste of precious resources, and we will see how storage classes get around this problem in the next lecture. Okay, in the meantime let's focus on the PV and the PVC. You first select the cloud provider storage service you want to use, then you create a persistent volume and set the required capacity, let's say here 10 gigabytes. You then create a PVC, a claim that refers to the persistent volume, and finally you use the PVC from your pod and mount a local folder on it. There's an important property that you must be aware of: the reclaim policy. Set to Delete, all data will be lost when the claim on the volume is released, and this is the default value, so be aware of this. If you want to keep your files when the PVC is released, you have to set the reclaim policy to Retain. Again, the default value is Delete, so be careful. There are three access modes possible. Using ReadWriteMany, the volume can be mounted as read-write by many pods. Using ReadOnlyMany, the volume can be mounted read-only by many pods. And finally, with ReadWriteOnce, the volume can be mounted as read-write by one single pod, and the other pods will be in read-only mode. This might be useful if you have a main worker that
writes data and the other pods simply read the data. You define a persistent volume using the PersistentVolume kind, and you specify the capacity, the access mode, and the reclaim policy in the spec section. In this example the hostPath plugin is used to access local storage; remember to only use hostPath for local testing, and refer to the storage provider documentation on how to create a persistent volume specific to their storage service. You then define a claim, a PersistentVolumeClaim, making sure that the access mode matches the one set in the persistent volume. In this case the claim is for 8 gigabytes out of the possible 10 gigabytes set on the persistent volume; this means that no one can claim the remaining two gigabytes until the claim is released. In the volumes section of your pod definition, simply refer to the PVC and mount a local folder on it. A persistent volume can have these states: Available, meaning that the resource is free and not currently in use; Bound, the volume is bound to a claim, so it's in use and not available anymore; Released, the claim has been deleted but the resource is not yet reclaimed by the cluster; and finally Failed, well, something's wrong. Here's a cheat sheet for the PV and PVC commands. Using a yaml file, you can create either a PV or a PVC by using kubectl apply -f and the name of the yaml file. You list the installed persistent volumes using kubectl get pv, and the claims using kubectl get pvc. You can describe them with kubectl describe pv or pvc and their name. You can delete them using their yaml file, kubectl delete -f and the name of the yaml file, or by using their name, so kubectl delete pv and the PV name, or kubectl delete pvc and the PVC name. And this concludes this section on the static way; next we'll take a look at the dynamic way. In this lab we will create a persistent volume and a persistent volume claim, and use a pod to mount a local folder on that storage. We will create that locally in Docker Desktop, using the hostPath plugin. All right, so let's first
take a look at the persistent volume yaml file. The kind is PersistentVolume, we give it a name, pv001, a storage capacity of 10 megabytes, the access mode ReadWriteOnce, and we set the persistent volume reclaim policy to Retain, and we use the hostPath plugin here to map to a folder in the Docker Desktop virtual machine, /data here. All right, let's take a look at the PVC: the kind is PersistentVolumeClaim, we give it the name my-claim, with the ReadWriteOnce access mode, which must be the same as on the persistent volume, so let's double check: ReadWriteOnce on the PV, ReadWriteOnce on the PVC, awesome. We request 10 megabytes of storage, so we request the full capacity; we could have chosen a lesser value if we wanted. All right, let's take a look at the pod now. The kind is Pod, it's a BusyBox, and we make a reference to the claim here in the volumes section: we give it a name and we reference the persistent volume claim called my-claim, this guy here, okay, and we use that name and we mount it to a local folder called demo. We should magically see a folder called demo appear inside our BusyBox container. Okay, let's deploy our persistent volume. Right, persistent volume pv001 created, awesome. Let's look at the PV: kubectl get pv. Name pv001, capacity 10 megabytes, ReadWriteOnce, reclaim policy Retain, and it's Available, it's not claimed. Awesome, so let's now deploy the claim, the PVC. Persistent volume claim my-claim created, awesome. kubectl get pvc: my-claim is bound to the volume called pv001, capacity 10 megabytes, ReadWriteOnce. And let's again take a look at the PV to see if something has changed. Yep, it's now bound: the status is Bound, to the claim called my-claim in the default namespace. Awesome, so let's now deploy our pod. Okay, my BusyBox was created. Let's connect to it using kubectl exec, the name of the instance, so the pod, -it, and the program we want to run. So let's do an ls and see if we see a demo folder, and there it is, we have our demo folder. Let's cd into that folder and let's
create a file inside: cat, and we'll pipe that to hello.txt. Let's type hello world, then Ctrl+D to exit and save the file. Let's do an ls to see if the file was created. Perfect. Okay, let's exit this session, and now let's delete the pod by using kubectl delete -f and the pod yaml file, and since the BusyBox takes 30 seconds to shut down, we will force it to do it right away, we don't want to wait. Okay. kubectl get pods: no resources, awesome, it's really dead. Let's deploy it again. Okay, let's open a session, cd demo, ls, all right, and let's get that file: cat hello.txt, hello world, awesome, it worked. Let's exit our session and let's now do our cleanup: we'll delete our pod, then we will delete the PVC, and then the PV. You can't delete the PV before the PVC; well, you can issue the command, but the command will sit in a kind of wait state until the PVC has been released. Let's continue our journey into persistence by looking at the dynamic way. So here's a new object, the storage class. The storage class represents a storage resource that is available cluster-wide and is provisioned by the cluster administrator. You don't have to set a capacity, and it eliminates the need for the admin to pre-provision a persistent volume. Now, compared with persistent volumes, where once a claim has been made the remaining capacity becomes unavailable, the storage class can support many claims, many persistent volume claims. So you first select the cloud provider storage service that you want to use, and you create a storage class, so here no need to specify a capacity. Then you create a PVC, the claim that refers to the storage class, and now you specify the required capacity. And finally you use the PVC in your pod and mount a local folder, like with a persistent volume. There's an important property that you must be aware of: the reclaim policy. Set to Delete, all data will be lost when the claim is
released, and it's the default value, just like for the persistent volume, so be aware. If you want to keep your files when the PVC is released, you have to set the reclaim policy to Retain; again, the default value is Delete. Again, three access modes are possible, and they are set using the PVC, not the storage class. ReadWriteMany: the volume can be mounted as read-write by many pods. ReadOnlyMany: the volume can be mounted read-only by many pods. And lastly, ReadWriteOnce: the volume can be mounted as read-write by a single pod, and the other pods will be in read-only mode, useful if you have a main worker that writes data and the other pods simply read the data. So you first start by defining a storage class, specifying the cloud provider driver with the provisioner property and additional settings in the parameters section, so refer to the storage provider documentation on how to create a storage class specific to their storage service. You then define a PVC, specifying an access mode and the storage capacity required. In this claim the request is for 5 gigabytes, but more PVCs can be created over that storage class. Then simply refer to the PVC in your pod definition and map a local folder on it. In summary, the main benefits of a storage class versus a persistent volume are that with a storage class you don't have to define a capacity, and multiple claims can be made. Here's a cheat sheet for storage class commands. You create your storage class using a yaml file, using kubectl apply -f and the name of your yaml file. You get a list of your storage classes or PVCs using kubectl get sc, for storage class, and kubectl get pvc. You get the storage class information by using kubectl describe sc and the class name. You delete your storage class and PVC using kubectl delete -f and the yaml file name, or you delete your storage class using kubectl delete sc and the class name, or kubectl delete pvc and the PVC name. And this concludes this section about persistence using the dynamic way. Let's see how to store
configuration values using config maps. In a previous lecture we saw that it was possible to place configuration values directly in the environment section of a pod definition, but what if we need to change a value? Well, we have to edit the manifest and redeploy the container. Also, it's usually not a best practice to tie an object to its configuration. So how can we externalize these values? The ConfigMap object allows you to decouple and externalize configuration values. The key-value pairs are then injected into the containers as environment variables. Config maps can be created from yaml files, a series of text files, or even folders containing files. They are static, meaning that if you change a value, the containers that refer to these values have to be restarted to get the refreshed values. Using a yaml file, you define a config map and place the key-value pairs in the data section; you can even specify multi-line values using the pipe character. In the env section, the environment section of the container definition, you define an environment variable, and by using valueFrom and configMapKeyRef you reference the config map name and the key as defined in the config map. So in the configMapKeyRef section, name refers to the config map name and key refers to a key in the config map. Earlier I mentioned that this is a static process, meaning that the values are injected when Kubernetes starts the container. This means that if you make a change to a config map value, inside the container the original values stay the same until you restart the container. To get around this, you can mount a volume on a config map. Yes, you heard it right, mounting a volume. This solves the static issue, and updates are reflected in the containers. Each key-value pair is seen as a file in the mounted directory. So we start with a config map, then use a volume to mount it to a local folder inside our pod, our container. The result is that all key-value pairs are now seen as files, the name of the file being the key and the
value being inserted in the file. While this sounds cool, it also means that you'll have to refactor your code: instead of reading environment variables, you'll have to read files. So is it worth it? You'll have to figure that out by yourself. Here's the config maps cheat sheet. If you're adventurous, you can create a config map from the command line, that's the imperative way, by using kubectl create configmap: you give it a name, and with the --from-literal parameter you specify the key-value pairs; you can specify multiple key-value pairs on the same line. Or you can use a good old yaml file and use kubectl apply -f and the name of your yaml file. You can create a config map using kubectl create cm, specifying a name and the name of a text file containing multiple key-value pairs. Also, you can create a config map from a folder, so if you have multiple files inside your folder, you can create a config map from that. You can get a list of the config maps by using kubectl get cm. You can output a config map as yaml by using kubectl get cm, the name of the config map, and the -o parameter with yaml, and you can pipe that to a file, of course. You delete a config map by using its yaml file, using kubectl delete -f and the name of the yaml file. And this concludes this look at config maps. In this lab we're going to create a config map and use a pod that will reference a value stored in that config map. Let's take a look at the config map. The kind is ConfigMap, we have a name, cm-example, and in the data section we have two key-value pairs: state is set to Michigan, and city to Ann Arbor. All right, so let's now take a look at our pod. It's a BusyBox, and here in the environment section we declare an environment variable that we will call CITY, and we are getting the value from a configMapKeyRef, right, and we specify the name of the config map, so cm-example, that's the name of the config map, and the key is city, so
here, that's the key right there. Okay, again in the environment section we define a second environment variable, for the state; we get the value from a configMapKeyRef, specifying the name of the config map and the key. Awesome, let's create our config map. Okay, kubectl get cm to get information about our config map: cm-example, two data entries. Okay, that doesn't give us much information about the data itself, so let's do a kubectl describe configmap cm-example. Okay, we have the name, the namespace where it is, any labels, annotations, and here we have the data section: city Ann Arbor and state Michigan. And if for some reason you want to output that as yaml, use kubectl get configmap, the name of the config map, and -o yaml to output it in yaml, so you could recreate the config map from this. Let's now deploy the pod, our BusyBox. Perfect. Let's open a session, kubectl exec mybox -it and the program we want to run, and let's display the CITY environment variable, let's echo that: echo dollar sign CITY, and there it is, Ann Arbor, so that worked. Let's exit, and we can do a little bit of cleanup: we can delete our config map and we can delete our BusyBox pod. Let's see how to use the Secrets object in Kubernetes. You will find many types of secrets; the default one is the Opaque type, and it is very similar to the ConfigMap object that we saw in the previous lecture. You can also store credentials to connect to private container registries, authentication secrets, and even certificates. In this lecture we will focus on the Opaque secret. Like config maps, secrets are used to store configuration values, so they are somewhat identical to config maps, except that they store values as base64-encoded strings. And it's important to understand that base64 is a way to encode strings, it is not an encryption algorithm. This means that secrets stored in Kubernetes can be decoded quite easily. Yeah, great. Since these secrets are not very secret, should you use them? Well, the answer depends on the type of
information you want to store. It might be okay to store a connection string to a database, but it might not be for something more sensitive. You can protect secrets using role-based access control policies, RBAC, or you can store them elsewhere. All cloud providers offer ways to store secrets in secure vault services that you can retrieve from Kubernetes, and you can also use a third-party tool like the very popular Vault product from HashiCorp. Just be aware that the Kubernetes default secret, the Opaque one, is not encrypted by default in Kubernetes. So you can define a secret in a manifest and use base64-encoded strings as the values, or use the command line, where you can use plain-text strings, which is easier. In the pod definition you simply get the secret value using the secretKeyRef section; this is very similar to config maps, and again, similar to config maps, you can mount a volume on top of secrets. Here's the container registry secret: you can define it using a yaml file or with the CLI, and next, in the pod definition, you reference the credentials in the imagePullSecrets section. Here's a cheat sheet for secrets commands. You can create a secret the imperative way at the command line if you want, using kubectl create secret generic, then the secret name, and you pass the key-value pairs with the --from-literal parameter. You can of course create one using a yaml file, so kubectl apply -f and the name of the yaml file. You can get a list of the secrets using kubectl get secrets. You can output a secret to a yaml file by using kubectl get secrets, the secret name, and the -o yaml parameter, and you can pipe that to a file. You can delete a secret using a yaml file, or using the secret name with kubectl delete secrets and the secret name. And this concludes this look at secrets. In this lab we'll create a secret, and from a pod we will reference the secret values and use them as environment variables. Let's take a look at our secrets yaml file. The type, or the kind, is
Secret. We give it a name, secrets, and in the data section we have key-value pairs: username to some value and password to some value. Notice that the values must be base64-encoded; you cannot put a non-encoded string here. How do you do that? On Windows, the base64 tool is not installed by default, so you can use websites like base64encode.org and base64decode.org, or you can install base64 using Chocolatey: choco install base64. On macOS and Linux, well, it's already installed, so you simply do something like this: you echo your string and you pipe that to base64, and that will encode the string. Let me copy that. And to decode, echo the encoded string and you pipe that to base64 -d, for decode, and voila. All right, from the pod now, let's take a look at our pod yaml definition. The kind is Pod, it's a BusyBox, and here we're creating two environment variables. The first one is called USERNAME and it gets its value from a secretKeyRef: the name references the secret name here, and the key is one of these two, so username or password; in this case the key references the username key from the secrets secret. And the second one is for the password: the environment variable is called PASSWORD, and we get the value from the secrets secret, and the key is password. Awesome, let's create the secret. All right, let's get a list of the secrets: here's our secret, it has two data entries, it was created six seconds ago. We can describe it: kubectl describe, the object type, and then its name, and here, what do we have? The name, the namespace, and, well, we don't see the secret values, just the keys here. And what if we use kubectl get secrets and output that as yaml? We see the secret data, the password. So doing a describe will not allow us to see the values, but using get secrets and outputting that to yaml will allow us to retrieve the actual values. Okay, let's now deploy our BusyBox. Right, let's open a session, and let's echo the username, and the password, whoops,
echo $PASSWORD, my password. So that works: the values, the secrets, are decoded when they're injected into the pods. Let's exit that, and let's delete our secret and our BusyBox. Let's talk about observability. If you deploy a container using a deployment and it crashes, Kubernetes will create a brand new instance of the pod. This works because Kubernetes monitors the infrastructure. But what about your application? If your app crashes, well, Kubernetes will look at the pod health and see that it's still running, so from the infrastructure point of view everything is working fine: your pod is still up and running, but your code inside the pod has crashed. Wouldn't it be nice if Kubernetes could monitor the application health? Well, you can achieve this by configuring probes. The startup probe informs Kubernetes that the container has started and it's now okay to send traffic to it. The readiness probe informs Kubernetes that the container is now ready to accept traffic. Let's say that when your app starts it needs to execute a series of steps, like getting some configuration values from a database, creating some files, and so on, and let's say that this startup sequence takes around 30 seconds. Even if your container is up and running, your code is not ready to accept traffic, so using a readiness probe you tell Kubernetes to wait 30 seconds before starting to send traffic. Lastly, the liveness probe tells Kubernetes if your app is still running, and if not, Kubernetes will kill the pod and replace it with a brand new one. Here's a pod definition with the three possible probes. The startup probe tells Kubernetes to wait for 10 seconds before making an HTTP call to a page called health; the failure threshold tells Kubernetes to try three times. The readiness probe tells Kubernetes to wait initially five seconds before probing, making a TCP call on port 8080, and then check every 10 seconds. The liveness probe tells Kubernetes to wait initially for 15
seconds before probing, making a TCP call on port 8080, and then check every 20 seconds. Note that the readiness probe will run during the whole pod lifecycle. But will these two conflict at some point? Yes and no: they will run simultaneously, but a failed probe will result in different actions from Kubernetes. Failing a readiness probe will tell Kubernetes to stop sending traffic to the pod, but the pod is still alive, right? While failing a liveness probe will tell Kubernetes to restart the pod. How does Kubernetes probe the containers? The kubelet does the probing, using the method you configure. With an exec action, Kubernetes will run a command inside the container. With a TCP socket action, Kubernetes checks if a TCP socket port is open. And with an HTTP GET action, Kubernetes performs an HTTP GET. Here's an exec action: you're telling Kubernetes to run a cat command on a file called healthy in the /tmp folder. Here's a TCP socket action: you're telling Kubernetes to check if there's an open socket on port 8080. And finally, an HTTP GET action: you're telling Kubernetes to do an HTTP GET on the health page on port 8080.
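Putting the three probes together, the pod definition narrated above could be sketched like this; the pod name, image, and /health path are illustrative assumptions rather than the course's exact file:

```yaml
# Hypothetical pod combining the three probe types described above.
apiVersion: v1
kind: Pod
metadata:
  name: probes-example
spec:
  containers:
    - name: web
      image: my-app:1.0          # placeholder image
      startupProbe:
        httpGet:
          path: /health          # HTTP GET against the app's health page
          port: 8080
        initialDelaySeconds: 10  # wait 10 s before the first probe
        failureThreshold: 3      # give up after 3 failed probes
      readinessProbe:
        tcpSocket:
          port: 8080             # just check that the port is open
        initialDelaySeconds: 5
        periodSeconds: 10        # re-check every 10 s
      livenessProbe:
        tcpSocket:
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20        # re-check every 20 s
```

A failed readiness probe only removes the pod from the service endpoints, while a failed liveness probe restarts the container, matching the behaviour described above.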
And this concludes this look at observability. In this lab we will set a liveness probe, so let's take a look at our yaml file. We will deploy a pod that we'll call liveness-example; it's going to be a BusyBox, and this is where we're setting the probe. The type is a liveness probe; we're asking it to do an exec action, so to run a command inside our container, and the command is cat, and the parameter is that file, so under the /tmp folder, a file called healthy, no extension. Basically we're asking Kubernetes to run that command: if the command is successful, if the file exists, well, the probe is successful; if the file doesn't exist, the probe will fail. Okay. With initialDelaySeconds we're asking Kubernetes to wait for five seconds before starting to probe, and then with periodSeconds set to five we're asking Kubernetes to probe every five seconds. The last parameter is failureThreshold, set to two; basically we're telling Kubernetes that the liveness probe fails after two failed probes. Now, for the purpose of this lab we will set up something a little bit funky, just so we're able to do this test. We're running a command when the container starts, a touch, and basically this will create that healthy file, and then we're telling the container to wait for 15 seconds and then to delete that file, right, just a little trick so we'll be able to quickly see the effect of the liveness probe. Okay, let's deploy our pod and let's quickly do a kubectl describe pod. All right, it successfully pulled the image, the container was created. Okay, so let's do that again a few times and we'll see what happens, just rerun the command. Aha, liveness probe failed, cat can't open the file. Okay, two failed probes, and now Kubernetes is killing the pod, awesome. You see that here the pod was in an unhealthy state, so now it's in the killing mode, and then Kubernetes is pulling the BusyBox image again and starting a new one, right, and then it's unhealthy again, and the
process starts again. All right, let's do our cleanup: let's delete our pod, and we'll force it, we don't want to wait for it to end. Perfect. Let's take a look at some dashboards. While it's fine to use a terminal to get a view of your cluster, you may wish to use a graphical user interface instead. Luckily there are many options available; we will take a look at three popular and free options: the Kubernetes Dashboard, the Lens desktop application, and K9s, a dashboard that runs in the terminal. Let's start with the Kubernetes Dashboard. It's a web UI that you can install as an add-on inside your cluster. It's not installed by default by Docker Desktop, nor by most cloud providers. The rule of thumb is: if you don't need it, don't install it, A, because it runs inside the cluster, so you need to find a way to expose it over the internet, and B, well, because of that, it's a known attack vector. That being said, the Kubernetes Dashboard lets you see the various resources deployed inside your cluster: simply select the type in the left menu. You can also edit them by clicking on the edit icon, and you can edit the yaml and click on Update to change the manifest. Lens is an IDE that runs locally on Mac, Windows, and Linux, so you need to install it on your OS. On top of viewing the resources and editing the yaml manifests, you also get a built-in editor and a built-in terminal. Lens is maintained by Mirantis, and you can download it from this URL. Here's the overview dashboard that lets you see a resource count. Like the Kubernetes Dashboard, you can select a resource, click on the edit button, and edit the manifest directly in Lens. You can also type commands using the built-in terminal. K9s is a cool text dashboard that runs in a terminal, and you can install it on Windows, Mac, and Linux. It might sound strange to run a dashboard in a terminal, but this makes a lot of sense: K9s is super light, starts in an instant, and gives you a clean view of all your
resources. You can get information about the cluster and the resources, and you can take a series of actions, like deleting a resource or viewing the logs. Here we have a list of the pods currently running in the default namespace. Want to view the services that are currently running? Simply type colon and type svc to list the objects of that type. Pressing s while a pod is selected will open a shell. You can also set up a port forward by typing Shift+F. You can even see the pod logs. I really like this K9s dashboard in a terminal and use it all the time. And this concludes this look at the Kubernetes dashboards. In this lab we'll take a look at Lens. Lens is a free dashboard that you can install on Windows, Mac, and Linux; here's the URL, k8slens.dev. Let's take a look at the website. From there you can install it; it runs on Mac, Windows, and Linux, so you can download the setup files here, or if you're using a package manager, on Windows you can use choco install lens, or Brew on Mac, brew install --cask lens. All right, I've already launched Lens here, and by default it should take you to your Docker Desktop cluster. If not, or if you want to select another cluster, click on the hamburger menu to the left and select Add Cluster. Lens will look at the kubeconfig file and you'll be able to select your clusters from that drop-down list. Here I have my demo cluster in the cloud, so I can select that and click on Add Cluster. I can be connected to multiple clusters: here's my Docker Desktop, here's my demo, so let me switch back to Docker Desktop here. Okay, first things first, let's deploy something on our cluster. Here I have a yaml file; it's a deployment, we'll have three replicas of an image called hello-app, okay, nothing fancy here. Let's deploy this, all right, and let me switch back to Lens here. Now let's click here on Workloads and Overview, so we should see three pods running here, one deployment, and one replica set. I can either, from the top menu, select Pods, or from
this Workloads menu select Pods here, and the other types of objects here. So here are my three pods; I have one deployment and one replica set. If I click on an object I get more information: labels, annotations, and so on, so the same thing you would get at the terminal, at the command line, by typing kubectl describe replicaset, or deployment, or pod, and the object name, but here it's presented in a nice UI. All right, let's take a look at our pods here, okay, and let's see what we can do. Let's say we want to delete this one: I can select it like this and, from the ellipsis menu to the right, I can open that and select Remove, right, or I can remove it right there. Okay, let's do that: remove item, yep, remove it, and since it's a deployment, Kubernetes will create a new one automatically, pretty cool. Let's take a look at the logs: we can see the logs from there, and we can open a shell, so a lot of pretty cool functionality directly from that UI. If I click on Edit, well, I can edit the manifest, and clicking Save and Close will update that object. Let me cancel that, close this. There's a built-in terminal: by default you have a small icon here called Terminal, and if you click on its Open button, that will open the terminal, so I can type a command, kubectl get pods, right. You can try to delete an object directly here, so let me copy this object name, kubectl delete, and let's paste that; we should see right away something happening at the top of the screen. All right, what's cool is that if you have more than one cluster, you can switch between these clusters. So here, just to prove it, let me clear and type kubectl get nodes: okay, on Docker Desktop I have only one node. Let me select my demo cluster, open its terminal; there's no deployment yet, but let's just do a kubectl get nodes, and for sure I have three nodes in this cluster. That's pretty cool: just by selecting the cluster I can switch, and I have a
terminal that is in the right context. Pretty cool. So far I've just scratched the surface; there's a lot more information that we can get, like the configuration (the config maps and the secrets), the network services that are installed (here I have a ClusterIP service), and storage: the persistent volume claims, persistent volumes, and storage classes. Here in my cluster in the cloud I have four storage classes defined for me. There are the namespaces too, and so on and so on. All the information that you would get from the command line using kubectl, you get here from a nice UI. So once you're done exploring, don't forget to delete your deployment. Thank you.

In this lab we will use K9s, a super great dashboard running inside a terminal. You can get more information about K9s at its website, k9scli.io. Here's the website, nice logo, and you can look at the documentation and how to install K9s from there. If you're on Windows, you can install it using Chocolatey with choco install k9s; on macOS, brew install k9s; and on Linux, take a look at the documentation.

All right, let's first deploy something in our cluster. Here in our YAML file I have a simple deployment with three replicas of a simple container image called hello-app, nothing fancy. Let's deploy that right away. Okay, and let's open a terminal, or command line, or command prompt, whatever you name it, and let me type k9s. All right, let me stretch that a little bit. It's super cool: a dashboard running inside the terminal, it's super light, and it gives you tons of functionality. Here I have my deployment, and I'm in the default namespace. I can look at my deployment, press Enter on an object to get more information, and type Escape to go back. I have information about my cluster, and I can issue some commands: with Ctrl-D I can delete a pod. Let me do that, let me kill that poor pod here. Are you sure? Yes. See, you have visual feedback of what's happening: the pod that I deleted was killed and went into the
Terminating state, and a new one was created right away. I can hit d to describe the resource, and Escape to go back. Ctrl-K kills a pod without a confirmation: I just deleted one, but I can do a Ctrl-K, and there we go. I can see the logs too: let's switch to the second pod, type l, and I can look at the logs. I can also open a shell by typing the letter s; you can type exit to go back. I can even configure a port forward, pretty cool, and I can look at the YAML file also. Let me type Escape. Here we have the pods listed, but if I type a colon I can change that: let me type deploy, and here's my deployment. Now if I type e, I will edit the deployment, so let's change the number of replicas from three to four and close that. And now I have four out of four, so I'll type a colon again and type pod, and here are my four pods. Super interesting: it's a free tool, it's super light, it's super fast, and I always have one open so I can see visually what's happening inside my cluster when I'm issuing commands. Let's go back to our page here in Visual Studio Code and simply delete our deployment, and let me switch back right away: they're gone. Pretty cool too.

Let's see how to scale pods. The Horizontal Pod Autoscaler (HPA) is a Kubernetes feature that allows you to scale the number of pods up and down. It uses the metrics server to gather the pods' utilization, so the pods' app must have requests and limits defined. The HPA checks the metrics server every 30 seconds and scales the number of pods according to the minimum and maximum number of replicas defined. To prevent racing conditions, the HPA waits some period of time after a scaling event: by default, this delay on scale-up events is 3 minutes, and the delay on scale-down events is 5 minutes.

In this pod definition, you specify the CPU and memory requests and limits. The request is what's allocated at first, and the limit is what you allow the pod to burst to. In this example, the pod will start with 64 megabytes of RAM but can burst up to 128 megabytes if needed.
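As a concrete sketch, a pod definition with the 64-megabyte request and 128-megabyte limit described here might look like this; the container name, image, and CPU values are assumptions, not shown in the course:

```yaml
# Hypothetical pod spec fragment: requests are the initial allocation,
# limits are the ceiling the container is allowed to burst to.
apiVersion: v1
kind: Pod
metadata:
  name: hello-app
spec:
  containers:
    - name: hello-app
      image: hello-app          # placeholder image name
      resources:
        requests:
          memory: "64Mi"        # allocated at start
          cpu: "250m"           # assumed value; the lecture focuses on memory
        limits:
          memory: "128Mi"       # can burst up to this
          cpu: "500m"           # assumed value
```

Without these requests and limits, the metrics the HPA relies on have no baseline to compare utilization against.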
You configure the HPA using a manifest, specifying the deployment you want to scale, the min and max number of replicas, and the metric you want the HPA to scale on; in this case, we tell the HPA to kick in when the average CPU utilization is above 50 percent.

Here's the cheat sheet for the HPA commands. You can create one the imperative way, using kubectl autoscale deployment with the name, the metric, and the replica numbers; you can create one using a YAML file; you can get the autoscaler status using kubectl get hpa with the HPA name; and of course you can delete the HPA using the YAML file, or kubectl delete hpa with its name. And this concludes this lecture about scaling pods.

In this lab we will use the Horizontal Pod Autoscaler to scale a pod. For the HPA to work, it needs some data, some metrics, coming from the metrics server, and by default it's not installed by Docker Desktop. Just to make sure, I'll open the terminal, and what we'll do is get a list of the pods running in the kube-system namespace: CoreDNS, etcd, kube-apiserver, kube-proxy, the storage provisioner... no, nothing that looks like a metrics server. Okay, to install it, you need to run a YAML file coming from the metrics-server git repo in the kubernetes-sigs organization, but you need to make a small modification to it first. Let's take a look at that components.yaml file. If you download it directly from the git repo, you need to edit it: locate the Deployment section, there it is, and add this parameter, --kubelet-insecure-tls; if not, the metrics server will not run on Docker Desktop. All right, let's deploy our metrics server: kubectl apply -f components.yaml. Awesome. Now we can run kubectl get pods in the kube-system namespace again, and aha, metrics-server, there it is. It might take a couple of minutes for the metrics server to start running.
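The HPA manifest described in the lecture above might be sketched like this, assuming the autoscaling/v2 API; the object names are placeholders, and the one-to-four replica range matches the lab that follows:

```yaml
# Hypothetical HPA manifest: scales a deployment between 1 and 4 replicas
# when average CPU utilization across its pods exceeds 50%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-app            # deployment to scale (placeholder name)
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale up above 50% average CPU
```

Applying this file is the declarative equivalent of the imperative kubectl autoscale command from the cheat sheet.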
Now that the metrics server is running, let's take a look at our deployment. What we'll deploy is a web server called hpa-example listening on port 80; it's a simple web server that will return a web page, nothing fancy. We'll also deploy a BusyBox pod, and from that BusyBox we'll hit that web server in a loop, and that should generate some traffic.

Awesome, let's deploy our web server, and let's get a list of the pods running in the default namespace. Awesome, it's running. Let's enable our autoscaler: kubectl autoscale on the deployment called hpa-deployment, with the CPU-percent metric, and we want a minimum of one instance and a maximum of four. Okay, let's validate that we have an HPA: kubectl get hpa will list all the HPAs running on my cluster, and there's only one. Awesome. Let's now deploy our BusyBox and open a session on it. Perfect, and here's our endless loop that will hit the web server and generate some traffic. Okay, let's take a look at K9s: I have my deployment here, my HPA deployment, and my BusyBox, and voila, we have three instances of our deployment right now. Awesome, so the HPA worked.

Now I can stop the loop by hitting Ctrl-C, and I'll type exit to exit my BusyBox. From there I can delete my HPA, but be careful when you do that. I'll delete my HPA here, and if we take a look at the deployments, there are still three instances; nothing will scale that down, since the HPA has been deleted. Let's delete our BusyBox, and let's delete our deployment; that should delete all three instances. Take a look here: yep, all three are terminating. And optionally, you can delete the metrics server by using kubectl delete with the components.yaml file.

We are at the end of this Docker Containers and Kubernetes Fundamentals course. Congratulations, you are now an official Kubernetes ninja! The next steps for you would be to deploy containers in the cloud using services from a cloud provider. These
courses will teach you how to do that on Google Cloud and Azure, and also on smaller cloud providers like Linode and DigitalOcean. The best part is that each offers free credits when creating a new account; this way you can create a managed Kubernetes service in the cloud without breaking the bank. If you enjoyed the course, you can help me by making a small donation: this is the link to my Buy Me a Coffee page. I want to say a big thank you for learning Docker and Kubernetes using my course, and I wish you all the best.
Info
Channel: freeCodeCamp.org
Views: 617,328
Id: kTp5xUtcalw
Length: 356min 36sec (21396 seconds)
Published: Wed Oct 12 2022