Kubernetes Crash Course: Learn the Basics and Build a Microservice Application

Captions
In this course you're going to learn the absolute basics of Kubernetes, with videos that simplify and break down complex concepts. If you are a visual learner, this course is for you, and you'll also gain real hands-on experience through our labs. They say we retain 30 percent of what we see, but more than 80 percent of what we experience by doing. There is no better way to learn than to learn by doing, so our labs ensure you gain enough practice.

We'll start with a quick introduction to containers, then understand why you need container orchestration and what Kubernetes is, and then dive into Kubernetes concepts such as pods, replica sets, deployments and services, and finally a project on deploying a microservices application to a Kubernetes cluster.

Here's how I recommend you take this course. This course is about two hours of video and two hours of lab time, and by the end of it you should aim to have a high-level understanding of Kubernetes, not just theory but with hands-on practice. Each concept taught in this video is followed by hands-on labs. Our labs open up right in your browser, and they come absolutely free with this course, so there is no time spent setting up your own environment; you go from watching a video to practicing it in less than 30 seconds. The labs are challenge based, and each lab is specifically designed to help you practice the concepts you just learned in the video.

So set aside four hours of your time for this video. Turn off notifications on your mobile phone, turn off any desktop notifications like Slack or email and any other distractions, get in the zone, and make sure you're ready to block out a few hours of your time. Aim to stick to our curriculum and our labs and finish this course. As Robin Sharma says, starting strong is good, but finishing strong is epic. You want to make sure that you don't just start this course but also finish it; that's why we have created this seamless experience with videos followed by hands-on labs. My name is …, and I'll be your instructor for this course. Welcome to KodeKloud.

Okay, so before we begin, head over to this link to download the deck used in this course, so you can keep a local copy for your own reference, as well as to access the labs that come free with this course. Go to kode.wiki/kubernetes-labs or scan this QR code, and once you're on the page, click on "Enroll in this course" to enroll for free. Once enrolled, you have two sections: one for downloading resources and the other for the hands-on labs. Go to the download resources section and click on the link to download a local version of the PDF; it comes with all the slides, notes and transcripts of what I'm going to talk about in the rest of this course. In the hands-on labs section you have access to the labs, and when you go into a lab, click on the "Start lab" button to start it. Don't do this right now, because as we go through this course I'm going to tell you exactly when to access which lab; this is just for you to know how the labs are organized. After each lab you also have a solution video where we talk about how to solve that particular lab. The labs open up right in your browser: you get a terminal and a set of questions to answer. So that's it for now; let's start with the first topic in this course.

Let's start by refreshing our memory on containers.
So that's you, and you're developing an application on your laptop. Say it's written in Python and it has certain dependencies, such as the Flask framework for serving a website. You're also working on another part of the application, built as a separate app, say a payment service, and that also uses the Flask framework but relies on a different version of the library. So we have the first app using the 2.2 version and this one using the 2.1 version. Applications typically add many such packages and dependencies as they grow, and they may all be different in different applications, and that's going to be a challenge if you're trying to run both applications on the same machine.

Now, certain programming languages provide solutions to tackle this. For example, Python has the virtual environments concept that helps isolate Python packages into virtual environments; that way you can have different versions of dependencies in different virtual environments. However, that doesn't help you separate dependencies outside of the programming language's libraries. For example, your app may rely on a specific package on the operating system, such as, say, the glib system library, or utilities like curl. What if your application relies on certain versions of binaries and packages on the system, and these are different between different applications? It's going to be hard to manage different versions of dependencies on the same OS.

Moreover, let's say you bring in a friend to help you develop the application. That person needs to set up the exact same environment, in the exact same way, with the exact same versions of dependencies and libraries. And if the other person uses a completely different operating system, then a whole different nightmare awaits: now you'll need to figure out what the different dependencies are for that operating system.

And we've only been talking about the development environment so far. What happens when the application is deployed to a test environment or to a production environment? You'll have to make sure you set up those environments in the exact same way, with the exact same dependencies. If you change something in the development environment, you'll have to make sure it gets updated in the test and prod environments as well; otherwise things end up working in one environment and not working in the other. And no matter how much you try, it's impossible to make sure that these environments remain the same: at some point someone is going to make a change to a dependency, or add another one, and forget to update it in the other environments, and things are going to break.

Now, what if you could build an image that consists of the app itself and all of its dependencies, at both the app level and the system level, package it, and use the exact same image in all of the different environments? That way, any time you make a change in the future, the image is rebuilt and the same image is used in all of the different environments: no more differences between them. That's what containers can help us with. Containers help us create isolated environments on our systems to run applications completely isolated from each other. You could run a different web application with different versions of dependencies, or a PostgreSQL server, a MySQL server or a Redis server, all on the same system, each with their own libraries and dependencies, without worrying about any impact on each other. And each of these may be based on different operating systems too.
For example, containers allow you to build images based on specific operating systems and then add all the system-level and application-level dependencies to them, to finally have a lean and clean image for each application that has only what that application needs within that image. And that can run anywhere on your Linux machine: you can run any of these applications even if they are based on a different OS flavor. Your local laptop could be Ubuntu, but you could run an image that's built using Ubuntu, SUSE, Red Hat or any other Linux operating system.

One of the most popular tools to containerize applications and run containers is Docker. So here we have our application code, and then we have the requirements.txt file, which has all the dependencies required by the application; in our case it's just the Flask dependency. We now build a Dockerfile to package the application with its dependencies into a Docker container. The first line here creates an image from the Python base image; it then sets the right working directory, copies the requirements.txt file to the working directory, and installs the dependencies (and this is where you could add any other dependency). It then copies the application code into the image, and finally defines the command to run the application using the CMD instruction. Now, by running the docker build command we build an image, and by running the docker run command we run one instance of our application. A sketch of such a Dockerfile is shown below.
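To make that walkthrough concrete, here is a minimal sketch of the kind of Dockerfile just described. The base image tag, the application file name app.py, the image tag my-flask-app and the port 5000 are illustrative assumptions, not taken from the video.

    # Dockerfile: package the Flask app together with its dependencies
    FROM python:3.10                       # start from a Python base image (tag assumed)
    WORKDIR /app                           # set the working directory
    COPY requirements.txt .                # copy the dependency list first
    RUN pip install -r requirements.txt    # install Flask and any other dependencies
    COPY . .                               # copy the application code into the image
    CMD ["python", "app.py"]               # command to run the application (file name assumed)

Build the image and run one instance of the application:

    docker build -t my-flask-app .
    docker run -d -p 5000:5000 my-flask-app    # 5000 is Flask's default port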
So that was a quick introduction to containers and Docker. If you are new to Docker, I recommend checking out our free Docker for Beginners course on KodeKloud using the link given here. You'll learn with hands-on labs in our interactive learning environment, working on real systems that are exclusive to you.

So what is Kubernetes? With Docker, you were able to run a single instance of an application using the docker run command, which is great; running an application has never been so easy before. With Kubernetes, using the Kubernetes CLI known as kubectl, you can run a thousand instances of the same application with a single command. Kubernetes can scale it up to 2,000 with another command. Kubernetes can even be configured to do this automatically, so that the instances, and the infrastructure itself, scale up and down based on user load. Kubernetes can upgrade these 2,000 instances of the application in a rolling fashion, one at a time, with a single command, and if something goes wrong, it can help you roll back these changes with a single command. Kubernetes can also help you test new features of your application by upgrading only a percentage of these instances through A/B testing methods. Don't worry about the command line tool for now; we will take a closer look at it very soon.

With Kubernetes you're also able to define the expected state of your application. For example, you're able to define that your application consists of four different services: the web server must have three instances running, the payment service two, there should be a Redis service with three instances running, and a database service to which these services connect. You define all of this in code, and Kubernetes ensures that the state you have defined in these files is maintained at all times. So if things go down or something changes, Kubernetes will bring your application back to the declarative state defined in these files. We will look at how this works, with examples, later in this video, but for now let's take a high-level look at the Kubernetes cluster itself.

Let's start with nodes. A node is a machine, physical or virtual, on which Kubernetes is installed. A node is a worker machine, and this is where containers will be launched by Kubernetes. But what if the node on which your application is running fails? Well, obviously, your application goes down, so you need to have more than one node. A cluster is a set of nodes grouped together. This way, even if one node fails, your application is still accessible from the other nodes. Moreover, having multiple nodes helps in sharing load between the nodes as well.

Now we have a cluster, but who is responsible for managing it? Where is the information about the members of the cluster stored? How are the nodes monitored? And when a node fails, how do you move its workload to another worker node? That's where the control plane comes in, previously also known as the master node. The control plane is another node with Kubernetes components installed on it. It watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes.

When you install Kubernetes on a system, you're actually installing the following components: an API server, an etcd service, controllers and schedulers. The API server acts as the front end for Kubernetes: users, management devices, third-party tools and command line interfaces all talk to the API server to interact with the Kubernetes cluster. Next is the etcd key store. etcd is a distributed, reliable key-value store used by Kubernetes to store all the data used to manage the cluster; this is where information about the nodes in the cluster, the applications running on it and any other information is stored. The controllers are the brains behind orchestration: they are responsible for noticing and responding when nodes, containers or endpoints go down, and they make decisions to bring up new containers in such cases. The scheduler is responsible for distributing work, or containers, across multiple nodes: it looks for newly created containers and assigns them to nodes.

On the worker nodes, you have the kubelet, which is the agent that runs on each node in the cluster. This agent is responsible for making sure that the containers are running on the nodes as expected. You also have kube-proxy, which is responsible for maintaining networking rules on the nodes; kube-proxy helps the worker nodes and the containers communicate with each other, so it's more of a networking component. On the worker node you also have the container runtime, which is responsible for running containers, because ultimately that's why we want the cluster: to run applications in the form of containers. One example of a container runtime is Docker. It used to be Docker for a long time, because Kubernetes was originally built to orchestrate Docker containers specifically; however, over time it evolved to support other container runtimes, so it no longer supports Docker directly but supports the runtime component of Docker, which today is managed by containerd. There is a separate video where we talk about the whole story of Kubernetes and Docker, how they started together and what changed, so check it out in the link given below. Going forward, we're going to refer to the container runtime in Kubernetes as containerd. And that's the high-level architecture of a Kubernetes cluster.
Next, we will look into the Kubernetes CLI. Let's take a look at the kubectl utility. kubectl is the command line utility of Kubernetes; this is the tool you use to operate the Kubernetes cluster, such as to view the status of the cluster, provision applications, scale up, scale down, delete, and many other things. One of the questions I get asked often is how to pronounce it. Different people pronounce it differently: some say "kube C-T-L", others say "kube control", and some say "kube cuddle". The canonical pronunciation is "kube control", so I'll try to stick to that. I myself have changed the way I pronounce it over the years; "kube cuddle" came easy to me, so you'll hear me say that at times too, so forgive me if I mix it up.

Now that that is out of the way, let's get started. To identify the version of the kubectl client and the Kubernetes server, run the kubectl version command. This lists the client and server versions, along with the version of any other Kubernetes-related tool installed on the system. The --help option lists basic help information, such as the basic commands that can be run. We will dig into these commands later in the video, but it's good to run this command, look around and understand some of the basic commands listed there.

Let's begin with a few very simple commands. To see a list of nodes in your cluster, run the kubectl get nodes command. The output shows you the name of each node, its status, its roles, how long the machine has been up and the version of Kubernetes running on it. To get a more verbose output with more details, such as the internal IP, the OS image, kernel version, container runtime, etc., run the same command with the -o wide option.
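As a quick recap, these are the commands just described, as you would type them at the terminal (the comments are mine):

    kubectl version             # client and server versions
    kubectl --help              # basic help and a list of basic commands
    kubectl get nodes           # name, status, roles, age and Kubernetes version of each node
    kubectl get nodes -o wide   # adds internal IP, OS image, kernel version, container runtime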
It's time to gain hands-on experience with the KodeKloud labs. This course is designed for you to have a seamless experience from start to finish, and that's why we have labs after each concept that will help you gain hands-on experience with exactly what you've learned until then. To begin with, we're going to work on an existing Kubernetes cluster that's already set up; this will help you get familiarized with the cluster and the Kubernetes command line interface, and deploy applications to the cluster with pods, deployments, services, etc. At the end of this course we'll share instructions on setting up your own local environment so you can continue your studies. We do not want you to be distracted by any issues that might come up when you try to build your own cluster locally, so my recommendation is to complete this course using only our labs, going from start to finish without any interruption. If this is a two-hour course, you should aim to complete it in two hours, or four hours max. So head over to the labs using the links given below, and come back here once you're done to resume the course.

Okay, so it's time to start working on the labs. In case you haven't done it already, go to this link to access the labs in this course: go to kode.wiki/kubernetes-labs or scan this QR code. Once you're there, you have this free course where you can download the resources as well as access the hands-on labs. We're going to start with the first lab, so click on the "Start course" button, go to the first lab (or whichever lab it asks you to access), and then click on the "Start lab" button to load the lab. Give it about 20 seconds or so to load.

Okay, so it's loaded, and now you have a terminal on the right. This is an actual terminal to a Kubernetes cluster, and on the left you have some questions. This is the first lab, so there are only three questions, very simple and straightforward, and the goal is for you to identify the number of nodes that are part of the cluster. You also have hints, where you can get some ideas for solving each question. In this case the question is to get the number of nodes in the cluster, so I'm going to run a kubectl get nodes command, and it tells me that there is only one node, the control plane node; so in this case the answer is one. In the same way, follow through and try to identify the commands to run to get the information for each of these questions. Some of them are multiple-choice questions; for others you'll have to actually make changes to the cluster, like create a deployment or a pod, or deploy an application. Depending on which lab you are on, the tasks will change. You can expand the terminal to full screen like this, or if you just need more space, you can adjust it like this. If you want to reset the lab, click on this button to reset it, and if you are done with the lab, click on the "Stop lab" button to end the lab session. Awesome, thank you so much, and I wish you all the best.

So it's time for our first hands-on labs activity. Use the link given here to access the lab. As mentioned before, the labs come free of cost with this course, so all you need to do is sign up for the free course using the link given here and start the lab named "Familiarize with the lab environment". In this lab you will use kubectl commands to identify the cluster setup and the nodes available in it. Once you're done with the lab, come back and resume the course from here.

Let's take a look at Kubernetes pods now. Before we begin, we assume that the following have already been set up: the application is already developed and built into Docker images, and it is available on a Docker repository like Docker Hub, so Kubernetes can pull it down; and the Kubernetes cluster has already been set up and is working. As we discussed before, with Kubernetes our ultimate aim is to deploy our application, in the form of containers, on a set of machines that are configured as worker nodes in a cluster. However, Kubernetes does not deploy containers directly on the worker nodes: the containers are encapsulated into a Kubernetes object known as a pod. A pod is a single instance of an application, and it is the smallest object that you can create in Kubernetes.

So what happens when you want to scale up? Do you add more containers to the pod? No, you create more pods. Typically, an application instance running as a container has a one-to-one relationship with a pod: to create more instances of the application, you create more pods. However, the one-to-one relationship is not a strict rule: it is common practice to have a helper container, or a sidecar container as it's also known, along with the main application. This could be an agent that collects logs or monitors the application and reports to a third party, and that's absolutely fine.

Let us now look at how to create pods. For this we run the kubectl run command: we specify a name for the pod and the image to be used to create it. What this command really does is deploy a container by creating a pod: it first creates a pod automatically and deploys an instance of the nginx Docker image.
But where does it get the application image from? For that, you need to specify the image name using the --image parameter, like this. The application image, in this case the nginx image, is downloaded from the Docker Hub repository. Docker Hub is a public repository where the latest Docker images of various applications are stored; you could configure Kubernetes to pull the image from public Docker Hub or from a private repository within your organization.

So now that we have a pod created, how do we see the list of pods available? The kubectl get pods command shows us the list of pods in our cluster. In this case we see the pod is in a ContainerCreating state, and it soon changes to a Running state when the application is actually running. Also remember that we haven't yet talked about how a user can access the nginx web server, so in its current state we haven't made the web server accessible to external users; you can access it internally from the node for now. Here we will just see how to deploy a pod, and in a later lecture, once we learn about networking and services, we will see how to make this application accessible to end users.
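Put together, the imperative commands from this section look like this (nginx is the pod name used in the video):

    kubectl run nginx --image=nginx   # create a pod named nginx from the nginx image on Docker Hub
    kubectl get pods                  # shows ContainerCreating at first, then Running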
So that was the imperative way of creating an object in Kubernetes: you run a command to create one object at a time, and when there are many objects and services in your application, this is not a viable option. The preferred approach is the declarative way, where you create a YAML file with the specifications of the object (the pod, in this case) and have Kubernetes apply that configuration. This way you can define the state of your application and its services as code, store it in source code repositories and version it. This approach enables version control, CI/CD, and sharing these files with others and collaborating on them. If you are new to YAML, check out our free course on YAML and JSON available on KodeKloud; it has hands-on activities that will help you get comfortable with YAML quickly, because it's going to be an important part of the rest of this course. So head over, complete the free course, and come back here.

Now we will learn how to develop YAML files specifically for Kubernetes. Kubernetes uses YAML files as input for the creation of objects such as pods, replica sets, deployments, services, etc., and all of these follow a very similar structure. A Kubernetes definition file always contains four top-level fields: apiVersion, kind, metadata and spec. These are top-level or root-level properties; think of them as siblings, children of the same parent. They are all required fields, so you must have them in your configuration file. Let us look at each one of them.

The first one is the apiVersion. This is the version of the Kubernetes API we're using to create the object; depending on what we are trying to create, we must use the right API version. For now, since we are working with pods, we will set the apiVersion to v1. If you're creating a service, replica set or deployment, you will use the versions listed here; we will see what these are later in this course.

Next is the kind. The kind refers to the type of object we are creating, which in this case happens to be a pod, so we will set it to Pod. Remember that the kind is case sensitive, so you want to make sure you use the exact kind as it is listed here: if you use all lowercase or all caps, it's going to error out. Some other possible values here could be ReplicaSet, Deployment or Service, which is what you see in the kind field in the table on the right.

The next field is metadata. The metadata is data about the object, like its name, labels, etc. As you can see, unlike the first two, where you specify a string value, this is in the form of a dictionary: everything under metadata is indented to the right a little bit, so name and labels are children of metadata. The exact number of spaces before the two properties, name and labels, doesn't really matter, but it should be the same for both, since they are siblings. In this example, labels has more spaces on the left than name, so it becomes a child of the name property instead of a sibling, and that's wrong. Also, the two properties must have more spaces than their parent, metadata, so that they are indented to the right a little bit; in this other example, all three have the same number of spaces before them, so they would all be siblings, which is not correct either.

Under metadata, the name is a string value, so you can name your pod myapp-pod, and labels is a dictionary within the metadata dictionary, and it can have any key-value pairs you wish. For now, I have added a label, app, with the value myapp. Similarly, you could add other labels as you see fit, which will help you identify these objects at a later point in time. Say, for example, there are hundreds of pods running front-end applications and hundreds of others running a back end or a database; it would be difficult to group these pods together once they are deployed. If you label them now as front-end, back-end or database, you will be able to filter the pods based on these labels later. It's important to note that under metadata you can only specify name, labels or anything else that Kubernetes expects to be under metadata; you cannot add arbitrary properties there. However, under labels you can have any kind of key-value pairs you see fit, so it's important to understand what each of these parameters expects.

So far we have only mentioned the type and name of the object we want to create, a pod named myapp-pod, but we haven't specified the container or image that we need in the pod. The last section in the configuration file is the specification section, written as spec. Depending on the object we're creating, this is where we provide additional information to Kubernetes pertaining to that object. This is going to be different for different objects, so it's important to refer to the documentation to get it right. Since we are only creating a pod with a single container in it, it is easy: spec is a dictionary, so add a property under it called containers, which is a list or an array. The reason this property is a list is that pods can have multiple containers within them, as we learned previously; in this case, though, we will add only a single item to the list, since we plan to have only a single container in the pod. The item in the list is a dictionary, so add a name and an image property; the value for image is nginx.

Once the file is created, run the command kubectl create -f followed by the file name, which is pod-definition.yaml, and Kubernetes creates the pod. So to summarize: remember the four top-level properties, apiVersion, kind, metadata and spec, and then add values to them depending on the object you're creating. Once we create the pod, how do we see it? Use the kubectl get pods command to see the list of pods available; in this case it's just one. To see detailed information about the pod, run the kubectl describe pod command followed by the pod name. This tells you when the pod was created, what labels are assigned to it, what containers are part of it, and the events associated with it.
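Assembled from the walkthrough above, the full pod-definition.yaml looks like this (the container name nginx-container is my illustrative choice; the video specifies only the image):

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx-container   # container name (assumed)
          image: nginx            # image pulled from Docker Hub

Create and inspect it with:

    kubectl create -f pod-definition.yaml
    kubectl get pods
    kubectl describe pod myapp-pod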
In this demo, we're going to create a pod again, but this time, instead of making use of the kubectl run command, we are going to create it using a YAML definition file. Our goal is to create a YAML file with the pod specifications in it. There are many ways to do it: you could create one in any text editor, so if you're on Windows you could use Notepad, or if you're on Linux, as I am, a native editor like vi or Vim. An editor with support for the YAML language is very helpful in getting the syntax right, so instead of Notepad, a tool like Notepad++ on Windows, or Vim on Linux, would be better. I'll talk more about tips and tricks and other tools and IDEs that can help with this in the upcoming lectures; for now, let's stick with the very basic form of creating a YAML file using the vi editor on my Linux system.

So here I am on my Linux terminal, and I'm going to use the Vim text-based editor to create this pod definition file. I'm going to call the file pod.yaml, and as seen in the lecture, we will start off with the four root-level properties that we saw: apiVersion, kind, metadata and spec. We know that the value of apiVersion for a pod is v1. The kind is Pod with a capital P; it is case sensitive, so that's important. metadata is a dictionary, and it can have values where we define the name of the pod, so I'm going to use the name nginx, and we can specify additional labels under it. labels, again, is also a dictionary, and it can have as many labels as you want under it, so we can specify a label, which is a key-value pair, such as app: nginx, and we can also add more labels, like tier set to frontend; anything that helps us group this particular pod.

Next we have to define the spec. spec is also a dictionary, and it has a property called containers. Before we move on to that, we have to make sure we get the indentation right. For example, app and tier are children of the labels property, so they have to be on the same vertical line here, and similarly, under metadata you have name and labels, which are the children of metadata, so they both have to be on the same vertical line. You have to make sure that the spacing is correct: typically it would be two spaces or a tab, but it is recommended not to use tabs, so always stick to two spaces, and stick to that throughout.

The next thing we're going to configure is the container. containers is a list of objects. We first give it a name; note that this is the name of the container within the pod, and there could be multiple containers, each with a different name: one container could be named app and another could be named helper, any name that makes sense to you. We're going to use the same name as that of the container image, so we will just name it nginx. The second property we're going to add here is the image name, which is the Docker Hub image name of the container we're going to create, so the image is again nginx. If you're using a registry other than Docker Hub, make sure to specify the full path to that image repository here.
Now remember that we can add additional containers to the pod as well. To do that, we declare a second element in the list: here, for example, I can add a BusyBox container using the busybox image, and that would be the second element of the array. In this case, though, we're going to stick to one single container, so I'll just delete that. I'm now going to hit Escape and type :wq to save the file, and we will use the cat command to make sure the file was created with the expected contents.

Make sure the format is correct: name and labels are children of metadata, and you can see that they are on the same vertical line; similarly, labels has two children, the two labels app and tier; and spec has a list, which we define with a hyphen followed by the objects. We can now use the kubectl create command or the kubectl apply command; create and apply work the same way when you're creating a new object, so you can use either. We pass in the file name using the -f option, and here we can see that the pod has been created. Let's check the status real quick: you can see that it's in a ContainerCreating state, and when we check again, we see that it's in a Running state. As before, if you want more details about the pod, you can always run the kubectl describe command and specify the name of the pod, and you'll get much more in-depth information about it. Okay, so that's it for this demo; the complete file we built is sketched below.
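Here is the pod.yaml from the demo, reconstructed from the steps above, with the optional second container shown as a comment:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: nginx
        tier: frontend
    spec:
      containers:
        - name: nginx
          image: nginx
        # a second container would be another list item, for example:
        # - name: busybox
        #   image: busybox

And the commands used in the demo:

    kubectl apply -f pod.yaml    # or kubectl create -f pod.yaml
    kubectl get pods
    kubectl describe pod nginx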
In the next section we will learn some tips and tricks for developing YAML files easily using IDEs. At any time during this course, if you feel you need assistance, head over to our community group: we have a thriving community on Slack where our instructors and teaching assistants hang out. Go to kode.wiki/community or scan this QR code to get an invite. Once you're in, explore the various channels available for learning different topics, and feel free to post your questions in the respective channels. All right, thank you, and I'll see you in our community channel.

Well, it's time for our second hands-on labs activity. Go back into the labs and access the labs for pods, or click on the link given here. In this lab you will create pods and also explore creating YAML files for pods. Once done, come back here and we will resume the course.

Let us now talk about replica sets. So what is a replica, and why do we need a replica set? Let's go back to our first scenario, where we had a single pod running our application. What if, for some reason, our application crashes and the pod fails? Users will no longer be able to access our application. To prevent users from losing access, we would like to have more than one instance, or pod, running at the same time; that way, if one fails, we still have our application running on the other one, and the replica set brings the failed one back, ensuring a predefined number of replicas is running at all times. The replica set helps us run multiple instances of a single pod in the Kubernetes cluster, thus providing high availability. So does that mean you can't use a replica set if you plan to have a single pod? No: even if you have a single pod, the replica set can help by automatically bringing up a new pod when the existing one fails. Thus the replica set ensures that the specified number of pods is running at all times, whether it's just one or a hundred. Another reason we need a replica set is to create multiple pods to share the load across them.

For example, in this simple scenario, we have a single pod serving a set of users. When the number of users increases, and if we were to run out of resources on the first node, we could deploy additional pods across other nodes in the cluster. As you can see, the replica set spans multiple nodes in the cluster: it helps us balance the load across multiple pods on different nodes, as well as scale our application when demand increases. A pod has a one-to-one relationship with a node: a pod can only run on one node at a time, and you cannot move a running pod from one node to another; you'd have to kill it and recreate it on another node. Well, technically, the scheduler decides which node a pod gets assigned to, and there are ways for you to control that, but that's out of scope for this crash course; we discuss it in much more detail in our CKA course. For now, we will stick to the basics: a pod lives on one node; a replica set spans the entire cluster and can deploy a pod on any node in the cluster; it monitors the number of pods in the cluster and ensures enough are deployed at all times.

Let us now look at how we create a replica set. As with the previous lecture, we start by creating a definition file, so we'll name it replicaset-definition.yaml. As with any Kubernetes definition file, it has four sections: apiVersion, kind, metadata and spec. The apiVersion is specific to what we are creating; in this case, the replica set is supported in Kubernetes API version apps/v1. If you get this wrong, you're likely to get an error that looks like this: it would say no match for the kind ReplicaSet, because the specified Kubernetes API version has no support for it. The kind, as we know, is ReplicaSet, and under metadata we will add a name, say myapp-replicaset, and we will also add a few labels, app and type, and assign some values to them. So far it has been very similar to how we created a pod in the previous section.

Next is the most crucial part of the definition file, the specification, written as spec. For any Kubernetes definition file, the spec section defines what's inside the object we're creating. In this case, we know that the replica set creates multiple instances of a pod; but which pod? We create a template section under spec to provide a pod template to be used by the replica set to create replicas. Now, how do we define the pod template? It's not that hard, because we already did it in the previous exercise: remember, we created a pod definition file, and we can reuse its contents to populate the template section. Move all the contents of the pod definition file into the template section of the replica set, except for the first two lines, apiVersion and kind. Remember, whatever we move must be under the template section, meaning it should be indented to the right, with more spaces than the template line itself. Looking at our file, we now have two metadata sections: one for the replica set itself, and the other for the pod. It's as if we took the entire pod template and put it inside this file, so there's a hierarchy you can see: we have nested two definition files together, the replica set being the parent and the pod definition the child. There is still something missing, though: we haven't mentioned how many replicas we need in the replica set.
For that, we add another property called replicas, and under it we input the number of replicas we need; in this case, three. Remember that template and replicas are direct children of the spec section, so they are siblings and must be on the same vertical line, with an equal number of spaces before them.

A replica set also requires a selector definition. The selector section helps the replica set identify which pods fall under it. But why would you have to specify which pods fall under it if you have provided the contents of the pod definition itself in the template? It's because a replica set can also manage pods that were not created as part of the replica set creation process. Say, for example, there were pods created before the replica set that match the labels specified in the selector; the replica set will also take those pods into consideration when creating the replicas. I will elaborate on this in the next slide; for now, note that it has to be written in the form of matchLabels, as shown here. The matchLabels selector simply matches the labels specified under it to the labels on the pods. The replica set selector also provides other options for matching beyond what is shown here. Just remember that whatever label you provide here needs to match the label set on the pod in the template: these two need to match.

Once the file is ready, to create the replica set, run the kubectl create command and input the file using the -f parameter. The replica set is created, and it first creates the pods using the pod definition template, as many as required, which is three in this case. To view the list of created replica sets, run the kubectl get replicaset command, and you will see the replica set listed, along with the desired number of replicas or pods, the current number of replicas, and how many of them are ready. If you would like to see the pods created by the replica set, run the kubectl get pods command, and you will see three pods running. Note that all of their names start with the name of the replica set, myapp-replicaset, indicating that they were all created automatically by the replica set.

So what is the deal with labels and selectors? Why do we label our pods and objects in Kubernetes? Let us look at a simple scenario. Say we deployed three instances of our front-end web application as three pods. We would like to create a replica set to ensure that we have three active pods at all times. And yes, this is one of the use cases of replica sets, as we just discussed: you can use a replica set to monitor existing pods if you already have them, as in this example; in case they were not created, the replica set will create them for you. The role of the replica set is to monitor the pods, and if any of them were to fail, to deploy new ones; the replica set is in fact a process that monitors the pods. Now, how does the replica set know which pods to monitor? We'd like it to monitor these three pods specifically and make sure they are running at all times, but there could be hundreds of other pods in the cluster running different applications. This is where labeling our pods during creation comes in handy: we can provide these labels as a filter for the replica set. In the replica set definition, under the selector section, we use the matchLabels filter and provide the same label that we used while creating the pods; this way, the replica set knows which pods to monitor.
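Putting the pieces together, the replicaset-definition.yaml described above looks like this (the type label value and the container name are illustrative assumptions; the video names the fields but not all of their values):

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: myapp-replicaset
      labels:
        app: myapp
        type: front-end          # value assumed
    spec:
      replicas: 3                # desired number of pods
      selector:
        matchLabels:
          app: myapp             # must match the labels in the pod template below
      template:                  # the pod definition, minus apiVersion and kind
        metadata:
          name: myapp-pod
          labels:
            app: myapp
        spec:
          containers:
            - name: nginx-container
              image: nginx

    kubectl create -f replicaset-definition.yaml
    kubectl get replicaset
    kubectl get pods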
Let's now look at how we scale the replica set. Say we created the three replicas, and in the future we decide to scale to six. How do we update our replica set to six replicas? There are multiple ways to do it. The first is to update the number of replicas in the definition file to six, then run the kubectl replace command, specifying the same file using the -f parameter; that will update the replica set to have six replicas. The second way is to run the kubectl scale command, using the --replicas parameter to provide the new number of replicas, and specifying either the definition file or the replica set name in the type/name format as input. However, remember that using the file name as input will not update the number of replicas in the file itself: the number of replicas in the definition file will still be three, even though you scaled your replica set to six using the kubectl scale command. The recommended approach is therefore to always create YAML files and then edit the YAML files; that way there is no difference between the actual state of the environment and what's defined in the files. There are also options available for automatically scaling the replica set based on load, but that is an advanced topic for another time.

Let us now review the commands real quick. The kubectl create command, as we know, is used to create a replica set; you must provide the input file using the -f parameter. Use the kubectl get command to see the list of replica sets created. Use the kubectl delete replicaset command, followed by the name of the replica set, to delete it. Then we have the kubectl replace command to replace or update a replica set, and the kubectl scale command to scale a replica set from the command line, without having to modify the file. These commands are summarized below.
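Here are those commands in one place:

    # create and list
    kubectl create -f replicaset-definition.yaml
    kubectl get replicaset

    # scale, option 1: edit replicas in the file, then replace
    kubectl replace -f replicaset-definition.yaml

    # scale, option 2: from the command line (the file itself is NOT updated)
    kubectl scale --replicas=6 -f replicaset-definition.yaml
    kubectl scale --replicas=6 replicaset myapp-replicaset   # type/name format

    # delete
    kubectl delete replicaset myapp-replicaset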
Well, that's all for now, and it's time for a labs activity. Click on the link to go directly to the lab for replica sets; if you haven't enrolled in the labs already, feel free to enroll, it's free of cost. Once you complete the labs, come back here and we will continue.

Let's now talk about deployments. We saw how to deploy an application to Kubernetes by creating a pod, and how to deploy multiple instances using replica sets. But deploying and managing the number of replicas won't cut it when it comes to production use cases. When newer versions of the application are released, you would like to upgrade your application instances seamlessly, and you may want to upgrade them one after the other; that kind of upgrade is known as a rolling update. Suppose one of the upgrades you performed resulted in an unexpected error and you're asked to undo the recent update: you would like to be able to roll back the changes that were recently carried out. And finally, say you would like to make multiple changes to your environment, such as upgrading the underlying web server versions, scaling your environment and modifying resource allocations. You may not want each change to be applied immediately after it is made; instead, you would like to pause your environment, make the changes, and then resume, so that all the changes are rolled out together. All of these capabilities are available with Kubernetes deployments.

So far in this course we discussed pods, which deploy single instances of our application, such as the web application in this case; each container is encapsulated in a pod, and multiple such pods are deployed using replica sets. Then comes the deployment, a Kubernetes object that sits higher in the hierarchy. The deployment provides us with the capability to upgrade the underlying instances seamlessly using rolling updates, to undo changes, and to pause and resume changes to applications running on the cluster.

So how do we create a deployment? As with the previous components, we first create a deployment definition file. The contents are exactly similar to the replica set definition file, except for the kind, which is now going to be Deployment. The apiVersion is the same as for a replica set, apps/v1. If we walk through the contents of the file, it has an apiVersion, the metadata with the name and labels, and a spec that has the template, replicas and selector; the template has a pod definition inside it. It's all the same as before. Once the file is created, run the kubectl create command and specify the deployment definition file, then run the kubectl get deployments command to see the newly created deployment. The deployment automatically creates a new replica set, so if you run the kubectl get replicaset command, you will see a new replica set bearing the name of the deployment. The replica sets ultimately create pods, so if you run the kubectl get pods command, you'll see the pods, with names derived from the deployment and the replica set. So far there hasn't been much of a difference between replica sets and deployments, except for the fact that a new Kubernetes object called a deployment was created; we will see how to take advantage of the deployment, using the use cases we discussed in the previous slide, in the upcoming lectures. To see all the created objects at once, run the kubectl get all command.

Now, once the deployment is created and you have a newer version of the app available, how do you upgrade your application? As before, one way is to update the deployment definition file with the newer version of the image; once that is done, run the kubectl apply command to apply the changes to the cluster. The imperative approach would be to use the kubectl set image command and specify the deployment name and the image name, like this. When you specify the image name, remember to also specify the container name: the format is container name equals image name, where the image name is the new image of the app.
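A sketch of the deployment definition and the commands above; the deployment name, container name and image tags are illustrative assumptions:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deployment
      labels:
        app: myapp
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: nginx-container
              image: nginx:1.7.1    # version tag assumed

    kubectl create -f deployment-definition.yaml
    kubectl get deployments
    kubectl get replicaset
    kubectl get pods
    kubectl get all

    # declarative upgrade: edit the image tag in the file, then
    kubectl apply -f deployment-definition.yaml
    # imperative upgrade: <container name>=<new image>
    kubectl set image deployment/myapp-deployment nginx-container=nginx:1.9.1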
Well, it's time for labs. Click on the link to go to the labs directly, and if you haven't enrolled already, enroll for free. In this lab you will work on creating deployments and deploying applications to a Kubernetes cluster.

Let's now talk about services in Kubernetes. We have two sets of applications deployed on our cluster: a web server and a Redis server. Kubernetes assigns a unique IP address to each pod in the cluster; the web server pod in this case has the IP 10.244.0.2 and the Redis pod has 10.244.0.11. These are the IP addresses assigned to pods in a Kubernetes cluster: an internal network is formed, and all the pods have internal IP addresses with which they can communicate with each other.

In our example, the web server needs to access the Redis service. Here's the code of our web server: what do you think the address should be here to connect to Redis? The Redis pod has an IP address, so can the web server address Redis using that IP? Technically it can, but it shouldn't, because this IP is specific to the pod, and it is bound to change if the pod crashes or restarts for some reason. You don't really want to hard-code the IP address into your code, of course. That's where a service comes in. A service enables communication between applications within a Kubernetes cluster. Think of a service as a proxy, or a load balancer (although technically it is not one in the traditional sense): it provides an endpoint for other services to connect to. In this case, we create a service named redis-db, and now the web application can refer to the service by the name redis-db. Similarly, to expose the web server to external users, you would create another service for it; we will call it the web service. So a service enables connectivity between applications within the cluster, as well as exposing applications outside the cluster to end users.

We'll see how to create a service in a few minutes, but first let's understand the different kinds of services. The first one, which we just discussed, is the ClusterIP service. This is a service within the cluster that is not exposed externally, and it helps different services communicate with each other; this is the example we saw of the web server reaching Redis, so the redis-db service is a ClusterIP type of service. The second is the NodePort kind of service: in this case, the service exposes the application on a port on the node, making it accessible to external users; this is the example where the web service is made accessible to external users through a node port. The third type is the LoadBalancer, which provisions a load balancer for our service on supported cloud providers like Google Cloud, AWS or Azure; a good example would be distributing load across different web servers in those cloud environments. In the scope of this course, we will look at the ClusterIP and NodePort services.

So let's look at the ClusterIP type of service. The service we talked about earlier, where the web server connects to Redis, is a ClusterIP type of service, and it's pretty straightforward. Here we have a Redis pod that needs to be exposed within the cluster for the web application; we do that by creating a service. But we know that pods are usually deployed in replicas, multiple instances, and there could be hundreds of other pods in the cluster, so how can a service identify which pods it should route traffic to? Again, same as before: labels and selectors. The pods have a label with the key name set to redis-pod, and we define the same label as a selector on the service; the service identifies all pods with that label and configures them as its endpoints.

To create such a service, as always, use a definition file. In the service definition file, first use the default template, which has the apiVersion, kind, metadata and spec. The apiVersion is v1, the kind is Service, and we will give our service a name: redis-db. Under spec we have type and ports. The type is ClusterIP; in fact, ClusterIP is the default type, so even if you didn't specify it, it would automatically be assumed to be ClusterIP.
Under ports we have a targetPort and a port. The targetPort is the port where the back end is exposed, which in this case is 6379, and the port is where the service itself is exposed, which is 6379 as well; I'll explain that in a bit more detail in a second. To continue: to link the service to a set of pods, we use a selector. Refer to the pod definition file, copy the labels from it, and paste them under the selector section, and that should be it. We can now create the service using the kubectl create command, and then check its status using the kubectl get services command. The service can be accessed by other pods using the cluster IP shown here, but a better approach is definitely to use the service name itself.

So let's talk about ports. When creating a service, we must specify the port the application running inside the pod is listening on, and that's defined as the targetPort; here we have 6379, the port Redis is listening on in the pod. We also need to specify which port the service itself must serve on, and these could be different: the application could be listening on one port, and the service could be exposed on a completely different port. However, in this case, since any application connecting to Redis expects it to be at 6379, we're just going to stick with the same port. If you look at the code of the web server: to connect to the Redis service, it must use the name of the service as the host, which in this case is redis-db, and use the same port defined as the port on the service, which in this case is 6379.
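Collected into a file, the ClusterIP service described above looks like this (the file name is my assumption):

    apiVersion: v1
    kind: Service
    metadata:
      name: redis-db
    spec:
      type: ClusterIP        # the default; may be omitted
      ports:
        - targetPort: 6379   # port the Redis container listens on in the pod
          port: 6379         # port the service itself is exposed on
      selector:
        name: redis-pod      # copied from the labels on the Redis pods

    kubectl create -f service-definition.yaml
    kubectl get services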
Let's look at what a NodePort service is next. A NodePort is a type of service where a normal service is created first, and it is then exposed to external users through a port on the node. In this case we're looking at the web server, which is something we expect to be available, or exposed, to external users. If you look at this pod, there's a single pod, and the port on the pod where the actual web server is running is 80; it is referred to as the targetPort, because that is where the service forwards requests to. The second port is the port on the service itself, and it is simply referred to as port; remember, these terms are from the viewpoint of the service. And finally, we have the port on the node itself, which we use to access the web server externally, and that is known as the nodePort; as you can see, it is 30008. That is because node ports can only be in a valid range, which is from 30000 to 32767.

So let's take a look at how to create the service. As before, we will use a definition file. The high-level structure of the file remains the same: we have the apiVersion, kind, metadata and spec. The apiVersion is v1 and the kind is Service. The metadata has a name, which will be the name of the service; it can have labels, but we don't need those for now. Next we have spec, and as always, this is the most crucial part of the file, as this is where we define the actual service, and it is the part of a definition file that differs between different objects. In the spec section of a service we have type and ports. The type refers to the type of service we're creating: as discussed before, it could be ClusterIP, NodePort or LoadBalancer; in this case, since we are creating a NodePort, we will set it to NodePort. The next part of the spec is ports, where we input the information we discussed on the left side of the screen. The first type of port is the targetPort, which we set to 80. The next one is simply port, which is the port on the service object, and we will set that to 80 as well. The third is the nodePort, which we will set to 30008, or any number in the valid range. Now remember that, of these, the only mandatory field is port: if you don't provide a targetPort, it is assumed to be the same as port, and if you don't provide a nodePort, a free port in the valid range between 30000 and 32767 is automatically allocated. Also note that ports is an array: note the dash under the ports section indicating the first element in the array; you can have multiple such port mappings within a single service.

So we have all the information in, but something is still missing: there is nothing in the definition file that connects the service to the pod. We have simply specified the targetPort, but we didn't mention the targetPort on which pod, and there could be hundreds of other pods with web services running on port 80.
So how do we do that? As we did previously with replica sets and the ClusterIP service, we're going to use labels and selectors. We have a new property in the spec section called selector; under the selector, we provide a list of labels to identify the pod. For this, refer to the pod definition file we used earlier to create the pod, pull the labels from it, and paste them under the selector section. This links the service to the pod. Once done, we create the service using the kubectl create command, with the service definition file as input, and there you have it: the service is created. To see the created service, run the kubectl get services command; it lists the services, their cluster IPs, and the mapped ports. The type is NodePort, as we created, and we can now use this port to access the web service: with curl if you are within the kubernetes cluster, or a web browser if you're accessing it externally. Now, so far we've talked about a service mapped to a single pod, but that's not always the case. What do you do when you have multiple pods? In a production environment, you have multiple instances of your web application running for high availability and load balancing purposes. In this case we have multiple similar pods running our web application, and they all carry the same labels, so the same label is used as the selector during the creation of the service. When the service is created, it looks for matching pods with those labels and finds three of them. The service then automatically selects all three pods as endpoints to forward the external requests coming from the user; you don't have to do any additional configuration to make this happen. And if you are wondering what algorithm it uses to balance load, it uses a random algorithm. So the service acts as a built-in load balancer to distribute load across the different pods. Now, as you can imagine, at times you may want other algorithms and other settings for how load is balanced, and that's where service meshes like Istio and Linkerd come in. We do have courses on those at KodeKloud if you're interested, but they are outside the scope of this crash course. Finally, let's look at what happens when the pods are distributed across multiple nodes. In this case, we have the web application on pods on separate nodes in the cluster. When we create the service, without any additional configuration from us, kubernetes creates a service that spans all the nodes in the cluster and maps the targetPort to the same node port on all of them. This way, you can access your application using the IP of any node in the cluster with the same port number, which in this case is 30008.
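Putting the pieces together, here is a sketch of the full NodePort definition; the label name: webapp-pod and the service name are assumptions standing in for whatever your pod definition actually uses.

```yaml
# webapp-service.yaml -- a sketch; the selector label is an assumption
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  ports:
    - port: 80             # port on the service (the only mandatory field)
      targetPort: 80       # port the web server listens on in the pod
      nodePort: 30008      # must fall within 30000-32767
  selector:
    name: webapp-pod       # copied from the pod's labels
```

You would then create it with kubectl create -f webapp-service.yaml and test it with curl against http://<node-ip>:30008.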
It's also available on every node, even nodes where the pods are not scheduled: those nodes will still expose the application through the node port, 30008. So, to summarize: in every case, whether it's a single pod on a single node, multiple pods on a single node, or multiple pods on multiple nodes, the service is created in exactly the same way, without any additional steps during service creation. When pods are removed or added, the service is automatically updated, making it highly adaptive, and once created, you typically won't have to make any further configuration changes. So it's time for the labs on services. Click on the link to go directly to the labs, and if you haven't enrolled already, enroll in this course for free. In this lab you will work on creating services for applications within and outside a kubernetes cluster. I'll see you back here once you're done with the labs.

Hello and welcome to this lecture. In this lecture, we will try to understand microservices architecture using a simple web application, and we will then deploy this web application on multiple different kubernetes platforms, such as Google Cloud Platform. I'm going to use a simple application developed by Docker to demonstrate the various features available when running an application stack on Docker. Let's first get familiar with the application, because we will be working with the same application in different sections through the rest of this course. This is a sample voting application, which provides an interface for a user to vote and another interface to show the results. The application consists of several components, such as the voting app, a web application developed in Python that gives the user an interface to choose between two options: a cat and a dog. When you make a selection, the vote is stored in redis; for those of you who are new to redis, it serves as an in-memory database here. The vote is then processed by the worker, an application written in .NET. The worker takes the new vote and updates the persistent database, which is PostgreSQL in our case. PostgreSQL simply has a table with the number of votes for each category, cats and dogs; in this case it increments the number of votes for cats, as our vote was for cats. Finally, the result of the vote is displayed in a web interface, another web application, this one developed in Node.js. This result application reads the count of votes from the PostgreSQL database and displays it to the user. So that is the architecture and data flow of this simple voting application stack. As you can see, this sample application is built from a combination of different services, development tools, and platforms: Python, Node.js, .NET, and so on. It will be used to showcase how easy it is to set up an entire application stack consisting of diverse components in Docker. Let us see how we can put together this application stack on a single Docker engine using docker run commands, assuming that all the application images are already built and available on the Docker repository. Let us start with the data layer. We run the docker run redis command to start an instance of redis, adding the -d parameter to run this container in the background, and we also name the container redis. Now, naming the container is important. Why is that important? Hold that thought; we will come to it in a bit. Next, we
will deploy the PostgreSQL database by running the docker run postgres command. This time too we add the -d option to run it in the background, and we name this container db, for database. Next, we start with the application services. We deploy the front-end app for the voting interface by running an instance of the voting-app image: run the docker run command and name the instance vote. Since this is a web server, it has a web UI running on port 80; we will publish that port to 5000 on the host system so we can access it from a browser. Next, we deploy the results web application that shows the results to the user: we deploy a container using the result-app image and publish port 80 to port 5001 on the host, so we can access the web UI of the result app in a browser. Finally, we deploy the worker by running an instance of the worker image. Okay, now this is all good, and we can see that all the instances are running on the host, but there is a problem: it just does not seem to work. The problem is that we have successfully run all the different containers, but we haven't actually linked them together. That is, we haven't told the voting web application to use this particular redis instance (there could be multiple redis instances running), and we haven't told the worker and the result app to use the particular PostgreSQL database that we ran. So how do we do that? That is where we use links. --link is a command-line option that can be used to link two containers together. For example, the voting-app web service depends on the redis service: when the web server starts, as you can see in this piece of code on the web server, it looks for a redis service running on host redis, but the voting-app container cannot resolve a host by the name redis. To make the voting app aware of the redis service, we add a --link option while running the voting-app container, linking it to the redis container: add --link to the docker run command, specifying the name of the redis container, which in this case is redis, followed by a colon and the name of the host that the voting app is looking for, which is also redis in this case. Remember, this is why we named the container when we ran it the first time: so we could use its name when creating a link. What this actually does is create an entry in the /etc/hosts file on the voting-app container, mapping the hostname redis to the internal IP of the redis container. Similarly, we add a link for the result app to communicate with the database, referring to the database by the name db; as you can see in the source code of the application, it attempts to connect to a postgres database on host db. Finally, the worker application requires access to both redis and the postgres database, so we add two links to the worker: one for redis and one for the postgres database. Note that using links this way is deprecated, and support may be removed from Docker in the future, because, as we will see shortly, newer concepts in Docker swarm and networking support better ways of achieving what we just did with links. But I wanted to mention it anyway, so you learn the concept from the very basics. So we just saw how the voting application works on Docker; let's now see how to deploy it on kubernetes. It's important to have a clear idea of what we are trying to achieve, and to plan accordingly, before we get started.
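For reference, here is a sketch of that docker run sequence; the application image names (voting-app, result-app, worker) are assumptions based on this example, and remember that --link is deprecated in favor of user-defined networks.

```bash
# Data layer
docker run -d --name redis redis
docker run -d --name db postgres

# Voting UI on host port 5000, linked to redis
docker run -d --name vote -p 5000:80 --link redis:redis voting-app

# Results UI on host port 5001, linked to the database
docker run -d --name result -p 5001:80 --link db:db result-app

# The worker needs both redis and the postgres database
docker run -d --name worker --link redis:redis --link db:db worker
```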
We already know how the application works, and it's a good idea to write down what we plan to do. Our goal is to deploy these applications as containers on a kubernetes cluster, then enable connectivity between the containers so the applications can access each other and the databases, and then enable external access for the external-facing applications, the voting app and the result app, so that users can reach them from a web browser. So how do we go about this? We know that we cannot deploy containers directly on kubernetes; we learned that the smallest object we can create on a kubernetes cluster is a pod. So we must first deploy these applications as pods on our kubernetes cluster, or we could deploy them as replica sets or deployments, as we have seen throughout this course. For the sake of simplicity, we will stick to pods in this lecture, and later we will see how easy it is to convert that to deployments. Once the pods are deployed, the next step is to enable connectivity between the services, so it's important to know what the connectivity requirements are; we must be very clear about which application requires access to which services. We know that the redis database is accessed by the voting app and the worker app: the voting app saves the vote to the redis database, and the worker app reads the vote from it. We know that the PostgreSQL database is accessed by the worker app, to update it with the total count of votes, and by the result app, to read the total count of votes to be displayed on the results web page in the browser. We also know that the voting app is accessed by the external users, the voters, and that the result app is accessed by external users too, to view the results. So most of the components are accessed by another component, except for the worker app. Note that the worker app is not accessed by anyone: you can see arrows going into all of the other components, but there are no arrows going into the worker, which means neither the other components nor external users access it. The worker simply reads the count of votes from the redis database and updates the total count of votes in the PostgreSQL database. Now, the voting app has a Python web server that listens on port 80, the result app has a Node.js-based server that also listens on port 80, the redis database has a service listening on port 6379, and the PostgreSQL database has a service listening on port 5432; the worker app has no listening service at all, because it's just a worker and is not accessed by any other service or external users. Keep that in mind. So how do you make one component accessible to another? For example, how do you make the redis database accessible to the voting app? Should the voting app use the IP address of the redis pod? Perhaps not, because the IP of a pod can change if the pod restarts, and you may also run into issues when you try to scale your applications in the future. The right way to do it is to use a service. We learned that a service can be used to expose an application to other applications, or to users for external access. So we will create a service for the redis pod, so that it can be accessed by the voting app and the worker app. We will call it the redis service, and it will be accessible anywhere within the cluster by the name of the service: redis. So why is that name important?
The source code within the voting app and the worker app is hard-coded to point to a redis database running on a host by the name redis, so it's important to name your service redis so that these applications can connect to it. Now, it's not a best practice to hard-code things like this in the source code of an application; you should be using environment variables or something similar instead. But for the sake of simplicity, we will just use this application as it is developed right now. These services are not to be accessed outside the cluster, so they should just be of type ClusterIP. We will follow the same approach and create a service for the PostgreSQL pod, so the PostgreSQL database can be accessed by the worker and the result app. So what should we name the PostgreSQL service? If you look at the source code of the result app and the worker app, you will see that they are looking for a database at the address db, so the service we create for PostgreSQL should be named db. Also note that while connecting to the database, the worker and result apps pass in a username and a password, both of which are set to postgres, so when we deploy the PostgreSQL pod, we must make sure we set these as its initial credentials when creating the database. The next task is to enable external access. For this, as we saw, we can use a service with the type set to NodePort, so we create services for the voting app and the result app and set their type to NodePort. We can decide which ports to make them available on; they will be high ports, with port numbers greater than 30000, and we'll choose them when we create the services. So we're done, and we have the high-level steps ready. To summarize: we will be deploying five pods in total, and we have four services: one for redis and another for postgres, both internal services of type ClusterIP, and then the external-facing services for the voting app and the result app. However, we have no service for the worker pod, because it is not running anything that must be accessed by another application or by external users; it is just a worker process that reads from one database and updates another, so it does not require a service. I'll say that again, because it's a common question I get when we talk about services: why does the worker not require a service? A service is only required if the application has some kind of process, database service, or web service that needs to be exposed, that needs to be accessed by others; that's not the case for the worker app. Now, before we get started with the deployment, note that we will be using the following Docker images for these applications. These images are built from a fork of the original, developed at the dockersamples repository. The image names are kodekloud/examplevotingapp_vote with a tag of v1, and likewise worker and result with a tag of v1, and for the databases we will use the official redis and postgres releases that are available. So that's it for now, and we will see this in action in the upcoming demo. In this demo, we're going to deploy the voting application on our minikube cluster. Here I have created a new project folder called voting-app, and the first thing we are going to do is create the pod definition files for each component of the application.
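As a quick reference before the demo, here is the plan sketched out as comments; the exact file and object names below are assumptions that we will firm up as we build each file.

```yaml
# Plan sketch (names assumed, confirmed in the demo):
# pods (5):   voting-app-pod, result-app-pod, worker-app-pod, redis-pod, postgres-pod
# images:     kodekloud/examplevotingapp_{vote,result,worker}:v1, redis, postgres
# services (4):
#   redis           ClusterIP, port 6379   (name must be "redis" -- hard-coded in the apps)
#   db              ClusterIP, port 5432   (name must be "db")
#   voting-service  NodePort,  port 80     (node port chosen later, > 30000)
#   result-service  NodePort,  port 80     (node port chosen later, > 30000)
# no service for the worker -- nothing connects to it
```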
Let's begin with the voting app itself. We will name this pod definition file voting-app-pod.yaml, and let us build this pod from scratch. We begin with the apiVersion, set to v1, and the kind, which is Pod. The metadata section has the name, which will be voting-app-pod, and let's add a couple of labels: the first label is name, which can be the same as the name of the pod, voting-app-pod, and the second label is the name of the application, demo-voting-app. We will use that app label for all the components of this application stack; that way, we can group the components of a single application together by assigning the same label to all of them, while there is still a different name label on each component to differentiate them from one another. Next, let's add the spec section. The first thing we add is the name of the container, voting-app, and for the image we will make use of the custom image that we built from the dockersamples voting app git repository, which is here; we will use the custom images built under the kodekloud Docker Hub repository. The name of the image is kodekloud/examplevotingapp_vote, with a tag of v1. We will also specify the port for the voting application as a containerPort property; this should be the port on which the application listens, and for the voting app we know that's port 80, so we set it to 80. Next, let's create the pod definition file for the result app. I'm going to create a new file called result-app-pod.yaml, and because this is a pod definition file like before, we can simply copy the template from the voting-app file we just created and then change the name, labels, and image. Let's change the name to result-app-pod, make the same change to the name label (the app label remains the same, as all of these are part of the same app), change the container name to result-app, and change the image to kodekloud/examplevotingapp_result, with the same tag of v1. The result application is also exposed on container port 80, so we will leave that as is.
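Here is a sketch of the voting-app pod file as just described; everything in it comes straight from this walkthrough.

```yaml
# voting-app-pod.yaml -- a sketch of the pod described above
apiVersion: v1
kind: Pod
metadata:
  name: voting-app-pod
  labels:
    name: voting-app-pod
    app: demo-voting-app      # shared by every component in this stack
spec:
  containers:
    - name: voting-app
      image: kodekloud/examplevotingapp_vote:v1
      ports:
        - containerPort: 80   # the Python web server listens here
```

The result-app-pod.yaml file is the same shape, with the name, labels, and image swapped out.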
Next, let's create the pod definition file for the redis pod. I'm going to name it redis-pod.yaml, and again use the previous pod as the template. We change the name of the pod to redis-pod, use the same value for the name label, name the container redis, and set the image to redis as well. The container port changes from 80 to 6379, because that is the default port for the redis image. Let's save this, and now let's create another pod, this time for our database. We'll name it postgres-pod.yaml, and as before, we copy one of the pod definition files and change the name, so this one will be postgres-pod, and the same for the label. The name of the container will be postgres, and for the image we can use the postgres image itself without any tag, which means it will use the latest postgres image. The container port for the PostgreSQL database is 5432 by default, so we add that in. We also have to add a couple of environment variables here, to specify the postgres username and password for the database. As we saw in the previous lecture, the source code of the worker and result pods has the password for the PostgreSQL database hard-coded in it, so we must specify the initial password for the database here as environment variables. Now, a better way to do this would be to use Secrets, or some kind of vault, to pass in these credentials, rather than having them in plain text in a file, but those are out of scope for this course; we discuss environment variables, services, Secrets, and related concepts in much more detail in our advanced kubernetes courses, the CKA and CKAD courses. For this example to work, we have to make sure we specify these two environment variables in the postgres pod definition file. For this we use an env section, which is a list of dictionaries, with an environment variable name and value in each entry: POSTGRES_USER for the username and POSTGRES_PASSWORD for the password, both in all caps, and the value for each will just be postgres for now. Again, just to reiterate: we are adding these values because the worker pod and the result pod use these credentials when connecting to the database, and if you don't configure this, the worker will not be able to connect, and as a result the total vote count may not add up. So if you run into issues with the vote count not updating, or you are unable to view the results, this is a good area to check. We have now created four pods: the postgres pod, the redis pod, and the two front-end application pods, the result app and the voting app. The last one is the worker pod, so let's create a new file, worker-app-pod.yaml, and copy-paste a definition file into it. In here, let's change the name to worker-app-pod, and the same for the labels; the container name will be worker-app, and the image is worker instead of vote. One important change is that we must remove the ports section: as we discussed, the worker app has no listening service, so no port definition is required, and we can delete that entire section. So we now have five pod definition files, one for each of our microservices.
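Here is a sketch of the postgres pod file with the env section described above; as noted, in a real setup these credentials belong in a Secret, not in plain text.

```yaml
# postgres-pod.yaml -- a sketch; credentials shown in plain text only
# for this demo, a Secret would be the proper home for them
apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  labels:
    name: postgres-pod
    app: demo-voting-app
spec:
  containers:
    - name: postgres
      image: postgres          # no tag, so the latest image is pulled
      ports:
        - containerPort: 5432
      env:                     # initial credentials the worker/result apps expect
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
```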
Next, we will create services to expose these pods, all except the worker, so let's get that going, starting with the service definition files. We begin with one of the internal services, redis, and call the file redis-service.yaml. We start with the apiVersion, which is v1, and the kind, which is Service. Let's add the metadata: the name of the service will be redis itself, and remember that this name is important. We'll add a couple of labels: name set to redis-service, and the second one, the label we have been using for all the other objects, app set to demo-voting-app. Next we add the spec section, and within it the ports: for redis we know the port to be used is 6379, and the targetPort is also going to be 6379. We don't need to specify anything else, like a nodePort, because this is going to be an internal service. Now let us add the selector: to link the service to the pod, we must specify the same labels configured on the pod, so we copy the labels from the pod definition file and paste them under the selector section. Since this is an internal service, we are not exposing it externally on the network, so that should be good; the file is now complete, so let's save it. Next, we proceed with the postgres service. We follow the same approach, naming the file postgres-service.yaml, and the easiest way to create the service now is to copy the contents of the redis service file, paste them here, and make the appropriate changes. If you remember the architecture from the lecture, the name of the postgres service must be db, because the worker app expects the postgres database at the name db; if you name it anything else, you'll find that the connection fails. Now we change the labels; these labels could really be anything, so it doesn't matter much. Let's set the name label to postgres-service, though we could even call it db-service. We change the port to 5432, because that's the port on which the postgres database runs, and the targetPort will be 5432 as well. And let's make sure we copy the labels from the pod definition file: we copy the name label, which is set to postgres-pod, deleting the older selector labels and replacing them with the new ones. Now we're done with the two internal services; let's move on to the external-facing ones, the voting service and the result service. Starting with the voting app service, create a new file called voting-app-service.yaml, copy the contents of the other file, and paste them here. Again, change the name to voting-service and change the label as well; we know this is a front-end application running on port 80, so set that port number as the service port and as the targetPort, and as before, copy the labels from the pod definition file into the selector. The final step is the result service: call the file result-app-service.yaml, copy the voting app service definition into it, change the name to result, keep everything else the same, and update the selector section with the labels of the result-app pod.
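Here is a sketch of the postgres service file as described; the one non-negotiable detail is the service name db, since the worker and result apps look for the database at that host name.

```yaml
# postgres-service.yaml -- a sketch; the service MUST be named db,
# because the worker and result apps connect to host "db"
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    name: postgres-service
    app: demo-voting-app
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    name: postgres-pod        # matches the label on the postgres pod
```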
Now, we have actually created the voting app and result app services as internal services, just like the others, but they are supposed to be externally accessible, so we must set their type to NodePort; since we have not specified any type, they would be treated as ClusterIP. So let's do that now. For each of the two services, we set the type to NodePort, and each also requires an additional port specification for the node port: we set that to 30004 for the voting app. Then we update the result service the same way: set the type to NodePort and add a node port of 30005 for the result app service. Okay, so we are done with all the files: we have completed the creation of all the pod and service YAML definition files, and we will now proceed with creating these objects, and then try to access the application in a web browser. So we switch to the terminal of our system; we are in the voting-app directory, where we created all the pod and service definition files, so we can start creating these objects. First, let's check whether there are any pods, deployments, or services running on the server; we see there is nothing except the default kubernetes service. So let's start with the pod and the service for the voting application. We'll go one by one, testing to make sure each piece works as expected before proceeding further. To create the pod, we use the kubectl create command and specify the pod definition file, and similarly we create the service using the service definition file for the voting app. Let us now inspect the status of the pod and the service. If we want to see both in a single command, we can run kubectl get pods,svc, with the two object types separated by a comma, and it shows both objects. We can see that the service for the voting app is created and is of type NodePort, and the pod is also created and in a running state. Before we proceed further, let's test that this bit is working. We can simply access the voting app service using a URL formed from the IP of the minikube node: if you know the IP, just combine it with the port number of the service, 30004, and view it in a browser; or, if you're not sure of the IP, run the minikube service command with the name of the service and the --url option, and it will give you the URL to use. So we copy the URL and try to access it in the local browser on my system. Here I am at the local browser, and as you can see, we are now able to load the voting application, so that's one step complete. Let's not try to cast any vote yet, as we don't have the databases ready. Now let's go ahead and create the remaining objects, the pods and services. Back in the terminal, the next pod and service to create is the redis pod, so we run the kubectl create command with the redis pod definition file and then the service file, and as before, we run the kubectl get pods,svc command. As you can see, the redis pod and service are created; the service is of type ClusterIP, because it is an internal service.
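Here is a sketch of the finished voting app service with the NodePort additions; the result app service looks the same with name result, label changes, and nodePort 30005.

```yaml
# voting-app-service.yaml -- a sketch of the external-facing service
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: demo-voting-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30004       # any free port in the 30000-32767 range works
  selector:
    name: voting-app-pod
```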
Let's create the postgres database next. I create the postgres pod with the postgres pod definition file, along with the service definition file for its service. Again we check the status, and we can see that the postgres pod is in a running state, similar to the redis one, and that the postgres service is up as well, with its name set to db. Now that both the redis and postgres pods and services are up and running, we can create the worker pod; for this we use the kubectl create command with the worker pod definition file. Let's check the status of the pod, and we can see that the worker pod is also now in a running state. Finally, let us create the pod and service for our result application: we do the same as before, running the kubectl create command for the result app pod, and then creating the result app service as well. Okay, now let's check the status of all the pods and services: you can see that all five of our pods are up and running, and we have two NodePort services, one for our result service and one for our voting service; the other two services we created, the redis and database services, are internal only. We've already accessed the voting application; let's also generate the URL for our result service. For that, I use the same command as before: the minikube service voting-service command gave us the URL to access our voting app, so let's change the name here to get the URL of the result app. Okay, we now have both URLs, so let's go back to the browser. Here we have the voting application, running on port 30004; let's copy and paste the new URL, which is going to be on port 30005. Let's try to cast a vote: I'm going to click on the dogs option, and you can see there's a check mark against the vote we selected, indicating that our vote has been recorded, that is, saved in the redis database. You can also see below that this particular page is being served by the voting-app pod. If we now go to the results page, you can see that dogs have 100 percent of the votes, because in this case we have just the one vote, and it was for dogs. I can also change that vote if I want: I go back and click on cats, and I can see that the result has changed to cats. So that's our demo. We have successfully deployed a multi-tier application on a kubernetes cluster and confirmed that it's working: the data actually flows from one end all the way through, into the redis database, through the worker pod, into the PostgreSQL database, and up to the result pod, so everything is working as expected. Okay, so that is what we saw in the last demo: we deployed pods and services, to keep things really simple, and we were able to access our application from a browser. But deploying applications as plain pods has its own challenges. Pods alone don't help us scale our application easily if we want to add more instances of a particular service, and if you want to update the application, for example the image used in it, then your application has to be taken down while the new pod is created, which may result in downtime. So the right approach is to use deployments to deploy an application. Let us now improve our setup using deployments.
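Here is a sketch of the full demo sequence in one place; the file names match the ones we created above, but adjust them if yours differ.

```bash
# Create each pod and its service, checking status as you go
kubectl create -f voting-app-pod.yaml
kubectl create -f voting-app-service.yaml
kubectl get pods,svc                   # pod Running, service of type NodePort

kubectl create -f redis-pod.yaml
kubectl create -f redis-service.yaml
kubectl create -f postgres-pod.yaml
kubectl create -f postgres-service.yaml
kubectl create -f worker-app-pod.yaml  # no service for the worker
kubectl create -f result-app-pod.yaml
kubectl create -f result-app-service.yaml

# Print browsable URLs for the two NodePort services
minikube service voting-service --url
minikube service result-service --url
```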
We choose deployments over replica sets because deployments automatically create replica sets as required, and they help us perform rolling updates and rollbacks, maintain a record of revisions, and record the cause of each change, as we have seen in previous demos. So deployments are the way to go. We can add more pods if required for the front-end applications, the voting app and the result app, by creating a deployment and setting the replicas to three; we will initially start with just one replica for each component, and later we'll see how easy it is to scale them to three or more. We will also encapsulate the databases and the worker application within deployments. So let's take a look at that now. Here I am in Visual Studio Code, in the project directory that has all the pod and service definition files. Let's create new files for the deployments, starting with the voting app itself. I'll name this file voting-app-deploy.yaml, and I'm going to use the split-screen function so I can keep the pod and deployment definition files open side by side; we'll build the deployment file for the voting application using the pod definition file as the template. We start with the apiVersion, which for deployments is apps/v1, and the kind, which is Deployment, and then add the metadata: the name of the deployment will be voting-app-deploy, and we add some labels. Next we add the spec section, where the editor has already pre-populated a couple of entries for us. We should specify the number of replicas, and for all our pods we're going to stick to one replica to begin with, since we are on a single-node cluster and want to save some resources. Under the selector section, I add the labels from the pod: we use the matchLabels option and copy the labels over from the pod definition file. Then, under the template section, I copy everything from the metadata to the end of the pod file, paste it under template, and fix the formatting. All right, that looks good, so let's proceed with the next deployment, the redis deployment. Again I open the redis pod definition file, and just like before, we create a new file called redis-deploy.yaml and copy the contents of the voting app deployment file to get started. We change the name of the deployment to redis-deploy, and the same for the labels; we stick to one replica, copy the labels from the redis pod definition file, then copy the template over from the pod file, paste it, and fix the formatting. With that done, we proceed with the PostgreSQL deployment: copy the redis deployment file, open the postgres pod definition file for reference, change the names and labels, update the selector labels with the ones on the pod, move the pod definition under the template section, and fix the formatting. Okay, that looks all right. Next, the worker: we close this, create a worker app deployment file, copy and paste the template, update the name, copy the labels from the pod definition file into the selector section, and as before, copy the pod manifest and paste it under the template section. So that's the deployment for the worker, and now we are left with the one for the result app.
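Here is a sketch of the voting app deployment built exactly this way, with the pod definition from earlier moved under the template section.

```yaml
# voting-app-deploy.yaml -- a sketch built from the pod definition above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-deploy
  labels:
    name: voting-app-deploy
    app: demo-voting-app
spec:
  replicas: 1                    # start small on a single-node cluster
  selector:
    matchLabels:
      name: voting-app-pod       # must match the pod template's labels
  template:                      # the pod definition, moved in here
    metadata:
      name: voting-app-pod
      labels:
        name: voting-app-pod
        app: demo-voting-app
    spec:
      containers:
        - name: voting-app
          image: kodekloud/examplevotingapp_vote:v1
          ports:
            - containerPort: 80
```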
So again, we close these two, and here is our result app. We create a new file, result-app-deploy.yaml, copy and paste the template once more, and update the name, labels, and template as we did before. We are now done with all our deployment definition files, so I'll close all of this; here are all the new deployment files we created. Now let's head back to the terminal and create these deployments along with the services. Before we do that, let's make sure there are no pods or services running in the cluster; we have cleaned up everything created for the previous demonstration, so there are no pods or services other than the default kubernetes one. Let's also refresh and confirm that all our deployment files are in place: here we have the five new deployments we created, and the services remain the same. We first start with the voting app deployment, creating it with the kubectl create command and the voting app deployment file as input, and we also create the service. A quick check on the deployment shows that it's running. Next we create the redis deployment, followed by the redis service, and similarly the postgres deployment and the postgres service. Let's make sure everything is up: all the pods are up, the deployments show one out of one pods ready, which means the pods are up and running, and checking the services, we have the db service, the redis service, and the voting app service created so far. Now let's clear the screen and create the worker deployment; remember, the worker does not have a service. We check that everything is running as expected, and we see that the worker pod of the deployment is running as well. Finally, we create the result app deployment and the result app service, and everything is running as expected. Let's change the command to get deployments,svc: we have all five deployments in a running state, and we have four services. Now let's get the URLs for our two front-end services: we use minikube service with the name of the service and the --url flag, and the same for the result service, and we get the URLs with the ports 30004 and 30005. I launch the web browser and access these applications: the first URL is the voting app itself, so let's cast a vote, and in another window we go to port 30005, where you can see the result shown as expected. Now I'm going to scale up the deployment: run the kubectl scale command and specify three replicas, to add two more replicas for the voting application. When we run the get deployments command, we see that there are now three pods for the voting app, and if we go to the URL and refresh the page, we see that it is served by a different pod each time. So you can see how easy it is to scale our applications with deployments. Well, that's it for this demo; I will see you in the next one.

So here we are at the end of this crash course. I hope you enjoyed the material. We've covered the basics of containers, kubernetes, pods, replica sets, deployments, and services in this course, and we also deployed an example voting application. However, that's only the tip of the iceberg; there is a lot more when it comes to learning kubernetes.
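Here is a sketch of the deployment demo commands; the deployment name voting-app-deploy matches the file we built above, so adjust if yours differs.

```bash
# Replace each pod with its deployment, keeping the same services
kubectl create -f voting-app-deploy.yaml
kubectl create -f voting-app-service.yaml
# ... repeat for redis, postgres, worker, and result-app ...

kubectl get deployments,svc            # five deployments, four services

# Add two more voting-app replicas; refreshing the page now shows
# responses served by different pods
kubectl scale deployment voting-app-deploy --replicas=3
kubectl get deployments
```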
There are many different aspects: ways of provisioning a cluster, administering a cluster, maintaining and upgrading a cluster, logging and monitoring, security, backups, storage, networking, autoscaling, designing a cluster for hosting production-grade applications, and much more. All of these are covered in our kubernetes learning path at KodeKloud, which includes three certification courses: the CKAD, the Certified Kubernetes Application Developer course, for application developers getting familiar with building applications to be deployed on kubernetes; the CKA, the Certified Kubernetes Administrator course, for sysadmins responsible for managing clusters; and the CKS, the Certified Kubernetes Security Specialist course, for security engineers. We also cover other tools in the cloud native ecosystem that work with kubernetes, like Helm, Kustomize, service meshes such as Istio and Linkerd, and GitOps tools like Argo CD. At KodeKloud we also cover a wider spectrum of content, such as Linux, where we are the only learning platform that teaches Linux by doing, using our hands-on labs. And of course, don't forget to subscribe to our channel, as we release new videos about cloud native and kubernetes all the time.

Hello and welcome to this lecture on setting up kubernetes. In this lecture, we will look at the various options available for building a kubernetes cluster. There are many ways to set up kubernetes: we can set it up ourselves locally, on our laptops or virtual machines, using solutions like minikube or MicroK8s; these are solutions for developers, or for those who just want to play around and learn kubernetes. The kubeadm tool is used to bootstrap and manage production-grade kubernetes clusters. There are also hosted solutions for setting up kubernetes in a cloud environment, such as GCP, AWS, Azure, IBM Cloud, and many others; we also have a demo on provisioning a kubernetes cluster on GCP. And of course, these are just a few among the many options available to deploy a kubernetes cluster. You may follow any of these approaches, but to go through this course you don't really need to set one up: as part of this course, we give you a real kubernetes cluster that you can access right in your browser with the click of a button, without having to set anything up, and we have guided challenges and fun hands-on lab exercises that will get you familiar with kubernetes in no time. In this section of the course, we will just start with one of these options; the remaining examples are in the appendix section at the end of this course. We will start with the minikube option, which is the easiest way to get started with kubernetes on a local system. If minikube is not of interest to you and you just want to rely on the online labs, now would be a good time to skip this lecture. Before we head into the demo, it's good to understand how it works. Earlier we talked about the different components of kubernetes that make up the master and worker nodes, such as the API server, the etcd key-value store, the controllers, and the scheduler on the master, and the kubelets and container runtime on the worker nodes. It would take a lot of time and effort to set up and install all of these components on different systems individually by ourselves. Minikube bundles all of these components into a single image, providing us a pre-configured single-node kubernetes cluster, so we can get started in a matter of minutes. The whole bundle is packaged into an ISO
image and is available online for download. Now, you don't have to do that yourself: minikube provides an executable command-line utility that automatically downloads the ISO and deploys it on a virtualization platform such as Oracle VirtualBox or VMware Fusion. So you must have a hypervisor installed on your system; for Windows you could use VirtualBox or Hyper-V, and for Linux you could use VirtualBox or KVM. Finally, to interact with the kubernetes cluster, you must also have the kubectl kubernetes command-line tool installed on your machine. So you need three things to get this working: a hypervisor, the kubectl utility, and the minikube executable installed on your system. In this demo, we're going to install a basic kubernetes cluster using the minikube utility. As part of this beginner's course, to keep things simple and easy, we will stick to minikube as our lab solution; we explore additional options for provisioning a kubernetes cluster using the kubeadm tool in the CKA course. For this course, we just want to stick to the very basics, and all the basic operations can be performed on a minikube cluster. We start at the kubernetes.io page. Within this website, click on the documentation section and navigate to the tasks and install tools section. Before installing minikube, we must install the kubectl utility; it may be pronounced kube control, or kube cuddle, or kube C-T-L, whatever you prefer, and you might hear me mix it up at times, so bear with me on that. The kubectl command-line tool is what we will use to manage our kubernetes resources and our cluster after it is set up with minikube. Installing kubectl before minikube allows minikube to configure kubectl to work with the cluster when it provisions it. The kubectl utility can work with multiple clusters, local or remote, at the same time; there's a small configuration for that, and minikube takes care of it automatically when it provisions a kubernetes cluster, but only if you already have kubectl installed, so that's important. Our goal is to set up a cluster on our local machine. I'm on a Linux Ubuntu system, but the same procedure also works on Windows or Mac; all the demos and tools throughout this course work on all operating systems, Linux, Windows, or Mac. You just need to follow the installation procedure for your OS. To start, I'm going to install the kubectl utility on my Linux system, going with the latest version, so just copy and paste the command provided in the documentation for downloading the kubectl binary. Once the binary has been downloaded, the next step is to make it executable, so I use the chmod +x command, and finally we move it to a location within the PATH, /usr/local/bin. This way, I can run the kubectl command from anywhere on my system. Let's run the kubectl version command, and you can see that it has installed version 1.18. Now, what we just saw is one way of installing the kubectl utility; there are others. You can install it with a package manager, depending on the OS distribution you are on, and the documentation for those options is available here. If you scroll down, you'll see instructions for installation on macOS, and there should be one for Windows as well.
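Here is a sketch of those Linux install steps; check kubernetes.io for the current download URL, as it changes between releases and may differ from what was shown in this recording.

```bash
# Download the latest stable kubectl binary (URL per the current docs)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl                      # make the binary executable
sudo mv kubectl /usr/local/bin/       # place it somewhere on the PATH
kubectl version --client              # verify the installation
```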
Make sure you use the appropriate link and set up kubectl based on the documentation provided. Now that we have completed the installation of the kubectl utility, we can proceed with the installation of minikube. The first thing to check, and this goes for all operating systems, Linux, Windows, or Mac, is that virtualization is enabled on the laptop or desktop where you're setting up this lab. One easy way to check on Linux is to grep for the vmx or svm keywords in the /proc/cpuinfo file, so that's what I'm going to do now. As long as this command shows output, such as the flags listed here, virtualization is enabled and you don't have to enable it from the BIOS. If it's not enabled, check your machine's BIOS settings: restart your laptop, go into the BIOS, and there should be an option to enable virtualization. You may have to check your laptop's manual to see how that's done, or just search online for your laptop model and how to enable virtualization on it. Again, check the documentation for the operating system you are on; for each, there are specific commands to test whether virtualization is enabled. Next, we install minikube, and again we go with the option for Linux. The first prerequisite is kubectl, which we have already installed. The next is a hypervisor; for Linux we can use either KVM or VirtualBox, and we will go with VirtualBox, as that is our preferred virtualization solution. You can also run minikube without a hypervisor, directly on your host using Docker; so if you already have Docker installed, you could leverage that and have minikube provision a kubernetes cluster inside a Docker container. However, note that, as you can see on the documentation page, there's a warning that this can result in security or data-loss issues, so we will stick with the virtual-machine-based approach for now. I prefer VirtualBox because, in case you mess something up on your system and need to start over, it's easy to get rid of the VM and begin again; it won't really mess up your laptop, and you can take snapshots before making a major change and restore them if the change doesn't work out. VirtualBox is supported on a variety of operating systems, including Linux, Windows, and OS X. I open this in a new window, which takes me to the download section, and here I select the Linux distribution that is most appropriate for my system; you may choose the one appropriate for yours. Wait for the download to complete, then install VirtualBox; it has downloaded the Debian package, so I'm just going to install it directly. While it installs, let's go back to the documentation section. Now the installation has completed, so I close this and launch VirtualBox. As you can see, this is what the VirtualBox interface looks like, and you will have a similar interface on Windows or Mac, with minor differences that should not matter. Right now we don't have any virtual machines running; when we provision a cluster using minikube, it will automatically create a virtual machine as required.
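Here is a sketch of the virtualization check and the minikube download described in this section, based on the documented Linux procedure.

```bash
# Check that hardware virtualization is enabled (Linux); any output means yes
grep -E --color 'vmx|svm' /proc/cpuinfo

# Download and install the minikube binary into /usr/local/bin
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```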
So apart from installing VirtualBox, you don't really have to do anything with it directly. Now let's proceed with the installation. The next step is to install the minikube utility itself, and again there are different ways to do this: either use the package manager and install it as a package, or use the direct-download approach. We're going to download the latest version, and just like we did with kubectl, I'm going to curl the package and then install it on my machine. I copy the whole thing; this downloads the minikube binary and sets the execute bit so we can run it as a command. Once that is done, we add minikube to our PATH: the /usr/local/bin directory already exists, so we can skip that first step, and we run this command to install minikube at /usr/local/bin. Next, we provision a kubernetes cluster using the minikube utility by running the minikube start command, but we also have to specify the driver name with this command. Minikube can work with different virtualization tools, which is why you must specify which driver to use; in our case that is VirtualBox, so let's open this link and make sure we use the correct driver name. The name of the VirtualBox driver is virtualbox, so we'll use that: I copy and paste the command up to the driver name, then copy the driver name from the page and paste it in. We now execute the command. When it starts, you'll notice it follows a process: it is in fact downloading the ISO image for minikube, which is the image that will be used to provision a VM on VirtualBox, and we then see it downloading kubernetes version 1.18.3 and any other required binaries. Let me switch to the VirtualBox UI, where we can see that a virtual machine named minikube has been created and is in a running state; you can see that the VM uses two CPUs and 2 GB of RAM. Let's wait for the setup to complete. Okay, it is now installed, and the kubectl utility is configured to use the kubernetes cluster provisioned by minikube. Let's head back to the documentation page; the next thing we do is run the minikube status command to ensure everything has been set up correctly. I clear the screen and run minikube status, and we can see that the minikube control plane, kubelet, API server, and kubeconfig are all in a running and configured state, which is good. If you run into issues with the installation at any time, feel free to run this command and check the status. Our cluster is now set up; we will deploy some applications on it to make sure it's working as expected. We will get into the concepts of deploying an application in the upcoming lectures; right now we just want to confirm that the cluster we set up works, so we will simply follow the tutorial given on this page. It may not make total sense now, but I assure you we will get to it in a bit. Click on the link under "what's next", and here we have some examples that can be used to test our setup. On the new page, you can skip the first step of starting a minikube cluster, as we have already done that. The next thing we need to check is whether kubectl commands are working.
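Condensed, the two commands from this step look like this:

```bash
# Start a single-node cluster in a VirtualBox VM, then verify it
minikube start --driver=virtualbox
minikube status   # host, kubelet, apiserver, kubeconfig should be Running/Configured
```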
I'm going to run the kubectl get nodes command, and you can see that it is a single-node cluster: the node name is minikube, it is in a Ready state, it was spun up about eight seconds ago, and it's running the latest release of kubernetes, 1.18 as of this recording. Next, let us try to create a deployment on this cluster. Here we have an example on this page: I run the kubectl create deployment command to create the deployment, and once that is done, we run the kubectl get deployments command, and you can see that the hello-minikube deployment has been running for 22 seconds. Next, we expose this deployment as a service; for that, we make use of the command given here, kubectl expose deployment hello-minikube. Don't worry about this command for now; we'll talk about it in much more detail later in this course. For now, we just copy and paste, and then we skip to step 5, where we get the URL of the exposed service by running this command. Copy the URL and paste it into a browser on your laptop, and it should list the details of the application like this. Okay, so it's not the most exciting application, but it is proof that your setup is working, and that's all we need for now. Finally, follow the remaining instructions to clean up your system: delete the service and delete the deployment. The deployment will be in a terminating state for a few seconds, and after it's done, the application will no longer be accessible on the web page.
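Here is a sketch of the smoke test from that tutorial; the echoserver image and port match the docs from the 1.18 era shown in the recording and may differ in the current documentation.

```bash
# Verify the cluster, deploy a test app, and expose it
kubectl get nodes
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
kubectl get deployments
kubectl expose deployment hello-minikube --type=NodePort --port=8080
minikube service hello-minikube --url   # open the printed URL in a browser

# Clean up afterwards
kubectl delete service hello-minikube
kubectl delete deployment hello-minikube
```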
Info
Channel: KodeKloud
Views: 866,477
Keywords: docker, kubernetes, kubernetes architecture, kubernetes basics, kubernetes certification, kubernetes containers, kubernetes crash course, kubernetes deployment, kubernetes explained, kubernetes for beginners, kubernetes management, kubernetes online course, kubernetes orchestration, kubernetes pods, kubernetes setup, kubernetes training, kubernetes tutorial, kubernetes tutorial for beginners, learn kubernetes
Id: XuSQU5Grv1g
Length: 130min 0sec (7800 seconds)
Published: Fri Mar 31 2023