Deploying Kubernetes to AWS Cloud: AWS EKS 101

Captions
What's up, notJust developers! Welcome back to a new live stream, and welcome to the AWS EKS 101 workshop. Today we're going to learn how to deploy and run a Kubernetes stack on AWS's managed service EKS, which stands for Elastic Kubernetes Service. Kubernetes is a great way of running containers at scale, and when you power it up with some AWS services through EKS, you get a really powerful, fault-tolerant, highly available, and scalable infrastructure for running your application. Kubernetes is also great when you're using a microservices architecture, where different teams in your organization work on different parts of the application and all of those parts are independent applications. That's exactly what we're going to do today: we're going to deploy and run a web store application, and this is going to be a very hands-on workshop where we do everything together.

The application is made up of multiple microservices. On top we have a user interface, an application that serves the front end of our web shop, and below it are the backend services. All of them are microservices, and all of them are built with different technologies, but for us that doesn't really matter, because we work with images rather than with the low-level programming languages. Here we see the orders, checkout, carts, catalog, and assets microservices, and they also use different databases. Everything here is what we're going to deploy and run today using AWS EKS.

I want to mention that this is a very beginner-friendly tutorial, and I'm pretty sure most of you will be able to follow along; the only thing you need is a browser. Last week I helped my grandma deploy her Kubernetes stack, and if she could do it, I'm pretty sure you can do it as well. All you need is an AWS account; everything else we're going to do together. So let's run the intro, because we have a lot of things to do today. Let's get to it.

All right. To get the most out of this tutorial I always encourage you to follow along and try to implement it yourself, especially this time, because we're going to follow a workshop from AWS together. AWS put together a really easy-to-follow and practical workshop called the EKS Workshop, which you can find at www.eksworkshop.com. I'll go into more detail about what to expect in this video, but first let's do the very first step, because it involves waiting around five minutes; then I'll explain what we actually did.

Let's open that link (I'll paste it in our chat as well), go to the Introduction, and then to the Setup. We're going to use our own AWS account for this workshop, so open the "In your AWS account" tab and scroll down to the part where we need to open CloudShell in the AWS console. You can either click the button there, or go to the AWS console and log in. Make sure you're using the region you want to deploy to; in my case that's eu-west-1, in Ireland. Then look for CloudShell: it's a simple terminal interface that lets us run commands against our AWS account. Following the workshop documentation, let's scroll down a bit until we get to this part.
Once CloudShell has loaded, we have to run the following commands. I'll explain in a moment what everything here does and what the plan is, but for now copy them by pressing "copy all commands" (there are actually two of them) and paste them into CloudShell. The first one downloads a CloudFormation template for initializing our environment, and the second one uses that template to actually set up the environment.

Now we have to wait around five minutes for the environment to become available; I'll arrange the window so you can see it as well. Let me quickly explain what's happening, but first of all: hello to everyone joining us live, how are you doing? Today we're doing something different, something new.

So what's happening now? The first step sets up our IDE, the environment in which we'll execute the workshop, and it's powered by the AWS Cloud9 service. Cloud9 is a cloud-based IDE, which is really nice because you don't have to install anything on your system; everything happens inside this cloud IDE. It lets us run code, and it comes with some of the AWS and EKS tools pre-installed, which makes a lot of things easier. So that's the interface we'll use, just through the browser, and right now we're waiting for the Cloud9 environment to be set up.

A warning here: provisioning this workshop environment in your AWS account will create resources, and there will be some cost associated with them. At the end we'll cover the cleanup, and I'll show you how to delete all the resources we create during the workshop so you don't keep getting charged. For the couple of hours you'll be using it, some of the usage may fall under the free tier, but not all of it, so expect some charges.

Actually, when it comes to managing costs on AWS, that's often a pain, because it's really hard to monitor the cost of your infrastructure. That's why for this video we partnered with Cast AI, a tool that lets you monitor and optimize your Kubernetes stack on AWS. They say they can cut your Kubernetes costs in half, and we're going to put that to the test by the end of the tutorial. We'll discover more about Cast AI later, when we integrate it to monitor and optimize the cost of the infrastructure we deploy today. Cast AI is sponsoring this video, thank you very much, and we'll see how we can benefit from those cost savings.

Before we move to the next step we have to check whether our stack has completed; this usually takes around five minutes. If you want to see more of what's happening behind the scenes (as I said, this is a CloudFormation stack), go to the AWS console in a new tab and look for CloudFormation, which is AWS's way of deploying applications and infrastructure described in a YAML template. The latest stack says CREATE_COMPLETE, so it should be finished, but in my terminal it's not complete yet, so I probably still have to wait a bit.
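While we wait: for reference, those two bootstrap commands have roughly this shape. The exact template URL, stack name, and parameters come from the workshop page, so treat this as a sketch rather than the literal commands:

    # download the CloudFormation template for the workshop IDE (URL is illustrative)
    curl -fsSL https://example.com/eks-workshop-ide-cfn.yaml -o eks-workshop-ide-cfn.yaml

    # deploy it; this provisions the Cloud9 environment and the IAM role it needs
    aws cloudformation deploy \
      --stack-name eks-workshop-ide \
      --template-file ./eks-workshop-ide-cfn.yaml \
      --capabilities CAPABILITY_NAMED_IAM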
The second stack, eks-workshop-ide, is still in progress. If you open it up you can see more information about the stack; it's very zoomed in for me, but have a look at, for example, the Resources tab to see what's actually happening. If you're new to AWS this may not mean much to you yet, but the most important thing, I think, is in that second stack: among the resources there is an EC2 instance. An EC2 instance is a virtual private server on AWS, and you can view them in the EC2 console; these are the virtual servers on AWS. We'll use EC2 more in this tutorial, not only for our cloud environment but also as the workers for our Kubernetes stack. For now there's nothing to do but wait until the stack initializes; after that we can use the Cloud9 environment. Hello Ad, hello David, hello Donan.

And there we go: our stack has been created successfully. Looking at the workshop documentation, after the stack finishes initializing we can run a command to get the URL of the Cloud9 environment. In CloudShell, if I paste that command and press enter, we get a URL; copy it and open it in a new tab, and it opens our Cloud9 environment. Here it is. This is where we can work on our code and run commands in a terminal, so everything from now on happens here. I'll close the bash panel at the bottom, close the welcome README, open a new terminal tab at the top so you can see it, and zoom in a bit so it's easier to read.

Now that we have the Cloud9 environment we can safely close CloudShell, because we only used it to initialize Cloud9; everything else in this workshop happens in Cloud9.

The next step is to create our EKS cluster, with all the services we need to run our Kubernetes stack on AWS. This process takes almost 20 minutes, so let's kick off the cluster creation first and dive into some theory while it runs. Go to the setup's "In your AWS account" page. The first step uses eksctl, a command-line tool for interacting with EKS. Scrolling down, you'll see the commands that apply a configuration file. Press "copy all commands" (pressing a single one copies only that command), then back in Cloud9 open a terminal (press plus, New Terminal) and paste the commands.
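What those commands do, roughly, is download a cluster configuration file and hand it to eksctl, something of this shape. The real file and URL come from the workshop page, so this is only a sketch:

    # fetch the ClusterConfig the workshop provides (URL is illustrative)
    curl -fsSL https://example.com/eks-workshop-cluster.yaml -o cluster.yaml

    # create the cluster from that configuration
    eksctl create cluster -f cluster.yaml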
Now, I'm going to have a problem, because I already did this once, and I really hope it's not one of those problems that takes a long time to figure out. In your case you shouldn't have a problem; everything should work fine for you. For me it errors out: the cluster hasn't been created properly... delete cluster... let me try to delete it, although deleting takes a while too. "Already exists." Maybe I can just change the name. So in your case continue as-is; in my case I'll simply rename my cluster to eks-workshop-1. Now I'll create the cluster, and what you should see is the same thing: "deploying stack" and the name of your stack. This usually takes around 20 minutes, so we have time to discuss some of the theory behind what we're doing, and about Kubernetes.

What does EKS mean? EKS is the Elastic Kubernetes Service, a managed AWS service that helps us run Kubernetes on AWS. It takes care of a lot of the heavy lifting of managing a Kubernetes stack: it runs the control plane for us, and it lets us integrate with other AWS services to provide resources to the cluster. For example, through EKS we integrate first of all with EC2. Amazon EC2 is probably the most important service on AWS because it's the compute resource: it provides virtual servers where you can run applications, and that's where our nodes will run. There are other integrations too, for example load balancers and auto scaling groups; these are things Amazon Elastic Kubernetes Service lets us hook into.

Ad says he's now getting an AWS invoice for rent... what do you mean?

So, while the current step builds our EKS cluster with the resources we'll need, I think it's good practice to cover some Kubernetes fundamentals first. Even though this is not a tutorial about Kubernetes (it's mostly about EKS), it's worth brushing up on the most important Kubernetes components we'll use today.

First of all, what is Kubernetes? Kubernetes is an open-source container orchestration tool originally developed by Google. It lets us manage and run applications that live inside containers. If you're familiar with Docker, for example: with Docker we create containers, and on top of that we can use Kubernetes to run those containers at scale, on physical, virtual, or cloud environments, or even hybrid setups where some workloads are in the cloud and some are at your physical location.

Now let's discuss some of the most important components that make up Kubernetes. The first one is actually not part of Kubernetes itself, but is where the Kubernetes workload runs: the worker node. A worker node, or simply a node, is a physical or virtual server where our applications run. As simple as that: it's a server, either physical or in the cloud. For today's workshop we'll use Amazon EC2 instances as our worker nodes.

As for Kubernetes itself, the smallest unit in Kubernetes is a pod. A pod is an abstraction layer on top of a container.
Inside a pod there is usually one container, but Kubernetes didn't want to lock us into a single container technology (Docker or one of its alternatives), so they created this Kubernetes-specific abstraction layer, the pod. Inside the pod we specify the image we want to run and how to run it. So remember: a pod is the smallest unit in Kubernetes. For example, our Node.js backend can be containerized and run as a pod.

A very simple Kubernetes stack, or cluster, might be made of two nodes (basically two servers) running three pods. One pod can contain multiple containers; for example, the application plus a caching layer in the same pod. That's usually not very smart, and we generally try to keep one container per pod, but it can happen. The cool thing is that if one of our nodes crashes, or one of our servers catches fire and becomes unreachable, the pods on the other servers keep working and running. That's the benefit of running in parallel on different nodes and in different availability zones, and we'll see more about that later in the workshop.

Communication between pods happens over IP: every pod gets a unique IP address inside the network, which lets pods talk to each other. That doesn't mean the IP is public, though. Unless we add a specific service that exposes something publicly, these IPs are private and only used for communication inside the cluster.

Moving on: replica sets. A ReplicaSet is a Kubernetes component that maintains a stable number of running pods. If our application runs in a single pod on one server and that server crashes, the application becomes inaccessible to end users. Instead, we can run the application simultaneously as two pods on two different nodes, so if one crashes, at least one is still there to serve traffic, and Kubernetes can go ahead and create a replacement. That's exactly what a replica set is: a configuration specifying how many replicas of the same pod we should keep running. This is good for fault tolerance and disaster recovery: if Kubernetes discovers that a pod inside a replica set has crashed, it creates a new one to maintain the desired number of pods.

On top of replica sets, or rather the way we create them, sits the Deployment. A deployment is one step above a replica set: basically a blueprint for how to run a set of pods. Putting it all together: at the lowest level we have pods, which run in groups configured by replica sets, which in turn are managed by a deployment. That's the basics of Kubernetes, how it works, and the pieces we'll use today.

Edward is asking, "am I late?" No, you're right on time. If you have questions, feel free to ask; I think we still have around ten minutes while our EKS cluster is being created.
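To make the pod / replica set / deployment hierarchy concrete, here's a minimal Deployment manifest. The name and image are hypothetical, purely for illustration:

    apiVersion: apps/v1
    kind: Deployment            # the blueprint for a set of pods
    metadata:
      name: backend
    spec:
      replicas: 2               # the ReplicaSet it creates keeps two pods running
      selector:
        matchLabels:
          app: backend
      template:                 # the pod template: what each replica looks like
        metadata:
          labels:
            app: backend
        spec:
          containers:
            - name: backend
              image: node:20-alpine   # the container image each pod runs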
Here I also want to make sure the difference between Kubernetes and EKS is clear, so you can visualize it. Kubernetes is the open-source technology for running containers, while EKS is the AWS service that runs Kubernetes for you. Kubernetes is platform-agnostic: you can run it on any servers, physical ones, in the cloud, on AWS, on Google Cloud. AWS EKS is the specific AWS service for running Kubernetes, and its very tight integration with other AWS services is exactly the benefit of running it there.

To interact with EKS we can use the console: if we open EKS, Elastic Kubernetes Service, this is its console, and we can see the cluster we're currently creating (it says Active here, but I think it's not ready yet; we're still waiting). So we can add and create clusters through the visual dashboard, or we can use CLI tools; in this workshop we use the eksctl tool to talk to EKS from our Cloud9 terminal.

The configuration we're currently applying is shown on the workshop page: this is essentially everything we're creating through eksctl. Let me point out the interesting parts. The part that will make the most sense is further down, under managedNodeGroups: the configuration for the nodes our cluster will need. Remember from the fundamentals that nodes are the physical or virtual servers, the actual resources Kubernetes needs in order to have somewhere to run. Through EKS we're asking for three virtual servers of the m5.large instance type.
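Condensed, the eksctl configuration we're applying looks roughly like this; the region and group name here are placeholders, and the workshop file is the source of truth:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: eks-workshop        # I renamed mine to eks-workshop-1 to avoid the clash
      region: eu-west-1         # whichever region you deployed to
    managedNodeGroups:
      - name: default
        instanceType: m5.large  # the worker instance type discussed below
        minSize: 3              # never fewer than three nodes...
        maxSize: 6              # ...and never more than six
        desiredCapacity: 3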
In this node group we specify that the minimum number of servers is three and the maximum is six. With managed node groups we can always scale the node group up: if we need more resources, more capacity, and three isn't enough, we can increase it to four, five, or six, but not beyond the maximum size we specified.

Another interesting part is the instance type, which defines what kind of virtual server we want. It's worth looking at the EC2 instance types: this is a very important part of EC2 to understand (maybe not to memorize, because there are so many of them), because it directly affects the performance of your servers and also the price. The bigger the instance type, the more resources it has and the higher the price. Most EC2 servers are billed for the time you actually use them, not as a flat monthly fee, so for as long as one is running, you're being charged. If we finish this workshop in two or three hours and then delete everything, you won't be charged much; I can check how much I spent yesterday, but it shouldn't be more than a couple of dollars.

On the AWS EC2 pricing pages (I think there was a better way to search for this... yes, here is the calculator), we can compare the pricing of different instance types, starting from the nano instances, which cost a couple of cents per hour and have two virtual CPUs and only half a gigabyte of memory. What we're interested in is the m5.large.
An m5.large costs about $0.096 an hour, roughly nine and a half cents, and it has two virtual CPUs and 8 GB of memory. Perfect. So now we're waiting for those three instances to be created, along with the other resources in the configuration.

At the top of the configuration, the availability zones are also important. Well, let's start with the region: we specify that the cluster is created in the region we're currently in, so since our Cloud9 is in Ireland, the cluster goes there as well. A region in AWS is a specific location or country where AWS has servers; they exist all over the world, and one region usually has multiple availability zones. Physically these availability zones are separated from each other, so that if one is hit by some natural disaster, the others stay safe. By running our application across different availability zones we diversify where it runs, and that gives us a highly available application: if something terrible happens to one availability zone, the others are fine.

Okay, we're still waiting for the cluster. By the way, Terraform is another way, besides eksctl, to create and manage the cluster. We won't use Terraform in this tutorial, but if you have experience with it, have a look at the "Using Terraform" section; it's an alternative to eksctl.

Oh, actually, our EKS cluster is ready. So, the next step in the getting-started setup: after the cluster is created, we run a command to "use" the cluster we just created, which updates the CLI tooling that talks to Kubernetes so it points at our new cluster. Make sure to run this command as well. We're done here; I'll explain Navigating the labs in a moment, and at the end of the video we'll come back for the cleanup, but we don't need that now, we just created everything. Let's go to Getting started and execute the first command, prepare-environment, inside the same Cloud9 terminal: let me clear and paste the prepare-environment command for introduction/getting-started. You'll see that all the labs (we're planning around five labs today) start with this prepare-environment step. It downloads the files from GitHub that the lab needs, and it resets our Kubernetes stack to its initial state, so even if you mess something up, don't worry: the next lab starts fresh.
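Two commands from this stretch, in sketch form: "using" the new cluster is essentially a kubeconfig update, and every lab begins with the workshop's prepare-environment helper. The helper name and module path are as I recall them from the workshop, so verify them on the lab page:

    # point kubectl at the cluster we just created (use your own name and region)
    aws eks update-kubeconfig --name eks-workshop --region eu-west-1
    kubectl config current-context   # sanity check: kubectl now targets the new cluster

    # fetch this lab's files and reset the cluster to a known state
    prepare-environment introduction/getting-started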
While that prepares, let's have a look at our application, and I'll try to explain the things we have here. For all of the labs we're doing today we'll work with a sample application: a web store where you can view products, buy them, pay for them, and so on. As I explained earlier, it's made up of several components. Everything starts with the UI: the front-facing part of our application, which serves the HTML and routes requests to the specific backend microservice. We have a microservice for orders, for checkout, carts, catalog, assets, and so on. It sounds like a lot, but most of the configuration already exists for us; we just have to deploy the pieces and connect them together. I won't describe each one, because it's quite clear what, say, the carts service does: it's the API for customers' shopping carts, that's it. It's also important to understand that we'll start slow: initially we deploy the application simply within the Amazon EKS cluster, without integrating additional services, and as we progress through the workshop we add various AWS services that power up the cluster and make a lot of things better.

Packaging the components: as I said, Kubernetes works with pods, and pods run container images; they depend on those images, and all the images for our applications have already been built for us. You can have a look at the repository for the images (that one is for the images themselves; the application source code should also be findable there).

Now, microservices on Kubernetes. Here we see a couple of things we've already discussed and a couple of new ones. These small parts are our application pods, and as you can see, they're grouped inside a deployment: the pod is the smallest unit (our application runs in a pod), and a deployment is the way to manage the replica sets and the pods. Because our application now runs in three different pods, each with its own IP address, it would be hard to route traffic to a specific one. For that reason we put a Service in front of them. A service is, let's say, a simple request balancer: it redirects the requests it receives to the pods it manages. So a service is simply an entry point to a set of pods, and it lets us connect different services together internally; for example, our application connects to its database through a service, which you can actually see on the right with the MySQL service.

Also on the right there's a different component, a StatefulSet. In a couple of words, a stateful set is very similar to a deployment; the difference is that it also gives us persistent storage. Where do we need persistent storage? For databases, for example, or for components that hold files, because if a pod in a deployment fails, all the in-memory state and files it had are lost. Deployments are good for application workloads; stateful sets are good for components that need persistent storage, like a database.

Finally, looking at the whole architecture together, you'll see everything grouped into namespaces. A namespace is simply a way to separate different groups of services: for example, if the catalog team only works on the catalog application, they'll be scoped to the catalog namespace.
As we can see, everything starts with the UI: a user opens a browser and loads our website, the UI service receives the request and redirects it to one of the UI pods that is healthy and available to take requests. Then, say we request a list of products: the UI application calls into the catalog namespace (the catalog again has a service in front, as we saw), the catalog connects to its database, gets the products, and returns them to the UI. So that's our cluster, our stack. Let's go ahead and actually deploy our first component.

We interact with Kubernetes using kubectl. All the Kubernetes configuration lives in the eks-workshop folder we received after running the prepare-environment command. It has a base-application folder containing the configuration for every part of the application. For example, the ui folder holds the UI's deployment configuration: how many replicas, what pod template to use, how to run the application, what resources it needs, and so on, including the namespace and the other UI-specific components. The same goes for the other folders; the checkout has its own deployment with its own resources, namespace, and so on. That's the base application. The second folder, modules, contains the customization files that show how to change our stack, and that's how we'll learn today.

Okay. Before we do anything, let's inspect the current namespaces in our EKS cluster: copy the command and paste kubectl get namespaces into the terminal. We see the default namespaces, but if we filter to only the namespaces created by us, using the command that filters on the created-by label, we see no resources found yet. Should I zoom in? Okay, I'll close the file tree and leave only the terminal. Perfect.

The first thing we'll do is deploy the catalog component by itself; its manifests are in base-application/catalog. Let me scroll down to "create the catalog component", and while we wait we can discuss what's happening. Copy the command: kubectl apply with the catalog folder from the base application. If you want to see exactly what we're applying, open eks-workshop/base-application/catalog; here are all the Kubernetes components we create. Everything starts with a namespace (a namespace is a group for combining multiple components, and the catalog's is named catalog). Then there's the deployment, which runs one replica of the catalog service; there's the catalog service itself; there are some volumes (we'll talk more about storage and volumes a bit later); and here we specify which image to use, the retail store sample catalog image, plus all the other components it needs.
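In command form this step looks roughly as follows. The path matches the workshop environment layout, and the label filter is the one I recall the workshop using, so double-check both on the lab page:

    # deploy just the catalog component from the base application
    kubectl apply -k ~/environment/eks-workshop/base-application/catalog

    # list only the namespaces created by the workshop
    kubectl get namespaces -l app.kubernetes.io/created-by=eks-workshop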
These files are the same ones I just showed you. The deployment expresses the desired state of the catalog API component: it specifies the image, runs a single replica, exposes container port 8080 (named "http", so you see 8080 here as the container port), runs some health checks to make sure the pod stays healthy, and applies some labels. The same manifests include a service. As I explained earlier, a service is the component that lets us reach the set of pods in a replica set. This service targets the http port on our pods, which is 8080, but exposes port 80: if something, for example our UI, wants information from the catalog, it makes a request to the service on port 80, and the service redirects it to 8080 on the deployment's pods. This is all explained on the lab page.

So now we've created them, and if we run the namespace command again we see one new namespace, catalog. We can also look at the specific pods that are running (let me clear and keep the command at the top): we have a catalog pod, the application itself running as one pod, and a catalog-mysql pod, the database running as a separate pod, also as a single replica.

It's also worth noticing that the catalog application restarted two times before finally reaching Running. If you check right after applying, you'll see it in CrashLoopBackOff status. Why was that happening? The catalog application depends on the database, the database took some time to initialize, and while it wasn't ready the catalog kept failing to connect; that's why it restarted twice until MySQL was configured and the connection succeeded. We can verify what happened behind the scenes by checking the logs: looking at the logs from the catalog deployment, we see "invalid connection config: missing required IP or host name", the same on the second attempt, and only on the third did it actually connect.

Kubernetes also lets us easily scale the number of pods horizontally; this is the replica sets we were talking about. If a replica set says we want three pods running at the same time, Kubernetes makes it so. Let's scale the catalog to three replicas. There are two commands here, so press "copy all": the first scales the catalog up, and the second waits until Kubernetes actually finishes starting the extra pods. And there we go: three catalog pods. If we scroll back up to the kubectl get pods output, we see three application pods and still just one MySQL pod.
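The commands behind this part look roughly like this; the workshop uses kubectl wait, and the rollout status shown here is an equivalent readiness check:

    # why did the pod restart? the catalog couldn't reach MySQL at first
    kubectl logs -n catalog deployment/catalog

    # scale the catalog out to three replicas and watch the rollout finish
    kubectl scale -n catalog deployment/catalog --replicas=3
    kubectl rollout status -n catalog deployment/catalog
    kubectl get pods -n catalog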
We can also look at the services. A service, as I said, is the way to connect to a set of pods, and listing them shows we have two: one for MySQL, which lets the catalog application connect to MySQL and exposes the actual MySQL port, and a second one for the catalog itself. Neither has an external IP address, and that's actually a good thing: it protects these services from being accessed from outside the cluster; they can only be reached by other applications running inside it.

The workshop also shows how to use the exec command to get into existing pods in the cluster. By running it we execute, from inside the catalog pod, a curl request against the catalog service, and we get back a list of products as the response from the catalog API. So that's a way to interact with the application and confirm it really is running, even though it isn't reachable over the public internet yet; we'll get to that in a moment.

And with that, our first component is deployed. Now we do the same for the other components: in the last part of this first lab we apply the whole base application. So this time we're not applying one specific component, we're applying all of them from base-application. If you look in base-application there is one file, the kustomization, which is basically the way to combine all the resources in this directory. So when we execute this apply command, we apply one kustomization covering all our services. You might think: the catalog was already deployed, won't this affect it? No: Kubernetes sees it's already running and already fulfills the desired state, so it won't create a new one; it simply keeps it. You can also run the second command to wait until all the components are ready... and yes, all of them are ready.

We now have a namespace for each application component: listing the namespaces we deployed shows assets, carts, all of them, including the UI. There are the deployments too; looking at them, we see our deployments are running and available, each with one replica. Let me check at the top that I didn't miss anything... no, I think we're good.

And that's it, our first lab. All the components of the application are running. We still can't access the website (we'll do that in the next lab), but believe me, everything is working there, somewhere in the cloud; we saw that when we requested data from the catalog, and we can adjust that command to hit other endpoints too: products, carts, assets... all right. The first lab is finished: we created and deployed our first components.
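For reference, that in-cluster smoke test looks roughly like this. Inside the cluster the service resolves as <service>.<namespace>; the exact API path is on the lab page (/catalogue is my recollection), and this assumes curl is available in the image, as the workshop's exec example implies:

    # run curl from inside a catalog pod against the internal service (port 80)
    kubectl exec -n catalog deployment/catalog -- curl -s http://catalog.catalog/catalogue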
In the second lab we're going to expose the application over the internet. Before we do that... actually, let's go through the second lab first, and after it I also want to show you a bit of what's happening behind the scenes on AWS, what resources get created there, and so on. So let's start lab two, which is about Ingress.

In the workshop we're finished with the introduction, so let's go to Fundamentals, the second module. Here we'll learn how to expose the application over the internet and how to configure worker nodes, and we'll integrate with the storage providers EBS and EFS to supply storage for our stateful applications, like the database and the assets component.

Starting with exposing applications: right now our web store is not exposed to the outside world, everything has internal IP addresses, but we need to make the UI application available to end users. We'll do that using an AWS component called a load balancer. The Elastic Load Balancing service manages balancing the load for a cluster. What does that mean? It stands in front of, for example, our UI application, which runs across several pods: it serves as the entry point for traffic, receives it, and spreads it across the pods so the workload is properly balanced across the multiple nodes in our stack. As AWS puts it, a load balancer distributes network traffic to improve application scalability.

Elastic Load Balancing offers two types of load balancers (actually I think three, but we'll focus on two). The Application Load Balancer works at a very high level of the OSI model. If you remember the OSI model from school, the layers describing how information travels across the internet, the top one, layer seven, is the application layer. The nice thing about balancing at the application layer is that we have a lot of information available: HTTP headers, for instance; we can handle some authorization at that level; and we can see which path the user is requesting and redirect traffic to different applications based on it. That's the benefit of the Application Load Balancer. The Network Load Balancer, on the other hand, works at layer four, the transport level, so it only knows basic information about where the traffic is going, like IP addresses. It balances the network at a lower level, which is more performant, but gives you less flexibility in how you distribute traffic.

In this module there are two sections: one for the Network Load Balancer, called Load Balancers, at the top, and the Ingress one. We'll go with the Ingress, because the Ingress uses the Application Load Balancer, which lets us do a few more things I want to show you. Let's first prepare the environment by executing the prepare-environment command; while it runs we have a bit more time to discuss. Note that the AWS Load Balancer Controller gets installed into our cluster as part of this prepare-environment step, and we'll be able to integrate with it.
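For contrast, the Load Balancers section we're skipping boils down to a Service of type LoadBalancer, roughly like this; the name and labels are assumed for illustration:

    apiVersion: v1
    kind: Service
    metadata:
      name: ui-nlb
      namespace: ui
    spec:
      type: LoadBalancer        # the AWS controller provisions a Network Load Balancer
      ports:
        - port: 80              # public port on the NLB
          targetPort: 8080      # container port on the UI pods
      selector:
        app: ui                 # hypothetical label; must match the UI pods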
So we can go quickly over that Load Balancers part, which we're skipping because we're implementing the Ingress, but let me explain a few things where it makes sense. The way to create one with Kubernetes is by creating a service of the LoadBalancer type; such a service uses the Network Load Balancer. I followed that section myself, and it's very similar to the one we'll follow together. In the end it lets you serve internet traffic to the UI application, but it lacks the ability to redirect traffic based on, for example, the path the user requests, and that's what the Ingress gives us.

Our environment is ready, so let's get started. First let's make sure we don't have any Ingress components yet: kubectl get ingress (Ingress is a Kubernetes component), and indeed we haven't defined any so far. Let's create our Ingress resource with the following manifest; the file is already in place, so apply it in the terminal: kubectl apply with modules/exposing/ingress/creating-ingress. We can inspect that file under modules: exposing, ingress, creating-ingress. It defines one component of kind Ingress, with class name alb (Application Load Balancer), and it has rules: if the path starts with the empty prefix, use the UI service.

Having applied this Ingress rule, if we run the first command again to check the available Ingress rules, we see we indeed have one Ingress in the ui namespace; it's of class alb and it has an address. Back on the lab page we can also inspect the Ingress configuration. What do we see? The DNS name, and this is important, because it's the URL where we reach our load balancer, and through it our application. The state is still "provisioning", so I don't think it will work yet: copying the URL into a browser, you can see it's loading, not ready. Once the state flips to ready, we'll be able to access the application.

...And the Ingress component has finished creating, which means our application is now reachable over the public internet; anyone can access it. We can navigate around, add things to the cart, browse the catalog, add more things, check out, and so on. The point isn't the application itself, it's deploying it, and if everything works here, it means the whole stack was set up correctly. (I hope you can't hear that noise; give me one second to close the window. Okay, better.) All the pieces, the catalog, the cart, are separate microservices working together.
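Stripped of annotations, the Ingress we applied has roughly this shape; the actual workshop file also carries ALB-specific annotations, for example the internet-facing scheme:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ui
      namespace: ui
    spec:
      ingressClassName: alb     # handled by the AWS Load Balancer Controller
      rules:
        - http:
            paths:
              - path: /         # everything goes to the UI service
                pathType: Prefix
                backend:
                  service:
                    name: ui
                    port:
                      number: 80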
We already saw how to get the Ingress, but here is another command that queries just the host name of our Ingress... and there it is (not sure why it didn't print on a new line, but there it is).

All right, that's how we use an Ingress to open public internet traffic to our application. The next step is something very specific to the Ingress and the ALB, the Application Load Balancer operating at layer seven: we're going to expose one more service from our stack to the internet. In the whole stack only the UI is supposed to be publicly accessible, but for demonstration purposes we'll make the catalog public too, and we can do it with the same Ingress setup, the same ALB we created before. By default, two separate Ingress rules create two separate load balancers; but if we group them by group name, they share the same load balancer and simply add conditions, rules, for how to route traffic. By adding the group name we ensure that two Ingresses with the same group name use the same resource on AWS.

So let's apply the manifests from modules/exposing/ingress/multiple-ingress. Looking at the files: one is for the UI, where the only difference from before is that it now carries the group name, retail-app-group; and we also create a new Ingress rule for the catalog, also of kind Ingress, also class alb, using the same group name as the UI. Beyond that, the only difference is the rules: if the path starts with /catalogue, the service to invoke is the catalog; everything else is handled by the UI's Ingress rule, whose path matches anything.

What can we see now? Listing the Ingress rules, we have one for the catalog and one for the UI, and they show the same URL, not two different ones, which means they're backed by the same load balancer on AWS. We can also inspect the ALB listener; I'm not sure it reveals much that's new, but we can see the rules: conditions, as I explained, so that a path starting with /catalogue is forwarded to the catalog application and everything else to the UI, plus a default 404 target, status code 404.

Let's wait for the load balancer to finish deploying. "You can now access..." Yes, it has finished. So if we query /catalog, will that reach the catalog API? That's what I'm thinking... it's loading even the home screen, so I guess the ALB isn't ready yet. Let's wait. Hello Rogelio!

While we wait for the load balancer to come up, I can look directly at the load balancer that was created for us and check its status. In the AWS console go to the EC2 console, scroll down to Load Balancing, Load Balancers. This is the one we're looking for: it's active, and it spans three availability zones.
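Two reference snippets for this section. Querying just the host name is standard kubectl jsonpath:

    # print just the ALB host name from the Ingress status
    kubectl get ingress -n ui ui -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

And the grouping mechanism is an annotation that both Ingress manifests carry (the value as read off the workshop files):

    metadata:
      annotations:
        alb.ingress.kubernetes.io/group.name: retail-app-group  # same group => one shared ALB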
If we open it up, what do we see? It has rules, the patterns we defined: if the path is /catalogue or /catalogue/ plus something, forward to this target group; this other pattern forwards to that one; otherwise it's a 404. So why isn't it reachable yet? Maybe the URL changed; should we fetch it again? "You can now access..." Oh yes, it has a different URL than before, one containing the group name, retail-app-group. Now if I open it we get served the UI, but /catalog still shows the UI. Let's run the curl command against the right path... ah, it should be /catalogue, not /catalog. And we get back JSON, because behind /catalogue is the API, and the API simply returns the list of products from the catalog.

So with this we've shown how to expose different applications on different paths behind the same URL, using an Application Load Balancer driven by the Ingress component from Kubernetes. As you can see, our application is now public, available over the internet, and we can navigate around. That's the end of the second lab, the one about Ingress.

We still have the Amazon EBS lab (the storage provider for the SQL databases) and EFS, the Elastic File System, which will be used for the assets, our files such as images; and lab five is about managed node groups. But before those, I prepared a bonus lab from our sponsor, about price monitoring and optimization. We're already running our cluster in the cloud, and I want to see what running all this workload costs, and whether we can optimize and save something with Cast AI, the sponsor of today's video. They say they can cut costs in half for Kubernetes clusters running on AWS, Google Cloud, Azure, or even on premises. You can create a free account using the link in the description below; I'll paste the URL in the chat as well.

Let's connect it to our cluster; it's super easy. I already have an account, but I'll join from a new one to show you the whole process. We can start for free, and a lot of it really is free: analyzing, monitoring, and savings insights are always free, which already makes it a powerful savings tool. We can create an account with GitHub or with Google, so let me sign up with my Google account; give me one second. Here's the next step to complete the account setup: fill in the information, continue, and here is our Cast AI dashboard. We can take a small tour, and they've prepared a demo cluster to show how Cast AI works in practice. Clicking this arrow shows an example cluster running on AWS, probably a big application, because Cast AI is suggesting 64% savings here, reducing the bill for this application by what looks like about $27,000 a month. Scrolling down to the configuration comparison, they display how the current cluster looks next to what an optimized cluster configuration would be.
The optimized setup uses different, better-fitting node types, spot instances, and so on. It's a lot of data, so let's actually connect our own cluster and see real numbers for our application and how much we could save. I'll press Connect cluster at the top and select EKS, the AWS offering we're using today. Press next, and all we have to do is copy this command and run the script in our terminal. So in the Cloud9 terminal we've been using, let me clear and execute the command we copied from Cast AI. This installs the Cast AI agent in read-only mode, which lets it read the cluster configuration; it's done in a secure way, and you can read more about the security model on their site. Once it's done, we see it created a deployment for the agent, so we go back to Cast AI and press "I ran the script". It reports that it connected successfully, and within a few seconds it analyzes the existing cluster and produces a personalized plan for a better configuration.

As we can see, even with our small cluster we could already save around $50 a month. That of course depends on how big the cluster is; here we're running a very small demo application, but if your workload grows, then 23% or 30% or 50% becomes a lot of money. The only way to check it for your specific production cluster is to follow the link in the description, create an account, and connect it (you saw how easy that is), and right away you see how much you could save. As we add more resources in a moment we'll see it can save even more; for now we haven't given it much to work with.

Scrolling down to the configuration, we see our three m5.large nodes with their price, costing about $234 per month, and on the right the optimized cluster configuration: a different set of nodes that still fulfills the requirements of your pods while being much cheaper, in this case $179. If you want more detail, book a technical session with their team; they're there to help. Yesterday I got a call from them and we went through the whole configuration together, and it's really a great team, so do book a session if you want to integrate it for your specific case.

We'll come back here a bit later, after we add more things to our stack, to monitor how everything behaves. Because besides the potential savings, I find Cast AI is a very good way to monitor your whole running infrastructure: from the dashboard you can see how many nodes are running, how many are spot instances versus on-demand, how much load you have, and how much unused resource you could trim. For cost monitoring you can see what you're currently spending, what the monthly cost would be if the cluster keeps running, the cost per workload, and many other views, for example by namespace.
Looking at it by namespace, we can see which namespace is costing us the most: UI, catalog, checkout, and so on. There are also some security reports; I'm not going to go much in depth here, but as I said, we'll come back to Cast AI a bit later, after we implement some autoscaling groups in our stack. So thank you very much, Cast AI, for sponsoring this video. It's really powerful, especially if you're running Kubernetes at scale in production. If the company you're working for is using Kubernetes on EKS, or even on Google Cloud, Azure, or on premises, you can use Cast AI to get a lot of savings, monitoring, and security for your cluster. At least give it a try and see for yourself how much you could save; the report takes about three minutes.

Anyway, continuing our workshop, let's go ahead with lab number three, Amazon EBS. Lab three is about storage, so let's prepare the environment for the Amazon EBS lab: I'm going to copy the prepare-environment command for fundamentals/storage/ebs, clear the terminal, and run it. While that's going, let's talk about storage. If we look at the diagram of our application, on the first layer, in blue, we have our app services. These are workloads that do not need to persist data; all they have to do is process data coming from the user interface or from the user, read from a database, and return the result. Because these workloads don't depend on persisting data themselves, they're very easy to scale horizontally. If, for example, you suddenly get twice as many users in your application, all you have to do is increase the replica count for the application, say from four to eight, and this spins up twice as many pods, all serving traffic at the same time (see the sketch below). They're also great because if one of them fails, we can easily swap it for a new one and create more of them; they basically don't depend on each other, they're simply workers.

When it comes to services that need persistence, a very good example is our databases. The databases in this application, for example MySQL, need a place to write the data they store; they need persistent storage infrastructure. The same goes for Redis, DynamoDB, and the other databases, and even the assets service needs to persist images and files. For services that need persistence, that's why we need a StatefulSet, which is similar to a Deployment, which I explained previously, but with extra rules: the pods connect to the same data source, and if one fails and a new one is created, it connects back to that same data source. StatefulSets also keep a stable ordering and identity, so we can't freely replace pods or casually create different ones; they're a bit different from our Deployments. So in this part of the workshop, we're going to integrate persistent storage called EBS, Elastic Block Store, for the storage of our databases. Okay, our environment is ready.
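Here's roughly what that horizontal scaling looks like as a command. This is just a sketch; the deployment and namespace names (ui) are assumptions for illustration:

```
# Hypothetical example: double a stateless service from 4 to 8 replicas.
kubectl scale deployment ui --replicas=8 -n ui

# Watch the extra pods come up; each replica is interchangeable,
# so Kubernetes can add or remove them freely.
kubectl get pods -n ui --watch
```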
First, what is AWS EBS? It's an AWS service that provides storage in the cloud. Specifically, it's not like S3, which is object-based storage; EBS is block storage you can put a file system on. On EBS you can install operating systems, databases, and so on; that's the difference. Amazon Elastic Block Store. When we prepared the environment for this lab, it installed the EKS add-on for the EBS CSI driver, which lets our EKS cluster connect to EBS. I hope I'm not confusing you with too many abbreviations like EBS, EKS, and so on. There's a good explanation here: persistent storage enables users to store data until they decide to delete it. So we're going to learn about StatefulSets, the EBS CSI driver, and StatefulSets with EBS volumes.

Let's start with StatefulSets. Like Deployments, StatefulSets manage pods that are based on an identical container spec; but unlike Deployments, where pods are interchangeable, StatefulSets maintain a sticky identity for each pod. We already have a StatefulSet deployed as part of the catalog microservice, which uses a MySQL database running on EKS. Databases are a great example for StatefulSets because they require persistent storage, and we can analyze our MySQL database pod to see its current volume configuration. So let's copy this command to describe the catalog StatefulSet. The StatefulSet also carries information about volumes, and as we can see, our current MySQL StatefulSet has only one volume, and it's an emptyDir: a temporary directory that shares the pod's lifetime. What does that mean? It means this directory lives together with the pod, and if the pod fails, the data is lost. So for example, if we add a new item to our catalog and then the pod stops, we lose that item. Another thing: each MySQL pod connects to its own directory living on its own node, so if a different MySQL pod creates an item, the first one can't read it, because they're different directories. When a pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.

Here we can also run a demonstration to see this in action and better understand the concepts. We're going to create a file on the pod's file system by executing a command on catalog-mysql-0, writing "123" into a file at this path. Then we can verify that our test.txt file exists by running the next command, and yes, it indeed exists, and it was created today. The next step is to remove the current catalog MySQL pod; this forces the StatefulSet controller to automatically recreate a new catalog MySQL pod. We're simply simulating the MySQL pod crashing: we delete it and it gets recreated automatically for us, but in the process we'll see that we lose the file we generated. We need to wait a few seconds, or we can wait using the kubectl wait command with a Ready condition, and then check the status of the catalog MySQL pod: it's running with an age of 24 seconds, so it was just recreated. Here's roughly what that whole experiment looks like as commands.
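A sketch of the emptyDir experiment; the pod name and namespace (catalog-mysql-0 in the catalog namespace) come from the walkthrough, but the exact file path is an assumption and may differ from the workshop's:

```
# 1. Write a file inside the running MySQL pod.
kubectl exec catalog-mysql-0 -n catalog -- bash -c "echo 123 > /var/lib/mysql/test.txt"

# 2. Confirm the file exists.
kubectl exec catalog-mysql-0 -n catalog -- ls /var/lib/mysql/test.txt

# 3. Delete the pod; the StatefulSet controller recreates it with the same name.
kubectl delete pod catalog-mysql-0 -n catalog

# 4. Wait for the replacement, then look for the file again.
kubectl wait --for=condition=Ready pod/catalog-mysql-0 -n catalog --timeout=120s
kubectl exec catalog-mysql-0 -n catalog -- ls /var/lib/mysql/test.txt
# -> "No such file or directory": emptyDir storage dies with the pod.
```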
Finally, let's execute the ls command again to see what files we have in that directory. If I copy this command, we see it terminates with "No such file or directory" for test.txt, because the file was created in an emptyDir living in the pod's environment, and when we deleted the pod, we lost the file as well.

So now let's connect a persistent volume to our database pods. First, we need to understand two concepts before we move on. There are ephemeral volumes, which are volumes we lose when the container stops, and we just saw an example of an ephemeral volume. And there are persistent volumes, which are storage that persists when pods in our cluster are removed: after we put files there, they stay until we specifically delete them. A PersistentVolume is the physical volume; we can think of it as the EBS volume. A PersistentVolumeClaim is a way for our resources to claim part of the existing storage pool. For example, if we know the database for our application needs 10 gigabytes, we create a PersistentVolumeClaim of 10Gi, and Kubernetes looks among all the attached PersistentVolumes, finds one with 10Gi available, and dedicates that part of the disk to the service that claimed it. So it's a way to claim resources, and those are the most important things we need to know.

Now let's go back to the Amazon EBS lab, to the EBS CSI driver. First things first, we need to confirm the EBS driver is installed; this happened as part of preparing the environment, but you can see how to install it in your own cluster by following this URL. So let's run this command to check the driver: yes, we have three replicas, spread across three different availability zones. That's very fault tolerant, because if we lose one availability zone, the others are still there, so it's a good way to add fault tolerance and high availability to your application. Then there's the storage class, where we can see what kind of storage we requested from EBS.

Now the last step is replacing our StatefulSet so that, instead of the emptyDir, the MySQL pod connects to an EBS volume. EBS is the AWS service, the physical volume on AWS; the PersistentVolume is its representation, the one-to-one mapping from AWS into our Kubernetes cluster. The PersistentVolume simply says: there is this volume on AWS, here's how to connect to it, and here's how much space it has. From our pods, we then create PersistentVolumeClaims, and that way we can say: out of these 100 gigabytes, 20 will be for MySQL, 20 maybe for the assets, and 20 for MongoDB, for example. That's how we connect our pods to persistent storage.
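To make the claim idea concrete, here's a minimal sketch of a PersistentVolumeClaim asking for those 10 gigabytes. The claim name and storage class are assumptions for illustration, not the workshop's exact manifest:

```
kubectl apply -n catalog -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-data           # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce             # an EBS volume attaches to one node at a time
  storageClassName: gp2         # assumed; use whatever EBS-backed class your cluster defines
  resources:
    requests:
      storage: 10Gi
EOF
```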
Let's follow the guide. As we can see, we'll create a new StatefulSet, because some of these fields are immutable and cannot be changed on the existing one. Looking at the code of the new StatefulSet: it has one replica, it runs the MySQL container, and it has a volumeClaimTemplate with read-write access, a storage class name, and how much storage it wants to claim. Let's scroll down until we reach "apply the changes" and copy both commands — there are two, one kubectl apply and one kubectl rollout — and run the first, then the second.

Fire Gaming is asking: does AWS scare any of you guys? I probably understand why it's scary: you don't get a very good overview of what's happening in your account, so you're always wondering whether you left anything running. But I'll show you at the end how we can clean everything up, and with tools like Cast AI we get better visibility into our costs and what's happening behind the scenes, specifically for Kubernetes in this case.

So we see that our new StatefulSet for MySQL has been rolled out, and we can confirm it by running this command: yes, it's ready, with an age of 77 seconds, so it's a fresh one. We can now inspect the configuration again, and this time, instead of the emptyDir volume, we see what we expected: a PersistentVolumeClaim with read-write access and a 30Gi request for this storage class, which is still pending, so we're waiting for it to be provisioned. We can also see how dynamic volume provisioning created the volume automatically on AWS. Let's go to the EC2 console, because EBS lives under EC2 as Elastic Block Store. Under Volumes, I can filter by tag to show only the ones from today: tag equals data-catalog-mysql. We see two of them: one currently in use, another one still becoming available.

While that finishes, let's try the same experiment as before to see whether a file created on our volume persists when the pod is removed; that was the whole purpose of creating this EBS volume. With the first command we create the file in the MySQL directory on the volume, and with the second we double-check with an ls: yes, test.txt is there. Now we remove the catalog-mysql-ebs pod, which forces the StatefulSet controller to recreate it automatically. It's deleted, and we wait until the pod is back and ready. Let's run the check: yes, MySQL is running, and our pod has an age of 18 seconds, so it was restarted. In the previous example this is where we lost the file, but now if I run the ls command again, the test.txt file is still there. That means the file system behind the volume is persistent. Here's a compact sketch of the kind of StatefulSet plus volumeClaimTemplate we just rolled out.
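A minimal sketch under stated assumptions: the names, image, and storage class are illustrative, not the workshop's exact manifest; the shape (one replica, a MySQL container, and a 30Gi volumeClaimTemplate) matches what we just described:

```
kubectl apply -n catalog -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: catalog-mysql-ebs
spec:
  serviceName: catalog-mysql-ebs
  replicas: 1
  selector:
    matchLabels:
      app: catalog-mysql-ebs
  template:
    metadata:
      labels:
        app: catalog-mysql-ebs
    spec:
      containers:
        - name: mysql
          image: public.ecr.aws/docker/library/mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: change-me          # placeholder; use a Secret in real setups
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql # MySQL writes its data here
  volumeClaimTemplates:                 # each replica gets its own EBS-backed PVC
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2           # assumed EBS-backed storage class
        resources:
          requests:
            storage: 30Gi
EOF

# Wait for the rollout, as in the walkthrough.
kubectl rollout status statefulset/catalog-mysql-ebs -n catalog
```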
So if the pod running our database crashes and has to be restarted, the data itself is not lost: the new pod running MySQL connects back to the same data and starts working again. That was our lab for Amazon EBS and how to provide persistent storage to StatefulSets, specifically to a MySQL database — well, not specifically MySQL, but any database application. On Cast AI, I thought we might already see some information about this, but no, it still shows the same nodes.

All right, back to storage, because we have one more storage lab: EFS, another file system solution from AWS. AWS EFS stands for Elastic File System, and it's similar to EBS, but it gives you a very elastic file system. With EBS, if you're running out of space, getting more is more challenging: you have to stop it, reconfigure it, repartition, and so on. It's a pain. I remember at my previous startup our server ran on EBS, and about once every half a year I had to go in and increase the space because our data was running out of room, going through that whole process just to expand the EBS storage. With EFS this is different and much easier, because, as the name says, it's elastic: it can grow with demand, without downtime.

So let's prepare our environment, not for EBS but for the next lab, Amazon EFS. This installs the EFS CSI driver for the cluster and also creates the EFS file system. From the definition: EFS is a simple, serverless, set-and-forget elastic file system for use with AWS cloud services. It's built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. So basically, in comparison with EBS, Elastic File System is a different approach where the storage grows and shrinks automatically.

In this lab we'll work with the assets microservice and its deployment. Where is our application? I need to find the URL — ah, the load balancer we created was removed when we prepared the next environment. Does that mean we can't access it? For the assets, we still will. The assets microservice serves images for our products. Our environment is still being prepared, and we can see the next step: persistent network storage. In our e-commerce application we already have a Deployment created as part of the assets microservice, which runs a web server on EKS. Web servers are a great example for Deployments because they scale horizontally and declare the new state of the pods. The assets component is a container that serves static images for products, and at the moment these product images are added as part of the container image build. In other words, right now we only see a couple of images because they were bundled inside the running container image, and that means if we want to add more products to our catalog, we would have to rebuild the whole container image to include the new images.
As you can understand, rebuilding the Docker image of our application every time we add some images is not a very desirable solution. So in this part of the workshop we're going to add an EFS file system as a persistent volume. The infrastructure for this one takes a bit more time to prepare, so let's see what it will do. If we describe the deployment of our assets, we see that it mounts an emptyDir, the same kind of temporary directory: it initially copies the images from the container and serves them, but if we change something there and then remove the assets pod, we lose the new images. So the assets pod is currently connected to an emptyDir, and as I said, the container has some initial product images copied into it at build time, under the HTML assets folder.

Our environment is ready. Here they demonstrate the problem; I'm thinking we should do it as well, so let's try. We'll see the problem when we create multiple replicas of our assets deployment: with these two commands, when there are multiple replicas, our images live in two different directories, so changes in one directory are not reflected in the other, because they live on two different nodes. To demonstrate that, the first step is to increase the replicas from one to two, so we have two pods. Now let's try to put a new product image, named newproduct, in the HTML assets directory of the first pod using the command below. What it does is take the name of the first pod in the list, exec into that pod, and write a new product file. Then we can confirm the new product is there: copying only the second command from here, we see that the new product is indeed inside the HTML directory of the first pod. But if we run the same check on the second pod, by copying both of these commands — all we change is the pod name, taking the second pod from the list — we see that it only contains the original images, without the new product image.

That's not a good setup for serving assets, because we need a single source of truth: our files should be stored in one place, and every worker should connect to the same volume. To solve this, we need a file system that can be shared across multiple pods while the service scales horizontally, with updates to the files visible everywhere without redeploying. Our desired solution is exactly that: one shared file system, with all our pods connected to it. Here's roughly what that two-replica experiment looked like as commands.
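A sketch of the experiment; the namespace, deployment name, and web-root path are assumptions based on the walkthrough, so adjust them to your manifests:

```
# Scale the assets deployment to two replicas.
kubectl scale deployment assets -n assets --replicas=2

# Drop a new product image into the FIRST pod's local emptyDir.
POD_1=$(kubectl get pods -n assets -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD_1" -n assets -- touch /usr/share/nginx/html/assets/newproduct.png

# The SECOND pod has its own directory, so the new file is missing there.
POD_2=$(kubectl get pods -n assets -o jsonpath='{.items[1].metadata.name}')
kubectl exec "$POD_2" -n assets -- ls /usr/share/nginx/html/assets/
```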
Let's go to the next lesson and actually do that. First, make sure the EFS CSI driver is installed; it was installed automatically when we prepared the environment, but you can follow this URL if you need to integrate it into your own stack. Next we get the ID of the EFS file system: if we look in the console under EFS, we can see the file system created for the eks-workshop. We save that ID in a variable called EFS_ID and use it in our configuration for the storage class, which we apply by executing this command. The configuration comes from fundamentals/storage/efs; we can look at that file under modules/fundamentals/storage — not EBS, EFS — where the storage class is defined, and at the top we see the important fields. Now we get and describe the storage class using the command below, and notice that the provisioner is the EFS CSI driver: yes, we see EFS in the storage class that was just created.

So we created the storage class; now we need to attach it to our pods using a persistent volume. You can inspect the file that does this: it creates an EFS claim for five gigabytes, read-write, and mounts it on our assets deployment under volumes. We apply these changes by copying the commands: one applies, one rolls out, and the deployment is successfully rolled out. We can look at the volume mounts to see which volumes were mounted on our assets deployment: the first one is mounted on the HTML assets path and is called efsvolume, and the second is the temporary one that's always there. This confirms that a volume named efsvolume was mounted to our assets deployment. We can also look at the PersistentVolume and at the PersistentVolumeClaim itself, the claim we made for resources: we see the capacity, 5Gi, in the assets namespace.

Now we can run the same test with a newly created image. First we create the new product image on the file system by exec-ing into the first pod, so we still connect to the first pod and add a new product image there. Then we verify it exists on both pods: if we do an ls in the second pod, the new product is there, and it's present in both pods because they're connected to the same file system; running the same check on the first pod shows the new product as well. You can also remove a pod, let it be recreated, and the file will still be there. So that's how we can use Elastic File System, an AWS service, to provide a persistent volume to our deployments. The difference from the EBS lab: with EBS we provided storage to a StatefulSet, while with EFS we simply connected it to a Deployment. Here's a sketch of the EFS wiring we just applied.
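A minimal sketch of the storage class and claim, assuming dynamic provisioning via EFS access points; the file-system ID, names, and size are placeholders, so take the real manifests from the workshop repo:

```
# Placeholder: put your real file-system ID here (see the EFS console).
export EFS_ID=fs-0123456789abcdef0

kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com        # the EFS CSI driver
parameters:
  provisioningMode: efs-ap          # dynamic provisioning through EFS access points
  fileSystemId: ${EFS_ID}
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
  namespace: assets
spec:
  accessModes:
    - ReadWriteMany                 # the key difference: many pods, one shared file system
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
EOF
```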
Okay, the last lab I have prepared for today is managed node groups: how we can manage our nodes, increase capacity, get more nodes into our stack, and so on. Let's prepare the fundamentals environment for managed node groups; give me just one second. While we're waiting for the environment to set up, I wanted to take a moment to ask you for a very small favor: if you're enjoying what you're seeing and learning something new today, please subscribe to the channel; it helps us a lot to reach more developers like yourself. We set ourselves a realistic goal for this year to reach 100,000 subscribers by the end of the year; we're currently at around 84,000, with this goal set for the last three months of the year. I know it's possible, because 99% of you watching right now are not yet subscribed; I can see from the analytics that only 1% of watch time comes from subscribers. So if you're enjoying our tutorials, please help us by subscribing; that really helps us make better content and reach more people. We have two months until the end of the year, and I really believe you can make this happen. Thank you in advance.

Okay, the environment is actually ready, so let's go through the documentation. In the getting-started lab we deployed our sample application to EKS, so we have pods running; but where are these pods running? We already know they run on EC2; we saw that, and I can show you directly: if we go to EC2, we see three instances running for our eks-workshop, the default nodes, and you'll notice they are m5.large and that all of them are in different availability zones: a, b, and c. A node group is one or more of these EC2 instances deployed in an auto-scaling group so EKS can manage them; they are standard EC2 instances. The managed node group feature itself doesn't cost anything extra; we're only charged for the resources we use, and EC2 is one of them.

We can inspect the default managed node group that was pre-provisioned for us by executing this command, and we see its information: the type, the AMI ID, the minimum size of three, the maximum of six, the desired capacity of three, and the instance type. We can also see which availability zones the nodes are deployed into; I'm too zoomed in so it's a bit hard to read, but these are zones a, b, and c, which we already saw.

Now let's look at how we can add nodes to our cluster. This usually happens when you get a lot of workload and your cluster doesn't have enough resources to run it, so you need to provide more nodes. We're going to use the eksctl command to scale up our node group. First, retrieve the current node group scaling configuration; we actually already did that and saw minimum three, maximum six, desired capacity three. What we can do is scale the desired capacity from three to four by executing the next command: I do a clear, then run eksctl scale nodegroup, targeting the node group and the cluster, asking for four nodes. If we then go to EC2 and refresh, we see a new, fourth instance added, with status initializing. Just like that, with one simple command, we added one more EC2 instance to our cluster to handle new load. It may take two to three minutes for the node provisioning and configuration changes to take effect. Here's a sketch of those commands.
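A sketch of inspecting and scaling the node group; the cluster and node group names (eks-workshop, default) are assumptions based on what the console showed, so substitute your own:

```
# Inspect the pre-provisioned managed node group (min/max/desired, instance type).
eksctl get nodegroup --cluster eks-workshop --name default

# Bump the desired capacity from 3 to 4 nodes.
eksctl scale nodegroup --cluster eks-workshop --name default --nodes 4

# Watch the fourth node register and become Ready.
kubectl get nodes --watch
```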
Once it takes effect, we can retrieve the scaling configuration again: the minimum size is still three, the maximum six, but the desired capacity is now four. We can wait for everything to finish; the desired capacity changing doesn't mean the node is initialized yet, so let's run the command that shows the status of the nodes in our group, and we see four of them Ready: three with an age of more than 100 minutes, and one fresh one we just added. We can keep scaling up; for example, I can scale the node group to six nodes. If, say, we're running a campaign during Black Friday and expect three times more traffic to our website, it's good practice to get more resources for the cluster in advance to meet the new traffic demand. Let me see what's happening: still four of them ready, so I'll wait until all of them reach the Ready condition. Yes, now all of them are.

If we look at our EC2 instances now, we see a lot more of them, and we're going to have to scale down pretty fast, because if we go back to Cast AI and look at the updated dashboard, we see it can now save us 60% of the cost: $278 a month, which over a year is around $3,344. That's where their claims of cutting costs in half come from: in a lot of workloads you'll find plenty of unused resources, similar to what I have here in my simple cluster. What we see is that I'm currently using six m5.large instances, which cost me almost $500 a month, and in the optimized cluster configuration that comes down to about $190. After booking a tech session you'll be able to see the actual data for your own setup; yesterday I was able to go through it with them. There are also the security, workloads, and namespaces views, and the dashboard shows our six nodes.

So we saw how to scale up; to scale down we use the same scale command and just provide how many nodes we want, so let's go back to three nodes and see it scale down. If I refresh here: desired capacity updated, scheduling disabled on three nodes; they'll be removed soon, they're leaving our cluster. The next step in the docs is about adding nodes, which we've now covered in both directions.

The next topic is pod affinity and anti-affinity. What does that mean? These are rules that specify whether different pods should run on the same node. For example, here we have the checkout pod and checkout-redis, which is its caching mechanism. At the moment these two have no placement rules, so they can be scheduled on different physical nodes, and then the communication between checkout and its cache takes a little longer because of the time needed to talk between two different servers. What we can do is say that the checkout service should always run on the same node as checkout-redis, ensuring the Redis cache runs locally next to the checkout pod for the best performance. That's one kind of affinity rule: how we want pods placed relative to each other across nodes.
Let's make sure our checkout and checkout-redis are running: yes, both applications have one pod running. Now let's find out where they're running, on which node of our cluster. The checkout pod is on the node whose IP ends in 119.53, but the Redis pod is on a node with a different IP, which means a different node. This is somewhat random; for you they might actually land on the same node, but it isn't enforced. In my case they were scheduled on two different nodes.

With pod affinity, in this step we change the spec of our checkout pod and add an affinity rule: the node should already be running a pod with the checkout-redis component. Anti-affinity is the opposite kind of rule, a condition that must not be met: here it requires that no checkout pod is already running on the node, so we never get more than one checkout pod per node. To make the changes, let's copy the commands: the first one deletes, and then we apply the new checkout manifest with the affinity. So the pod affinity ensures checkout-redis is already running on the node (because we can assume the checkout pod requires checkout-redis), and the anti-affinity requires that no checkout pods are running there already.

It successfully rolled out. Now let's scale checkout to two replicas and validate where each pod is running. The first checkout pod runs on the node ending in .53, and the Redis pod runs on the same node, since it has the same IP; but the second checkout pod doesn't have an IP yet, which means it isn't running. Why? Because of our configuration we have two rules: checkout needs checkout-redis on the same node, and there's only one checkout-redis, while the anti-affinity prevents a second checkout on that same node. So the scheduler can't find a node that satisfies both the affinity and the anti-affinity. If we scale checkout-redis up to two instances, there will be somewhere to run the second checkout. Let's do that by applying the next configuration, which also adds the affinity rules for checkout-redis, and scale it to two replicas as well. Now if we check the running pods, we see two for checkout, both running, and two for checkout-redis, and if we check where they're running, the checkout and checkout-redis pods run in pairs: the first and the last share a node IP, and the second and third share another node.

That's everything on affinity and anti-affinity. I'll stop here; there's more material on taints, which are a way to put custom configuration on a node and then use tolerations to specify which pods are allowed to run on that node. Here's a sketch of the kind of affinity block we just applied.
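A sketch of the affinity and anti-affinity rules on the checkout deployment. The labels, namespace, and image are assumptions for illustration (the real manifests live in the workshop repo); the shape of the rules matches what we just walked through:

```
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  namespace: checkout
spec:
  replicas: 1
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      affinity:
        podAffinity:              # MUST share a node with a checkout-redis pod
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: checkout-redis
              topologyKey: kubernetes.io/hostname
        podAntiAffinity:          # but never two checkout pods on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: checkout
              topologyKey: kubernetes.io/hostname
      containers:
        - name: checkout
          image: nginx:alpine     # placeholder image for the sketch
EOF
```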
Spot instances are also worth a deeper look: they're a way to get very large cost benefits on EC2 instances, and they work by bidding on unused AWS capacity. The thing is, you pay far less for a spot instance than for an on-demand instance, but the challenge is that if the price rises when there's higher demand, you can lose access to the instance. So this is very good for workloads that don't need to be up and running at all times. For example, if you have background jobs doing sorting or classification and they aren't time-bound — you don't need them always running, just once a week — you can execute them on a spot instance, bid a small amount, and get a lot of benefit.

All right guys, that was our EKS 101 workshop. Before we go, I promised we'd delete the environment, and we'll do that in a moment, but I highly encourage you to go through the other labs here as well. For example, autoscaling: it's a very important aspect of running applications at scale, because with autoscaling you can define rules to scale your resources up automatically. We saw how to manually scale by resizing the node group, but with autoscaling you can define rules like: if the servers hit 80% CPU usage, bring a new EC2 instance into the cluster to provide more resources, and if it's unused for some period, remove and release it, so you don't overspend. Check out autoscaling of both compute and workloads: with compute you bring in more EC2 instances; with workloads you increase how many pods are running for your application. Check out the other labs too; they're really in-depth, really easy to follow, and I learned a lot.

Once you're done, though: if you look at Cast AI, keeping the three nodes we created here would cost around $200 a month or even more, maybe $250, and we don't want that, so we should delete the things we created. Back on the setup page, under "In your AWS account", scroll down to the eksctl page; at the end there's a cleanup section. The first step is to run the delete-environment command inside our Cloud9; this removes everything we created during this workshop specifically for our application. It doesn't remove the EKS cluster yet, only the application, and it takes a bit of time. The next step is to delete the whole cluster, which includes our EC2 instances, autoscaling groups, load balancers, EBS volumes, and so on. I'm not sure I'll be able to stay with you for all of it, because this one takes a while. There's also a third step on the cleanup page, because the Cloud9 environment was created through AWS CloudShell and has to be removed separately. Roughly, the whole cleanup looks like this.
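A sketch of the three cleanup steps in order. delete-environment is the workshop's helper script; the cluster and stack names are placeholders, so use whatever you chose during setup:

```
# 1. Inside Cloud9: remove the sample application and workshop resources.
delete-environment

# 2. Still inside Cloud9, after step 1 finishes: delete the EKS cluster itself
#    (this tears down the EC2 nodes, autoscaling groups, and load balancers).
eksctl delete cluster --name eks-workshop

# 3. From AWS CloudShell, only after the cluster is gone:
#    delete the CloudFormation stack that created the Cloud9 IDE.
aws cloudformation delete-stack --stack-name eks-workshop-ide
```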
To recap the order: after the first two steps, the Cloud9 environment will still be there, because we created Cloud9 through AWS CloudShell, so the last step is to remove the Cloud9 environment, and we should do that only after finishing everything else. First step: delete-environment in our Cloud9, which we're doing right now. Second step: eksctl delete cluster, also inside Cloud9; the first one finished, so I can run the second. And only after that is finished do we go to the AWS console, look for CloudShell, and, once the cluster is deleted, execute aws cloudformation delete-stack with the stack name you provided; in my case the name has a 1 in it. Hopefully I didn't run it too early. This also removes your Cloud9 environment, the EC2 instance it used, and everything we created today in this tutorial.

To double-check, most of the things will be in EC2, so go to the EC2 dashboard and see how many instances you have running. I see I still have a few, but I think they're in the process of being deleted; maybe the Cloud9 one is still closing. On the dashboard, make sure you don't have any running instances. Volumes you should delete manually — yes, that's true, this is another step you have to take, because volumes have delete protection so you don't lose their data by accident. So after everything else is deleted, make sure to come back here and delete the volumes as well. And that's it: everything else, like load balancers and autoscaling groups, is deleted automatically for us, and we won't be charged anymore. Here in Cast AI we see our nodes are still connected but slowly being deleted. Oh, I made a mistake; hopefully I told you not to delete the CloudFormation stack before deleting the cluster. But anyway.

That was our AWS EKS workshop; I hope you enjoyed it. If you have any questions I can answer right now, feel free to ask them in the chat. If you enjoyed this one, consider subscribing to the channel; we do a lot of tutorials here to help you become a better developer. And finally, a big thank you to Cast AI for sponsoring and making this video possible. I'll see you next week with more tutorials, probably about mobile development; if you're interested in learning more about that, check out our premium courses at academy.notjust.dev or follow the link in the description for a free masterclass. Anyway, have a great rest of your day. Bye-bye, guys!
Info
Channel: notJust.dev
Views: 4,188
Keywords: vadim savin, not just development, notjust.dev, live coding, AWS, AWS EKS, Kubernetes, EKS deployment, Cast.ai, cost optimization, Kubernetes on AWS, EKS tutorial, AWS cloud, AWS services, Kubernetes cluster, EKS cost savings, AWS Kubernetes guide, EKS setup, cloud computing, Kubernetes deployment, Cast.ai integration, EKS cost reduction, AWS infrastructure, AWS tutorial, AWS cloud services, cloud-native, AWS cost management, AWS EKS workshop, Kubernetes tutorial
Id: qRSB0aPcf_s
Length: 164min 6sec (9846 seconds)
Published: Sat Nov 04 2023