AWS Tutorials | Classic ELB Configuration & Auto Scaling Masterclass | May 20, 2019

Captions
in this session we'll see how to create and configure a Classic Load Balancer. Before I can deploy the load balancer, I have to configure the security groups. Now you may ask: what's the purpose of the security groups in this entire configuration? Let's look at the ELB setup. Imagine these are my users, and they are sending traffic to the load balancer, which forwards it to a group of EC2 instances. As you can see, the load balancer is exposed to the internet, so it's extremely important that I apply a firewall — that's what a security group is — to the load balancer, to ensure it isn't exposed to unwanted traffic.

Let me draw it again, because I want to walk through the configuration. These are my users, sending traffic to the load balancer, and the load balancer forwards the traffic to, let's imagine, ten EC2 instances. Now, it's extremely important to understand how to formulate the security groups here. First, we create a new security group for the load balancer and name it ELB-SG — this is the group I'll assign to the load balancer. For the ten instances I'll have one common security group, EC2-SG. These are the two groups I'll use in the entire config to protect my application servers.

So how exactly do we structure the inbound and outbound rules? Let's start with ELB-SG. Imagine we're sitting on top of the load balancer: where should the incoming traffic come from? From the external users on the internet — the load balancer is receiving web traffic, HTTP/HTTPS. So the inbound rule of ELB-SG allows HTTP/HTTPS from anywhere, 0.0.0.0/0, meaning any external user on the internet. What about the outgoing rules? They say HTTP/HTTPS is sent to EC2-SG. Instead of sending the outgoing traffic to each instance individually — which would require configuring ten different outbound rules — we create a single rule whose destination is the instances' group, EC2-SG. I'm showing you this layout first so you understand the concepts.

On the other hand, let's define the rules in EC2-SG, the group that will be assigned to all ten instances running behind the scenes. If we're sitting on top of the instances, where does the incoming web traffic come from? From the load balancer. So the inbound rule allows HTTP/HTTPS from ELB-SG — we only accept traffic from the security group assigned to the load balancer. Then the outgoing rules: we allow HTTP/HTTPS out to anywhere, 0.0.0.0/0. (One correction worth making here: with a Classic Load Balancer the responses actually travel back through the load balancer rather than directly to the users, and since security groups are stateful, response traffic is permitted automatically anyway — the open outbound rule simply keeps the configuration simple.) So I hope the crux of this discussion is clear: we ensure the traffic comes from the exact source and goes to the exact destination, and we don't compromise on security.

Now let's go ahead and create these two groups. I go back to my AWS Management Console — let me minimize this screen — and open EC2. I already have a running instance from a previous weekend session; let me stop it. We're on the EC2 dashboard in the Mumbai region. In the navigation pane on the left-hand side I go to Security Groups, under Network & Security, and click Create Security Group.
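To keep the four rule sets straight, here is a minimal Python sketch of the two groups as plain data, plus a tiny lookup helper. This is my own simplification for illustration — the group names follow the video, but the rule model is not the real EC2 API:

```python
# Simplified model of the two security groups described above.
# A rule is (protocol, port, source-or-destination); the last field can be
# a CIDR block or the *name of another security group*.
security_groups = {
    "ELB-SG": {   # attached to the load balancer
        "inbound":  [("HTTP", 80, "0.0.0.0/0"), ("HTTPS", 443, "0.0.0.0/0")],
        "outbound": [("HTTP", 80, "EC2-SG"),    ("HTTPS", 443, "EC2-SG")],
    },
    "EC2-SG": {   # attached to every instance behind the load balancer
        "inbound":  [("HTTP", 80, "ELB-SG"),    ("HTTPS", 443, "ELB-SG")],
        "outbound": [("HTTP", 80, "0.0.0.0/0"), ("HTTPS", 443, "0.0.0.0/0")],
    },
}

def allows_inbound(group, port, source):
    """True if `group` has an inbound rule admitting `source` on `port`."""
    return any(p == port and s == source
               for _proto, p, s in security_groups[group]["inbound"])

# The load balancer accepts web traffic from anywhere on the internet...
print(allows_inbound("ELB-SG", 80, "0.0.0.0/0"))   # True
# ...but the instances reject the open internet and only trust ELB-SG.
print(allows_inbound("EC2-SG", 80, "0.0.0.0/0"))   # False
print(allows_inbound("EC2-SG", 80, "ELB-SG"))      # True
```

Using the group name as a source or destination — rather than ten private IPs — is exactly what collapses ten per-instance rules into one.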
One thing before that: let me delete the duplicate groups I have lying around — give me a few seconds here — because they'd just cause confusion, which I don't want. Okay, done. Now let's create the two groups. I click Create Security Group, assign the name ELB-SG, and in the description I type: "This security group will be assigned to our load balancer." I choose my VPC (virtual private cloud), don't formulate any inbound or outbound rules for now, and go straight to Create. Then I click Create Security Group once again and create the second group real quick: I name it EC2-SG and in the description I type: "This security group will be assigned to all the EC2 instances that will be registered to our load balancer." I choose the same VPC, again without configuring any rules, and click Create. So now I've created two different groups; let me tag them: ELB-SG, and for this one, EC2-SG.

Now let's start defining the inbound and outbound rules of ELB-SG, the group that will be assigned to my load balancer. Under Inbound I hit Edit and allow HTTP from anywhere, with the description "HTTP traffic from the end users"; similarly I include HTTPS from anywhere, "HTTPS traffic from the end users," and click Save. Then I go to Outbound, hit Edit, and allow HTTP to a custom destination — I type EC2-SG, can you see that? I'm saying I want the entire web traffic forwarded to the group of instances, so the description is "HTTP traffic to be forwarded to EC2 instances." Similarly I include HTTPS with the custom destination EC2-SG. Can you relate this to the earlier diagram? The load balancer gets traffic from any end user, and the same traffic is forwarded across to the instance group behind the scenes. Click Save. If I didn't use the group here, I'd have to create ten different rules for the ten instances, specifying ten individual private IP addresses as destinations — so this brings down the amount of configuration I need.

Now I go to EC2-SG and start configuring it. I go to Inbound and hit Edit. Because the instances will get their traffic from the load balancer, I allow HTTP from the custom source ELB-SG — let me type that in — with the description "Incoming HTTP traffic from the load balancer." Similarly I choose HTTPS, "Incoming HTTPS traffic from the elastic load balancer," with the custom source ELB-SG once again. Click Save.
Done. So we're saying the entire web traffic must come from the load balancer's security group. Then what about the outbound rules? I hit Edit, remove the default all-traffic rule, add HTTP to anywhere with the description "HTTP response to end users," and similarly HTTPS to anywhere, "HTTPS response to the end users." Click Save.

So I hope this is clear: we create and configure these rules so that we can apply these security groups to the load balancer and to the instances before we start deploying them. This is extremely important, because it enhances the security of the load balancer and of the instances running behind the scenes.

The next thing to configure is optional — I'm not saying it's required — and that's the SSL/TLS certificate. What's its purpose, and why do you need it? Imagine a user — let's call him Tom — connecting to your banking server for a transaction. Whenever users make a transaction, it's very important that this connection is fully closed and encrypted, because if you don't encrypt it, anyone can pose as Tom, get into the stream, and make transactions. So we want our servers and websites to be secure enough that the connection established from the user to our servers is encrypted.
How do we get that encryption? On your servers — your websites — you upload something called the SSL/TLS certificate. This makes your website, or the front-end web servers, secure for the end users. You've seen this on almost all secure websites: the simple padlock icon. It means the connection is secure; if I click on it, it says "this connection is secure," and I can click Show Certificate — here's the SSL/TLS certificate, in this case issued by Amazon. You can see "Secure Sockets Layer (SSL)": the certificate is issued to ensure that the channel established from my end to this website is encrypted and kept secure.

So, to make a website secure you upload these certificates behind the scenes — but where? One way is to upload the certificate to the back-end instances directly, because ultimately all the websites, applications, and services run on those instances. But if you have a load balancer running at the front end, why not upload the same certificate to the load balancer? That process is called SSL termination. What happens in SSL termination? Imagine my users send their requests across to the load balancer. We upload the SSL/TLS certificate to the load balancer, and the load balancer sends all the traffic to the instances — these may be 50 instances. Instead of uploading the certificate to each and every instance, you upload it once to the front-end load balancer. The load balancer encrypts the front-end connection — the front-end connection is fully encrypted — and then decrypts the traffic and forwards it, so the decrypted traffic is sent across to the instances behind the scenes. That whole process is SSL termination: you upload the SSL/TLS certificate for the load balancer.

Now, to issue a certificate you have to have a domain. Let me go to Route 53 and confirm I'm signed in to the same account where my domain lives — okay, there it is. I'll show you a small prototype of how to issue a certificate. The main service used for this is Certificate Manager, which you can find under Security, Identity & Compliance — the same section where you have IAM. Scroll down and you'll see Certificate Manager; it helps you issue free SSL/TLS certificates for your domains. Let me show you the process — I can't show the entire upload flow end to end, because my domain is with another provider and for some reason ACM isn't able to validate the certificate, but let's go through it. On the main page of AWS Certificate Manager you can see the description: this service "makes it easy to provision, manage, deploy, and renew SSL/TLS certificates" on the platform. I go to Provision Certificates on the left-hand side, click Get Started, and request a public certificate.

I click Request a Certificate and specify the domain — for example I have the domain learnawsthehardway.com, so I'm requesting an SSL/TLS certificate for it. I can also get the certificate issued for my subdomains: I click "Add another name to this certificate" and include a wildcard — an asterisk, then a dot, then my domain. This wildcard will cover the subdomains, for example www.learnawsthehardway.com. Click Next. This is where I get stuck, because I think there's some issue with my domain. I can go with DNS validation or email validation; if I go with email validation, I should receive an email to authorize the certificate. I click Review, then Confirm and Request, then Continue. It should now send me an email, but behind the scenes my address isn't being recognized as the domain administrator, so I can't receive it. The one step I can't show you is this: you receive an email with a link, click the link, and click "I Approve" — that's it, the certificate is approved. You can see here: "Validation not complete — the status of the certificate request is Pending validation; further action is needed to validate and approve the certificate." Once you approve it, the status becomes Issued. So that's how you can make use of AWS Certificate Manager to issue the free SSL/TLS certificates that you may upload later on to the load balancer. These are the two prerequisites, and this certificate one is optional, not mandatory.
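One detail from the wildcard step is worth pinning down: `*.yourdomain.com` covers first-level subdomains like `www`, but not the bare apex domain itself — which is why the demo requests both the plain domain and the `*.` name. A rough Python sketch of that matching rule (my own simplification, with a placeholder domain — not ACM's actual validation code):

```python
def wildcard_covers(cert_name, hostname):
    """True if a certificate name (possibly '*.domain') matches a hostname.
    A wildcard matches exactly one label, per common TLS practice."""
    if not cert_name.startswith("*."):
        return cert_name == hostname
    base = cert_name[2:]                          # e.g. 'example.com'
    first_label, dot, rest = hostname.partition(".")
    return dot == "." and rest == base and first_label != ""

print(wildcard_covers("*.example.com", "www.example.com"))    # True
print(wildcard_covers("*.example.com", "example.com"))        # False: apex excluded
print(wildcard_covers("*.example.com", "a.b.example.com"))    # False: one label only
```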
The certificate is only needed if you allow HTTPS or SSL traffic at the front end. For example, this website is secure — it's HTTPS. Let me go back to the same note: if I include HTTPS or SSL at the front end, then I need this certificate; if I include neither of those two protocols, I don't need the certificate at all. It's not mandatory.

Now I go to Services, open EC2, and start showing you the process of deploying a load balancer. I go to Load Balancers on the left-hand side. One thing you have to understand: the load balancer is a region-specific resource — it works across multiple AZs within a single region. I click Create Load Balancer in the top-left corner, and you can see the different types of load balancers I can deploy: Application, Network, and Classic.

What's the difference? The Classic Load Balancer works at two layers: transport (layer 4) and application (layer 7). It's a very simple load balancer with no advanced routing mechanisms. It's previous generation — basically AWS says don't use it, use one of the other two — but it's still covered in the exam, and you'll get a few questions on Classic Load Balancer configuration.

Then we have the Application Load Balancer, which is much more advanced than Classic. It works only at layer 7, which means it handles HTTP and HTTPS traffic, and it's mainly used for content-based routing: you can route the traffic to IP addresses, ports, containers, and microservices. Maybe on a single instance you have several different services running; you can divert the traffic based on the ports, the content, and the services running on the instances. So it's much more advanced.

The Network Load Balancer is the newest type, needed when you require ultra-high performance. It's very fast, with the ability to handle millions of requests per second while maintaining ultra-low latency, and it's the costliest of all the load balancers. It's used in cases where your application has to take care of millions of requests coming in per second — typically big e-commerce platforms — and you can also assign a static Elastic IP to it. We'll discuss that afterwards; the main thing to remember is that the Network Load Balancer is used where you're looking for ultra-high performance and the same load balancer needs to handle millions of requests per second.

I could deploy any of these — it doesn't make much difference right now, because at this point we're after the basic concepts. I'll deploy the Classic one, and afterwards the Application one; the Network Load Balancer isn't part of the curriculum for now. Why am I starting with Classic? Because in the exam a few questions come from this load balancer, and if I show you the deployment of the Classic Load Balancer, deploying the Application one becomes a piece of cake — the Classic config is the more involved of the two, and the configurations are almost the same, with I'd say not more than a 5–10% difference at most.
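The three-way comparison above boils down to a short decision rule. Here's a toy Python helper that condenses it — purely a summary of this walkthrough's logic, not anything AWS provides:

```python
def pick_load_balancer(content_based_routing=False, ultra_high_performance=False):
    """Condense the Classic/Application/Network comparison into one rule."""
    if ultra_high_performance:
        # Millions of requests/sec, ultra-low latency, static Elastic IP support.
        return "Network Load Balancer"
    if content_based_routing:
        # Layer 7 only: route by port, path, or service (e.g. microservices).
        return "Application Load Balancer"
    # Previous generation, simple layer 4/7 -- still shows up in the exam.
    return "Classic Load Balancer"

print(pick_load_balancer())                               # Classic Load Balancer
print(pick_load_balancer(content_based_routing=True))     # Application Load Balancer
print(pick_load_balancer(ultra_high_performance=True))    # Network Load Balancer
```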
So I go with the Classic Load Balancer and click Create. Once the wizard opens, the first thing I do is assign a name — I'll name it Mumbai-ELB — and pick the VPC where this load balancer will be running. Generally the load balancers you create are public-facing, or external, load balancers: the ones exposed to the external users, to the internet. In very few cases you make a load balancer internal in nature; an internal load balancer just takes care of internal routing and isn't intended to receive any requests or traffic from the internet. So I'll leave the "internal" option unchecked, to have this load balancer act as a public-facing load balancer.

Next I check "Enable advanced VPC configuration." Once I check this option, I have to choose at least two subnets in different AZs to provide higher availability for the load balancer. It's asking you to choose two subnets across two different zones; the purpose is higher availability — if one of the AZs goes down, you just start using the second one. Under Actions I click the plus sign to select both subnets, which are associated with two different Availability Zones.

The last part of this step is the listener configuration — the process of listening for requests. The Load Balancer Protocol and port determine my front-end traffic, and the Instance Protocol and port define my back-end traffic. The front end is the connection coming from the users to the load balancer; the back end is what's sent, or forwarded, from the load balancer toward the back-end instances. Since we're using Classic, we can choose any of these protocols — HTTP, HTTPS, TCP, or SSL — at the front end and at the back end. For simplicity I'll go with HTTP at the front end and HTTP at the back end. If I chose HTTPS or SSL at the front end, then on the fourth step I'd have to upload the SSL/TLS certificate; for now I'll keep this config simple and include only HTTP on both sides.

I click Next: Assign Security Groups and pick the group we pre-configured, ELB-SG — this one. Then Next: Configure Security Settings. This third step is where we'd upload the certificate, but since we haven't included the HTTPS or SSL protocol for the front-end connection, we don't have to upload anything; click Next: Configure Health Check.

This is where we configure the basic health-check options. The main things to look at are the protocol and the port: by default it uses the HTTP protocol on port 80 for the health-check traffic, and you can change that if you want. The ping path is the default web page of the website, web application, or web server running on the back-end instances; in my case it's just a forward slash. Under Advanced Details are the timer values — the seconds that drive your health checks. The main one to look at is the interval, 30 seconds by default: after every 30 seconds the health check is repeated; it's the amount of time between health checks. The minimum is 5 seconds and the maximum is 300, so you can adjust the default interval if you want. The rest you don't really have to touch.

I click Next: Add EC2 Instances. I don't have any instances right now — I'll deploy them afterwards and register them then. On this step there are also two options we've discussed: cross-zone load balancing and connection draining.
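A note on those health-check timers: an instance is only marked unhealthy after several consecutive failed checks, so the real detection time is roughly the interval multiplied by the unhealthy threshold. A quick back-of-the-envelope helper in Python (the threshold value of 2 here is illustrative — check Advanced Details for your actual settings):

```python
def detection_time(interval_s, unhealthy_threshold):
    """Seconds of consecutive failed checks before an instance is
    marked OutOfService (interval between probes x failed probes needed)."""
    return interval_s * unhealthy_threshold

# With the default 30-second interval and an illustrative threshold of 2 probes:
print(detection_time(30, 2))   # 60
# Tightening the interval to the 5-second minimum speeds up detection:
print(detection_time(5, 2))    # 10
```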
Cross-zone load balancing means every instance gets an equal amount of traffic from the load balancer — the load balancer performs an equal distribution of the traffic across all the instances. Connection draining means that if an instance is unhealthy, then before taking it out of service the load balancer ensures the existing connections are taken care of, while in the meantime new connections are sent to the other healthy instances; by default it's 300 seconds.

Now we click Next: Add Tags — we can add a tag value here, but it's optional — then Review and Create. We can review any of these configuration parameters if we want; otherwise we click Create, and the load balancer is created. It will take some time for the load balancer to become active. In the meantime I'll deploy two instances using a script; that script installs an Apache/PHP web application on the instances. I'll also give you a copy of the script afterwards so you can have a look at it — just give me a few seconds to fetch it. Okay, here's the script; I'll share it with all of you and show you how to use it. Note that this script works on this AMI only: the Amazon Linux AMI. I click Launch Instance, select that AMI, and go with the same instance type as in my class.

I click Next: Configure Instance Details, choose my default VPC, and this time I explicitly choose the subnet in ap-south-1a. I go to Advanced Details and copy-paste the script — again, this script installs the Apache/PHP web application on the instance, so make sure you copy and paste it properly. Then Next: Add Storage, Next: Add Tags — I assign the key Name with the value "Instance A." Next: Configure Security Group — I pick the EC2-SG group we pre-configured — then Review and Launch, Continue, Launch, choose one of the key pairs, and launch the instance. Similarly I launch a second instance with the same AMI and instance type, but this time in a different AZ, ap-south-1b. I paste the same script under Advanced Details, name it "Instance B" in the tags, pick the same security group, EC2-SG, then Review and Launch, Continue, Launch, choose a key pair, and launch.

Now I'll show you exactly what output you get. I'll temporarily manipulate the instances' security group and allow HTTP from anywhere, just for this example, so we can test whether the web application running on the instances works. I go back to the instances, choose Instance A, copy its public IP, and paste it into my browser — let's see what we get. Can you see that? Browsing the instance by its public IP shows the identity of the instance and the Availability Zone where that instance is running. So it works, and I can remove that temporary rule now that we've tested it.
After all, the HTTP traffic is supposed to come from the load balancer, and that's exactly the setup we'll use from now on — so the temporary rule goes, and I'll leave everything else as it is.

Now I go to Load Balancers on the left-hand side, open my load balancer, go to the Instances tab, click Edit Instances, add A and B, and hit Save. As you can see, the instances show OutOfService at first; if we hit refresh, within a few seconds they become InService — right now the instances are in the process of registering themselves to the load balancer, so we wait a few seconds. Okay, here we go: as you can see, the instances are InService.

Now if I scroll to the Description tab, this is the DNS name of the load balancer. I copy this DNS name and paste it into the browser — can you see that? It sends my traffic to the instance in ap-south-1b. Hit refresh: the instance in ap-south-1a. Refresh again: ap-south-1b. Refresh: ap-south-1a. Let me copy and paste it in the chat window so you can browse it on your own. In a real-world situation I'd link this load balancer's DNS name with my own domain — xyz.com, for example — using a service called Route 53, which we'll discuss this week or next. So this is how we test our load balancer.

Now, the last option we'll discuss here is called stickiness. Stickiness means you can stick a user's session, because right now, if you go to the link and hit refresh, you're sent across multiple instances — it's using a round-robin algorithm, so the next time you come in you may land on a different instance.
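What you just saw in the browser — the AZ alternating on every refresh — is round-robin in action. A toy Python simulation of that default routing (hypothetical instance labels, not real AWS objects):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of the Classic Load Balancer's default routing."""
    def __init__(self, instances):
        self._ring = cycle(instances)   # endless A, B, A, B, ...

    def route(self):
        """Each request (browser refresh) lands on the next instance in turn."""
        return next(self._ring)

lb = RoundRobinBalancer(["Instance A (ap-south-1a)", "Instance B (ap-south-1b)"])
for _ in range(4):
    print(lb.route())
# Alternates A, B, A, B -- exactly what refreshing the DNS name showed.
```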
need to override this this default behavior by using the concept called stickiness so what happens to the stickiness is let me just discuss with you so let me just [Music] okay so let me just discuss with you the this stuff give me a few seconds over here so over here the next thing we'll discuss with you is the stickiness so in the stickiness what happens is Stratton once again so in the stickiness you intentionally stick they use this session to a single instance for example if this is my user he's being sent to a load balancer he's been sent to a load balancer and Loeb answer has two instances a and B now if for the very first time if his request has been sent to the B and the sickness is checked out the Loeb answer will save a cookie that is a WS e lb which consists of a timer value it is the 60 seconds for example in our case so sir it's a timer value so for next 60 seconds if the same user comes in again he will be sent to the same instance this called a stickiness we intentionally stick the users session to only one instance for a predefined time frame right so the user comes back in the notice of a chip cookie settings and for next 60 seconds the users request will be sent to 21 Astaire's this is many use in the cases where maybe you're running some payment applications bonuses you wanted the the payment transitions payments are the transitions to be completed before the the users session may land to a different instance or maybe you're running at an e-commerce platform and you want the transition to be completed each transaction to be completed before users requests having sent to the other server so that's a stickiness now how enable it by default is to say both you hit edit you just go over the load balancer generator cookie stickiness or if you have created a cookie when you have created your application you can go with this one I go with the lobe answer generated cookie stickiness and define the expression period of 60 seconds it's safe now you guys if 
Now if you go to the same link and hit refresh again and again, you will see that the instance does not change, because stickiness has kicked in; no matter how many times you refresh, you will not be sent to any other instance for the next 60 seconds. That's the main use of stickiness. Next, under access logs, you can enable access logging and have the logs delivered at an interval of either 5 minutes or 60 minutes. The logs are saved to an S3 bucket in the same region as the load balancer so that you can analyze the requests made to it; this is mainly used for troubleshooting or for analyzing access patterns. That being said, this completes the configuration of the classic load balancer. So let's get started with auto scaling: what it is, what its different components are, and how it's configured. First we have to understand some concepts and principles, and after that we'll start with the configuration. Auto scaling is a service based on the principle of elasticity; you use auto scaling as a service to apply elasticity in your day-to-day environment. So what is elasticity? Elasticity means that you can grow or shrink your entire infrastructure based upon the patterns of incoming traffic, the time of day, or the demand. Let me repeat that: elasticity is a principle, a concept, that lets you grow or shrink your entire infrastructure based on traffic patterns, time of day, or demand. For example, on normal days you run 20 instances. Now on a special day, for
example, you've advertised some discounts for Christmas, so you have a Christmas sale. During that event you may bump up from 20 to 200 instances, and once the sale is over you shrink back to the normal fleet size. That's the concept of elasticity: you grow or shrink your entire infrastructure based on the patterns of incoming traffic, the time of day, or the demand. That's the crux of it, the essence of it, and auto scaling is the practical application of elasticity. Now let's start with auto scaling itself. Auto scaling has a few components we have to understand before we start with the configuration. The first component is the auto scaling group. The simplest definition I can give you: it is a logical collection of identical EC2 instances. You group instances together depending upon the services and applications running on them. For example, take two different auto scaling groups: ASG one, where each instance runs a WordPress site, and ASG two, where each instance runs, say, a Joomla web application. The main concept behind the auto scaling group is that it's a logical grouping of instances, grouped depending upon the types
of services and applications you run on them. It's a logical grouping of instances, that's it. The second component is the launch configuration. Since auto scaling is based on the principle of elasticity, it has the ability to launch and terminate instances automatically: whenever there's demand, auto scaling goes ahead and launches instances by itself. And the obvious question is: if auto scaling launches instances automatically, how exactly will it know what the AMI of those instances should be, what the instance type should be, what the EBS volume settings should be, what security groups should be assigned, what the key pair should be, and what software, scripts, and applications should be deployed on them? If auto scaling is an automation feature that launches instances by itself, how on earth does it know these parameters? The answer is the launch configuration: it is a template. By template I mean that auto scaling can refer to the launch configuration and decide upon the parameters, the specifications, of the instances. Let me give an example. Say you want to cook a dish, to prepare a special meal, and it's your first time, so you don't know how to make it. The first thing you do is pick up the recipe book, and in the recipe book the first thing listed on
the first page is the ingredients. Before you learn the process to make that particular dish, you have to see which ingredients, spices, and vegetables go into preparing it. Similarly, before auto scaling can go ahead and launch instances, it has to know its own ingredients: the AMI, the instance type, the EBS volume settings and how much storage to add, the security group, the key pair, and the software, scripts, and applications. So it refers to the launch configuration. Auto scaling as a service refers to the launch configuration to get all the ingredients and specifications, and based on what's in the launch configuration it deploys the instances inside the auto scaling group: the same AMI, instance type, security group (the firewall), and key pair, with the same applications and services installed on every instance. It's a template, and you have to create the launch configuration before you configure the auto scaling group. It's extremely important that you have it, because it's the template holding all the ingredients and specifications that auto scaling needs to launch instances inside the group. The third component, now that we've discussed the auto scaling group and the launch configuration, is the scaling policies. There are two types of scaling policies: one is scale out and the other is scale in. What's the difference? Scale out means you
want auto scaling to launch instances, which means increasing the number of instances within the auto scaling group; that's scale out. Scale in means you want auto scaling as a service to terminate, that is decrease, the instances within the auto scaling group. One thing you have to understand: auto scaling will scale out (launch) or scale in (terminate); it will not stop an instance. There is no process to stop instances using auto scaling: either it scales out, meaning it launches, or it scales in, meaning it terminates. Those are the two processes it follows, the two scaling policies, scale out and scale in. The fourth component is the scaling plan: how exactly you want to scale. The first scaling plan is fixed. Fixed means you go with the desired capacity: you fix the number of instances, and auto scaling ensures it always has that same number of instances running to support your application. For example, if you fix five instances to be running in the auto scaling group, auto scaling as a service ensures it always has exactly five healthy instances up and running, never fewer, never more. It's the simplest scaling plan you can go for. The second scaling plan is called dynamic: depending upon certain values and thresholds, you have auto scaling increase or decrease the number of instances.
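The fixed plan's "never below, never above" behavior amounts to a simple reconcile loop. Here is a minimal sketch; the `reconcile` function is made up for illustration, not how the service is actually implemented:

```python
def reconcile(running, desired):
    """Return the (launch, terminate) actions a fixed scaling plan would take
    to bring the running count back to the desired capacity."""
    if running < desired:
        return desired - running, 0   # scale out: launch the shortfall
    if running > desired:
        return 0, running - desired   # scale in: terminate the excess
    return 0, 0                       # already at desired capacity

# Desired capacity fixed at 5: one instance died, so launch exactly one back.
print(reconcile(4, 5))  # (1, 0)
print(reconcile(5, 5))  # (0, 0)
```

Note there is no "stop" action anywhere, matching the point above: auto scaling only ever launches or terminates.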
For example, if the average CPU usage of the entire auto scaling group is more than 80 percent, initiate a scale-out event, which means increase the number of instances. Similarly, the other way around: if for the same auto scaling group the average CPU usage is less than 20 percent, go ahead and initiate a scale-in event, which means decrease the number of instances. That's the dynamic scaling plan: depending on values and thresholds, the number of instances is increased or decreased, whereas with fixed you fix the number of instances. The third scaling plan is scheduled actions, or time of day (TOD for short). Scheduled actions means that depending upon the time or the date you trigger either a scale-out or a scale-in event. For example, I can say that every day between 10 a.m. and 6 p.m. I should have 10 EC2 instances in my auto scaling group, but after 6 p.m. I generally see a dip, the volume of traffic goes down, so from 6 p.m. until 10 a.m. the next day I need only 2 EC2 instances in the same auto scaling group. It depends on the schedule and the time frame, because 10 a.m. to 6 p.m. is your peak time, when you see a lot of users coming in, browsing your website, and making purchases.
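That time-of-day schedule can be sketched as a simple function. The 10-instance/2-instance split and the hours come from the example above; this illustrates the idea only, not how AWS evaluates scheduled actions:

```python
def scheduled_capacity(hour):
    """Desired capacity by hour of day (24h clock): 10 instances during the
    10:00-18:00 peak window, 2 instances overnight."""
    return 10 if 10 <= hour < 18 else 2

# Peak vs off-peak:
print(scheduled_capacity(12))  # 10
print(scheduled_capacity(22))  # 2
```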
From 6 p.m. to 10 a.m. the next day you don't see much traffic coming in, so you can drop to just two instances, which eventually decreases your cost. So you can go with scheduled actions, or time of day. These are the three main scaling plans; there is also a manual plan, but we don't really use it, so the three you'll go for are fixed, dynamic, and scheduled actions. And those are the different components of auto scaling we have discussed: the auto scaling group, the launch configuration, the scaling policies, and the scaling plans. Now we shall go ahead and start with the configuration part. The configuration is quite lengthy, so what I'll do is open my Evernote checklist, and I'll also give you a copy of it. We are going to use the dynamic scaling plan, specifically the target tracking policy; I'll explain what a target tracking policy is as we go through the process. Let me just copy this link and share it with you so you can check it out. I'll be following this checklist to make sure we go through the step-by-step procedure to deploy an application that we can dynamically scale using target tracking policies. The very first step is to launch an instance using the same script we used in the
deployment of the classic load balancer. Why am I starting with this step? Because it's very important that we understand the process of creating a custom AMI. Let me give you a brief overview of what a custom AMI is and why you need it. A custom, or customized, AMI is needed when you want to deploy the same services and applications on instances whether you launch them manually or automatically. What's the process? The first step is to launch an EC2 instance using a public AMI; public means it's there for everyone's use, so you can go to the Quick Start menu and use any of the public AMIs. The second step is to install your services, applications, software, and scripts on that EC2 instance. The third and final step is to capture an image. Let me draw it with an example. You launch an EC2 instance using, say, the Amazon Linux AMI. On it you install the Apache web server plus, let's imagine, a MySQL database, plus you deploy a WordPress site and customize it. Once you've installed and customized everything, you issue a command: create image. That command captures the operating system, the Apache web server, the MySQL database, and the customized WordPress site, everything, in a single package, and from that image you can deploy any number of instances with the same operating system, services, applications, database, and customized WordPress site running on them.
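That launch → install → capture → redeploy flow can be modeled as data. This is a toy sketch of the idea only; real AMIs are block-level disk images, not dictionaries:

```python
# Steps 1-2: an instance launched from a public AMI, then customized.
instance = {
    "ami": "amazon-linux (public)",
    "software": ["apache", "mysql", "wordpress (customized)"],
}

def create_image(inst):
    """Capture everything installed on the instance into a reusable image."""
    return {"base": inst["ami"], "software": list(inst["software"])}

def launch_from(image, count):
    """Deploy any number of identical instances from the captured image."""
    return [{"ami": "custom", "software": list(image["software"])}
            for _ in range(count)]

image = create_image(instance)   # step 3: create image
fleet = launch_from(image, 3)    # n identical copies, same software stack
```

Every instance stamped out of the image carries the exact software stack that was on the original, which is the whole point of capturing a custom AMI before automating.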
That's the process and the strategy you go for. The custom AMI is the heart and soul of the automation, because ultimately the instances have to boot up from the AMI: you encapsulate the services, the processes, the applications, and the operating system as part of the custom AMI. You issue the create image command, and the product you get is a custom AMI. In my case, we used these instances as part of the classic load balancer, and I used a script; I could install anything on them, but right now I've installed an Apache PHP web application. It's all installed, so I'll capture the image from it. I choose the instance, go to Actions, then Image, and click Create Image. Let me show that once again: I have the Apache PHP web application running on the instance; I choose the instance that already has the operating system and my desired application running on it, go to Actions, then Image, and click Create Image. I name this image apache-php-image and copy the same text into the image description. Whenever you capture an image from an instance, it takes a snapshot of all the volumes associated with that instance; in my case it will take a snapshot, a backup, of the root volume. I click Create Image, it says the create image request was received, and I click Close. Then if I go to AMIs on the left-hand side, in the navigation pane, you will see that it has just
initiated the process of creating my custom AMI. And if I show you Snapshots on the left-hand side, you can see the snapshot is also in the process of being created. So, going back to the checklist we were referring to, I can check off this item: we launched an instance using the script and captured an AMI from it, because this AMI will be part of my launch configuration, which is my template. Once you've installed everything and carved out the image, the instance is no longer needed; you can terminate those instances if you want. For now, I'll go to my load balancer on the left-hand side, go to Instances, and remove both instances from the load balancer by clicking Remove from the panel, because I don't need these instances anymore; I just wanted to capture the image. If I want, I can also go ahead and terminate them since they're no longer needed; once I use auto scaling, it will deploy fresh new instances. So I've removed my instances from the load balancer. In our previous conversation we already created and configured the classic load balancer, so that step has been done and I'll check off that item too. Now comes the configuration of auto scaling itself, and the first thing we have to do is create and configure the template, which is the launch configuration. So I go back to the navigation pane on the left-hand side, click Launch Configurations under Auto Scaling, and click Create launch configuration. Now, if you remember and recall
what we discussed about launch configurations: a launch configuration is nothing but a template. In this template I choose the customized AMI we captured before coming to this page, so I choose my apache-php-image and click Select. Next I select the instance type that will be used to launch each and every instance in my auto scaling group; I'll go with t2.micro and click Next: Configure details at the bottom right-hand corner. I name this launch configuration, let's say, my-apache-launch; you can assign any name to it. The remaining values are optional, you don't have to configure them, and we don't need Advanced Details for now, so I click Next: Add storage. I'll be adding 8 GiB of storage that will be created from the same snapshot. (We'll discuss snapshots and volumes in a separate conversation; you can create volumes from snapshots, and this is the snapshot created when we captured the image, so behind the scenes every volume attached to my instances will be created from it.) I click Next: Configure security group and pick the security group that will be assigned to all the instances launched in my auto scaling group, ec2-sg, then click Review and Continue. In the last step I'm asked which key pair I'll use to log in to my instances in the future; I create a new key pair, name it ASG-demo, click Download key pair, which downloads a copy of the private key to my desktop, and click Create launch configuration. This creates my launch configuration, my template.
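As data, the template we just filled in amounts to something like this. The values come from the walkthrough; the dict layout and the `launch_instances` helper are purely illustrative, not an AWS API:

```python
# The "ingredients" the launch configuration holds for auto scaling.
launch_configuration = {
    "name": "my-apache-launch",
    "ami": "apache-php-image",   # custom AMI captured earlier
    "instance_type": "t2.micro",
    "volume_gib": 8,             # root volume, restored from the AMI snapshot
    "security_group": "ec2-sg",
    "key_pair": "ASG-demo",
}

def launch_instances(template, count):
    """Auto scaling consults the template and stamps out identical instances."""
    return [dict(template, name=f"instance-{i}") for i in range(count)]

fleet = launch_instances(launch_configuration, 2)
```

Every instance the group ever launches is a copy of this one template, which is exactly why the launch configuration has to exist before the group does.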
From this launch configuration I can now create an auto scaling group; you can see the option Create an Auto Scaling group using this launch configuration. I name the group apache-asg, my Apache auto scaling group, and this launch configuration will be applied to it. Then we have the group size: the group size specifies the desired capacity, meaning this many instances should be up and running at any time. Next is the network: I choose my default VPC and both of its subnets, which are associated with two separate availability zones. Then I go to Advanced Details, and now I include the load balancer as part of this whole configuration. You may be wondering how the load balancer comes into this picture; we're discussing auto scaling, so why is the load balancer jumping in? The thing is, in order to have a complete architecture, you have to configure these components to work hand in hand. So let me sketch how auto scaling fits together with the load balancer, because it's very important; I'll draw an architecture to help you understand how these two components work with each other. Let's imagine this is my region, Mumbai, and this is my VPC. I have two subnets: subnet A, which belongs to ap-south-1a in Mumbai, and subnet B, which belongs to ap-south-1b. What is a subnet? It's a block, a complete range, of
private IPs. I deploy a load balancer on top, and the load balancer distributes the traffic across the EC2 instances running in those subnets. Users type in, for example, xyz.com, my domain; it resolves to the DNS name of the load balancer, and the load balancer balances the traffic across the instances running in the separate subnets, the separate zones. Now, what I'm doing is making these two subnets part of my auto scaling group. So what if there's a demand for more instances? Or, let's do one thing: let's imagine the availability zone on the left-hand side, ap-south-1a, goes down and all my instances there become unavailable. The load balancer will declare them unhealthy and take them out of service. The problem is I'm now left with only two instances; half of my infrastructure is down, unavailable. The auto scaling group will refer to the launch configuration, look up the parameters for the instances, and launch two healthy instances in the other availability zone, and the load balancer, since it works hand in hand with the auto scaling group, will start sending traffic to the newly launched instances, because both subnets are part of the auto scaling group. That's how these two components fit together: if anything goes down, or a new instance is launched for any reason, maybe the previous instances were unhealthy or there was demand for new ones, the load balancer starts diverting traffic to the new instances. That's why you attach the load balancer here.
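The failure scenario above can be played through in a toy model. This sketches only the interaction between the two components; real ELB health checks and ASG AZ rebalancing are more involved:

```python
def heal(fleet, desired, failed_az, healthy_az):
    """Drop instances in the failed AZ (the LB marks them out of service),
    then let the ASG launch replacements in a healthy AZ until the desired
    capacity is restored."""
    survivors = [i for i in fleet if i["az"] != failed_az]
    while len(survivors) < desired:            # ASG scale-out from the template
        survivors.append({"az": healthy_az, "healthy": True})
    return survivors

fleet = [{"az": "ap-south-1a", "healthy": True},
         {"az": "ap-south-1a", "healthy": True},
         {"az": "ap-south-1b", "healthy": True},
         {"az": "ap-south-1b", "healthy": True}]
after = heal(fleet, desired=4, failed_az="ap-south-1a", healthy_az="ap-south-1b")
```

After ap-south-1a fails, the group is back at four in-service instances, all in the surviving zone, and the load balancer simply routes to whatever is currently healthy.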
Now, one important parameter we include here is called the health check grace period; it defaults to 300 seconds, and I go with that. The thing is, when instances are launched fresh, they will not immediately come into service; you can't expect any instance to be in service the moment it's launched. Auto scaling triggers the scale-out event to launch the instances, but the instances take their own time to start the operating system, services, and applications. In the meantime, if the load balancer performs its health checks, they would be declared unhealthy by mistake, because they won't respond while they're still booting up. So we allow the new instances to boot for the next 5 minutes by default; we give them a grace period of 300 seconds so that in those 300 seconds they can boot up and come into service, and the load balancer doesn't mistakenly declare them unhealthy. Next, under Advanced Details, I go to Load Balancing, choose Receive traffic from one or more load balancers, and select my classic load balancer. Target groups don't come into the picture here, because target groups are relevant to the application load balancer, not the classic one; this is the classic load balancer we pre-configured, and we want the instances that are part of the auto scaling group to receive traffic from it. Then there's the health check type. Health check types are of two kinds, ELB and EC2, and it's very important to understand the difference. If you go with the ELB health check type, it takes into account the ELB health checks; for the ELB health checks we
already know how they are performed, plus it also takes into account the EC2 health checks, because EC2 performs its own internal status checks. For example, if an instance is impaired, it is unhealthy. So with the ELB health check type you get both: the load balancer's health checks plus the instance's status checks, and any impaired instance counts as unhealthy. But if you go with the other type, the EC2 health check, then only the EC2 status checks are considered; the health of the instance is determined by the EC2 status checks alone, and you don't get the benefit of both. To get the benefit of both, you go with the ELB health check type; with the EC2 type, the health of an instance is determined by the EC2 status checks only, for example impaired means unhealthy and running means healthy. So let's go back: I set the health check type to ELB to get the benefit of both the load balancer checks and the EC2 status checks, and leave the health check grace period at the default 300 seconds, which I've already discussed. Next is instance protection: I can choose Protect from scale in to give immunity to my instances, meaning none of them would be terminated by a scale-in event; for now I leave this option unchecked. Then there's the service-linked role.
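The difference between the two health-check types boils down to a one-line predicate. This is an illustrative model; the real checks involve timeouts, thresholds, and multiple status checks rather than two booleans:

```python
def is_healthy(elb_check_ok, ec2_status_ok, check_type="ELB"):
    """ELB type: the instance must pass BOTH the load balancer health check
    and the EC2 status checks. EC2 type: only the EC2 status checks matter."""
    if check_type == "ELB":
        return elb_check_ok and ec2_status_ok
    return ec2_status_ok

# An instance whose EC2 status checks pass but whose HTTP health check fails
# slips through in EC2 mode, while ELB mode correctly flags it:
print(is_healthy(elb_check_ok=False, ec2_status_ok=True, check_type="EC2"))  # True
print(is_healthy(elb_check_ok=False, ec2_status_ok=True, check_type="ELB"))  # False
```

This is why the ELB health check type is the one to pick here: it is strictly stricter, so an instance must satisfy both layers to stay in service.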
This creates a role automatically so that auto scaling can work with different services like the load balancer and CloudWatch (the CloudWatch configuration comes just after this page). Now we click Next and reach Configure scaling policies; this is where the real action begins. You can see two scaling plans here. "Keep this group at its initial size" is the fixed scaling plan; remember, we discussed this in the notes: it sticks to a fixed number of instances. If I choose this option, it fixes the number of instances based on the desired capacity chosen in the previous window; you can see the group size of 2 back there, which is the desired capacity. So if I choose "Keep this group at its initial size," it sticks to the desired capacity and ensures we always have two healthy instances in the auto scaling group; it will never go above that, and it will never go below it. If you want finer control, you go with the target tracking policy, which comes under the dynamic scaling plan. If you remember, we discussed what a dynamic scaling plan is: you scale out or scale in depending upon thresholds; for example, if the average CPU usage of the entire auto scaling group is more than 80 percent, you want to scale out, and if it's less than 20 percent, you want to scale in. In that direction, we'll discuss the target tracking policy, which is a part of dynamic scaling. So let us understand
that concept, so we know exactly how it works for us. The concept is the target tracking policy. What happens in the target tracking policy? Let me take an example. Suppose this is my auto scaling group with its instances, and to monitor its performance I use CloudWatch. In CloudWatch I choose a metric; a metric is a parameter against which you want to analyze the performance of a resource, and here the metric is CPU utilization. For this auto scaling group, the target value for this metric is 70 percent: CloudWatch will monitor the performance of the auto scaling group and ensure that the CPU utilization of the entire group stays at up to 70 percent, not more than that. Now, because of an increased number of processes and incoming traffic, suppose the CPU has breached the target and gone up to 90 percent. What happens next? CloudWatch will send an alert (and it can also send you an email, so you receive one as well) to auto scaling as a service, saying the CPU is above the target: it has been breached. Auto scaling will trigger a scale-out policy: where previously there were, say, three instances, now there will be five; it increases the number of instances in the same auto scaling group, it launches more instances. So you understand the process: target tracking means it keeps an eye on its target and tracks it. In this
In this case the metric is CPU utilization, and the target value we have set is 70%. If it's breached, CloudWatch sends an alert to the Auto Scaling service; Auto Scaling refers to the launch configuration as a template and issues a scale-out event to launch more similar instances in the same Auto Scaling group. That's the essence, the crux, of target tracking. And if the CPU goes down, it has the ability to decrease the number of instances as well. Let's start configuring the target tracking policy by choosing the option "Use scaling policies to adjust the capacity of this group". I choose between 2 and 10, so 2 is the minimum number of instances and 10 is the maximum. Then comes the name of the scaling policy (the scaling plan), and these are the four available metrics: Application Load Balancer request count per target, average CPU utilization, average network in, and average network out. I go with average CPU utilization and set the target value to 70%. Next is the warm-up period: 300 seconds means we allow the instances to warm up for 300 seconds once they launch. It's the amount of time instances need to boot, because when you launch new instances you can't expect them to be in service immediately; they can't contribute to the group's metrics yet, so we give them this time before they count as part of the metrics. I can also check "Disable scale-in" so that Auto Scaling will not be able to terminate any of the instances, but for now I leave it unchecked. So that's how we configure the target tracking policy: minimum size 2, maximum size 10 (it will oscillate between 2 and 10 instances), metric type average CPU utilization, target value 70%, and a 300-second warm-up for the instances.
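The same policy can be created from the AWS CLI instead of the console. A minimal sketch, assuming an existing Auto Scaling group; the group name `my-asg` and the policy name are placeholders:

```shell
# Attach a target tracking policy: keep the group's average CPU at 70%,
# with a 300-second warm-up before new instances count toward the metric.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-target-70 \
  --policy-type TargetTrackingScaling \
  --estimated-instance-warmup 300 \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
    "TargetValue": 70.0,
    "DisableScaleIn": false
  }'
```

This requires configured AWS credentials; it creates the CloudWatch alarms for you, just as the console does.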
I click Configure Notifications, add a notification, and click "Create topic". This is the SNS topic (that's Amazon Simple Notification Service) that will send out notifications if any of these events occurs in my Auto Scaling group: an instance is launched, an instance is terminated, an instance fails to launch, or an instance fails to terminate. I name the topic "notifications-to-my-inbox" and specify my email address so that the notifications are sent there; in the meantime I open my inbox. I could add more notifications if I wanted, but for now one is enough. I click Next: Configure Tags at the bottom right-hand corner. (And can you see I just received an email? Let me check; actually, I don't think this is the one, it may be from the previous configuration, so I'll show you later.) On the tags page I can add a key and a value for this group; it's optional. I click Review, where I can check any of these parameters before creating, and then I click "Create Auto Scaling group", and this creates my Auto Scaling group from scratch. I click "View your Auto Scaling groups", and it will launch two instances. In the meantime I go to my inbox and click "Confirm subscription", confirming my subscription to the SNS topic so that I receive the alerts by email in the future. If I now go to the instances, you will see it has launched two healthy instances from scratch, and if I go to the load balancers, these two instances will start registering themselves with the load balancer. And can you see I've started receiving these alerts? This alert is for the instance launch. If I go to the instances, you will see that these two instances are immediately in service.
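The notification setup can also be scripted. A hedged sketch with the AWS CLI, using placeholder names (`my-asg` and the email address) and the four lifecycle events mentioned above; the email subscription still has to be confirmed from your inbox, exactly as in the console flow:

```shell
# Create the SNS topic and subscribe an email address to it
TOPIC_ARN=$(aws sns create-topic --name notifications-to-my-inbox \
  --query TopicArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" \
  --protocol email --notification-endpoint you@example.com

# Wire the Auto Scaling group's lifecycle events to the topic
aws autoscaling put-notification-configuration \
  --auto-scaling-group-name my-asg \
  --topic-arn "$TOPIC_ARN" \
  --notification-types \
    "autoscaling:EC2_INSTANCE_LAUNCH" \
    "autoscaling:EC2_INSTANCE_TERMINATE" \
    "autoscaling:EC2_INSTANCE_LAUNCH_ERROR" \
    "autoscaling:EC2_INSTANCE_TERMINATE_ERROR"
```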
If I go to the Description tab, copy the DNS name of my load balancer, and paste it into the browser, you can see my application is up and running immediately. So Auto Scaling automatically launched the minimum number of instances, referring to the launch configuration, and launched instances with the same script, application, services, instance type, and AMI. And here we go: it launched two healthy instances. Now let's cause some disruption. I will intentionally bump up the CPU usage of both instances so that the average CPU utilization of my Auto Scaling group increases above 70%. I go to my instances, and in the meantime I go to my Downloads folder and locate my .pem key; let me rename it so it has the .pem extension. I open my Terminal, let me close this one and open a new terminal window, go to my Desktop, and check the name of the key. I need to change the access rights on the key because I'm using macOS; after changing the access rights, I initiate an SSH connection to the instances Auto Scaling just launched. Let me first terminate the previous instances; I don't need them, and this avoids any confusion. I label the two fresh instances launched by my Auto Scaling group as A and B; the terminated ones were the instances we launched earlier in our load balancer configuration. So with the instances named and filtered, I go to instance A, copy its public IP, and paste it into the SSH command. It just hangs, and I realize I need to allow SSH from my IP first.
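Opening SSH from your current IP can be done from the CLI as well; a sketch, with a placeholder security group ID:

```shell
# Allow inbound SSH (port 22) only from this machine's public IP
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr "${MY_IP}/32"
```

Restricting the rule to a /32 keeps the instance from accepting SSH from the whole internet.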
I go to the security group and allow SSH from my IP real quick. I press Ctrl-C, rerun the SSH command, and I'm connected. Then I run two commands to bump up the CPU usage of the instance. The first is `sudo yum install stress`, which downloads the stress package. Then I dispatch some CPU hogs with `stress -c 5`, which spins up workers to bump up the CPU usage of this instance. Similarly, I initiate a new connection to my other instance: I copy the SSH command, substitute the public IP address of my second instance, and run the same commands, `sudo yum install stress` and `stress -c 5`. Done. Within the next three to five minutes the CPU usage of both instances will go up, and as a consequence Auto Scaling will launch more instances, so we have to pause for about five minutes and wait for the result to show up. If I go to an instance now and open the Monitoring tab, I will eventually see the CPU of both instances going up, so let's wait a few minutes. Going back to the notes: we created the launch configuration, configured the target tracking policy, notifications, and tags, created the Auto Scaling group, confirmed the subscription to the SNS topic, and now we're testing the scaling policy. Now let's look at the CPU usage: for instance A it has gone up to 100%, and instance B has touched 100% as well.
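Cleaned up, the commands I ran on each instance look like this (the key file name and IP are placeholders; `stress -c 5` spawns five CPU-bound worker processes):

```shell
# On your workstation: restrict key permissions, then connect
chmod 400 demo-key.pem
ssh -i demo-key.pem ec2-user@<instance-public-ip>

# On the instance: install the stress tool and spin up 5 CPU hogs
sudo yum install -y stress   # on Amazon Linux; the EPEL repo may need enabling first
stress -c 5
```

Repeat on the second instance; later, stop the load with Ctrl-C in each session so the group can scale back in.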
Now, if I remove the filters, you will see that it has launched new healthy instances. If I go to the Auto Scaling groups and then to the instances, you'll see it's not just two anymore: in addition to the two that were there before, it has launched more healthy instances. If I show you the activity history, you can see the launch activities, and for any launch activity I can see its cause: why it was initiated, and at what time and date. An alarm was triggered, this one here, and it increased the desired capacity from two to three, so in that case it launched a single instance; then, when it saw that the CPU utilization was still above 90% or 100%, it launched two more instances to bring the value down below 70%, and it will keep doing that until the average CPU of the entire group comes below 70%. Now, to make sure it doesn't launch any more instances, I press Ctrl-C to terminate the stress process. So this is all about the Auto Scaling target tracking policy: we discussed how you dynamically scale your applications using target tracking policies. One thing that's very important: once you've performed the lab session, delete your Auto Scaling group. Don't terminate the instances manually, straight away, because if you terminate any instance, Auto Scaling will launch a new one to compensate for it. So as part of cleaning up, once you're through with the entire process, you choose the Auto Scaling group, go to Actions, and hit Delete; deleting the group deletes its instances. You don't have to delete the launch configuration.
Why? Because a launch configuration is just a template: you can have hundreds of templates and you don't have to pay a single penny for them. The Auto Scaling group itself is also a free service, but you pay based on the number of hours consumed by the instances it runs. So once you're through with the entire lab session, go to Actions and click Delete. That's all about the configuration of, I would say, dynamically scaling your application using target tracking policies. I will include the script as well, so that you can go ahead and practice on your own, along with this checklist.
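For reference, both the activity-history check and the cleanup step can be done from the CLI too; a sketch with a placeholder group name:

```shell
# Inspect the activity history (cause, time, and status of each scaling action)
aws autoscaling describe-scaling-activities \
  --auto-scaling-group-name my-asg \
  --max-items 10

# Clean up: deletes the group and terminates its instances in one step.
# The launch configuration can stay; stored templates cost nothing.
aws autoscaling delete-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --force-delete
```

`--force-delete` removes the group without waiting for you to drain the instances first, which is convenient for tearing down a lab.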
Info
Channel: Rohan Arora
Views: 3,609
Keywords: AWS, Amazon Web Services, Cloud Computing, Elastic Load Balancer, Auto Scaling
Id: IveYuqds1U0
Length: 121min 36sec (7296 seconds)
Published: Tue May 21 2019