AWS Autoscaling | AWS Autoscaling And Load Balancing | AWS Tutorial For Beginners | Simplilearn

Video Statistics and Information

Captions
Hi guys, welcome. Today's topic is AWS Auto Scaling. This is Akhil, and I will be taking you through a tutorial on Auto Scaling. Before that, I would request you to subscribe to our channel; you can find the link just below this video, on the right side.

Let's begin with what we have in today's session. I will cover why we need AWS Auto Scaling, what AWS Auto Scaling is, the benefits of using the scaling service, how Auto Scaling works, the different scaling plans we have, the difference between a snapshot and an AMI, what a load balancer is and how many types of load balancers there are, and along with that I will walk through a real-life demo on AWS.

Let's begin with why we need AWS Auto Scaling. Before cloud scaling, enterprises worried that they were spending a lot of money on infrastructure: to set up a solution they had to purchase server hardware and software up front, and then keep a team of experts to manage all of that infrastructure. Project managers kept wishing for a more cost-efficient way to run their projects so they would no longer need to own those resources. With AWS Auto Scaling, the service automatically maintains application performance based on user requirements at the lowest possible price: whenever scaling is required, it is handled automatically, and that makes cost optimization possible.

Now, what is AWS Auto Scaling? Let's look a little deeper. AWS Auto Scaling is a service that helps users monitor their applications and servers and automatically adjusts the capacity of their infrastructure to keep performance steady. It can increase capacity, and it can also decrease capacity for cost optimization, giving predictable performance at the lowest possible cost.

What are the benefits of Auto Scaling? Better fault tolerance for applications: servers can be created as clone copies, so you don't have to deploy your applications again and again. Better cost management, because scalability is handled by AWS automatically based on threshold parameters. It is a reliable service, and whenever scaling is initiated you can receive notifications on your email ID or your cell phone. Scalability is built in: it can scale up and it can scale down. It is flexible: you can schedule it, stop it, keep the number of servers fixed, or make changes on the fly. And it gives better availability.

With Auto Scaling we come across the terms snapshot and AMI, so let's look at the difference between the two. In a company, an employee was having trouble launching virtual machines, so he asked a colleague: is it possible to launch multiple virtual machines in a minimum amount of time, because creating them one by one takes very long? The colleague replied that yes, it is possible to launch multiple EC2 instances in less time and with the same configuration, using either a snapshot or an AMI on AWS. The employee then asked: what is the difference between a snapshot and an AMI? Let's look at it.

A snapshot is basically a backup of a single EBS volume, which is like a virtual hard drive attached to an EC2 instance, whereas an AMI is used as a backup of an entire EC2 instance. You opt for snapshots when the instance contains multiple static EBS volumes, while an AMI is widely used to replace a failed EC2 instance. With snapshots you pay only for the storage of the changed data, whereas with an AMI you pay for the storage you use. Snapshots are non-bootable images of an EBS volume, whereas AMIs are bootable images of an EC2 instance. However, creating an AMI will also create EBS snapshots.
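To make the comparison concrete, here is a minimal boto3 sketch of both approaches: snapshotting a single EBS volume versus creating an AMI of a whole instance and launching identical copies from it. The region, volume ID, instance ID and AMI name are hypothetical placeholders, not values from the video.

    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")  # hypothetical region

    # Option 1: snapshot of a single EBS volume (backup of one virtual hard drive).
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",          # placeholder volume ID
        Description="Backup of a single EBS volume",
    )

    # Option 2: AMI of an entire instance; AWS also creates EBS snapshots
    # of the instance's volumes behind the scenes.
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",          # placeholder instance ID
        Name="demo-backup-ami",
        NoReboot=True,
    )

    # Once the AMI reaches the "available" state, several identically
    # configured instances can be launched from it in one call.
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="t2.micro",
        MinCount=3,
        MaxCount=3,
    )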
Now, how does AWS Auto Scaling work? To use it, you configure a single unified scaling policy per application resource. With that scaling policy you can also explore your applications, select the service you want to scale, choose whether you want to optimize for cost or for performance, and then keep track of the scaling through monitoring or notifications.

What are the different scaling plans we have? In Auto Scaling, a scaling plan helps a user configure a set of instructions for scaling based on a particular software requirement. The scaling strategy guides the AWS Auto Scaling service on how to optimize resources for a particular application; it is essentially the set of parameters you define so that resource optimization can be achieved. With scaling strategies, users can create their own strategy based on the metrics and thresholds they require, and this can be changed on the fly as well.

There are two types of scaling: dynamic scaling and predictive scaling. Dynamic scaling guides the AWS Auto Scaling service on how to optimize resources and is helpful in optimizing resources for availability at a particular price. With scaling strategies, users create their plan based on the required metrics and thresholds; a metric can be, for example, network in, network out, CPU utilization or memory utilization. Predictive scaling aims to predict future workload based on daily and weekly trends and regularly forecasts future network traffic. It is a forecast built from past behaviour: it uses machine learning to analyse the traffic, much like how a weather forecast works, and it provides scheduled scaling actions to ensure resource capacity is available when the application needs it.
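As an illustration of dynamic scaling, here is a small boto3 sketch, assuming an EC2 Auto Scaling group named "test" already exists (the group created later in the demo); it attaches a target tracking policy on average CPU utilization. Predictive scaling is configured separately, for example through a scaling plan.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="ap-south-1")  # hypothetical region

    # Dynamic scaling via target tracking: the group is scaled out or in
    # automatically to keep average CPU utilization near the target value.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="test",              # assumed existing group
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 85.0,                  # threshold used later in the demo
        },
    )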
Now, along with Auto Scaling you also need load balancers, because when multiple instances are created you need something to distribute the load across them. So let's understand what a load balancer is. A load balancer basically acts as a reverse proxy and is responsible for distributing network or application traffic across multiple servers. With a load balancer you increase the reliability and fault tolerance of an application. For example, when a lot of network traffic comes to your application and all of it lands directly on your instances, those instances may crash. To avoid that situation, you need to manage the traffic coming to your instances, and that is done with a load balancer. AWS load balancers distribute network traffic across backend servers in a way that increases the performance of an application. In the image you can see traffic coming from different sources landing on the EC2 instances, with the load balancer distributing that traffic across all three instances and managing the network traffic properly.

Now, what types of load balancers do we have? There are three types of load balancers on AWS: the Classic Load Balancer, the Application Load Balancer and the Network Load Balancer. The Classic Load Balancer is the most basic form of load balancing; we also call it the primitive load balancer. It is widely used for EC2 instances, works on IP address and TCP port, and routes network traffic between end users and backend servers. It does not support host-based routing, which results in lower efficiency of resources. The Application Load Balancer is one of the advanced forms of load balancing: it works at the application layer of the OSI model, it is used when HTTP and HTTPS traffic routing is required, it supports host-based and path-based routing, and it performs well with microservice backends. The Network Load Balancer works at layer 4, the connection level, of the OSI model; its prime role is to route TCP traffic, it can handle a massive amount of traffic, and it is also suitable where low latency is required.
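For reference, here is a hedged boto3 sketch of standing up an Application Load Balancer in front of backend instances: create the load balancer, a target group, and an HTTP listener that forwards to it. The subnet, security group and VPC IDs are hypothetical placeholders; a real setup would also register targets or attach the target group to the Auto Scaling group.

    import boto3

    elbv2 = boto3.client("elbv2", region_name="ap-south-1")  # hypothetical region

    # Application Load Balancer spanning two subnets (placeholder IDs).
    alb = elbv2.create_load_balancer(
        Name="demo-alb",
        Type="application",
        Scheme="internet-facing",
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
        SecurityGroups=["sg-0123456789abcdef0"],
    )
    alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

    # Target group the ALB forwards HTTP traffic to (placeholder VPC ID).
    tg = elbv2.create_target_group(
        Name="demo-targets",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",
        TargetType="instance",
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    # Listener: accept HTTP on port 80 and forward to the target group.
    elbv2.create_listener(
        LoadBalancerArn=alb_arn,
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )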
Now let's look at the demo and see how we can practically create Auto Scaling on the AWS console. Right now I am logged into the AWS console and I am in the Mumbai region. Go to the Compute section and click on the EC2 service, and wait for the EC2 dashboard to load. Scroll down, and under Load Balancing there is an option called Auto Scaling. There you first have to create a launch configuration, and after that you create the Auto Scaling group. So click on Launch Configurations and then click on Create Launch Configuration.

A launch configuration is basically the set of parameters you define for Auto Scaling so that uniformity is maintained across all the instances. For example, if you select a Windows OS or a Linux OS, that operating system will be used for every instance that becomes part of the Auto Scaling group. So there is a set of parameters to specify in the launch configuration so that the servers are launched uniformly. Here I will select an Amazon Linux AMI, then select the instance type, which will be t2.micro, and click on Configure Details. Give the launch configuration a name, say "demo", and keep the rest of the settings at their defaults. Click on Add Storage; since it is a Linux AMI, 8 GB of storage should be fine. Click on Configure Security Group and create a new security group with the SSH port open to anywhere, meaning any source IPv4 or IPv6 address would be able to access it. Click on Review, review your launch configuration, make any changes you want, and then click on Create Launch Configuration. You will need a key pair, and this single key pair will be used with all the instances that are part of the Auto Scaling group. You can select an existing key pair if you have one, otherwise create a new one. I have an existing key pair, so I will go with that, acknowledge it, and click on Create Launch Configuration. We have now successfully created the launch configuration for Auto Scaling.

The next step is to create an Auto Scaling group, so click on Create an Auto Scaling group using this launch configuration. Give it a group name, say "test". The group size starts at one instance, which means at least a single instance will always be running, 24/7, for as long as the Auto Scaling group exists. You can increase the minimum number of base instances; for example, if you change it to two, you will always have at least two servers running. We will go with one instance. The network will be the default VPC, and within the VPC we can select the availability zones. If I select availability zones 1a and 1b, the instances will be launched alternately: the first instance in 1a, the second in 1b, the third in 1a, the fourth in 1b, and so on, so they are spread evenly across the availability zones.

The next part is to configure the scaling policies, so click on it. If you want to keep this group at its initial size, say only one or two instances, and you don't want any scaling to happen, you can choose "Keep this group at its initial size"; that is basically a way to hold the scaling. But we will use scaling policies to adjust the capacity of this group, so click on that option, and we will scale between a minimum of one instance and a maximum of four instances. The condition on which these instances are scaled up or down is defined in the scale group size. You can implement scaling policies based on the scale group size, or create simple scaling policies using steps. In the scale group size you have certain metrics to choose from: average CPU utilization, average network in, average network out, or load balancer request count per target. If you create simple scaling policies using steps, you need to create alarms, and there you get some more metrics that you can add as parameters for Auto Scaling. Let's go with the scale group size, choose the metric type Average CPU Utilization, and set the target value, which is the threshold at which a new instance should be initiated. Put a reasonable threshold, say 85, so that whenever an instance's CPU utilization crosses the 85 percent threshold, you will see a new instance being created.
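The same two console steps can also be scripted. Below is a minimal boto3 sketch, in which the AMI ID, key pair name and security group are placeholders, that creates a launch configuration named "demo" and an Auto Scaling group named "test" spanning ap-south-1a and ap-south-1b with a minimum of one and a maximum of four instances; the target tracking policy from the earlier sketch would then be attached to this group. Note that newer AWS accounts may be steered toward launch templates instead of launch configurations.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="ap-south-1")

    # Launch configuration: the uniform set of parameters every instance
    # in the group is launched with (AMI, key pair and security group
    # below are placeholders).
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="demo",
        ImageId="ami-0123456789abcdef0",          # placeholder Amazon Linux AMI
        InstanceType="t2.micro",
        KeyName="my-existing-keypair",            # placeholder key pair name
        SecurityGroups=["sg-0123456789abcdef0"],  # placeholder SSH security group
    )

    # Auto Scaling group: min 1, max 4, spread across two availability zones.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="test",
        LaunchConfigurationName="demo",
        MinSize=1,
        MaxSize=4,
        DesiredCapacity=1,
        AvailabilityZones=["ap-south-1a", "ap-south-1b"],
    )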
Next, go to Configure Notifications. Here you can add notifications, so that if a new instance is initiated you get notified on your email ID or your cell phone. For that you need the SNS service, the Simple Notification Service: you create a topic there, subscribe to the topic with your email ID, and then you will receive the notifications. Click on Configure Tags. Tags are not mandatory; you can add a tag, for example to identify the instance or the purpose it was created for, otherwise you can leave it blank. Click on Review, review your scaling policies, notifications, tags and the scaling group details, and click on Create Auto Scaling Group. And there you go, your Auto Scaling group has been launched. Click on Close, and you should see at least a single instance being initiated automatically by Auto Scaling.

Let's wait for the details to appear. Here you can see our launch configuration named "demo", the Auto Scaling group named "test", a minimum of one instance, a maximum of four instances, and the two availability zones we selected, ap-south-1a and ap-south-1b, with one instance initiated. If you want to verify where exactly this instance has been launched, click on Instances, and you will see that our single instance is in service and has been launched in ap-south-1b. Once the CPU utilization of this instance crosses the 85 percent threshold we defined in the scaling policy, you will see another instance being initiated. So this is how I have set up the steps for a scale-out policy, which increases the number of servers whenever the threshold is crossed; in the same place you can add another policy to scale down the resources when the CPU utilization returns to a normal value.

Fine, guys, that's it from us. Hi there, if you liked this video, subscribe to the Simplilearn YouTube channel and click here to watch similar videos. To nerd up and get certified, click here.
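To round off the demo steps above (notifications and verification), here is a hedged boto3 sketch: it creates a hypothetical SNS topic, subscribes an email address to it, attaches it to the Auto Scaling group as a notification target, and then describes the group to check which availability zone each running instance landed in. The topic name and email address are placeholders.

    import boto3

    sns = boto3.client("sns", region_name="ap-south-1")
    autoscaling = boto3.client("autoscaling", region_name="ap-south-1")

    # SNS topic and email subscription (the subscriber must confirm by email).
    topic_arn = sns.create_topic(Name="autoscaling-alerts")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="you@example.com")

    # Notify on launch and terminate events of the "test" group.
    autoscaling.put_notification_configuration(
        AutoScalingGroupName="test",
        TopicARN=topic_arn,
        NotificationTypes=[
            "autoscaling:EC2_INSTANCE_LAUNCH",
            "autoscaling:EC2_INSTANCE_TERMINATE",
        ],
    )

    # Verify the group's instances and the availability zone each one is in.
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["test"]
    )["AutoScalingGroups"][0]
    for instance in group["Instances"]:
        print(instance["InstanceId"], instance["AvailabilityZone"], instance["LifecycleState"])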
Info
Channel: Simplilearn
Views: 125,016
Keywords: aws autoscaling, aws autoscaling step by step, aws auto scaling with load balancer, aws autoscaling and load balancing, aws auto scaling group, aws autoscaling policy, aws auto scaling demo, what is aws auto scaling, what is aws auto scaling group, aws load balancer, aws load balancer tutorial, aws load balancer auto scaling, aws load balancer types, aws load balancer and autoscaling, aws tutorial, aws tutorial for beginners, aws training, simplilearn aws, simplilearn
Id: 4EOaAkY4pNE
Length: 18min 34sec (1114 seconds)
Published: Mon Jul 06 2020