Azure Load Balancer

Captions
Hi everyone and welcome to my course on Azure Load Balancer. Over the next few hours we are going to dive deep into Azure Load Balancer and cover every single aspect of it. We will start from the beginning to see where Azure Load Balancer fits among the Azure load-balancing services, like Front Door, Traffic Manager, Application Gateway and Load Balancer, and which use cases tell us that we should be using Azure Load Balancer. In this course we are focusing 100% of the time on Azure Load Balancer only; I have already created separate videos for the other load-balancing services, so if you are interested in any of them, feel free to go and watch those. It's important to mention from the get-go that this course is completely hands-on: there are no PowerPoint slides and no presentations at all. All the time you will see me either in the Azure portal or in the Microsoft Azure documentation, to make the course as practical as possible, because I believe this is the best way for you to learn Azure Load Balancer properly.

You will then see the different types of Azure load balancer: public, internal and Gateway load balancers. We will talk about zone redundancy, why it is very important in production workloads, and the main difference between zone-redundant and single-zone deployments, before moving on to how to protect our load balancer and configure DDoS protection for it. Then we dive into the inbound rules in more detail: we will look at inbound load-balancing rules and inbound NAT rules separately, and cover high-availability ports, when we should use floating IP, and why we need to consider enabling TCP reset on incoming connections. As we cover inbound connectivity we will cover outbound connectivity as well: the different options for giving the backend instances in the load balancer's backend pool outbound connectivity to the public internet, the pros and cons of each option, and which one is recommended for production workloads. We will also cover how to implement egress-only (outbound-only) traffic using the load balancer. Finally, we will look at the monitoring side of the load balancer, and see how we can use different monitoring tools to keep an eye on it and get alerted or notified when something unexpected happens. That's enough of an introduction; now let's get started and learn Azure Load Balancer together.

Hi everyone. In this video we are going to talk about Azure Load Balancer at a very high level. As the name suggests, Azure Load Balancer distributes incoming traffic evenly across the resources in its backend pool. There are two types of load balancer: the public load balancer and the internal load balancer. A public load balancer accepts incoming traffic from the public internet, and the best use case for it is to sit in front of your web tier so that it accepts the traffic coming from your clients over the internet. An internal load balancer doesn't have that capability: it only accepts internal traffic coming from your VNets, and it cannot accept internet traffic.
So, in this example, the public load balancer accepts traffic coming from the clients over the internet and distributes it to the different VMs in its backend pool, which is our web tier. These VMs then send their traffic to an internal load balancer, which in turn distributes it across the VMs in the business tier of our architecture. This is a very high-level architecture just to show how we can combine a public load balancer and an internal load balancer in one design.

At this point someone might say: we have so many load-balancing services in Azure (Front Door, Application Gateway, Traffic Manager and now Load Balancer), and it's going to be confusing to pick the right one for a specific use case. For this reason I want to show you a comparison table that makes it easier to select the right Azure load-balancing service. The table compares the services across two dimensions: whether the service is global or regional, and whether the traffic is HTTP or non-HTTP. By answering these two key questions you land on the right service for your architecture. For example, say we have a requirement for a global load-balancing service. In that case we can only choose between Azure Front Door and Azure Traffic Manager, because those are the two global services. Then we ask the second question: what kind of traffic do we need to support? If it's HTTP traffic, the only option is Azure Front Door; if it's non-HTTP traffic, the only option is Traffic Manager. Now let's turn it around and say we want a load-balancing service with a regional scope. Then we only have two services to choose from: Application Gateway or Azure Load Balancer. Asking the second question again, for HTTP traffic we would use Application Gateway, and for non-HTTP traffic we would use Azure Load Balancer. This table is very useful for understanding the key differences between the Azure load-balancing services, and it lets you select the one that fits your requirements and your architecture.

Coming back to our load balancer diagram, at a very high level the load balancer lets you distribute traffic across availability zones and across regions, define health probes to know the health status of the backend instances in the backend pool, and much more, which you will see in detail in the next videos. That's all I have for this high-level overview of Azure Load Balancer. I hope you enjoyed this video, and I will see you in the next one. Thanks for watching.

Hi everyone. In this video we are going to see how to create a public load balancer. We will create a load balancer and two VMs, and configure the load balancer to distribute traffic to those two VMs. But before we get to that, we need a virtual network to put our resources in, so let's go ahead and create a new virtual network and a new resource group.
I'm going to call the resource group RG-Sydney, call the VNet vnet-Sydney as well, and put it in the Australia East region; let's go ahead and create the virtual network. Next I need a NAT Gateway to allow outbound internet connectivity for the resources in the VNet we just created, so let's create a new NAT Gateway, select the Sydney resource group, call it natgateway-Sydney, leave everything else at the default values, and create it.

Now let's create our load balancer. Put it in the Sydney resource group, call it public-load-balancer, and use the Standard SKU, public type and regional tier. On the frontend IP configuration step, add a new configuration called load-balancer-ip, select an IPv4 address, and create a new public IP address called pip, leaving everything else at the defaults; then add the frontend IP configuration. On the backend pools step, add a new backend pool called backends and select the virtual network we created before. Since we don't have any VMs at the moment, I'll leave the backend configuration empty and show you how to associate VMs with this load balancer while we are creating them. Save the backend pool, then go to the inbound rules and add a load-balancing rule: call it rule01, select the frontend IP address of the load balancer and our backend pool, specify port 80 and backend port 80, and create a new health probe called health-probe using the TCP protocol with the default values. Enable the TCP reset setting and add the inbound rule. We don't need anything on the outbound rules tab at the moment, so go all the way down and create the load balancer.

As I said at the beginning, we need two virtual machines to test the load balancer. Let's create the first one: put it in the Sydney resource group, call it vm01, specify a username and password for it, and leave the RDP port open so we can connect remotely. You should never do this in a production environment, or any other environment to be honest; a better way to connect to your VMs is a Bastion host, but that's not in the scope of this video. On the networking tab, make sure the VM is hosted in the VNet we created, then scroll down and click Advanced for the NIC network security group, but don't configure it yet. First tick the load balancing checkbox and select our load balancer and our backend pool. Now go back up to configure the network security group: create a new one, add an inbound rule allowing HTTP traffic on port 80, give it priority 100, add the rule, and add the NSG. Then go all the way down and create vm01.
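For reference, the portal steps above could be expressed roughly like this with the Az PowerShell module. This is a minimal sketch, not the exact commands from the video: the resource names follow this walkthrough, and defaults set by the portal may differ slightly.

```powershell
# Public IP, frontend, backend pool, TCP health probe and a port-80 rule with TCP reset,
# then the load balancer itself (names assumed from this walkthrough).
$rg  = "RG-Sydney"
$loc = "australiaeast"

$pip = New-AzPublicIpAddress -ResourceGroupName $rg -Name "pip" -Location $loc `
         -Sku Standard -AllocationMethod Static

$frontend = New-AzLoadBalancerFrontendIpConfig -Name "load-balancer-ip" -PublicIpAddress $pip
$pool     = New-AzLoadBalancerBackendAddressPoolConfig -Name "backends"
$probe    = New-AzLoadBalancerProbeConfig -Name "health-probe" -Protocol Tcp -Port 80 `
              -IntervalInSeconds 5 -ProbeCount 2
$rule     = New-AzLoadBalancerRuleConfig -Name "rule01" -Protocol Tcp `
              -FrontendPort 80 -BackendPort 80 `
              -FrontendIpConfiguration $frontend -BackendAddressPool $pool `
              -Probe $probe -EnableTcpReset

New-AzLoadBalancer -ResourceGroupName $rg -Name "public-load-balancer" -Location $loc `
  -Sku Standard -FrontendIpConfiguration $frontend -BackendAddressPool $pool `
  -Probe $probe -LoadBalancingRule $rule
```

Associating the VMs with the backend pool is then done on each VM's network interface, or, as in the portal flow above, by ticking the load balancing option while creating the VM.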
For the second VM we do exactly the same: put it in the Sydney resource group, call it vm02, specify a username and password, and leave the RDP port open. On the networking tab, make sure we are using the same VNet we created before, choose the Advanced option for the NIC network security group, tick the load balancing checkbox, and select the load balancer and backend pool we created earlier. Then go up and configure the network security group: add an inbound rule for HTTP traffic with priority 100, add the rule, then go all the way down and create vm02.

To test this scenario we need to install IIS on both VMs, so let's open a Cloud Shell terminal in two tabs and use PowerShell to install IIS. Paste the command that I'm going to leave in the description box and set the parameters: resource group name RG-Sydney and VM name vm01, and install IIS on vm01. Then switch to the second tab, paste the same command, set the resource group name to RG-Sydney and the VM name to vm02, and run it.

Now that IIS is installed on both VMs, go back to the load balancer, browse to the frontend IP configuration and copy the load balancer's IP address. Open a new browser tab and paste the IP address: as you can see, it browses to vm01, and no matter how many times you refresh it will stick to vm01 because of caching. Open an incognito window and paste the IP address again, and this time it goes to vm02. That is how we create a public load balancer and configure it to distribute traffic across multiple VMs. Now let's clean things up: browse to Resource groups, go to RG-Sydney, and delete the resource group so it removes all the resources we created in this lecture. That's all I have for you in this video; thanks for watching, and I will see you in the next one.

Hi everyone. In this video we are going to see how to create an internal load balancer. It is very similar to the previous video where we created the public load balancer, but there are a few configurations you need to make for the internal load balancer scenario to work. As before, we are going to create a load balancer and a few VMs, and of course we need a virtual network to put them in. So let's create a new virtual network and a new resource group first: call the resource group RG-Sydney, call the VNet vnet-Sydney as well, put it in the Australia East region, and create it. Then create a NAT Gateway to allow outbound internet connectivity for the resources we are going to place in the VNet: put it in the Sydney resource group, call it natgateway-Sydney, leave everything else at the defaults, and create it.

Now let's create the load balancer. Put it in the Sydney resource group, call it internal-load-balancer, keep the Standard SKU, use the internal type this time, and keep it regional. On the next step, add a frontend IP configuration called load-balancer-ip; as you can see, the virtual network has already been selected for me.
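For comparison with the public load balancer, an internal frontend is defined on a subnet rather than on a public IP address. A minimal Az PowerShell sketch, with names assumed from this walkthrough:

```powershell
# Internal (private) frontend: the frontend IP lives inside a subnet of the VNet,
# so the load balancer is only reachable from within the network.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "RG-Sydney" -Name "vnet-Sydney"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "default"

$frontend = New-AzLoadBalancerFrontendIpConfig -Name "load-balancer-ip" -Subnet $subnet

New-AzLoadBalancer -ResourceGroupName "RG-Sydney" -Name "internal-load-balancer" `
  -Location "australiaeast" -Sku Standard -FrontendIpConfiguration $frontend
```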
I'll choose the subnet and keep the availability zone as zone-redundant, which is pretty much the default, and add the frontend IP configuration. Next, configure the backend pool: call it backend, and again, since we haven't created any VMs yet, leave it empty; we will assign the load balancer to the VMs at creation time. Save the backend pool and go to the next step to configure the inbound rules. Add a load-balancing rule called rule01, select the frontend IP address of the load balancer and the backend pool, specify port 80, and create a new health probe called health-probe using the TCP protocol, leaving everything else at the defaults. Enable TCP reset and add the inbound rule. We don't need to specify any outbound rules, so go all the way down and create the load balancer.

Now let's create three VMs: two of them will be internal, without any public access from the internet, and the third will be a test VM that lets us test this scenario. Put the first VM in the Sydney resource group, call it vm01, specify a username and password for it, and leave the RDP port open. On the networking tab, make sure we are using the same VNet we created in this video and enable the advanced network security group option; configure it by adding a new inbound rule for HTTP traffic on port 80 with priority 100, and add the network security group. Then tick the load balancing checkbox and select our internal load balancer and its backend pool. One important thing I almost forgot: we want to disable the public IP address on this VM. As you can see, the portal wants to create a new public IP for me, so make sure no public IP address is created for vm01. Now go all the way down and create vm01.

We do exactly the same for vm02: put it in the Sydney resource group, call it vm02, specify a username and password for it, and leave the RDP port open. Then browse to networking, make sure we are using the same VNet we created in this video, use the advanced network security group option, and add a new inbound rule for HTTP traffic on port 80 with priority 100.
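We keep adding the same HTTP-allow rule with priority 100 on each VM's network security group. Roughly the same thing in Az PowerShell would look like the sketch below; the names are assumptions for illustration, not the exact objects the portal creates.

```powershell
# NSG with a single inbound rule allowing HTTP (TCP/80) at priority 100.
$httpRule = New-AzNetworkSecurityRuleConfig -Name "allow-http" -Description "Allow HTTP" `
              -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
              -SourceAddressPrefix Internet -SourcePortRange * `
              -DestinationAddressPrefix * -DestinationPortRange 80

$nsg = New-AzNetworkSecurityGroup -ResourceGroupName "RG-Sydney" -Location "australiaeast" `
         -Name "vm01-nsg" -SecurityRules $httpRule
```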
Add the new rule, then tick the load balancing checkbox, select our internal load balancer and the backend pool we created, and remember not to create a public IP address for vm02 either. Go all the way down and create vm02. The third VM we are going to create is a test VM: put it in the Sydney resource group, call it test-vm, specify a username and password for it, leave the RDP port open, and on the networking tab choose the same VNet we created in this video. I'm not going to specify any load balancing configuration on it; this is simply the VM we will use to connect to the internal load balancer. Go all the way down and create the test VM.

Now we need to install IIS on vm01 and vm02, so open a Cloud Shell terminal in the two tabs and install IIS using the PowerShell script (the command I'm leaving in the description box), specifying resource group RG-Sydney and VM name vm01, and then the same again with vm02, and run both commands. With IIS installed on both VMs, go to the test VM and connect to it over RDP by downloading the RDP file, signing in with the username and password we configured for the test VM. Inside the VM, go to the local server settings and disable the IE Enhanced Security Configuration so we can test the scenario. Now go back to the load balancer, browse to the frontend IP, and copy the IP address of the internal load balancer; then, inside the test VM, open Internet Explorer and paste that IP address. As you can see it directed me to vm01, and as I refresh the page it sometimes brings up vm01 and sometimes vm02. If instead we paste the same IP address into a browser tab outside the VNet and see whether we have any luck hitting the internal load balancer and reaching vm01 or vm02, technically we shouldn't be able to, because we configured an internal load balancer in the first place; this is the main difference between the public load balancer and the internal load balancer. Now let's clean things up: go to Resource groups, select RG-Sydney, and delete the resource group. That's all I have for you in this video; thanks for watching, and I will see you in the next one.

Hi everyone. Before we get further into the course I just wanted to give you a quick heads-up. As you already know, this course is completely hands-on, so you should expect to spend a lot of time in the Azure portal creating different services and configuring different components, and as a result this will add a little bit of cost to your monthly bill; in my case it cost me around 30 Australian dollars to create this course. I created the course in a way that lets you get rid of the resources you create in the Azure portal really quickly, so you don't have to hold on to a resource for too long until you finish the course.
I did this specifically to make each video standalone, with minimal dependency on the previous or next videos. Having said that, in some situations I found it better to keep the resources around for two or three videos, just to save time and avoid recreating resources we are going to use again very soon. This is just a quick note before you get started with my course: if you are not happy adding any more cost to your Azure bill, please don't create any resources and just watch what I'm doing. Thanks for watching, and I will see you in the next video.

Hi everyone. In this section we are going to talk about the load balancer core elements, starting with creating multiple VMs and using inbound NAT rules to direct traffic to each VM separately. We will look at how to configure availability zones for our load balancer, or even better, how to use the zone-redundancy option in Azure Load Balancer, in addition to how to create a global load balancer and a Gateway load balancer and how to integrate a NAT Gateway with our load balancer. Lastly, we will see how to use DDoS protection for our Azure load balancer. Let's get started, and I'll see you in the next video.

Hi everyone. In this video we are going to see how to use the load balancer's inbound NAT rules to direct traffic to specific VMs through the IP address and the port number. We are going to create some VMs and a load balancer, and of course we need to place them in a virtual network. So let's create a virtual network with a new resource group called RG-Sydney, call the virtual network vnet-Sydney as well, put it in the Australia East region, and create it. Then create a NAT Gateway, put it in the Sydney resource group, call it natgateway-Sydney, leave everything else at the defaults, and create it.

Now let's create the two VMs we are going to use in this scenario. Create the first VM in the Sydney resource group, call it vm01, specify a username and password for it, and leave the RDP port open. On the networking tab, make sure it is deployed in vnet-Sydney, use the advanced network security group option, allow an inbound rule for HTTP traffic on port 80 with priority 100, add the rule, and go all the way down. We haven't created the load balancer yet, so we will create the VMs first and associate them with the load balancer from the load balancer side; go ahead and create vm01. Do exactly the same for the second VM: Sydney resource group, vm02, a username and password, RDP port open, the same vnet-Sydney on the networking tab, the advanced network security group option, and an inbound rule for HTTP traffic on port 80 with priority 100. Add the inbound rule (again, we will associate the load balancer while creating it) and create the second VM.

Next we need to install IIS on these two VMs, so open a Cloud Shell terminal for each VM and use a PowerShell script to install IIS; I'm going to leave the script in the description box.
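The actual script used in the videos is the one in the description box. As a hedged illustration only, the same result can be achieved with the Custom Script Extension, along the lines of the Azure quickstart samples; the command below is an assumed equivalent, not the author's exact script.

```powershell
# Install IIS remotely and write the VM's computer name to the default page,
# using the Custom Script Extension (assumed equivalent of the script in the
# description box; names follow this walkthrough).
Set-AzVMExtension -ResourceGroupName "RG-Sydney" -VMName "vm01" -Name "IIS" `
  -Publisher "Microsoft.Compute" -ExtensionType "CustomScriptExtension" `
  -TypeHandlerVersion "1.10" -Location "australiaeast" `
  -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
```

Run the same command again with `-VMName "vm02"` for the second VM.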
Change the parameters (resource group name RG-Sydney and VM name vm01) and run the command, then do exactly the same on the second VM with RG-Sydney as the resource group and vm02 as the VM name. While we wait for IIS to be installed on these VMs, let's create the load balancer: put it in the Sydney resource group, call it load-balancer, Australia East region, Standard SKU, public and regional. On the next step create a frontend IP configuration called load-balancer-ip, and create a public IP address to be the frontend of the load balancer; call it public-ip, leave everything at the defaults, and add the frontend IP. On the backend pool step, add a new pool called backend, select the vnet-Sydney network we created before, and this time add some resources to the backend: the two VMs we just created will be in the backend pool of our load balancer. Add the backend pool and save the changes. On the load-balancing rules step, add a new rule called rule01: select the frontend IP of the load balancer and the backend pool, port 80 and backend port 80, create a health probe called health-probe with the TCP protocol, enable TCP reset, and then go all the way down and create the load balancer.

Now the load balancer is created and the two VMs have IIS installed. Browse to the load balancer's frontend IP and open the public IP address in a browser: it opens vm01, and as you refresh you get vm02. So far there is no difference from what we have done before: a load balancer distributing traffic to two instances in the backend pool. Now we want to use inbound NAT rules so that the load balancer directs traffic to a specific VM instance based on the port number, so let's see how to do this; the sketch below shows the end state we are working toward.
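Roughly, the two NAT rules we are about to configure could be expressed like this with Az PowerShell. This is a hedged sketch with names from this walkthrough; note that the rules still have to be bound to each VM's NIC IP configuration, which the portal does for you when you pick the target VM.

```powershell
# Port-based forwarding: traffic hitting the load balancer's public IP on port 81
# goes to vm01 on port 80, and traffic on port 82 goes to vm02 on port 80.
$lb = Get-AzLoadBalancer -ResourceGroupName "RG-Sydney" -Name "load-balancer"

$lb | Add-AzLoadBalancerInboundNatRuleConfig -Name "vm01rule" `
       -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
       -Protocol Tcp -FrontendPort 81 -BackendPort 80 -EnableTcpReset

$lb | Add-AzLoadBalancerInboundNatRuleConfig -Name "vm02rule" `
       -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
       -Protocol Tcp -FrontendPort 82 -BackendPort 80 -EnableTcpReset

$lb | Set-AzLoadBalancer   # push the new rules to Azure
```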
Browse to the inbound NAT rules and add a new NAT rule. I'll call this one vm01rule, select the virtual machine and its private IP address, and then select the public IP address of the load balancer. Next we specify which port the load balancer should accept the traffic on before forwarding it to vm01, so I'll specify frontend port 81, and for the backend port, port 80, because we are using HTTP. But as you can see, the portal doesn't allow it, because the backend port has to be unique across inbound NAT rules and load-balancing rules: when we created the load balancer we already created a rule on port 80 for these VMs, and we cannot have duplicate rules on the same ports. So what I'm going to do is delete that load-balancing rule so we can add inbound NAT rules for specific VMs. If you want to have both, you would need to configure the VMs to listen on multiple ports for the traffic coming from the load balancer, but that's not the main scope of this video. Now browse back to the inbound NAT rules and add a NAT rule called vm01rule: select vm01 and its private IP address, the public IP of the load balancer, frontend port 81 and backend port 80. This time there is no warning, so add the NAT rule. Then add a second NAT rule for vm02: select vm02 and its private IP address, the frontend IP of the load balancer, port 82, backend port 80, and add it.

Now that the two rules are created, let's test. Take the frontend IP of the load balancer and add port 81: exactly as we configured, any traffic coming in on port 81 is directed to vm01. Try the same with port 82 and it gets directed to vm02, just as configured. This is how we use inbound NAT rules to route traffic to different VMs based on the port number. Now let's clean things up: go to Resource groups and delete the Sydney resource group. That's all I have for you in this video; thanks for watching, and I will see you in the next one.

Hi everyone. In this video we are going to deploy multiple VMs into one availability zone and then use a load balancer to distribute traffic evenly across all of the VMs in that specific zone. Of course we need a virtual network for our resources, so create one with a new resource group called RG-Sydney and a VNet called vnet-Sydney in the Australia East region, and create it. Then create a NAT Gateway to allow outbound internet connectivity for all the resources in the VNet: put it in the Sydney resource group, call it natgateway-Sydney, leave the defaults, and create it. Now let's create two VMs. For the first one, put it in the Sydney resource group, call it vm01, and when you get to the availability options, choose Availability zone and select Zone 1.
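Pinning a VM to a specific zone from PowerShell is essentially just the -Zone parameter; a minimal sketch with assumed names and image:

```powershell
# Create a VM pinned to availability zone 1 (simplified New-AzVM parameter set).
$cred = Get-Credential   # the VM's username and password

New-AzVM -ResourceGroupName "RG-Sydney" -Location "australiaeast" -Name "vm01" `
  -VirtualNetworkName "vnet-Sydney" -SubnetName "default" `
  -Image "Win2019Datacenter" -Credential $cred -Zone 1
```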
This is the zone I'm going to use to deploy all of the VMs, and the load balancer as well: one specific zone holding all of the resources, with the load balancer distributing traffic evenly to all of the VMs in that zone. Now specify a username and password for the VM and leave the RDP port open. On the networking tab make sure we are using vnet-Sydney, select the advanced network security group configuration, allow an inbound rule for HTTP traffic on port 80 with priority 100, and add the rule. We don't have a load balancer yet, so we will associate it later on; go ahead and create the VM. Do exactly the same for the second VM: Sydney resource group, vm02, availability zone 1, a username and password, RDP port open, vnet-Sydney on the networking tab, the advanced network security group configuration, and an inbound rule for HTTP traffic on port 80 with priority 100. Add the rule (again, we will associate the load balancer later) and create the VM.

As you already know, we need to install IIS on these VMs to test the scenario, so open a Cloud Shell terminal in the two tabs and run the PowerShell script from the description box, adjusting the parameters: resource group RG-Sydney and VM name vm01, then RG-Sydney and vm02, and run both commands. While we wait for IIS to install, let's create the load balancer: put it in the Sydney resource group, call it zone1-load-balancer, Australia East, Standard SKU, public and regional. On the next step add a frontend IP configuration called zone1-load-balancer-ip and create a public IP address called zone1-public-ip, and here we make a change. Usually we go for zone-redundant when it comes to the availability zone, but in this scenario we want to deploy everything into a single availability zone: the load balancer and all of the VMs are going to be in zone 1.
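The zonal versus zone-redundant choice is just the -Zone setting on the Standard public IP; a hedged sketch with assumed names:

```powershell
# A zonal frontend: the public IP (and therefore the frontend) lives only in zone 1.
New-AzPublicIpAddress -ResourceGroupName "RG-Sydney" -Name "zone1-public-ip" `
  -Location "australiaeast" -Sku Standard -AllocationMethod Static -Zone 1

# For a zone-redundant frontend you would pass all zones instead: -Zone 1,2,3
```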
So I'll change the availability zone to Zone 1 and add the IP. On the next step, add a backend pool called backend, select the VNet of our resources, add the VMs we created before, and save the backend pool. Then add the load-balancing rule: call it load-balancing-rule, select the frontend IP address of the load balancer and the backend pool, port 80 and backend port 80, create a health probe called health-probe using the TCP protocol, add it, and enable TCP reset. Now go all the way down and create the load balancer.

Once the load balancer has been created, browse to the frontend IP and open the public IP address in a browser: as you can see, we hit our VMs in zone 1. So in this scenario we deployed all of our resources, the load balancer and the VMs, into zone 1 of the region. I am not going to delete these resources, because I will need them in the next few videos to show you a multi-zone load balancer and a cross-region (global) load balancer. I hope you enjoyed this video; thanks for watching, and I will see you in the next one.

Hi everyone. In this video we are going to create a load balancer and use it to distribute traffic across VMs in different availability zones. Let's start by creating two more VMs. For the first one, put it in the Sydney resource group, call it vm03, and for the availability options choose Availability zone and select Zone 2. Specify a username and password, leave the RDP port open, and on the networking tab make sure it is deployed in vnet-Sydney; under the advanced network security group configuration allow an inbound rule for HTTP on port 80 with priority 100, add the rule, and go all the way down and create vm03. Now create the fourth VM: Sydney resource group, vm04, and for the availability option choose Availability zone and put it in Zone 3.
Specify a username and password, leave the RDP port open, and on the networking tab double-check it will be deployed in vnet-Sydney; choose the advanced network security group configuration, add an inbound rule allowing HTTP traffic on port 80 with priority 100, add the rule, and go all the way down and create the VM. Now we need to install IIS on these two VMs: open a Cloud Shell terminal in the two tabs, paste the PowerShell script from the description box, set the resource group name to RG-Sydney and the VM name to vm03, run it, then do the same in the other terminal with RG-Sydney and vm04.

Now let's create the load balancer. Put it in the Sydney resource group and call it multizone-load-balancer, because the VMs that will be associated with it sit in different availability zones, zone 2 and zone 3. It is going to be Standard, public and regional. On the frontend IP configuration step, name the frontend multizone-load-balancer-ip and create a public IP address called multizone-public-ip; for the availability zones we are going to keep it zone-redundant, because we are using this load balancer to distribute traffic to VMs in different availability zones, so the zone-redundant option fits better than deploying the load balancer into a specific zone. Add the frontend IP, go to the next step, add a backend pool called backend, select vnet-Sydney, add the two VMs we created in this video (vm03 and vm04), and save the backend pool. On the next step add a load-balancing rule called load-balancing-rule: select the frontend IP address of the load balancer and the backend pool, port 80 and backend port 80, create a health probe called health-probe using the TCP protocol, enable TCP reset, add the rule, and go all the way down and create the load balancer.

Once the load balancer has been created, browse to its IP configuration and open the public IP in a browser: it hits vm03 and vm04, because that is how we configured the load balancer, distributing traffic to VMs in different availability zones. Again, I'm not going to delete the resources in this video, because I will use them in the next one to show you how to create a cross-region, or global, load balancer. That's all I have for you in this video; thanks for watching, and I will see you in the next one.

Hi everyone. In this video we are going to create a global load balancer, also called a cross-region load balancer. Going to the load balancers list, in previous videos we already created two load balancers: the zone1 load balancer, which we created to distribute traffic to VMs in the same availability zone, and the multi-zone load balancer, which distributes traffic to VMs in different availability zones. Now we are going to take it one step further and create a global load balancer that distributes traffic to these two load balancers.
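As a hedged sketch of the cross-region pieces: the global tier applies to both the public IP address and the load balancer itself, and the resource must live in one of the supported home regions. The names below are assumptions, and the -Tier parameters depend on having a recent Az.Network version.

```powershell
# Cross-region (global) load balancer skeleton (assumed names; home region East US 2).
$rg  = "RG-Sydney"
$loc = "eastus2"   # must be one of the supported home regions for global load balancers

$gpip = New-AzPublicIpAddress -ResourceGroupName $rg -Name "global-lb-pip" -Location $loc `
          -Sku Standard -Tier Global -AllocationMethod Static

$gfe  = New-AzLoadBalancerFrontendIpConfig -Name "global-load-balancer-ip" -PublicIpAddress $gpip

New-AzLoadBalancer -ResourceGroupName $rg -Name "global-load-balancer" -Location $loc `
  -Sku Standard -Tier Global -FrontendIpConfiguration $gfe

# The backend pool of a global load balancer then points at the frontend IP
# configurations of the regional load balancers (done in the portal in this walkthrough).
```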
So let's create our global load balancer: put it in the Sydney resource group and call it global-load-balancer. There is one important point you need to be aware of with global load balancers, and the portal doesn't do enough validation for it: we are only allowed to use specific regions, so-called home regions, for cross-region (global) load balancers. If you decide to use any other region, the portal (as of March 2023) will not give you a validation error; it lets you keep going, and then the deployment fails at the end. So from the get-go, make sure you use one of the home regions when you create a global load balancer. I'm going to use East US 2 for my global load balancer, with the Standard SKU, public type and global tier. On the next step, add a frontend IP configuration called global-load-balancer-ip, create a public IP address called global-load-balancer-public-ip, and add it.

Now go to the backend pools and add one, and as you might notice, it already looks different from the backend pool of a regional load balancer. Call it backend-pool. This is one of the key differences of a global load balancer: we are allowed to select other load balancers as the backend targets. So I'll add the multi-zone load balancer as a backend target, selecting its frontend IP address, and do the same for the zone1 load balancer, adding it as a backend target in my backend pool. Save the backend pool, go to the next step, and create a load-balancing rule called load-balancing-rule: select the frontend IP address of the load balancer and the backend pool, use port 80, and enable TCP reset. Add the rule, go all the way down, and create the global load balancer.

Once it has been created, browse to the IP address of the global load balancer and open it in a browser: we get vm01. Again, because of the browser cache, let's copy the address into an incognito window, where we get vm04, because this load balancer distributes the traffic to the other load balancers we created before. It is a load balancer for load balancers, so to speak. That's all I have for you in this video; again, I'm not going to delete the resources because I will use them in the next one. I hope you enjoyed it, thanks for watching, and I will see you in the next video.

Hi everyone. In this video we are going to see how to create a Gateway load balancer. Go to the load balancers list and create a new one: put it in the Sydney resource group, call it gateway-load-balancer, and select the Gateway SKU with the internal type and regional tier. On the frontend IP configuration step, add a new configuration called gateway-load-balancer-ip, put it in the default subnet, make it zone-redundant, and add the frontend IP configuration. Now let's go to the backend pools.
Create a new backend pool, leave everything at the default values, add some backend targets (a few VMs), and save the backend pool. Then go to the inbound rules and add a load-balancing rule: call it rule, select the frontend IP address of the load balancer and our backend pool, create a health probe called health-probe using the TCP protocol, and add the rule. Now go all the way down and create the Gateway load balancer.

With the Gateway load balancer ready, let me show you how it is used with other load balancers. Go to the zone1 load balancer and browse to its frontend IP configuration: there you will see a Gateway load balancer option, and from here we can associate the Gateway load balancer with the regional load balancer we created before. Save the changes. This is not the only use case: if we browse to one of our virtual machines, say vm01, go to Networking, open the VM's network interface and look at its IP configurations, we can associate the Gateway load balancer with the VM as well. These are the different use cases of the Gateway load balancer. Again, I'm not going to delete these resources because I need them for the next video. Thanks for watching, and I will see you in the next one.

Hi everyone. In this video we are going to see how to integrate a load balancer with a NAT Gateway. Browse to the NAT Gateway we created in a previous video and go to its outbound IP addresses: as you can see, we don't have an outbound IP address yet, so let's create a new public IP address called natgateway-public-ip, add it, and save the changes. Now the NAT Gateway has an outbound IP address, which will be used for all outbound traffic from our VNet and subnet to the public internet. To check, go to Virtual machines, connect to vm04 over RDP using the username and password we set when creating it, go to the local server settings and disable IE Enhanced Security Configuration so we can test, then open Internet Explorer and browse to a what's-my-IP website. The address shown matches the outbound public IP address we created for the NAT Gateway, which shows that all outbound internet connections are presented by that outbound IP address. That's all I have for you in this video; thanks for watching, and I will see you in the next one.

Hi everyone. In this video we are going to see how to set up DDoS protection for our load balancer. Search for DDoS protection plans and create a new one: put it in the Sydney resource group, call it ddos-plan, select the Australia East region, and create the DDoS protection plan. Unlike Azure Front Door or Application Gateway, this kind of WAF or DDoS protection is not specified on the load balancer itself; instead it is configured at the VNet level. So go to the VNet we created before, open DDoS protection, enable it, and from there select the DDoS plan we created and save the changes.
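The same two steps can be sketched in Az PowerShell as follows; this is a hedged illustration with assumed names, following the documented pattern of enabling the plan on the virtual network.

```powershell
# Create a DDoS protection plan and enable it on the VNet.
$plan = New-AzDdosProtectionPlan -ResourceGroupName "RG-Sydney" -Name "ddos-plan" `
          -Location "australiaeast"

$vnet = Get-AzVirtualNetwork -ResourceGroupName "RG-Sydney" -Name "vnet-Sydney"
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork
```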
This provides DDoS protection for all resources hosted in this VNet, and since our load balancer is hosted in this VNet, it will be protected from DDoS attacks. That's it for this topic. Now let's clean things up; we have been using quite a few resources over the last few videos, so go to the Sydney resource group and delete it to remove everything we created. That's all I have for you in this video; thanks for watching, and I will see you in the next one.

Hi everyone. In this section we are going to talk about the load balancer components. We will see in more detail the different SKUs available for Azure Load Balancer and the different types as well, like internal, global and Gateway load balancers. We will dive deep into backend pools and the frontend IP configuration, and we will look at zone redundancy and the inbound and outbound rules. Let's get started and jump straight into it.

Hi everyone. In this video we are going to dive into the load balancer configuration, specifically the basic settings. Browse to load balancers and start creating one. I'm not going to create a new load balancer any more, because we have created so many in previous videos, but I will walk you through the different steps and settings you need to be aware of. Of course you need to specify a resource group, a name and a region for your load balancer. Then you get to pick the SKU, choosing from three options: Standard, Basic or Gateway. Let's leave Gateway aside for now, since we cover it in more detail in a separate video. For the main difference between Basic and Standard, you can find more details at the link shown here, which compares the main features of the Basic and Standard load balancers. At a very high level, the Standard load balancer is what should be used in production workloads, because it supports high network performance and zone redundancy, while the Basic load balancer doesn't support many of these options, as you can see in the table. It is important to understand the main differences between these two SKUs before you go ahead and create your load balancer in Azure.

Next you choose the type of the load balancer: public or internal. We have already seen the difference in previous videos: a public load balancer can accept traffic coming from the public internet, while an internal load balancer is restricted to accepting traffic from within your VNets. Finally, you choose the tier of the load balancer, regional or global. I said it before and I will say it again: for a global load balancer, make sure you host it in one of the Azure home regions. The portal will let you pick something else; for example, I can go ahead and specify the settings for a global load balancer in the Australia East region and the portal will accept it, but when it comes to deployment it will fail, because Australia East is not one of the Azure home regions for global load balancers.
It's important to keep that in mind. That's all I have for the basic settings of our load balancers; thanks for watching, and I will see you in the next video.

Hi everyone. In this video we are going to talk about the frontend IP configuration of load balancers. Go to load balancers and create a new one: put it in a resource group, call it load-balancer, select the public type, and go to the next step. Now add a frontend IP configuration. Here we specify a name for the configuration, then choose the IP version, which can be IPv4 or IPv6, and then the IP type, which can be an IP address or an IP prefix; we will look at both in this video. For the public IP address assignment, if you already have a public IP address in your subscription you can select it here; if you don't, like me, you will need to create a new public IP address for your load balancer. The first thing to do is give the public IP address a name, and you will see that some selections have already been made for us that we cannot change. The SKU of the public IP address can be either Basic or Standard: Standard should be used in production workloads, and the Basic SKU does not support zone redundancy. The public IP address here can also be regional or global, and this is linked to the load balancer tier we specified in the first step: a regional load balancer gets a public IP address in the regional tier, and if we had created our load balancer as a global load balancer, the public IP would be in the global tier. For the availability zone, you should go for zone-redundant if you want a resilient load balancer and a resilient architecture; having said that, you can still choose a specific zone to deploy into if you want, or choose no zone at all. If we had selected a Basic load balancer in the first step, we wouldn't get a chance to specify a zone. For the routing preference, we can choose between routing the traffic through the Microsoft global network or routing it through the public internet (your internet service provider); the recommendation is to choose the Microsoft global network. These are the settings we go through when we create a public IP address for the load balancer.

Now for the second option, the IP prefix. The main difference with an IP prefix is that you reserve a range of Azure public IP addresses instead of just one public IP address. Similar to the IP address type, you specify the zone-redundancy (availability zone) option, and again zone-redundant is the best option for a resilient load balancer architecture. Finally, for the Gateway load balancer: you might remember from a previous video that after creating the Gateway load balancer we were able to assign it to our standard load balancer from its frontend configuration; you can also assign a Gateway load balancer to your load balancer here at creation time.
Since I don't have any Gateway load balancer at the moment, there is nothing to choose from for now. These are the different options and settings you need to be aware of when you create your frontend IP configuration. That's all I have for you in this video; thanks for watching, and I will see you in the next one.

Hi everyone. In this video we are going to talk about backend pools in Azure Load Balancer. As a first step, let's create a virtual network: create a new resource group called RG-backend, call the VNet vnet-Sydney, put it in the Australia East region, and create it. Then browse to load balancers and create a new one: put it in RG-backend, call it lb, Australia East, public and regional. On the frontend IP configuration step, call the frontend ip and the new public IP address public-ip, then go to the next step, backend pools. As you can see, we can add a backend pool to our load balancer; call it backend, for example. Here is why I created a virtual network in this video: we need to specify a virtual network for the backend pool. Then we get to choose one of two options: either we configure the backend pool based on the NICs (the network interface cards of our virtual machines), or we configure it based on the IP addresses of our virtual machines. These are the two options available for configuring the backend pool of our load balancer. Since I don't have any resources at the moment, I won't add anything to the backend pool; I just wanted to show you the different backend targets you can have in a backend pool. That's all for this video, so let's clean things up: go to Resource groups, go to RG-backend, and delete the resource group. Thanks for watching, and I will see you in the next one.

Hi everyone. In this video we are going to talk about the inbound rules, and to be more specific, the load-balancing rule. Create a load balancer: put it in a resource group, call it lb, Standard, public, regional; create a frontend IP configuration called ip with a new public IP address called pip, and add an empty backend pool. Then go to the inbound rules step and add a load-balancing rule. As we have seen before, we specify a name for the rule, say rule01, then choose the IP version, then select the frontend IP address of the load balancer and the backend pool we want to send the traffic to. For the protocol, it can be either TCP or UDP; there is no HTTP or HTTPS, as you might notice, and the reason is that the load balancer operates at layer 4 and does not support layer-7 HTTP/HTTPS traffic.
So let's keep the protocol set to TCP. For the port, this is the port number associated with the frontend IP address of the load balancer, and the backend port is the port on the instances in the backend pool that we want the load balancer to send the traffic to; let's put port 80 for both. The health probe is the way for the load balancer to know the health status of the backends in the backend pool, so that it only forwards requests to the healthy backend instances; we will dive deeper into health probes in a future video, but for now just create a new TCP probe called health-probe. Next comes session persistence, which is a way to keep the traffic between a client and a particular virtual machine in the backend pool together. We have three options: no session persistence at all; session persistence based on the client IP address, so that successive requests coming from the same client IP address are directed to and handled by the same VM in the backend pool; or a combination of client IP address and protocol, which ensures that all successive requests coming from the same client IP address and the same protocol are handled by the same VM in the backend pool. Let's keep it at None for now. The idle timeout is how long to keep a connection open without relying on the client to send keep-alive messages. Then we come to TCP reset, which we will cover in more detail in a future video: it allows the load balancer to send TCP resets to create more predictable application behavior when a connection goes idle. After that we have floating IP, which we will also cover in more detail in a future video. The final option here is outbound SNAT, and you should go with the recommended option of using outbound rules to provide the backend pool with access to the public internet. These are the different options you work through when you add a new load-balancing rule to an Azure load balancer; in the next video we will see how to add an inbound NAT rule. Thanks for watching, and I'll see you in the next one.
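The rule options just described map roughly onto parameters of the rule-config cmdlet. A hedged sketch, reusing the $frontend, $pool and $probe objects from the earlier sketch (all names assumed):

```powershell
# A load-balancing rule with client-IP session persistence, a 15-minute idle timeout
# and TCP reset enabled. LoadDistribution values: Default | SourceIP | SourceIPProtocol.
# -DisableOutboundSNAT corresponds to the recommended "use outbound rules" option.
$rule = New-AzLoadBalancerRuleConfig -Name "rule01" -Protocol Tcp `
          -FrontendPort 80 -BackendPort 80 `
          -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe `
          -LoadDistribution SourceIP -IdleTimeoutInMinutes 15 -EnableTcpReset `
          -DisableOutboundSNAT
```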
Hi everyone, in this video we are going to talk about the inbound NAT rules. Let's create a new load balancer: put it in a resource group, call it lb-australia-east, Standard, Public, Regional, create a front-end IP configuration called ip with a new public IP address called pip, and create an empty backend pool called backend. Now let's go to the inbound rules step. In the previous video we talked about the load balancing rule; now we are going to add an inbound NAT rule, and you are going to see the difference. The first thing we specify is a name for our NAT rule, let's call it rule01, and there are two target types for a NAT rule: an Azure virtual machine or a backend pool, and we are going to see both in this video. Let's start with the Azure virtual machine target. We need to specify the target virtual machine for the rule, then the front-end IP address of our load balancer and the front-end port, then the service, let's say HTTP, and then the backend port, which we will make 80; the backend port is the port on the VM that the load balancer will send the traffic to. Then we choose the protocol, again either TCP or UDP, because the load balancer operates at layer 4 and doesn't support HTTP or HTTPS traffic, and finally the same settings we saw in the load balancing rule: enable or disable TCP reset, specify the idle timeout, and enable or disable floating IP. These are the settings you need to be aware of if you create an inbound NAT rule targeting an Azure virtual machine. Now let's see what we do if we go with a backend pool instead. For the backend pool target we choose the backend pool for our inbound NAT rule, then, same as before, the front-end IP address of our load balancer and the starting front-end port number. As you can see we currently have zero instances in the backend pool, and this is where we specify the maximum number of VMs in the backend pool; then again the backend port on the VMs in the backend pool, the protocol (TCP or UDP), and the same remaining settings as before. That's all I have for you on the inbound NAT rule, and we have already created a NAT rule in a previous video. Thanks for watching and I'll see you in the next video.

Hi everyone, in this video we are going to talk about the outbound rules in the load balancer. Let's create a new one: put it in the Sydney resource group, call it lb, Standard, Public, Regional, add a front-end IP configuration called ip with a new public IP address called pip, create an empty backend pool called backend, skip the inbound rules, and go straight to adding an outbound rule. For the outbound rules it's important to mention that this tab is only valid for public load balancers; it's not available for internal load balancers or Basic load balancers. Having said that, it's not recommended to provide outbound connectivity for the backend pool instances through outbound rules on the load balancer; the recommended way, as we have done before in our videos, is to create a NAT gateway to provide outbound internet access for the backend pool. However, I'm going to show you how you can define an outbound rule in the load balancer. First we specify a name for our rule, let's call it rule01, then we choose the IP version, IPv4 or IPv6, then we select the front-end IP address of our load balancer, then the protocol, TCP, UDP or All, then we specify the idle timeout and whether TCP reset is enabled, and of course we need to choose the backend pool. When it comes to port allocation we have two options: either we manually configure the number of outbound ports, which is the recommended way of doing it, or we use the default number of outbound ports; the risk of using the default number is that we might run into SNAT port exhaustion, and that's why it's better to stay in control of the number of outbound ports manually. As you can see, we can choose the allocation per instance or based on the maximum number of backend pool instances; if we choose it per instance we specify how many ports per instance, let's say 1,000. This is how we configure an outbound rule in the load balancer to provide outbound internet connectivity to the instances in the backend pool; however, it's not recommended to configure outbound rules on the load balancer itself, it's better to create a NAT gateway and let it handle the outbound connectivity. That's all I have for you in this video, thanks for watching and I will see you in the next video.
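To see why manually choosing ports per instance matters, here is a tiny Python sketch of the arithmetic involved. It is only an illustration; the 64,000 ports-per-front-end-IP figure is an assumption based on the value commonly quoted in the Azure documentation, so verify it against the current docs before relying on it.

    # Illustrative arithmetic behind manual outbound port allocation.
    # Assumption: roughly 64,000 SNAT ports are usable per public front-end IP.
    SNAT_PORTS_PER_FRONTEND_IP = 64_000

    def max_backend_instances(ports_per_instance: int, frontend_ip_count: int = 1) -> int:
        """How many backend instances can be covered if each one gets a fixed
        number of SNAT ports drawn from the available front-end IP addresses."""
        total_ports = SNAT_PORTS_PER_FRONTEND_IP * frontend_ip_count
        return total_ports // ports_per_instance

    # With 1,000 ports per instance and a single front-end IP, the outbound rule
    # can cover at most 64 backend instances; a second front-end IP doubles that.
    print(max_backend_instances(1_000))      # 64
    print(max_backend_instances(1_000, 2))   # 128

The trade-off is exactly what the video describes: more ports per instance means fewer instances can share a front-end IP, while the default allocation hides that trade-off until you hit SNAT exhaustion.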
Hi everyone, in this video we are going to talk about availability zones in the Azure load balancer. Let's create a load balancer: put it in the Sydney resource group, call it lb-australia-east, Standard, Public, Regional, add a front-end IP configuration called ip with a new public IP address called pip, and here is the option we are going to talk about in this video: the availability zone. If you remember, in previous videos we had a few options to choose from for the availability zone of the public IP address of our load balancer, and the most resilient option is Zone-redundant; it's the recommended option for production deployments, and we are going to talk a little bit more about why zone redundancy is such a good option. In any Azure region that supports them there are going to be three or more availability zones, zone 1, zone 2 and zone 3, and when we configure the availability zone of the load balancer's public IP address to be zone-redundant we get a single front-end IP address that serves traffic to all healthy backend instances in any availability zone; no matter where they are, we can send traffic to those instances as long as they are healthy and sitting in a healthy zone. Now let's assume we have a complete failure in one of the availability zones: using that same single front-end IP address, we will still be able to direct traffic to all healthy instances in the remaining healthy availability zones, which in this case means zone 2 and zone 3.
We should be able to keep serving traffic to all healthy instances in those two zones. Let's take it one step further and say that zone 1 and zone 2 are both completely down, so only zone 3 is available: as long as we have configured the availability zone of the public IP address to be zone-redundant, that single front-end IP will still send traffic to the healthy instances in zone 3, as long as zone 3 itself is healthy. So, briefly, a single front-end IP address with the zone-redundant option lets us survive the failure of multiple availability zones, as long as we have at least one healthy availability zone with healthy instances in it, and that is the power of the zone-redundant option for the load balancer's public IP address. Back in the Azure portal there are other options we could choose from: we could pin the public IP address to a specific zone, either 1, 2 or 3, and this is called a zonal deployment. With a zonal deployment we get a front-end IP address per load balancer in each availability zone, responsible for sending traffic to the healthy instances in that zone only; the front-end IP address in zone 1 will not be able to send traffic to healthy instances in zone 2, because the public IP address is deployed to a single zone, and likewise we would have a separate front-end IP address for the load balancer in zone 2 and another for zone 3, each of them separate and independent of the others. If we want some kind of load balancing across all of these zones we could put Traffic Manager on top of them to send traffic to the healthy load balancers in the different availability zones. That is the zonal deployment of the load balancer's public IP address. Going back to the portal, the last option we have is No zone, where we simply create the public IP address without attaching it to a specific zone. So these are the different availability zone options we can choose from for our public IP address. It's important to know some limitations: we cannot change the zone once the public IP address has been created, so if we created it in zone 1 we cannot move it to zone 2 later; if we configured it as a zonal deployment we cannot change it to zone-redundant afterwards, and the other way around, if we configured it as zone-redundant we cannot change it to a zonal deployment later on. This is a known limitation of the zone redundancy option across different Azure services. That's all I have for you in this video, thanks for watching and I will see you in the next video.
Hi everyone, in this video we are going to talk about the cross-region load balancer, which some people like to call the global load balancer. It's important to mention that as of March 2023 the cross-region load balancer is still in preview and is not recommended for production workloads yet. The main use case to consider it for is when you have regional load balancers deployed in different Azure regions and you want a load balancer that sits on top of all of them and distributes the incoming user traffic across them; so the main use case for the cross-region load balancer is to enable geo-redundant high-availability scenarios where your incoming traffic comes from multiple places or regions around the world. One of the main benefits we get with a cross-region load balancer is instant global failover to the next optimal region, and we are going to see how. Let's assume we have three regional load balancers deployed in different regions around the world, and we have deployed a cross-region load balancer that sits on top of all three. Now let's assume a failure happens in one of these regional load balancers: the user gets directed to the closest regional load balancer, and when I say closest I don't mean closest from a geographic perspective but from a network latency perspective. This is what the cross-region load balancer does: by sending health probes and health checks to the regional load balancers it can detect a failure in any of them, and once it detects a failure it takes that regional load balancer out of rotation and sends the traffic to the next optimal destination based on where the originating traffic comes from. Having a cross-region load balancer also gives you the ability to scale up and down behind each of these endpoints separately, because for each regional load balancer we can add or remove backend instances as much as we like based on the workload we are receiving from clients. We also get one static global IP address for the cross-region load balancer, which serves the traffic to all of these regional load balancers in different regions around the world. Another key feature of the cross-region load balancer is client IP address preservation: when a client sends a request to the cross-region load balancer, it forwards the client IP address on to the regional load balancer and then to the backend applications, so if you have algorithms or filtering based on the user location you will still be able to do that. And as we said before, when you deploy your cross-region load balancer you need to make sure you deploy it in one of the home regions, because if you deploy your global load balancer in a participating region the portal won't complain, as of March 2023, but the deployment will fail in the end, so you need to be mindful of that. Now let's talk a little bit about the limitations of the cross-region load balancer: first of all it only works for public scenarios, we cannot have an internal cross-region load balancer as of now; also we cannot have an internal load balancer as a backend target in the backend pool of the cross-region load balancer; and UDP traffic is not supported, only TCP traffic. That's all I wanted you to know about the cross-region load balancer, thanks for watching and I will see you in the next video.
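As a rough mental model of the failover behavior just described, here is a purely illustrative Python sketch, not Azure's implementation: among the regional endpoints that still pass their health checks, pick the one with the lowest measured latency. The region names and latency figures are made up.

    # Illustrative "fail over to the next optimal region" selection.
    regional_endpoints = {
        "australiaeast": {"healthy": False, "latency_ms": 12},   # assume this region just failed
        "southeastasia": {"healthy": True,  "latency_ms": 95},
        "westeurope":    {"healthy": True,  "latency_ms": 240},
    }

    def pick_region(endpoints: dict) -> str:
        healthy = {name: info for name, info in endpoints.items() if info["healthy"]}
        if not healthy:
            raise RuntimeError("no healthy regional load balancer available")
        # choose the healthy region with the lowest network latency
        return min(healthy, key=lambda name: healthy[name]["latency_ms"])

    print(pick_region(regional_endpoints))   # -> southeastasia

The real service makes this decision continuously from its own health probes and latency measurements, which is why the failover appears instant to clients hitting the single global IP.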
Hi everyone, in this video we are going to talk about the Gateway load balancer. If you remember, we have three types of load balancers: public, internal and Gateway. The Gateway load balancer is built for scenarios that require high performance and high availability and need integration with third-party network virtual appliances, such as firewalls, packet analysis, traffic mirroring, DDoS protection and so on. The way it works, as we have seen in a previous video, is that we chain the Gateway load balancer to the other load balancers we have in our solution; this allows us to easily add or remove network appliances in the network path without changing the whole architecture, because everything is attached to the Gateway load balancer, which is chained to the Standard load balancer, and we can attach as many network appliances to the Gateway load balancer as we like. As for the supported partners, this list shows the different partners whose network appliances you can integrate with your Gateway load balancer, which is then chained to your Standard load balancer. That's all I have for you on the Gateway load balancer, thanks for watching and I will see you in the next video.

Hi everyone, in this section we are going to talk about inbound connectivity in Azure load balancer. We are going to look at the inbound NAT rules in more detail and at the high availability ports, how we can configure multiple front ends and use floating IP for inbound connections, and we are also going to touch on TCP reset in Azure load balancer. Let's get started and jump straight into it.

Hi everyone, in this video we are going to talk about the inbound NAT rule. We have talked about it in previous videos already, but we are going to cover it from a different perspective. Let's create a load balancer; you don't have to follow along with me, I'm just creating a dummy load balancer to help me explain the different areas. I'm going to create a new resource group called RG-Sydney, call my load balancer lb, put it in the Australia East region, Standard, Public, Regional, then create a front-end IP configuration called ip with a new public IP address called pip, create an empty backend pool called backend and save it. I'm not going to define any inbound or outbound rules at the moment, so let's go all the way down and create the load balancer. Now my load balancer is ready, so let's open it and go to inbound NAT rules. I want you to keep in mind that there is a specific use case where inbound NAT rules are extremely useful: port forwarding. Port forwarding allows you to connect to a specific virtual machine in the backend pool by using a combination of the front-end IP address and a port number, and we have already done it in a previous video, so feel free to go back and review it if you want. What we did in that video is configure a different port number for each virtual machine: when the incoming traffic hits the front-end IP address of the load balancer on port 500, the traffic is forwarded to the first virtual machine, and when the traffic comes in on port 501, the traffic is forwarded to the second virtual machine.
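To make that port-forwarding behavior concrete, here is a tiny illustrative Python sketch, not the video's code and not anything Azure exposes: an inbound NAT rule is essentially a static map from a front-end port on the load balancer's public IP to one specific backend VM and port. The IPs and backend ports below are made up.

    # Illustrative mapping that an inbound NAT rule expresses.
    nat_rules = {
        500: ("10.0.0.4", 3389),   # frontend port 500 -> first VM (hypothetical RDP target)
        501: ("10.0.0.5", 3389),   # frontend port 501 -> second VM
    }

    def forward(frontend_port: int) -> tuple[str, int]:
        """Resolve the backend target for traffic arriving on a front-end port."""
        if frontend_port not in nat_rules:
            raise KeyError(f"no inbound NAT rule for port {frontend_port}")
        return nat_rules[frontend_port]

    print(forward(500))   # ('10.0.0.4', 3389)
    print(forward(501))   # ('10.0.0.5', 3389)

Notice that each front-end port resolves to exactly one VM, which is what makes the single-VM variant of this rule a potential single point of failure, as discussed next.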
So the load balancer receives traffic on its front-end IP address and a port number, and based on the inbound NAT rules we have configured it forwards the traffic to a specific virtual machine, as we have already seen in the previous video. You might have noticed that there is something fragile in this design: if that first virtual machine becomes completely unavailable for any reason, all traffic on port 500 goes nowhere, because there are no other virtual machines to pick up the incoming traffic on port 500; the target virtual machine is simply down. This is one consideration to keep in mind when you target a single virtual machine with an inbound NAT rule. As a way to overcome this you can use the other option, where, as you remember, you specify the start of the front-end port range and the maximum number of instances in the backend pool, and based on that the port allocation for new virtual machines added to the pool happens automatically. So the main difference between the two options is that in the first scenario we explicitly define each port on the load balancer and where we want to direct the traffic hitting that port, whereas with the backend pool option we have a bit more flexibility: we define the starting port number and the maximum number of instances, and based on the scaling activity in the backend pool the port allocation happens automatically, with the inbound NAT rule registering or de-registering virtual machines as they are added to or removed from the backend pool. This is very important to keep in mind when you are architecting or designing a solution using inbound NAT rules, because you want to make sure you are not introducing a single point of failure for incoming traffic hitting a specific port. That's all I have for you in this video, thanks for watching and I will see you in the next video.

Hi everyone, in this video we are going to talk about high availability ports in the Azure load balancer. Let's head to load balancers and open our public load balancer: if I go to load balancing rules and add a new rule, as you can see, high availability ports are not available on the public load balancer, because this feature is only available on the internal load balancer. So here is an internal load balancer I have created; if I browse to its load balancing rules and add a new rule, you should be able to see HA ports available for you to use. HA ports provide an easy way to load balance all flows on all ports arriving at your internal load balancer, and the distribution decision is made per flow according to a combination of five things: source IP address, source port, destination IP address, destination port and protocol. HA ports are used in critical situations when you want to ensure high availability or provide high scalability for a network virtual appliance inside your virtual network, and also when you have a large number of ports that must be load balanced.
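Here is an illustrative Python sketch of that per-flow, five-tuple distribution idea. It is not Azure's actual hashing algorithm, just a demonstration that packets sharing the same five-tuple always land on the same backend while different flows spread across the pool; the addresses and ports are made up. Session persistence, covered earlier, effectively narrows this hash to a two-tuple (client IP) or three-tuple (client IP and protocol).

    import hashlib

    backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]

    def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol):
        flow = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{protocol}".encode()
        digest = hashlib.sha256(flow).digest()
        index = int.from_bytes(digest[:4], "big") % len(backends)
        return backends[index]

    # Same flow -> same backend; a different source port is a different flow,
    # so it may be sent to a different backend instance.
    print(pick_backend("203.0.113.7", 50123, "10.0.0.100", 443, "TCP"))
    print(pick_backend("203.0.113.7", 50124, "10.0.0.100", 443, "TCP"))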
The way to configure it is pretty easy: you just tick the high availability ports checkbox, and by checking it you have enabled HA ports on your internal load balancer. I'm not going to jump into the networking details of HA ports because that is a whole topic we could cover in a different video, but that's all you need to know about them: they are only available on the internal load balancer, they let you load balance all flows on all ports coming into your internal load balancer, and they are extremely useful for critical scenarios that require high availability for a network appliance inside your virtual network, or when you have a large number of ports to load balance. That's all I want you to know about high availability ports, thanks for watching and I will see you in the next video.

Hi everyone, in this video we are going to see how we can enable multiple front-end IP addresses on our Azure load balancer. Let's go to load balancers, open our public load balancer, go to load balancing rules and add a new rule. In here we specify information such as the front-end IP address, the backend pool, the front-end port and the backend port, and there is one rule we need to respect at all times: the combination of front-end IP address, port and protocol has to be unique across all load balancing rules. Let's go to the documentation to understand this in more detail. Assume we have these four load balancing rules: the first three use the same front-end IP address and the fourth uses a different one, so the fourth is fine because it doesn't conflict with any other rule. Of the top three, the first and second use the same protocol but different ports, so they are fine as well, and the third rule reuses port 80 but is still fine because it uses the UDP protocol rather than TCP. So the main thing to keep in mind when defining load balancing rules is that the combination of front-end IP address, port and protocol must be unique across all the rules.
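Here is a small illustrative sketch of that uniqueness constraint, not the portal's actual validation code. The IP addresses are made up; the four entries mirror the documentation example just discussed.

    # Every load balancing rule must use a distinct
    # (front-end IP, front-end port, protocol) combination.
    rules = [
        ("52.0.0.1", 80,  "TCP"),
        ("52.0.0.1", 443, "TCP"),
        ("52.0.0.1", 80,  "UDP"),   # same IP and port as the first rule, but UDP -> allowed
        ("52.0.0.2", 80,  "TCP"),   # different front-end IP -> allowed
    ]

    def validate(rules):
        seen = set()
        for frontend_ip, port, protocol in rules:
            key = (frontend_ip, port, protocol)
            if key in seen:
                raise ValueError(f"duplicate front-end combination: {key}")
            seen.add(key)
        return True

    print(validate(rules))                          # True
    # validate(rules + [("52.0.0.1", 80, "TCP")])   # would raise ValueError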
Now let's see how we implement this. Assume we have two front-end IP addresses on our load balancer and we want to spread traffic to two VMs: public IP address 1 is configured for TCP, sending traffic on port 80, and the same goes for IP address 2. When we map these front-end IPs to the backend pool, we configure front-end IP address 1 to direct traffic to backend instance IP address 1, listening on port 80, and we send front-end IP address 2 to backend instance IP address 2 on port 80 as well; that is no problem, because these are two completely separate mappings. But if we try to configure front-end IP address 2 to also send traffic to backend instance IP address 1 on port 80, it won't work; the portal stops us, because without floating IP a given backend instance IP and backend port combination can only be referenced by one load balancing rule. If instead we configure front-end IP address 2 to send traffic to backend instance IP address 1 on a different backend port, in this case 81, it is accepted without any issue, and similarly if we send it to backend instance IP address 2 on port 81 the portal accepts it as well. This is the final result we see in the portal: the front-end IP address, the protocol and port number, and then the destination IP and port. Now, that was the scenario where we do not allow backend port reuse, but what if we do want to allow it? The other type of rule we can create allows backend port reuse by using floating IP. First, let's talk a little bit about why we would want backend port reuse in our rules. There are a few reasons: you want clustering for high availability, you want to use it with a network virtual appliance, or you want to expose multiple TLS endpoints without re-encryption. These are the scenarios where you would consider backend port reuse by enabling floating IP on your load balancing rule. If you don't remember where floating IP is, go back to the load balancing rule and scroll all the way down; you should see the floating IP checkbox, and you enable it simply by ticking it. Now back to the portal: as you can see, we have two front-end IP addresses configured for the load balancer and we are sending traffic to different VMs on the same ports without any issue, but in order to do this we need to create additional interfaces on the backend VMs. The VM originally comes with its backend IP address on the NIC, but we also create an additional loopback interface within the operating system, configured with the front-end (public) IP address of the load balancer, and we are going to see this in a bit. Now let's look at this table: we have front-end IP address 1 and front-end IP address 2, both on TCP, both sending traffic to port 80.
When we map them with floating IP enabled, front-end IP address 1 sends the traffic to a destination that is front-end IP address 1 itself on port 80, which already exists as a loopback interface on both VM1 and VM2, and similarly for front-end IP address 2. This is the end result we see in the portal: the front-end IP address, the protocol, and a destination that in this case is the same as the front-end IP address, because we have already configured the loopback interface with the front-end IP address on both VMs. And this is exactly what floating IP means: it allows you to reuse backend ports across your load balancing rules. Here is another diagram that explains how things look before and after floating IP: before floating IP, we receive the traffic through the front-end IP address of the load balancer and it gets directed to the destination IP addresses of the VMs in the backend pool; after we enable floating IP we still receive the traffic through the front-end IP address, but it gets directed to the VIP, which is the same front-end IP address, because we have built a loopback interface on these two VMs. How do we create that loopback interface? This guide explains how to create one on the VMs, whether you are using Windows Server or Linux. While we are here, there is a limitation worth knowing when it comes to floating IP: you cannot use floating IP on secondary IP configurations for load balancing scenarios. That's all I want to share with you about floating IP and multiple front ends for the Azure load balancer, thanks for watching and I will see you in the next video.

Hi everyone, in this video we are going to talk about TCP reset in load balancers. Let's go to load balancers, browse to load balancing rules and add a new rule; you will see an option to enable TCP reset, and it's not restricted to load balancing rules, we can enable TCP reset on inbound NAT rules and on outbound rules as well, so for any rule in the load balancer we can enable TCP reset, and it's tightly linked with the idle timeout that we configure in minutes here. So what's the benefit of having TCP reset enabled on a load balancing rule? It gives you more predictable application behavior: once the idle timeout is reached, the load balancer sends a bidirectional TCP reset so it can kill the idle session and close the connections belonging to it, and any future packets sent on the same session will fail, because when the TCP reset is sent to both client and server, both close the corresponding socket immediately. Of course, a new socket can be allocated and a new connection established if new requests come through, but if no more traffic is flowing, the TCP reset simply drops the current session and closes the corresponding sockets immediately. It's a good way to preserve resources on the load balancer, the client and the server by killing sessions that are no longer in use. That's all you need to know about TCP reset, thanks for watching and I will see you in the next video.
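For readers who want to see the difference between a graceful close and the abortive close that a TCP reset represents, here is an illustrative Python sketch. It is not what Azure does internally; it simply shows the standard socket-level trick where closing with SO_LINGER set to zero makes the operating system emit a RST instead of a FIN. The 'ii' packing matches struct linger on Linux, and the usage at the bottom is hypothetical.

    import socket
    import struct

    def close_with_reset(sock: socket.socket) -> None:
        """Abortively close a TCP socket so the peer receives a RST."""
        # l_onoff=1, l_linger=0 -> discard unsent data and send RST on close()
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
        sock.close()

    # Hypothetical usage: after an idle period, reset the connection instead of
    # leaving the peer with a half-open session it would only discover later.
    # conn = socket.create_connection(("10.0.0.4", 80))
    # ... connection goes idle ...
    # close_with_reset(conn)

That "peer finds out immediately" behavior is the predictability benefit the video describes: without the reset, the other side only learns the session is gone when its next send eventually times out.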
Hi everyone, in this section we are going to talk about the different outbound connectivity options we can configure with our Azure load balancer. We will see how we can enable outbound connectivity through the load balancer itself, through a NAT gateway, or through the public IP address of a VM, and we are also going to look at how we can implement egress-only traffic using the load balancer. Let's get started and jump straight into it.

Hi everyone, in this video we are going to talk about the different options we can use to provide outbound connectivity with an Azure load balancer, and this table summarizes them well. The first option is to use the front-end IP address of our load balancer to provide outbound connectivity to the internet, but there is an issue with that approach: as you can see here it is static, which means it won't bring much scalability to the solution, and therefore it isn't recommended for production workloads, although we can still use it. The second option is to associate a NAT gateway with the subnet; this is dynamic and explicit, and it is actually the recommended way to provide outbound connectivity. We create a NAT gateway, as we have seen in previous videos, and associate it with the subnet where our backend instances live, and by doing this we get a lot of flexibility to add more instances to that subnet. As we said before, and will say again, the best way to provide outbound connectivity to the public internet for instances behind an Azure load balancer is to associate a NAT gateway with the subnet where the backend instances exist. The third option is to assign a public IP address to the virtual machine directly; this one is static and doesn't give much flexibility to scale out to multiple instances, because we need one public IP address per VM, although it can still be used for production workloads. The last option is the default outbound access at the VM level, which is implicit, and it is not recommended for production workloads. Looking at this diagram, it shows some of the options we just talked about: the first is to assign a public IP address to the virtual machine directly, which won't provide much scalability as the workload grows; the second is to use the front-end IP address of the Azure load balancer to provide outbound connectivity to the internet, so all the instances in the backend pool use the public IP address of the load balancer to access the internet, but with this approach we need to do the port allocation manually up front; and the last option is to associate a NAT gateway at the subnet level where the backend instances exist, in which case all of these VMs use the NAT gateway to access the internet, completely separately from the front-end IP address of the Azure load balancer. Now let's dive into these options one by one, and the first is using the front end of the Azure load balancer to provide outbound connectivity to the internet.
To do this we need to create outbound rules at the load balancer level, which use SNAT ports; SNAT stands for Source Network Address Translation. As we highlighted before, we can use the SNAT port approach, however it's not dynamic and it won't let us scale as the workload increases, so we have to specify the port allocation manually at the load balancer level to allow outbound connectivity using the SNAT ports of the load balancer. The second approach is to use a NAT gateway to provide outbound connectivity to the internet; it's a flow completely separate from the public IP address of the load balancer, the NAT gateway only handles outbound connectivity, and again it's the best option for your environment: a NAT gateway is scalable, reliable, and doesn't have the same SNAT port exhaustion issues. The third option is to assign a public IP address to each of these VMs separately, and as you can see this is not a scalable approach, because we have to associate an IP address with every VM in our environment; and if you associate a public IP address with a VM it doesn't matter whether the VM is behind the load balancer or not, because the traffic flows are different: by associating a public IP address with the VM we are providing a separate path for the outbound traffic from the VM to the public internet, while the inbound traffic to the VM still comes in through the load balancer. The last option is to use the default outbound access of the VM to reach the public internet; in this scenario the VM has no public IP address associated with it, the VM sits behind a load balancer that doesn't have any outbound rules, the VM is not part of a virtual machine scale set, and the VM is deployed in a subnet that doesn't have a NAT gateway associated at the subnet level; when all of these conditions are met, the VM gets the default outbound access, which some people call implicit outbound access, to the public internet. Now let's get back to the documentation and talk about SNAT ports at a high level. Every IP address has roughly 65,000 ports, and each port can be used either as a listener to accept an inbound connection or for an outbound connection; the outbound ones are sometimes called ephemeral ports, the temporary port allocations the load balancer uses to allow outbound connections to the internet. You might have already seen the problem: we are using the same bucket of ports per IP address for both inbound and outbound traffic through our load balancer, and in situations where very heavy traffic runs through the load balancer you might hit SNAT port exhaustion, where there are no more SNAT ports to hand out for outbound connectivity, and then it becomes a problem. You might say, okay, let's add more front-end IP addresses so we have a wider range of ports to use, and this could be an option; however, if we look at this example, let's assume we have added a second front-end IP address to give our load balancer more ports, and we have two backend instances drawing SNAT ports from them.
What happens is that both backend instances consume all the ports configured on the first front-end IP address until all of those ports are exhausted; the backend instances will not balance their port usage across the two front-end IP addresses. And this is the default port allocation table, which gives the number of SNAT ports per instance based on the backend pool size, that is, the number of instances in the pool. Now let's talk about port exhaustion, and you probably already know what it means: we don't have enough ports to allocate to provide outbound connectivity to the internet. One of the reasons we might hit it is using the front-end IP address for both inbound and outbound traffic, and especially with higher workloads you increase the chance of SNAT port exhaustion happening in your environment; the way to avoid it is to use a NAT gateway at the subnet level instead of relying on the load balancer for outbound connectivity. Finally, there are some constraints you need to be aware of when it comes to SNAT port exhaustion, and the most important one is this: hitting SNAT port exhaustion doesn't necessarily mean the exhaustion happened at the load balancer level. Sometimes the load balancer still has SNAT ports available, but a backend instance has run out of its allocated ports, so that instance cannot establish any new outbound connections and we still run into SNAT port exhaustion. The main takeaway is that when you hit SNAT port exhaustion, try to analyze where exactly it happened, whether at the load balancer level or at the backend instance level. That's all I want you to know about outbound connectivity for Azure load balancer and SNAT ports, thanks for watching and I will see you in the next video.
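As a quick reference for the default port allocation just mentioned, here is an illustrative Python lookup, not an official API. The tier values reflect the default SNAT allocation table as documented at the time of writing, so treat them as an assumption and verify against the current Azure documentation.

    # Default SNAT ports per backend instance when outbound ports are not
    # configured manually, keyed by backend pool size (documented tiers).
    DEFAULT_SNAT_TIERS = [
        (50, 1024), (100, 512), (200, 256), (400, 128), (800, 64), (1000, 32),
    ]

    def default_snat_ports(pool_size: int) -> int:
        """Default SNAT ports each backend instance receives for a given pool size."""
        for max_instances, ports in DEFAULT_SNAT_TIERS:
            if pool_size <= max_instances:
                return ports
        raise ValueError("pool sizes above 1,000 instances are out of range here")

    for size in (2, 75, 300, 900):
        print(size, "instances ->", default_snat_ports(size), "ports each")

The shrinking per-instance allocation as the pool grows is exactly why the default allocation plus heavy outbound traffic is the classic recipe for SNAT exhaustion.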
Hi everyone, in this video we are going to see how we can implement egress-only, or outbound-only, connectivity using load balancers. In this scenario we are going to use two load balancers: an internal load balancer configured to provide inbound connectivity to the VM instances sitting in the backend pool, and a public load balancer to allow outbound connectivity from those VM instances to the outside world using the front-end IP address of the public load balancer. We are not going to use the default, recommended approach of a NAT gateway on the subnet to provide the outbound connectivity for the backend pool instances; instead we are going to use an outbound rule at the load balancer level, just to show you the different options you have in the portal, and then you are free to use whatever works for you. Now let's go to the portal and see how we can implement this. Let's go to virtual networks and create a new one: create a new resource group called RG-Sydney, call the VNet vnet-sydney as well, put it in the Australia East region, go to the security tab, enable a Bastion host with a Bastion subnet and the address space 10.0.1.0/24, create a public IP address for the Bastion called public-bastion-ip, and create the VNet. All right, my VNet has been created, so let's create our load balancers. Let's start with the internal load balancer: put it in the Sydney resource group, call it internal-lb, Australia East, Standard, Internal, Regional, go to the front-end IP configuration, call it internal-lb-ip, put it in the default subnet and add the front-end IP, then go to the next step, add a new backend pool called backend, leave it empty, and create the internal load balancer. Now let's create the public load balancer: again put it in the Sydney resource group, call it public-lb, Australia East, Standard, Public, Regional, go to the front-end IP configuration, call it public-lb-ip, create a new public IP address called public-lb-public-ip, add the front-end IP address, create an empty backend pool called backend, and create the public load balancer. Now let's create a virtual machine: put it in the Sydney resource group, call it vm01, and choose a username and password for the VM. Since I have a Bastion host I don't need to keep the RDP port open, so I'm going to disable it. In networking, let's not create a public IP address for the VM, enable the advanced network security group configuration, add an inbound rule allowing port 80 with priority 100, then go all the way down and create the virtual machine. So what we have done so far is set up the foundation: we have created a VM, which doesn't sit in any backend pool yet, and we have created an internal load balancer and a public load balancer, and we haven't created any inbound or outbound rules yet. The next step is to put our virtual machine in the backend pool of these two load balancers, so let's do that: go to load balancers, start with the internal load balancer, open the backend pool called backend, add an IP configuration, select the VM we just created and save the changes; then do exactly the same for the public load balancer: go to backend pools, backend, select the VNet we just created (I'm not sure why I'm seeing two entries, anyway), add a new IP configuration, select the VM and save the changes. Now that the VM has been created, let's go to it and connect using Bastion with the username and password I set when creating the VM. To reiterate, we have put our VM in the backend pools of both the public load balancer and the internal load balancer, and while creating the VM we added an inbound rule on the network security group to allow inbound traffic coming through the internal load balancer to the VM; we haven't created an inbound rule on the internal load balancer itself yet, only on the network security group. Now, inside the virtual machine, let's browse to the Local Server tab in Server Manager.
Let's disable the IE Enhanced Security Configuration, open Internet Explorer and browse to whatsmyip.org to see what we get. Of course, we don't get anything, because we haven't created any outbound rule on the public load balancer yet; there is no way out for the VM to reach the public internet. So let's get back to our public load balancer and see how to create an outbound rule that gives our VM outbound access to the internet. Go to the outbound rules, add a new rule, call it outbound-rule, select the public IP address of the public load balancer, select our backend pool, and choose the number of ports per instance, limiting it to 1,000; you remember these are the options we talked about when we covered SNAT ports and SNAT port exhaustion. Now let's add this outbound rule; it will allow our VM to have outbound connectivity to the internet using the front-end IP address of the public load balancer. Let's get back to our VM and refresh the page: as you can see, we are now getting a response, because the VM can access the internet through the public IP address of the public load balancer now that we have configured an outbound rule on it. Again, this is not the recommended way to provide outbound connectivity to your backend pool instances; as we said in a previous lecture, you might run into SNAT port exhaustion if you have really high traffic and many VMs sitting in the backend pool, and a better way is to create a NAT gateway on the subnet where the VM exists. That's all I want to show you in this video, so let's clean things up: go to resource groups, RG-Sydney, and delete the resource group. Thanks for watching and I will see you in the next video.
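If you would rather test the outbound path from inside the VM without a browser, here is a small illustrative alternative to browsing whatsmyip.org. It is not from the video; the IP-echo service URL is an assumption and any similar echo service would work. The call only succeeds once some outbound path exists, whether an outbound rule, a NAT gateway, or an instance-level public IP.

    import urllib.request

    def observed_public_ip(timeout_s: float = 5.0) -> str:
        """Ask an IP-echo service which public address our outbound traffic uses."""
        with urllib.request.urlopen("https://api.ipify.org", timeout=timeout_s) as resp:
            return resp.read().decode().strip()

    # Before the outbound rule exists this times out; afterwards it should print
    # the public IP of the load balancer front end being used for SNAT.
    print(observed_public_ip())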
Hi everyone, in this section we are going to see how we can use Azure monitoring to understand the health of our Azure load balancer. It's a bit different from configuring health probes to know the health status of the backend instances in the backend pools; here we are looking at the Azure load balancer at a high level and we want to see how healthy the load balancer itself is. We are going to do this through Insights, metrics, setting up diagnostic settings and setting up some alerts, so these are the things we are going to cover in this section. Let's get started.

Hi everyone, in this video we are going to talk about Azure load balancer Insights. Let's go to load balancers; I'm going to create a dummy load balancer, so you don't have to follow along, just watch. Let's call the resource group rg-lb, call the load balancer lb, Australia East, Standard, Public, Regional, create a front-end IP configuration called ip with a new public IP address called pip, add an empty backend pool called backend, and create the load balancer. All right, my load balancer has been created, so let's browse to it, and if we scroll down to the monitoring section we can see Insights. As you can see, it shows you a graph of the different IP addresses you have configured for your load balancer, the different inbound and outbound rules, the backend pools, and the backend instances or VMs; all of these things are visible in this diagram. Because I haven't created any backend instances we can't see any at the moment, but it should look similar to this: your load balancer, its front-end IP addresses, inbound and outbound rules, then the backend pools you have configured, and then the VM instances. If you have a complex environment with many backend instances, many backend pools, or a lot of inbound and outbound rules configured on your load balancer, it's really easy and quick to understand it when you have a graphical representation of your Azure load balancer configuration. Back in the portal we can also view detailed metrics about our load balancer: frontend and backend availability, data throughput, flow distribution, connection monitors and metric definitions, all available in the detailed metrics tab. That's all I wanted you to know about Azure load balancer Insights, thanks for watching and I will see you in the next video.

Hi everyone, in this video we are going to see how to set up diagnostic settings for our Azure load balancer. Go to the load balancer I have created, scroll down to the monitoring section, go to diagnostic settings and add a diagnostic setting; let's call it diagnostic-settings. Then you specify what kind of data you want to push to the log destination: let's choose all metrics. For the destination details we can send our diagnostic data, in this case all of the load balancer metrics, to a Log Analytics workspace, in which case we need to specify the workspace to push the metrics to, or to a storage account, assuming we have one in our subscription, or to an event hub if we have one in our environment, or to a partner solution. We can send the metrics to all of these destinations at the same time, but that will incur additional cost you need to be aware of; as a minimum, you should push your metrics to at least one of these destinations. That's all I have for you on diagnostic settings, thanks for watching and I will see you in the next video.

Hi everyone, in this video we are going to talk about load balancer metrics. If we go to our load balancer and scroll down to metrics, you can see the different metrics you can use to get more data about your Azure load balancer. We are not going to go through them one by one, but at a very high level they give you an overview of the SNAT port situation, in terms of how many SNAT ports are allocated, how many SNAT ports are used, and what the SNAT connection count is; you can also learn the health status of the backend instances by observing the health probe status metric of your Azure load balancer. And if we select any of these metrics, for each of them you can choose the aggregation, whether count, sum, average, min or max. That's all I have for you on Azure load balancer metrics, thanks for watching and I will see you in the next video.
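If you prefer to pull these metrics programmatically rather than through the portal, here is a hedged Python sketch using the azure-monitor-query and azure-identity packages. The resource ID is a placeholder, and the metric name "UsedSnatPorts" is an assumption based on how the load balancer metrics are commonly named; check the metric definitions tab of your load balancer for the exact names and supported aggregations.

    # Hedged sketch: query a load balancer metric with the azure-monitor-query SDK.
    from datetime import timedelta
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import MetricsQueryClient, MetricAggregationType

    resource_id = (
        "/subscriptions/<sub-id>/resourceGroups/<rg>"
        "/providers/Microsoft.Network/loadBalancers/<lb-name>"
    )

    client = MetricsQueryClient(DefaultAzureCredential())
    result = client.query_resource(
        resource_id,
        metric_names=["UsedSnatPorts"],          # assumed metric name, verify in the portal
        timespan=timedelta(hours=1),
        granularity=timedelta(minutes=5),
        aggregations=[MetricAggregationType.AVERAGE],
    )

    for metric in result.metrics:
        for series in metric.timeseries:
            for point in series.data:
                print(point.timestamp, point.average)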
Hi everyone, in this video we are going to talk about load balancer alerts. Alerts go hand in hand with the metrics: when you create an alert you link it, or associate it, with a specific metric, and then you configure the alert based on a certain threshold, either above or below; once the threshold is crossed in either direction you get alerted and notified about that metric. Let's say we want to watch the SNAT connection count metric. From here you can create a new alert rule, and you can also browse to the Alerts tab and pick the metric from there; let's do it and click create an alert rule. The signal name here is going to be the SNAT connection count, and we are going to make the threshold static, based on the total number of SNAT connections on our load balancer: if it's greater than 65,000, for example, we should get an alert. We could also scope this with one or more dimensions to get a more detailed view, for example using the front-end IP address of our load balancer or the connection state, but I'm not going to specify any dimension at the moment. We are going to evaluate this criterion every one minute and look back over a 30-minute window, for example, and once the criterion is met we get a notification that the SNAT connection count has exceeded the threshold value we configured. Then we specify the action: once the alert is triggered it is assigned to a specific action group that gets notified, and we specify some details, such as the message we want the action group to receive, let's say "SNAT connection count exceeds the normal threshold", and we can put more information in the rule description. Then we create the alert rule, and every time the SNAT connection count exceeds 65,000, as configured, our action group gets notified about this condition. That's all I wanted you to know about load balancer alerts, thanks for watching and I will see you in the next video.
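Here is an illustrative Python sketch of the alert logic just described, not Azure Monitor's implementation: at every evaluation, look back over a window of metric samples and fire if the aggregated value crosses a static threshold. The sample values are made up; the 65,000 threshold and 30-minute window come from the example above.

    from datetime import datetime, timedelta

    THRESHOLD = 65_000
    WINDOW = timedelta(minutes=30)

    def should_alert(samples, now, threshold=THRESHOLD, window=WINDOW):
        """samples: list of (timestamp, snat_connection_count) tuples."""
        recent = [value for ts, value in samples if now - ts <= window]
        if not recent:
            return False
        return max(recent) > threshold     # static "greater than" condition

    now = datetime(2023, 4, 14, 12, 0)
    samples = [
        (now - timedelta(minutes=25), 40_000),
        (now - timedelta(minutes=10), 70_000),   # spike above the threshold
        (now - timedelta(minutes=1),  30_000),
    ]
    print(should_alert(samples, now))   # True -> the action group would be notified

In the real service the evaluation frequency (every minute here) and the aggregation applied over the window are both configurable on the alert rule.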
Hi everyone, in this video we are going to talk about the logs available to us in our Azure load balancer. Let's go to the load balancer, scroll down to the monitoring section and go to logs; here you can see some pre-created queries you can use straight away to get more data and insights about your Azure load balancer. That's all I wanted to show for the monitoring section; so far we have covered the monitoring story at a high level. Now let's clean things up: go to resource groups, open rg-lb and delete the resource group. Thanks for watching and I will see you in the next video.

Now we are coming to the end of our course. Over the last few hours we have covered everything you need to know about Azure load balancer, and I hope you now feel more confident using it in different situations and scenarios. Please let me know what you think about the course and what would be the best way to help you understand different Azure services, and until next time, thanks so much for watching.
Info
Channel: Hussein Awad
Views: 2,982
Id: Yp_5nTrDLRk
Length: 151min 14sec (9074 seconds)
Published: Fri Apr 14 2023