Azure Application Gateway Deep Dive | Step by Step Explained

Captions
Hi everyone, good evening. Today we are going to talk about Application Gateway. In the previous session we discussed the load balancer; in this session we discuss Application Gateway. As we discussed, in AWS the L4 load balancer is called a Network Load Balancer and the L7 load balancer is called an Application Load Balancer. In the same way, in Azure the L7 service is Application Gateway and the L4 service is the Azure standard load balancer. We already covered the difference: the Azure standard load balancer will take any kind of traffic, TCP or HTTP, but Application Gateway accepts only HTTP traffic. In other words, Application Gateway is dedicated to web servers, while the Azure standard load balancer can front web servers and plain TCP traffic as well.

Now let's go to the Azure portal and the Azure documentation and see how Microsoft defines it. What is Azure Application Gateway? Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (TCP and UDP, as we discussed) and route traffic based on source IP address and port to a destination IP address and port, so the decision is entirely based on IPs and ports. Application Gateway, however, can make routing decisions based on additional attributes of an HTTP request, for example the URI path or host headers. You can route traffic based on the incoming URL: if /images is in the incoming URL, you can route that traffic to a specific set of servers.

To compare: with the traditional standard load balancer, whatever traffic comes in from the internet, it checks which server and which port the request is going to (say a destination of 1.16.1.4 on port 8080), checks the source IP range, and on that basis decides which backend IP and port the request reaches. Application Gateway additionally does path-based routing, which is what it is best known for. Take Myntra as an example: on myntra.com, if I click on Men, the URL becomes /shop/men. When someone accesses that page, the gateway does not look at an IP address at all; it looks at the path that is configured, matches it, and routes the traffic to the specific set of servers behind that path. I will show you how that is configured.

So you can route traffic to a specific set of servers, known as a pool, configured for images, and if /video is in the URL that traffic is routed to another pool optimized for videos. In the Myntra example, Men and Women are different tabs with different paths, /shop/men versus /shop/women. If users access /shop/men, the load balancer is not checking which IP the request goes to; it checks the path and sends the request to the men pool, and /shop/women goes to the women pool.
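As a small illustration (not from the video) of the HTTP attributes an L7 gateway can route on but an L4 load balancer never sees, here is the request path and Host header as sent by curl; the URL is just a placeholder:

```bash
# Print only the request line and Host header that an L7 gateway inspects.
# curl -v writes the outgoing request headers to stderr prefixed with "> ".
curl -v http://www.example.com/shop/men 2>&1 | grep -E '^> (GET|Host)'
# > GET /shop/men HTTP/1.1
# > Host: www.example.com
```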
When does this scenario come up? In the previous load balancer session, the complete application content was inside a single server, and we ran three or four identical servers purely for high availability; every server had the same content. So when requests come in against the IP, the load balancer simply decides, in round-robin fashion, which server gets each request, because all the servers contain the same content.

For Application Gateway the content is not in a single server. For example, when you click Men, all the content behind those options might be on one set of servers; when you click the Women tab, the backend data for that section is on another set; Kids is the same. And each of those sections again has multiple replica servers for high availability: if one of the men servers does not work, another server in that group takes the request, and likewise for women and kids. So if users access myntra.com/shop/men, the gateway checks only the path, not who is trying to access; it checks the domain and path and routes to the men servers. Whoever clicks Women is sent to the women servers, because all of that content lives there, and Kids routes the same way.

Why was this scenario introduced? Basically, you know about monolithic applications and microservice applications. Previously every application was monolithic. What is the difference between monolithic and microservices, does anyone know? Yes: in a monolith, all of the application content is in a single server. Take another example: an application like the Myntra site we are discussing, with its complete content inside one server. For high availability you would run four or so such servers (assume these are virtual machines), but all of the content is on each one. Now, whenever a developer adds a new feature, even a small one, they need to deploy the complete application to the server again and test the whole application just for that small feature.
The other problem: if any part of the site stops working properly, or the server goes down, the whole site is impacted along with it. Because of all these disadvantages of the monolithic approach, the big application was divided into small services, which are called microservices. The Men option is one microservice, the Women option is another, the profile page is another, and so on; assume Myntra is divided into 15 microservices, and each individual microservice is deployed separately. If one microservice has a problem, it impacts only that microservice, not the remaining ones. Each microservice runs on its own, developers keep a separate repository per microservice, make their code changes there, test only that specific code, and deploy only that specific service without touching the rest. A new feature becomes a new service and is deployed without disturbing the existing application. That way you can maintain very high availability, which is why most companies nowadays are migrating their applications from monolithic to microservices: it is much easier to manage.

If that is the scenario, then obviously each option here has its own separate service. The Men option runs as its own service, and these could be containers; I am just mentioning containers, it is not really relevant here, only to show exactly where Application Gateway fits. So here is the main (home) service, here is the men service, and a second copy of the men service for availability, because if something goes wrong with one physical machine, the service is still running on another physical server. This is the scenario where Application Gateway works: it routes your traffic to the different applications. In the documentation diagram you can see an image-server pool, meaning all the images are on those two servers, and all the videos are on another two servers.
If someone accesses www.contoso.com/images, the request is routed to the image pool, and if they access /video, the request is routed to the video pool. This type of routing is known as application layer load balancing; Azure Application Gateway can do URL-based routing. There is more in the documentation, the load balancing solution docs and "How an application gateway works," which we can go through in detail.

This is what we are going to configure now. In the diagram, we configure the Application Gateway with a listener on the front end, and then we create a rule: because it is receiving HTTP traffic, based on this rule it routes the traffic to your backend pool. The backend pool can be virtual machines, a virtual machine scale set, on-premises servers, or Azure App Services. In coming sessions we will talk about Azure App Service; what is Azure App Service? As of now I am manually creating a VM, installing the application inside it, and managing it myself, but Azure App Service is not like that: same purpose, but you do not create a VM, Azure provides the whole service for you. A virtual machine scale set also contains virtual machines, but it is configured with an autoscaling option; that is the difference. Individual virtual machines on their own do not give you high availability. Basically, whenever you are practicing load balancers and Application Gateway you should go with the virtual machine scale set option, because it is a combination of fault tolerance and high availability: whatever comes in at the front end, the Application Gateway or load balancer takes care of distributing for high availability, but if something happens to the VMs themselves, that is where the virtual machine scale set comes in.

So this is what we are going to configure, step by step. We have documentation for it, same as for the load balancer: we will create one resource group, and inside it one VNet with two subnets, or rather three. These VMs do not all need public IPs; only one VM will keep a public IP, we can remove it from the remaining two, and obviously one public IP is required for the Application Gateway. We create an NSG and attach it to all three VMs, and an availability set. As I told you in the load balancer session, we create an availability set because we are not configuring a virtual machine scale set, we are configuring plain virtual machines, and Azure at least expects that your virtual machines are not on the same rack or the same server. That is the minimum mandatory requirement: if you are going with plain virtual machines, create an availability set for them, otherwise the gateway will not accept them; if you create a virtual machine scale set, an availability set is not required. Then we create the three virtual machines. That is the first part of the setup, and as usual, once it is built we install nginx inside the VMs (nginx is what we will treat as our application), and then we go and create the Application Gateway. It is a little similar to the load balancer, but a few things are different inside Application Gateway. I am already connected, so let's use shell.azure.com.
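The exact commands are not visible in the captions, but a minimal Azure CLI sketch of the setup just described (resource names and address ranges are assumptions for illustration, not taken from the video) could look like this:

```bash
# Resource group and VNet with three subnets, one per VM
az group create --name azure-rg --location eastus
az network vnet create --resource-group azure-rg --name app-vnet \
  --address-prefixes 10.10.0.0/16
for i in 1 2 3; do
  az network vnet subnet create --resource-group azure-rg --vnet-name app-vnet \
    --name subnet-$i --address-prefixes 10.10.$i.0/24
done

# NSG with a wide-open allow rule (any source, any destination, any port), as in the demo
az network nsg create --resource-group azure-rg --name web-nsg
az network nsg rule create --resource-group azure-rg --nsg-name web-nsg \
  --name allow-all --priority 100 --access Allow --direction Inbound \
  --protocol '*' --source-address-prefixes '*' --source-port-ranges '*' \
  --destination-address-prefixes '*' --destination-port-ranges '*'

# Availability set, required here because we use plain VMs instead of a scale set
az vm availability-set create --resource-group azure-rg --name web-avset
```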
In Cloud Shell I create the resource group, then the VNet with one subnet, and now I am creating two more subnets inside it, because we need three VMs and each subnet will hold one VM. Then the NSG rule; let me execute these two commands in one shot. The NSG rule allows any source and any destination, and the port is also a wildcard, so it allows everything. Now I am creating the availability set, and once the availability set is created we create the three VMs. Once the VMs are up we install nginx and, as usual, modify the index.html file; but compared to the load balancer demo we do a small tweak when configuring index.html.

A question from the chat: for microservices, should the website be developed in a specific programming language? I am not sure about CMS/WordPress sites, whether those can be converted to microservices, but generally what I have seen converted to microservices are Java Spring Boot applications. Another question about the backend pool: for an on-premises server the gateway does not expect an availability set, because you create a VPN connection and simply add the servers that already exist over there to the backend pool. When you create VMs inside Azure, Azure knows where they were created; for on-premises servers it cannot detect which rack or server they are on, so it just accepts them. Also, a Microsoft challenge is coming up with free exam vouchers; on YouTube there is a channel, Logic Ops, that recently shared a video about how to claim the free voucher, you can follow that.

The VMs are getting created now. What I am going to do is set up these three VMs as home, men, and women; not kids, the third one will be the homepage. So we will have a homepage, men, and women for our site. Let me go back and check whether the VMs have been created; one of them is ready, so let me quickly install nginx on it. I have its public IP; once logged in, run apt update and apt install nginx, which updates the package lists and installs nginx, our application for testing purposes. It is installed, and checking with /etc/init.d/nginx status it is up and running. Now I go to /var/www/html and update the index file: I open index.html and change it so it reads like a homepage, "Welcome to Home," and then restart the service with /etc/init.d/nginx restart. Done.

Let me go back to the other servers; all three servers are created now. I log into the second server, and on this one we do things a little differently compared to the previous one. Log in as root, apt update and apt install nginx. For path-based routing we should not keep the content directly under /var/www/html; it should be in a separate directory, so I cd into /var/www/html. For these two servers, if I just modified the default file under /var/www/html, then every machine would serve the same default path when accessed by IP, and that is not what we want.
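Stepping back for a moment, the VM creation and home-page setup just described could look roughly like this from the CLI; VM names, the image alias (which varies by CLI version), and the page text are assumptions, not the video's exact values:

```bash
# Create the first VM in its subnet, attached to the NSG and availability set
az vm create \
  --resource-group azure-rg \
  --name web-vm-1 \
  --image Ubuntu2204 \
  --vnet-name app-vnet --subnet subnet-1 \
  --nsg web-nsg \
  --availability-set web-avset \
  --admin-username azureuser \
  --generate-ssh-keys
# Repeat with web-vm-2/subnet-2 and web-vm-3/subnet-3

# Then on the first VM, over SSH, set up the home page
sudo apt update && sudo apt install -y nginx
echo "Welcome to Home" | sudo tee /var/www/html/index.html
sudo /etc/init.d/nginx restart   # or: sudo systemctl restart nginx
```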
Back to the second server. For example, if I access just myntra.com, the request goes to the site and serves the default page. But what we are expecting is: only when Men is clicked should the request go to this server, and only when Women is clicked should it go to that server. For that, the content on the server should not be at the default path; you should create an extra directory. So here I create a directory called men, go inside it, and copy the index file into this location; and before that, in the main /var/www/html location I remove the original index.html, so only the men directory remains, and inside it I modify the index file so the page is easy to recognize. If you do not create the directory and just modify the default file, it will not work. Then /etc/init.d/nginx restart.

On the third machine, nginx is also installed, so I go to /var/www/html, run mkdir women, move the index file into the women folder, go into the women folder, open index with vi, and modify it so that we can easily recognize it. Done on all three servers; /etc/init.d/nginx restart.

Now let's test. The first server does not have any path because it is the homepage, so taking its IP I get "welcome to home." Each machine has a different IP; right now I cannot reach all of them through one IP, only the Application Gateway will be able to do that. So I take the second machine's IP, open another tab with /men, and we get the men page; and the third machine's IP with /women gives "welcome to women." Three different IPs, three different pages; all together these are the three we will add behind the Application Gateway. Now we need to go and create the Application Gateway. A question from the chat: yes, Azhar, we have that option at the time of creating VMs; you can attach a script, and once the VM is created it will automatically run whatever you define in that script, install, create, delete, anything (a sketch of this appears at the end of this segment).

So let's go and create the Application Gateway; our setup is ready. Application gateways, Create. Here you see the subscription (free trial) and the resource group, Azure RG, which I select; then whatever name you want to give, and the region stays at the default East US because all of the setup is in East US. For tier we have different tiers: Standard, Standard V2, WAF, and WAF V2; if you select the latest, WAF V2, you get these options. The web application firewall is the more advanced one: for a VM you have an NSG as security, and you have Azure Firewall for your VNet, but what is the security for your applications? For your applications, a web application firewall is the more secure option, and since Application Gateway is specifically for web applications, you can go with the WAF tier. Next it asks whether to enable autoscaling for the Application Gateway itself: if a lot of traffic comes in and the gateway becomes busy, it cannot serve user requests, so the gateway also needs to scale when traffic is high.
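Going back to the question above about attaching a script at VM-creation time: one common way is cloud-init custom data. The file name, contents, and VM names here are hypothetical, just to show how the second (men) VM could have been created with its nginx setup baked in instead of installing manually:

```bash
# Hypothetical cloud-init file: install nginx and lay out the /men content at first boot
cat > cloud-init-men.yaml <<'EOF'
#cloud-config
package_update: true
packages:
  - nginx
runcmd:
  - mkdir -p /var/www/html/men
  - echo "Welcome to Men" > /var/www/html/men/index.html
  - rm -f /var/www/html/index.html
  - systemctl restart nginx
EOF

az vm create \
  --resource-group azure-rg \
  --name web-vm-2 \
  --image Ubuntu2204 \
  --vnet-name app-vnet --subnet subnet-2 \
  --availability-set web-avset \
  --admin-username azureuser --generate-ssh-keys \
  --custom-data cloud-init-men.yaml
```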
One more thing: here we have an option called multi-site. What does multi-site mean? This is an Application Gateway, this is an HTTP listener, this is a rule, and these are the backend services. With a single Application Gateway, multiple sites can work: the gateway can route traffic here for one site and there for another. As of now we created three VMs for one site, but the same Application Gateway can also serve a different website; that is why it is called multi-site, and it can route to more than two sites as well. If one gateway is routing traffic to two different sites, then obviously it can become busy, which is another reason to enable autoscaling for it: you set a minimum of one instance and a maximum instance count. The Application Gateway is itself a virtual appliance, so you decide how many instances can run behind it, say a minimum of one and a maximum of ten; when load increases it scales up towards ten and scales back down again, and you decide those limits. Then availability zones: if you want, you can select all three availability zones so the gateway instances are spread across all of them for more high availability, or you can skip it. Then the HTTP/2 option: HTTP/2 is the newer protocol version with more features; you can enable it if you want, or stick with HTTP/1.1, that is also fine.

If you select the web application firewall tier, it asks you to create a WAF policy: malicious attacks such as SQL injection, cross-site scripting, and other top-10 threats could cause a service outage or data loss and pose a big threat to web application owners. It asks for a policy name, so I give one. Then "Configure virtual network": it asks in which VNet you want to launch this Application Gateway. Previously Application Gateway ran as an independent service and was not part of the VNet where the VMs were created, but now, because of this firewall-level security, it asks for a VNet. If I select the VNet we created for our VMs, it tells me the subnet must contain only the Application Gateway. This VNet has three subnets and we created one VM in each, but whatever subnet you allocate for the Application Gateway must not contain any VMs; it should be empty and dedicated to the gateway. So I need to go to Virtual networks, open the VNet, go to Subnets (one, two, three already exist) and create a fourth, named something like appGatewaySubnet, with the address range 10.10.4.0; the remaining settings you can ignore, then save. Now I go back and refresh the page: resource group Azure RG, the gateway name, the region, WAF V2 tier, and for autoscaling you can just say No if it is not required, or leave Yes with the default of two instances, that is also okay. In the virtual network section it now picks up the dedicated subnet by default. Next we create the front end, and it asks for a public IP: yes, you want a public IP; a private-only front end will not work here, so obviously we go with a public IP.
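For reference, the dedicated, empty gateway subnet added a moment ago could also be created from the CLI; the /24 prefix length and the subnet name are assumptions (the video only shows 10.10.4.0):

```bash
# The gateway needs its own subnet with no VMs in it
az network vnet subnet create \
  --resource-group azure-rg \
  --vnet-name app-vnet \
  --name appgw-subnet \
  --address-prefixes 10.10.4.0/24
```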
But before creating that front-end public IP, I need to remove the public IP from one of the VMs, because we have only three public IPs available and cannot create a fourth. I disassociate the public IP (yes, dissociate), and once it is disassociated we delete the IP as well. So from machine 3 I disassociated and removed the public IP, and now I can create a new one here: I provide a name, and by default it is a static IP (a CLI sketch of this appears after this segment). Next, Backends: here we add a backend pool and give it a name; "add backend pool without targets" means you can add the pool empty, or you can add the targets right away. If you add targets, you see the backend target types: virtual machines, App Services, virtual machine scale sets, or IP addresses. I select virtual machines, add the VMs here, and go to Next: Configuration.

At this point we have a backend pool and a front end, but no routing, meaning the communication between the two is not established; there has to be a mapping between them, so we configure the routing rules. Click Add routing rule: give a rule name, and there is a newer option, rule priority, which defines the order in which the rules are processed; the value should be between 1 and 20000, where 1 is the highest priority. This priority field is new; previously it was not there, presumably so that when you create more rules you can decide which rule takes precedence. I give it 1. Then, as I mentioned, you configure the listener: listener name (I use "valaxy listener"), front-end IP Public by default because we configured our front end as public, and protocol HTTP. We did not configure HTTPS; otherwise you would need to provide a certificate for your site here. Since we do not have an HTTPS site we go with HTTP only. Under additional settings, the listener type is Basic or Multi site; multi-site is the option I was describing earlier, where multiple sites share one gateway, but we are going with Basic. Error page URL: if something does not come back as expected and you want a customized error page instead of the default errors, you would configure it here.

Then you configure the backend targets tab: target type Backend pool, and select the backend pool we created; I name it something like "home pool" for the homepage (note the naming rules, it must begin and end with a letter or number). For this backend pool I also need to select the backend settings, which we configure here: the settings for HTTP, default port 80, and additional settings. This is the rule we are configuring now: whenever traffic comes in through the HTTP listener, it is handled according to this rule. Cookie-based affinity means the Application Gateway can use cookies to keep a user session on the same server, the same idea as the session affinity we discussed yesterday.
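The public-IP juggling above could be done from the CLI roughly as follows. The NIC, ip-config, and public IP names shown are the defaults `az vm create` would have generated and the gateway IP name is made up, so verify the real names (for example with `az network nic list`) before running anything like this:

```bash
# Detach and delete the third VM's public IP (free-trial quota workaround from the demo)
az network nic ip-config update \
  --resource-group azure-rg \
  --nic-name web-vm-3VMNic \
  --name ipconfigweb-vm-3 \
  --remove publicIpAddress
az network public-ip delete --resource-group azure-rg --name web-vm-3PublicIP

# Application Gateway v2 SKUs expect a Standard, statically allocated public IP
az network public-ip create \
  --resource-group azure-rg \
  --name appgw-pip \
  --sku Standard \
  --allocation-method Static
```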
Back to cookie-based affinity: when we access any website locally, cookies get stored and cached, and based on them the next time we access the page it loads quickly. In the same way, when a user accesses a specific site, that server session gets a cookie, and when the user comes back, the gateway uses that cookie to route them to the same server again for consistent, fast behaviour; that is why you can configure cookie-based affinity. Then connection draining: connection draining helps gracefully remove backend pool members during planned service updates. If a maintenance activity is going on for your backend VMs and some user connections are already established, the gateway waits for some time for those sessions to complete and does not allow any new user sessions to reach that server. As the portal text says: when this feature is enabled, deregistering instances of a backend pool won't receive new requests; this applies to backend instances that are explicitly removed from the backend pool by an API call as well as instances reported unhealthy. So when connection draining is configured, a server going into maintenance mode stops taking new requests from users and the gateway routes them to the remaining servers. Then request timeout: the request timeout is the number of seconds that the Application Gateway will wait to receive a response from the backend pool before it returns a connection timeout error. With the default of 20 seconds, if the backend pool is not responding, the gateway waits 20 seconds and then gives the front-end user a connection timeout error.

Then override backend path: this feature allows you to override the path in the URL so that requests for a specific path can be routed to another path. For example, if you intend to route requests for /images to the default, enter / in this text box and then attach this setting to the rule associated with /images. In our terms: suppose users would normally access www.myntra.com/kids, but you want to make it something shorter; you can write the override path here, and if you then attach this rule to that particular backend pool, it will work that way. Whatever HTTP settings rule we are configuring now, describing how HTTP should behave when traffic comes in, we then attach to our backend pools. However many backend pools you have, you can attach it to each of them; if the same rule applies to all of the backend pools, you can attach the single rule to all three, or else each backend pool can have its own separate rule, create one more HTTP rule and attach this one here and that one there, depending on the requirement. As of now, if the rule were for the kids pool and I mentioned just /k in it and attached it to that pool, users would not need to type myntra.com/kids; myntra.com/k would be enough.
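If you were doing the equivalent of this portal tab from the CLI against an existing gateway, the backend HTTP settings discussed above (cookie-based affinity, connection draining, the 20-second request timeout, and an optional override backend path) might look like this; the gateway and settings names are assumptions:

```bash
az network application-gateway http-settings create \
  --resource-group azure-rg \
  --gateway-name valaxy-appgw \
  --name appgw-http-settings \
  --port 80 \
  --protocol Http \
  --cookie-based-affinity Enabled \
  --timeout 20 \
  --connection-draining-timeout 60 \
  --path /          # override backend path: forward requests to the backend at /
```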
That is the override backend path: whatever path you want to override, you can do it that way. Then host name: by default the Application Gateway sends the same host header to the backend as it receives, and we are not configuring anything for the host name. So this is the backend setting we created; Add.

Now, for the backend pool we selected first, we are going to add multiple path targets. Here I enter the path /men/* and a target name like "men pool," and I apply the rule I just created, attaching it to this target. But the backend target should show the right pool, and I made a mistake while adding the backend pool: I did not add all three servers, I added only one, which is why it shows only one when I am creating the rule. So I delete it and create it again: name "home pool," target type virtual machine, the first VM; then the second VM; then the third VM. Now all three machines have been added. So when I add the routing rule again: rule name "valaxy routing rule," priority 1, listener name "valaxy listener," front-end IP the public one by default, protocol HTTP on port 80, listener type Basic, and I do not want any custom error pages. In the backend targets I select the backend pool I created with the three servers, the listener, and the valaxy rule with the defaults we already discussed, and I am not overriding anything here; Create.

After this, you see "Path-based routing: you can route traffic from this rule's listener to different backend targets," and here you can add multiple targets: /men/* with target name "men pool," the rule name we created, and the backend target. But again it shows only the home pool; it should show the individual servers, so let me delete this and create the pools properly, because I created some confusion while building the backend pool. First of all, let me check what server one is actually running: the VM whose IP ends in 92 is running the homepage, so I select that as the "home pool," the first one, and add it. The second pool is "men pool": the VM whose IP ends in 151, machine two, running the men page; add it. And the third is "women pool" with the third machine. All together I had put them in a single backend pool earlier, but we should create all three as separate pools, and make sure the naming conventions you use map to the right VMs: the third machine, the one whose public IP ending in 238 I removed earlier, contains the women content, this machine contains the men content, and this one is the homepage. So I configured the backend pools that way; Add. Three pools. Now, when I add the routing rule: "valaxy route," priority 1, listener name "valaxy listener," and the rest are just the default names.
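For reference, the separate pools and the path-based rule being built in the portal map to something like the following CLI sketch; the gateway and listener names, the pool names, and the private IPs are assumptions for illustration:

```bash
# One address pool per section, each containing that section's server
az network application-gateway address-pool create --resource-group azure-rg \
  --gateway-name valaxy-appgw --name homePool --servers 10.10.1.4
az network application-gateway address-pool create --resource-group azure-rg \
  --gateway-name valaxy-appgw --name menPool --servers 10.10.2.4
az network application-gateway address-pool create --resource-group azure-rg \
  --gateway-name valaxy-appgw --name womenPool --servers 10.10.3.4

# URL path map: /men/* and /women/* go to their pools, everything else to homePool
az network application-gateway url-path-map create --resource-group azure-rg \
  --gateway-name valaxy-appgw --name path-map \
  --paths "/men/*" --address-pool menPool --http-settings appgw-http-settings \
  --rule-name men-rule \
  --default-address-pool homePool --default-http-settings appgw-http-settings
az network application-gateway url-path-map rule create --resource-group azure-rg \
  --gateway-name valaxy-appgw --path-map-name path-map --name women-rule \
  --paths "/women/*" --address-pool womenPool --http-settings appgw-http-settings

# Tie the listener to the path map with a path-based routing rule
az network application-gateway rule create --resource-group azure-rg \
  --gateway-name valaxy-appgw --name valaxy-routing-rule \
  --http-listener valaxy-listener --rule-type PathBasedRouting \
  --url-path-map path-map --priority 1
```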
Continuing in the portal: for the backend targets I select the home pool; the backend settings (how the traffic should come in, what kind of cookies to enable) use the valaxy rule, and everything else stays at the defaults. Below that, I add multiple targets to create the path-based routing: /men/* with target name "men pool," the rule we created, and the men backend pool; add it. The same way I add one for women: /women/* with target "women pool," the same rule name, and the women pool; add it. Once that is done, look at the listener and the backend targets: this is the listener we created, and in it we merge the front-end IP with the backend, so the three backend pools we configured, the HTTP listener, and the HTTP rule all fit together. Whether it is cookie-based affinity, connection draining, or any other setting, based on that the gateway talks to the backend and returns the response to the front end; all of that is configured in these two pieces, listener and backend. In the listener it is HTTP (not HTTPS), plus the name and the front end; in the backend, the pool plus the rule, the valaxy rule we created. Previously that rule did not exist, which is why it was asking for one when we added the path. I attached the same rule to all three backend pools; as I mentioned, if you prefer you can create three different rules and attach them to the three different backend pools, it is up to us. Very simple: Add, then Next, Review + create, Create. Once this is created, whatever front-end IP it gets, with that one IP we should be able to access all three pages if the configuration is correct. Previously it used to take some 10 to 15 minutes to create an Application Gateway; we will see whether they have improved the creation time.

Shiva, your question is why we selected individual VMs as the backend target instead of the availability set. Here we only have the options to select VMs, virtual machine scale sets, on-premises datacenter servers, or Azure App Services; those are the four backend options they give, and we have not covered virtual machine scale sets yet, so I created plain VMs. And when you go with plain VMs, Application Gateway expects you to create them inside an availability set, with the fault domain and update domain features; if you do not, then when adding the servers to the backend pool it will not allow you to add them. That is why I created them in the availability set. Rajesh Chopra asks: should it be a static IP, or is that not necessary? For which one, the Application Gateway? For the Application Gateway, yes, obviously it should be a static IP, because, as we discussed with Traffic Manager and the load balancer, once the front end is created the Application Gateway gives you some IP and you access the site with it. But if you want a proper site name, then, as we configured previously with DNS zones, you purchase a domain name and create an alias record inside the DNS zone that maps the domain name to this IP.
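A sketch of that DNS idea, pointing a purchased domain at the gateway's static front-end IP with a record in an Azure DNS zone; the zone name is a placeholder and the IP is just the example figure used in the discussion:

```bash
az network dns record-set a add-record \
  --resource-group azure-rg \
  --zone-name example-domain.com \
  --record-set-name www \
  --ipv4-address 20.29.28.2
```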
That way you can access the site with the domain name: in DNS you create the record with the IP, say 20.29.28.2, as an alias for some domain name like init6.com. If it were not a static IP and the Application Gateway got a new IP, how would DNS know about the new IP? So it should be a static IP; that is why a static IP is created when you want to connect a domain to the Application Gateway, Rajesh.

Pankaj asks: today I created one RHEL VM with a public IP, inbound set to allow all for any source and destination, but I am not able to log in when connecting over SSH through MobaXterm, even though connectivity is there from my machine on port 22. If you allowed the inbound connection from any source to any destination, you should be able to connect. Which RHEL version did you take? Some of the latest images will not let you log in easily; better to go with slightly older versions like RHEL 7, and do not go with Ubuntu 22.04 either; RHEL 8 and some newer images give problems, so try a slightly older version. If your NSG rule is good then you should be able to access it; also check the exact error you are getting, connection refused or connection timeout. Srikanth, no, Azure is not giving certificates for this; they give certificates only for Azure App Services, which we will discuss in coming sessions (the managed equivalent of what we do manually by creating VMs and installing on them), and even that is for six months and I think still in preview, so it is not like AWS, which gives free certificates. We did not discuss Front Door yet, Rajesh; people would get confused between Front Door and Application Gateway, Front Door runs over the Azure backbone, we will cover it separately. Also, I am really sorry, I forgot to create the voucher over the weekend; I will definitely create it tonight and share it tomorrow. And again, instead of RHEL 8.7, go with the RHEL 7 images or with CentOS.

Now the deployment is completed, quite quickly this time. Let's go to the resource, the Application Gateway; this is its public IP, so let's see whether we can access it. Okay, we are getting the homepage, and we try /men: men is not coming, "not found," even though it works when we access the machine individually. Do we need to give an extra slash, or has some mistake been made while configuring? A note on public IPs: they are not required on the backends; if you observed while configuring the backend pool, it was showing private IPs only and not expecting public IPs, so only private IPs are needed. The reason I created public IPs was to log into the servers to install the nginx software; later we can remove the public IPs from all three. So the problem is probably in the path configuration, the /men/* path we provided in the rule; maybe we should not have provided the /* part, I will try it that way and see whether it works. I will check it today and let you know tomorrow exactly where it went wrong; maybe some setting in this configuration is off, and I will troubleshoot it.
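A hedged troubleshooting sketch for the /men result seen here: check the gateway's view of backend health and compare what the gateway forwards with what nginx serves directly. Resource names and the placeholder IPs are assumptions:

```bash
# What does the gateway think of each backend pool member?
az network application-gateway show-backend-health \
  --resource-group azure-rg --name valaxy-appgw --output table

# Test through the gateway's public front-end IP
curl -i http://<appgw-public-ip>/          # home pool
curl -i http://<appgw-public-ip>/men/      # expected: the men page

# If /men fails through the gateway but works directly on the VM, the usual suspects
# are the /men/* path match or an "override backend path" rewriting the forwarded path.
curl -i http://<vm2-private-ip>/men/       # run from a VM inside the VNet
```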
Yes, I already did that and saved the changes; maybe it needs /men or /men/, and the path-based rule is already added. In the path-based backend targets the first one is the home pool, which we selected as the default path, so maybe something there is blocking this, because individually it worked fine when we accessed /men with the machine's own IP. I will look into it and let you know tomorrow. Okay guys, thank you, thanks everyone, have a nice evening.
Info
Channel: Praveen Korvi
Views: 853
Keywords: azure application gateway, application gateway, application gateway azure, application gateway azure configuration, what is azure application gateway, azure application gateway tutorial, azure application gateway explained, how to deploy azure application gateway, azure, azure application gateway overview, application gateway firewall, what is application gateway, azure application gateway demo, how to deploy application gateway, application gateway azure app service
Id: V9EP4jAg4QM
Length: 79min 33sec (4773 seconds)
Published: Fri Mar 15 2024