Avi and NSX-T Integration - Part 1: Design & Overview

Captions
Hi everybody, my name is Trevor Spires with howtonsx.com, and today we're taking a look at the brand-new integration between Avi Networks, a.k.a. the NSX Advanced Load Balancer, and NSX-T, the core software-defined networking platform from VMware. This is going to be a three-part series. Today, in part one, we'll do a quick overview of the integration: how it works, what makes it unique, and how it compares to the way you would have configured an Avi load balancer in the past. In part two I'll do a deep dive on some of the things I learned. It took me a few days to get this running, mostly because I was learning as I went, so I want to pass along those lessons so you don't spin the same cycles. The third video will be basic setup: I'll walk you through the installation and configuration end to end so you can get this running in your production or lab environment. With that said, let's get started.

First, I want to set the stage for what this integration between NSX-T, the SDN platform, and Avi Networks, i.e. the NSX Advanced Load Balancer, actually is. You'll hear the load balancer referred to by both names right now. VMware bought Avi back in 2019, and ever since then they've been slowly integrating the platform into NSX. It's August 15th, and just this last week the integration dropped. Up until now, the two platforms were really only associated by name: VMware rebranded Avi as the NSX Advanced Load Balancer, but it was still essentially a standalone platform. This is the first big step in marrying the two together, the two platforms working with and complementing one another, so it's a big step forward in VMware's overall vision for NSX and Avi.

What does the integration bring to the table? Two big things. First, it expands the current feature set of NSX. If you're an NSX customer, this takes load balancing to the next level for the platform. The current native NSX-T load balancer is fairly basic: it can do some useful things, but it can't expand out to global server load balancing, DNS load balancing, or WAF, and there's no auto-elasticity like you get with the NSX Advanced Load Balancer. That's a big reason I think VMware is doing this; it adds a lot of value to NSX. Second, it changes, for me anyway, how I think about Avi. Traditionally I thought of Avi as a north-south load balancer, a big point of ingress and egress for your data center, and that's how Avi positioned the platform for a long time; they went after web-app and big SaaS-type companies because the product can scale massively. This NSX-T integration is really cool because you can certainly still use it north-south, but now you can also harness the power of Avi as an east-west load balancer in this design. You could have done that before with standalone Avi, but it was more challenging, and it's not really how the product was marketed. Now I see it as north-south and east-west load balancing when integrated with NSX-T.

That's it for the overview. I'm going to stop showing you this picture of my face, share my screen, and take a quick look at the platform and how this integration between NSX-T and Avi actually happens.

I want to show you two quick docs before I go too deep. The first document here is the installation guide. This is where you start if you want to get this stood up in your environment; it's a really good step-by-step walkthrough of getting this up and running. I won't go into it right now; I just want to make sure you see the URL. If you're on the Avi docs website you can also search for it, but make sure you're navigating to version 20.1. That's the newest release of the Avi platform and the only one that supports this integration. If you already have Avi, you'll upgrade to 20.1; if it's a fresh install, just make sure you're installing the latest 20.1 bits.

The second document is the NSX-T design guide for Avi Networks, or the NSX Advanced Load Balancer; for the rest of the video I'll just call it Avi, since that's easier to say. I would highly recommend reading every word of this if you're going to use it in production. It's not that long, and there's a lot of valuable information on both of these pages. If it's going into production, read them extensively. If you're just throwing it in a lab, hey, ignore them and see what happens, but you're going to wish you had read them, because I didn't read them.
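Since 20.1 is the first release with the NSX-T integration, a pre-flight version check is worth building into any automation around this. Here's a minimal sketch; the function name and the way you'd obtain the version string are my own, not from any Avi SDK:

```python
# Hypothetical pre-flight check: the NSX-T cloud connector requires
# Avi (NSX ALB) 20.1 or later, per the install guide.

def supports_nsxt_cloud(controller_version: str) -> bool:
    """Return True if this Avi Controller version supports the NSX-T cloud."""
    major, minor = (int(x) for x in controller_version.split(".")[:2])
    return (major, minor) >= (20, 1)

print(supports_nsxt_cloud("20.1.1"))   # True
print(supports_nsxt_cloud("18.2.9"))   # False -> upgrade first
```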
I just tried to set it up, and I wish I had spent more time reading up front; it would have saved me time in the long run.

Also make sure you're running NSX-T 2.5 or 3.0, and you need to be running vCenter 6.7 or 7.0, because this is really a three-way integration: the Avi Controller integrates with both NSX-T and vCenter at the same time in order to automate the deployments. Pretty cool stuff.

Let's get to the overview of the platform. The logical place to start is the architecture: how does it actually work at a high level? First, you install the controller. It ships as an OVA if you're deploying on-prem, and I'm doing everything on-prem here because this is an NSX-T integration, which generally means an on-prem deployment. The Avi Controller is the control plane, the GUI, the queen bee: the thing that manages everything. It plugs into NSX-T via API and into vCenter via API, so the systems know about each other, which is what allows the integration to happen; vCenter can also send notifications back. The only other component to note is the Avi Service Engines. These are the worker bees: the actual data plane, the load-balancing fabric you'll deploy. They need a management connection back to the controller. That's really it; it's not a complex architecture. All it is is chaining vCenter, NSX-T, and the Avi Controller together to enable the load-balancer integration.

I mentioned the requirement for management connectivity between the Avi Service Engines and the Avi Controller. First, I'd recommend going through the design guide and the install guide to figure out the necessary ports and protocols, because if you're using a firewall you may need to allow them, and not just from the Service Engines: also from NSX-T to Avi, from vCenter to Avi, and vice versa. If there's a firewall in the path, pay attention to that. There must be a management connection between the Service Engines and the Avi Controller, because Avi will constantly be pushing configuration down to those Service Engines.

One requirement as of today: with this integration, the Service Engines need to sit on an overlay segment in NSX-T. It's not explicitly depicted here, but you can assume it's an overlay segment since it's under the Tier-1, and it does have to be overlay. That means you need functional NSX-T overlay networking before you even begin thinking about deploying Avi on top of NSX-T. Avi will automatically deploy the Service Engines to that overlay segment; you'll see that in a bit, and I'll show it in the second or third video in this series.

Now for the cool part: the traffic flow, how the data plane works. There are two deployment models for the Service Engines, the worker bees moving packets back and forth between your applications and your end users.

Option one is to deploy your Service Engines on a dedicated, standalone segment. Looking at the diagram, if you're not familiar with NSX you might think that seems inefficient, like we'd be hairpinning traffic up and down between the Service Engines and the servers. Actually it's quite efficient, because the whole idea of NSX-T is that routing is distributed down into the hypervisor. Even though it looks like hairpinning, this is a really efficient path: when a client connects to a VIP on one of the Service Engines, the traffic is routed over to another segment connected underneath the same Tier-1 gateway. It's all east-west, and the packet could even stay within a single hypervisor the whole time if you configured it that way with affinity rules or what have you. So option one looks a little funky in the diagram, but it's efficient, and I wouldn't be scared of it. For a large-scale deployment I would probably choose this method, because I could dedicate a whole segment as my VIP network and then attach any number of back-end server segments to my Tier-1 gateway to load balance to.

Option two, on the right-hand side, is what I've done in my lab, and it's probably the simplest, most straightforward configuration: you put your Service Engines on the same logical segment, the same overlay network, as your servers. I did it that way because it's quicker to stand up and configure, but you can use either option; it truly does not matter.
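The three-way wiring described above ends up captured in a single "cloud" object on the Avi Controller, which carries both the NSX-T Manager and vCenter endpoints plus the overlay segment used for Service Engine management. The sketch below is from memory of the 20.1 API and every field name should be treated as illustrative; the install guide and API reference are authoritative:

```python
# Rough shape of an NSX-T cloud object on the Avi Controller (illustrative;
# verify field names against the 20.1 /api/cloud documentation).

def nsxt_cloud_payload(nsx_mgr: str, vcenter: str, mgmt_segment: str, tier1: str) -> dict:
    return {
        "name": "nsxt-cloud",
        "vtype": "CLOUD_NSXT",               # cloud connector type
        "nsxt_configuration": {
            "nsxt_url": nsx_mgr,             # NSX-T Manager the controller talks to
            "management_network_config": {
                # SEs must sit on an NSX-T *overlay* segment for management
                "overlay_segment": {"tier1_lr_id": tier1, "segment_id": mgmt_segment},
            },
        },
        # the same cloud also binds to vCenter so SE OVAs can be deployed there
        "vcenter_servers": [{"vcenter_url": vcenter}],
    }

cloud = nsxt_cloud_payload("nsx.lab.local", "vc.lab.local", "seg-avi-mgmt", "t1-gw")
print(cloud["vtype"])  # CLOUD_NSXT
```

All hostnames and IDs here are placeholders; the point is that one object ties the controller to NSX-T, to vCenter, and to the management overlay segment at once.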
It might be self-evident, but I want to make sure this is clear: in this configuration, the Service Engines run in what we'd call one-arm mode. The packet from the end user goes through your physical network, into your Tier-0 gateway, down to your Tier-1 gateway, into a virtual NIC (vNIC) on the Avi Service Engine, and then right back out that same vNIC to your server. The return traffic also flows back through the Service Engine and out to the client, because a NAT takes place at the Service Engine. I don't know exactly why it was built this way, because if you use Avi as a north-south load balancer, for example, the Service Engines can function like a router with a leg in two different networks. But here it's a one-armed configuration. Not a big deal at all, just something to note: these Service Engines aren't functioning as routers, they're functioning as one-arm load balancers.

You can also put multiple VIPs or services on the same Service Engine. What I've read in the docs (I haven't tested this yet) is that when you do, Avi adds an additional interface to the Service Engine and puts it in its own VRF. That way you can still consolidate VIPs onto a Service Engine while keeping traffic separated between applications, which is good from a security perspective; you don't necessarily want everything in the same VRF. So those are the two deployment options, the two design options, for this integration.

This, to me, is one of the most exciting parts of the integration. One of the fundamental values of Avi is that it's an elastic load balancer: it can auto-scale, and it can run active/active across up to 64 Service Engines. When you configure a load balancer in Avi that's integrated with NSX-T, it will automatically add static routes from your Tier-1 gateway down to your Service Engines; in other words, it automatically sets up ECMP. Compare that to a deployment on a physical network: you can still do it, but you'd need some kind of BGP configuration, or perhaps automated static routes on the physical gear (I haven't done that config), and it's a much more challenging setup. Here the ECMP is done for you, so the auto-scaling feature is simpler than ever. I'll show in one of the other videos how the NSX-T Tier-1 is manipulated by the Avi Controller; because of that, it's easy to add additional next hops pointing at the Service Engines.

That's what I'd recommend: I won't recommend anything other than the ECMP deployment, because it's too easy to set up and the benefits are too big. With active/active, you could lose a Service Engine and be totally fine; you'd stay alive. By the way, this deployment option applies to either an active/active configuration in Avi or what's called an N+M configuration, which is basically active/active with some buffer capacity; both of those settings in Avi are relevant here on top of NSX-T. Maybe you don't want an active/active deployment; I don't know why you wouldn't.
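The auto-ECMP behavior above boils down to one static route per VIP on the Tier-1, with one next hop per Service Engine. Here's a sketch of the kind of route body the controller effectively writes via the NSX-T Policy API; the helper is my own illustration, not Avi code, and the Service Engine IPs are placeholders:

```python
# One static route per VIP (/32) whose next hops are the Service Engine
# data IPs. Equal admin distance on every next hop is what gives ECMP.

def vip_static_route(vip: str, se_ips: list[str]) -> dict:
    return {
        "network": f"{vip}/32",                      # the VIP itself is the destination
        "next_hops": [
            {"ip_address": ip, "admin_distance": 1}  # equal distance -> ECMP across SEs
            for ip in se_ips
        ],
    }

# VIP from the lab demo later in the video; SE data IPs are illustrative.
route = vip_static_route("172.16.10.20", ["172.16.10.101", "172.16.10.102"])
print(len(route["next_hops"]))  # 2 -> one per Service Engine
```

Scaling out is then just appending another next hop to the same route, which is exactly what the controller does when it spins up an additional Service Engine.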
But you do have one other option: active/standby. In an active/standby deployment, only one Service Engine is active at any given time. We still use the concept of a static route, injected into your Tier-1 gateway, but it points only at the one active Service Engine; there's no ECMP because you're not active/active.

Also in this configuration, and this is in the design and install docs, your Tier-1 gateways will be redistributing these static routes. As Avi Service Engines are spun up and down, static routes are added and removed, and they're advertised upstream. So when you create a new VIP in the Avi Controller, you don't need to touch any routing: the static route is injected at the Tier-1 gateway and redistributed, so the VIP is automatically advertised upstream. Part of the setup process is enabling redistribution of static routes from the Tier-1.

Here's the last thing, but be careful, because it isn't technically in the product yet. As I mentioned, I'm recording this on August 15th. There's a slide in the design guide that basically says that, through automation, the Avi Controller will automatically configure distributed firewall (DFW) policy inside NSX-T to allow all the communications the system needs. That's what the document says. But read the fine print: while it's a very pretty picture and a very cool idea, it's not fully integrated yet. My guess is they tried to get it into this release and it just didn't quite make it. The note says that automatically configuring DFW policies when a VIP or the Service Engines are deployed does not happen. Avi does create the networking and security groups, so your Service Engines, controllers, and everything else get put into groups in NSX, and you can build policies off those groups manually right now. But as of today, automatic creation of distributed firewall policies does not exist. So don't be confused: even though the picture is there, it's not supported just yet. My guess is it'll land in the very next release, given it's in the doc with a note that it's not supported.

Okay, I believe I've covered the high-level material, so now let me take you into my environment and show you what this looks like from an operations perspective. I'm logged into my lab; first, the controller. In my Avi Controller I have just one VIP, one virtual service, configured. It's called "my first virtual service," and it's very basic: port 443, load balancing to two back-end web servers. It's just a simple demo app somebody at VMware made, the same one used in the hands-on labs, balanced across two web servers. Basically, once you've set up the system and done the integrations, when you configure a VIP, Avi automatically deploys these two Service Engines; again, the Service Engine is the data plane. I can even show you in my vCenter that these Service Engines were deployed automatically by Avi. In fact, on the back end, the controller uploads the Service Engine OVA to a content library and deploys the Service Engines from there.
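Since 20.1 creates the NSX-T groups but, as noted above, not the DFW rules, any rules are on you for now. The sketch below shows the general shape of an NSX-T Policy API security-policy body you might build against the auto-created groups; the group paths, service paths, and rule names are all placeholders, and the real port list belongs in the install guide's ports-and-protocols section:

```python
# Manual DFW rule against the auto-created Avi groups (illustrative only;
# check the NSX-T Policy API reference and the Avi install guide for the
# real paths, services, and required ports).

def se_to_controller_policy(se_group: str, ctl_group: str, services: list[str]) -> dict:
    return {
        "resource_type": "SecurityPolicy",
        "category": "Application",
        "rules": [{
            "resource_type": "Rule",
            "display_name": "allow-se-to-controller",
            "source_groups": [se_group],        # auto-created Service Engine group
            "destination_groups": [ctl_group],  # auto-created controller group
            "services": services,               # management channels (placeholders)
            "action": "ALLOW",
        }],
    }

policy = se_to_controller_policy(
    "/infra/domains/default/groups/avi-se-group",      # placeholder path
    "/infra/domains/default/groups/avi-ctl-group",     # placeholder path
    ["/infra/services/SSH", "/infra/services/HTTPS"],  # placeholder services
)
print(policy["rules"][0]["action"])  # ALLOW
```

If automatic DFW policy creation does land in a later release, rules like this would presumably be generated for you.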
That's cool, because if we needed to scale out, for example, I could go into the virtual service and trigger a scale-out or scale-in event, and Avi would deploy an additional Service Engine to give me that elastic throughput. I could also automate the scale-out and scale-in, but we won't go into that today. It's a basic config, and I'll make another video showing how to configure it. So that's what it looks like from the Avi and vCenter perspective.

In NSX it's a little different; a few things happen when you configure Avi and deploy a VIP into your NSX cloud. Let me show one more thing first. In the controller, if I create a new virtual service after configuring NSX, I'm presented with the option to deploy it to my NSX-T cloud. If you're familiar with Avi, in previous versions you'd have seen Amazon, Azure, or your vCenter cloud here; now you can go straight to NSX-T, and that's how you tell the controller where to deploy the VIP you're configuring.

So what happens in NSX when I configure a VIP? A couple of things; it's all automated and happens fast, so I'm not sure of the exact order. One thing is that Avi automatically creates these networking and security groups. Don't edit these (I haven't tried editing them, but don't). The virtual service, the management VIPs, the servers, and all that good stuff get placed in groups automatically. I can click in and see my Avi Controller, my Service Engine management IPs, and the Service Engines themselves. That's all automated. If you wanted to, you could add these to the exclusion list or build firewall policies around these groups, but right now the groups are just created and nothing else happens.

The other thing: remember the diagram from earlier with the Tier-1 gateway and the Service Engines doing ECMP to load balance traffic? That happens too. If I look at the static routes under my Tier-1 gateway, the Avi system has configured a route whose destination is actually my VIP, because the VIP lives on every Service Engine. This 172.16.10.20 VIP: if I look in vCenter, both of these Service Engines are listening on that IP. That's part of what makes this elastic and scalable: one VIP can live on multiple Service Engines at the same time. I have two here, but we could scale out massively. The next hop of each route is the IP of a Service Engine itself. As Service Engines are spun up by Avi, you'd see a new route (really, a new next hop) added, with the IP pulled from the pool. I won't show it now, but if I added another Service Engine, you'd see the same configuration with another next hop, 172.16.10.102, because IPs are pulled in sequential order.

So, what did we cover? Let's recap. I showed you the design guide and the install guide, and again, I highly recommend reading both if you're going to deploy this in production; you've got to understand the integration and how it works, so read them at least once. We walked through the architecture components, how the thing works, and how it does ECMP. Then I showed you in the lab what it actually does and how the Avi Controller manipulates vCenter and NSX to make this a really automated process. The experience of deploying and creating a new virtual service is really, really easy with this integration.

That's all I have for today; that was the overview of Avi Networks, and stay tuned. I'll make another video walking through how you'd configure a VIP and what happens behind the scenes, the process that's automated when you configure it, plus some lessons learned from setting this up for the first time. Then I'll make a more straightforward video on just the installation and configuration of the platform. I hope this was helpful. If you have any questions, as always: if you're watching on YouTube, use the comment section; if you're on my blog, the comment section there; and my contact info is on howtonsx.com if you'd like to reach out individually. I'll keep an eye out for you. Thank you for sticking around, and happy load balancing out there. All right, bye everybody!
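As a recap of what "creating a virtual service" involves, here's a sketch of the kind of objects the controller works with. Avi's real API splits this across separate pool, VsVip, and virtualservice objects; the flat shape and most names here are simplified for illustration, and the back-end server IPs are placeholders (only the VIP and port come from the lab demo):

```python
# Simplified sketch of a virtual service + pool definition. In the real
# Avi API these are separate objects (/api/pool, /api/vsvip,
# /api/virtualservice); this flat form is just for illustration.

def build_virtual_service(name: str, vip: str, port: int, servers: list[str]) -> dict:
    return {
        "name": name,
        "vip": [{"ip_address": {"addr": vip, "type": "V4"}}],
        "services": [{"port": port}],
        "pool": {
            "name": f"{name}-pool",
            "servers": [{"ip": {"addr": s, "type": "V4"}} for s in servers],
        },
    }

# Roughly the lab setup from the video: one VIP on 443, two web servers.
vs = build_virtual_service(
    "my-first-virtual-service", "172.16.10.20", 443,
    ["172.16.10.11", "172.16.10.12"],  # back-end server IPs are placeholders
)
print(vs["services"][0]["port"])  # 443
```

Everything downstream of this one definition (Service Engine deployment, Tier-1 static routes, NSX groups) is what the integration automates for you.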
Info
Channel: Trevor Spires
Views: 2,472
Keywords: load balancer, load balancing, avi, avi networks, nsx-t, nsx, nsxt, vmware, load balance, integration, virtual cloud network, nsxalb, nsx alb, nsx advanced load balancer
Id: fTvzUBFGDRg
Length: 28min 3sec (1683 seconds)
Published: Sun Aug 16 2020