NSX-T Architecture & Benefits by Erik Bussink, VMware

Captions
[Music] So I've got quite a lot of content. I actually didn't expect 45 minutes but a little bit more, and I've got videos of some demonstrations, so luckily we'll save time because they're fast-forwarded. I just have one or two questions: everyone understands English, or should I switch to French? No? Okay. The next question I have is: does everyone know about NSX? Are there people that don't know NSX, or people that don't know about network overlays? Okay. What I'm going to do in this presentation is actually level 200-300; I'm not going to do a deep dive. I have quite a lot of slides, so I'm going to try to keep the questions for the end if you can, because I have lots of things that will come in later on.

So NSX is not just a single product anymore; we are now up to a portfolio with at least six different products, different acquisitions and different tools. The last one we did was buying VeloCloud, so now we also have NSX SD-WAN by VeloCloud, allowing you to communicate with remote sites. But the main focus of the presentation today is going to be on NSX Data Center and its extension to the cloud, based on our latest version of the product, which is NSX-T, the third iteration of NSX. The agenda for today is to first see a bit of the definition and the problems we're trying to solve, talk about the overlay that we've implemented, with the videos, then getting started with NSX-T, and after that I'll go to a demo of NSX-T and how we can leverage the multi-cloud and do micro-segmentation between an on-prem data center and the cloud. Then we'll dive into the components of the architecture and finish with the different distributed services that are inside NSX-T.

So I guess everyone knows about the host virtualization product we've done for the last 20 years, where you can easily deploy multiple virtual machines. In this case we've got two racks and they each have their own subnets. With vSphere ESX you can create machines, you can have multiple VMs, and the best thing we've had for about 15 years, which has probably saved quite a lot of you weekends of work, is vMotion. There's no way I will give up vMotion: the fact that you can migrate a virtual machine live from one host to another and go on your weekend normally, without maintenance windows. Well, in our case, if we vMotion this VM, for example, to another rack, and this rack has a different subnet, then we're going to have some issues, because the network will not be the same on it. The problem is that we're not going to have the communication. Now, when I say a second rack, it could be a second data center, it could be on the other side of town, so we have an issue.

What happened is that people started creating stretched Layer 2, and lots of products came out about 10 years ago to optimize this, but this can also cause us some different issues. What happens if, in a stretched Layer 2 environment, you have instances of physical firewalls, load balancers, F5s? You're not going to have them all over the place, and you're going to have to route traffic from one side back to the first one to get through the services that you have for north-bound traffic. So suddenly we found ourselves again with an issue, with traffic tromboning all across the MAN, and that was a problem we tried to address with NSX. What we're going to do with NSX is instantiate a Layer 2 broadcast domain inside every hypervisor. So here I've got some blue VMs on a logical switch.
On each host we will implement NSX-T and create a local Layer 2 domain, and the communication between hosts is going to be done via an IP tunnel. What you see on the graphic here are actually TEPs, which are the tunnel endpoints on every host, and then in our data plane what we're going to keep track of is which VM on which host, or the MAC address of that VM, corresponds to which tunnel endpoint. Our two blue VMs are on our logical switch, so we now have an N-VDS switch inside the host; N-VDS stands for NSX-managed virtual distributed switch. We've evolved from the vSphere distributed switch to this, and our two VMs will be on this network. You will get all the slides afterwards, and the videos will be in there. At this point, when a virtual machine is moved from one host to another, the traffic between the VMs will be encapsulated through this tunnel, so that's our overlay, really. Coming back to the VMs that move: our data plane will update the settings, so now we know that VM 2 is behind the tunnel endpoint on the new host, and we're going to be routing the traffic between the two machines again via the IP tunnel, or the overlay. As you see, the physical network subnets A and B can still be different; we'll still have connectivity between the VMs and they will be able to move from one side to the other.

To get this working, because we are encapsulating the traffic, we have only two requirements on the physical network: we need IP connectivity, and we need the MTU to be at least 1600. Jumbo frames are recommended, but not everyone is doing that yet. After that, all the communications work. The overlay that we use with NSX-T is called Geneve, the generic network virtualization encapsulation. It's a follow-up to VXLAN, but it allows more metadata in the headers and flexible options. So for our requirements, IP connectivity and 1600 MTU, you can use whatever physical network you have: if you've got old switches, if you've got new switches, if you're spine-leaf, if you've got different vendors, routed, whatever configuration you have, it works. You don't have to reinvest in new switches; only invest in switches when you are near capacity or out of support. There's no need to change the lower level of the infrastructure. If your infrastructure is multiple vendors or multiple generations of switches, that doesn't matter to us, as long as we have this IP connectivity and the MTU.

One of the other benefits we get from the virtualization of the network with the overlay is that you're only going to have the logical switches for those virtual machines on the hosts that have them, and we can reuse IP ranges. So if you later on want to have a copy of an application for quality assurance or for testing, you can just copy the VMs with the network, so there's no address conflict, and we'll see afterwards how we can manage the routing. You can do all this either using the user interface — NSX-T is now using an HTML5 interface, no more Flash, no more Flex, that's done, and it's a completely separate management platform, so you don't have to go via vCenter to do this — and it can all be automated, because behind it everything uses REST API commands. So you can use PowerShell, Ansible, Terraform, whatever; you can automate it. I've got customers here in Switzerland that have deployed the whole NSX with a PowerShell script, others have used Ansible, so whatever you fancy, within limits, you can use to deploy it.
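As a concrete illustration of that automation point, here is a minimal sketch of talking to the NSX-T Manager REST API with Python instead of PowerShell or Ansible. The hostname, credentials and the endpoint path are placeholders, not taken from the talk, and exact API paths and payloads vary between NSX-T versions, so check the API guide for the release you run.

```python
# Minimal sketch: query the NSX-T Manager REST API with basic auth.
# nsx-mgr.lab.local, the admin credentials and the endpoint path are
# placeholders / assumptions, not taken from the talk.
import requests

NSX_MANAGER = "https://nsx-mgr.lab.local"
AUTH = ("admin", "VMware1!VMware1!")          # use a secrets vault in real life

session = requests.Session()
session.auth = AUTH
session.verify = False                        # lab only: self-signed certs

# List the logical switches the manager knows about (management-plane API).
resp = session.get(f"{NSX_MANAGER}/api/v1/logical-switches")
resp.raise_for_status()

for ls in resp.json().get("results", []):
    print(ls["id"], ls["display_name"])
```

The same pattern (GET/POST/PUT against the manager) is what the PowerShell and Ansible deployments mentioned above do under the hood.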
The next step, after just managing this Layer 2 broadcast domain and networking, is that we're going to be able to do firewalling straight on the vNIC of the virtual machine. It's not done inside the virtual machine but on the network card, so with this we can really filter: we can do micro-segmentation stopping subnet 1 from talking to subnet 2, or micro-segmentation within the same subnet, so that the database server cannot talk to the web server and only the app server can talk to the database, and so on. All those rules are managed centrally in the NSX management; there you can group the VMs in multiple ways, by IP set ranges, by machines, and you can use regex expressions to group those virtual machines for those rules.

But with NSX we wanted to do more than just Layer 2 and micro-segmentation. The idea is to have all the different network services inside the software: switching, routing, load balancing, VPN, firewall, connectivity to the physical world, Layer 2 bridging if you've got hosts that are still physical and you want them to talk to virtual machines, NAT, DHCP and the metadata proxy. We also have a small IPAM for specific use cases; if you do containers inside NSX you can consume this as well. The best thing about this version of NSX, and I think it's a game changer compared to the previous version, which was NSX for vSphere, is that we support multiple platforms. You can manage ESX hosts running NSX-T, you can have clusters with KVM hosts — we support Ubuntu, Red Hat, CentOS, SUSE — and we have an agent that deploys and manages this N-VDS on those Linux machines using Open vSwitch. With the same agent we can also protect Linux bare-metal servers: if you've got SAP HANA running on a SUSE 12 and you want to protect it with micro-segmentation, you can do it; if you run an Oracle database on Red Hat machines, you can do it. We support workloads whether they are VMs, containers, Kubernetes, OpenShift, OpenStack, bare-metal agents, and multiple vCenters, so you're not bound to one NSX manager per vCenter; we just consume the vCenters for compute now. And it's the same code that is used to also extend NSX into the cloud: a single source code from which we compile the API, and then we can deploy virtual machines on Azure, on AWS, or on what we call VMC, which is VMware Cloud on AWS — that's ESX servers that run in AWS data centers — all of this with NSX-T, with a single source code.

For the use cases, I'm just going to briefly point to the different ones: security naturally, micro-segmentation, integration with firewalls from third parties, service insertion, cloud-native applications where we still get security, routing and load-balancing functions, automation — customers are wanting to really automate the deployment of the infrastructure, Terraform — and multi-cloud. It's the only slide I'm going to do on the use cases, because I've got a whole slide deck and could talk just on use cases, and we don't have that time.

The first demo I want to show — I also recorded the demo so I know it works — is an application we've deployed called Planespotter, and you can test it right now if you want. It has six front-end web servers that are located in AWS and in Azure, and the database for the application and the data itself is on-prem, in our data center in Palo Alto. We've got connections between our on-prem network and AWS using VPCs, and a VNet going to Azure. So when you hit this web page, you will land on one of those sites, and the data is kept on-prem.
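The kind of rule described a moment ago — only the app tier may reach the database, with membership derived from tags rather than addresses — can be expressed through the NSX-T Policy API. A minimal sketch, assuming hypothetical group names and tags and the same placeholder manager as above; the exact payload schema depends on the NSX-T version.

```python
# Sketch: tag-based group plus a distributed-firewall rule via the
# NSX-T Policy API. Names, tags and service paths are hypothetical.
import requests

NSX = "https://nsx-mgr.lab.local/policy/api/v1"
AUTH = ("admin", "VMware1!VMware1!")

def put(path, body):
    r = requests.put(f"{NSX}{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

# Group every VM tagged "app" in scope "planespotter", wherever it runs.
put("/infra/domains/default/groups/ps-app", {
    "display_name": "ps-app",
    "expression": [{
        "resource_type": "Condition",
        "member_type": "VirtualMachine",
        "key": "Tag",
        "operator": "EQUALS",
        "value": "planespotter|app",
    }],
})

# A matching "ps-db" group would be defined the same way (omitted here).
# Then allow only app -> db on MySQL; everything else is handled by
# whatever default rule the environment uses.
put("/infra/domains/default/security-policies/planespotter", {
    "display_name": "planespotter",
    "category": "Application",
    "rules": [{
        "resource_type": "Rule",
        "display_name": "app-to-db",
        "source_groups": ["/infra/domains/default/groups/ps-app"],
        "destination_groups": ["/infra/domains/default/groups/ps-db"],
        "services": ["/infra/services/MySQL"],
        "action": "ALLOW",
    }],
})
```

Because membership follows the tags, the same rule applies to the VMs discovered in AWS, Azure and on-prem without referencing any of their IP addresses.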
So let's have a look at the deployment. What we have here is the graphical interface of NSX — this is not the latest version, it's the previous one — so we have the manager nodes, two controllers, which are our control plane, the hosts, the edges, and what we're going to see here are the public cloud gateways going to Azure and AWS, connected to different VPCs. With NSX-T we now also have a search function, so when you want to quickly find your application, for Planespotter, you can do a query and it will bring up all the devices that have the term "planespot" in the infrastructure, based on the discovery it has done in AWS and in Azure. As we see, we've got different tags on the different machines; here we see this front-end web server, with its tier and a description in these tags. The next step, based on these tags, is that we're going to be able to group them. We've got different groups for the front end based on the tags — you can use IP sets, VMs and the tags — and we see that in this front-end group we've got the six machines that have been discovered via the inventory. What we're going to do afterwards with these six machines is create some firewall rules. Here are different sections; we're going to open the Planespotter rules, and using the tags we're going to show, for example for the app tier, the source being based on the tags, going to the MySQL database, and it's on the distributed firewall. So you don't have to be a specialist in AWS security or Azure security: from a single pane of management you'll be able to deploy the rules across the multi-cloud environment. Is this something that looks promising?

Right, to build up to this kind of demo, I'm now just going to go over the NSX-T architecture. I'll start small here, on the data plane. What we have is an ESX host with our N-VDS switch, and on top we've got our VMs. This N-VDS, which is a new kind of switch that we built, is managed by NSX and really allows us to work with very high performance and low-latency traffic. This is a switch that has also been optimized for telcos, for everything that is 5G, and we can push up to eight million packets per second on a two-CPU server. So depending on the requirements of the customers, if you do network function virtualization, NFV, you can use NSX-T for this. The second one, and the extension, is now to have the N-VDS on KVM hosts, so it's an Open vSwitch implementation of the switch with an agent. On top of that we also have container workloads, so if you're using Kubernetes, OpenShift, OpenStack, we upstream our code for this integration with plugins, so whatever OpenStack solution you use, you can consume it in an NSX-T environment.

Just talking about the traffic: our NSX edges — and I will come back to them at the end of my presentation and really explain more about where they sit — are the on-ramps to the physical world. When you have traffic going from one VM to another in the overlay, and it leaves the overlay for the underlay, so the physical world, it's going to be done via these edges. These edges exist in two formats: you can either have them as a virtual machine, or, if you need performance and fast convergence in case of network issues, you can also have them as a bare-metal server. Using the same agent we had for KVM, we can now also do security and micro-segmentation on bare-metal servers; as of the latest version we support Ubuntu, Red Hat, CentOS and SUSE.
Then there is the extension into the cloud; this is all what we call NSX Cloud. We're going to deploy an NSX cloud gateway in Azure or Amazon, and you will consume virtual machines in these clouds that have the agent inside of them; we don't have access to the lower level of these network infrastructures, only to the machines. Google? Don't know yet — we've got quite a lot of things planned for the future, but not as of today.

The next layer above is our control plane. We have an odd number of controllers that replicate the data between them; this is our supervisory module. They learn which MAC address is on which host: when you have a new machine that is connected, it will inform the controllers, and if you have a VM that wants to talk to someone it doesn't know, it will ask the controllers. On top of that we've got our NSX manager cluster, to which you connect using a cloud management platform, the interface, or the REST API — Python, Terraform, whatever you want. Using those REST APIs we have multiple integrations, such as our NSX Container Plugin: if you use Kubernetes and you want to automate the deployment of a new namespace, network or load balancer, it's all done via this plugin. In the latest version we've consolidated the control cluster and the management cluster, so it's a single set of three VMs that does all these things.

Okay, so let's go a bit through the NSX manager cluster and how it works. The first thing when you connect to the manager is that it will connect to all the different hosts: you can connect straight from the manager to an ESX host, or you can go via a vCenter to manage clusters. In a small environment you could just add ESX hosts or KVM hosts straight into the manager. This is done via the user interface or via the API. We keep all the user information persistent in this database, and all the queries will be done here. So if I want to connect a virtual machine to a network, it's the manager that we're going to configure: I've got my virtual machine 1 and its vNIC, we're going to create a logical switch — the short name for this is LS, naturally we like those short names in the documentation — then we're going to create a logical port. The good thing is you can also really define the port: you can create it, you can change it, we can put a description on the port as well, and then we're going to do the connection. Now, this is a desired state; it's not yet on the physical network, but at least it's saved, ready to be deployed. Our manager cluster is going to talk to the control cluster, it will push the stateless configuration forward, and this will be pushed towards the hypervisors.

Just deploying the NSX software on a hypervisor doesn't mean it will communicate: you have to instantiate and transform a host into a transport node so that they can communicate between them. It also gives you a facility for creating logical spaces of all the hosts that can communicate together: you could have a production namespace, or I would say a logical space, and you could have a lab space, and then you wouldn't be able to connect logical switches from one to the other. The controllers will disseminate the topology. When you create micro-segmentation firewall rules as well, it is done via the controllers, and these will be pushed out to the hosts that have those virtual machines. So if I create a set of rules to protect one virtual machine, they will only be pushed to the host that has that virtual machine, not the others.
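Going back to the logical switch and logical port the manager creates for that vNIC: below is a rough sketch of the same two desired-state calls against the management-plane API. The transport-zone ID and the VIF UUID are placeholders you would look up first, and field names may differ between NSX-T versions.

```python
# Sketch: create a logical switch and attach a VM vNIC (VIF) to it
# through the NSX-T management-plane API. All IDs are placeholders.
import requests

NSX = "https://nsx-mgr.lab.local/api/v1"
AUTH = ("admin", "VMware1!VMware1!")

def post(path, body):
    r = requests.post(f"{NSX}{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

# 1. The logical switch ("LS") lives in an overlay transport zone.
ls = post("/logical-switches", {
    "display_name": "ls-web",
    "transport_zone_id": "<overlay-tz-uuid>",   # looked up beforehand
    "admin_state": "UP",
    "replication_mode": "MTEP",
})

# 2. A logical port on that switch, with the VM's VIF as the attachment.
#    This is desired state: it is realized on the host afterwards.
post("/logical-ports", {
    "display_name": "vm1-vnic1",
    "logical_switch_id": ls["id"],
    "admin_state": "UP",
    "attachment": {"attachment_type": "VIF", "id": "<vm1-vif-uuid>"},
})
```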
So I send my configuration to host 1 to attach vNIC 1 to logical switch 1. We do that, it will reply, give the information, retrieve the MAC address, and update the table so that afterwards we can communicate with the virtual machine, and that will tell the others that we have logical switch 1 on that host; that's important.

So now I'm just going to show a small demo of how to deploy NSX-T; it's not as complicated as it sounds. In this demo we've got four ESX hosts: two will be used for the management and edge infrastructure, and we've got two hosts with virtual machines 1 and 2 on them. The first step is to deploy the manager cluster — this is accelerated so that you get out in time for lunch. The idea is to import the unified appliance; we deploy one for the demo and not three. You enter the information about where it's going to be placed, you enter the IP addresses, you give all the information about connectivity, passwords and so on. The good thing with a recorded demo is that there aren't many mistakes, and it goes faster than doing it live at home or in the office. Once the manager cluster is deployed — and the deployment goes fast in the video — we'll be able to start configuring and logging in. You see that at the moment we are in vCenter, because we deployed it there; the manager virtual machine can also be deployed on a KVM infrastructure.

The next step is to connect to the HTML5 interface of the NSX manager. We've got the interface, and the first thing we're going to do is connect to a compute manager, so we're going to enter the information of the vCenter. This allows us to simplify the deployment of the different virtual machines and configurations, but you could decide not to do it and go straight to the hosts themselves. We import the certificate thumbprint, and now we've got our vCenter connected to our manager; you can manage multiple vCenters. At this point we're going to deploy the controller — in the newer version the manager actually has the controller built into itself, so this is something that gets skipped if you do it now with the latest version. We always have information on the status of the manager, the disk space and so on, so we know all that information, which makes it simpler to troubleshoot. So, NSX controller 1: in this demo we only deploy one, but in production you would need three, to make sure that you've got enough capacity and fault tolerance, and you would put anti-affinity rules on those virtual machines.

Right, so now we're going to create a transport zone. The transport zone is the way the different hosts will communicate; we'll create an overlay transport zone, and all the hosts that are part of this overlay transport zone will be able to communicate. In the fabric we're going to go to transport zones, we're going to create one, we'll call it overlay transport zone, TZ, and we will deploy NSX on the hosts and connect them to this transport zone as transport nodes. As you see, we actually use vCenter to decide which hosts we will deploy NSX on, so it's going to configure NSX and deploy it. We have pre-provisioned an IP pool for the tunnel endpoints — that was done in between — and we'll have this connectivity. So here it's deploying NSX-T, and it will create the tunnel endpoints so that the hosts can communicate with each other. Now, for our virtual machines to communicate, we're going to have to create a logical switch. Next step, we go to switching and we'll create a new logical switch, my-logical-switch. The configuration of the logical switch is done in NSX-T; it's not done in vCenter.
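The TEP IP pool and the overlay transport zone from the demo can also be pre-provisioned through the API. A rough sketch below, with made-up names and addressing; the endpoints shown are management-plane calls and may differ in newer releases.

```python
# Sketch: pre-provision a TEP IP pool and an overlay transport zone,
# the two prerequisites used in the demo before preparing the hosts.
# All names, ranges and the host-switch name are illustrative only.
import requests

NSX = "https://nsx-mgr.lab.local/api/v1"
AUTH = ("admin", "VMware1!VMware1!")

def post(path, body):
    r = requests.post(f"{NSX}{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

# IP pool the hosts will draw their tunnel-endpoint (TEP) addresses from.
post("/pools/ip-pools", {
    "display_name": "tep-pool",
    "subnets": [{
        "cidr": "172.16.100.0/24",
        "gateway_ip": "172.16.100.1",
        "allocation_ranges": [{"start": "172.16.100.10",
                               "end": "172.16.100.200"}],
    }],
})

# Overlay transport zone: hosts attached to it can build Geneve tunnels
# to each other (remember: IP connectivity + MTU >= 1600 underneath).
post("/transport-zones", {
    "display_name": "overlay-tz",
    "host_switch_name": "nvds-overlay",
    "transport_type": "OVERLAY",
})
```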
I'll come back to some of the configuration in a moment. We see that this machine here is using an overlay IP range. In vCenter, when we look at our virtual machines A and B, we'll be able to map them now to the new logical switch we created. Take virtual machine A: we're going to click browse, we see that we've got our new logical switch in here, and we create the connection. VM A is now connected to this logical switch, and we do the same with VM B. All right, so when traffic flows from the first virtual machine to the second one, we'll have the traffic come out of the virtual machine, we'll look up where we have to send this packet — the packet for the VM on the other side should be sent to the TEP it sits behind — we will encapsulate the traffic, which is the reason why we need the bigger MTU, it will transit, and it will be decapsulated on the other side. It's as simple as that. If we do broadcast or multicast and we've got different subnets in our racks, we'll be able to actually optimize the traffic flows by using one proxy in each rack to distribute the traffic, saving us network bandwidth: the traffic is sent to the proxy in each of these different subnets or racks before being distributed.

Let's talk now about network services. We've got those two virtual machines — now they are called VM 1 and VM 2 — and they're on different logical switches, LS 10 and LS 20, and we want to create a router to connect these two machines together. We create a Tier-0 router, call it my-router. At this point we're going to create downlink connections from the router to the different logical switches: I select downlink, I select the network, I can enter here the IP address of the router port connected to LS 10, and the equivalent for the second one, so these are downlink connections, with the .10.1 and .20.1 gateway addresses, and we'll be able to test the connectivity between those two virtual machines. On one side we'll have a ping, and on the other side we'll actually have a traceroute that will give us the information and also show you the ports of the router. It's pretty easy, actually.

How is this connection done? Well, this connection is done with a function called the logical router. This is not a virtual machine; this is an instance that is running inside the kernel of the hypervisor. If you have a single physical host, it will be inside that single host; if you have multiple hosts and you've got a network on the other one, you'll have the same router on the other side. This is the flow of the information: if we have logical switch 10 and logical switch 20 on different hosts, we'll have the same router, virtually, in the kernel. Routing will always be done at the closest point to the source, so routing is always done locally inside the host, and the traffic flow will be asymmetrical; it's important to understand this. Traffic will come out from VM 1, will be routed, put into the transport tunnel for the logical switch, and delivered on the other side. For the traffic flow coming back, the traffic doesn't go back along the same path; it is routed the other way round, on the other host. This is asymmetrical routing.
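A rough equivalent of the router configuration just described, using the management-plane API: a Tier-0 logical router plus one downlink port per logical switch, each carrying the gateway IP for its segment. The UUIDs, the addresses and the exact payload fields are assumptions for illustration, not values from the demo.

```python
# Sketch: Tier-0 logical router with two downlink ports, one per
# logical switch (LS 10 and LS 20). UUIDs and IPs are placeholders.
import requests

NSX = "https://nsx-mgr.lab.local/api/v1"
AUTH = ("admin", "VMware1!VMware1!")

def post(path, body):
    r = requests.post(f"{NSX}{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

router = post("/logical-routers", {
    "display_name": "my-router",
    "router_type": "TIER0",
    "edge_cluster_id": "<edge-cluster-uuid>",
})

# One downlink per logical switch; the port IP becomes the default
# gateway of the VMs on that segment.
for ls_port, gateway in [("<ls10-port-uuid>", "172.16.10.1"),
                         ("<ls20-port-uuid>", "172.16.20.1")]:
    post("/logical-router-ports", {
        "resource_type": "LogicalRouterDownLinkPort",
        "logical_router_id": router["id"],
        "linked_logical_switch_port_id": {"target_type": "LogicalPort",
                                          "target_id": ls_port},
        "subnets": [{"ip_addresses": [gateway], "prefix_length": 24}],
    })
```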
Where it gets really interesting now is if I go and do a traceroute, or rather a Traceflow, of this traffic: it will document for me the whole path from VM 1 to VM 2. We see clearly that the traffic comes out of VM 1, goes through a distributed firewall, is now on logical switch 10, hits the router, is routed to logical switch 20 — still on ESX 1 — and now it enters the tunnel, comes back out on the other side, and is delivered. It really gives us visibility of how the traffic flows, and at every one of these points you can look at the documentation and pull up information giving you the tables, the configuration, the speed of the ports and so on. It really gives you visibility of how your traffic is flowing.

The thing with asymmetrical routing is that there are some functions we cannot do if we only see one direction of the traffic: anything that requires a stateful service, like north-south firewalling, load balancing, NAT, DHCP. These functions cannot be distributed across the hosts; we place those functions instead on the edge nodes. Those edge nodes I mentioned early on can be either VMs that have the connectivity to the physical network, or physical servers, so the functions will be distributed across the environment and onto these edge nodes. To improve performance we've used the Intel DPDK libraries; this really allows us to push a lot of bandwidth, so we can push over 50 gigabits on bare-metal servers. These can also be used for fast convergence in case of failure, and we've got special drivers that consume specific Intel network cards and some others, and we make sure that we've got enough CPU and memory resources attributed to those services. This allows us to really push the traffic. For normal traffic we can use virtual machine edge nodes; we have less throughput and it takes a little bit longer to converge in case of failure. So it really depends on what you want to do; we've got this flexibility.

If we take our example from before, the logical view where we've got our infrastructure, at one point I need to connect to the physical network, and this physical uplink is not going to be on the hosts; it's going to be connected to the edge node. If I draw a representation, suddenly I'm going to have a huge edge router, but this is actually distributed: it is spread across different hosts and we have an instance running on these edge nodes, and here we have our physical uplink connection. East-west traffic is always distributed, but if we want to go north-south we're going to have to connect our routers, through the edge nodes, to the physical network, so we set up a dedicated internal logical connection; we take care of this plumbing automatically for the traffic.

Let's see again how we do this. On our T0 router we're going to create a new interface: we're going to add an uplink, we're going to consume a logical switch, a VLAN-backed one, we select which edge node we're going to use, and we're going to use a VLAN for the traffic. Again we enter an IP address for the outside world, and up, we've got the connectivity to the physical world. Now we can use dynamic routing, you can use static routes, you can decide what you want to advertise, what you don't want to advertise, and so on. Let's look back now at our Traceflow, from virtual machine 1, for example, to a physical server on a network, so we're going to use an IP address here. This will give us again all the information on how the traffic flows — going faster than the video, sorry — and again here we've got the router: we hit the router, we're still inside the router until we get to the edge, through the tunnel, and out to the physical network. We just talked about logical routing, but we have added a lot of other services. As a gateway to the physical infrastructure we use BGP, and we can use equal-cost multipathing, so you can have multiple redundant paths for the traffic north-south.
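To round out the north-south piece, here is a hedged sketch of the same steps on the Policy API side: an external (uplink) interface on the Tier-0, backed by a VLAN segment on one edge node, plus a BGP neighbor towards the physical router. Every path, address, VLAN and AS number here is a placeholder, and the payload shapes may differ by NSX-T version.

```python
# Sketch: Tier-0 uplink interface on an edge node plus a BGP peer,
# via the NSX-T Policy API. Paths, IPs and ASNs are examples only.
import requests

NSX = "https://nsx-mgr.lab.local/policy/api/v1"
AUTH = ("admin", "VMware1!VMware1!")

def put(path, body):
    r = requests.put(f"{NSX}{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

T0 = "/infra/tier-0s/my-router"

# External interface: lives on one edge node, uses a VLAN-backed segment.
put(f"{T0}/locale-services/default/interfaces/uplink-1", {
    "type": "EXTERNAL",
    "segment_path": "/infra/segments/uplink-vlan-100",
    "edge_path": ("/infra/sites/default/enforcement-points/default/"
                  "edge-clusters/<cluster-id>/edge-nodes/<node-id>"),
    "subnets": [{"ip_addresses": ["192.0.2.2"], "prefix_len": 24}],
})

# BGP neighbor: the physical top-of-rack router on the other side.
put(f"{T0}/locale-services/default/bgp/neighbors/tor-1", {
    "neighbor_address": "192.0.2.1",
    "remote_as_num": "65001",
})
```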
We've got functions like load balancing, VPN, NAT and firewalling — the firewalling can be distributed across all the hosts, or we can also have firewalling north-south. We've added IPv6 support; IPv6 being a very big beast, it will come in different phases: we've started with IPv6 addressing and routing, and we'll add more services as time goes by. We've now got guest introspection, so if you've got a VDI environment you can use different tools from antivirus providers inside the guests. We've got east-west service insertion, so we can use different firewalls from Check Point, Palo Alto, Fortinet, and north-south service insertion if you wanted, maybe, to add an F5 in the path. We have a DHCP server for the different networks, and we've now also added a Layer 7 App-ID-based firewall for desktops. So, lots of different services coming into NSX.

Now we've seen the different use cases — networking, security, automation, visibility — and I have to remind you that we are multi-hypervisor, workloads can be containers, virtual machines or bare metal, and there is the extension from on-prem to hybrid cloud and multi-cloud. We have a one-day event in April in Lausanne where we do labs, so I'll present and go through the different sections and you get your hands on it; we do the lab remotely, it's all nested. If you want to come to this event on the 12th, see us at the booth downstairs. If you want to learn more about NSX, I highly recommend the hands-on labs; you can do them at any time — tonight, tomorrow, Saturday, or on Monday if you're back at work — just head to the VMware Hands-on Labs. So there you are; do we have any questions?

[Audience question] It's not encrypted, it's encapsulated. We have TEPs; we can have a redirection of some of the traffic, yes. I don't know all the functions of the traffic redirection, but we have functions for that.

[Audience question] We support Windows today on Azure; we have an agent that deploys inside Azure Windows machines. Yes, for bare-metal servers, to do micro-segmentation.

[Audience question] The controllers are virtual machines that can run either on ESX or on KVM.

[Audience question] No, you don't have to: if NSX is not on that ESX host, you don't have to license that host; it's really only the hosts that communicate. The bare-metal physical server that is used for traffic as an edge node doesn't have to be licensed; it's only the hosts that are running NSX.

[Audience question] Yes — at the start of the presentation, among the different products, there is a tool which I didn't show today: Network Insight. Network Insight is a tool that most of our customers that invest in NSX take as well, but we've also got customers that have invested in, for example, Cisco ACI. It's a tool that will take IPFIX flows from the sources and give you a diagram of the flows, with information from the overlay and the underlay, because we connect to Cisco, Arista, Check Point, Palo Alto, and it will give you a diagram of which flow goes to which destination, or which flow comes from which source. Based on what it can show you, it will also give you information on how you can secure things with micro-segmentation; based on that, you can export the rules and then apply them, one app at a time, to NSX-T and start the micro-segmentation of your environment step by step. So that's the tool that will give you all those graphics, and it's really an operational day-2 tool that goes with NSX. More questions?

[Audience question] Yes, any VMware host that has a virtual distributed switch or the N-VDS can output this information using the IPFIX format, so if you can interpret that, it's just like a NetFlow analysis; you can do that.
Most customers actually use Network Insight; even some customers that don't have NSX use Network Insight to get this information. You can also output the information to a syslog, but that's always a bit difficult. That was one of the acquisitions we did about two years ago, Arkin. The tool is pretty useful for getting a quick view; we also have security officers that like it, because it will, for example, show you new traffic that's never been seen on the network in the last 24 hours, or that new VM that has connected.

[Audience question] You also had a question — so we've got a process for that; you can do it using the user interface as well. It's a two-step process, two steps one after the other: if you want to move from a VDS to the N-VDS, the idea is also to migrate the vSphere kernel ports, using VLAN-backed networks and the migration tool. So even if you have a host with only two interfaces, you can migrate to the N-VDS.

[Audience question] There's no licensing for the cloud gateways; we have licensing, I think, on the number of instances you run inside Azure or AWS, a certain ratio of 15 to 1 I think, or something like that; I can look it up afterwards.

[Audience question] Not yet, because we don't have that agent out for Windows at the moment; it's more for inside the infrastructure, inside the virtual machine. You deploy that agent inside a virtual desktop, and as soon as the user logs in he will be authenticated with Active Directory; based on his group, you'll apply micro-segmentation rules to that virtual machine.

[Audience question] I don't know — above my pay grade, unfortunately. Lunch time? Yes, it seems so. Thank you. Thank you for the presentation. [Applause]
Info
Channel: Scrtinsomnihack
Views: 33,578
Rating: 4.9581881 out of 5
Id: 08LeF8ceMzk
Length: 52min 11sec (3131 seconds)
Published: Wed Apr 10 2019