VMware NSX SD-WAN Demo with Nick Furman

Captions
Well, welcome everyone. My name is Nicholas Furman, and I'm part of the technical marketing team here at VMware in the Networking and Security Business Unit. Obviously there's a lot of excitement around the announcement this week, in terms of what Pat talked about yesterday, so I'm here to put a little bit of "rubber meets the road" to it and show you a little of what he announced, but actually show you how it relates to what we have in the product portfolio. I do want to let you know that some of this is, I don't want to say things you have seen before, but we are trying to tie the story together. Many of you have heard for quite some time about the idea of having NSX everywhere, and really our ability now to execute on that vision and have NSX be this integral part of a lot of different products across VMware's portfolio. You're probably already familiar with NSX, so we'll go through what NSX is, but we'll also talk about how NSX is part of some of the other products: the VeloCloud acquisition, and some of the things we're doing with the cross-cloud, multi-cloud, and hybrid cloud strategies.

Without further ado, we'll get right into it. The way this demo is structured, we start with the end user and build the story all the way up to the public cloud. The end user attaches at a branch, the branch attaches to the data center, and from there those data centers can either have workloads existing inside them on-prem or can extend into some sort of public cloud environment. Many of you heard about the announcements around Microsoft Azure, and we'll talk about Azure as part of this demo today. So we'll get started. Like I mentioned, we want to start with what we're doing to enhance the end-user experience; this is where NSX SD-WAN by VeloCloud comes in. This first part is basically just showing a mobile worker
if you will, with their iPad or tablet: how we integrate the AirWatch component, the AirWatch managed service, having their device be managed by the AirWatch service; how that then gets amplified, or optimized I should say, with the VeloCloud software; and how, when that person experiences an issue, we're going to address or solve that issue with optimizations from NSX SD-WAN.

The first screen here is basically what the enterprise admin would see when setting up AirWatch. It's configuring the integration directly from the AirWatch console to the NSX Manager, so you see here there's an NSX Manager URL, a username, and a password. Once we enter that username, password, and URL, we take the security groups that are inside of NSX and bring them into AirWatch. All of the different security groups are configured here, and this is just showing direct integration between the AirWatch console and the NSX service that's running inside your data center.

[Audience] Is that an import, or is it a living, breathing connection?

It's a good question. In this instance it's more of a constant update: if additional security groups got added, we would be aware of them on the AirWatch side. It's not something you do once and then, after making a bunch of changes, have to import again. So that essentially is that. Like I said, what we're really showing here is that this end user's device, this iPad, happens to be managed by AirWatch. So when the user goes and wants to access their important sales data, what happens is they basically get the spinning wheel we all know and hate. Obviously the user's experience is not that great: the access to the resources in the data center has timed out. So what we want to show is how we can go and leverage something like NSX SD-WAN to
optimize those connections and take care of problematic issues that happen in environments between branches and data centers: things like packet loss, jitter, latency, all those very network-centric things that affect usability or end-user experience.

So here we have the NSX SD-WAN by VeloCloud UI. We're going into one of the edges here that has two separate circuits, two separate connections: there happens to be an AT&T connection as well as a Comcast connection, which you can see here. What I'm doing is clicking here in the UI, and you can see all different types of traffic characteristics: throughput, bandwidth, latency, jitter, packet loss, all the information about those specific links. What we want to do at the end of this is configure a policy that's going to prioritize the AirWatch traffic and fix the end user's experience, so that something like a link that has high latency, or is experiencing some sort of jitter, is not going to affect the end user's experience.

One of the other things you can do, before we jump in and configure that policy, is look at the actual live traffic happening on those links. Again, we have a Comcast link and an AT&T link; we can go into the UI here, click this live monitoring button, and it will actually start to track traffic statistics live as more and more traffic traverses the environment. The top link is the Comcast link; the bottom link there is the AT&T link. You can also click this little flag here to show very specific network data, TCP and UDP flow details. I know this is very much in the weeds, but I wanted to at least highlight the capabilities of the platform and what you can get from the UI.

[Audience] And this
data isn't synthetic data, this is actually watching the flow?

That's absolutely right, this is live data happening on those two circuits. So now what we want to do is go in and configure that policy I mentioned earlier to prioritize the AirWatch traffic. We go in here and click on Enterprise Branches, because we want to make this policy apply across all the branches: regardless of whether the end user connects at branch XYZ and then happens to go to a different location, we want to make sure the end-user experience is no different regardless of where they connect. So we go in here, click on Enterprise Branches, click on Business Policy, and we're going to create a new rule to prioritize that AirWatch traffic. I go in and create a rule titled AirWatch, then click Define; this is just defining what application we want to prioritize. Here, under our Business Collaboration category, we scroll down, and we actually have a VMware AirWatch profile. We select AirWatch, hit OK, and with those few simple steps we've created a rule that will prioritize AirWatch traffic over whichever link has the best availability and is not suffering any of those problems we mentioned earlier. So if there is a problem, say high latency and high jitter on the Comcast link, the software will automatically prioritize the traffic and use the alternative link, thus providing the best end-user experience. If we click back over to the end user's iPad and open the application again, now that we've assigned that policy within NSX SD-WAN, the user reloads the application, we get the spinning wheel for a second or two, but the information is now accessible and the connection doesn't time out.
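The path-steering behavior just demonstrated can be sketched in a few lines. This is an illustrative toy model only, not VeloCloud's actual DMPO path-selection logic; the link names, thresholds, and selection criteria here are assumptions made up for the example.

```python
# Toy model of policy-driven link steering: for a prioritized application,
# pick the healthiest link instead of the default one. Thresholds are
# invented for illustration, not taken from any real SD-WAN product.
from dataclasses import dataclass


@dataclass
class Link:
    name: str
    latency_ms: float
    jitter_ms: float
    loss_pct: float

    def healthy(self, max_latency=150.0, max_jitter=30.0, max_loss=1.0):
        # A link is "healthy" only if it is under all three thresholds.
        return (self.latency_ms <= max_latency
                and self.jitter_ms <= max_jitter
                and self.loss_pct <= max_loss)


def pick_link(links, app, priority_apps=("AirWatch",)):
    """For prioritized apps, steer to the best healthy link;
    other traffic just stays on the first (default) link."""
    if app in priority_apps:
        healthy = [l for l in links if l.healthy()] or links
        # Prefer the lowest-latency, lowest-jitter healthy link.
        return min(healthy, key=lambda l: (l.latency_ms, l.jitter_ms))
    return links[0]


comcast = Link("Comcast", latency_ms=220.0, jitter_ms=45.0, loss_pct=2.0)
att = Link("AT&T", latency_ms=35.0, jitter_ms=4.0, loss_pct=0.1)

print(pick_link([comcast, att], "AirWatch").name)  # AT&T
print(pick_link([comcast, att], "OtherApp").name)  # Comcast
```

The point of the sketch is the policy shape: the rule names the application, and the edge continuously re-evaluates live link metrics, so the same rule keeps working as link conditions change.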
It's not using, or trying to leverage, those links that are suffering from some sort of issue, which in our case may be high jitter or high latency. So that's the first part of this demo: taking the end user's experience, combining it with NSX SD-WAN, and providing that value.

The next demo I wanted to go through moves up the stack a little bit and talks about how we integrate the branches themselves with the actual data centers. How do we provide, or more specifically, how does NSX SD-WAN provide, enterprise segmentation for all the resources in the data center? The example here: let's hypothetically say we want to set up a kiosk at our Tokyo branch, but we want to segment that kiosk from accessing any of the resources inside the data center that we don't want it to reach. We have important HR information, we may have patient information, all this type of data, and we don't want that kiosk to potentially access any of it. So what we're going to do is look at the NSX SD-WAN by VeloCloud UI here. We click on the Tokyo branch and open it up, and the UI looks very similar to what we previously saw. In this case we don't have dual links; we just have a single Time Warner link between our Tokyo branch and the main data center, which, as you'll see in a second, happens to be in Washington, DC. We can scroll the window down, and again you get the various information we saw before: categories of traffic types, operating systems, sources, who's the biggest talker, etc. All information a network operator is going to care about when they want to diagnose problems or begin a troubleshooting scenario, although this is not really showing troubleshooting; this is more around segmentation. So the next thing we're going to
do is go into the Configure tab here and configure some parameters on our headquarters. As I mentioned, the idea is we want to deploy a kiosk at our Tokyo branch, but we want to provide this segmentation in our main headquarters data center, which happens to be in Washington, DC in this example. So we've accessed the part of the UI that's the Washington, DC headquarters. We're going to log in here, and because the end user of a kiosk would likely be considered a guest, not an employee of our organization, we actually have a segment configured in NSX SD-WAN by VeloCloud for that: the guest segment. So I've selected my Washington, DC headquarters, and now I'm logging in and looking at the information pertinent to that guest segment.

Once I've selected the guest segment, you can scroll down and see a bunch of different information here. What's really relevant to us, and the way we're providing that segmentation, is the BGP and routing information. What we're doing is creating a peering from our NSX Edge that runs in the data center. And all the NSX components, you're all familiar with NSX already, right? We have logical switches, we have logical routers, we have ESGs, Edge Services Gateways; all of that is now tenant- or segment-specific. You'll see in a second, when I click over to NSX, that there is a specific ESG for the guest segment, and that's how the integration between these two platforms really works. We're providing that enterprise segmentation, where the guest segment has its own separate logical switch, logical router, and ESG inside of NSX that is providing that segmentation through the NSX SDDC. So let's take a look at what it looks like in NSX. We click on NSX Edges, and here's a list of all of the ESGs
that are configured in here. As you can see, there's a separate edge specific to that guest segment; like you saw in the previous window, there was a PCI segment, and there's an edge for that PCI segment, and so on. We click on the guest segment, and what we're going to do here is just add an interface on the ESG, so that through the BGP peering between what VeloCloud knows about and what NSX knows about in terms of the resources in the data center, those routes get populated to VeloCloud; we'll see that in a second. So I'm just adding this interface: I add an IP address, configure the BGP remote AS you saw on the previous screen, and add that interface to the ESG. We publish the changes within NSX, and we've now created that route advertisement, so that the segments and networks we know about in NSX are going to be populated into NSX SD-WAN by VeloCloud. So we'll switch back to the VeloCloud UI.

[Audience] Does that configuration have to happen for every edge device, or is it something that can happen across the entire fabric?

Great question. Generally it does happen for every edge device, but most people wouldn't do it manually like I just did; there would be some sort of automation, some non-manual way of adding that interface to the ESG.

[Audience] So it's not really baked in, but it's something you could automate?

Yes, and that's true of pretty much any sort of routing peering we would do from that ESG to any upstream router; in this case the upstream router just happens to be the VeloCloud edge. So I'm just going to go back here to the VeloCloud UI, and what I'm showing you is a dump of the routing table for that particular edge, for the guest segment. I've selected the guest segment, I click Run, and what happens here is that the routes that I have here, the
172 routes that are on the screen there, the 172.16.32.x and 10.x networks, are now learned from the resources in the data center. So what we've basically done, and I should probably put a Visio up here in the future, is provide end-to-end segmentation. The kiosk in our Tokyo branch is now completely segmented from all the other resources inside the data center that NSX knows about; we've only allowed the kiosk in the Tokyo branch to reach the appropriate networks inside our Washington, DC headquarters, and it cannot access anything else. It's that end-to-end segmentation story.

[Audience] That's great if I want to keep it isolated, but what if I want to firewall it, filter it in the data center, and allow it to talk, but only to certain things?

You're setting me up for the next part, which is going into the data center, where we'll show how we can set up firewall rules. The segmentation we just went over is really object-level, network-type segmentation; what people traditionally think of as micro-segmentation and firewall rules is what we're going to cover right now. So: we've covered the end-user experience, we've covered the branch and the branch segmentation into the data center; now let's focus a little bit on the actual data center itself. The next part shows an application we've deployed into the data center. In this example it's an application we call Plane Spotter. Plane Spotter is made up of a web front end, whose containers are running in Pivotal Cloud Foundry; an app middle tier, also a container-based part of the application, which runs in our Pivotal Container Service; and the back-end database, which you see here, which is just a plain-Jane, old-school virtual
machine running a SQL service. So the idea here is that we've assembled an application leveraging both containers and virtual machines: the containers are deployed in Cloud Foundry and PKS, alongside traditional virtual machines running on ESX. As an example, we've logged into our vSphere Client, and just to give you a lay of the land, we're looking at what this SQL database looks like. I've selected that particular virtual machine; you can note the ESX node it's running on, the IP address, the logical switch, all of those important details an admin is going to care about.

What we want to do now is switch over to NSX and start to look at these various components and how it all works together. The first thing I'm going to do within NSX is search for that particular virtual machine, so I type "DFW MySQL", and based on that search query you can see it's found a virtual machine that NSX knows about. I select that virtual machine, and the first thing you'll notice is that the IP addresses we saw in the vSphere Client match, and the ESX node matches what we just saw there; we're just correlating the data between what vSphere knows about and what NSX knows about. The other thing we do in our integration, especially relevant to the question you just asked about firewalling, is leverage tags. We apply tags to most of the objects, whether containers and/or virtual machines, and then we can create firewall rules based off of those tags. So as opposed to creating a firewall rule that says this particular virtual machine name can only talk to this other virtual machine name, we can abstract it and make it a bit higher level, where we can say any virtual machine carrying tags X, Y, Z can communicate or not
communicate to any other destination, and we'll see that. That's why it's important to understand the scope of what we're actually using these tags on the screen for. In this case, this particular virtual machine, the SQL database, has a tag "database" and a tag "mysql", and we'll show you a little later where those come into play.

Now we're going to look at the actual groups themselves. We have these notions of what we call NSGroups inside of NSX, and the grouping constructs are basically how we say: any virtual machine, or container for that matter, that comes up in the infrastructure with a tag that says "mysql" or a tag that says "database", we're going to put in one of these groups, and then we can use those groups to create things like firewall rules. In our example here, we have a group named PlaneSpotter-DB. If I click on PlaneSpotter-DB and then on Membership Criteria, it tells you right there that the membership criteria for that particular group is tag equals mysql, scope equals app. This is more of a verification of how all these components and pieces work together, and if I click on Members, the SQL database we just went through is a member of that particular group, and you can see right there that's the IP address I showed you earlier on the previous screen.

So now let's actually deploy the application and see what happens. In the example I mentioned earlier, we're going to leverage Pivotal Container Service to deploy the app component of our application. We can run a few basic commands; this is looking at some low-level Kubernetes stuff, but what we're showing here is that by default, out of the box, Kubernetes has four default namespaces: default, kube-system, kube-public, and pks-infrastructure. What we do in terms of network automation with NSX
and our integration with Kubernetes is that anytime you spin up a new namespace, we automatically create a dedicated logical switch and logical router for that namespace. So what this is showing you is that, out of the box, we have four namespaces in our PKS environment, and if I switch back to NSX, you can see on the screen that there are four logical switches, one per namespace: one for default, one for kube-public, one for kube-system, and one for pks-infrastructure. But that's not really that interesting; let's actually deploy something and see what happens.

Hopefully most of you are familiar with what YAML is; YAML is "yet another markup language", and it's the way most people describe container-based application deployments today. This is a YAML specification file for the app tier of this Plane Spotter app: it describes what the app does, where we're pulling the images from, what each piece does. You can see there are actually labels in here: just like those tags I showed you earlier on the virtual machine, based on this particular spec file we're assigning tags to the containers. Once we deploy these containers and they spin up in PKS, they will automatically be assigned the tags app equals planespotter and tier equals app-tier, and I'll show you in a second what that means; I just wanted to show you this so it hopefully ties it all together.

So let's go ahead and deploy the application. We run a kubectl create command, and you can see that very quickly we've deployed our Plane Spotter app tier. There's a watch command we can run to watch the containers being created, and within two or three seconds those containers went from ContainerCreating to status Running. So we've built a container and deployed it in PKS; now what does that mean for NSX? We go back to NSX, and if I hit my refresh button here, as I
mentioned before, we build all of our logical networking constructs on a per-namespace basis. Now that we've created a new namespace called planespotter, you can see at the bottom of the screen that there's a new logical switch corresponding to that Plane Spotter app we just deployed, and if I went up there and clicked on Routing, you'll have to believe me, there would also be a corresponding Tier-1 router for that Plane Spotter app as well. So that's basically how our integration with Kubernetes works; in this case our Kubernetes distribution just happens to be Pivotal Container Service.

Let me jump back now to the terminal window, and we're going to deploy the actual front end of this application in Cloud Foundry. The first thing we'll do is look at what's in our current Cloud Foundry environment with cf orgs; the only org we've created is called system. The next thing we do is create an org called planespotter, so we're designating a new org within Cloud Foundry specifically for this application. Next we create a dev space in our planespotter org, so now a developer can go and deploy containers in that dev space, and that's exactly what we're going to do. We're going to run a few commands and do what's called a cf push: based on a specification file, we'll go ahead and build that front end, or I should say the container that will represent our application's front end. So here, in ten, fifteen, twenty seconds, it's going out, pulling images from the internet, downloading and building Flask, adding Python to the container; it's basically building a web front end based on that specification file in Cloud Foundry. Once it completes, just like you saw in the Kubernetes integration piece, our Cloud Foundry integration works very similarly: once we've created a new org inside
of Cloud Foundry, if I go back to my NSX screen and click refresh, lo and behold, I now have a new logical switch and a new logical router that relate to the component, the container, that we just built in Cloud Foundry. You see right there, "PCF FD1 planespotter 0": that is the new logical switch and the new logical router that the container we deployed automatically attached itself to, and all of the networking components inside of NSX were automatically created for you. Going back to your question earlier about whether people do a lot of these things manually: this is network automation at its finest. All of these components were automated based off of creating a container in Cloud Foundry or PKS; nobody had to go in and manually create these objects.

[Audience] Does this scale, in the number of switches and routers and so on?

Of course we have scale numbers for the platform itself, and obviously there are scale considerations on how many pods you can run on a host, etc., and all of that is documented in our public scale docs, but it scales pretty well. One of the typical conversations we get into when we talk about containers is this idea of the extreme cases: some of the web-scale companies, some of the search engine companies like Google, claim they run millions of containers on an almost weekly basis. But generally speaking, we have not run into any enterprises to date hitting major scale issues with the platform, and as we continue to release, the scale grows with every release. With Pat's announcement of the Virtual Cloud Network, this also corresponds with a new NSX release, and those scale numbers from the previous release have jumped up in the new one.

OK, so getting back to the last piece here for our on-prem part: so now we have a
SQL back end deployed in a virtual machine, we have our app tier deployed in a container in PKS, and we have our web front end deployed in Cloud Foundry. How do we tie all this together and actually make the application function? If we want to provide that segmentation the gentleman at the end of the table was asking about earlier, we can do that using firewall rules. So I clicked on Firewall inside of NSX, and this is the firewall rule table. There's a section here that says "planespotter micro-seg", and what we've done is gone in and created all of the rules specific to this particular application, defining how the application should function: I should be able to access the web front end from the outside world, the web front end should be able to access the app tier, and the app tier should be able to access the back-end database, but the front end shouldn't be able to talk to the database directly. We're defining all of those actions and rules in our firewall rule table, and we can define allow actions, deny actions, block actions; there's a lot of flexibility in how we define our application.

And this is just one application; for a customer with hundreds if not thousands of applications, you would likely see significant sections for each one of those applications in the firewall rule table. In our example we're just focusing on the Plane Spotter app. I've scrolled down, just to give you an idea, and expanded the Name field, so it shows how someone would actually architect or model these rules: allow the front end to talk to the Kubernetes ingress, allow app to talk to DB, allow app to talk to Redis, allow the application to talk to the internet outbound. You generally want to define all of those application flows, and if
there's something you'd want to disallow, you can obviously make that an explicit rule as well. So that's the firewall rules piece.

The last thing we wanted to show, at least for the on-prem data center part, is how we provide operational tools, or at least operational consistency, now that we potentially have a data center containing bare-metal resources, containers, and, of course, virtual machines; our tooling and our operational consistency don't change across all those different workload types. In my example here I'm leveraging what we call Traceflow, which gives you a nice, pretty picture of everything that happened in the path from source A to destination B. What I'm doing is configuring a Traceflow session and saying my source is the logical port that connects to that Plane Spotter PKS container. My destination in this case is going to be a virtual machine; we'll continue to use that same SQL virtual machine. So what we're showing is how we can run a Traceflow from a container source to a virtual machine destination, and like I mentioned before, whether you're doing bare metal to virtual machine, VM to VM, container to container, or container to virtual machine, the operational value of NSX is consistent. To create this we have to inject some sort of traffic, so I'm going to say I want to run TCP traffic with a source port of 10000 to the destination port, which in our case will be the SQL port, 3306. So we're saying, from the container to the virtual machine, let's inject synthetic TCP traffic with a source port of 10000 and a destination port of 3306, and let's see what
happens; let's verify that that container can actually talk to the back-end SQL database, which in our case happens to be a virtual machine. We click Trace, we sit back for a moment and let the trace run, and as I mentioned, the output is essentially this pretty diagram on the left that gives you a hop-by-hop view. It's saying our source ESX node is ESX1, and we've gone from the container to the PKS logical port, all the way up through the Tier-0 central router, and then it shows what it connects to on the egress side, all the objects along the way to the actual virtual machine. And if the picture doesn't do it for you, on the right-hand side of the screen you get a hop-by-hop account: for each one of these hops we've captured not only the actual action but also the object name, so every logical switch, logical router, Tier-1 router, Tier-0 router, and so on that the traffic goes through is on this Traceflow output.

This is a very simplistic example, but the reason it's so important is that one of the troubling problems many customers have raised with us is finding a tool with this level of information and this level of visualization in the container networking space. The example I just showed was container to virtual machine, but if you're trying to get this level of information out of a system with thousands of containers, where you don't really know where things connect, this is almost invaluable; it's one of the main value-adds when we talk about operational benefits with NSX in the container space.

[Audience] Quick question: is there a way to export that diagram?

Today, no, but we are driving that. I don't think it made this release, but in a subsequent release sometime later this calendar year. The ask from many customers was to
actually export that into Visio, and we are driving that through engineering now. So it's not available in the current UI, but stay tuned; that is something we will continue to enhance as we move forward. One more question on the configuration side: since I don't know a whole lot about NSX, how do you back up your NSX config in the event that something happens, instead of having to go through this manual process again? I didn't see any recovery option. Yeah, great question. There are a variety of different components that make up NSX. You basically have your UI front end, which is what we call the Manager; then you have your control plane, which is your controllers; and then you have your data plane, the logical switches that run in each ESXi node. Through the Manager you can export the entire configuration or a subset of the configuration, and the inverse is true: if disaster strikes and you still have these objects or these virtual machines in your infrastructure, you can re-import that configuration and restore it to its last known backup or export. So in terms of failure scenarios, when disaster strikes, we have most of that baked into the UI, and you can go ahead and recover from a failure if that catastrophic event were to occur. Any other questions before we go on? I have a question about packet flow. With the inclusion now of the SD-WAN product, and bringing that into the fabric: typically in fabrics you're going to have a border of some kind where traffic exits. So traditionally, in an SD-WAN product, traffic would just leave at the SD-WAN hub. Now we're bringing it in, and firewall policy and other rules are being applied by NSX in the data center. So is that policy being pushed to the hub device, or is the traffic actually coming in, participating in the data center traffic flow, and then exiting the border fabric from within the data
center? And just so I'm understanding, you're asking that specific to these sorts of visualizations? I'm just trying to get an idea of what happens as a packet comes in to your data center. These systems used to be separate and isolated; the idea is they weren't integrated together. So you'd hit that hub device, and I could send traffic that was guest Internet traffic straight to my firewall; it wasn't included in my data center fabric at all, you know what I'm saying, it just kind of went off, and my data center traffic stayed with my data center. Now our policy is coming from the NSX orchestration software that's sitting over the top of all of this, and the firewall policy seems to be applied inside of the data center fabric. So does that traffic come in and have to use the data center border to get out, or does the traffic go directly where it needs to go? That is true, and the other thing that I didn't show in this particular demo: there are some firewalling capabilities that existed in the VeloCloud pre-acquisition product, and as we move forward, it's not there today, but I would expect to see some merging of that. But today, like you said, what we've focused on, at least for this particular section, is primarily the traffic that's inside of the data center, as opposed to providing the perimeter firewalling like you're talking about. Yeah, so I guess the difference in the model that I'm seeing now is the fact that traffic that before might have never entered your data center fabric, it just came into your hub site and maybe went off straight to the Internet, now, because these products are integrated, is entering your data center fabric. Okay, so policy does get pushed and applied at places other than the data center; even though it looks like we're applying the firewall policy inside the data center component, at least some of
those components can be pushed out either to edge devices or hub devices. Yeah, sure. Okay, so that is the on-prem piece. The next piece that we want to extend out is basically showing how we can take the similar concept of what we just learned from Plane Spotter, and as opposed to running all of the components internal to our own on-prem DC, look at how those components could potentially run in something like AWS or, obviously, Azure, which is of interest because of the announcement. Yes, we will make sure it gets updated. All we're basically showing here is how we've gone in and we're looking at the clouds that we've defined. In our example we have an AWS cloud defined and we have an Azure cloud defined, and we're taking that same application that I just showed you how to deploy on-premises, except now the app tier can essentially run in AWS or in Azure. So we're in here, we clicked on AWS, we're looking at our instances, and we're going to look at all the VPCs that have the components that relate to our particular application. I've selected my hybrid VPC, and here in the drop-down you can see that I have Plane Spotter AWS web-2a and web-2b; those are the web front-end servers actually running in AWS in this instance. And if I go through and do the exact same thing for Azure, we have the same thing running on that side as well: we click on our cloud, we click on instances, and within our configured VNets you can see that I have Plane Spotter Azure web-2a and web-2b. So I basically, for lack of a better term, decommissioned the web front end from running in our Cloud Foundry instance on-prem, and I pushed that web front end out so it's now running in public cloud. So when we go
to access our Plane Spotter application, we will actually be hitting a web server running in public cloud as opposed to it running on-prem. And this is just a couple of screenshots, real quick, to show what that looks like within NSX. So I'm in NSX, I search for all the virtual machines that NSX is aware of that contain the name planespotter, and you can see there that web-2a is running in AWS. I click on it, and those tags that I mentioned to you guys earlier, that we base a lot of our firewall policy on, you can see that we also use tags when we have the machines running in public cloud; there are AWS tags there. And then if I look at one of the web servers running in Azure, we can go ahead and look at the tags that are associated with the web server running inside of Azure. So that's the public cloud piece. The last part of the Virtual Cloud Network message we want to highlight for you guys is the migration story: how do we take workloads that are running in our on-prem VMware cloud and potentially migrate those to, let's say, VMC on AWS, the VMware-managed cloud in AWS, as an example. So here we have an on-prem data center, it happens to be in London, and what we have here is a whole bunch of Ubuntu servers. We're going to select a subset of those Ubuntu servers and we're going to migrate them to VMC running in AWS. What I'm going to do here is select my Hybrid Cloud Extension tab in my vCenter web client, and there's a bunch of information here about how the extension works: the tunnel that we've set up, the tunnel is up, it's active; we basically built a tunnel from our on-prem London data center to VMC running in AWS, the US West region I believe it is. So you get a bunch of information: the tunnel is up, it's active, and you can see information about the actual networks that we're extending. So
the actual logical switch name that's on my on-prem side is here, and then the NSX-managed logical switch that's in VMC on AWS is that destination network right there, and we'll see that in a few short clicks. To kick off the migration, it's literally as easy as clicking the migration tab. We go in here and we select migrate virtual machines. There are a couple of parameters that you have to specify before you can do a migration; specifically, they're all destination parameters: what resource pool do I want to put these virtual machines in on the VMC side, and what storage do I want those workloads to use on the destination side. We're going to thin-provision our virtual machines, and we're going to do a bulk migration of virtual machines. Once we've selected those parameters, we can scroll down our list, and I'm going to select virtual machine 40, 41, 42, all the way up to 48, and I'm going to migrate virtual machines in front of you guys real quick. So we'll go through here, we'll select our eight virtual machines, and once I've selected all eight, it will do one last minor check just to make sure that we've configured all the parameters correctly and there's no issue. Once the validation is successful, we can basically just click finish and the migration starts. What happens here, and I've sped it up a little bit just for time's sake, is that in the course of maybe 50 minutes to an hour, I have migrated my eight Ubuntu virtual machines that were sitting on-prem in my VMware data center in London to my VMC instance running in AWS in the West region. So that's just scrolling down to verify that everything has migrated, and now we're going to go check on the VMC side. We'll go over here to our SDDC in VMC; you can see it's running in our U.S.
West region in Oregon. We click view details, and we're just going to verify that, number one, the NSX components that we're obviously interested in are running in VMC. So we click on network, and we see here, very quickly, in terms of the management components that are running, we have a vCenter component running and we have NSX Manager running. We scroll down, and that destination logical switch that you saw earlier, that L2E name, is here, managed by NSX: L2E underscore VMS with a bunch of random characters. That is the destination logical switch that those eight Ubuntu boxes are now sitting on, which is an NSX-managed switch inside of VMC on AWS. Then, just to complete the story, we're going to go back to our hosts and clusters screen and verify that our eight machines have migrated and are running. Here's our SDDC data center in VMC; we expand the cluster, we expand the resource pool that we configured in the previous tab, we scroll down, and you can see there that Ubuntu servers 40 through 48 have been successfully migrated and are up and running inside of VMC on AWS. And just to close the loop, I've selected Ubuntu server 40 and I'm verifying that it's actually on that correct logical switch that NSX has created inside of the VMC service on AWS. So that has completed the entire story: how we go from the end user's point of view, to the branch point of view, to the branch connecting into the data center; then what we do with NSX inside of the data center on-prem; then what NSX is doing for hybrid workloads that are both on-prem as well as in public cloud; and then, in terms of the migration conversation, how we can take those workloads and migrate them from an on-prem VMware vSphere-based data center to, in this instance, VMC running on AWS.
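Stepping back to the Traceflow portion of the demo: NSX-T exposes Traceflow through the Manager's REST API, and the synthetic packet is described in the request body. The sketch below is a rough illustration only; the exact field names should be checked against the NSX-T API documentation for your version, and the port UUID and IP addresses are placeholders, not values from the demo.

```python
import json

def build_traceflow_request(lport_id, src_ip, dst_ip, src_port, dst_port):
    """Describe a synthetic TCP packet to inject from a source logical
    port (the Plane Spotter PKS container's port in the demo) toward the
    MySQL virtual machine. Field names approximate the NSX-T Manager API."""
    return {
        "lport_id": lport_id,  # UUID of the source logical port
        "packet": {
            "resource_type": "FieldsPacketData",
            "transport_type": "UNICAST",
            "ip_header": {"src_ip": src_ip, "dst_ip": dst_ip},
            "transport_header": {
                "tcp_header": {"src_port": src_port, "dst_port": dst_port},
            },
        },
    }

# Source port 10000 to destination port 3306 (MySQL), as in the demo;
# the UUID and IPs below are invented placeholders.
req = build_traceflow_request(
    "00000000-1111-2222-3333-444444444444",
    "172.16.10.5", "172.16.20.8",
    10000, 3306,
)
print(json.dumps(req, indent=2))
```

A real session would POST this body to the Manager and then poll for the hop-by-hop observations that the UI renders as the diagram shown in the demo.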
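The tag-driven policy from the public cloud section comes down to cloud-agnostic group membership: a security group matches workloads by tag, whether the VM runs on-prem, in AWS, or in Azure. A toy sketch of that matching logic, with VM names and tags invented for the example:

```python
# Toy illustration of tag-based grouping. NSX security groups can match
# workloads by tag regardless of which cloud the VM runs in; the inventory
# entries below are invented for the example.
inventory = [
    {"name": "planespotter-aws-web2a",   "cloud": "aws",    "tags": {"tier": "web"}},
    {"name": "planespotter-aws-web2b",   "cloud": "aws",    "tags": {"tier": "web"}},
    {"name": "planespotter-azure-web2a", "cloud": "azure",  "tags": {"tier": "web"}},
    {"name": "planespotter-mysql",       "cloud": "onprem", "tags": {"tier": "db"}},
]

def members(vms, scope, value):
    """Return the names of VMs whose tag for `scope` equals `value`,
    ignoring which cloud they run in."""
    return [vm["name"] for vm in vms if vm["tags"].get(scope) == value]

web_group = members(inventory, "tier", "web")
print(web_group)  # all three web front ends, spanning AWS and Azure
```

The point of the demo is exactly this: one firewall rule written against the `tier=web` group keeps applying as the web front end moves from on-prem Cloud Foundry out to AWS and Azure.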
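Finally, the HCX bulk migration at the end reduces to a per-VM set of destination parameters. Purely as an illustration (this is not the real HCX API schema; the resource pool and datastore names are placeholders, and the exact VM numbering in the demo is approximate), those choices could be captured like this:

```python
def build_migration_specs(vm_numbers, resource_pool, datastore):
    """Capture the destination parameters chosen in the demo for each VM:
    resource pool and datastore on the VMC side, thin provisioning, and
    bulk migration (as opposed to a live vMotion)."""
    return [
        {
            "vm_name": f"ubuntu-server-{n}",
            "migration_type": "bulk",
            "destination": {
                "resource_pool": resource_pool,
                "datastore": datastore,
                "disk_provisioning": "thin",
            },
        }
        for n in vm_numbers
    ]

# The eight Ubuntu servers from the demo, with placeholder destination names.
specs = build_migration_specs(range(41, 49), "Compute-ResourcePool", "WorkloadDatastore")
print(len(specs), specs[0]["vm_name"])  # → 8 ubuntu-server-41
```

Bulk migration replicates the disks in the background and cuts the VMs over at the end, which is why the demo's eight servers complete in under an hour without needing a stretched vMotion for each one.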
Info
Channel: Tech Field Day
Views: 9,183
Rating: 4.8518519 out of 5
Keywords: Tech Field Day, TFD, Networking Field Day, NFD, Networking Field Day Exclusive, NFDx, VMware, VMware NSX, Nick Furman
Id: S2i13MlDkYk
Length: 44min 14sec (2654 seconds)
Published: Fri May 04 2018