Using NIC Teaming and a virtual switch for Windows Server 2012 host networking

Captions
Hi, my name is John Savill. Today I'm going to talk about the new networking configuration options we have in a Windows Server 2012 Hyper-V cluster. Traditionally, if we think of Windows Server 2008 R2, our configuration has really been based around multiple different types of traffic, and in a one-gigabit-per-second datacenter network we really had to dedicate a separate network adapter to each type of traffic. The reason we do this is to make sure there's sufficient bandwidth for each of those traffic types. So imagine I have a Hyper-V host: I'm going to have a number of different network adapters. For example, I'll have one dedicated adapter for the management traffic, one for the virtual machine traffic, one for the cluster (which would also carry the Cluster Shared Volume, or CSV, traffic), and another one for live migration traffic. That's a minimum, so straight away that's four one-gigabit-per-second links. I may have additional network adapters for the VM traffic teamed to actually get me some high availability and aggregated bandwidth, and I may have iSCSI sitting over here as well. So the guidance was: you want to make sure each type of traffic always has enough bandwidth, and you do this by giving each of them a separate network adapter. Now if you have 10 gigabit per second, you would run all of these over one network adapter, probably two teamed together, and then use Quality of Service. Well, that guidance is really becoming the standard for Windows Server 2012 as well, because Windows Server 2012 now has native NIC teaming.

So we want to move away from that kind of thinking. The analogy would be: today, if we have a highway, we have a separate lane for each of the cars. There's a lane here for the cars that are our management traffic, there's a lane for the cars that are the VMs, a lane for the cars that are the CSV and cluster traffic, and a lane for the cars that are live migration. And typically a lot of these lanes are empty; there's normally lots of traffic in the virtual
machine lane, which is really busy, with all those cars stacked up, very angry, while all these other lanes are empty most of the time. Very rarely am I doing live migrations, and when I do I want that one gig of bandwidth or more; there's normally not much cluster or CSV traffic, and very little management traffic. In this highway analogy there are dedicated lanes for police, for fire, for ambulance, and then the normal traffic, and it's the same with what we do here: those dedicated networks are generally empty, and only the VM network is typically doing things.

So the idea is we move away from this. Instead of having these dedicated network adapters, what we're actually going to do in Windows Server 2012 is group them all together into a NIC team. What we've now done is get rid of all those separate lanes; in the car analogy we just have one great big lane, and our virtual machines can be anywhere across it. Great, one NIC team. But the challenge comes: what if I want to be very specific about certain configurations? What if the situation arises where these virtual machines are all over the place, and now I want to do a live migration but the VMs are soaking up all of the bandwidth?

What we can actually do in Windows Server 2012 is create a Hyper-V switch linked to this aggregated team. So this team has these four adapters in it, and from this team I can go and create virtual NICs that are accessible to the Hyper-V host. They all hang off of this NIC team, but I can create separate virtual network adapters at the Hyper-V host level, and with those virtual network adapters I can carve one out for live migration, one for the management, one for the cluster, and one for the VMs. But now all of them have access to all of the aggregated bandwidth; all of them could use four gigabits, five gigabits, however many network adapters I add into this. And they're now all fault tolerant: one of these
network adapters could go down, even two of them could go down, and they're still going to function. What about the scenario of all the virtual machines sucking up all the bandwidth? I now use Quality of Service. I've carved out these separate virtual network adapters on the Hyper-V host, connected to the switch, which connects to our team, and I now use Quality of Service to carve out a minimum bandwidth guarantee. I don't want to use maximum Quality of Service. Maximum Quality of Service says "you can only use up to 1 gigabit per second, you can only use up to 2 gigabits"; you're back to separate lanes, and in times when there's no contention I'm just wasting all the bandwidth. What's the point? Minimum bandwidth Quality of Service says: only in times of contention do you make sure each type of traffic gets at least this weight of the bandwidth. So I could say, for example, live migration always gets a minimum weight of 25; management maybe always gets 15; the cluster traffic, well, that's important, so it always gets a minimum of, for example, 25. And I want to make these add up to a round 100; so far I've used 65, so I'll say the VMs get 35. These are not maximum caps: if there is no contention, the VMs can use a hundred percent of the aggregated bandwidth. Only if there's contention on the bandwidth do these minimum guarantees kick in. If everyone was trying to use the maximum they possibly could, live migration would be guaranteed 25%, cluster would be guaranteed 25%, management would be guaranteed 15%, and the VMs would be guaranteed 35%. So in a contention scenario live migration could use, well, 25 percent, about one NIC, but if there isn't contention it can use four gigs, and the VMs
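The weighting scheme described above can be sketched in PowerShell. This is only a sketch: the switch name "VSwitch" and the host virtual NIC names are illustrative, and it assumes the switch was created with weight-based minimum bandwidth mode and that the host vNICs already exist.

```powershell
# Minimum bandwidth weights: guarantees under contention, not caps.
# The weights here (25 + 15 + 25 + 35) sum to a round 100.
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 25
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 15
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 25

# VM traffic flows through the switch itself rather than a host vNIC,
# so its share can be expressed as the switch's default flow weight.
Set-VMSwitch -Name "VSwitch" -DefaultFlowMinimumBandwidthWeight 35
```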
can use four gigs too. This is what we want to do: we really want to move away from the concept of a dedicated physical network adapter for each of the types of traffic. Instead, we want to put these all in a NIC team, create virtual network adapters for the management OS (the host) connected to that team through a virtual switch, and then use minimum Quality of Service weights to make sure that in times of contention they can get the bandwidth they need. What I'm now going to do is jump to a demo environment and actually show this in action. What I would say is, I've done these all as one big team, and there are other thoughts about this: maybe I'll create two NIC teams with two adapters each; maybe I'll carve out one for management, live migration, and cluster (or storage) and create a separate one just for virtual machines; or one for virtual machines and management and a separate one for cluster communication and live migration. But certainly, putting them all in one is totally fine. We're moving away from the old model and we're going to maximize our bandwidth, so let's actually go and see how we do that.

So here I have a server. Now, I'm not going to move all of these networks to a virtual switch; I'm just going to move the cluster and the live migration networks, to give you an idea of the actual process. The first thing I want to do is remove the IP configuration from these adapters, so it's not going to conflict when I reuse those IP addresses on the virtual adapters I'm going to create. So I'm going in, I've made a note of these configurations already, and I'm just going to remove any static IP that I want to reuse. Once that's done, I'm just going to rename these adapters; I'll call them NIC Team 1 and NIC Team 2, so now I have the two adapters I'm going to put into my team. So rename them. Now I've done that, the next thing would be to create the team. I could do this graphically through Server Manager: go to my local server, go to NIC Teaming, and from here I can go and
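The preparation steps described here (clearing the static IPs and renaming the adapters) might look something like this in PowerShell. The original interface aliases "Ethernet 3" and "Ethernet 4" are illustrative assumptions, not names from the demo.

```powershell
# Note the existing static IP configuration first, then clear it
# so the addresses can be reused on the new host virtual NICs.
Get-NetIPAddress -InterfaceAlias "Ethernet 3" | Remove-NetIPAddress -Confirm:$false
Get-NetIPAddress -InterfaceAlias "Ethernet 4" | Remove-NetIPAddress -Confirm:$false

# Rename the physical adapters that will go into the team.
Rename-NetAdapter -Name "Ethernet 3" -NewName "NIC Team 1"
Rename-NetAdapter -Name "Ethernet 4" -NewName "NIC Team 2"
```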
say "create a new team". But what I'm actually going to do here is just run it through PowerShell, so I've got the commands ready. So now I'm going to create a host switch team: this creates a new NIC team using NIC Team 1 and NIC Team 2. I can look at the adapters I've got, so those four adapters, and I'm putting two of them into this new team. I'm using a teaming mode of Static, and this is because I've already configured the team on the switch; if your switch supports LACP you could use mode LACP, or the default is just switch independent. So it's created that, and I'll now actually see it within Server Manager, so it's now visible. Now the next step is to actually create a switch in Hyper-V, a new virtual machine switch. So go ahead and create that, and now I actually see it appear in Hyper-V Manager: there's that new switch I just created, and it's linked to that NIC team. Now the next step is to go ahead and create those virtual network adapters I talked about at the host level. Notice I'm actually saying the management OS here: I'm going to create a live migration one and a cluster one, and then I'm going to use that minimum bandwidth weight, and I'm basically just splitting it 50/50. Now again, you would repeat these commands for management and for virtual machines; I'm just doing it for these two, really for simplicity and time. So now if I get a list of all my adapters, I can see I have the team and I also have these new cluster and live migration ones. The important part is that they're now visible, so I would go back and reapply my IP configuration. If I'm doing static configuration for the cluster network, for example, I would go in and put all of that back, and then just repeat that process: go through now and configure them as if they were standard adapters. So I can complete all of that, and I'm done. I'm now ready to
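The demo's sequence (create the team, create the switch, create the host vNICs, split the weight 50/50) can be sketched with the cmdlets below. The team name "HostSwitchTeam" and switch name "VSwitch" are illustrative; the exact names used in the video aren't shown on screen here.

```powershell
# Create the NIC team from the two renamed adapters. TeamingMode Static
# matches a team already configured on the physical switch; use Lacp if
# the switch supports it, or omit for the default switch-independent mode.
New-NetLbfoTeam -Name "HostSwitchTeam" -TeamMembers "NIC Team 1","NIC Team 2" `
    -TeamingMode Static

# Create a Hyper-V switch bound to the team, with weight-based
# minimum bandwidth QoS enabled.
New-VMSwitch -Name "VSwitch" -NetAdapterName "HostSwitchTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create the management OS (host) virtual NICs on the switch and
# split the minimum bandwidth weight 50/50 between them.
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "VSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "VSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 50
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 50
```

You would repeat the `Add-VMNetworkAdapter`/`Set-VMNetworkAdapter` pair for management and VM traffic if moving all networks onto the team, adjusting the weights accordingly.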
just use these networks in the same way I used to use the physical network adapters, and now I'll just see those in Task Manager: go to Performance and I can see my Hyper-V Virtual Ethernet Adapter 3, so the cluster, and I can see my live migration one. And that's really how hard it was. Again, I only did two of the connections, but you could do the management, the virtual machines, everything; I showed you the PowerShell to actually go ahead and configure those networks. Now you've got more bandwidth and you've got fault tolerance across all of those different networks, and again, you're not limited to one gig each: they're going to use as much as they need, but they are guaranteed whichever values you set if they hit contention. I hope this was useful. Thank you very much.
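Besides Task Manager, the result can also be checked from PowerShell. This is a sketch; the exact property columns you inspect are a matter of preference.

```powershell
# List all adapters: the team interface plus the new host virtual NICs
# should appear alongside the remaining physical adapters.
Get-NetAdapter

# Show just the management OS virtual NICs and the switch they hang off.
Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName
```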
Info
Channel: John Savill's Technical Training
Views: 77,766
Keywords: Hyper-V, networking, QoS, PowerShell, live migration, Windows Server 2012
Id: 8mOuoIWzmdE
Length: 12min 34sec (754 seconds)
Published: Thu Jun 13 2013