Implementing Cisco UCS Part 1

Captions
We know there's a virtual network interface card inside of our UCS design, and one of the things we're going to show you how to do is virtualize things like MAC addresses. This is something we're going to emphasize beginning in Chapter 6, where we take pools of MAC addresses; the UCS system will help us create pools of available identifiers that it can use to virtualize these devices and their placement. Understand that in the UCS system we want to virtualize things like MAC addresses so that if we have an underlying hardware failure, we can take everything, the apps and the operating system, vMotion it, and migrate it from one blade to another. Obviously we want the MAC address and everything else to stay consistent so that there is no confusion in our security infrastructure or with the application itself. It's going to be critical that we virtualize everything for this environment.

And Pete says, yeah, that's Live Migration in Hyper-V. Pete, thanks so much. One of the inadequacies that I possess, and I admit it readily, is that I am pure VMware in my software experience when it comes to virtualization. We are going to be releasing courses here at StormWind.com regarding Microsoft's solution, and we're even talking about releasing courses on Citrix's solution. I can't wait for those; I'll be one of the first people to sign up, because I am biased through my experience with VMware. OK, so the vNIC (virtual NIC) and virtual MAC address portability: something we definitely need to have, and we're going to show you how to set it up. By the way, for you storage area networking people asking, well, what about World Wide Node Names and World Wide Port Names, I need those virtualized as well: yes, don't worry, all of that can be virtualized in the UCS system too, and we'll show you how.

All right, earlier we said that the input/output module in the current generation (let's call what this course covers the current generation; we know there are some newer-generation technologies) supports those three link topologies in UCS Manager: the one-link, the two-link, or the four-link topology. And we know the mezzanine card inside the blade server has its one 10-Gig connection. So, like we talked about, the UCS system is going to dynamically, on its own, pin these virtual MAC addresses to particular uplinks. We know this takes place by default. This is an exam question, not to give too much away here, but as you might guess this is a key point on the exam: with you doing nothing at all, no administrator intervention, the UCS system runs in that end-host virtualization mode we talked about. Virtualized MAC addresses from the server blades are pinned to particular uplinks, and that's how the multiple uplinks get utilized, with no Spanning Tree running. And obviously, if there's a failure, we need to talk about what happens to the particular pinnings of those MAC addresses to uplinks.
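As a rough illustration of the pool idea, here is a minimal Python sketch (not UCS code; the class, profile name, and pool size are invented, though 00:25:B5 is the prefix commonly suggested for UCS MAC pools). The point is that the identity lives with the logical server, so it survives a move between blades:

```python
# Illustration only: a pool of MAC identities that follows the logical
# server (service profile), not the physical blade it happens to run on.

class MacPool:
    def __init__(self, prefix="00:25:B5", size=256):
        self.free = [f"{prefix}:00:00:{i:02X}" for i in range(size)]
        self.assigned = {}  # service-profile name -> MAC

    def assign(self, profile):
        # The same profile always gets the same MAC, no matter the blade.
        if profile not in self.assigned:
            self.assigned[profile] = self.free.pop(0)
        return self.assigned[profile]

pool = MacPool()
mac = pool.assign("web-server-01")
# Move web-server-01 from blade 1 to blade 5: the MAC is unchanged.
assert pool.assign("web-server-01") == mac
print(mac)  # 00:25:B5:00:00:00
```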
Now, first of all, let me prove to you what we just talked about. We need to prove that there is this automatic pinning that takes place, and we're going to prove it in a pretty cool way: we're going to go to that read-only shell for the Nexus OS. Notice, here we go: connect nxos. Once we get into that Nexus OS shell, I'm going to say show run interface e1/1/1, for our first chassis, our first slot, our first server, and it shows us the running configuration for that particular interface. It says pinning server, and it says the fabric interface is Eth1/1. And if we go look at that particular configuration with show run interface e1/1, we see the pinning server command, which causes this pinning behavior by default.

So we know we typically have multiple input/output module links to the fabric interconnect. If we lose one of those links, we would have a loss of connectivity for the servers pinned to that link. But the great news is that we will typically configure NIC teaming in the operating system that's running virtually, or we'll do hardware-based failover as part of our service profile (which we'll teach you how to do), and this means that when there's a loss of a link, we fail over to Fabric Interconnect B.

All right, now let's take a look at the typical high-availability configuration here with our four-link topology. It's funny, because the IOM doesn't support a three-link topology, but notice the three active links will continue to forward traffic until there's a re-acknowledgement of the chassis. Let's say there were two servers pinned to the failed link: remember, they will be down, they lose connectivity, unless we've done NIC teaming inside VMware, for instance, or we've configured the service profile for fabric failover. By the way, if we re-acknowledge the chassis, the input/output module will form a two-link topology and re-pin the blade servers as part of that two-link topology, and we'll now have those servers that failed over to the B fabric reconnected to the A fabric.

This is pretty complex, and it may be something you need to really examine and study for a bit, but the bottom line, the great news, is that through the high-availability configuration, done either at the operating-system level with NIC teaming or through fabric interconnect redundancy with the multiple fabric interconnects in the UCS, when there is a failure of one of these key server links there's no loss of connectivity, thanks to these high-availability options we can configure. Obviously there could be if we didn't have the high availability configured correctly, and this is something that's really important for UCS administrators to think about. The general rule is: if one of these input/output module links fails and the servers aren't configured for high availability, we want to re-acknowledge the chassis as quickly as possible. So we re-acknowledge if our servers aren't configured for the appropriate failover. If our servers are configured for the appropriate failover, then we're not obsessed with re-acknowledging the chassis, because all we have to do in that case is replace the link. And notice, once we replace the failed link, we'll have automatic failback: we restore the link, the link is identified by the UCS system, and the servers do what's called failback to the appropriate links.
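To reason about that failure behavior, here is a hedged Python sketch of the slot-to-link pinning (my own model, assuming the commonly documented static mapping of blade slot N to IOM link ((N - 1) mod links) + 1; it is not Cisco source code):

```python
# Model of IOM server-port pinning across a 1-, 2-, or 4-link topology.

def pin_slots(num_links, slots=range(1, 9)):
    """Return {blade_slot: iom_link}; 3-link is not a supported topology."""
    if num_links not in (1, 2, 4):
        raise ValueError("IOM supports only 1-, 2-, or 4-link topologies")
    return {slot: ((slot - 1) % num_links) + 1 for slot in slots}

four_link = pin_slots(4)      # slots 1 and 5 ride link 1, 2 and 6 link 2...
failed_link = 3
dark = [s for s, link in four_link.items() if link == failed_link]
print(dark)                   # these slots lose connectivity unless NIC
                              # teaming or fabric failover is configured

# Re-acknowledging the chassis re-pins everyone as a two-link topology:
print(pin_slots(2))
```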
All right, now let's not confuse IOM pinning, which is what we were just talking about, with uplink pinning. Notice we're going to be very careful with our words here. We were just talking about server port, or IOM, pinning, and now we want to distinguish that from uplink pinning. Uplink pinning is where we go all the way through the infrastructure: we go from the server MACs all the way up and pin those to particular uplinks. How does this work? By default, these server MAC addresses are pinned to the uplinks in a round-robin assignment fashion. Exam time, as you might guess: how are the server MACs pinned to the uplink ports by default? It's a round-robin assignment. Remember, the server interface pinning varied based on the number of links in use, but pinning on the uplink ports is round-robin by default. Or, as you might guess, you as the administrator can go in and manually pin. That's right, you as the admin have control over this; you can go in and manually pin particular MAC addresses to particular uplinks.

Now, what happens in a failure with our uplinks? Oh, and John's got a great question: what would this look like if you're running those virtual port channels? It's the same thing, yep. Any time we mention these uplink ports and uplink pinning, keep in mind that could be a port channel; if it's a port channel, the MAC is pinned to the port channel. Excellent point. And remember, the virtual port channeling is transparent to the fabric interconnect; it just considers it a port channel. The virtual port channel is a trick within the Nexus, and the fabric interconnect stays completely oblivious to it.

So what happens when one of these uplinks fails, maybe even a port channel that has failed between the fabric interconnect and the Nexus equipment? Well, with what we call automatic uplink re-pinning, the MAC addresses are re-pinned to the remaining uplinks. Let me draw this for you: the failed uplink occurs, we have these three remaining uplinks, and the fabric interconnect starts flooding out Gratuitous Address Resolution Protocol (GARP) frames so that there is a quick relearning, a re-pinning, of these virtualized MAC addresses to the remaining uplinks. So automatic uplink re-pinning is the default behavior. By the way, if you lost all of the uplinks on a particular fabric interconnect, say we lose all of them on Fabric A, everything gets pinned over to Fabric B, again using the round-robin approach to load-balance across those uplinks. Pretty cool how this automatic re-pinning works, but remember, you as the administrator have the ability to override this automatic pinning behavior, and that's what we use pin groups for.
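Here is a small model of that default behavior, again just an illustration in Python (the MACs and port names are made up): server MACs are dealt out round-robin, and when an uplink fails its MACs are re-dealt across the survivors, with the fabric interconnect flooding gratuitous ARPs upstream so switches relearn the moves quickly.

```python
from itertools import cycle

# Illustration of round-robin uplink pinning and automatic re-pinning.

def round_robin_pin(macs, uplinks):
    rr = cycle(uplinks)
    return {mac: next(rr) for mac in macs}

macs = [f"00:25:B5:00:00:{i:02X}" for i in range(6)]
uplinks = ["Eth1/17", "Eth1/18", "Eth1/19", "Eth1/20"]
pins = round_robin_pin(macs, uplinks)

# Uplink Eth1/18 fails: re-pin only its MACs across the survivors.
survivors = [u for u in uplinks if u != "Eth1/18"]
orphans = [m for m, u in pins.items() if u == "Eth1/18"]
pins.update(round_robin_pin(orphans, survivors))
# (Here the FI would flood GARP frames so the upstream relearns fast.)
print(pins)
```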
So let's talk about manual uplink pinning and what happens in failure and recovery scenarios. We can create a pin group. When we create a pin group, we're saying, OK, I have some Red Hat operating systems, so I'm going to name this the Red Hat pin group, and I'm literally going to indicate which uplink ports are the target for this particular entity. So here in our UCS system we have this virtual MAC address, associated with a Red Hat virtualized box, and we pin it to either Ethernet 1/18 on Fabric A or Ethernet 1/18 on Fabric B. We talked about reasons why you might do this in our design course: there may be particular security implications, or you want to direct traffic flows because you're doing particular traffic engineering in the data center and you want to know at all times where the traffic from particular virtualized servers is going. Sure enough, this static pinning would be your answer. Remember, we want controls like this in the UCS system, because one of the major disadvantages of virtualizing this environment, and we already talked about it in this course, is the inability to control the virtualized traffic: everything is virtualized, and you can lose a sense of where a particular VM's traffic is going. If you have a need to explicitly traffic-engineer like this, static pin groups are the way to do it. Now, as you might guess, in our example, if the uplink fails, Ethernet 1/18 fails on Fabric A, then we would have an automatic failover to Fabric B, but it would have to stay on the Ethernet 1/18 port that we statically pinned.
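A pin group is easy to picture as a lookup table that overrides the round-robin default. This sketch is invented for illustration (the group name, MAC, and ports are mine), but it captures the failure behavior just described: the Red Hat vNIC stays on Ethernet 1/18 and only changes fabrics.

```python
# Invented sketch of a static pin group overriding round-robin pinning.
PIN_GROUPS = {"redhat-pin-group": {"A": "Eth1/18", "B": "Eth1/18"}}
VNIC_TO_GROUP = {"00:25:B5:00:00:0A": "redhat-pin-group"}

def resolve_uplink(mac, fabric_a_up=True):
    group = VNIC_TO_GROUP.get(mac)
    if group is None:
        return None          # no pin group: dynamic round-robin applies
    fabric = "A" if fabric_a_up else "B"   # automatic fabric failover
    return fabric, PIN_GROUPS[group][fabric]

print(resolve_uplink("00:25:B5:00:00:0A"))                     # ('A', 'Eth1/18')
print(resolve_uplink("00:25:B5:00:00:0A", fabric_a_up=False))  # ('B', 'Eth1/18')
```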
OK, great work. Well, the remainder of this Chapter 5 is simple, because it all builds on everything we just learned, except in the very similar world of storage area networking. So let's go ahead and wrap up this Chapter 5, which has been such a critical chapter, hasn't it? Here in Chapter 5 we've been talking about the hardware infrastructure and how we cable it, and certainly about some logical configuration elements like VLANs, and now we're going to talk about virtual SANs. This Chapter 5 has really been about building the UCS system and what our capabilities are. Then, in Chapter 6, we start on all the logical entities, like creating a service profile that allows us to quickly deploy a server for a particular job in the unified computing environment. I know this is a lot of material, and that's why I'm trying to go through it as slowly as I possibly can while still covering everything; we've got a lot of material to cover in the second week of classes as well. All right, so let's wrap up this Chapter 5, and like I said, this will be simple, because it's very much building on what we've already talked about, just in a storage area networking environment.

When we start talking about storage area networking, one of the things we need to consider is Fibre Channel. Now, Fibre Channel identifies two types of ports, doesn't it? Well, there are actually many types of ports, but there are two types we're interested in: Fibre Channel hosts utilize what are called N ports, N for node, and then there are F ports, F for fabric. So we've got our N ports and our F ports in a Fibre Channel infrastructure. Exam question time: what type of equipment does Cisco manufacture for the creation of storage area networks? It's their MDS equipment. So when we see the acronym MDS, we know this is Cisco's equipment for storage area networking, a whole line of MDS devices. And I love it: someone in class, who is that, it's Pete. Pete's getting hungry; he looks at the storage array and sees a bunch of pancakes. Yeah, I hear you, I'm getting hungry too. So here's our MDS system, and notice the MDS system is bringing together the Fibre Channel hosts that want to consume this data and the storage itself, an efficient way to store a bunch of data inside the Fibre Channel fabric.

Now, when you start talking about Fibre Channel, you realize that there are three basic topologies. The simplest Fibre Channel topology, but one that is seldom seen, would be point-to-point. A more scalable topology would be an arbitrated loop. But by far the most popular topology for our Fibre Channel storage networks, the one that took off, is a switched topology, because it's extremely scalable, theoretically scaling to millions of different nodes that could consume the data stored in the storage area network. Notice that in order for this switched Fibre Channel environment to work, we need a Fibre Channel switch.

All right, now, a key Cisco technology that is going to be employed by our fabric interconnect (and if you haven't been impressed with the fabric interconnect at this point, you're probably about to be) is called N-Port Virtualization. Yeah, the fabric interconnect can do what's called N-Port Virtualization (NPV), and now we have server interfaces and border interfaces: the blade server interfaces can function as our F ports, and the uplink ports from the fabric interconnect to the MDSes depicted here can function as what are called proxy N ports, or NP ports, to use the Cisco terminology. OK, those are some gory details, but all we really need to think about is what this is going to act like, and once again this N-Port Virtualization allows server links to be pinned to particular uplinks and to the fabric switches, the MDS switches, that live off of those uplink ports. Very cool. If you want to get into the details of this N-Port Virtualization technology, let me show you this slide. Don't be intimidated by it; we don't need to know all of this for the exam, for example, but this is what the technology looks like. Remember, those N-Port proxy virtualization uplink ports are here on the fabric interconnect, the F ports are down here facing the server blades, and for the consumption of the storage area data we have these HBAs, the Fibre Channel host bus adapters. One of the things that ends up happening with Cisco's N-Port Virtualization technology is that it is extremely scalable. In traditional Fibre Channel switching environments we have the consumption of Fibre Channel domain IDs, and there are only 239 of those available for defining domains, so this can create scalability issues. Notice that with Cisco's N-Port Virtualization approach, we don't even consume one of these Fibre Channel domain IDs for this infrastructure.
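To see why that matters for the 239-ID ceiling, here is a toy model (my own illustration, not how MDS code works): every full Fibre Channel switch consumes a domain ID, while an NPV device such as the fabric interconnect proxies its hosts' logins through the upstream switch and consumes none.

```python
# Toy model of Fibre Channel domain-ID consumption.
MAX_DOMAIN_IDS = 239   # usable domain IDs in a Fibre Channel fabric

class Fabric:
    def __init__(self):
        self.domains = []

    def add_fc_switch(self, name):
        # A full FC switch burns one of the 239 domain IDs.
        if len(self.domains) >= MAX_DOMAIN_IDS:
            raise RuntimeError("out of Fibre Channel domain IDs")
        self.domains.append(name)

    def add_npv_device(self, name):
        # An NPV device proxies N-port logins upstream: no domain ID used.
        pass

fabric = Fabric()
fabric.add_fc_switch("mds-core-1")
fabric.add_npv_device("ucs-fi-a")   # scales without touching the ID budget
print(len(fabric.domains))          # still 1
```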
All right, next up we need to talk about virtual storage area networks, or VSANs. The really good news is that these follow very closely the logic of our virtual local area networks. So let's talk about exactly how our VSANs work in the Cisco UCS environment. Again, it's a similar concept to our VLANs: we go into the SAN tab of UCS Manager and we globally create our virtual storage area networks. Very similar to our VLANs, we have a VSAN that is the default, and by default that's VSAN 1; notice we cannot delete that VSAN. Just like our VLANs, we can create these VSANs globally or fabric-interconnect-specific, but once again, just like with our VLANs, both fabric interconnects will typically share the virtual storage area network configuration.

OK, the uplinks on our fabric interconnect connect to F ports; in other words, the port here on the MDS is an F port. So, just like we had the discussion in our LAN environment where I told you we've got to go to the person that runs the Nexus and make sure that's a trunk link, here, notice, the MDS switch must be configured appropriately with an F port. By the way, memorize this: for these uplinks we have a limitation of exactly one virtual storage area network per uplink. One VSAN per Fibre Channel uplink. All right, if we're going to engage in Fibre Channel over Ethernet, notice we need to specify the FCoE VLAN for each of the virtual storage area networks, and notice we cannot conflict here with the VLANs we created for the LAN aspect of our environment. So make sure your FCoE VLANs use identifiers that are unique from your normal VLAN objects. This requires pre-planning, doesn't it? We want to make sure we have an unused range of VLANs set aside for Fibre Channel over Ethernet.

With all of that said, the graphical user interface work we have to do here is quite simple now that we understand all that. We go into the SAN (storage area network) tab, into the SAN object, to the Fibre Channel uplinks for one of our fabrics, and we use the Create Virtual Storage Area Network action. This initiates the wizard, and we see how similar it is to virtual local area network creation: mandatory name, mandatory ID, and then, if we're doing Fibre Channel over Ethernet, here is where we indicate the FCoE VLAN for this particular VSAN. Notice that we have the default, just like with our VLANs, of the common configuration of this VSAN between the two fabric interconnects. Could we do a fabric-only VSAN? Yes, we could, but again, we have issues there with failover.

Now, just like with our virtual local area networks, where the virtualization concern was MAC addresses, our virtual host bus adapters need an abstraction of World Wide Node Names and World Wide Port Names, and in Chapter 6 we're going to see how we accommodate that. We're going to tell the UCS: hey, look, you're going to maintain your own virtualized name database, so that once again, if there is a failure of a device in our chassis, we can move these logical components from location to location, and the virtualized World Wide Node Names and World Wide Port Names move with them.

All right, well, what about pinning? Sure enough, in the virtual storage area networking environment, pinning is going to be based on the virtual storage area network. Notice that server interfaces are only pinned to border interfaces with matching virtual storage area network configs, so if we don't have an uplink interface with a matching VSAN, the link stays down. Notice this particular VSAN 30 here is not properly defined; it is not bound to a particular uplink interface, and as a result this link stays down in the chassis.
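The match rule itself is simple enough to express in a few lines; this invented sketch just shows the gate: a server interface only comes up if some border (uplink) interface carries the same VSAN.

```python
# Invented illustration of the VSAN-match rule for SAN pinning.
border_uplinks = {"fc1/17": 10, "fc1/18": 20}   # uplink -> VSAN it carries

def pin_vhba(vhba_vsan):
    """Pin a server vHBA to an uplink with a matching VSAN, else stay down."""
    matches = [u for u, vsan in border_uplinks.items() if vsan == vhba_vsan]
    return ("up", matches[0]) if matches else ("down", None)

print(pin_vhba(10))   # ('up', 'fc1/17')
print(pin_vhba(30))   # ('down', None): no uplink carries VSAN 30
```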
OK, for the uplinks, what's going to happen with this concept of uplink pinning? Again, it follows just what we saw with uplink pinning in the LAN environment. Just like in our LAN environment, the server World Wide Names on a particular fabric are pinned to uplink ports in a round-robin assignment, so there's nothing new to memorize here. Now, if an uplink goes down, just like in our VLAN environment, there will be an automatic re-pinning to the available uplinks. What happens if we have all the uplinks on a particular fabric interconnect go down? Well, the operating system, or VMware itself, is going to have to discover the path loss and reroute. So if you haven't configured high availability inside either the operating system or the hypervisor, then you're going to have a lack of communications. Keep that in mind: it's obviously a rarity that we lose all of our uplinks, but if we haven't configured VMware or the operating system for the cutover, we could have a loss of connectivity. OK, and you're not surprised, then, that in addition to this dynamic uplink pinning in our storage area networking, we can do manual, or static, uplink pinning. We create a SAN pin group, and this is almost identical to the logic of the LAN pin group we spoke of: we can go in and say, all right, we have a particular device and we want it pinned to a particular Fibre Channel uplink, and if there is a failure, we define which uplink on the alternate fabric interconnect that cutover is to utilize.
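And the SAN pin group follows the same pattern as the LAN one, just keyed on a WWPN and with the alternate-fabric uplink named up front; once more an invented sketch:

```python
# Invented sketch of a SAN pin group with a predefined failover uplink.
SAN_PIN_GROUP = {
    "wwpn": "20:00:00:25:B5:00:00:01",
    "primary":  ("A", "fc1/31"),
    "failover": ("B", "fc1/31"),   # cutover target on the alternate fabric
}

def san_uplink(fabric_a_up=True):
    return SAN_PIN_GROUP["primary"] if fabric_a_up else SAN_PIN_GROUP["failover"]

print(san_uplink())                   # ('A', 'fc1/31')
print(san_uplink(fabric_a_up=False))  # ('B', 'fc1/31')
```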
Info
Channel: StormWind Studios
Views: 108,204
Keywords: cisco, ucs
Id: juPl4FTp0BQ
Length: 44min 25sec (2665 seconds)
Published: Fri Sep 14 2012