Multi-Site And Multi-Pod

Captions
In module two we're going to talk about the Nexus 9000 hardware platform: the roles switches play in the ACI fabric, spines and leaves, and the ASICs, the switch-on-chip silicon that powers the Nexus 9000 family.

Let's talk about single-fabric and multiple-fabric ACI topologies. Most of what we've covered so far is the single-fabric production topology. A production fabric has anywhere from two to six spines; we show four here, but typically you'll see two, and you can have up to six. Connectivity between leaf and spine is 40 or 100 gig, and depending on whether you're building a large layer 3 fabric or a typical layer 2 fabric, somewhere between 80 and 200 leaves are supported. This is considered a single-pod topology, so it shows up as pod 1, and we can have between three and five APIC controller nodes. Three is typical; the only reason to go to five is a high rate of change, where you're really banging away at the APIC nodes, but for most installations three is enough to spread the workload and still give a good degree of resiliency. Five doesn't actually buy you any more redundancy, because we only keep three replicas of the data; even with five nodes there are still only three replicas. So another way to describe this is single fabric, single pod, single APIC cluster, which is what the majority of ACI installations look like.

We also have the lab topology: one spine, typically a 9336PQ (the "baby spine"), two leaves, and a single APIC controller instead of a cluster. For lab, proof-of-concept, or testing environments this is sufficient. Of course, with no redundancy in the spines or the APIC cluster it isn't suitable for production; with a single APIC node you'll see a fault saying the cluster contains fewer than three in-service controllers, so it will complain, but it's perfectly functional for labs and testing.

Now, we have some options for stretching the fabric across multiple geographic locations, or for scaling it out in a different way than before. First is the stretched fabric, which has been around for a little while. It involves two sets of spines, and we are not doing a full mesh: this might be the delineation between site 1 and site 2, which could be separated by a large distance or might just be another building, and for whatever reason we aren't meshing everything. So we have what we call transit leaves, which connect to both sets of spines, while the remaining leaves connect only to their own local spines. We can also split the APIC controller nodes between the two locations.

The next option we call dual fabric, dual domain. Here we have a split: two separately, independently managed ACI fabrics with some sort of data center interconnect (DCI) between them, connected via layer 2 or layer 3.
If that interconnect is layer 2 we can do VM mobility, taking a VM and migrating it from one location to the other, and that gives us the dual-fabric, dual-domain design, which we'll talk about as well. Finally, new to ACI 2.0 (the "Congo" release), we have the capability to build what's called a multi-pod fabric. It's still considered one fabric, but it's broken into multiple pods, and the pods connect to each other through something called the inter-pod network, or IPN, which we'll cover. It remains a single managed fabric, so we keep the single point of management.

So why do these stretched and multi-pod designs at all? First, we want IP mobility: typically that means supporting vMotion from one location to another, whether that's across the street, upstairs, or hundreds of miles away, while keeping the same IP address and the same address space in both locations. Second, we often want a single fabric, meaning one configuration management point; the dual-fabric design doesn't give us that, but there are tools that help. Often the driver is geographic diversity: multiple, geographically dispersed data centers for active/active or active/standby operation and disaster recovery. Keep in mind, however, that there is a 10 millisecond round-trip-time limit, which works out to roughly 800 kilometers or 500 miles depending on whether you prefer metric or imperial, and that is primarily a tested limit. Could you get away with something a little longer? Probably; as far as I know ACI doesn't run latency checks between leaves to see how far apart they are. But 10 milliseconds is what has been validated for keeping the sharded objects in the APIC data store in sync, so longer distances might work, but that's what's supported.

Another reason to consider a different topology is a high-scale single data center, which would otherwise require a third tier of spines for aggregation, or space and cabling limitations, especially across many, many rows. We still want a single fabric, but we break it into pods rather than fully meshing everything, effectively building a three-tier Clos instead of a two-tier Clos.

Let's go back to the stretched fabric, which has been available for a while. In production you'll typically have two spines in each location; for a proof of concept you can get away with one per site (we've done that in our lab), but for production it's at least two, and typically no more than two per site in a stretched fabric, for reasons we'll get to in a minute. The three APIC controllers are split between the two geographic locations. VM mobility works here, so we can migrate a virtual machine from a leaf on one side of the stretch to a leaf on the other. One thing to keep in mind is the model of the transit leaves, because atomic counters depend on the leaf's ASIC generation.
First-generation, ALE-based leaves will not do atomic counters correctly across the stretch if they are used as transit leaves; that's the ALE, as opposed to the ALE2, ALE2+, or LSE. The ALE is what's in the 12-port GEM module that goes into the 9396 or 93128; the 6-port GEM module is the ALE2, and the 6-port GEM module with a -E suffix is the ALE2+. The 93108 and 93180 EX models are LSE-based, so they will support atomic counters between the two sides of the fabric.

We'll have route reflectors in both locations to distribute the routes learned from the border leaves: if I have a router here and learn outside routes, they get sent to the route reflectors and then on to the other leaves. As of ACI 2.0 you can have up to six route reflectors; the limit used to be two, so a stretched fabric would typically have one route reflector per site, but now you can have six.

Again, the limit is a 10 millisecond round-trip time, which translates into roughly 800 kilometers or 500 miles of maximum distance, and that's based on the speed of light through fiber. To connect the transit leaves to the remote spines we have three options: dark fiber, which is limited by the optics to about 40 kilometers (roughly 24 miles), DWDM, and Ethernet over MPLS pseudowire, both of which give us the full 800 kilometers / 500 miles.

For dark fiber, it depends on the optic: the long-range LR optics do 10 kilometers, and with a QSFP 40G ER4 we can do up to 40 kilometers. As of release 1.1 the limit is 40 kilometers; earlier releases supported only 30. That's how far you can stretch the fabric over dark fiber.

With DWDM we're limited to the same overall maximum of 800 kilometers / 500 miles. In this diagram we're doing only one transit leaf per location instead of two, partly because sometimes that's your only option when you have a limited number of DWDM hookups or links; it's not as redundant as two transit leaves, but you can do it. One consideration is that the leaf and spine interfaces are 40 or 100 gig, so we either run them over 40 gig DWDM, or take the QSFP, break it out into four channels, run it across the DWDM as 4x10 gig, and recombine it on the far side so it still shows up as 40 gig on each end; that's what really matters, because those interfaces are 40 gig.

Finally there's Ethernet over MPLS pseudowire, which can run over a 10 gig or better link between the two locations. ASR 1000s are required because of the QoS that has to be applied between the two sites. So: 10 gig or better, QoS, and again the 10 millisecond round-trip time, which translates into about 800 kilometers or 500 miles. The transit leaf on one side connects over a pseudowire up to the far-side spine, and vice versa. The spine and leaf interfaces themselves are still 40 or 100 gig; which you use typically just depends on your interface availability.
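As a rough sanity check on where that 800 kilometer / 500 mile figure comes from, here's a quick back-of-the-envelope calculation. It assumes roughly 5 microseconds of one-way propagation delay per kilometer of single-mode fiber (light moves at about two-thirds of c in glass) and ignores transponder, switching, and queuing delay, which is why the validated figure sits below the raw fiber budget; the numbers are illustrative, not a Cisco formula.

```python
# Back-of-the-envelope: how far can fiber stretch inside a 10 ms RTT budget?
US_PER_KM = 5.0          # assumed one-way propagation delay per km of fiber
RTT_BUDGET_MS = 10.0     # validated round-trip-time limit for the APIC cluster

one_way_ms = RTT_BUDGET_MS / 2
max_fiber_km = one_way_ms * 1000 / US_PER_KM   # 5 ms -> ~1000 km of raw fiber

print(f"Raw fiber budget: ~{max_fiber_km:.0f} km one-way")
print("Validated design figure: ~800 km (~500 mi), leaving headroom for "
      "transponders, switching, and queuing delay")
```

In other words, the 800 kilometer guideline already spends most of the 10 millisecond budget on pure speed-of-light delay, so there isn't much room left for extra hops.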
We can also do a three-site stretched fabric. Because this is all considered the same pod, and we used to say six spines per fabric but it's really six spines per pod, that works out to two spines per physical location. The transit leaves do have to mesh into all of the spines. We split the APIC controllers across the three locations; you could do two in one site and one in another, or even all three in one, but typically you put one in each site so every location has something local. In terms of how writes work: if I make a write to this APIC, the primary replica of the object I'm changing might live somewhere else, so the write gets proxied to wherever the primary replica is, the write is committed there, disseminated to the two secondary replicas, and then the acknowledgment comes back. That's one of the reasons for the 10 millisecond round-trip limit: anything beyond that starts to degrade the responsiveness of the APIC cluster. And since the limit is now six route reflectors, you might as well put a route reflector on every spine; you could do just one per site, but it only takes a few extra seconds to configure six, and that way each location has route-reflector redundancy.

One more thing on latency. When we stretch these fabrics, people on the networking side, and often application developers and architects too, think: great, we've got VM mobility, I can migrate a VM from one site to another, case closed. Not really. Ten milliseconds is what the APIC cluster needs; VMware can do long-distance vMotion with a limit of 150 milliseconds, which is a long way, something like North America to Europe. But that doesn't mean an application can tolerate having its tiers that far apart. Say I have one VM here that's my database tier and another VM over there that's my app tier, or a collapsed app-and-presentation (web) tier. The application does a write to the database and waits for the acknowledgment; at 150 milliseconds of round-trip time, that write takes an extra 150 milliseconds. That's significant lag, especially if you split the application across sites. Some applications are just fine with it; others will go south, running into race conditions, locking problems, or simply unacceptable performance. So just because we can migrate between two sites doesn't mean it's a good idea for every application; for some, even ten milliseconds is too much. If you have in-house developers, make sure they test with part of the application stack in one location and part in the other; if you're running prepackaged applications, make sure part of your acceptance testing is running them split across sites to see how they respond, because they may not behave the way you expect, depending on the latency.
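To put rough numbers on that application-latency point, here's a small illustrative calculation; the operation counts and RTT values are made up, not from the video, and the only thing being modeled is the extra time a request spends waiting on sequential, synchronous calls across the inter-site link.

```python
# Illustrative only: added response time when app and database tiers sit in
# different sites and every query is a synchronous round trip.
def added_latency_ms(sequential_calls: int, rtt_ms: float) -> float:
    """Extra response time contributed purely by inter-site RTT."""
    return sequential_calls * rtt_ms

for rtt in (1, 10, 150):              # same-site-ish, ACI stretch limit, vMotion limit
    for calls in (1, 20):             # a simple request vs. a chatty one
        print(f"RTT {rtt:>3} ms, {calls:>2} sequential queries: "
              f"+{added_latency_ms(calls, rtt):>5.0f} ms per request")
```

A chatty transaction that issues 20 sequential queries picks up three extra seconds at 150 ms RTT, which is why "vMotion works at 150 ms" does not mean the application will.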
Multi-pod is new with ACI 2.0, part of the Congo release, as I mentioned. What does a multi-pod topology mean? It's essentially a three-tier Clos: we keep our two-tier arrangement of leaves and spines, and then there's effectively a second tier above the spines, represented by the IPN, the inter-pod network. So we have a pod here and a pod there; a pod is a collection of spines and leaves connected to each other, and the spines then connect up into that upper layer. Now, you may have heard me say in earlier sections that the only things that plug into a spine are leaves, that we don't plug routers into spines. This is the exception: here we plug the spines into a layer 3 network, which might simply be part of your enterprise network, and that network does not need to be Cisco; other vendors can certainly fill that role, and we'll talk about the requirements. The spines and leaves themselves still need to be Nexus 9000s, but the IPN devices up here may not need to be Cisco. Again, the limitation between pods is 10 milliseconds of round-trip time.

This is a single APIC cluster, for example two nodes in one site and one in the other if we have two pods, and we'll talk about what to do when there are more than two pods. Right now four pods are supported; six is coming relatively soon, probably by the end of 2016 according to Cisco Live, although, as they always say when they announce something, that's subject to change. We have a single APIC cluster of three nodes, and we'll talk about why three and not five, which is the current maximum. So it's a single fabric, just with multiple pods, and we support vMotion and network availability across locations: I can vMotion from one pod to another and the IP address is the same, the default gateway is the same, and the policies are all there. One pod or multiple pods, it's still one fabric. Another note: not all spines need to connect to the IPN. We don't need a full mesh, so we can have only two of the spines in a pod connected to the IPN, or all of them; it doesn't matter.

So why would we want to do this? There are two primary use cases Cisco built this for. The first is geographic diversity, which is a very common ask: two data center sites, site one and site two, within the 10 millisecond round-trip limit, for HA, whether active/active or active/standby depending on how things are architected, while keeping VM mobility between sites so workloads can move if there's an issue, with one fabric and one policy across all of the pods. The second is scale: maybe we need to scale out to a very large number of leaves and we don't want to buy a huge number of big spines. We can aggregate leaves under the spines in each pod and then do further aggregation above them, so fewer devices have to connect all the way up, with multiple leaves hanging off each pod. That helps with cabling (see the sketch below), since we're not building quite such a crazy full mesh: it saves ports, saves cabling, and saves on the number of devices overall. This could all be in the same data center, even different floors of the same building; that's something one of Cisco's customers asked for, and this is what Cisco came up with.
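A rough way to see the spine-sizing and cabling argument; the leaf and pod counts below are hypothetical, chosen only to show the shape of the math. In a single two-tier Clos every spine needs a port (and a cable run) for every leaf in the data center, while in multi-pod each spine only has to reach the leaves in its own pod plus a couple of IPN uplinks.

```python
# Hypothetical numbers to illustrate why pods ease spine sizing and cabling.
leaves_total = 200
pods = 4
leaves_per_pod = leaves_total // pods
ipn_uplinks_per_spine = 2

# Each spine needs a port for every leaf it serves (plus its IPN uplinks).
ports_per_spine_single = leaves_total                          # ~200 ports -> big modular spine
ports_per_spine_pod = leaves_per_pod + ipn_uplinks_per_spine   # ~52 ports -> much smaller spine

print(f"Single fabric: every spine needs ~{ports_per_spine_single} leaf-facing ports,")
print( "               and every leaf-spine cable may cross the whole data center")
print(f"Multi-pod    : every spine needs ~{ports_per_spine_pod} ports, and the "
      f"leaf-spine full mesh stays local to each pod's cable plant")
```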
Now the multi-pod limits in ACI 2.0: right now it's a limit of four pods, six spines per pod, 200 nodes per pod, and 300 nodes total for a given fabric. So if one pod has 200 nodes, you can only put 100 in the next pod, or two more pods of 50 each, you know the drill: 300 nodes total, and that includes the spines. We use three APIC nodes, not five, and we'll talk about why; those three APIC controllers can all sit in one pod, or two in one pod and one in another, or be split completely across the pods.

The IPN is what connects pod 1 and pod 2: the spines in each pod, whether two or four of them, are all connected, typically redundantly, into a pair of routers or other network devices in the IPN. Here are the requirements. The IPN devices do not need to be Cisco (there's one case where you might want them to be, which we'll get to), as long as they meet all of the following. The interfaces facing the spines need to be 40 or 100 gig, simply because the spine interfaces are 40 or 100 gig. They need to support bidirectional PIM, primarily for the BUM traffic, broadcast, unknown unicast, and multicast, that has to be disseminated between pods. They need DHCP relay: when we add a new pod and put a couple of spines in it, those spines send a DHCP request that travels through the IPN, down through the spines and leaves of an existing pod to the APIC controllers, and that's how they get their IP addresses. They need OSPF, but only on the link between the spine and the edge of the IPN; if you run a different routing protocol such as IS-IS inside the IPN, no problem, you just redistribute the OSPF routes into it and back into the spines, and that's how reachability is disseminated. They need an increased MTU: remember ACI is based on VXLAN (iVXLAN), which adds encapsulation overhead, so something comfortably over 9000 bytes is a safe bet in case you run jumbo frames on your VMs or hosts, and at a minimum enough headroom for the VXLAN header, which shouldn't be a problem. They need some QoS configured so the right traffic gets through. Finally, the point-to-point connection on each spine-facing interface runs as a sub-interface on VLAN 4; why VLAN 4? Cisco wanted to keep it as simple as possible, and that's what was supported initially, so it just has to be a VLAN 4 sub-interface on both ends.

Again, the IPN does not have to be Cisco, and it is not configured through the APIC; it's a separate configuration point (if it isn't Cisco the APIC couldn't interact with it anyway), and it's managed separately, which makes sense because the team that manages that network is typically different from the team that manages ACI. Those are the requirements for the inter-pod network; as long as you have those, you should be able to build a multi-pod fabric.
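On the MTU point, here's a rough sketch of why "comfortably above 9000" is the usual guidance. The roughly 50 bytes of VXLAN encapsulation overhead is an approximation (add a few more if the inner frame carries an 802.1Q tag), and the exact value you actually configure, 9150 and 9216 are common choices, is a platform decision rather than anything stated in the video.

```python
# Approximate VXLAN overhead carried across the IPN, relative to the host's IP MTU.
outer_ipv4   = 20
outer_udp    = 8
vxlan_header = 8
inner_ether  = 14      # the host's Ethernet header rides inside the tunnel (+4 if dot1q-tagged)
overhead = outer_ipv4 + outer_udp + vxlan_header + inner_ether   # ~50 bytes

for host_mtu in (1500, 9000):          # standard frames vs. jumbo frames on the hosts
    print(f"Host MTU {host_mtu}: IPN links need roughly {host_mtu + overhead} bytes or more "
          f"(in practice, set something like 9150/9216 and forget about it)")
```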
A couple of considerations when designing and scoping a multi-pod network. Each pod gets its own IP pool for TEPs. When you did the initial configuration of a fabric you gave it a TEP pool, something like 10.0.0.0/16, and those TEP addresses go onto the various devices; even the APIC controllers themselves get addresses out of the TEP space. Every pod needs its own unique, non-overlapping pool. It doesn't really matter which ranges you use, because this is part of the infrastructure network: those routes are never leaked or announced outside the fabric, so they can overlap with your tenant networks or outside networks without causing a problem. The spine routes simply get redistributed to the other pods. No more than two APICs are recommended per pod, and zero APICs in a given pod is allowed. So those are some of the considerations.

In terms of reaching the outside world in a multi-pod environment, we can have WAN routers in multiple locations, and a WAN router typically plugs into a pair of leaves. Whatever routing protocol we run there, the learned routes get distributed through MP-BGP via the route reflectors: the routes we learn from the WAN router are disseminated by a route reflector to the other leaves, and also over to the spines in the other pods, which then pass them to their own leaves through their local route reflectors. We can dual-home or multi-home and have multiple peerings, which is perfectly acceptable, and if a VM sits in one pod and the same outside network is reachable from both pods, traffic goes out the local, shortest path. Any border router, Cisco or not, is supported via static routes, and we can also use OSPF, EIGRP (which is primarily a Cisco thing), or iBGP.

We also have the option of putting the border router in the IPN itself; Cisco internally calls this GOLF. That is supported, with automatic configuration, on the ASR 9000, the ASR 1000, and the Nexus 7000 with F3 line cards. In that model the VM's default gateway is still typically the local leaf, and traffic then gets forwarded, peered, announced, and sent out through a border router that is part of the IPN rather than through a border leaf. You can do that router configuration manually, but part of it needs to be dynamic, so OpFlex is also used to push the configuration.
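Circling back to the per-pod TEP pools for a second: since they only need to be unique and non-overlapping inside the fabric, planning them is simple. A minimal sketch with Python's standard ipaddress module, using made-up ranges purely as an example:

```python
import ipaddress

# Carve one non-overlapping TEP pool per pod out of a larger infra block.
# The ranges are arbitrary examples; they only need to be unique inside the fabric.
infra_block = ipaddress.ip_network("10.0.0.0/14")
pods = 4
tep_pools = list(infra_block.subnets(new_prefix=16))[:pods]

for pod, pool in enumerate(tep_pools, start=1):
    print(f"Pod {pod}: TEP pool {pool}")

# Sanity check: no two pools overlap.
assert all(a == b or not a.overlaps(b) for a in tep_pools for b in tep_pools)
```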
So what are the APIC cluster considerations for stretched fabrics? I've alluded to this a number of times already. If we're doing multi-site or stretched fabrics, we typically want to distribute the APIC nodes across the data centers. A couple of things to keep in mind. Every piece of data, every configuration point in ACI, is an object, and every object is replicated three times; one of the three replicas is the primary and the other two are standby. As long as the primary replica is available, that's where writes go, that's the authoritative copy, and it is then replicated to the other two. We also need a quorum, a majority of the nodes in the cluster, in order to make writes. Without quorum we cannot write; we can still read, log in, and so on, but no writes.

So here are the scenarios. If we lose the data center that holds just a single APIC node, there is no data loss and configuration changes are still possible, because the remaining two nodes form a quorum: a majority of the cluster is available. Meanwhile the node in the isolated site, say it's unreachable because of a fiber cut rather than wiped out by an asteroid, is still running, but it notices it's all by its lonesome, has no quorum, and goes into read-only mode. That way I can't make one write over here and a conflicting write to the same object over there, which would create an inconsistency, exactly the split-brain situation we're trying to avoid; this is part of the split-brain avoidance mechanism. Once that data center comes back online, the node rejoins the cluster, becomes fit again, and receives updated copies of all the replicas.

If we lose the primary data center, or whichever site holds two of the APIC nodes, there is still no data loss, because with only three nodes in the cluster every node holds a copy of every piece of data: three nodes and three replicas means every replica lands on every node. So even if that data center is hit by an asteroid and completely wiped out, we lose no data, but we can't make configuration changes, because one node out of three is not a quorum, and to prevent overwrites and inconsistencies no writes are allowed. One thing you can do in that case is bring up a spare: an APIC node you have lying around that isn't configured, maybe plugged in but powered off, or powered on but not yet taken through the setup script. We bring it up, decommission the two dead nodes (so that if they ever come back and regain connectivity they know they're out; typically we'd just wipe them, which is fine because the surviving site still has an authoritative copy of everything), add the spare to the cluster, and let it sync. Now we have two of three nodes and a quorum again. The decommissioned nodes get rebooted with a wipe so they come up clean, and if that site returns we bring one of them back into the cluster, giving us three nodes again, split across the two locations. What we really want long term is exactly three nodes in the cluster; you can run four, but the cluster will complain and it's not a good idea long term.
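A tiny sketch of the two rules at play here; this is my own illustration of the behavior just described, not APIC code. Writes need a majority of the cluster, and with a three-node cluster every node necessarily carries a replica of every shard, which is why a site failure can cost you writes but never data.

```python
# Illustration of the APIC cluster behavior described above (not actual APIC code).
REPLICAS = 3   # every object/shard is stored three times: one primary, two standby

def has_quorum(nodes_alive: int, cluster_size: int) -> bool:
    """Writes are only allowed while a majority of the cluster is reachable."""
    return nodes_alive > cluster_size // 2

# Three-node cluster, the site holding one APIC is lost:
print(has_quorum(nodes_alive=2, cluster_size=3))   # True  -> config changes still possible
# Three-node cluster, the site holding two APICs is lost:
print(has_quorum(nodes_alive=1, cluster_size=3))   # False -> surviving node goes read-only

# With cluster_size == REPLICAS, every node holds a copy of every shard,
# so either failure above means a loss of write access, never a loss of data.
```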
You can also run five, but here's why five is a bad idea in a stretched design. Say I have five APIC nodes, three in one site and two in the other. Because the three replicas of each object are now spread across five nodes, not every node holds every piece of data; some nodes are missing some of it. We can still point at any node and make a configuration change: if a configuration object lives on three particular nodes and I change it from some other node, no problem, the cluster finds whichever replica is the primary, the change is written there and replicated to the other two. The shards are spread roughly evenly and deterministically across all five nodes. But now suppose the site holding three of the nodes dies, a catastrophic data center failure, a flood, a tornado, some natural disaster. Some objects may have had all three of their replicas on those three nodes, and that data is gone: with five nodes, losing a site can mean actual data loss, not just a temporary inability to write. So for stretched or multi-pod fabrics we typically run just three APICs, possibly with a spare, a standby APIC that can be brought online so the cluster is made fit again (that's the actual technical term, a "fit" cluster) and changes can be made.

Another solution, and this one has been available for a long time, is what we call the ACI multi-fabric (dual-fabric) design. Here we have two completely independent ACI fabrics, fabric 1 and fabric 2: you cannot configure any policy in one from the other, and vice versa. They are two independently managed fabrics that happen to have a DCI between them; that could be dark fiber, or it could be something like OTV. If you build it this way, whatever bridge domains and EPGs you stretch between the fabrics have to follow a rule: one VLAN, one bridge domain, one EPG. Every stretched bridge domain can have only a single VLAN and a single EPG associated with it, because multiple EPGs per bridge domain, which could mean multiple VLANs per bridge domain, could create bridging loops between the two sites, and that's exactly what we want to avoid. If you're familiar with the terminology we've used before when designing ACI networks, this is what we sometimes call a network-centric design, as opposed to an application-centric design: one bridge domain equals one EPG equals one VLAN. That doesn't mean every bridge domain you create in these fabrics has to be limited to one EPG and one VLAN, only the ones you stretch between the two locations need to be. Our options for interconnecting the dual-fabric design are dark fiber or DWDM with a back-to-back vPC, simply trunking the VLANs across that vPC,
or we can use a DCI such as OTV between the two locations, or even an Ethernet over MPLS pseudowire. We don't have the same latency requirements here that we had for multi-pod or the stretched fabric, because these are two independent fabrics with two independent APIC clusters; as far as I'm aware there is no hard distance limitation, other than what your applications can tolerate. And again, this is something I hammer on every time I teach these multi-data-center designs: if you stretch the data center and provide VM mobility, which we can do here (a VM can move from one location to the other, depending on the hypervisor, and we can use the same address space in both locations), you still have other problems to solve, such as how traffic gets in and out. Getting traffic out is pretty easy, so outbound isn't the problem, but you do have to think about application latency: what latency can the application survive, or even thrive, with? Just because you can have two locations sharing the same IP space doesn't necessarily mean you should.

Now, the multi-fabric layer 2 gateway. Say we have a virtual machine sitting in fabric 1 with the IP address 192.168.1.100/24 and a gateway address of 192.168.1.1, pretty standard stuff. In ACI, every leaf carries the pervasive SVI, so that same gateway address exists on every leaf as an anycast gateway, rather than sitting up at an aggregation layer the way it would in a traditional Nexus 7K/5K pod design. There's also a virtual MAC (vMAC), so every one of those leaves answers with the same MAC address: if a VM moves from one leaf to another it keeps using the same gateway MAC, because that's what's in its ARP cache, and the same gateway IP, because that's what's configured in its IP stack. The problem is that if we move a VM from one fabric to the other, its default gateway is still a leaf back in the original fabric. The fix is to configure what's called a common pervasive gateway: we configure the same gateway in both fabrics (remember, these fabrics are managed independently, so we have to go into each one and configure that shared virtual IP), and we also give each fabric a local glean IP address that is unique to it, so 192.168.1.254 is local to one fabric and 192.168.1.253 is local to the other. Each fabric has its own MAC address for that local address, but they share the vMAC and they share the default gateway address. That way, when a VM moves from one location to the other, its default gateway is immediately a local leaf instead of a remote one.
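Here's a small sketch of what has to line up between the two fabrics for the common pervasive gateway to behave; the IP addresses are the ones from the example, but the dictionary layout and the MAC values are mine (this is not an APIC API), and in practice each set of values is configured separately on its own APIC.

```python
# Common pervasive gateway: what the two independently managed fabrics must agree on.
# Data layout and MAC values are illustrative, not an APIC object model.
fabric1_bd = {
    "virtual_ip":  "192.168.1.1/24",        # shared default gateway the VMs point at
    "virtual_mac": "00:01:02:03:04:05",     # shared vMAC, identical in both fabrics
    "local_ip":    "192.168.1.254/24",      # unique per-fabric (glean) address
    "local_mac":   "00:22:bd:f8:19:f1",     # unique per-fabric MAC
}
fabric2_bd = {
    "virtual_ip":  "192.168.1.1/24",
    "virtual_mac": "00:01:02:03:04:05",
    "local_ip":    "192.168.1.253/24",
    "local_mac":   "00:22:bd:f8:19:f2",
}

# Shared pieces must match, local pieces must differ.
assert fabric1_bd["virtual_ip"]  == fabric2_bd["virtual_ip"]
assert fabric1_bd["virtual_mac"] == fabric2_bd["virtual_mac"]
assert fabric1_bd["local_ip"]    != fabric2_bd["local_ip"]
assert fabric1_bd["local_mac"]   != fabric2_bd["local_mac"]
print("Common pervasive gateway parameters are consistent across both fabrics")
```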
So our multi-fabric configuration might look something like this: ACI fabric 1 and ACI fabric 2. Let's do a quick walkthrough. We have bridge domain A in fabric 1 and bridge domain A in fabric 2, and we need to create both of them identically, with the exception of the local glean IP addresses and so forth; the same goes for bridge domain B in each fabric. We've got a VM, and we're going to extend the segment between the fabrics: we create a single EPG, mapped to VLAN 100 up here and to VLAN 200 down there, and we stretch it through the DCI, for example OTV, or dark fiber across a vPC, something like that. The VM in one fabric is then visible to the other fabric through that leaf, where it looks like a locally connected device. So if the web server over here needs to talk to this VM, call it app 2, this fabric sees app 2 as if it were connected on that leaf: traffic goes through the fabric into the bridge domain and EPG, out over the VLAN 200 link, and on to the app, and the response comes back the same way, through the contract. That raises an important point: the contracts also need to be kept in sync, with the same contract configured in both locations; if they drift out of sync you can get unpredictable forwarding and unpredictable behavior, so keep that in mind. Contracts, EPGs, bridge domains, and so on must be configured separately in each fabric and kept synchronized, because the APIC itself does not provide any synchronization between two independent fabrics. You can keep them in sync manually, you can script it through the REST API, UCS Director has workflows for this, and there's also the ACI Toolkit, which you can find on GitHub.
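Since the APIC will not synchronize two independent fabrics for you, teams usually script at least the comparison. Below is a minimal sketch with the requests library against the documented APIC REST endpoints (aaaLogin and the /api/mo tree); the APIC hostnames, credentials, tenant name, and the naive equality check are my own placeholders, and a real workflow (UCS Director, the ACI Toolkit, Ansible, and so on) would be considerably more careful than this.

```python
import requests

# Minimal sketch: log in to both APICs and compare a tenant's config subtree,
# so drift between the two independently managed fabrics can be spotted.
# Hostnames, credentials, tenant name, and the simple diff are illustrative.

def apic_login(base_url: str, user: str, password: str) -> requests.Session:
    s = requests.Session()
    s.verify = False   # lab only; use proper certificates in production
    payload = {"aaaUser": {"attributes": {"name": user, "pwd": password}}}
    s.post(f"{base_url}/api/aaaLogin.json", json=payload).raise_for_status()
    return s           # the auth cookie is kept in the session

def tenant_config(session: requests.Session, base_url: str, tenant: str) -> dict:
    # Pull the tenant subtree (bridge domains, EPGs, contracts, ...) as JSON.
    resp = session.get(f"{base_url}/api/mo/uni/tn-{tenant}.json?rsp-subtree=full")
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    apic1, apic2 = "https://apic-fabric1", "https://apic-fabric2"   # hypothetical
    s1 = apic_login(apic1, "admin", "password1")
    s2 = apic_login(apic2, "admin", "password2")
    cfg1 = tenant_config(s1, apic1, "Stretched_Tenant")
    cfg2 = tenant_config(s2, apic2, "Stretched_Tenant")
    if cfg1 != cfg2:
        print("Tenant config differs between fabric 1 and fabric 2 -- "
              "review before trusting stretched EPGs and contracts")
```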
Info
Channel: LumosConsulting
Views: 5,477
Rating: 4.83 out of 5
Keywords: Lumos, LumosCloud, LumosConsulting, Cisco, ACI, SDN, Networking
Id: 7jtSm4tJwuo
Length: 42min 22sec (2542 seconds)
Published: Wed Jan 11 2017