Overlay Transport Virtualization (OTV)

Captions
Welcome back, everybody, to the INE CCIE Data Center Nexus switching class. In this next section we're going to be talking about Overlay Transport Virtualization, or OTV, which is a layer 2 data center interconnect technology.

The first thing we need to know, like with the other protocols we've been talking about during this class, is what OTV is and why you would want to use it. At its core, Overlay Transport Virtualization is essentially just a layer 2 VPN that runs over IPv4. Specifically, OTV is an IPv4 or IPv6, over Ethernet, over MPLS, over GRE, over IPv4 tunnel. In other words, it's an Ethernet over MPLS tunnel that is then encapsulated inside GRE, which allows us to route it over the layer 3 data center interconnect. It does add some additional encapsulation overhead, and it does not support fragmentation, so we need to take that overhead into account from the routing point of view in the layer 3 header. The idea, though, is that since we already have the specification for Ethernet over MPLS with features like Any Transport over MPLS and Virtual Private LAN Service, and since those are tried and tested mechanisms for layer 2 tunneling, there's really no reason to reinvent the wheel here.

Additionally, OTV runs over essentially any transport, and that's one of its advantages: we're not stuck in situations where we must run MPLS or meet some other specific requirement for the interconnect. As long as we have any IPv4 transport between our sites, whether that's a dedicated link or just normal Internet routing, we are able to run the OTV feature.

The reason we would want to run OTV is that it's designed specifically for the layer 2 interconnection of data centers, the data center interconnect or DCI. We are going to talk about some other mechanisms you could use to connect your data centers, like dark fiber or MPLS-type tunnels, but the reason we want to do this in the first place is to allow what's known as virtual machine workload mobility. That essentially means that if we have some virtualized resource running in one of our data centers and there's a failure of the network or of the end host hardware, or we're simply over-utilizing a resource, say CPU utilization is too high or SAN utilization is too high, we can move that workload, basically move the virtual machine, to different physical resources, and this happens transparently from the end user's perspective. A good example of this is what VMware calls vMotion: as long as the VMware host machines have layer 2 connectivity to each other, they are able to transparently move virtual machines back and forth between each other, since we're already using storage based on a storage area network like a Fibre Channel storage array, Fibre Channel over Ethernet, or iSCSI. Those storage details we'll cover in the storage-specific class for CCIE Data Center, but all of this fits together in the overall design of the data center.

The reason we would specifically want to use OTV versus some of the other technologies is that there are enhancements built into OTV that are specific to the layer 2 data center interconnect. So let's look at OTV within the scope of the other possible tunneling techniques.
There are many different tunneling options we could use here. The most basic would be dark fiber, whether that's running over coarse wavelength-division multiplexing or dense wavelength-division multiplexing, CWDM or DWDM. That basically means I have a physical fiber, typically single mode fiber because we're going longer distances, and I can take multiple optical signals on multiple frequencies or wavelengths and multiplex them over that single physical line. So I have a single mode fiber, maybe each individual signal is 10 gigabits, and I can multiplex that 12 times and get 120 gigabits over that single fiber. We'll see that there are some disadvantages to using just dark fiber for the interconnect, and they have to do with the other control plane protocols that end up running over the data center interconnect. That's not to say you can't take dark fiber and then run OTV on top of it; that would actually be close to the ideal case, because you have a true point-to-point connection that doesn't have other layer 3 forwarding in the path. You're not going through something like an MPLS provider, you have a dedicated circuit, and maybe you have layer 1 SONET protection that gives you very high availability, so that if there's a physical link cut somewhere in the network you're able to do layer 1 optical rerouting. But this is just one of the options: basically a point-to-point link dedicated to our data center traffic.

Another option would be some type of tunneling that runs over IP, like the Layer 2 Tunneling Protocol version 3, or L2TPv3. The advantage of L2TPv3 is that it does not require MPLS in the transit core, so you could run L2TPv3 over the Internet as long as you have basic IP reachability between the devices. It's not as common as some of the other layer 2 tunneling techniques, like the Any Transport over MPLS (AToM) feature or the Virtual Private LAN Service (VPLS) feature. AToM is basically a point-to-point MPLS layer 2 VPN, while VPLS is a point-to-multipoint layer 2 MPLS VPN tunnel. The disadvantage of either of those technologies is that your data center interconnect has to go over an MPLS provider. You can potentially do interoperability, where maybe one data center uses one MPLS provider and another data center uses a different MPLS provider, and there are ways to do inter-AS interoperability, but a lot of times it becomes cost prohibitive and very design intensive to actually allow for that. So these are the traditional layer 2 tunnels that you would normally be using in an enterprise environment, but they may not be the best solutions for our data center interconnect, where we're using it to build a very large layer 2 topology.

Let's take VPLS, for example. One potential shortcoming of VPLS is that the service provider edge routers, the PE routers on the MPLS network side, must maintain the MAC address table for the customer. In most VPLS designs this is not an issue. If we take a look at the whiteboard, a more typical VPLS design is that you have the service provider core running MPLS, and then you have the service provider edge routers that allow you to do your point-to-multipoint tunnel. So I have four PE routers; these would be something like ASR, GSR, or CRS boxes, or possibly something like a 7600, and they are doing our layer 2 tunneling.
On the customer side we have our customer edge devices. Typically these are layer 3 routers, and they are the demarcation between the layer 2 domain on the inside of the network and the layer 3 domain on the outside. What this means is that we would have a different subnet on the inside of each CE: let's say this site is the 10 network, at another site we have the 20 network, another site has 30, and another has 40. Over the service provider MPLS core, these routers appear to be directly connected; it's basically a full mesh tunnel that looks like a LAN segment, but it's actually a full mesh of pseudowires. In there we would have an additional segment, say 100.0.0.0/24, so these four routers appear to be on the same LAN segment. For most design cases this is fine, because the MAC addresses that the provider edge routers on the SP side need to maintain in a design like this number only four, since there are only four edge routers on the customer side.

In the case of the layer 2 data center interconnect, though, that's not what we're trying to accomplish. We're trying to do an end-to-end layer 2 tunnel, not a demarcation between an internal layer 2 domain and a layer 3 overlay in the core. So the problem becomes not only the scalability of the MAC addresses we have in the data center, but also the fact that the service provider has to maintain them, and in most cases when you buy VPLS service there will be a limit: the provider says you cannot send more than this many MAC addresses into the network, because it's their provider edge routers that have to do, essentially, provider backbone bridging to allow for it. VPLS still has its place in the network today; a lot of times that's what you would use for Metro Ethernet connections between, say, local campus buildings in a particular metropolitan area. But in the case of the data center interconnect it's not necessarily the best option, simply because we have a very large layer 2 flooding domain with a lot of MAC addresses.

We could also theoretically solve this problem just by using normal GRE tunnels: I could configure a mesh of GRE tunnels, configure a bridge group, and run bridging over GRE. That solves the same type of problem, where we take our layer 2 traffic, encapsulate it inside layer 3, and send it over the layer 3 routed data center interconnect.
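Just to make that comparison concrete, here is a minimal sketch of that classic IOS-style bridging-over-GRE approach; this is not OTV, and the tunnel endpoints and interface names are made-up values for illustration:

    bridge 1 protocol ieee
    !
    interface Tunnel0
     ! plain GRE tunnel across the layer 3 DCI (hypothetical endpoints)
     tunnel source 192.0.2.1
     tunnel destination 192.0.2.2
     bridge-group 1
    !
    interface GigabitEthernet0/1
     ! LAN-facing interface placed in the same bridge group
     bridge-group 1

This does extend the VLAN across the layer 3 core, but everything gets bridged: ARP, spanning tree, and any broadcast storm goes straight over the tunnel, which is exactly what the OTV-specific enhancements are there to prevent.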
The issue we run into there is that, even though OTV itself actually uses GRE tunnels, OTV has additional enhancements built in specifically to stop the flooding of things like spanning tree, ARP requests and replies, and ICMPv6 neighbor discovery requests and replies, and also to limit the failure domain if something like a broadcast storm were to happen in an individual data center site. Really, at the end of the day, all of these tunneling techniques, all the different types of data center interconnect, are solving the same problem: we're trying to get the two data centers to look like they're on the same LAN.

Now again, as I mentioned, the specific application OTV is designed for is the data center interconnect. You technically can use it for whatever application you want; if you're buying layer 3 service, whether layer 3 MPLS VPN or just regular layer 3 Internet routing, you can configure OTV on top of that, and it will allow you to tunnel your layer 2 traffic between your sites. But there are some things OTV builds in specifically for the DCI.

As I mentioned, it's going to optimize ARP flooding. The edge routers, what we call the authoritative edge devices in OTV, maintain a proxy ARP cache so they can respond to local ARP requests for destinations that are reachable over the data center interconnect. Something like VPLS or Any Transport over MPLS does not have a feature like that built in, so all the ARP requests in the entire network will always be going over the DCI if you use MPLS. Ideally we want to keep just our data plane traffic going over the OTV tunnel, the data center interconnect; we don't want to expand the control plane into a very large environment, because then we run into scalability issues with things like spanning tree or the end-to-end MAC address tables on the switches. So, as I mentioned, OTV is going to be the demarcation point of the spanning tree domain: even though we can use it to span the same layer 2 segment, the same layer 2 VLAN, between the different data center sites, that does not automatically mean they're in the same spanning tree domain.

The other thing OTV is really helpful with is the overlay of multiple VLANs. With something like VPLS you technically can tunnel multiple VLANs, so I could say I have VLANs 10, 20, 30, and 40 on one side and I want them to appear as the same VLANs 10, 20, 30, and 40 on the other side, but we run into limitations in what the provider edge routers on the SP side need to do to allow this. We could build basically four different sets of VPLS tunnels or pseudowires to carry VLANs 10, 20, 30, and 40, or we could possibly do some sort of dot1q tunneling technique, but at the end of the day it's really not designed to do that. You can kind of hack the process to get it to work, but VPLS was originally meant to emulate just a single LAN, with your routers around the edge providing your own demarcation between layer 2 and layer 3 to prevent things like MAC address flooding and ARP flooding.

The other thing OTV gives us is multiple edge routers without overcomplicating the design. MPLS is not going to let you do this easily: you would have to have separate routers with separate MPLS circuits for layer 2, so separate VPLS tunnels or separate Any Transport over MPLS tunnels, and there are no loop prevention mechanisms really built into those protocols. You could potentially configure two circuits, but then you're going to have to run spanning tree on top of them to make sure that one of them is actively forwarding and one of them is standby, and that's not really what we want; ideally we want active/active forwarding on any of our interconnects. As we start to explore this on the command line and go through some configuration and verification examples, it will make a little more sense why you would choose OTV over these other techniques that you could use for the interconnect.
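Jumping ahead to the command line for a second, a minimal sketch of that ARP optimization on the Nexus 7000 looks roughly like this; whether suppression is enabled by default and the exact commands can vary by NX-OS release, and the interface number is just an example:

    interface Overlay1
     ! have the AED answer local ARP/ND requests from its cache
     ! instead of flooding them over the DCI
     otv suppress-arp-nd

    ! view the ARP/ND entries the edge device has learned
    show otv arp-nd-cache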
The next thing we need to know is the specific terminology used in OTV and what the terms really mean, both for the control plane behind the scenes and for the actual data plane forwarding.

The first is the OTV edge device. This is the layer 3 device that actually runs the OTV feature set; in our case this is going to be the Nexus 7000. It is also supported on the ASR 1000 and will be supported on other platforms later down the road. This is where we actually have the tunnel: we take the layer 2 traffic, encapsulate it inside GRE (actually Ethernet over MPLS over GRE), send it across the layer 3 routed network, and on the other side it comes back out and gets decapsulated into layer 2.

We can have multiple edge devices, so in addition to the edge device we have what's known as the authoritative edge device, or AED. The AED comes into play any time we have multiple edge routers for redundancy and we want some load distribution between them. The AED is essentially the active forwarder, on a per-VLAN basis, for the traffic the OTV tunnel is carrying. Say, for example, I have my interconnect and four VLANs, 10, 20, 30, and 40, that I'm trying to bridge over layer 3. If we have two authoritative edge devices, two AEDs, one of them is elected the forwarder for two of the VLANs and the other is elected the forwarder for the other two. As it stands in current code revisions, this is not really something you configure; it's a fairly crude load balancing where one AED is elected for the odd-numbered VLANs and the other AED is elected for the even-numbered VLANs. That does play into your overall design. What I mean is, let's say we have some very bandwidth-hungry applications that all sit on even-numbered VLANs, like 20, 40, 60, 80, and so on. That means the same edge device will be elected for forwarding all of them, and if we don't have any odd-numbered VLANs carrying traffic, like 10, 30, 50, and so on, then the other edge router is not going to be used at all. So the actual numbering you use for the VLANs has an effect on how traffic is forwarded over the DCI, the data center interconnect. The key here is that since we have multiple edge routers, we need some loop prevention built in. We'll see that once we configure OTV with multiple edge routers there's an election for the AEDs, which makes sure that they are never both actively forwarding the same VLAN, because since we are not extending spanning tree over the DCI we have no other layer 2 loop prevention technique. So again, only one router can actively forward a given VLAN at a time, and then we get the per-VLAN load distribution between the edge devices, between the AEDs.

Next we have what's known as the extend VLAN. The extend VLAN is basically the layer 2 segment we're trying to bridge, or tunnel, over OTV. We then have the site VLAN. The site VLAN is used for the AED election, and it is something that is not spanned over OTV: when we configure it, it is specifically not allowed on the overlay interface.
So the site VLAN is just internal, for the two edge routers at a site to talk to each other, similar to how you would think of the peer link in vPC; it's for synchronizing the control plane between the two of them.

Next we have the OTV site identifier. This is used as a basic loop prevention technique to make sure that if a packet goes out and then comes back in, I don't accept my own packet, since I'm part of that same site. When we configure this, we need to make sure it's a unique identifier per site, and if we have more than one edge router at a site, more than one AED, we want to make sure they use the same site identifier so they know they should perform the election and say, I'll forward VLANs 1, 3, 5, 7 and you forward VLANs 2, 4, 6, 8.

We then have the overlay interface. This is the equivalent of our GRE tunnel interface; it's the logical link used for the tunneling. And of course we have the physical link that the tunnel runs on top of, which we call the OTV join interface. The join interface is the physical link or port channel that is used to route upstream toward the data center interconnect, and as it stands with current versions, the join interface cannot be an SVI; it has to be either a physical link or a port channel, specifically a layer 3 port channel, because that's what we're using to route upstream toward the DCI.

We then have two multicast groups for OTV. One is called the OTV control group. This is used for extending the control plane, protocols like ARP or OSPF or EIGRP, over the data center interconnect, and that traffic is tunneled inside multicast over the DCI network. This implies that, by default, your transit, if you're using something like MPLS layer 3 VPN, has to be multicast aware; it has to support both Any Source Multicast (ASM) and Source-Specific Multicast (SSM). The second group is called the OTV data group. This is used when we're actually tunneling multicast between the sites, multicast in the data plane. If I have multicast video feeds going between my data centers, that traffic gets encapsulated inside multicast so we can avoid what's known as head-end replication, which would be taking a multicast packet and making multiple unicast copies of it in order to forward it over the DCI. We'll look at cases where we need multicast for OTV, which is the default behavior, and then at something known as the OTV adjacency server that we can use to tunnel everything over unicast and remove the multicast requirement from the transit network. But the key here is that, by default, you do need a multicast-aware transit network between the AEDs.
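To tie the terminology together, here is a minimal single-site sketch of what this looks like on a Nexus 7000 edge device; the VLAN numbers, site identifier, group addresses, interface names, and IP addressing are all assumed values for the example, not values from the video:

    feature otv

    otv site-vlan 999
    otv site-identifier 0x1

    interface Ethernet1/1
     ! OTV join interface: a native layer 3 routed port (or port channel)
     ! routing upstream toward the DCI; IGMPv3 is needed for the SSM data groups
     ip address 172.16.1.1/30
     ip igmp version 3
     no shutdown

    interface Overlay1
     ! logical overlay interface, roughly the equivalent of a GRE tunnel interface
     otv join-interface Ethernet1/1
     otv control-group 239.1.1.1
     otv data-group 232.1.1.0/28
     otv extend-vlan 10,20,30,40
     no shutdown

The edge device at the other data center gets the same basic configuration with its own join interface addressing and its own unique site identifier, and the extend VLANs have to exist in the OTV VDC and be trunked up to it from the layer 2 domain.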
Now, we've already talked about FabricPath up to this point, and how FabricPath uses IS-IS in its control plane to exchange the switch IDs and build the shortest path trees, which gives us our MAC-in-MAC routing, basically layer 2 IS-IS routing. OTV uses similar logic in that it also uses IS-IS to exchange its control plane, but there is a key difference between them: IS-IS in OTV is actually used to advertise the MAC addresses of the end hosts, whereas in FabricPath IS-IS is only used to build the shortest path tree between the switch IDs and we still rely on normal MAC address flooding for the devices in the core to actually switch the data plane. With OTV this is an event-driven behavior: the edge routers, the authoritative edge devices, only know the MAC addresses that are actively advertised in IS-IS.

The advantage of doing it this way is that it gives us more control over what specific traffic can be sent over OTV. In the case of something like an MPLS layer 2 VPN, we would have to apply a layer 3 access list to the interface, or maybe some sort of VLAN access list, to stop traffic from flowing across the link. With OTV we can stop it in the control plane by telling IS-IS not to advertise specific MAC addresses. If I have an individual host that I want to stay local and not go over the DCI, I can deny it from being advertised in the IS-IS control plane. You can think of it kind of like a distribute list applied to layer 2 MAC addresses inside the IS-IS advertisements; I'll show a sketch of this in a moment.

Comparing some of the terminology between FabricPath and OTV: FabricPath is considered MAC-in-MAC routing, because we take our Ethernet frame and encapsulate it inside FabricPath, while with OTV we take our layer 2 Ethernet frame and encapsulate it inside IP (again, it's actually Ethernet over MPLS over GRE over IP), so OTV is considered MAC-in-IP routing.

The control group multicast, as I mentioned, is needed to exchange the IS-IS control plane. IS-IS, specifically, has its own layer 3 transport, so it doesn't need IP to get from one point to another. This means that if I have the AEDs, the edge devices, on the data center edge and I want them to form IS-IS adjacencies, I'm going to have a problem, because the transit network between them is IPv4 and IS-IS does not run over IP; it runs over its own CLNS/CLNP stack. To allow this, the IS-IS control plane is encapsulated inside Ethernet over MPLS over GRE inside IPv4 multicast. What that means is that the core of the data center interconnect must be Any Source Multicast aware, basically it must allow shared trees, which means we need a rendezvous point, either in normal sparse mode or in bidirectional mode. We'll see there are some workarounds for this: we can encapsulate IS-IS inside unicast if we're using the adjacency server, and that removes the requirement that the core of the DCI network be layer 3 multicast routing aware.
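As a rough sketch of that MAC filtering idea on NX-OS, with the MAC address, list name, and route-map name all made up for the example, it looks something like this:

    mac-list LOCAL-ONLY seq 10 deny 0000.1111.2222 ffff.ffff.ffff
    mac-list LOCAL-ONLY seq 20 permit 0000.0000.0000 0000.0000.0000

    route-map OTV-MAC-FILTER permit 10
     match mac-list LOCAL-ONLY

    otv-isis default
     vpn Overlay1
      ! keep the denied host from ever being advertised over the overlay
      redistribute filter route-map OTV-MAC-FILTER

Because the denied MAC address is simply never advertised in IS-IS, the remote site never learns it and the host stays local; that's the distribute-list behavior I was describing.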
The next portion, of course, is the data plane. Once we figure out who the edge devices in the network are and what MAC addresses they're advertising, how do we actually forward traffic between the sites? The key is that we're going to use both multicast and unicast, and I'm talking about IPv4 here, as the transport. For multicast, we have the control group, which is used for any multicast control plane protocols, things like OSPF or EIGRP or PIM, that we're trying to tunnel over OTV, and it's also used for broadcast, so things like ARP get tunneled inside the multicast control group. Then we have our normal unicast data, database replication, web browsing, whatever normal unicast traffic we have, which is encapsulated as normal unicast between the edge routers; again, specifically it's IPv4 or IPv6, whatever the application is, over Ethernet over MPLS over GRE, in this case over IPv4 unicast. Then we have the potential for actual multicast data flows, something like video on demand. In that case the multicast is encapsulated as Source-Specific Multicast, SSM, which additionally implies that the authoritative edge devices, the AEDs, must be using IGMP version 3 in order to generate the source-specific (S,G) joins that get us onto the shortest path tree toward the particular remote sender.

So we're going to go through this: look at the OTV configuration on the edge, look at what actually happens in the core of the DCI, look at what type of traffic runs as the multicast control plane group, what runs as the multicast data plane group, and what runs as regular unicast data. Then we'll look at the configuration of the adjacency server, which essentially removes the requirement for multicast. In the case that we have more than two data centers connected over the DCI, though, we would not want to use it; ideally you would have a multicast-aware layer 3 transit, which means your control plane and your actual multicast data plane are transported efficiently. If you don't have layer 3 PIM in the middle of the network, you can use the adjacency server (sketched below), but it can result in additional replication. Say, for example, our application is something like triple play service, where we have voice, video, and Internet traffic running between the data centers. If we have HD video streams moving between the data center sites and I have three or more sites, I would have to take those streams and re-encapsulate them as unicast multiple times in order to get them over the data center interconnect. It's kind of like the legacy problems we had many years ago with Frame Relay hub-and-spoke networks: to send multicast or broadcast traffic over a non-broadcast network we had to do what's known as pseudo-broadcast, or replicated unicast, taking the multicast or broadcast packet and actually making multiple layer 2 unicast copies, whereas in this case the adjacency server is forcing us to make multiple layer 3 unicast copies. If you only have two data center sites, it's not an issue, because it's always point-to-point unicast between the neighbors.

Now, as I mentioned, there are some specific optimizations in OTV, and some specific reasons why you would want to run OTV as the DCI solution as opposed to something like layer 2 MPLS tunnels. The first of them is that the other options bridge all traffic: without some extensive manual filtering, things like ARP and layer 2 protocols like spanning tree are normally going to go straight across the layer 2 tunnels. So in the case of a point-to-point AToM tunnel, an Any Transport over MPLS layer 2 pseudowire, that is not going to stop spanning tree and it's not going to stop broadcast storms. OTV fixes this by having the edge routers run the proxy ARP and ICMPv6 neighbor discovery cache, and by terminating the spanning tree domain locally on the edge router.
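For reference, the unicast-only adjacency server mode mentioned above is configured roughly like this; one edge device is nominated as the adjacency server and the other edge devices point at it, and the IP address here is a placeholder:

    ! on the edge device acting as the adjacency server
    interface Overlay1
     otv adjacency-server unicast-only

    ! on the other edge devices
    interface Overlay1
     otv use-adjacency-server 172.16.1.1 unicast-only

In this mode you don't configure the control and data groups at all, and any multicast or broadcast that has to cross the DCI ends up head-end replicated as unicast, which is why it scales best with just two sites.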
Let's go ahead and take a look at the topology we're going to use for these examples. We essentially have all of the 7Ks here, eight different VDCs that we're going to use to establish the OTV tunnels between, and there are a couple of hardware requirements we need to talk about first. In the current NX-OS releases on the Nexus 7K there are some incompatibilities between OTV and other features, and the most notable one is that you cannot run OTV inside the same VDC where you have layer 3 SVIs. This is actually a pretty big limitation, because normally we're going to have our layer 3 VLAN interfaces, where we run our first hop redundancy protocols like HSRP, VRRP, or GLBP, and in the virtual device context where we have that configuration OTV is not going to work. There's a technical limitation behind the scenes in how the tunneling is implemented that means we're not able to do the encapsulation there, so we essentially need a separate, dedicated VDC to run the OTV process. There are a couple of designs you could use for this: you could have a different physical box that's used for OTV, or you could do kind of a VDC-on-a-stick where you route out to the OTV VDC and then back in. As long as the OTV process is separated from the rest of the layer 3 routing, that's really what we need to be aware of.

In this particular design, if we look back at the diagram, we have the two 7Ks that are essentially on the same layer 2 segments as our server 1 and our router 3. We know from the previous physical topology that we actually have some 5Ks and some 2Ks in there, but from the OTV design point of view that's really outside the scope: as long as we have layer 2 reachability up to the edge routers, up to the AEDs, that's all OTV cares about. It doesn't care if you're running vPC or even FabricPath; you just get the traffic up to the layer 2 edge, and then we encapsulate it inside the layer 3 tunnel.

First we're going to look at doing OTV just between two devices. We have our layer 2 domain that gets us up to this edge router, Nexus 7K1-3, which is going to be our first authoritative edge device. In the middle of the network is the DCI, our layer 3 data center interconnect: over Nexus 7K1-4 and Nexus 7K2-5 we're going to be running routing, and we could use an IGP or BGP, because as long as we have layer 3 connectivity between the AEDs that's all we really care about here. Then we have the other edge router on the other side, which is Nexus 7K2-6; this is our other AED. The layer 3 tunnel is essentially going to be between these devices, and again this is Ethernet over MPLS over GRE over IPv4 unicast and IPv4 multicast. You do need to take this additional encapsulation overhead into account for your actual applications, because OTV edge devices do not support layer 3 fragmentation; you have to offload that down to your actual end machines. So if you're using jumbo MTU and your DCI supports jumbo MTU, you would want to lower the MTU on the end machines, maybe down to 8000-something. We're going to look specifically at the exact packet structure; it's not a lot of overhead, but it is something you do need to take into account.
OK, we have a couple of questions here. The first is: should my service provider allow multicast, or is it allowed by default? It is not allowed by default. When you're buying layer 3 transit service, that's generally one of the options you have to pay extra for; you normally don't get multicast service unless you specifically buy it. Also, in the case of MPLS layer 3 VPN, when you're looking at your service contract, your service level agreement, that's going to be one of the line items: do you support multicast, how do you support multicast, am I going to have to use my own rendezvous point or use yours, am I going to use a static RP, Auto-RP, or BSR? That's all part of the agreement between the service provider and the customer, because when you're using MPLS L3VPN you will be exchanging layer 3 routing updates with the service provider, typically BGP between the provider edge and customer edge, though it could technically also be an IGP like OSPF or EIGRP. So typically, no, multicast is not allowed by default.

There's a question: I'd understood that you can't have SVIs in the OTV VDC. That is true, and shortly you'll be able to use a loopback for the join interface; I did read that in some of the release notes as one of the features they're adding, which makes it a little more independent of your layer 3 infrastructure routing northbound toward the DCI. Right now the limitation is that the join interface has to be a native layer 3 routed interface or a native layer 3 routed port channel; it's not supported on an SVI and it's not supported on a loopback. You might expect it to work like BGP, where you can set the update source to a loopback, or like a normal GRE tunnel, where you can set the tunnel source to a loopback, but with the current code revisions that's not a supported feature for OTV.

There's a question: can your layer 3 DCI devices be virtual off the core 7K? If you mean the device that's doing the OTV encapsulation, the one actually running the OTV feature set, then yes, OTV can run in a VDC, and that's what we're going to be doing here. We have two physical boxes, Nexus 7K1 and Nexus 7K2, and these two physical boxes are each split into four VDCs. The issue is that we can't run OTV inside the same VDC where we're doing the SVI routing, which is what Nexus 7K1 and 7K2 are doing now. What you could do is send the traffic out of the core VDC and then back in, and have a dedicated VDC run OTV; this is basically VDC-on-a-stick, where you're routing out and then back in, while your physical routing toward the uplinks goes in the other direction.

There's also one other requirement I forgot to mention: you cannot do this on F1 cards, because F1 is layer 2 only. You need the layer 3 M-series modules in order to do OTV, so the edge ports facing upstream, which is not the overlay interface but our join interface, have to be on an M1 module.
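Since the join interface has to be a routed port or a routed port channel on M-series ports, a port-channel version of the join interface would look roughly like this; the member ports and addressing are assumptions for the example:

    interface Ethernet1/3-4
     no switchport
     channel-group 10 mode active
     no shutdown

    interface port-channel10
     ! layer 3 port channel used as the OTV join interface
     no switchport
     ip address 172.16.2.1/30
     ip igmp version 3
     no shutdown

    interface Overlay1
     otv join-interface port-channel10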
There's a question: will OTV run over older SONET-type point-to-point circuits, or is underlying MPLS a requirement? Yes, it will actually run over anything, and that's one of the design advantages of OTV: as long as you have any type of IPv4 reachability, you can use whatever circuits you want. You could use legacy DS3, OC-3 or OC-12 SONET, point-to-point 10 Gigabit Ethernet, or point-to-point OC-768; the physical circuit is really up to you. The only requirement is that you're basically able to ping between the edge routers, that you have basic IPv4 reachability. Compare that to some of the other techniques like AToM or VPLS, which are not transport agnostic; they require a specific transport, and for AToM and VPLS that means running MPLS in the core of the service provider network. The idea of OTV is that you might have a data center in Los Angeles and a data center in New York and not be able to buy the same service from the same provider at both sites, so for whatever your long-haul link is, you can run whatever you want; you could even just route it over the Internet, although you're probably not going to get the greatest service level agreement that way. Essentially, any layer 3 connection is all that OTV requires.

There's a question: is OTV Cisco proprietary? I believe it actually is right now, but it's not going to stay that way; there is a standards-track version of it. Let me open that up and search for the OTV RFC. It's called Overlay Transport Virtualization and it's currently still an Internet draft; the most recent version, version 3, has expired, so there should be another version coming out shortly. One thing that's kind of notable about this: if you read through the RFC specifics and look at the encapsulation of the data plane, it says the OTV format is a UDP over IPv4 encapsulation. That's what the standards track says, but it's not what Cisco is actually implementing. Cisco implements it as Ethernet over MPLS over GRE over IPv4, so it's not encapsulated in UDP, it's encapsulated in MPLS over GRE. Most likely the next revision that comes out will update this, because it wouldn't make sense to write a standard that says use UDP while Cisco's implementation uses GRE; those two would not be interoperable. So right now it's Cisco proprietary, but it is on the standards track.

There's a question: just to make sure I understand this, if I only have two DCs, I can run OTV without multicast? That is correct, and you can actually run it without multicast even if you have more than two DCs, three, four, or five; it just means that your replication of the traffic is not going to be as efficient as it would be with multicast. Really, it only matters if you're running multicast in the data plane. If you don't have any multicast applications, then you don't really need multicast in the core; it just means that some of the control plane traffic, like your IS-IS advertisements, your ARP flooding, or some of your MAC flooding, has to use unicast replication, but that's a much smaller amount of traffic compared to actual UDP multicast data flows. If you're running HD video, say you're a video on demand provider, it is going to be a big deal, because you generally have a couple hundred channels, each of them maybe 15 megabits per second or so for HD, so the amount of traffic you would have to replicate over and over in place of multicast starts to add up quickly.
There's a question: how does layer 3 addressing work with OTV between the two data centers? Do both data centers have to share IP addressing and VLANs for the extended VLANs? They do. The goal of what we're trying to accomplish is layer 2 tunneling, so if we look back at our diagram, the end result is that router 3 is going to have the address 10.0.0.3 and router 2 is going to have the address 10.0.0.2; we have VLAN 10 on this side and VLAN 10 on the other side, and it's the same broadcast domain end to end. These devices do not have to route to reach each other; from their perspective they look like they're connected to the same layer 2 LAN. The reason we're doing it that way is that the virtual machine mobility applications require it: when you're doing something like VMware vMotion, it is a requirement that your VMware hosts are on the same subnet, so they have to have layer 2 connectivity in order to do the live vMotion between them. We can still do layer 3 routing on top, but that's kind of outside the scope of the OTV configuration.
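Once the overlay comes up, a handful of verification commands tie all of this together on the Nexus 7000 (I'm leaving the output out here since it varies by release):

    show otv
    show otv adjacency
    show otv vlan
    show otv route
    show otv arp-nd-cache

show otv summarizes the overlay, the join interface, the groups, and the site identifier; show otv adjacency shows the remote edge devices discovered over IS-IS; show otv vlan shows which edge device is the AED for each extended VLAN; show otv route shows which MAC addresses were learned locally versus over the overlay; and show otv arp-nd-cache shows the ARP and neighbor discovery entries the AED can answer locally.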
Info
Channel: IT-TALK
Views: 8,134
Rating: 5 out of 5
Id: y9AM52uugig
Length: 42min 43sec (2563 seconds)
Published: Thu Nov 22 2018