You’ve come on a long journey with me. We’re finally at part 5, where the configuration
begins. In part 4, we looked at the two ways that
addresses can be learned. One of these is through data plane learning,
also known as flood-and-learn. In this video, we’re going to configure
VxLAN with data plane learning on some Nexus 9000 series switches. If you watched part 4, you may remember that
I said that control plane learning was preferred. So, why then, are we going to configure data
plane learning here? Isn’t it a waste of time? We’re doing this because it’s much simpler. This is an easier way to learn VxLAN, as you’ll
get to see VTEP configuration, VNIs, the overlay, and verification. In part 6, we’ll get complicated and see
how the control plane is configured. Let’s get started... We’re going to use a very basic topology
to make configuration as simple as possible. We will use two Nexus 9000 switches. Each switch will have a VTEP interface. There is a single routed link between the
switches. This routed link will simulate the underlay
network. I have attached a host to each of these switches. They will connect to access ports, configured
with VLAN 1000. As they’re both in the same subnet, they
will not be able to communicate initially, due to the routed link. This is what we’ll fix with VxLAN. During the configuration, we will associate
VLAN 1000 with VNI 5000. We’ll also configure multicast to handle
BUM traffic. We’ll start with configuring the underlying
infrastructure. This includes enabling features, setting the
MTU, mapping VLAN 1000 to VNI 5000, configuring the routed link, configuring host ports, and
creating loopback interfaces. We’ll start by enabling the features we need. OSPF is used to manage the underlay. The PIM feature is for multicast, which handles BUM traffic. The nv overlay feature enables VxLAN; NV means ‘Network Virtualization’. The vn-segment-vlan-based feature lets us map VLANs to VNIs, so frames can be tagged with the VxLAN header. Notice that there’s a warning about a system routing template. The routing template is how the Nexus formats certain data structures in memory. On the 9000 series platform, this is required on version 7.0(3)I5(1) and earlier. Newer versions take care of this transparently, so you won’t get this warning.
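If you’re typing along, here are the four feature commands from this step:

feature ospf
feature pim
feature nv overlay
feature vn-segment-vlan-based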
We enter the ospf routing instance, simply to start the OSPF process. We don’t need to add anything else to OSPF yet.
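On NX-OS that’s a single line; I’m using ‘UNDERLAY’ as an example process tag here:

! ‘UNDERLAY’ is just an example process tag; pick your own
router ospf UNDERLAY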
We define VLAN 1000 for the hosts. The vn-segment command maps this VLAN to VNI 5000.
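That mapping is just two lines:

vlan 1000
  vn-segment 5000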
Remember that the extra VxLAN headers will lead to larger packets. The MTU needs to be increased to allow for this. Keep in mind that the maximum MTU may vary on different platforms.
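As an example, something along these lines enables jumbo framing; the exact commands and maximum values depend on your platform and NX-OS version:

! global jumbo MTU for switched ports
system jumbomtu 9216
! per-interface MTU on the routed link (port number is just my lab example)
interface Ethernet1/49
  mtu 9216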
Now, we’ll set that routing template from earlier. As mentioned before, if you’re running a newer version of NX-OS, you won’t need to do this. Changing the routing template needs a reboot, so we’ll save the config, and do that now. Don’t worry, I won’t make you watch the whole reboot process. :)
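On my switches the steps looked roughly like this; treat it as a sketch, and check the warning text on your own switch for the exact template name:

! template name may differ; check the warning on your switch
system routing template-vxlan-scale
copy running-config startup-config
reload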
Now we’ll configure port 49 as the routed link with the no switchport command. Next, give it an IP address and add the interface to OSPF. We will be running multicast, so let’s set the interface to sparse mode.
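Here’s a sketch of the routed link; the port number and addressing are just my lab examples:

interface Ethernet1/49
  no switchport
  ! example addressing; the other switch would use 10.0.0.2/30
  ip address 10.0.0.1/30
  ip router ospf UNDERLAY area 0
  ip pim sparse-mode
  no shutdown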
Let’s verify that this is working so far… We can ping, and eventually… OSPF neighbours will form.
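For example, with the lab addressing above:

! 10.0.0.2 being the far end of the routed link
ping 10.0.0.2
show ip ospf neighbors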
The host ports are quite simple. They’re just configured as access ports in VLAN 1000. No special VxLAN config here.
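Assuming the host hangs off port 1, it’s a standard access port:

! port number is just an example
interface Ethernet1/1
  switchport mode access
  switchport access vlan 1000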
We will need loopback interfaces for two reasons. For one, we’ll use the loopback IP as the rendezvous point in the multicast topology. Also, the VTEP gets its IP address from the loopback. We’ll see both of these in the next section. Make sure all loopback interfaces are added into OSPF, and are in sparse mode.
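A sketch of the first loopback, with example addressing:

interface loopback0
  ! unique per switch; the second switch might use 10.1.1.2/32
  ip address 10.1.1.1/32
  ip router ospf UNDERLAY area 0
  ip pim sparse-mode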
A second loopback interface is used as the anycast rendezvous point. As this is anycast, it is the same on both switches.
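Again as a sketch; 10.255.255.1 is just my example, but whatever address you choose must be identical on both switches:

interface loopback1
  ! anycast RP address; the same on both switches
  ip address 10.255.255.1/32
  ip router ospf UNDERLAY area 0
  ip pim sparse-mode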
And finally, quickly verify that the loopback interfaces are reachable. We’ll now move on to configure the rest of the multicast infrastructure. This is a very simple multicast topology. I won’t get into how multicast works, as it’s way outside the scope of this video. After that, we’ll look at configuring the VTEP, also known in the Nexus world as the NVE interface.
To start, we’ll configure the rendezvous point address. Remember that this is based on the loopback 1 interface, which has the same IP on both switches.
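With the example addressing from earlier, that’s a single command on each switch:

! use the loopback1 address as the RP for the full multicast range
ip pim rp-address 10.255.255.1 group-list 224.0.0.0/4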
The RPs are responsible for a very wide range of multicast groups in this demo. In the real world, you’ll want to tighten this up a bit.
Next, RP anycast is configured for these two routers. The anycast RP address is bound to the real RP addresses.
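Using my example loopback addresses, the binding looks like this on both switches:

! bind the anycast RP address to each switch’s real (loopback0) RP address
ip pim anycast-rp 10.255.255.1 10.1.1.1
ip pim anycast-rp 10.255.255.1 10.1.1.2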
That’s all there is for the multicast configuration. Pretty simple, isn’t it? Of course, it could get quite complicated in the real world.
Now, it’s time to configure the VTEP. VTEP is the generic term for the interface that encapsulates traffic. On the Nexus platform, this uses a virtual interface called the NVE interface. The IP address of the NVE is not configured directly. Rather, it is taken from the loopback interface. This is also where we bind VNIs to their multicast groups. This is how BUM traffic is handled. If you need more information on multicast and BUM traffic, have a look back at part 4.
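Here’s the NVE interface as a sketch, tying together loopback0, VNI 5000, and the multicast group:

interface nve1
  no shutdown
  ! the VTEP takes its IP address from loopback0
  source-interface loopback0
  ! BUM traffic for VNI 5000 floods to this group
  member vni 5000 mcast-group 230.1.1.1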
We can verify this with show nve interface. This NVE interface is up, is configured with VxLAN encapsulation, uses data plane learning, and gets its IP from loopback0. The configuration is all done. Were you expecting it to be harder? I know I was when I first started investigating
VxLAN. If you’re looking for more complication
in your life, don’t worry. We’ll get complicated in part 6 when we
look at EVPN. But for now, let’s see data plane learning
in action. We can use the show nve peers command to see
which VTEPs have been learned. And… there’s nothing. Why’s that? This is because there has not been any flooding
yet. When a host sends an ARP over VNI 5000, it
will be flooded to multicast group 230.1.1.1. We can see this with the show nve vni command. We can also confirm that this VNI uses data
plane learning. Let’s jump onto one of the hosts. This one is 192.168.0.10, so we should be
able to ping 192.168.0.20. This shows that VxLAN is configured correctly. It also should have flooded an ARP request
on the VNI, to all VTEPs in the multicast group. Back on the switches, we can now see that
the remote VTEPs have been discovered. We’ll run through a quick summary of some verification commands. These will help with troubleshooting later.
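There are three commands we keep coming back to:

show nve interface
show nve vni
show nve peers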
Show nve interface shows us the learning mode. VxLAN encapsulation shows that this is the VTEP interface. We can also see that it gets its IP from loopback0. Show nve vni also shows the learning mode. This also has a list of all the VNI-to-multicast-group mappings. You can also see that VNI 5000 is bound to VLAN 1000. Show nve peers shows discovered VTEPs. As this uses flood-and-learn behaviour, these
addresses are cached, and will time out. If we go to one of the hosts and run a ping,
the VTEPs are discovered again. In the sixth and final part of this series,
we’ll cover EVPN configuration, for control-plane learning. If you found this video useful or interesting,
please subscribe to the channel, and hit the notifications button. Also, let me know what you thought of the
video in the comments section. I’ll see you in Part 6.