1.3 The network core

Captions
In this section we're going to overview the network core: we'll see what happens inside the network core and introduce a number of important topics. We'll talk about packet forwarding, packet queuing, delays, and packet loss; we'll talk about an alternative to packet switching known as circuit switching; and we'll talk about the structure of the Internet, where we'll see what we mean when we say the Internet is a "network of networks."

The network core consists simply of a set of routers that are interconnected by a set of communication links, and the Internet's core operation is based on a principle known as packet switching, which is really relatively simple. The idea is the following: the end hosts that we talked about earlier take application-level messages, divide those messages into chunks of data, put those chunks of data inside packets, and send those packets into the Internet. Those packets are then forwarded along a path from a source node to a destination node, for example from a web server to your laptop that's running a web browser that made a request to that web server.

Let's unpack that last statement just a little bit, because we actually heard a lot of important words there: we talked about forwarding, we talked about a path from source to destination, and we talked about a source and a destination. There are two key functions performed inside the network core: forwarding, sometimes also known as switching, and routing. Let's take a look inside a router so we can see what these functions are.

Forwarding is a local action: it's about moving an arriving packet from the router's input link to the appropriate router output link. Forwarding is controlled by a forwarding table, and there's a forwarding table inside each and every one of the millions of routers in the Internet. When a packet arrives, a router will look inside the packet for a destination address, look up that destination address in its forwarding table, and then transmit the incoming packet on the output link that leads to that destination. That's pretty simple conceptually: lookup and forwarding. But you might be wondering how the contents of that forwarding table are created in the first place, and that's where we encounter the second key function of the network core: routing. Routing is the global action of determining the source-to-destination paths taken by packets. As we'll see, routing algorithms compute these paths, and compute the local, per-router forwarding tables needed to realize this end-to-end forwarding path.

A good analogy for understanding the difference between forwarding and routing is to think about taking a trip in a car. I recently drove from San Jose, California all the way to the East Coast, to Northampton, Massachusetts; it was a long trip. I decided to take the upper route rather than the lower route, and that's the routing decision: the path that's taken from the source, San Jose, California, to the destination, Northampton, Massachusetts. Now when I get to an interchange, say in Sacramento, California, I'm coming into the city on one of the input roads and I need to be forwarded out of the city on an output road, and the same in Cleveland, and really at all intersections along the way. That switching from an input road to an output road is the local forwarding function, with the global routing function determining which of the output roads I'm actually forwarded onto. You might want to think just a bit about this relationship between the local forwarding function and the global routing function: can you think of other analogies?
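To make the lookup-and-forward step concrete, here is a minimal sketch, not from the lecture: a toy forwarding table mapping destination-address prefixes to output link numbers (the addresses, prefixes, and link numbers are invented for illustration; real routers match on the longest matching destination prefix, which this toy version imitates with string prefixes).

```python
# Toy forwarding table: destination-address prefix -> output link number.
# The addresses, prefixes, and link numbers are invented for illustration.
FORWARDING_TABLE = {
    "1100": 3,
    "1101": 2,
    "0111": 1,
}
DEFAULT_LINK = 0  # where to send packets that match no entry

def forward(dest_address: str) -> int:
    """Local forwarding: choose the output link whose prefix matches the
    packet's destination address (longest matching prefix wins)."""
    best_prefix, best_link = "", DEFAULT_LINK
    for prefix, link in FORWARDING_TABLE.items():
        if dest_address.startswith(prefix) and len(prefix) > len(best_prefix):
            best_prefix, best_link = prefix, link
    return best_link

# A packet arriving with destination address 11010110... is switched to link 2.
print(forward("11010110"))  # -> 2
```

Routing, by contrast, is the machinery that would fill in such a table at every router so that these purely local decisions add up to an end-to-end path.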
Let's next focus on the transmission of packet bits at a router in the network core. In this figure we're illustrating the bits in a packet being transmitted from one router to the next. You'll hopefully recall that we said that if a packet is L bits long and the link transmission rate is R bits per second, then it's going to take L/R seconds to transmit the packet onto the link. Transmitted bits will be received and gathered up at the receiving end of the link until the full packet has been received; you can see the bits of the packet being gathered up here. Once the packet has been fully received, it can then be forwarded on to the next hop, and so on. This is what's known as the store-and-forward operation of a packet-switched network.

Now let's take a closer look at what happens as packets arrive at a router for forwarding. In this figure we see a router with three links. Let's assume that host A is sending packets to host C and host B is sending packets to host E, and now let's take a close look at the input link rates. The transmission rate R of the link from A to the first-hop router is 100 megabits per second, as is that of the link from B to the first-hop router, but the transmission rate of the link from the first-hop router to the second-hop router is only 1.5 megabits per second: it's almost 100 times slower. And this isn't far from the case in a home network, where the home router may have attached Ethernets running at a gigabit per second, or Wi-Fi at 54 megabits per second, but the access link to the cable headend is much slower, perhaps only 5 or 10 megabits per second.

So here's the question: what happens as packets arrive at this first-hop router? The router in this example can only transmit at 1.5 megabits per second, and certainly packets can arrive a lot faster than 1.5 megabits per second if A and B are both transmitting a lot of packets at the same time. If too many packets arrive at too fast a rate, then a queue of packets will form in that first-hop router, as shown in the figure. Queuing happens whenever work arrives faster than some service facility can actually serve that work.

Of course, we're all familiar with queues. Here's a queue of cars waiting for service at a toll booth; we wait in lines at the store checkout counter; and I imagine many of you have waited in line at the bursar's office to pay your bills. Here's a group of people queuing to get into a building, and here's a group of folks in the UK waiting. If you've been in the UK, you probably know that "queuing" is the word Brits use for waiting in line, and the Brits are really good at queuing. Check out this video: "This is The Awfully Thorough Guide to Being British. Cor blimey, guv'nor! In this episode: queuing. If queuing were an Olympic event, then Great Britain would win gold; then it would wait for an hour, take a ticket, and win silver and bronze. No one queues better than the British, and if you want to fit in, you need to know the rules."

Well, to get back to networking: packet queues are going to form at a router's outbound link whenever the arrival rate (in bits per second) on the input links exceeds the transmission rate (in bits per second) of that output link for some period of time. When there are packet queues, there are going to be queuing delays: packets are going to have to wait in routers rather than being forwarded on their way to their destination.
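As a quick sanity check on the L/R transmission delay and the store-and-forward idea, here is a small sketch of my own (not part of the lecture); the packet size is an assumed value, and the two link rates are the 100 Mb/s and 1.5 Mb/s rates from the example above:

```python
def transmission_delay(packet_bits: float, rate_bps: float) -> float:
    """Time to push an L-bit packet onto a link of rate R bits/sec: L / R."""
    return packet_bits / rate_bps

# Assumed example packet: 12,000 bits (a 1,500-byte packet).
L = 12_000            # bits
R_access = 100e6      # 100 Mb/s link from host A to the first-hop router
R_bottleneck = 1.5e6  # 1.5 Mb/s link out of the first-hop router

print(transmission_delay(L, R_access))      # 0.00012 s = 0.12 ms
print(transmission_delay(L, R_bottleneck))  # 0.008 s   = 8 ms

# Store-and-forward: each router must receive the entire packet before it can
# begin sending it on the next link, so over these two links the packet incurs
# both transmission delays back to back (ignoring propagation, queuing, and
# processing delays).
end_to_end = transmission_delay(L, R_access) + transmission_delay(L, R_bottleneck)
print(end_to_end)  # about 0.00812 s
```

The mismatch between the fast input links and the slow output link is exactly why a queue can build up at the first-hop router.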
And because there's only so much memory to store queued packets in a router, if the queue becomes too long and the router's memory is exhausted, an arriving packet may find no memory in which to be stored. In such cases the packet is going to be dropped, or lost, at that router, and as we'll see, the fact that packets can be delayed and/or lost is going to be a major source of headaches for a lot of network protocols.

So far we've been talking about packet switching, and we've just seen that packet queuing delays and packet loss can happen, because the network doesn't always control senders carefully enough to ensure that large queuing delays and packet loss don't happen. Now, packet switching is not the only way to build a network. Indeed, long before the Internet was around, and long before packet switching, telephone networks employed a different technology known as circuit switching. Let's take a look at circuit switching.

In circuit switching there's a notion of a call, rather than packets, flowing from source to destination. Before a call starts, all of the resources within the network that are going to be needed for that call are allocated to that call, from source to destination. So once the call begins, the call will have reserved enough transmission capacity for itself to ensure that queuing will never occur: there's no delay other than propagation delay, and no loss of data within the network, because link capacity has been reserved for the exclusive use of this call. In this diagram, each link has four circuits. The call from the top left to the bottom right is allocated the second circuit on the top link and the first circuit on the right link. The circuits are dedicated resources; they're not shared with any other users. It's really like there's a wire from source to destination.

So all this sounds pretty good: no delays, no loss, doesn't get any better than that. Until you start to think about the fact that resources are reserved for the exclusive use of a call, but the circuits can go idle if there's no data to send on that call. And there's the rub: if the bandwidth isn't used by the call, it's lost; no other calls can use it. And so a circuit-switched network can be inefficient, as we'll see in a second.

Circuit switching is done in one of two ways: either frequency-division multiplexing (FDM) or time-division multiplexing (TDM). In FDM, the electromagnetic or optical spectrum is divided into narrow frequency bands, and each call is allocated one of those narrow bands and can transmit at the full rate allowed by that band. In TDM, time is divided into slots, and each call is allocated a periodic set of slots; a source can transmit only during its allocated time slots, but during those slots it can transmit at the maximum rate of the full, wider frequency band.

Now that we've been introduced to the notions of both circuit switching and packet switching, let's take a look at a numerical example of the number of users that are supportable in a specific networking scenario. Here's the scenario we want to look at. Let's assume that we've got a one-gigabit-per-second link and there are N users, and each of these users behaves as follows: when they have data to send, they need to send at 100 megabits per second, which is one-tenth of the overall link bandwidth. However, users are only going to be busy 10 percent of the time, which means that 90 percent of the time they have no data to send. Let's look at how many packet-switched users and how many circuit-switched users can be supported in this network setting.
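To illustrate the FDM/TDM contrast, here is a small sketch of my own (not from the lecture; the link rate and number of circuits are assumed, T1-like values): both schemes give each call the same average share of the link, but TDM delivers that share as short bursts at the full link rate, while FDM delivers it continuously at a narrow-band rate.

```python
# Assumed example: a 1.536 Mb/s link carved into 24 circuits.
LINK_RATE_BPS = 1.536e6
NUM_CIRCUITS = 24

# FDM: each call gets a narrow frequency band and transmits continuously
# at that band's rate.
fdm_rate_per_call = LINK_RATE_BPS / NUM_CIRCUITS   # 64 kb/s, all the time

# TDM: each call gets 1 slot out of every NUM_CIRCUITS slots, transmitting
# at the full link rate during its slot, and nothing in between.
tdm_burst_rate = LINK_RATE_BPS                      # 1.536 Mb/s, 1/24 of the time
tdm_average_rate = tdm_burst_rate / NUM_CIRCUITS    # still 64 kb/s on average

print(fdm_rate_per_call, tdm_average_rate)  # both 64000.0 bits/sec

# Either way, an idle call's 64 kb/s share is reserved and cannot be used
# by anyone else -- the inefficiency described above.
```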
Let's suppose that N is 35 users. For the case of circuit switching the calculation is easy: each user needs 100 megabits per second under circuit switching, and so a circuit-switched network can support at most 10 users at a time.

Now let's take a look at packet switching. Remember, each user needs 100 megabits per second, so we can support 10 packet-switched users just like we can support 10 circuit-switched users, no problem. But remember, each user is only busy 10 percent of the time. What happens if we allow all 35 users into the system? What's the probability that more than 10 of them are active at a given time? Only when more than 10 users are active will queues form and grow. One can show, with some basic probability and combinatorics (which are not required for this class, so we won't go into the calculation here), that the fraction of time during which more than 10 of the 35 users are active is 0.0004. That is, if we let all 35 users into the system under packet switching, then the percentage of time when queues will start to grow is less than four hundredths of one percent. That's a pretty small number; maybe we're willing to put up with some very occasional delay and loss in order to allow all 35, rather than just 10, users into the system. This performance gain under packet switching is known as the statistical multiplexing gain of packet switching, and it was a key argument for using packet switching when packet switching was invented 60 years ago.
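The 0.0004 figure quoted above can be checked with a few lines of Python; this is just a sketch of my own verifying that number, assuming the 35 users are each active independently with probability 0.1, so the count of active users is binomial:

```python
from math import comb

N = 35           # users sharing the 1 Gb/s link
P_ACTIVE = 0.1   # each user is busy (sending at 100 Mb/s) 10% of the time
CIRCUITS = 10    # more than 10 simultaneously active users overloads the link

# P(more than 10 of the 35 users are active at the same time):
# a binomial tail sum over k = 11 .. 35.
p_overload = sum(
    comb(N, k) * P_ACTIVE**k * (1 - P_ACTIVE)**(N - k)
    for k in range(CIRCUITS + 1, N + 1)
)

print(f"{p_overload:.6f}")  # roughly 0.0004, i.e. about 0.04% of the time
```

So allowing all 35 users in costs almost nothing in terms of overload probability, which is precisely the statistical multiplexing gain.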
It may seem to you now that packet switching is just about the best thing since sliced bread. Is it a slam-dunk winner? It might seem so, and even today's telephone networks actually carry data in packets, so in some sense we could declare packet switching the winner. Packet switching is particularly great for bursty data, when a source only occasionally has data to send. It's simple: there's no call setup and no resource reservation; a host just starts sending the data it has to send. We've seen that congestion, packet delay, and loss can happen, but we'll also see that Internet protocols, TCP in particular, will react to congestion by decreasing the sender's sending rate, so congestion and loss can in some sense be avoided, or at least mitigated. Is it possible to provide circuit-like behavior with packet switching? Well, as the saying goes, it's complicated, but we'll study various techniques that try to be as circuit-like as possible.

Let's wrap up our introduction to the network core by coming back to a phrase that I've mentioned a couple of times, when I've said the Internet is a network of networks. What does that really mean, and how is it actually reflected in the structure of the Internet? Yikes, it's like I'm stuck inside the Internet here! What we've seen so far is that the users at the edge of the network are attached to access networks, whether home networks, mobile networks, or institutional networks, and we need some way to connect these millions of access networks to each other to get end-to-end paths between users. How might we do that? Well, we could connect, that is, wire, each access ISP to every other access ISP, but this would require on the order of N-squared connections, and when N is many millions this approach isn't going to scale. So given the millions of access ISPs, we might instead create one global transit ISP: each ISP at the edge would then connect to this global transit network, a backbone network if you will, and one access ISP would reach another access ISP through this backbone network. In the early days of computer networking, this was indeed how edge networks were interconnected to each other.

But of course, if one global ISP is a viable business, there will be competitors for backbone network service, and these global backbone networks will need to be interconnected with each other. We say that a network peers with another network when they're directly interconnected, and the locations at which multiple networks can peer with each other are sometimes called Internet exchange points, or peering points. Regional networks might form to interconnect access networks closer to home and also connect to the global backbones. In the state of New York, NYSERNet, the New York State Education and Research Network, is an example of a regional network that provides Internet access to universities, colleges, museums, healthcare facilities, and K-12 schools. And then content providers like Google, Microsoft, Amazon, or Akamai might want to run their own global networks, which in fact they do, to bring their services and content close to end users. The picture that we see here is pretty close to the structure of today's Internet.

We said that the Internet is a network of networks, and now you've got a sense of what that really means. At the center of the Internet we have a relatively small number of well-connected large networks. These are sometimes called tier-1 commercial ISPs, such as Level 3, Sprint, AT&T, and NTT, which have national and international coverage. Moving closer to the edge there are the regional networks, all of which interconnect, that is to say peer, with each other and with the tier-1 providers. At the very edge of the network are the access networks themselves. And then of course there are the content-provider networks like Google and Facebook: the private networks that connect their data centers and services to the Internet, sometimes bypassing the tier-1 and regional ISPs.

Just to give you an idea of what a tier-1 ISP actually looks like, here's a recent map of Sprint's U.S. network; you can see the various international connections around the edge. Each of the Sprint nodes shown here is actually a collection of routers that together form a so-called point of presence, or PoP. Here's a blow-up of a PoP, where you can see a set of links that connect PoP routers down to customer networks, another set of links that connect to other Sprint PoPs, and finally a set of routers that connect to peering networks, for example other tier-1 networks.

That concludes our overview of the network core, and we've introduced a number of really important concepts. We talked about the packet forwarding process, store-and-forward networks, queuing delays, and the possibility of packet loss, and we distinguished between packet forwarding and packet routing. We also talked about an alternative to packet-switched networks known as circuit-switched networks, and the pros and cons of each. And we finished up by talking about what we mean by the concept of the Internet being a network of networks. Coming up next, we're going to take a look at network performance.
Info
Channel: JimKurose
Views: 102,042
Id: f1nUcCdQJ8Y
Length: 19min 4sec (1144 seconds)
Published: Sat Jan 15 2022