How We Might Get Humans Out of the Way - Keynote by Nick McKeown, Stanford University

Captions
Thank you very much, it's wonderful to be here. My job is to try and make sense of what you have on the screen in front of you. Over the last 10 years we've seen networking change a huge amount — understatement — there's been quite a revolution in networking, and so it's only natural to wonder what happens in the next 10 years. I'm very lucky: I don't have to run networks, I don't get involved in the day-to-day development, so I get time to sit back and think about how things might change. As a consequence, I thought I would share some of what a group of us have been thinking happens as a direct consequence of what's been happening over the last 10 years or so. I see my goal in life as trying to make the internet better, as I'm sure many of you do too, and to think about how to make it evolve faster, so that we can have more innovation, more change, more improvement in how networks operate. I actually think that right now we're at a bit of an inflection point; that's what I mean by this third phase of SDN — which presupposes there was a phase 1 and a phase 2, and you're sitting there wondering what those were. So I will try to explain what I mean by that, give you a quick overview, and then explain why I think this third phase is a direct consequence and extrapolation of what we've been doing up until now. You've heard people talk about zero-touch networks and self-driving networks — we've heard all of the hype and all the buzzwords — but it really is a direct consequence of what we've been doing: we've been laying the foundations to make that possible. And I think the way it's going to happen is by the direct application of software engineering techniques, developed by the computer industry, to networking. We've only just begun to borrow those ideas and apply them to networking.

While preparing this talk I was reminded of being on this stage in 2012 for ONS — I think it was actually in this room — giving a talk called "Making SDN Work." There were lots of puzzled looks in the room, because that was the beginning of thinking about how we might apply some of these techniques. One of the things I said was that we would formally verify that our networks are behaving correctly, identify bugs, and then systematically track down the root cause. This was coming out of work happening not only at Stanford but at a number of universities on examining and building formal models based on network state: because we wrote the software, we know what we were trying to do, and we can actually figure out whether the network conforms to our original intent. I will come back to this a little later.

What I'm really talking about is what happens in the next ten years. Let's make the math easier and say we're in 2020 today. If we look back 10 years and see how much has changed: back in 2010, networking was dominated by closed, proprietary hardware and software. We know this story — it was like the mainframe business of the 1980s. We were in a period of explosive growth of standards; that's where most of the growth was. By 2010 there were over 7,000 RFCs. That seems like progress, right? But most of those were written with the intent of acting as a barrier to entry, as John Donovan already mentioned today — and he will be sadly missed now that he's leaving AT&T, as you may have heard.
He said to me when I first met him: "The vendors have a stranglehold over me; my job is to break that stranglehold." This is really what he was referring to. Ironically, this proliferation of standards had created an ossification — that was the term used. It's a fancy term, but you could read about it in the newspapers at the time, and the research community was using this word "ossification": innovation was basically glacially slow. So along came new efforts. There were things like Clean Slate and 4D, which came out of AT&T under Albert Greenberg's leadership when he was there, with Jennifer Rexford and others, and there was the Ethane project that Martin Casado led at Stanford. Something badly needed to change, and of course it has, and we all know much of this story: there was disaggregation, there was OpenFlow in the early days, then SDN and NFV, which really came out of that effort, then network virtualization, then more recently programmable forwarding, which in turn enabled more telemetry. Throughout this period open source has grown and become a credible force in networking. Imagine someone telling you 10 years ago that open source would be a strong force in data center networks and in ISP networks — I think you would have snickered, you'd have giggled, because it just seemed incomprehensible. Well, we just saw over the last few minutes a description of not only real trials but commitments to go into deployment, and over the next couple of days you'll see even more of that. We're at the cusp of a lot of work turning into the realization of that revolutionary goal.

I refer to this as phase one, and phase one is really about changing who is in charge. The network owners and operators have said: no, we want to be in charge. It's about relinquishing control from the equipment vendors and saying, we want to define how our networks behave. That has largely happened, or it's well underway. I don't mean to say that it stops in 2020, but the momentum has picked up enough that it's really unstoppable at this stage. Now we take it for granted, don't we, that if you own a large network you can download, modify, or commission the software that controls your network. That on its own is a revolution. In 2010, every large hyperscale data center company in the world ran networks based on proprietary equipment, with proprietary software running on proprietary hardware. Today, every single hyperscaler in the US and in China operates on merchant silicon with software they either wrote themselves or got from someone else. That's a big change. And now we see, through what ONF is doing, the same thing happening for the network operators, the ISPs, as well. Just stop for a minute and look at the number of projects we just saw and the amount that's going on from a team of 30 people. I take my hat off; yesterday my contribution to the ONF board meeting was simply to say "wow," because for that number of projects to be going on from a team of only 30 people, there clearly has to be a large army of other people involved to make it possible. This is an immense change we're seeing happen right now through ONOS, CORD, Trellis, Stratum, and the other projects we've seen today.

More recently we've seen what I refer to as phase 2, which I think of as a natural consequence of phase one: network owners starting to take control of how the packets are processed as well.
To understand why this is inevitable, you just have to remember that a network — whether it's a wired network or a wireless or cellular network — is really just a means of taking a packet from one place, transferring it somewhere else, and doing a little bit of processing as it goes by. If you're not in control of the software that defines how the packets are processed, then you're not really in control of the network. So again, this is an inevitable follow-on stage, and over time I think you will see it happen more and more. As many of you know, I've devoted the last five years to trying to help move this forward through P4 and Barefoot and so on, but I think it's part of a bigger trend, and we will see it, inevitably, across NICs, across switches, and so on throughout our industry. Right now we're in the lucky position that we have many of the pieces of technology we need to build the kind of demonstrators that have been described today. We have P4, now part of ONF, as a de facto industry standard for defining forwarding behavior, and both Barefoot — now part of Intel — and Broadcom are building, or have announced that they will build, programmable switches. So you can see this is picking up speed and picking up momentum.

Let me explain a little more about why this is happening. In 2010, switch pipelines were fixed-function: they performed a fixed sequence of operations. If you looked inside the boxes built from that silicon, with the switch OS controlling the switch underneath through a driver, all the switch OS could do was tell the switch which stages of the pipeline to use to implement functions like OSPF and BGP. It was a fixed-function pipeline; packets went through it, with the headers representing the different protocols that would be processed. Now imagine it's 2010 and you want to add a new feature. The big new feature of the time was VXLAN, a standard introduced by Cisco and VMware. To add it, you needed to introduce not only new software but a new stage in that pipeline. If anyone in this room were to write the software to do that, it would take about two weeks — it was a very simple operation. In the silicon, it took four years. Why? Because it had to go through an entirely new development cycle, and by the time the first silicon showed up four years later, most people had moved on to another kind of standard. The evolution of the silicon was way out of step with the industry. What we were really stuck in was a bottoms-up approach, where a databook — a datasheet — told us how packets would be processed, described in ambiguous English, and that definition drove what the system could do. Our industry has generally moved on from this model; the computer industry doesn't work this way. We describe at the top what we want, a compiler maps it down onto a chip, and we never actually see what that chip does — we don't even know what kind of CPU is running underneath. That's the abstraction; it hides those details. Now we're starting to see this happen top-down in networking, and I think we will see it more and more: we describe the behavior we want in the control plane, in the management layer above it, and in the forwarding plane all the way through, and then it is compiled down, and we won't even know how it gets implemented or mapped onto the hardware — whether that's a forwarding pipeline dedicated to the purpose, an FPGA, a CPU, or whatever it happens to be that implements it.
The PISA pipeline is what has made this possible: the Protocol Independent Switch Architecture, in which a programmable parser is told how to process the packets and how to understand what the headers are, followed by a pipeline of identical stages, each performing a match plus an action, where the match is against a memory and the actions are carried out by ALUs. There are typically hundreds of these stages inside one of these chips. An arriving packet has its headers split apart once we understand what they are, and they then go through a sequence of transformations from one stage to another — you can think of it as a geometric transformation of that set of headers as they pass from one end to the other — until eventually we've finished the serially dependent processing on that packet, we reassemble it, and we send it on its way. This is the forwarding model — it may actually be burned into silicon or not — but it is the forwarding model to think of when you're processing packets using a language like P4. When you write a P4 program, it's being mapped down onto either that architecture or an abstraction that works in that fashion.
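To make the match-plus-action model concrete, the following is a minimal Python sketch of the abstraction being described. It is illustrative only: the class names, actions, and header fields are invented for this example and are not taken from the P4 specification or from any real switch pipeline.

```python
# Toy model of a PISA-style match-action pipeline (illustrative only).
from dataclasses import dataclass, field


@dataclass
class Packet:
    headers: dict                      # parsed header fields, e.g. {"ipv4.dst": ...}
    payload: bytes = b""
    egress_port: int | None = None     # None means "dropped / undecided"


@dataclass
class Table:
    """One match-plus-action stage: match on one header field, run an action."""
    match_field: str
    entries: dict = field(default_factory=dict)   # match value -> (action, params)
    default: tuple = ("drop", {})

    def apply(self, pkt: Packet) -> None:
        action, params = self.entries.get(pkt.headers.get(self.match_field),
                                          self.default)
        if action == "forward":
            pkt.egress_port = params["port"]
        elif action == "set_field":
            pkt.headers[params["field"]] = params["value"]
        elif action == "drop":
            pkt.egress_port = None


def pipeline(pkt: Packet, stages: list[Table]) -> Packet:
    """Run the packet through a fixed sequence of programmable stages --
    the 'geometric transformation' of the header set described above."""
    for stage in stages:
        stage.apply(pkt)
    return pkt


# Example: rewrite a tunnel field in stage 1, then pick an output port in stage 2.
stages = [
    Table("ipv4.dst",
          {"10.0.0.2": ("set_field", {"field": "vxlan.vni", "value": 42})},
          default=("set_field", {"field": "vxlan.vni", "value": 0})),
    Table("vxlan.vni", {42: ("forward", {"port": 3})}),
]
out = pipeline(Packet(headers={"ipv4.dst": "10.0.0.2"}), stages)
print(out.egress_port)   # -> 3
```

The point of the sketch is simply that the stages themselves are generic; what they match on and what they do is decided by the program, not by the chip designer.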
The reason I've spent the last five years on programmable forwarding, and will continue to, is that I believe programmable switch chips can have the same power, performance, and cost as fixed-function chips, which means they will replace them, which means — and I like this phrase — beautiful new ideas, not my ideas, your ideas, can now be owned by the programmer rather than the chip designer. In other words, your ideas find their way into the network more quickly, which means more innovation, and that means a better internet.

Just as a brief aside: you should be skeptical — skepticism is appropriate. How do we know that a programmable switch chip can have the same power, performance, or cost? Haven't we always been told that programmability comes at great expense? So I took the numbers from an equipment vendor — represented here today — who builds these systems with both programmable and fixed-function silicon, and simply took the numbers off their website. I don't do this to sell you anything, just to get across that the intuition that programmability comes at a penalty is actually not the case; it is essentially the same. These are two identical systems, built with the same feature set, one from programmable silicon and one from the leading fixed-function silicon. They have essentially the same forwarding rate — both are built in 16-nanometer technology, and that 16-nanometer technology largely defines the forwarding rate; the programmable one is slightly higher, but we can call it the same. The power draw is typically slightly lower for the programmable one, which is probably surprising, and the latency is slightly lower as well. Why? It's just a pipeline — there's nothing magical about it — a pipeline you get to configure rather than one that is fixed. This is actually the way the future ones will be built.

As a consequence, we're seeing programmable forwarding happening now, and it's getting used primarily, in fact, to reduce complexity. The first thing we've noticed people do when they have this in their hands is throw out the features they didn't want, so they don't use up all the resources in the silicon — that's partly how they get to the lower power. The second thing they do is add new features that are specific and private to them: new congestion control algorithms, layer-4 load balancing, middlebox functions folded into the network — whatever makes sense to them. Often you don't know what they're doing, because they get to program it themselves. But probably the big one, the most common and most popular, is telemetry. It's a little odd that telemetry is what programmable forwarding makes possible, because telemetry could have happened before. The problem was that no one knew what to measure and where to put the bits you were measuring in the packets, because in every network it had to be different. Doing that in fixed-function silicon was not really practical; you need it to be programmable so that you can choose where to put it in different networks. It's a practical consideration. As you know, P4 is now part of ONF and there's lots of activity — I hope you get involved. I particularly want to call out four people, out of the cast of many who have worked on the evolution of P4: Nate Foster, who runs the effort day-to-day, Amin Vahdat from Google, Jennifer Rexford from Princeton, and Changhoon Kim from Barefoot.

Okay — so phase one, phase two; what's the big deal? I actually think the big deal is that it completes the picture and creates a deeply profound consequence for networking. What we have now done, as a community, is create a programmable platform, top to bottom. And as soon as we have that platform — which we now do — we don't need to think of it in terms of the protocols that were baked in through datasheets and the IETF and so on. It's just a programmable platform; we can do with it as we wish. That sets us off in a whole new direction, if we so choose. If we want to continue to implement the legacy protocols, because that's what we need to do, then of course that's what we should do — but we can gradually start to move off in new directions. So I think this unleashes quite a big potential revolution of its own.

Indulge me for a minute while I wax lyrical about where this might take us. This isn't just my thinking; it comes from a group of us, including ONF and some other colleagues, who have been contemplating what this might look like in 2030. The first thing is that the network is now a programmable platform, top-down, including the control plane and the forwarding plane. As we look forward, what does that look like? Let's make some extrapolations. This one is easy: I think that NICs, switches, vSwitches — in fact, in time, networking stacks — will be programmable, as defined by the user, within ten years. It's happening now; it isn't going anywhere; it's going to grow. Over time we will think of the network as a programmable platform. We will describe the behavior we want at the top — and by "at the top" I mean not at the level of an individual box, but at the top of the network as a whole, whether that's a data center, a large network, a portion of a network, or a mobile operator's network. Its behavior will then be partitioned.
We won't see whether a bit gets used in a packet in a NIC or a switch or a vSwitch — I don't want to know that, just as I don't want to know how memory gets laid out when I write a program in Java. We will have put in place an abstraction that shields us from those kinds of details. The behavior will be partitioned, compiled, and run across different elements, and every large network will, as a consequence, work slightly or significantly differently from the others; it will be tailored to the particular needs of that network. I've already seen large networks that use source routing — who would have thought — that use addressing formats different from the standard, that use congestion control based on signals no one has ever heard of. This is already happening at a small level, and it's going to happen at a much larger level over the next few years. I think we will no longer think in terms of protocols. Protocols will be a quaint piece of history; they will have been the necessary thing we needed to get going, a bootstrap. Large networks will be programmed to a behavior, and only at the edges, where they need to talk to others, will they use standard protocols. Does that sound heretical? Sure. But I think we're already on a trajectory toward it: many networks are already doing this to some extent internally, within a data center or within a region of an operator, and it will happen more and more.

As a consequence of this migration of function out of hardware and up into software, we have a problem at the university: what on earth do we teach networking students? Gone are the seven-layer OSI model and the progression of protocols the way the textbooks describe it today. This is a very exciting challenge. I think we'll describe the network to them as a distributed computing platform. We will give them examples from congestion control and routing — the things we know — but these will become just that, examples: routing and congestion control will become programs, partitioned across the end-to-end system by a compiler, and we'll have to get students used to working this way.

But if we really want to go further than that — and if you were operating a large network, of course you would want to — how do you get the humans out of the way completely? Because we're always the problem, right? When things go wrong, it's almost always human or operator error. So what else do we need to have happen? It's not enough on its own. We don't leave our applications to run themselves: Facebook doesn't leave its application without humans involved to keep it running. What does it mean to close the lid — to close the loop? We all talk about closing the loop; what do we actually need to do? I want to put it to you that we need at least three pieces. First, we need the ability to observe packets, to observe network state — the state that defines how packets are processed in the network, the forwarding state — and the code, all of the code, top to bottom. We need to be able to examine all of that in real time and keep an eye on what it's doing, because things in networking happen in nanoseconds, and no human can keep up with that; it has to be completely automated. Second, we need the ability to generate new control and forwarding behaviors as needed, on the fly, to correct errors as they've just been observed. That's kind of scary, right?
We are far from this today; you wouldn't contemplate doing it today. And third, if we're going to generate something, we had better verify it against the original desired behavior, or a set of invariants, to make sure it's doing what we originally intended — and then get it deployed quickly. I like to use this example: what if I find a packet in a network — or rather, some piece of machinery finds a packet in the network — that shouldn't be there? It should be able to figure out how that packet got to be there. What was the piece of code, or state, or failed hardware that was guilty of allowing it to be there? And then let's do whatever it takes — reroute around it, install a new rule, change the forwarding behavior, change the control code, maybe set off an alarm — to prevent it from happening again. And you had better be able to do that in milliseconds, not hours, otherwise there will be a whole lot more packets behind it infiltrating the system, or breaking it, or whatever the threat is you're worried about.

There are signs that this is happening — and by "signs" I mean mostly in the research community, where a lot of work has gone on toward this end. In terms of observing packets, there has been a lot of commercial work in this area through INT, so per-packet telemetry is already starting to happen. I don't want to tell you it's somehow super mature and not going to change; it's just beginning. I started looking at this because I was very concerned that even basic information about a network's behavior was really, really hard to find. Traceroute and ping — that's about it, right? I actually used that statement in 2012, and it's still true today; nothing has really changed. If you have per-packet telemetry, this begins to change. What I mean is: in a network where packets are flowing, why can't I just take one of those packets and ask it, what path did you take? Not what path did some probe that somebody else sent take — what path did this packet take? There's not a single commercially deployed network at scale — at least none that's been publicly talked about — that allows you to do this today. How are we going to debug these systems unless we know that? What I'd like is for the packet to tell me: I visited these switches, at these times. That tells me how it got here. What if I want to know why it got here? If I'm going to try to close this loop, I need to know why — I need to know the set of rules in the switches that it followed. Every switch knows the rule the packet matched; why isn't that in the packet? Why can't I probe the packet and ask it which rules it followed? That tells me how it got here and why it got here. Now let's think about performance: I'd also like to know how long it queued at each of the switches along the way — for example, that it was delayed by this amount at each hop. The switch knows, because it had to keep track of that packet as it went through; why can't the packet know as well? Here we've got a big red number — that's a problem. I'd like to be able to go to the switch and say: show me the time series of the occupancy of that queue, so I can figure out not only that there was a problem, but who I was sharing the queue with — so I know that, in this particular case, it's the orange flow that was causing the problem, and now I know whose throat to choke, I guess.
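As a rough illustration of what consuming this kind of per-packet telemetry might look like, here is a small Python sketch. The per-hop record layout is invented for the example; it is not the INT wire format defined by the P4.org specification, and the threshold and field names are assumptions.

```python
# Illustrative sketch of consuming per-packet, INT-style telemetry: each switch
# on the path appends (switch id, matched rule id, ingress/egress timestamps).
from dataclasses import dataclass


@dataclass
class HopRecord:
    switch_id: int
    rule_id: int          # which table entry the packet matched at this hop
    ingress_ns: int       # arrival timestamp, nanoseconds
    egress_ns: int        # departure timestamp, nanoseconds

    @property
    def queue_delay_ns(self) -> int:
        return self.egress_ns - self.ingress_ns


def explain_packet(path: list[HopRecord], delay_threshold_ns: int = 10_000):
    """Answer the three questions above: which path did the packet take,
    which rules did it follow, and where did it queue too long?"""
    print("path: ", " -> ".join(f"sw{h.switch_id}" for h in path))
    print("rules:", [f"sw{h.switch_id}:rule{h.rule_id}" for h in path])
    hot = [h for h in path if h.queue_delay_ns > delay_threshold_ns]
    for h in hot:
        print(f"congestion: sw{h.switch_id} held the packet "
              f"{h.queue_delay_ns} ns (rule {h.rule_id})")
    return hot


# Example: three hops, with a micro-burst at the middle switch.
trace = [
    HopRecord(1, 17, 1_000, 1_800),
    HopRecord(4, 203, 2_500, 61_400),   # ~59 us in the queue -> the culprit
    HopRecord(7, 9, 62_000, 62_700),
]
explain_packet(trace)
```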
So this is the basic, obvious information we need if we're going to do this at scale: which path did my packet take, which rules did my packet follow, how long did it queue at each switch, and who did it share the queues with? Yes, this is beginning to happen with P4 and INT, but it's just the beginning. I think it will be standard operating procedure in a few years' time; we'll simply expect it of a network. I like this picture — I can skip over the details of how INT works, since many of you will know. This is a picture taken from a financial network of the occupancy of queues, using INT as per-packet measurement, showing the evolution of a queue in the network during a microburst. The entire x-axis here is about a microsecond, so it captures the whole microburst, with every packet event, down to the nanosecond. That sort of thing seems like science fiction in an operational network today; I think it will become standard per-packet examination of what's going on in the network.

So if we can observe the packets, can we observe the network state? Well, this is already starting to happen commercially as well. Those of you familiar with something called header space analysis, which was first described and published around 2010, may know that Forward Networks, and also Veriflow, which was doing something very similar, are commercializing this approach today. Briefly: you build a model of the forwarding behavior of every switch in the network, based on the program that describes its behavior and the state it currently contains. That allows you to ask questions like "can A talk to B?" and formally prove that it can or it cannot. You do this by saying: let me consider all possible packets — that's what this cube represents — let me see how that set of packets would be mapped forward through the network, and then backwards through the network, and then determine whether communication is possible and which packets could get through. It also allows you to generate a set of tests automatically, to check whether the hardware is functioning correctly, and if you find a packet at B that should not have been there, it allows you to figure out where the fault was. This already exists; it started in the lab, it's in the process of being commercialized, and you'll see a lot more of this kind of activity. It's beginning to give us the pieces we need for automating this behavior.
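To give a feel for the reachability computation being described, here is a toy Python sketch of the "can A talk to B?" question. Real header-space tools reason symbolically over all possible headers; this example simply enumerates a tiny explicit header set, and every name in it is illustrative, not taken from any real tool.

```python
# Tiny sketch of header-space-style reachability checking (illustrative only).
from itertools import product

# A "header" is just two bits here, so there are only 4 possible packets.
ALL_HEADERS = set(product([0, 1], repeat=2))

# Each switch maps a header to a set of (out_port, header) pairs; an empty
# set means the packet is dropped.  Topology: A -> s1 -> s2 -> B.
def s1(hdr):
    # forwards headers whose first bit is 0 toward s2, drops the rest
    return {("to_s2", hdr)} if hdr[0] == 0 else set()

def s2(hdr):
    # forwards only header (0, 0) toward B
    return {("to_B", hdr)} if hdr == (0, 0) else set()

def reachable_from_A():
    """Push the set of all possible headers through s1 then s2 and
    return the headers that could arrive at B."""
    at_s2 = {h2 for h in ALL_HEADERS for (_, h2) in s1(h)}
    at_B = {h2 for h in at_s2 for (_, h2) in s2(h)}
    return at_B

delivered = reachable_from_A()
print("A can reach B with headers:", delivered)   # {(0, 0)}
print("A can talk to B:", bool(delivered))        # True
# If a packet outside this set ever shows up at B, some rule, some state,
# or some device along the way is at fault -- and you can trace which one.
```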
But what about the other two pieces: the ability to generate new control and forwarding behaviors on the fly, and the ability to verify newly generated code and deploy it quickly? Let's return to this picture — we've all seen it many times before, the classic SDN picture. What do we actually need here? We need to be able to observe very quickly — to measure and then validate the packets, the state, and the code. On the bottom two there's already progress being made; on the top one, verifying the code, we still have work to do, though there are of course software engineering techniques being developed for it. Then we need to feed that back into how we would change the control plane: how to generate new behaviors — new control code, new forwarding code, new state — and verify that they do what we intended. For example, changing and modifying a control program in the Trellis framework, changing perhaps ONOS's state or its behavior, maybe then partitioning a desired new forwarding behavior all the way down to the forwarding plane. I don't know that this is exactly how it will happen, but I think we have almost all of the building blocks we need now, with some pieces — the purple ones on this slide — still to fill in, and that obviously helps create a research agenda for universities to start working on. That is indeed what we are doing: we have a research program, started recently, between ONF and some colleagues — Jennifer Rexford at Princeton, Larry Peterson and Guru Parulkar at ONF, and Nate Foster at Cornell — to figure out how we do this verifiable closed-loop control of networks. The idea is to build on the solid building blocks ONF has today, which you've already seen: the Trellis applications sitting on top of ONOS, the P4 Runtime contract, and then over time, as the Stratum OS becomes more widely deployed and P4 goes into switches and NICs, the means to take fine-grained, per-packet measurement, for example from INT. There are many research projects going on toward this end, not only among this group but at many other universities as well. If you want to learn more about a higher-level representation of this kind of measurement, read the Sonata paper from Jennifer Rexford's group that appeared at SIGCOMM last year. Then, for generation and verification: if we are to do this, we need to be able to verify that the code we generate actually meets our original goals. There is very nice work that Nate Foster has been doing at Cornell on something called p4v, which verifies the correctness of P4 programs prior to deployment, so that you know they meet some set of invariant properties. Is it ready for prime time? No, but it's getting there, using and enhancing the P4 language that we're all beginning to adopt. So this is something that's in gestation right now as a program, and we're trying to figure out how to move in this direction, to help these pieces come together over the next few years. Of course I would say this, but I think of this phase 3 as networks managed by verifiable closed-loop control, and I would love to work with many of you on how we get there and on what the building blocks and pieces are as part of this overall vision. So, adding to what I said back in 2012: we will measure and validate correctness, then we will generate and verify a code fix, and then download it to correct that bug. That's the direction I think we're going in. And then you're left wondering, okay, what would we do after that? Well, I think it's kind of obvious: we go to the beach. And that's what I plan to do. Thank you. [Applause]
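As a rough sketch of the loop described above — measure, validate, generate and verify a fix, then deploy — here is a Python skeleton. Every function in it is a placeholder standing in for real machinery (INT collectors, a verifier in the spirit of p4v, a controller interface); none of the names correspond to actual ONF, Trellis, ONOS, or P4 Runtime APIs.

```python
# Skeleton of a verifiable closed-loop control cycle (placeholders throughout).
import time


def observe():
    """Collect per-packet telemetry, forwarding state, and running code."""
    return {"packets": [], "state": {}, "code": ""}


def find_violation(snapshot, invariants):
    """Return a description of any packet or state that breaks an invariant,
    e.g. 'a packet reached B that the policy says should never get there'."""
    for check in invariants:
        problem = check(snapshot)
        if problem:
            return problem
    return None


def generate_fix(problem):
    """Synthesize a candidate change: a new rule, new control code, a reroute."""
    return {"kind": "install_rule", "details": problem}


def verify(fix, invariants):
    """Check the candidate against the original desired behavior before it
    ever touches the network (the role a p4v-style verifier would play)."""
    return True   # placeholder: assume the candidate checks out


def deploy(fix):
    """Push the verified change down through the control plane."""
    print("deploying", fix["kind"])


def control_loop(invariants, period_s=0.001):
    """The humans-out-of-the-way loop: milliseconds, not hours."""
    while True:
        snapshot = observe()
        problem = find_violation(snapshot, invariants)
        if problem and verify(fix := generate_fix(problem), invariants):
            deploy(fix)
        time.sleep(period_s)
```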
[Moderator] Okay — pretty exciting, huh? Nick, don't go away; if you're willing to take a couple of questions, I think we have time for a few. Anyone have questions for Nick? Don't let him go that easily.

[Audience question] Thank you for the presentation. From a power and energy utilization perspective, you were comparing the legacy box with the programmable one — how was that comparison actually done? Was it the peak power consumption of an individual device?

It was actually much simpler than that. An equipment vendor posts on their webpage the ratings they have for their particular product in that particular use case, and this is taken exactly from that webpage, from one equipment vendor who had systems with both of these types of silicon. So it's trying to be an apples-with-apples comparison — I realized that if I were to measure it personally, where would those numbers come from? It's the specification made by the equipment vendor: if you use that box, this is the power consumption you'll experience in your network, using the particular protocols and software load that they use. Happy to talk offline about the details.

[Audience question] Since you didn't mention it: do you see a role for AI?

Let's think about what would be useful if you were to use training and then inference over network behavior — whether it's the packets, the network state, or the code — to try to figure out what your network is doing. There are some things that probably lend themselves quite well to this: large trends, for example in a mobile operator, in how things are changing, at a scale where it's very hard to work on a detailed, case-by-case basis. Personally, I don't think it has a very good role in security, because I don't want statistical knowledge about whether my network is secure; I want to actually know that there isn't an infiltration. So I think we have to think very carefully about the use of machine learning and AI for managing and controlling networks. Underneath, this is something whose behavior is defined by code that we write, so we already have a model, a design — it's that code telling us what the network should do. We should hold the network to that test, in my opinion, whenever we possibly can. If we really, really can't, then okay, maybe rely on statistical observations — but that should be a last resort, personally.

[Moderator] Maybe I can ask a quick question. When you look at this journey, what are the things that surprised you — things that didn't play out as you had hoped, or things that went faster or better than you expected ten years ago?

The very first papers written about SDN came from AT&T, and they were very much about how you would change operator software in order to put the operators more in control. I don't think anyone took that seriously, because it just seemed impossible to contemplate: at that time, all of the systems they were using ran proprietary software that came from somewhere else. Perhaps not surprisingly, it then morphed into being all about the data center companies — armies of software engineers building much simpler networks from the ground up. What didn't happen, in the middle, was the enterprise: what about the networks we use every day on campuses? It surprised me, and it continues to surprise me, that this change has not happened there. Now, obviously there's a lot of turmoil, because no one's quite sure whether they should build out the IT infrastructure on their campus or just move everything to the cloud, so that has muddied the waters — but it's still a big open question. And I actually thought there would be another ten years or so before the ISPs and the big network operators got on board, because it's such an
enormous challenge. So I was quite surprised when so many operators got involved with ONF and really started to commit — we've seen that level of commitment, and they're represented here today; for a number of the large network operators in the room, that really surprised me — and that almost all of them are transforming to having large software expertise and large software skill in order to do this.

[Audience question] I very much liked the talk. Would you agree that what you've described is essentially a pattern that applies to things beyond networking? It seems to apply very broadly, toward a zero-touch commercial enterprise where everything works this way. Do you see this as the underpinning of a bigger picture?

Well, at some level, if you want a closed-loop control system, you have to observe its state in order to be able to modify it, and if its behavior is defined in software and in state, then those are the control points that you have. Networking is probably much simpler than a general-purpose computing environment, in that the desired behavior is much simpler and the state is much more well defined, so in a way it's a much easier place to start than general-purpose compute. But almost any infrastructure has this character. Most manufacturing control systems already work this way, because of their well-defined behavior, the consequences of getting it wrong, and time scales that are much slower. So really we're just marching on and borrowing from those well-worn lessons of observation and control. For anyone trying to manage an enormous at-scale system, wherever they can have well-defined behavior that they can observe, check, verify, and correct, of course they will want to do it — so they can focus all of their energy and attention on the parts that are harder to manage and control.

[Moderator] Okay, one more question — last one.

[Audience question] I'm actually a student. I see this trend moving toward more layers of abstraction over protocols, and you said that a few years down the line we might not think about protocols at all, because they'll be abstracted away. As a network engineering student it feels like having a foot in two camps, because you don't know where the field is headed. What should current networking students focus on?

It's a great question, and one I spend a lot of time thinking about. I think there are some principles that are, if not everlasting, at least going to be here for a long time: things like packet switching, layering, encapsulation — things we perhaps take for granted, but which are important building blocks of networks. And there are things you simply have to do: you have to decide where a packet is going to go — we call that routing — and you have to decide what to do when there are too many of them — that's congestion control. Those principles, those problems that need addressing, will continue; we will change and modify how we solve them as we learn more. It's how we go about it that will change. Rather than having to go to the IETF, persuade a whole lot of
people that they should change something, and then five years later something shows up, it will be: I'll just go and change a program, see what happens in a small, protected part of my network, and then roll it out more broadly if it turns out to work. So I don't think the underlying principles will change. Hopefully we won't have to worry so much about learning protocols, as we've had to do in the past, and we can think more about the problem we're trying to solve in our own environment. So: programming, programming, programming. [Applause]
Info
Channel: Open Networking Foundation
Views: 1,302
Rating: 5 out of 5
Keywords: Curated Open Source, Keynote, Open Source, ONFConnect, ONF, Broadband Networking, Open Networking Foundation
Id: QDC7ckmioYM
Length: 44min 9sec (2649 seconds)
Published: Thu Sep 12 2019