Why Does BGP Need Link State?

Captions
Ethan: BGP link state. It's not new; it's been kicking around since, oh, 2015 or so. And on the surface of it you might think: why are we carrying link-state information in BGP? That's not what BGP is for. You're not the only one with an automatic reaction like that, but the reality of BGP-LS has more nuance to it. We might jokingly call BGP the kitchen sink of routing protocols, but the engineering choices that were made were made for a reason — remember, everything's a trade-off. To educate us on what BGP-LS is and why it exists is Hannes Gredler. He's heavily involved in the deepest corners of the networking industry as CTO and founder of the startup RtBrick, offering cloud-native routing software for telcos. In the past he has also chaired the IETF IS-IS working group, was a distinguished engineer at Juniper Networks, holds some routing patents, and is a published author. You get the idea: Hannes knows what's going on when it comes to BGP-LS, and he volunteered to share some of his time with Heavy Networking. So Hannes, welcome to the show, and let's get started by going right to the heart of things. What is BGP-LS, and what problem does it solve?

Hannes: Ethan, thanks for the red carpet first of all, and for having me on the show. Before we go into what BGP-LS is, we should probably start with how it was conceived. It all started with an old acquaintance from Cisco joining Juniper Networks — he's been a frequent speaker on the show — Dave Ward. In 2009 Dave Ward came in with his entourage, and one of the problems we had to solve was delivering a bit of application intelligence: exposing what we have in the routing daemon — all the link-state data, all the internet prefixes, all the internet egress points — and feeding that into what was known at the time as an ALTO server, which would aggregate that information together and tell clients like BitTorrent or other content distribution networks what paths to take. That is how it all started.

Ethan: I'll respond with the obvious question, which is: why didn't you just handle it like a lot of other services have had to handle that particular problem? Say it was an OSPF network — you set up an OSPF listener, have it join OSPF, and then learn everything that's going on from the IGP that way. There was even a box I ran that wasn't from Cisco, but it ran EIGRP and listened in on our EIGRP IGP network.

Hannes: Very good question. For simple labs — when you do your CCIE or whatever vendor certification — this approach would work. But as soon as you try to consolidate views and topologies from networks spanning potentially thousands of routers, you have a practical problem. You need to backhaul all those topology views using GRE tunnels or whatever, and you need to do it in a redundant fashion, so at some point you really have trouble scaling the ALTO server. What's it going to be — a Linux box with 4,000 outgoing GRE tunnels? And even then you need to scale the adjacency machinery on that ALTO server for terminating all those 4,000 adjacencies and getting the other side to tell you about its local view of the topology. Those are really some of the practical problems.

Ethan: I'm laughing along with you a little bit, because I actually faced that challenge in one of the networks I ran.
Ethan (continued): It sounded like such a simple thing — I'll just stand up an adjacency to your IGP and we're all good — and in fact, when you begin operating at scale and you're trying to communicate with all these endpoints to get a clear view of the topology, there is a computational problem and a logistical problem you're facing. In a smaller environment, no big deal; in a really big environment with many hundreds or thousands of routers, yeah, big problem.

Hannes: So the practical problem was: do we really need to replicate the protocol machinery — for OSPF, for IS-IS, for BGP — down to the ALTO server? Or couldn't we have a unified way for the ALTO server to learn about the topology, ideally by just opening a TCP channel somewhere into the network and retrieving all that information? That was the whole idea.

Ethan: The point here is that the ALTO server doesn't need to participate in the routing protocol to be able to discover the topology. As long as you can give it the topology information in some way, it can figure out what it needs to know to do its job.

Hannes: Exactly, and this is actually how we started; that was phase one. Then the ALTO developers come to you: "Okay, give us the data — what are your APIs?" And big surprise: there are no APIs. So those guys probably went back and started hammering the routing subsystem with their SNMP queries, or even XML, because at the time that was the only way to get structured data out of the routing daemon. And you see here that the semantics are wrong: it's essentially a pull model, where you constantly poll for information — has anything new arrived? What we really wanted the ALTO server to have is an accurate, up-to-date view of the network. So we were pretty certain we wanted a push model, where whenever something changes — a new topology, a new link comes up, a new IP prefix becomes reachable — ten milliseconds later the ALTO server knows and can take action.

Ethan: As opposed to polling on an interval to pull that information and being blind for however long the polling interval is.

Hannes: Exactly. And while we talked a little bit about the initial use cases for ALTO, very quickly, as soon as we started talking about it, all the traffic engineering folks jumped onto it and said: hey, real-time topology information across IGP boundaries? I want that as well. Level 3, for example, had very heavy RSVP traffic-engineering-related problems, and they wanted full visibility into every corner of the network. BGP-LS, or any sort of topology synchronization protocol, could do exactly that.

Ethan: I think an important point to make here is that BGP-LS is not BGP as a routing protocol calculating link state and making forwarding decisions. It's used to carry link-state information from a different routing protocol, but it is not making forwarding decisions like OSPF would.

Hannes: Absolutely not. If you compare it to what modern application developers are doing today, you could almost describe it as a message bus for routing topics. Today, application developers — you know, running an Apache Kafka instance — publish to certain channels and then have their subscribers subscribe to those channels and get a replica of the data.
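An illustrative aside: the push model Hannes describes might look like this minimal Python sketch — a topology store that notifies subscribers the instant an update arrives, instead of making them poll and stay blind between polls. All class and field names here are invented for illustration, not taken from any real BGP-LS implementation:

```python
from typing import Callable

class TopologyStore:
    """Controller-side view of the network, fed by a BGP-LS session."""

    def __init__(self) -> None:
        self.links: dict[tuple[str, str], dict] = {}
        self.subscribers: list[Callable[[str, tuple[str, str]], None]] = []

    def subscribe(self, callback: Callable[[str, tuple[str, str]], None]) -> None:
        # Push model: register once; every change is delivered immediately.
        self.subscribers.append(callback)

    def link_update(self, a: str, b: str, attrs: dict) -> None:
        # Called by the BGP-LS feed as each UPDATE message arrives.
        self.links[(a, b)] = attrs
        for notify in self.subscribers:
            notify("link-update", (a, b))

# Example subscriber: an ALTO-like service reacting within milliseconds.
store = TopologyStore()
store.subscribe(lambda event, key: print(f"{event}: {key}"))
store.link_update("r1", "r2", {"igp_metric": 10})
```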
Hannes (continued): And essentially we have bent BGP to do exactly that. We had those multiprotocol extensions, so we could publish several lanes of information. We did v4, we did v6, we did route target information; Pedro was even distributing firewall information for mitigating DDoS attacks. So we were actually very shameless and said: hey, we could just go ahead and distribute link-state information too. The vehicle seems to work.

Ethan: It does feel like it's pushing the boundaries a bit, though. Those other things you mentioned — v4 and v6 are address families where BGP is making routing calculations and forwarding decisions based on the information carried in those NLRIs. But now we've taken it to the point where we're purely at that message bus state: hey, we've got a bunch of BGP adjacencies, we can use it as a sort of database to move NLRIs around, so let's do that. BGP itself never makes any calculations, but it hands off that information, carried through it as a message bus, to something like the ALTO server. We haven't mentioned PCE yet, but I know that's a use case there as well.

Hannes: That's correct. And since the introduction of multiprotocol BGP twenty years ago, there has always been the allegation: "You guys are really messing up one of the core protocols of the internet, and the sky will be falling." But let me tell you a thing. In a good, modern implementation of a BGP routing protocol there is a clear separation between the message parser — the stream parser — and the per-address-family handler code. What the stream parser does is parse through the message. If it's just v4 updates, which are typically found at the end of the BGP message, it writes them into a dedicated RIB table, and the same goes for routing and prefix information encapsulated in the multiprotocol reach and unreach path attributes: there's an interim step where this goes into a dedicated RIB, a dedicated database. And let me tell you one thing: those stream parsers are really hardened. Every honorable BGP developer, every company in this space, has very tight fuzzing tests to make sure there is no stream corruption, no bug in the parser tearing everything down.

Ethan: The point you're making is that the stream parsers are hardened, and that means if we keep throwing more different sorts of data streams at BGP, as long as each stream parser is hardened, then if something goes wrong within one stream parser it's not going to kill the entirety of BGP and cause, say, a neighbor reset or force the process to reload.

Hannes: That is then, I would say, a question of the implementation. If both the stream parser and the per-address-family handler are in the same process — the same Unix process — then obviously, if you have a bug in the more vulnerable code, which is usually the per-address-family handler, and you run into a crash situation there, the whole thing tears everything down. A classical fate-sharing problem. However, if you have a slightly more modern architecture, with clear process boundaries between the stream parser and the address-family handler, then you should be good.
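A rough sketch of that separation in Python: a framing layer that only identifies the AFI/SAFI "lane" and hands the payload to an isolated per-family handler. The AFI/SAFI numbers are the real IANA codepoints (BGP-LS is AFI 16388, SAFI 71, per RFC 7752); everything else is illustrative:

```python
AFI_IPV4, AFI_LINK_STATE = 1, 16388
SAFI_UNICAST, SAFI_LINK_STATE = 1, 71

handlers = {}

def handler(afi: int, safi: int):
    """Register a per-address-family handler for one (AFI, SAFI) lane."""
    def register(fn):
        handlers[(afi, safi)] = fn
        return fn
    return register

@handler(AFI_IPV4, SAFI_UNICAST)
def ipv4_unicast(nlri: bytes) -> None:
    pass  # would write into the dedicated IPv4 unicast RIB

@handler(AFI_LINK_STATE, SAFI_LINK_STATE)
def link_state(nlri: bytes) -> None:
    pass  # would write into the dedicated link-state database

def dispatch(afi: int, safi: int, nlri: bytes) -> None:
    # The hardened stream parser stops here: frame the message, pick the
    # lane, hand off. An unknown family is ignored, not a session reset.
    fn = handlers.get((afi, safi))
    if fn is not None:
        fn(nlri)  # in the architectures Hannes prefers, this hop crosses a
                  # process boundary, so a handler crash cannot tear down
                  # the whole BGP speaker
```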
Ethan: Yeah, and "more modern" — that's an architecture we've seen featured in a great number of modern, updated network operating systems, where the way the different daemons are stood up and the way the different processes interact gives you a good deal of isolation, and it's much easier, if there is a problem, to just have that one process restart.

Hannes: I would say all the code bases conceived fifteen or twenty years ago are clearly, to some degree, vulnerable to the problem the BGP critics point at here. However, modern architectures — all the stuff that has been developed in the past five to ten years — do not have those kinds of vulnerabilities and issues.

Ethan: I want to go back to something we opened with, where we made the argument that BGP-LS exists because we wouldn't want some device that's trying to calculate, say, traffic engineering or segment routing through the network to have to listen to an IGP itself — we're asking too much of that device, especially on a very large and complex network, so we'll just carry link state in BGP. I want to revisit that. Do we not have enough computing power at this point to pull that off, so that we wouldn't have to rely on BGP to carry the link state for us? We would just use an actual link-state protocol as the underlay and leverage that native data, if you want to think of it that way.

Hannes: Fair argument. Let me ask you a question — I know it's a little impolite, answering a question with a counter-question — but what's your guess: how many IGP developers do we have on the planet who know how to do proper IGP state machinery with pacing logic? That's question number one. And question number two: how many application developers do you think there are on the planet who understand how to build an application on top of TCP? That is essentially the cornerstone of my argument. TCP, with flow control, with resequencing, with the semantics of a stream, actually solves a lot of the problems that otherwise need to get solved in the IGP transport module, and that is a very hard piece. I'm pretty sure you've seen Dave Katz's NANOG presentations, where he explains at length why link-state protocols are hard, and I have to say he's absolutely right.

Ethan: I hadn't thought about it that way. And if you're in the audience listening to this and you missed that nuance: BGP rides on TCP. It's TCP port 179, right? Isn't that the port number?

Hannes: That is correct.

Ethan: And that design choice takes a great deal of complexity out of the transport module of BGP. We're relying on TCP to be the transport layer and do all the things TCP does — as opposed to OSPF or EIGRP, which are their own IP protocols. They don't rely on TCP or UDP or any other transport layer; they have their own transport functionality built in. So going back to your point about developers and coders: if they don't have to worry about the transport stuff — because TCP is just there, it's already been written, it handles that for them — you have a much greater likelihood of success than with people who would have to write their own transport, handle message pacing, and do all of that kind of stuff on the IGP side. But still, Hannes, I'm going to kick it back, because — open source. Aren't there enough open-source flavors of OSPF, let's say, and IS-IS? Maybe that code already exists.
Hannes: It's actually a hobby of mine: I check out open-source code bases, and one of the first things I always look at is the transport module. For the transport module I usually have two questions. First of all: is it receiver-driven? Is the flow of information actually reversed? Receiver-driven means the interface says, "Hey, I'm ready to transmit — now let's drain twenty link-state packets from the flood queue and transmit them." Versus a naive implementation that always says, "I have information to flood, I need to tell everybody in the network — let's start filling up queues." The latter actually leads to congestion collapse under load. Open source — great, I love it — but this is the piece that pretty much all the open-source implementations of IGPs are really lacking: the transport module is not receiver-driven, and the back-off heuristics are not well developed. That is the difference between what the incumbents are doing and what is out there in the open-source world.

Ethan: I've got to get you, me, and Don Sharp on a podcast conversation just to talk through all this stuff — that would be a fun one. Well, okay, Hannes, you've done the work of convincing us that carrying link state in BGP — basically leveraging BGP as a message bus — makes architectural sense. The networks that need this are already going to be running BGP; it's there, it's something you can take advantage of. So effectively it becomes the job of turning on the NLRIs that do this, distributing link state from the appropriate IGP into the BGP process, and then carrying those brand-new, super-shiny link-state NLRIs.
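As an aside, "turning on the NLRIs" is typically a two-knob job: tell the IGP to export its database into BGP-LS, and enable the link-state address family toward the collector. The following is a hedged IOS XR-style sketch from memory — AS numbers, addresses, and instance names are invented, and exact syntax varies by platform and release, so verify against your documentation:

```
router isis CORE
 distribute link-state            ! feed the IS-IS LSDB into BGP-LS
!
router bgp 65000
 address-family link-state link-state
 !
 neighbor 192.0.2.10              ! the controller / route reflector
  remote-as 65000
  address-family link-state link-state
```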
Ethan: Great. We mentioned ALTO, we mentioned PCE in passing — let's get into more detail on these use cases. Why would I do this? Maybe we should start with segment routing. We talked a lot about segment routing on the show over 2019 and 2020. Would you say BGP-LS and segment routing are complementary or competing?

Hannes: I would say to a large degree complementary technologies. With the policy stuff and egress peer engineering there is certain overlap, but by and large, complementary. BGP-LS is just supposed to be the layer that provides end-to-end visibility into otherwise closed topological domains, like IGP areas or ASes — even sub-ASes. BGP-LS really provides that full visibility.

Ethan: I had thought complementary, and for exactly what you just described. So if I'm building out my hop-by-hop segment routing — whether that's a stack of MPLS labels or something I'm doing with v6 — I could potentially, depending on my topology, use the information I gleaned from BGP-LS to build that segment routing instruction stack, then send that labeled packet into the MPLS core and have it go hop by hop based on what was learned from the IGP link state. Does that sound plausible?

Hannes: Absolutely. One of the early use cases — and I mentioned traffic engineering before — was that whole idea of an SDN TE controller, which tries to optimize the utilization of certain egress links or core links. For that we needed to make the whole feedback loop for traffic engineering much tighter, so we needed a faster way to ingest traffic statistics, and we again needed the SDN controller to understand the topology as a whole, to build models around it, and then distribute to certain ingress or core routers sets — stacks — of segment routing labels, to really ensure that links are not over-utilized. Back in 2013, when we started conceiving the technology, that was one of the problems we wanted to solve.

Ethan: That's really interesting, because routing engineers who aren't in the service provider world maybe don't think in terms of link consumption — what is my current link utilization, should I be rerouting traffic because this link is too heavily utilized — because they're using an IGP that computes shortest path first, and that's all you get. It can't take link utilization into consideration and move traffic around. And of course that doesn't work in the service provider world, where the links are very expensive and you have to optimize traffic across each of those very expensive links. So now you're in a situation where, based on time of day, traffic flowing between two routers might be badly congested, and in real time you want to adjust for that and redirect certain flows into other parts of the network. To do that you have to know the topology — so there's BGP-LS helping us know the topology in great detail — and then the controller takes that topology information, plus traffic statistics and congestion and utilization information, and is able to, I don't want to say arbitrarily, but to some degree arbitrarily, change how traffic flows through that network.

Hannes: One of the asks we got in 2013 came mostly from the providers — the network operators sourcing a large amount of content. They basically said: look, the products we offer are by and large free products, so essentially every dollar we save on infrastructure is profit. Normally, in the enterprise world, you would not saturate an individual link beyond fifty or sixty percent, and then you'd start thinking about backups or circuit upgrades. But those guys really wanted to sweat their assets. They wanted to ensure that, irrespective of time of day, you can load links up to ninety or ninety-five percent utilization. And you can only do that — certainly not as a human operator — with an algorithm in a closed feedback loop doing that persistent optimization. It's not router jockeys quickly typing.

Ethan: No, of course not.
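A toy Python sketch of that closed loop, using networkx for the graph math: prune any link the telemetry feed reports above the utilization ceiling, then recompute the path on what remains. The threshold, attribute names, and node names are all invented for illustration — a real controller would run this continuously as BGP-LS updates and traffic statistics stream in:

```python
import networkx as nx

UTIL_CEILING = 0.90  # "sweat the assets": steer away above ~90% utilization

def steer(topology: nx.Graph, src: str, dst: str) -> list[str]:
    # Keep only links still below the ceiling, then route on the TE metric.
    cool = nx.Graph()
    cool.add_nodes_from(topology.nodes)
    cool.add_edges_from(
        (u, v, d) for u, v, d in topology.edges(data=True)
        if d.get("utilization", 0.0) < UTIL_CEILING
    )
    return nx.shortest_path(cool, src, dst, weight="te_metric")

g = nx.Graph()
g.add_edge("r1", "r2", te_metric=10, utilization=0.95)  # running hot
g.add_edge("r1", "r3", te_metric=20, utilization=0.40)
g.add_edge("r3", "r2", te_metric=20, utilization=0.35)
print(steer(g, "r1", "r2"))  # ['r1', 'r3', 'r2'] — avoids the hot link
```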
Ethan: A great use case for that. Are there other big use cases we should talk about, Hannes?

Hannes: One of the other things we saw — at Juniper we obviously had a great love for RSVP; we thought it's a great protocol to solve certain problems, especially when it comes down to reserving bandwidth along a path. And one of the issues we saw with traffic engineering, especially with RSVP, was crossing a routing domain or crossing an IGP area, because the only reporting vehicle to get traffic engineering information back into the traffic engineering database was the IGP. So as soon as you started to segment your large network — consisting of potentially thousands of nodes — into smaller areas, you could only compute the constraints you had for that LSP within the area you were directly connected to. The interesting thing was that the whole traffic engineering architecture described in RFC 4655 actually allows, in the architectural document, synchronizing traffic engineering databases between domains — however, nobody ever formalized a synchronization protocol to do exactly this. And I was really going crazy, going to Adrian Farrel and Yakov and saying: come on, you've been doing this GMPLS stuff, it's right there in the architectural document! "Oh yeah, but we never had time to do it." So at some point we said: well, could BGP-LS fill that void and actually be that synchronization protocol? And if you do this, all of a sudden you can solve that problem. You really can have end-to-end LSPs, you can have constraints, you can exclude certain SRLGs — fate-sharing stuff. All of that starts falling into place.

Ethan: Hmm. I don't know how many people have that exact problem, although that's a really interesting one — but now you've got a means to share that information cross-domain. Pretty cool. Anything else we want to talk about on the use-case side?

Hannes: On use cases, I would say traffic engineering, providing application intelligence, and also building a collector. What you can do here is use the typical scaling machinery of BGP — which is route reflectors — to disseminate the information and scale the routing mesh into every corner of the service provider network. That goes back to the initial argument: by using BGP you can have a single TCP session into the network, rather than hundreds or thousands of GRE tunnels, and still obtain the complete topology.

Ethan: Well, Hannes, I want to get a little bit nerdier and deeper at this point by talking about the NLRI — the different types and what those look like. Now, for those of you who have been listening to the conversation thus far, there's a secret. This presentation you're hearing is probably audio, since that's how most of you consume Packet Pushers Heavy Networking, but we have a YouTube channel, and this presentation is also there. Hannes has been sharing slides with me the whole time — it's totally like cheating, but you can cheat too. Just go over to the Packet Pushers YouTube channel and you can watch the presentation and recording as he and I chat through this. I say that because, Hannes, I believe you've got some slides coming up that will show us some of the NLRIs in detail.

Hannes: Okay, so a little bit of link-state theory first. The three elements that all the link-state protocols — IS-IS and OSPF — have in common are, essentially, links, of course; node information; and some flavor of IP prefixes attached to those nodes. And voilà, those are exactly the three NLRI types we have in BGP-LS.

Ethan: Just a commercial break for you listening to this: if you know OSPF, do "show ip ospf database". That information Hannes just talked about — what the nodes are, the links and the types of links they are, and the IP reachability information — that is what the OSPF database is, in effect, if you can decode what you're looking at on that screen. For OSPF it's a little bit messier, even, because OSPF is a messy protocol; there's more to it, for sure.
Hannes: Yeah. Usually when you look at the router LSA for OSPF, there is a mixture of all three. The router LSA says, "I am router ID X" — that is node information. Then it says, "I have either a point-to-point link to the other guy, or a link to a thing called a network LSA" — that would be link information. And then there's stub network information, which would be IP reachability. So in OSPF, in the router LSA, you find all three classes of that information. In IS-IS it's a bit different, but essentially similar.

Ethan: And those are the three components that make up what gets carried in the different BGP-LS-related NLRIs.

Hannes: Right. It is actually fair to say that the information we encode in BGP-LS is a sort of protocol-neutral representation of nodes, links, and their attached prefixes. Of course there are some OSPF-isms and IS-IS-isms that we sometimes have to pick up, but by and large it's a canonicalized representation of nodes, links, and prefixes.

Ethan: So as a BGP receiver of this information — say I'm some kind of controller; I'm not doing OSPF, I'm not an IS-IS protocol speaker — to be able to interpret that link-state information, I just need to know enough to interpret what I've been given and put together a topology. Maybe I've got some kind of graphing going on in my code, something like that. Is that correct?

Hannes: Exactly. Essentially you have those three databases: you have the node, you have the link, and you have two prefix databases, one for v4 and one for v6. And on all of those databases you have certain sub-attributes. What are sub-attributes? Let's say when a link is reported, you can attach the ID of the remote router, or an SRLG group, or an IGP metric, or a TE metric — further properties of that link. Similar thing for nodes: to node information you can add capabilities, like segment routing SRGBs, or hostnames for ease of management — again, some attributes that go into the node database as well. And of course we have some sub-attributes for prefixes, like metrics, tags, internal/external flag bits, things like that.
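Pulling those three databases and their sub-attributes together, here is a minimal Python sketch of what a BGP-LS consumer might keep. Field names loosely follow the attributes Hannes lists; none of this is a real implementation's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    router_id: str                 # protocol-neutral node identifier
    hostname: str = ""             # ease-of-management sub-attribute
    sr_capabilities: dict = field(default_factory=dict)  # e.g. SRGB range

@dataclass
class Link:
    local_node: str
    remote_node: str               # ID of the remote router
    igp_metric: int = 10
    te_metric: int = 10
    srlgs: tuple[int, ...] = ()    # shared-risk link groups

@dataclass
class Prefix:
    originator: str                # node advertising the prefix
    prefix: str                    # "192.0.2.0/24" or "2001:db8::/32"
    metric: int = 0
    tags: tuple[int, ...] = ()
    external: bool = False         # internal/external flag bit

# One node DB, one link DB, and two prefix DBs (v4 and v6), as described.
topology = {"nodes": {}, "links": [], "prefixes_v4": [], "prefixes_v6": []}
```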
Ethan: Okay, that actually is an important point, because we talked about topology. It's one thing to put the links and the nodes and the reachability information in there, but it's another when you begin to include metrics. And in fact that matters, because if the IGP has converged in a particular way due to those metrics, then that's something the endpoint that cares about this information needs to understand.

Hannes: Absolutely. If I'm the traffic engineering controller, I need to be almost as good as the live routing software, such that I'm able to predict what's going to happen. If I see, let's say, a link with a close-to-infinite metric — okay, good, probably not much traffic is going to flow over that particular link.

Ethan: I don't know, man — now I'm going back to our earlier conversation — but why don't I just run an IGP? Because we're almost there.

Hannes: I would say implementation matters. There are usually two schools of thought. One school of thought says: look, in the IGPs we do not really have very sophisticated transport modules. In fact, we do not really have a way of doing flow control, and flow control is essential for scaling. The receiver needs to tell the sender: "Hey, stop, buddy — I'm already getting too much information, you're overloading me, please slow down." In TCP that's very simple: we just don't read the socket. The TCP kernel machinery says, "The window has been closed; please don't transmit." A very simple form of flow control, built into the protocol. In the IGP we don't have such a thing.
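The "just don't read the socket" point deserves a tiny sketch. In this hypothetical Python consumer, back-pressure costs nothing to implement: while the handler is busy, the kernel's receive buffer fills, the advertised TCP window closes, and the sender blocks — exactly the pacing machinery an IGP would have to build by hand:

```python
import socket

def consume(sock: socket.socket, handle) -> None:
    # Receiver-driven by construction: we read only when we are ready.
    while True:
        data = sock.recv(65535)
        if not data:
            break          # peer closed the session
        handle(data)       # while we're busy in here, unread bytes pile up
                           # in the kernel buffer, the TCP window closes,
                           # and the BGP speaker's writes simply block
```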
Ethan: I know — we had this part of the conversation before, and I don't mean to keep poking you about it. It's just that every time you dig in and find out how complicated it really is and what's being asked, my engineering brain keeps going back: wait a minute, why are we re-solving this problem? Didn't we solve this already?

Hannes: If you want to get your fair dose of reality, have a look at the IS-IS — the LSR working group mailing list in the IETF, where people in the past year have been debating exactly that very same thing: how to get a notion of flow control into the IS-IS protocol. And well, it was scratching and biting; people could just not reach consensus on the right way of doing it. Those are some of the more practical considerations I have. Versus TCP: a twenty-year-old protocol, we know all the back-off mechanisms, the dynamics are pretty well understood — transport problem solved, very well proven, guaranteed delivery, et cetera.

Ethan: Yeah, for sure — and timeliness, in the face of congestion. Well, Hannes, are there things that BGP-LS should not do? If people are hearing this and thinking, "Oh, I've got some things I could do with this," would you warn them away from certain applications?

Hannes: Yes, and we actually had a bit of that conversation back at the IETF meeting in 2015 in Berlin, where some networking engineers jumped up and said: hey, wouldn't it be a cool thing to not just do that protocol-learn message exchange machinery, but actually make it a real link-state protocol? That is: let's automatically discover peers, let's actually flood, let's build some decent flooding machinery into it, let's do SPF route calculation.

Ethan: Not BGP as a message bus merely, but BGP doing full-blown SPF stuff.

Hannes: Exactly. And again I'm asking myself: what is the easier thing to solve? If we add all those flooding and pacing heuristics back into BGP-LS, if we add an entirely new way of calculating routes, if we piggyback that on BGP, that would mean a ton more of address-family-specific handling code.

Ethan: Oh, so much — because the core of how BGP does best-path calculation is so completely different from what happens in an IGP. You're reinventing the wheel. You'd really end up with two different protocols: BGP as we know it, forwarding across the internet, and then something else entirely — a whole different address family that works in a different way.

Hannes: And actually the two protocols, in most of the implementations I've seen, work fundamentally differently. For example, in BGP everything has been optimized toward incremental updates. Whenever something comes in, we run it through the policy and we only process the part that was the incremental update. There is no notion of "let's walk the entire RIB up and down and recompute everything." However, if you look at the IGPs: after we calculate the graph and figure out how distant all the other nodes are, the next step is loading IP prefixes on top of those nodes, and that is pretty much a brute-force operation. We walk all the nodes, see what prefixes they originate, put them in a global sorted list, then do a walk to figure out the best prefix originator, and notify our forwarding table about that change. Now, if you did such a thing in BGP, you would combine two very different paradigms —

Ethan: Right — with the scalability challenges that would come with that.
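The contrast is easy to show in miniature. A hypothetical Python sketch of the two paradigms — BGP touching only the delta that arrived, versus an IGP's brute-force walk over every node and prefix after SPF (all names invented):

```python
def bgp_incremental(rib: dict, delta: dict) -> None:
    # BGP style: run policy over just what changed; never walk the full RIB.
    for prefix, path in delta.items():
        if path is None:
            rib.pop(prefix, None)   # withdrawal
        else:
            rib[prefix] = path      # new or replaced path

def igp_prefix_load(nodes: dict) -> dict:
    # IGP style: after SPF has set each node's distance, walk ALL nodes,
    # gather every prefix they originate, and keep the closest originator.
    best: dict[str, tuple[int, str]] = {}
    for name, info in nodes.items():
        for prefix in info["prefixes"]:
            candidate = (info["spf_distance"], name)
            if prefix not in best or candidate < best[prefix]:
                best[prefix] = candidate
    return best  # the resulting changes are then pushed to the FIB
```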
Hannes: Exactly. So that's where I said: look, guys, Jan Medved and myself were crazy enough to conceive the protocol, but this is the line. Use it as a topology collector, use it as an information-gathering layer — that's fine. But don't use it as a protocol doing active computation of routes. That problem has already been solved, and there are far better tools for it.

Ethan: And has that argument been put to rest now? Are we done with that? I mean, BGP-LS, as I said at the top, has some years on it.

Hannes: I think there were actually some data center operators playing a bit with the idea, but to my knowledge it has never gotten deployed.

Ethan: Gotta have that BGP — I keep hearing about that.

Hannes: Usually the proof of whether it's a good or a stupid idea is in production, no?

Ethan: Right. Okay, so one more question here, Hannes. I did some homework — I don't even know if you would know this off the top of your head — but what BGP implementations support BGP-LS? Do you happen to know? I have a small list if you don't have that information at the top of your head.

Hannes: I have to say, pretty much all the early interop testing we did was against Cisco, who were at the time the only ones, quote-unquote, crazy enough to follow us on that path. So there was only Junos 14-something, and I think a 2014 release of IOS XR, that supported it. After that I have not really kept track.

Ethan: Well, here's what I found just doing some internet searching. Cisco IOS XE as of 16.4.1 has a flavor of BGP-LS, and IOS XR since 5.2.2, so that's quite a ways back. Junos 14.2 — so again, as you were saying, quite a ways back. Free Range Routing (FRR): that is reportedly in progress; that's what I found on their GitHub as of the 23rd of November 2020, when I pulled that information. ONOS, which is an SDN controller — BGP-LS was listed as an active project for ONOS as of 10 October 2020, but I think there's been working code for quite a while; there's a demo of it working sitting out on YouTube from back in 2015, so that code seems to have been in ONOS for quite some time. And reportedly the OpenDaylight controller can also deal with BGP-LS. Thinking about who uses ODL and who uses ONOS, it makes sense that BGP-LS code would be in there.

Hannes: What you're mostly quoting here is router implementations, but there is also a ton of traffic engineering controllers, publicly available or homegrown. I think Juniper NorthStar has a BGP-LS implementation; not sure about Cisco's Cariden, but it wouldn't surprise me if it's in there as well.

Ethan: I did a little digging and was only able to go so far. Another comment: Diptanshu Singh put a summary of BGP-LS and how it works on packetpushers.net, and I believe at the bottom of that article he'd written some Python that could interpret the BGP-LS NLRIs and put a simple graph up on the screen based on the information he was parsing from them.

Hannes: It's not that difficult. Once you have a bit of experience encoding a link-state database and have a sort of canonicalized representation — which we did at RtBrick as well — one of the functions we've built is plotting the link-state graph. We just use the Graphviz library, which has all the primitives, and it produces really nicely rendered output, which is really useful for troubleshooting complex graphs. So it's not that hard. Think about it: BGP — just open a TCP session on port 179, do an OPEN message saying "I only want to speak BGP-LS," send the keepalive, and then just wait for the update stream coming in. That's it. It doesn't get simpler than that. You can do that in five, six hundred lines of code.

Ethan: Just five or six hundred! Which, for people who are new to coding, sounds very intimidating — but no, it isn't, in all fairness. Programming is like any other task: you break down compartmentally what you're trying to do, then go get those tasks done one at a time. And you're right, there are so many libraries that handle so much of that for you; it's not overly complex to get something like that done.
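For flavor, here is a toy Python sketch of the "five or six hundred lines" idea, boiled down even further: connect to port 179, send an OPEN advertising only the BGP-LS address family (AFI 16388 / SAFI 71, multiprotocol capability code 1), send a KEEPALIVE, and read whatever arrives. The peer address, AS number, and router ID are invented, and a real speaker would also validate the peer's OPEN, refresh keepalives on a timer, and handle partial reads:

```python
import socket
import struct

AFI_LS, SAFI_LS = 16388, 71       # BGP-LS codepoints (RFC 7752)
MARKER = b"\xff" * 16             # all-ones marker on every BGP message

def bgp_msg(mtype: int, body: bytes = b"") -> bytes:
    return MARKER + struct.pack("!HB", 19 + len(body), mtype) + body

def open_msg(my_as: int, router_id: str, hold_time: int = 90) -> bytes:
    cap = struct.pack("!HBB", AFI_LS, 0, SAFI_LS)     # MP cap: AFI/rsvd/SAFI
    cap = struct.pack("!BB", 1, len(cap)) + cap       # capability code 1
    opt = struct.pack("!BB", 2, len(cap)) + cap       # opt param type 2
    body = struct.pack("!BHH4sB", 4, my_as, hold_time,
                       socket.inet_aton(router_id), len(opt)) + opt
    return bgp_msg(1, body)                           # type 1 = OPEN

sock = socket.create_connection(("192.0.2.1", 179))   # hypothetical peer
sock.sendall(open_msg(65000, "192.0.2.10"))
sock.sendall(bgp_msg(4))                              # type 4 = KEEPALIVE
while True:
    header = sock.recv(19)                            # toy: assumes full reads
    if len(header) < 19:
        break
    length, mtype = struct.unpack("!HB", header[16:19])
    body = sock.recv(length - 19) if length > 19 else b""
    if mtype == 2:                                    # type 2 = UPDATE
        pass  # hand MP_REACH_NLRI payloads to your link-state parser here
```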
Ethan: Well, Hannes Gredler, thank you very much for joining us on Heavy Networking today. You're a book author and various other things, so tell the world how they can follow you, read your stuff — anything you'd like to share.

Hannes: If you want to reach out, please feel free to do so. I'm on Twitter — hannesgredler — and also on LinkedIn for my professional life. I'm the CTO of a startup; we're doing, as you can imagine, routing and BNG software for the next-generation central office. If you're interested in that, please send me a DM.

Ethan: Excellent. And you have a book out there, The Complete IS-IS Routing Protocol, I noticed — linked in our show notes here. When did you publish that one?

Hannes: Oh, that's an old one. That was actually in the early days of Juniper, 2002 or 2003. Since everything was written with Cisco CLI, we wanted to get some literature out there about how our implementations work. At the time I was a professional services engineer doing a whole lot of teaching, and I figured out that a lot of the teaching and slide material was not really adequate — causing more question marks above people's heads than providing sound explanations. So after some time I was doing my own little illustrations and slides, testing whether the question marks went away, and ultimately I had a really nicely flowing deck. Then one of the instructors said: hey, you should talk to Walter — Walter Goralski, who later became my co-author for the book — you should really write a book out of this. And we did it in just three months; it had essentially been an IS-IS course covering a bit of Cisco and Juniper.

Ethan: At one time I co-authored a book with Russ White, and I'll tell you, Russ wrote most of that book — I contributed seven chapters or something — and it was not three months for me to get that work done. Writing a book is hard, so props to you.

Hannes: Well, Walter was cruel to me — an absolute slave driver. He just said: "Look, Hannes, you get it in on time. Provide me the content — illustrations, high-level descriptions — and I'll make it good."

Ethan: That's perfect, if you can get someone like that. Wow. Well, Hannes, again, thank you for joining us on Heavy Networking today. If you're listening to this and you want to see some of the slides Hannes shared as we were talking, go to the Packet Pushers YouTube channel and search for BGP-LS; this video should come up. Have a watch and see what's there — it'll help you visualize some of this if you're new to BGP-LS and trying to get it into your brain. The slides are quite detailed and informative and should help put some of the pieces together for you. If you like this kind of content — you're a network engineer trying to educate yourself and keep up with what's going on in the industry — we have many more of our fine, free technical podcasts, plus our community blog, all at packetpushers.net. We're on Twitter if you want to follow us — we're @packetpushers — and we're also on LinkedIn. And last but not least, remember that too much networking would never be enough.
Info
Channel: Packet Pushers
Views: 1,362
Keywords: packet pushers, data networking, career development, BGP LS, link state, networking, routing, Hannes Gredler
Id: T8okh6pE6lk
Length: 49min 44sec (2984 seconds)
Published: Fri Dec 04 2020