Build Your Own Envoy Control Plane - Steve Sloka, VMware

Captions
Hi everyone, I'm Steve Sloka, and today I'm going to walk you through how to build your own Envoy control plane. We're going to introduce some core Envoy concepts, look at how those concepts map to different ways of configuring Envoy, and then go ahead and build our own control plane. We're going to use Go for our source code, and there will be lots of examples available for you at the end. Let's get started.

A quick bit about me: I'm a maintainer of Contour at VMware. Contour is an open source ingress controller for Kubernetes, and it's also a CNCF project. Enough about that, let's dig into the project.

So what is Envoy? Envoy is an open source edge and service proxy designed for cloud native applications; that's a quote from the envoyproxy.io website. We use it all over the place: Contour uses it as its data-path component, other projects use it as well, and it's becoming a more and more popular service proxy in the industry. It's used by a bunch of companies and leveraged under the hood by a bunch of different projects; these lists are taken from the Envoy community website, so I'm sure there are many more.

Let's dig into some core concepts and terminology in Envoy. The first thing I want to talk about is the difference between upstream and downstream. Any request that's in Envoy and routes somewhere, to some endpoint or other place, goes to what we call the upstream. Any request that comes to Envoy from outside comes from what we call the downstream. So requests flow from downstream to Envoy, and then from Envoy to an upstream. It's important to get this right, at least
for the next 20 minutes, just so we're all on the same page: when Envoy says upstream or downstream, this is what it means.

Okay, we talked about downstream; the first thing to introduce is a listener. A listener is a named network location, and it's what downstream clients connect to. This can be a TCP or a UDP connection, and you can also apply filters to it and chain them together. Filters in Envoy are sort of the extension magic: they let you take these TCP connections and do different things with them. In our demo we're going to build out an L7, or HTTP-type, network proxy, which is basically what Contour is. With that, we'll take the TCP connection, run it through a filter, and that filter will turn it into HTTP headers, requests, and responses, which makes it easier to do this kind of L7 routing. There's a lot more to filters and filter chains, which you can dig into on your own; I just want to introduce the concept here.

Next we have routes. Routes are the results of listeners: listeners call out to different routes. In our L7 model we're going to have a bunch of different virtual hosts that we can route to. For example, stevesloka.com and vmware.com could be two different virtual hosts, and from there I can route them to different places, modify the headers on each, and do a bunch of other things. Again, this is super quick, but the point is that these things called routes are what decide where traffic goes within your cluster.

Once you have a route, you route to a certain place, and we're going to route to clusters. A cluster in Envoy is a group of logically similar upstream hosts.
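Putting listeners, routes, and clusters together, a minimal static Envoy config can make the relationships concrete. This is a hedged sketch, not from the talk; the names, address, and ports are illustrative:

```yaml
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 9000 }
    filter_chains:
    - filters:
      # The HTTP connection manager filter turns the raw TCP stream
      # into HTTP requests/responses, enabling L7 routing.
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          route_config:
            name: local_route
            virtual_hosts:
            - name: echo
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: echo }
  clusters:
  - name: echo
    type: STRICT_DNS
    load_assignment:
      cluster_name: echo
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: echo.internal, port_value: 9101 }
```

The chain reads top to bottom: a downstream client hits the listener, the filter chain parses HTTP, the route config matches the request, and the matched route forwards to the cluster's upstream endpoints.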
I always think of this as Kubernetes: Kubernetes has Services, and to me a Service is like a cluster, where it's just a name that points to a set of endpoints. Envoy is the same way: a cluster has members, and members are discovered via service discovery. There are a few ways you can implement that service discovery.

The first way is static: you define a cluster, define all of the endpoints that exist in that cluster, and move along. You can also use DNS, either strict or logical. They're similar in that they both look up endpoints asynchronously, but strict DNS uses all the endpoints that resolve from a DNS query. Say three addresses come back from a query; Envoy will then say there are three members in this cluster and load balance across those three endpoints. Logical DNS similarly looks up the IP addresses, but it uses only the first DNS entry as its result, so it proxies out to that one rather than load balancing over all of the endpoints in the backend. There's also original destination, which I'm not going to introduce today, and custom service discovery, which we'll skip as well.

The one I really want to focus on is the endpoint discovery service, or EDS. This is the first xDS-style protocol we'll introduce. For clusters, we can statically define the members, or we can have the members be defined by another lookup service, kind of like how DNS works: a service inside Envoy's configuration that we can feed the members of that cluster. EDS is what we're going to use in our demo today.
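The EDS variant shows up in the cluster definition itself. As a hedged sketch (the cluster name `echo` matches the demo; `xds_cluster` is a conventional name for the bootstrap cluster that points at the management server), an EDS-backed cluster looks roughly like:

```yaml
clusters:
- name: echo
  type: EDS                      # members come from the management server, not this file
  eds_cluster_config:
    eds_config:
      resource_api_version: V3
      api_config_source:
        api_type: GRPC
        transport_api_version: V3
        grpc_services:
        - envoy_grpc:
            cluster_name: xds_cluster
```

Compared with the static variant, `load_assignment` is gone; Envoy subscribes to EDS over gRPC and receives the endpoint list dynamically.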
Now that I have these core concepts, I need a way to configure Envoy with these types: routes and listeners and clusters and so on. There are a couple of ways to do this. The first is a static file: I build a file, program in all of the different routes and listeners and everything I want, and pass that off to Envoy; it loads that file in and serves traffic based on it. I can also point Envoy at a place on the filesystem, telling it to look in a directory for its files, and it will load them dynamically from there. I can use REST endpoints against my management server, or I can use gRPC, which is what we're going to do today. gRPC has an advantage over REST: REST has to poll, which is slow and has a lot of overhead because it's always polling for changes, while gRPC is a rich, long-lived connection, so I can stream changes bidirectionally very easily. That's what we're going to implement today in our xDS example.

There are two ways to implement these management servers: the first is called state of the world, and the second is called delta. State of the world says: if I have nine clusters in my configuration and I add a tenth, the management server passes all ten clusters down to Envoy and says, here's the state of the world, this is every cluster I know about, so reconcile your configuration against it. It works like that for all the different resource types. Delta works like you might imagine: instead of sending all ten clusters down to Envoy, it sends just the one cluster that was added. Same for deletions: delta sends that one deletion down, whereas in the state-of-the-world model a deleted cluster is simply omitted from the list, so only nine clusters get sent instead of ten, and Envoy says, this cluster is missing, and deletes it from its own configuration.
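To make the state-of-the-world vs. delta distinction concrete, here's a small illustrative Go sketch (not from the talk's code) that computes what a delta update would carry, given two state-of-the-world cluster sets:

```go
package main

import (
	"fmt"
	"sort"
)

// deltaUpdate compares two state-of-the-world snapshots (cluster name ->
// resource version) and returns what a delta protocol would send: only the
// resources that were added or changed, plus the names of removed resources.
// A state-of-the-world server would instead resend the entire curr map.
func deltaUpdate(prev, curr map[string]string) (upserts, removals []string) {
	for name, ver := range curr {
		if prev[name] != ver {
			upserts = append(upserts, name)
		}
	}
	for name := range prev {
		if _, ok := curr[name]; !ok {
			removals = append(removals, name)
		}
	}
	sort.Strings(upserts)
	sort.Strings(removals)
	return upserts, removals
}

func main() {
	sotw1 := map[string]string{"echo": "v1", "foo": "v1"}
	sotw2 := map[string]string{"echo": "v2"} // echo changed, foo was deleted

	up, rm := deltaUpdate(sotw1, sotw2)
	fmt.Println("upserts:", up, "removals:", rm)
}
```

With these inputs, the delta carries one upsert (`echo`) and one removal (`foo`), while a state-of-the-world push would resend the whole single-cluster set.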
All right, let's talk about xDS. We've circled around it a little, but we've introduced these different things called listeners, routes, clusters, and endpoints, and each one has its own discovery service protocol we can implement. Over gRPC, each one returns a list of listeners, routes, clusters, or endpoints. If you strip off the first letter of each service's name and replace it with an x, you can see where the name xDS comes from, because there's a whole family of these protocols; these are just four that I'm introducing, there are more, and I encourage you to go research them afterwards. My main point is that this is where you get xDS from.

There are also different variants of xDS. Just like we talked about, we have state of the world as well as incremental, and we can decide how to set up our gRPC streams. The default way is a gRPC stream per resource type: for listeners, clusters, routes, and endpoints I'll have one stream each, so four total. Alternatively, I can implement ADS, the aggregated discovery service, which lets me have basically one stream for all those resource types and send every type of resource over that single stream. For today's example we're going to use the simple way: a stream per resource type, with state-of-the-world updates.

Having all that background, and that's a lot of concepts, let's put it all together and build an example, and see what this would look like if you had to build it yourself. The diagram here is what we're actually going to demonstrate today. First we obviously need an Envoy, and Envoy gets passed a bootstrap config. This is a
static file that we pass to Envoy, and what's in that bootstrap config is essentially anything you want to preload into Envoy: static clusters, static listeners, any kind of static resource. We can also reference dynamic resources, which is what we're going to do today. Dynamic resources basically say: instead of loading my listeners and clusters and all those xDS types statically, load them dynamically, and they'll come from this xDS server here, the box in green. What we'll do is create a static cluster which points to this xDS server, so when Envoy starts up it has enough information to go look at that cluster and pull down all of its configuration dynamically.

Now, the xDS server needs some source information: what is its source of truth for the routes and clusters and endpoints it should configure Envoy with? This varies depending on your implementation. Contour runs on Kubernetes, so Kubernetes is its source of information: Contour watches things like Services for endpoints, Secrets, and Ingress objects, turns all of that into Envoy configuration, fills in the Envoy xDS caches, and passes those down to Envoy. That, in a nutshell, is what we're going to do. Let's go ahead and poke around.

The first thing I want to look at is this thing called go-control-plane. go-control-plane is a project in the Envoy organization, and it's a Go implementation of the data plane API. The data plane API in Envoy is a bunch of protobufs that represent all those objects we just described in the last couple of slides, but here they're all Go structs, so you can import them into your Go project and use them from there. The second thing this project gives you
is a sample xDS server implementation, which we're going to utilize today. Instead of having to build out all the gRPC connections and all the extra overhead and routing yourself, you can leverage the implementation in this project to build your own control plane if you like. You can read through a bunch of things here, but down at the bottom there's this example server; shout out to the user who helped build it out. I know we used to have to read the unit tests to find a good example, so this is great: as it says, it gives you enough information to spin up your own server.

I'm going to show you basically that same example. I took it and reused it, but I added a little more dynamic configuration: instead of being statically configured, my configuration comes from a YAML file, which we'll look at next. This is my project; I'll have the link at the end of the slides if you want to check it out yourself. In here I have a main.go, and to get started we create a cache, a snapshot cache. This cache is the core of this go-control-plane xDS server: the snapshot cache holds all of the different snapshots we've passed down to Envoy. Remember, we're sending state-of-the-world configuration to Envoy, so what we'll do is build up our configuration from our source information, build out a list of listeners and routes and clusters and endpoints, create a snapshot out of that, and pass that snapshot down to Envoy, which will load it in. Whenever any of those objects change, we'll generate a new set of caches, make a new snapshot, and pass that down to Envoy, saying, here's a new version, go load this in, and Envoy will
parse that in. That's essentially what we're trying to do here.

Like I said, I have this config file that we created, and it's just there to help us configure this server a little more dynamically. I guess this is my source information: for Contour and others it could be Kubernetes, and depending on your environment it could be whatever it needs to be, but for me it's going to be this static file. To watch this file for changes, we set up a watcher. We first created the snapshot cache; now we create a watcher, and any time the file changes we get a notification back over a channel. That's all we're doing: we get a callback saying the file changed, and that's our signal to go rebuild a new configuration. Now that we have that set up, we start our xDS server: we build the server out and pass it the cache, the snapshot cache we created back on line 56.
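As a rough sketch of that wiring, assuming go-control-plane's v3 packages (this mirrors the shape of the upstream example server, not the talk's exact code), the cache and server setup looks something like:

```go
package main

import (
	"context"
	"log"
	"net"

	clusterservice "github.com/envoyproxy/go-control-plane/envoy/service/cluster/v3"
	endpointservice "github.com/envoyproxy/go-control-plane/envoy/service/endpoint/v3"
	listenerservice "github.com/envoyproxy/go-control-plane/envoy/service/listener/v3"
	routeservice "github.com/envoyproxy/go-control-plane/envoy/service/route/v3"
	"github.com/envoyproxy/go-control-plane/pkg/cache/v3"
	serverv3 "github.com/envoyproxy/go-control-plane/pkg/server/v3"
	"google.golang.org/grpc"
)

func main() {
	// The snapshot cache holds one state-of-the-world snapshot per node ID.
	snapshotCache := cache.NewSnapshotCache(false, cache.IDHash{}, nil)

	// The xDS server serves whatever is in the cache over gRPC streams.
	srv := serverv3.NewServer(context.Background(), snapshotCache, nil)

	// Register one discovery service per resource type (no ADS here).
	grpcServer := grpc.NewServer()
	clusterservice.RegisterClusterDiscoveryServiceServer(grpcServer, srv)
	endpointservice.RegisterEndpointDiscoveryServiceServer(grpcServer, srv)
	listenerservice.RegisterListenerDiscoveryServiceServer(grpcServer, srv)
	routeservice.RegisterRouteDiscoveryServiceServer(grpcServer, srv)

	lis, err := net.Listen("tcp", ":9002") // the port used in the demo
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(grpcServer.Serve(lis))
}
```

Once this is serving, pushing new configuration is just a matter of calling `SetSnapshot` on the cache for the node ID; the server handles the stream bookkeeping.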
Now that we have that, we'll go ahead and run the server. We pass it a port, which for us comes from a flag and defaults to 9002. So let's run this. Okay, the server is running, and you can see we're listening on port 9002 like we said. This debug line says: here's the snapshot we're going to serve up, and the information in it matches our configuration file. If you look in here, you'll see this cluster named echo, and here, let me move myself over for you, here in the file is that same cluster named echo. If we dig in, we see a listener_0 in the snapshot and a listener_0 in the file. So the snapshot came from this file, and any time we change the file, a new snapshot is generated.

Let's take a look at how that happens. Here in this processor you'll see this process-file function, and what gets passed in is the file that changed: the notifier sends us an event whenever the file changes and tells us which file got updated. The first thing we do is parse that YAML into a Go struct; out here I have this api package, which defines our struct in Go. Back in the processor, after we parse that in, we generate Envoy types out of that YAML file. We loop through every listener that exists and build a set of listeners, passing in the name, the routes, the address, and the port; from there we build a set of routes. So here's our listener cache and here's our route cache; then we build out our cluster cache, adding all the clusters, and then we add all the endpoints. By the time we get to this point, we have a cache full of listeners, endpoints, routes, and clusters; we have generated our state of the
world cache, our point-in-time view. Now we generate a new snapshot. We pass in a snapshot version, which just increments by one from whatever the previous version was, and we pass in the contents. This cache-contents step converts my local types into Envoy xDS types. If we look at something like listeners, you can see we have this resources package, and it builds a new resource of type listener.Listener. These are actual Envoy types: this is the listener protobuf, so now we're digging into go-control-plane, and this Go struct lets us create Envoy listeners. So we build out a listener, and one interesting thing here is the filter chain: this is where we add that HTTP connection manager, which gives us the L7 router. The same way, in here we have makeRoute, which returns a RouteConfiguration; each route has a match and an action. These are Envoy specifics, but this is building out all the different Envoy protobufs. We create endpoints here, and we create clusters.

So the processor builds a cache out of that YAML file and generates a new snapshot from it, converting all those local types into Envoy types. We then make sure the snapshot is consistent, meaning: do all the routes reference proper clusters, and so on? If it is consistent, we create the new snapshot and set it for Envoy; this is where we actually tell Envoy, hey, go load in this new snapshot. Envoy then processes the update against all of its local caches to pick up the new configuration.
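The version bump and consistency check can be sketched with a simplified, stdlib-only Go model (illustrative only; the `Route`, `Cluster`, and `Snapshot` types stand in for the real Envoy xDS protobufs, and this is not the talk's actual code):

```go
package main

import (
	"fmt"
	"strconv"
)

// Route and Cluster stand in for the Envoy xDS types the processor builds.
type Route struct{ Name, Cluster string }
type Cluster struct{ Name string }

// Snapshot is a simplified state-of-the-world snapshot.
type Snapshot struct {
	Version  string
	Routes   []Route
	Clusters []Cluster
}

// nextVersion increments the numeric snapshot version by one, as described
// in the talk: each rebuild gets a version one higher than the previous.
func nextVersion(prev string) string {
	n, _ := strconv.Atoi(prev)
	return strconv.Itoa(n + 1)
}

// consistent checks that every route references a cluster that actually
// exists, mirroring the consistency check done before handing the snapshot
// to Envoy.
func consistent(s Snapshot) error {
	known := map[string]bool{}
	for _, c := range s.Clusters {
		known[c.Name] = true
	}
	for _, r := range s.Routes {
		if !known[r.Cluster] {
			return fmt.Errorf("route %q references unknown cluster %q", r.Name, r.Cluster)
		}
	}
	return nil
}

func main() {
	snap := Snapshot{
		Version:  nextVersion("1"),
		Routes:   []Route{{Name: "default", Cluster: "echo"}},
		Clusters: []Cluster{{Name: "echo"}},
	}
	fmt.Println("version:", snap.Version, "consistent:", consistent(snap) == nil)
}
```

Only snapshots that pass the check get set on the cache; an inconsistent one (say, a route pointing at a cluster that was deleted from the YAML) is rejected rather than pushed to Envoy.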
Excuse me. With all that done, the next thing is to spin up Envoy. Let's create a new tab and take a look at what this environment looks like. As I mentioned, we need this bootstrap YAML, and this bootstrap config loads in static resources. We talked about loading a static cluster pointing to our xDS server, and that's this: localhost:9002, where our xDS server is running, so we tell Envoy to create a cluster pointing there. Once we have that cluster, we set up all the xDS endpoints: our cds_config uses the gRPC type and points to the xds_cluster created in the previous step, so Envoy now knows it has dynamic clusters and where they come from, and it's similar for LDS, so listeners come from that same cluster. We set a cluster name and an id, and down here we expose an admin web page, which is helpful to debug Envoy and look at all the information loaded into that Envoy instance.

Okay, now that this is in place, we start Envoy: we find the binary locally on our machine and pass in that bootstrap YAML file using -c bootstrap.yaml. So I run hack/start-envoy, and the first thing you'll see when Envoy starts up is that it logs all the extensions that are compiled in; we're using the upstream build here, so by default all the different extensions are automatically compiled in and logged for your information. Down here you can see that it spun up the runtime layer and loaded one static cluster.
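A hedged sketch of such a bootstrap file (node names are illustrative; the ports match the demo, 9002 for the xDS server and 9003 for the admin page):

```yaml
node:
  id: test-id
  cluster: test-cluster

dynamic_resources:
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }
  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }

static_resources:
  clusters:
  - name: xds_cluster
    type: STRICT_DNS
    # xDS runs over gRPC, so the connection to the management server
    # must be HTTP/2.
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 9002 }

admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9003 }
```

The only static resource is the xds_cluster itself; everything else arrives over CDS and LDS (which in turn pull in EDS and RDS resources as needed).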
That static cluster is the xDS server we referenced earlier. Then you can see that we connected to our server, and we added a cluster and a listener: here's listener_0, which matches our config, and there's the echo server, this update for cluster echo, so that got added along with my listener. So we've connected to our server.

Let's verify our configuration real quick. We go look at localhost:9003 and refresh. Here we are, and what we can see is a set of listeners: listener_0 on 0.0.0.0:9000, which matches our config, 0.0.0.0 on port 9000. We can also see clusters: we have this echo cluster with two endpoints, .244 on 9101 and .244 on 9102, which again matches our configuration, the cluster called echo with .244 on 9101 and 9102. We can also look at the config dump, which shows all the running configuration in Envoy: again all the extensions that are compiled in, and a bit further down, the static configuration we loaded. Here we see static_resources with our static xds_cluster pointing to port 9002, and then our dynamic resources, the lds_config pointing at gRPC over that same static cluster we created. Coming down a little further we can see our dynamic clusters, which got loaded dynamically through our management server. Here's that cluster called echo, coming from the xDS server, and our endpoints are coming from EDS, which I mentioned back in the service discovery slide: EDS is the source of our echo cluster's members, from that same xDS server. And here's our dynamic listener, because we loaded that dynamically as well, and here we're loading in those filters: here's
our filter chain, loading in this HTTP connection manager, which gives us that L7 proxy routing.

Okay, this is enough to get running, so let's query it and see what happens. If I curl localhost:9000 I get a response, and you'll see this is that simple echo server we have running; it responds with its hostname. We have two endpoints, so if we query again we should get a different one: maybe 682, and here's 3fc, so there are our two. I can run the curl in a loop, every second, and we should just load balance between those two endpoints: 682, 3fc, and so on. If I come in here and remove an endpoint, then as soon as I hit save the configuration changes, our callback fires, we create a new configuration and a new snapshot, and we pass the snapshot off to Envoy, which updates. As soon as I save, you'll see Envoy update down here, and if I come back to our window, now we only ever hit the same endpoint, 682, because now there's one endpoint in our cluster. Cool.

Now we can add another cluster and a route if we like: we copy this, call it new, and put it on 9103 and then maybe 9104.
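The alternating responses in the curl loop come from the cluster's load balancing across its healthy endpoints (round robin being Envoy's common default). An illustrative stdlib-Go sketch of the idea, not Envoy's actual implementation:

```go
package main

import "fmt"

// roundRobin picks endpoints in rotation, the way the demo's curl loop
// alternates between the echo cluster's two endpoints.
type roundRobin struct {
	endpoints []string
	next      int
}

// pick returns the next endpoint in rotation. Removing an endpoint (as in
// the demo, by editing the YAML) shrinks the slice the next snapshot is
// built from, so traffic collapses onto the remaining endpoint.
func (r *roundRobin) pick() string {
	e := r.endpoints[r.next%len(r.endpoints)]
	r.next++
	return e
}

func main() {
	lb := &roundRobin{endpoints: []string{"10.0.0.244:9101", "10.0.0.244:9102"}}
	for i := 0; i < 4; i++ {
		fmt.Println(lb.pick())
	}
}
```

With two endpoints the picks alternate 9101, 9102, 9101, 9102, matching the 682/3fc alternation seen in the demo.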
Then we'll add a new cluster here, and a route pointing to the cluster new; we'll call this one /foo, so that's our prefix. We can route on / or on /foo; again, our goal here is an L7 load balancer. We save that, our file-changed callback fires, we create a new internal state-of-the-world configuration and a new snapshot, and pass that back off to Envoy. Now if we do our curl, nothing should have changed for the / endpoint, but if we curl /foo, we now get two different endpoints. All right, there we go: we've got /foo and / running.

I think that's all we have for slides. If you're interested in learning more, there are some resources here: the top one is the sample we just looked at, the xDS server I wrote, which derives from the go-control-plane example but adds the YAML file parsing; go-control-plane itself is another place to look, and obviously envoyproxy.io. Again, I'm Steve Sloka; please reach out with questions, I'm happy to answer and discuss more. I know this was quick, but hopefully you've got a good understanding of how Envoy works and how you can build your own management server. Thank you.
Info
Channel: CNCF [Cloud Native Computing Foundation]
Views: 4,038
Id: qAuq4cKEG_E
Length: 24min 27sec (1467 seconds)
Published: Fri Dec 04 2020