Multi-Networking Kubernetes Containers with CNI

Captions
My name is Dan Williams and I'm a member of Raj's team, which just presented before this, the networking services team, and this presentation is multi-networking Kubernetes containers with CNI. I'd like to give a special shout-out to Doug Smith, from whom a lot of the slides are adapted; part of this was also a presentation that we gave at KubeCon Seattle back in December, so thanks a lot to Doug for helping out with not just the main event of this presentation, which is Multus, but also a lot of the slides too. Beyond Doug, I'd also like to thank Tomofumi from Red Hat, who is on the partner engineering team, and also Kural, who is from Intel, is a member of the Multus community, and has done a lot of work on Multus and the network plumbing working group as well. And then of course there's the upstream Multus community; it's an open-source project on GitHub and there are a lot of people who collaborate on it, so if you're interested, join the community and help out.

With that said, we're going to go through a couple of things today. We're going to talk about exactly what all the acronyms I'm using today mean; we're going to talk a little bit about the network plumbing working group, which is a group that we started upstream in Kubernetes, and why that's relevant to this; we're also going to talk about how we take the stuff that the network plumbing working group developed and bring it into practice in Kubernetes; and finally I'll talk a little bit about what's next, what's coming up in the plumbing working group, CNI, and Kubernetes networking itself.

So, some of the acronyms and other things we're going to talk about today. Kubernetes: hopefully everybody knows what it is, a container orchestration system that's fairly popular today. When you have a container orchestration system like Kubernetes, you probably want the containers to talk to each other and you probably want the containers to talk to the outside world, and that's where something like CNI comes into play, the Container Network Interface. That is basically a specification and a set of reference plugins that allow network plugins to set up and tear down container networking, and it's going to be an integral part of the presentation today because the CNI interface is how all of these pieces fit together. Also, a pod: that's Kubernetes terminology for a set of related containers that share a network configuration, network namespace, and setup. Then a CRD, custom resource definition: that's another Kubernetes term, and it's basically a way to describe an object in the Kubernetes API that anybody can create. It's not an official part of the Kubernetes API, but anybody can create one, and that's one of the ways that Kubernetes enables extensibility of the system. CRDs only showed up fairly recently, in the last year or two, but they've already created a huge explosion in how third parties and other components of the Kubernetes ecosystem that are not officially part of the core projects interact with Kubernetes itself and with the rest of the ecosystem. And finally the CRI: the CRI is an interface that Kubernetes developed, an abstraction layer between Kubernetes itself and whatever actually runs the containers and sets up the network namespace for them. One of those, you might have heard of, is Docker; there is a CRI implementation for Docker, but up and coming is also the CRI-O project, and there is a CRI implementation for CRI-O as well. In fact, CRI stands for container runtime interface; I forget what the O stands for.
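To make the CNI piece concrete, here is a minimal sketch of what a CNI network configuration might look like on a node; the network name, bridge name, and subnet are purely illustrative values, not anything from the talk.

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The container runtime, reached through its CRI implementation, hands a configuration like this to the named CNI plugin, and the plugin does the actual interface creation and IP assignment when a pod is added to or removed from the network.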
But there are many talks about CRI-O this weekend as well, so if you're interested in more of the details and the guts of container networking, especially in Kubernetes land, check out those talks.

So what's the general problem, why are we even talking about this today? Well, the first problem is that Kubernetes really only has one network interface, one network, that a container can be connected to. This is perfectly fine for a whole ton of different use cases; if you just run nginx as a web server, this is great, it works fine. However, that's not the only use case people have, and over the past couple of years we've been seeing a lot of use cases, from a lot of Red Hat customers but also a lot of interest upstream, around more flexible networking for containers, things that don't really fit well into the Kubernetes model. Very high bandwidth, like media streaming applications that need to push tens of gigabits per second out of a container. Specific latency requirements; a lot of the default networking plugins for Kubernetes don't really have strict guarantees, or the ability to provide guarantees, for these things. Then segregated networks and legacy networks. Those two come up if you have, for example, old legacy things like databases that aren't really in a containerized model: those might be over here on this network, they might have a particular IP address you need to talk to, and that network is segregated due to privacy concerns or legal requirements. You want your container to be able to talk to it, but you can't hook that thing's network up to the rest of your container cluster, so maybe you have a physically segregated network that you have to connect containers to in order to reach that resource. These are a couple of things that don't fit quite as well into the Kubernetes networking model as microservices and, you know, the simple web servers, databases, and web apps.

So what's the network plumbing working group? That is a group that we formed to tackle some of these problems. We worked with a number of other upstream partners and groups, Intel as well; Red Hat helped form this group a little over a year and a half ago, and its focus is on enabling some of these use cases that might require multiple network attachments per pod, enabling them in Kubernetes, but at least initially in a way that does not modify the official Kubernetes API. There have been a lot of discussions in the Kubernetes network special interest group around how to enable this, and some PoCs and things like that, but there's some resistance upstream to it, for some good reasons, and so the network plumbing working group was formed as a forum to talk about these things, prototype them, and figure out what we need to do to maybe get some pieces of this upstream; to solve these problems before pushing something upstream that may or may not get rejected, might need a lot of work, and so on. And it turned out there was a lot more work than we thought here. So the plumbing working group focused on creating a specification, based on CNI, that anybody could implement to provide multiple networks per pod. We did some PoCs, we refined them, we learned a lot from them, we developed a specification over a year or so and refined it, we did a first release mid to late last year, and we've also been working on a reference implementation of that, which is Multus; I'll talk about it in more depth quite soon.
If you're interested in helping out with this group, join the group; I have the link to the community right there, and that includes things like meeting times, the purpose, and some of the things being worked on, and in the slide deck there's also a link to meeting recordings on YouTube. All of our meetings are public, and all the meetings of the Kubernetes network special interest group are also public, so it's a very inclusive community; feel free to join, we'd like everybody's ideas and we'd love to have you. I'm going to skip that slide for time reasons.

So, the spec v1. Again, this is a short-term solution; there are other groups exploring much longer-term solutions, something like Network Service Mesh, if any of you are familiar with that, but the network plumbing working group focused on what we can do to enable some of the use cases we talked about sooner rather than later, without changing the Kubernetes API, because that's very hard to do for various reasons that I won't get into unless you really want to know. It's basically a lightweight standard that anybody could fairly easily implement. Beyond that, it does use CNI, but we found while developing the specification that there were other people who didn't necessarily want to use CNI plugins to implement it, so we worked with those people and tried to make sure the spec would work for plugins that don't necessarily use CNI. We also want to coordinate with the resource management working group; that's a working group focused on things like scarce hardware. If you have SR-IOV NICs on your nodes, that's something you only have a certain number of, and they only have a certain number of capabilities, so you need to figure out: if I oversubscribe this node with pods that require that capability, things will fail, and you want to stop that before it actually happens. So we're working with them to figure out how to best make use of resource management on the nodes and make sure we prevent these problems before they occur.

So, quickly going over the specification: it has a couple of parts. The first one is an annotation. In Kubernetes you usually define everything through YAML files, and so you add an annotation to the pod object when you create it that says: I want to attach this pod to network A, or in this case network foobar. When that happens and the pod is created, the node will actually go off and attach that pod to the cluster-wide default network, which is the normal Kubernetes network, but then also to foobar. Then the implementation, for example Multus, will take all the information about MAC address, IP address, and other characteristics and publish it back to the Kubernetes API. Currently the only thing in the kube API that gets reported about a pod is its IP address, and it turned out that wasn't sufficient for a lot of the use cases, so you can see here that there are a number of pieces of information that get published. Another thing you can see is that pods can have multiple IP addresses; that's something Kubernetes upstream itself is only really starting to deal with, and only because of IPv4 and IPv6 dual-stack. We tried to incorporate those kinds of things into the specification already so that it would be compatible with future versions of Kubernetes. So that's the second part.
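As a hedged sketch of those first two parts, a pod that wants an extra attachment might carry a selection annotation like the one below, and after setup the implementation publishes a status annotation back onto the pod. The network names, addresses, and MACs are made-up illustrative values, and the exact annotation keys have varied slightly across versions of the spec.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    # Part 1: ask for a secondary attachment in addition to the default network
    k8s.v1.cni.cncf.io/networks: foobar
    # Part 2: written back by the implementation (e.g. Multus) after the pod is
    # attached; shown here as it might appear once the pod is running
    k8s.v1.cni.cncf.io/networks-status: |
      [
        { "name": "cluster-default", "interface": "eth0",
          "ips": ["10.244.1.7"], "mac": "0a:58:0a:f4:01:07", "default": true },
        { "name": "foobar", "interface": "net1",
          "ips": ["192.168.12.34", "2001:db8::34"], "mac": "c2:11:22:33:44:55" }
      ]
spec:
  containers:
  - name: app
    image: nginx
```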
The third part is that the specification defines a custom resource definition, which we talked about earlier. The custom resource definition just says: this is what my network is, this is what needs to be created for the pods, here are the properties this network should use when a pod is connected to it; and you do all of that through the kube API. There are some additional components too. For example, you might not want every single pod in your cluster to be able to attach to a given network, so we need access control for these networks, and there are also admission controllers for validation. An admission controller is simply a component in Kubernetes that allows validation and access control before things get added to Kubernetes itself: it routes your request through the admission controller, and if the admission controller says yes, the object is allowed into the Kubernetes object store and ecosystem. There's also some upcoming work to help other implementations.

So let's talk about Multus. We call Multus a meta-plugin because essentially what it is is a shim between Kubernetes and a number of other network plugins, and it multiplexes them, which is where the name Multus comes from. It allows you to attach more than one network to any given pod in Kubernetes, and it understands the network plumbing working group specification, which allows you to do all of these kinds of things. Again, the problem, just to recap: each pod only has one network interface in normal Kubernetes, and that's not particularly dynamic, you only get one thing; we need a little more flexibility. So how does Multus provide that flexibility? Well, you define the CRDs that define all of your networks for the cluster; Multus reads those, figures out when your pod is created on a particular node which networks it needs to attach that pod to, looks up those network definitions, and actually makes it happen. I'll go into that in a little more detail. In this example you can see how Multus will attach to different networks at the same time; you get, say, macvlan, and the second network can be any CNI plugin, it doesn't really matter, so it's fairly open and fairly easy to specify what kinds of networks you want beyond the default one.

Key concepts of the specification. Okay, let's back up a second: Kubernetes requires a cluster-wide default network; it has certain guarantees, certain things it expects out of a network plugin and your pod network, and so the specification calls that the default cluster-wide network, and that provides the backwards compatibility between what Multus does with multiple networks and what Kubernetes expects. Multus always attaches the pod to the default cluster-wide network, and then all of the additional ones are secondary networks, meaning they're always additional: you always have the default one, and then you have zero or more of these secondary networks. The secondary networks don't have quite the same guarantees as the default cluster-wide network; for example, you don't have Kubernetes services on them, and you don't have any kind of network policy on them. We're going to work on adding that in the future and explore how to do it, but at the moment these secondary networks are very targeted, very focused. Custom resource definitions: basically what happens is you say, this is a description of my object, in our case these secondary networks. You tell Kubernetes what this particular object looks like, how to define it and validate it; you add that to the Kubernetes API, and then anybody can later create objects of that type using that description.
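Here is a hedged sketch of what one of those per-network objects might look like, following the NetworkAttachmentDefinition type the group defined; the macvlan master interface and the address range are illustrative values, not taken from the talk.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: foobar
spec:
  # The spec body is an ordinary CNI configuration, carried as a JSON string
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.12.0/24"
      }
    }
```

Creating this object once through the kube API is what lets every node's Multus instance discover the CNI configuration for the "foobar" network referenced in the pod annotation shown earlier.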
For the network plumbing working group specification, you can see the example of what the pod annotation looks like to select networks right here. This is an annotation defined by the group; it has a name for each network, so you can say, okay, I want to attach this pod to the control plane network, and I also want to attach it to the data network. These names, which are in the pod specification, map down here to the actual object that you have defined for that network, and that object has a couple of properties as well; it's basically the CNI configuration for that network, describing how you're actually going to attach pods to it.

This is a slightly more detailed picture of how this works. This is the object we've been talking about, the network attachment definition, and this is what you create the CRD for; the CRD tells Kubernetes how to interpret this particular object when it's added to the kube API. You add this once, and then every single node on the system is able to see the CNI configuration and create pods that attach to this network. So how do you start a pod with one of these additional interfaces? Pretty easy: you use an annotation, and you say, this is the network I want to attach to. The name maps back to that object you were just looking at, which describes that macvlan network, so that would be here. There are a couple of formats for this annotation: you can use the short format, which is a lot more user-friendly and just gives the name, but there is also another format that lets you describe things like what MAC address I want this interface to have, what IP address I want it to have, and what the network interface name inside the pod should be, so that it's not completely random and your application inside the pod can expect a certain interface name.

Then of course, after you attach this pod to a number of different networks, how do you even get those results back? The specification defines, and Multus implements, a way to publish this information back to the Kubernetes API so that you can inspect it from your other applications, your management tools, or anything else you want. Once you see the information here, you'll see the secondary network interface, and that's the macvlan one.

But now we have demo time, so we'll do a quick demo of how this works. You can see here I have a small Kubernetes, or small OpenShift, cluster with two nodes in it at the moment; there's a master right here and a second node right there, and this is just showing that Multus is running in that cluster and managing the networking. Yep, is that better? Good point. So the first thing we're going to do is create a pod, and we're just going to use nginx because it's small and really simple. So you create the nginx pod and it will take a second to create itself there. Sorry about the wait; I had actually pulled this image before, but apparently that's not the case anymore. Well, anyway, we will come back to that and hopefully it will be where it needs to be.
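Since the live demo didn't cooperate, here is a hedged sketch of what a demo pod like that nginx one could look like using the longer JSON annotation format just described, where you can also request a MAC address and an in-pod interface name. The network name matches the illustrative attachment definition above, and the MAC value is made up.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-multus-demo
  annotations:
    # Long-form selection: same meaning as plain "foobar", plus extra requests
    k8s.v1.cni.cncf.io/networks: |
      [
        {
          "name": "foobar",
          "mac": "c2:b0:57:49:47:f1",
          "interface": "net1"
        }
      ]
spec:
  containers:
  - name: nginx
    image: nginx
```

With the short form this would simply be `k8s.v1.cni.cncf.io/networks: foobar`; the JSON form only matters when you need to pin details like the MAC or the interface name.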
So, while we wait for that: what's next for the network plumbing working group? We have some specification updates; obviously not everything is perfect the first time around, so we found some changes we need to make and some errors in the specification. There were a couple of small problems we had to address. For example, what if you want to specify multiple static IPs? We had to add multiple static IPs, but with a network prefix, so if you want your static IP to be a /24 or a /16 or whatever; we found that wasn't in the specification, so we added it. We also found it wasn't possible with CNI due to some of the conventions CNI had, so we had to update CNI as well. So there's been a cross-pollination, I guess, between CNI and the plumbing working group; we work pretty well together, and some of us on the plumbing working group are also maintainers of CNI, so it's very easy to make these changes back and forth. Some of the other minor spec updates: adding some of the capabilities that Kubernetes allows and making sure they can be expressed in the specification. Kubernetes allows things like port mapping and bandwidth/QoS type stuff and pushes those through into the network plugin, so we needed to make sure the specification allowed passing those through to something like Multus and on to those sub-plugins.

Also, as I said before, these secondary networks are not really full citizens yet, and that can be a problem. For example, if you want to run a media streaming service that needs high performance, you might have a second network interface dedicated to media streaming, but you want a service on that network so that clients don't have to connect to a particular IP address; they can just use a domain name and Kubernetes figures out which pods it goes to. That's not currently possible, because the plumbing working group is trying not to change the Kubernetes API yet. So what we need to do is run some PoCs and do a little research to figure out what's possible there. One of the problems is: if you have a second network but you expose all of these things, like the pod's IP address and the service virtual IP, to the kube API, how does something reading the kube API and trying to talk to the pod know that it has to use a completely separate physical network to reach it? We have to solve those kinds of problems. We also have to look at network policy on these second interfaces, because again, network policy talks about things like: can pod A talk to pod B, or can anything in this project or namespace over here talk to the project or namespace over there, which, at least in OpenShift, is one of the ways we implement multi-tenancy. That's not so easy, because if the pod is on two networks at once, how do you know what can talk to what? How do you know these things over here are supposed to be able to access the network? What happens if you can't actually talk between the two networks at the physical level? So that's an area of research we're working on. And also dynamic interface attachments: right now Kubernetes expects that when you start a pod, it has your cluster-wide default network, and through Multus and the specification you get these additional networks, but you can't add and remove them on demand, because Kubernetes really does not expect that.
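As a hedged illustration of the multiple-static-IPs-with-prefix change mentioned at the start of this section, the long-form selection annotation might carry prefixed v4 and v6 addresses like this; the addresses and network name are made-up examples.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-ip-example
  annotations:
    # Request two static addresses, each with an explicit prefix length
    k8s.v1.cni.cncf.io/networks: |
      [
        {
          "name": "foobar",
          "ips": ["192.168.12.34/24", "2001:db8:abcd::34/64"]
        }
      ]
spec:
  containers:
  - name: app
    image: nginx
```

For this to work, the referenced network's IPAM plugin has to support static assignment (for example the static IPAM type in the CNI reference plugins), which is the CNI-side convention change the talk alludes to.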
Well, it turns out that some people want this; there's a lot of interest upstream in being able to change the pod definition after it has started and have those networks automatically attach and detach. And because we have this shim, for example Multus, in between Kubernetes and the pods themselves, the shim can sit there, watch the Kubernetes API, and decide: hey, I noticed that this network is now present in the pod specification, let's add it to the pod, or, for example, remove it. One particular use case that somebody is very interested in is dynamic routing: they're building an architecture where some of the routing logic is actually in the pods, but to do that you might need to add network interfaces to a pod and remove them, to dynamically update the system. So we're going to work on that; it's actually not that hard, because it's just doing the same operation at a different time. So that's coming up as well, and because not many people have done this type of thing before, we're going to try to figure out whether there are any implications for Kubernetes. There might be; we'll see. A lot of this stuff you really don't know until you try it, and not many people have tried this kind of thing before.

Then CNI itself. There are two parts to CNI: the first is the specification that anybody can implement, and there's also a set of reference plugins. So, next up for CNI: we're going to release a new version of the specification in the next couple of weeks, or maybe a month or so, and as part of that it adds things like CHECK support, which is network health checking. Previously Kubernetes has not really had a way to ask: is this pod's network actually healthy, does this network actually work? It has higher-level health checking where it queries the service inside the pod, so if it's a web server, it will query the web server and ask, is this web server still healthy? But there's no way to ask whether the network the pod is attached to is actually working. So that's something we added to CNI, and eventually we'll also add to Kubernetes itself the ability to call that functionality in CNI, so that when the network is unhealthy, Kubernetes will kill the pod and restart it somewhere else, or maybe on the same node, it doesn't matter. Finally, cached results in the helper library. CNI has a helper library that Kubernetes or any other runtime can use, and currently what happens is Kubernetes calls the ADD request for the pod, gets back the IP address, and throws everything else away. That doesn't work well in some cases, for example when you want to check the network health again, so we added support for caching the result of the pod network setup so that it can be used later; that gives you more information that Kubernetes could use later, because again, right now it only stores the IP address and that's not really sufficient. We also have some more reference plugins. There's a firewall plugin that works with iptables and also with firewalld; that helps in cases where you need to punch holes through the firewall to do certain things, and the firewall plugin allows that fairly easily. There's also a new source-based routing plugin that helps with some of the VRF (virtual routing and forwarding, I think) use cases; that was contributed upstream and recently merged.
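As a hedged sketch of how one of those reference plugins tends to get used, a node-level CNI configuration list can chain the firewall plugin after the main plugin so the attachment result flows through it; the network name, bridge settings, and backend choice here are illustrative, not from the talk.

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
    },
    {
      "type": "firewall",
      "backend": "iptables"
    }
  ]
}
```

Each plugin in the list receives the previous plugin's result, which is also what makes the new CHECK verb and the cached results useful: the runtime can replay the same chain later to ask whether the attachment is still healthy.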
Multus itself: what's next for Multus? Like I said before, we want to figure out how these secondary networks are actually going to work with services and network policy, so there are going to be some PoCs going on right now about that. We also want to enhance security. Currently the way access control works is: if this pod is part of a namespace, you restrict the network definition to that namespace too, and if you're not in the same namespace you can't actually add the network. That's not sufficient, so we're also going to investigate how to make it more fine-grained, so you can give specific users, perhaps, or specific cluster roles access to certain networks but not others. There are also going to be refinements in the network plumbing working group specification; like I said, there are small fixes, and we have to make sure the Multus reference implementation is updated for them. We also want a conformance test framework for the specification, because Multus is one implementation but there are others out there, and we want a conformance test so that you know a given plugin actually implements the specification, and implements it correctly. That also works for Multus: we've found in the past that Multus itself didn't correctly implement the specification, so it would be useful all around. And then we'll continue working with the device management group on things like SR-IOV; if your NIC only supports 32 virtual functions, well, don't start 33 pods that each require a virtual function inside the pod's network namespace, because clearly that's not going to work and unhappiness results. The other thing we might do is make Multus a library, because the functionality isn't anything earth-shatteringly complex, so if we make it a library it could potentially be integrated into some of the Kubernetes container runtimes, like CRI-O, and then you wouldn't necessarily need this shim, because CRI-O would understand by default that if it sees a pod spec with a couple of networks it should be attached to, it should just go off and do that. You'd basically fold the shim layer into the actual container runtime itself. It's totally possible; I'm not sure it's actually going to happen or not, we're just exploring and thinking that maybe that's a direction this could go. The last thing to mention is Network Service Mesh; that's an upcoming attempt at solving this problem in a much more generic way. If you're interested in Network Service Mesh I'm happy to talk more about it, but it's a little out of scope for this particular presentation.

So let's go back and check out the demo, if we can, and see if that got where it needed to be. Nope, it sure didn't. So unfortunately we will not have the quick demo, but if you're interested in any of these topics we would love to have your help and your input; I've put the link for the plumbing working group community right there. Like I said, we have meetings every other week and we welcome any kind of help or input that anybody has. So with that, and minus the demo, I'd like to open it up for questions.

Right, yes. Jerry asks: I said there were some problems with getting some of these features upstream into Kubernetes; what are those problems, and why, after two years or more, aren't these sorts of things upstream? I'm assuming most people in this room are familiar with networking in general, at least at a basic level. Well, it's very complex, and one of the problems is that everybody needs something different out of networking, and all of the ways to provide those needs are different.
Which interface type do you need to use? Do you need a particular vendor's hardware and software combination to get your cluster networking working? What methods are you actually going to use: do you need routing, is it layer 3 or is it layer 2, all these kinds of things. Kubernetes really does not want to be in the business of defining a certain set of capabilities that networks should have for your containers, so because of the complexity they kind of want to wash their hands of it and push the things you need to do off to custom resource definitions, like we've described here, and off to the network plugins themselves. That's why the CNI layer was added to Kubernetes a couple of years ago: to get Kubernetes out of the business of defining the properties of the network and how it's configured, and move it all into a simple "add this container to the network" and "remove this container from the network", so all of that is pushed down. Now, the problem is that when you encode those kinds of ideas about what a network is and how a network works into the Kubernetes API, it's an API: there are stability guarantees, there are backwards compatibility guarantees, and because it's so complex they just don't want to have to formalize that into the kube API. So we're left with: how do we make these things happen outside of Kubernetes, and maybe take some of the things we learn and bring them back into Kubernetes, but in a much more generic way? One of the thoughts there is: what if you describe the things your application needs out of Kubernetes? Does this application need a ton of bandwidth? What is its minimum bandwidth requirement, what is the minimum QoS guarantee, or what are the minimum isolation guarantees it needs? Should this network be able to talk to these other networks, or not? And then, based on those generic properties or requirements of the container, maybe the Kubernetes ecosystem would figure out which actual backing network to attach the pod to, without you having to say: this needs IP address 10.1.1.3 on this network with this particular MAC address on this particular card that can deliver 40 gigabits. So that's kind of how we're going to approach that problem, but yes, it's a very long road, and everybody needs something different from networking.

Well, yeah, they could; I don't think those things are mutually exclusive. You don't have to define services on the secondary networks if you don't want to; it's just that we're going to allow that possibility, because there are some use cases that want it. Sorry, the question was: the networks Multus attaches beyond the default network are secondary; when we talk about trying to make those secondary networks first-class citizens, are we also accounting for the fact that maybe some of these networks don't want to be part of Kubernetes in general, or use the constructs that Kubernetes gives to networks by default? So does that answer the question? Any other questions?

Yeah, so the first question was: how do we, as the network plumbing working group, the Multus community, CNI, and so on, interact with the Kubernetes network special interest group, what's the cooperation there, and what's maybe the timeline
for getting these kinds of improvements upstream into Kubernetes; the second question was, have we played with the hardware side of things, like SR-IOV and that kind of stuff. For the first question: I'm also a co-chair of the Kubernetes network SIG, so I'm in both meetings, and there are a lot of other people who are also in both, so there's a large degree of cross-pollination between the two groups just based on the members being similar in a lot of cases. Other than that, we regularly bring specific issues back to SIG Network so we can talk about them there and get feedback on those proposals and ideas; that's also a forum that a lot of us on the plumbing working group use to keep track of what's happening in Kubernetes in general. I'll give you one example: there's been a lot of discussion back and forth about how Kubernetes should deal with IPv6, and it was decided that yes, Kubernetes should deal with IPv6, because it's kind of important. Well, that means pods might need multiple IP addresses, because you'll have a v6 address and a v4 address, one of each. So there was input back and forth: as a plumbing working group representative, standing over here, you want to make sure that when Kubernetes makes changes to its API they don't adversely impact you; but sitting over here as a SIG Network representative, I want to make sure that what gets added to Kubernetes is generic enough that anybody can use it and it's worthwhile for everybody. So it's taking those two things and trying to make sure they both happen. It's a little bit challenging, but it actually works out fairly well, and in the IPv4/IPv6 case we've been able to chart that path and make sure it will be useful for the plumbing working group and others, but also useful in general for Kubernetes. So I guess that's one specific example of that cross-pollination.

Oh yeah, for the second question, SR-IOV: yes. For example, Kural from Intel is working pretty heavily on SR-IOV; they have a device plugin. Without going too far down the rat hole of device plugins: device plugins are basically built for managing hardware resources on a node, so things like SR-IOV and InfiniBand, things that aren't normal network interfaces and that have some finite resource. A device plugin is what knows how to configure that particular piece of hardware and how much of it exists, and that's what Kubernetes talks to to actually bring the hardware up at pod creation time. So the device plugin is something Kural and others at Intel are working on, and they have software that handles the SR-IOV parts and interacts with Multus so that you can do SR-IOV with Multus. To the next part of the question: that should actually be available already, and it works. There are some git repos at this link right here; not the community link, but if you take the community part off of the network plumbing working group URL, there's an SR-IOV device plugin and an SR-IOV admission controller, and they also publish images, because all of these things actually run as containers in the Kubernetes ecosystem, so you can use them today. It may only work with Intel cards at the moment, but there are others; I think Mellanox is also working on device plugins for their stuff, and Nvidia also has a device plugin for more of the InfiniBand side too.
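Here is a hedged sketch of how the SR-IOV pieces tend to fit together with the network plumbing working group objects: the attachment definition advertises which device-plugin resource it consumes, and the pod requests that resource alongside the network. The resource name, VLAN, and addresses are illustrative values, not details given in the talk.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net
  annotations:
    # Ties this network to a pool of virtual functions exposed by the
    # SR-IOV device plugin (illustrative resource name)
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_netdevice
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "sriov",
      "vlan": 100,
      "ipam": { "type": "host-local", "subnet": "10.56.217.0/24" }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: sriov-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        intel.com/sriov_netdevice: "1"
      limits:
        intel.com/sriov_netdevice: "1"
```

Because the virtual functions are requested as an extended resource, the scheduler won't place a 33rd pod on a node whose NIC only exposes 32 of them, which is exactly the oversubscription problem mentioned earlier.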
So, to answer the question: yes, in the end. Currently the endpoint list only includes IP addresses from the cluster-wide default network, so you do not have endpoints on the secondary networks yet, and the problem there is that you have to keep the Kubernetes API compatibility guarantees. Sorry, the question again was: when you have services and you have pods on multiple networks, how are the endpoints calculated, what does that endpoint list look like? Like I said, it does not include any of the IPs on secondary interfaces at this time, because something reading the Kubernetes API sees a list of endpoints and currently assumes it can reach every single one of them; but if some of those endpoints are on a separate, physically or logically separate, network, something reading the kube API might not necessarily be able to talk to them. Now, you could maybe get around this with proxies between the different networks, or some other kind of connection between them, but at the moment that's not possible, and we're not going to jump into that mud pit right now; it is something we're looking at how to solve. There are some ideas around: what if you have a fully connected cluster, where you have essentially two separate networks but every single node is connected to both of them? That's a use case that's a lot more easily solved than when some machines are hooked up to this physical network, some machines are hooked up to that physical network, and they can't actually talk to each other. So not quite yet, but hopefully over the next year or so that might happen.

Yes? Sorry, I missed the middle part of your question, could you repeat? Okay, so the question was not so much about interfaces but about some of the higher-level things, like perhaps a port range, that are currently hard to configure: are we thinking about making those configurable? And also about protocols: not just TCP and UDP, but other protocols. SCTP support was recently added to Kubernetes, which may or may not be interesting to you, so that's one example. We know there have also been requests, at least on the OpenShift side, for port ranges, so that's another example. Kubernetes does not make that easily available, but we certainly want to make it easier; we know those use cases exist. I don't think there's a plan specifically for it, but if this is something that interests you, that you need, I'd say get involved in SIG Network or the network plumbing working group, and let's figure out what that use case is and how to address it.

Okay, on the SCTP one: the question was, can you explain a little more about SCTP support in Kubernetes. Kubernetes only really cared about TCP and UDP as protocols in the API; the Kubernetes API objects were recently updated for SCTP support, but of course that requires implementing that support in the proxy layer and potentially in your network plugins as well. So just because Kubernetes allows it now as part of the API, there's a bit of lag before the network plugins and the proxy and so on actually support it. I believe kube-proxy does support SCTP now, but not every network plugin actually uses kube-proxy, so it's going to be a little while before some of those plugins support it, and some plugins might not ever support it.
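For context on that SCTP point, this is roughly what asking for it looks like at the API level; a minimal sketch, assuming a cluster and network plugin new enough to honor the protocol field (the service name and port values are made up).

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sctp-example
spec:
  selector:
    app: signaling
  ports:
  - name: sctp-port
    protocol: SCTP     # alongside the long-supported TCP and UDP values
    port: 3868
    targetPort: 3868
```

Around the Kubernetes 1.12 timeframe this also sat behind the SCTPSupport feature gate, which is part of the lag between the API allowing it and plugins actually implementing it.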
But if there are certain plugins you're interested in, you might need to contact that project or that vendor to ask them to add that support. I think it was the Kubernetes 1.12 release when it was added, so that was only around mid last year; it's still pretty recent. [Applause] Any other questions? All right, thank you very much, and let's see. [Applause]
Info
Channel: DevConf
Views: 1,219
Rating: 4.5555553 out of 5
Keywords: Red Hat, DevConf, Kubernetes, CNI, networking
Id: X6rcpy2g5Ew
Length: 46min 23sec (2783 seconds)
Published: Wed Feb 13 2019