Cilium - Container Security and Networking Using BPF and XDP - Thomas Graf, Covalent

Captions
What I want to talk about is Cilium. Andre, in the last session, talked about IPv6 and the networking side; I will focus on the network security side of this. So what is Cilium about? Cilium is about BPF. Who has heard about BPF, the Berkeley Packet Filter? All right, I see a couple of hands. For those who have not heard of it: I think most of you have used tcpdump, where you can monitor packets on interfaces, on the wire. When you specify a filter expression with tcpdump, what actually happens is that tcpdump will compile and generate a BPF program, load that into the kernel, and that program decides which packets to display when you run tcpdump. So most of you have been using BPF without even knowing it. BPF was invented many years ago, roughly 25 years ago, but it has been extended since, and it has become a revolution inside the Linux kernel.

It has been revolutionizing tracing and profiling. If you have heard Brendan Gregg talk about this, you have noticed and experienced that this is changing how we can do performance analysis. One example, and I'm just using one, shows how we can use BPF to generate histograms directly in the kernel. Instead of sampling everything to user space and then looking at the samples and deciding what to do with them, we can do that inside the kernel. Why was this even needed? Because the number of samples was too high to even export them to user space. This is why the tracing and profiling subsystem has moved to BPF.

But tracing and profiling is not the only subsystem or field that BPF is revolutionizing. Another one is networking, and that's the one we focus on most. Some of you may have seen Daniel Borkmann's presentation yesterday; he talked about BPF in general and about XDP, the eXpress Data Path. XDP is a framework which allows us to run BPF programs at the network driver level of Linux, so very close to the actual hardware, the NIC.

What I'm showing here is an experiment that we've done: we measured an XDP-based BPF DDoS mitigation filter against an ipset-based DDoS mitigation filter. ipset is an iptables extension which allows you to match on a set of IPs or ports. We connected two machines back to back with a 10-gigabit network card and loaded filters matching 16 million IP addresses. We would then use one machine to send as many 64-byte packets as possible, and the receiver would have to drop them as quickly as possible. Let's look at the numbers real quick; for all the details, Daniel has shared his presentation, and we also have a demo recording where you can watch how this happens. The sender is able to generate 11.6 million packets per second. Using ipset, we can only ever drop 7.1 million packets per second, so all the resources of the machine are not enough to drop all of these packets, whereas XDP can easily drop all of them. What happens if you load these 16 million rules while we generate traffic; how long does it even take to load all of the rules? With iptables/ipset this took over three minutes; with XDP we were down to 31 seconds.

That is not the most exciting part, though. What about the latency and throughput of a machine while it's under a DDoS attack? With ipset-based filtering, the latency goes up to 2.3 milliseconds and the throughput drops to 0.014 gigabit, which is basically nothing, while with XDP the latency stays extremely low and we can still use a good portion of our bandwidth. In terms of handling TCP requests per second, the last metric: with ipset-based filtering we can do a couple of hundred requests per second; with an XDP-based filter we can still handle thousands of requests per second. So even though the machine was under a DDoS attack, it remains reachable with low latency and can still handle workloads. This is how Linux in the future will be capable of protecting itself from DDoS attacks. This is one example.
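To make the XDP side concrete, here is a minimal sketch of what a drop filter of this shape can look like. It is not the code used in the benchmark; the map name, map size, and program name are illustrative assumptions written against modern libbpf conventions.

```c
/* Sketch of an XDP DDoS filter: drop packets whose IPv4 source address
 * is present in a BPF hash map, the same pattern as the benchmark above.
 * Illustrative only; not Cilium's or the benchmark's actual code. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __type(key, __u32);                   /* IPv4 source address */
    __type(value, __u8);                  /* presence flag */
    __uint(max_entries, 16 * 1024 * 1024); /* sized for the 16M-entry test */
} blacklist SEC(".maps");

SEC("xdp")
int xdp_ddos_drop(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;                  /* too short, let the stack decide */
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;                  /* only IPv4 is filtered here */

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    /* Copy the key to the stack and drop at driver level on a match. */
    __u32 saddr = ip->saddr;
    if (bpf_map_lookup_elem(&blacklist, &saddr))
        return XDP_DROP;

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

Because the program runs before the kernel allocates socket buffers, the drop costs only a hash lookup per packet, which is why it sustains line rate where ipset cannot.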
The second example I want to talk about: Facebook published numbers at NetDev this year and basically announced that they are switching their layer 3/layer 4 load balancers over from IPVS, which is a Linux load-balancing technology, to BPF and XDP. So we're talking about this piece here, between the ECMP hardware-based load balancers and the layer 7 load balancers: the L3/L4 load balancer. These are the numbers: the lower bar is the IPVS throughput and the upper bar is the XDP/BPF throughput, in packets per second, and there is almost a 10x improvement. This is amazing; anybody who has been in the networking field knows that 10x improvements don't come every day. Even though Facebook is not sharing the absolute numbers, they are sharing the performance delta between the two. This is not what I'm going to focus on; I wanted to give you an outlook into how BPF is changing the kernel and how we do networking, security, and profiling. If you want to know more about this specific use case, the networking maintainer David Miller gave a keynote talk this year, and there is a recording; I have included the link in this slide.

This is not just changing software networking, though. At Future:Net a couple of weeks ago, all of the smart-NIC vendors announced that they are going to support, or already are supporting, BPF as an offloading engine. So as we write BPF programs in software, smart NICs will in the future be able to offload and run them at even higher speeds; the DDoS mitigation filter I talked about is just one example.

All right, so what about security? There are multiple projects. I want to mention one, which is Landlock; it is changing how we can do sandboxing. It will be another low-level tool and framework with which, for example, something like a Docker runtime or a rkt runtime can containerize or sandbox applications. And obviously there is Cilium. So how does Cilium revolutionize security? I want to give you an example of what we focus on, what we figured is something that is currently unsolved and needs to be solved, and I will take you through the full thinking process that we went through.

Look at how applications have been developed and deployed. Many years ago we started with servers, and we would deploy maybe yearly. We would set up the server; it would be a mail server, a DNS server, a database. We would deploy yearly and apply security fixes. That time is long gone. We went on to virtualization, and we would deploy VMs.
Now we're entering this phase of microservices, or service-oriented architecture; we can call it whatever we want, but it's a world where application developers deploy multiple times a day. We've seen a lot of tooling improve, a lot of tooling that provides automation for infrastructure deployments: Terraform, Ansible, CFEngine, and so on. We see containers evolve, we see Kubernetes coming up. These are all tools which help us deliver and deploy applications quicker, with the eventual goal that we can disrupt other businesses because our application teams can evolve faster.

If you look at networking, though: we have seen a move from hardware appliances to servers in the VM era, but we have not seen much after that. If you look at current Kubernetes networking solutions, for example, even Kubernetes itself still maps to iptables. I worked on iptables myself for many years; my background is OS kernel development, and I've done that for 15 years. I know what iptables has been designed for: it was designed as a firewall for servers, so it filters on ports and IPs. I'm using iptables as the example here; we could use any flow-based virtual switch, it's all based on IPs and ports.

Why is that not enough? If you look at modern cloud-native applications, they typically use a protocol such as gRPC, REST, Kafka, and so on, and most of the communication between these containers or microservices runs over port 80, whether it's REST or gRPC. Which means that as a network engineer, as soon as you open up the port, you basically open up everything. All of a sudden, whoever can talk to a service can use all of its functionality. This is a problem, and I will talk you through a specific use case showing why.

In this example we'll look at Gordon. For those of you who don't know Gordon: Gordon is one of the mascots of Docker. Gordon is an intern and has a brilliant idea. He sees the company struggling to fulfill all of its hiring needs, and he's on Twitter all day, so he figures: why not write a microservice that will tweet out all of the job openings the company has? So he goes along and wants to create that microservice, and in order to do that he needs access to the data on all the job openings. What does he do? He accesses an API which has this information. This API has a couple of endpoints. All of you using Kubernetes: all of your services basically have this GET /health endpoint, which Kubernetes will call to figure out whether a pod is healthy. You can access GET /jobs to get the actual job postings. The database also stores the applicants that applied for each job, and you can create new jobs. This data might be backed by something like MongoDB or something else.

So far so good. Gordon writes his microservice, and for his purpose he needs access to the GET /jobs endpoint to retrieve the job openings. Obviously Gordon is a good citizen and a good software developer, so Gordon uses mutual TLS. Good thinking, Gordon: proper developer etiquette, super simple stuff. But does TLS buy us anything? TLS basically says that anything from this container to this container, from this app to this app, is encrypted, but it doesn't do anything at the API-call level: we can still make all of the API calls we want.
So let's dive into the networking level and try to secure this there. If you apply something like a Kubernetes network policy, it gets translated into an iptables rule like this, which says: this tweet-service container has this IP, so it can talk to this jobs-API container, and it can do that over TCP on port 80. This is how the rule will look; this is how your firewall will look. This allows the containers to talk, but at the same time it exposes all of the API endpoints. So if the intern for whatever reason introduces a little bug and the application misbehaves, the worst-case scenario is that it tweets out all of the applicants that applied for the job, which is definitely something we don't want. On the other hand, the intern could also use this API to create a job, if he wants to stay at the company. It is definitely not least-privilege security.

So what can we do about this? This is the problem we're solving. We went back to the drawing board. What we want is something very simple: I want containers to talk to each other, pods to talk to each other, but I want to expose the least amount of API surface possible; least-privilege security at the API-call level. In this example, we allow the tweet service to talk to the jobs-API service, but it can only make the GET /jobs API call; if it attempts the other API calls, we block them. So even if the intern screws up, he cannot leak data such as applicant data or create new jobs.
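For context, this is roughly what the layer 3/layer 4 situation described above looks like as a standard Kubernetes NetworkPolicy; the label names are made up for this example. Note that the vocabulary stops at pod selectors, ports, and protocols, which is exactly why every API endpoint behind port 80 ends up exposed:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: jobs-api-allow-tweet-service
spec:
  podSelector:
    matchLabels:
      app: jobs-api          # illustrative label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: tweet-service # illustrative label
    ports:
    - protocol: TCP
      port: 80               # opens ALL API calls behind this port
```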
Sounds neat, right? You want a demo, and this demo is open source; you can install it yourself. The demo I'm about to show uses Kubernetes. Who is using Kubernetes or planning to use Kubernetes? Right, about half the hands. Does somebody have no clue at all about Kubernetes? Awesome, then I don't need to explain Kubernetes from scratch; in a nutshell, in one sentence: Kubernetes allows you to run containers at scale on multiple nodes, it orchestrates all of this, and it takes away a lot of the management burden.

This demo has a theme, so let's look at that. Some of you may remember this intro: "A long time ago, in a container cluster far, far away... It is a period of civil war. The Empire has adopted microservices and continuous delivery. Despite this, rebel spaceships, striking from a hidden cluster, have won their first victory against the evil Galactic Empire. During the battle, rebel spies managed to steal the Swagger API specification of the Empire's ultimate weapon, the Death Star."

So this is the intro to our demo. What I have here is my laptop VM running minikube, which is an entire Kubernetes cluster fitted into one VM, so I have a full, single-node cluster. Right now I have nothing running: kubectl get pods, which lists the running containers, shows nothing. As a first step, I will deploy the Death Star. What is the Death Star? The Death Star is a Service, which is a load-balancing construct, not important here, and it has a Deployment, which is a way of describing "I want to deploy a container, a pod". What's important is that we use labels throughout this demo: the Death Star has the labels org=empire and class=deathstar, and further down you describe what type of container is running; here it is a Star Wars container image. So let's deploy that; this is how you deploy in Kubernetes. Cool, this is deployed; the Death Star is now getting constructed.

We now want to have spaceships land on the Death Star. Spaceships are containers as well. This is the definition of a spaceship: it's a container image, and that container has the labels org=empire and class=spaceship. Let's create that as well; we can list them and they should be coming up, they are still in ContainerCreating. While these are spinning up, we want to establish a policy: we want to allow spaceships to talk to the Death Star. How do we do that in Kubernetes? With a policy, which could look something like this. It simply says: this policy applies to all pods which have the two labels org=empire and class=deathstar, and you can talk to me if you have the label class=spaceship. There are no IP addresses; we do policy through labels. So I'm going to import that.

I don't have Wi-Fi here, let's see; the pods are not coming up. All right, let's start over, let's try again. This is what you get with bleeding-edge technology and a live demo. OK, this doesn't come up and I'm not sure why; maybe the Wi-Fi is very slow. What's happening: when you run a container, it checks with your container registry whether a newer image is available. This is typically why Docker-related demos fail on stage: no Wi-Fi. We've done this demo at DockerCon and there are video recordings, so worst case I will refer you to the recording. It doesn't look like it will be coming up. All right, sorry about that; let's go back to the slides.

So what is Cilium? Let me talk you through what the demo would have shown you: that we can import layer 3 and layer 4 policies to have containers or pods talk to each other, but that we also support importing layer 7 policies. Some of you may have come by our booth and seen how we use layer 7 policies to secure communication at the API-call level. How do we do this? As I said, Cilium is all about BPF. What does Cilium do? Cilium runs as an agent on all of your servers; in the Kubernetes case it is deployed as a DaemonSet, so it runs as a pod on all of your servers. It then generates BPF bytecode, BPF programs, and injects them into the kernel.

What can you do with BPF? BPF allows you to inject bytecode into the kernel and extend the kernel at runtime. While doing so, it goes through a verifier, so the kernel ensures that you cannot crash the kernel, that the program runs to completion, and so on. It's similar to a kernel module, except that you cannot crash the kernel, because everything goes through the verifier; it's the next generation of making the kernel extensible. After verification, the bytecode goes through a JIT, a just-in-time compiler, which translates the BPF bytecode into the instructions your CPU understands. So the BPF program in the end runs as x86 or ARM instructions, and there is no overhead in terms of performance. This is our datapath, our kernel side.
The upper side is how we integrate with the rest of the world. We have a CLI, which allows you to retrieve debugging information and so on. We have a policy repository; this could be your Kubernetes control plane, Kubernetes resources, or a key-value store. We have plugins: for Kubernetes, for Mesosphere, for Docker, for different container runtimes, and so on. This is how we interact and integrate with the rest of the world. And we have cilium monitor, the monitoring component, which can listen to events that happen on the datapath. For example, whenever Cilium drops a packet or a request because of policy, we generate an event through a framework called the perf ring buffer. This perf ring buffer comes out of the tracing and profiling revolution in the kernel, and it's a very fast data structure; we can expose millions of events per second through it. This is radically changing the visibility we can give into what's happening. Running tcpdump in production? Definitely not something you want to do. Running iptables with LOG rules is something you don't want to do either. But this has low overhead; it's something you can run where needed, and when you stop running it, the overhead is gone. So it's something you can use to start monitoring and gaining visibility into your production workloads.

A very nice property of this BPF code generation is that we can replace these programs at runtime without any disruption. We've done this a couple of times: we find a bug, we fix it, we deploy it, and not a single connection is lost. How does this work? We compile a new program, it gets verified, it gets JIT-compiled, and then the program gets replaced in a so-called atomic operation; none of the state is lost. This is really changing how we can do networking. It allows hot-fixing, and if something is not working, we can compile in debug instructions on the fly. I'm a kernel developer: how do you debug kernels? You add printk, printf-equivalent statements, you recompile, you reboot the machine, and you try to reproduce. With Cilium, we can compile in these debug statements without rebooting: while the problem is still occurring, we can debug it live, or even hotfix it live. This is a completely new way of doing kernel-level networking work.

I talked a little about our Kubernetes integration; we integrate with the standard resources, and I'm listing them here. NetworkPolicy was recently declared GA as part of the official Kubernetes resource API. With NetworkPolicy you can define layer 3 and layer 4 ingress policy: you can say this pod can talk to this pod, you can talk to me on port 80, and so on. Right now you cannot define egress policy, but this is being worked on and will most likely be included in the next release; you cannot define egress CIDR rules yet either, but this should also be included in the next couple of releases. As Andre mentioned in the last session, we also implement Services, meaning pod-to-pod services. Typically this is done with iptables, and there is a distinct disadvantage in doing it that way: for every service you define, kube-proxy injects about five iptables rules.
What iptables rules are is a sequential list of rules that every packet walks through, so as you scale up the number of services, it gets slower and slower. With BPF, this is a hash table: the cost is exactly the same whether it's one service or 50,000 services. Even our policy enforcement is a hash table; the numbers look the same whether we have one rule or 5,000 or 10,000 rules. We're redoing networking with the scope of microservices in mind, where we have hyperscale and you're eventually talking to hundreds of thousands of endpoints.

We recognize pods. Why do we even need that? We look at pods and read their labels, because that is how you define a policy; we saw this in the first two minutes of the demo. You don't define a policy based on IP addresses; you say any container with the label foo can talk to any container with the label bar. Whether you're running one container or 10,000 containers, from a policy perspective it doesn't matter.

We integrate with nodes. Why do we need nodes? We have what we call a zero-configuration networking mode, where instead of using an external key-value store we just use Kubernetes as the control plane. What does that mean? As Andre explained in the last session: how do we know which nodes host which pod CIDRs, and which IP addresses are in use on another host? We use the Kubernetes control plane for this; instead of inventing our own, we leverage Kubernetes.

And last but not least: NetworkPolicy does not allow you to do egress, and it doesn't allow you to do layer 7 yet. We are working on extending this and making it a core property of that policy. In the meantime, we offer a CustomResourceDefinition; this was previously called ThirdPartyResource. It's the Kubernetes way of allowing evolution and development ahead of standards: everybody can define these and use them, and what makes sense eventually gets into the standardized APIs. This is how you can use Kubernetes and layer 7 policy today.

What do we do for multi-node networking? This is the big question everybody asks: should I use an encapsulation protocol, or should I do direct routing? Cilium supports both. We have an overlay mode, which is the default, where you create a so-called overlay, a UDP encapsulation, between all the nodes. It's basically a tunnel: you hide the pod IPs from your underlying network. This is easy and works out of the box, but there is a performance penalty. It's very simple to set up: you run the kube-controller-manager with the allocate-node-cidrs option and Kubernetes automatically handles all of this; that's the only thing you have to provide to Cilium, and you have multi-node networking. Easy, but with overhead; the typical use case is a PoC, or when you don't care about the last percent of performance. The second mode, native routing mode, is where you run a routing daemon or use the cloud provider's routing functionality. In this case Cilium just hands the packet to the Linux routing layer: either the cloud provider knows what to do with it, or you're running a routing protocol and the routing protocol distributes all the routes. This is typically what you do post-PoC, when you know what you're doing and you're setting everything up for production; it's faster, and the network actually knows the pod IPs. I won't go into more detail here, but the bottom line is that you can run Cilium either way.

So how are policies actually defined? We saw this in the first part of the demo. This is an L3, label-based policy. There's lots of information here, but what really matters are these two parts: this part says the policy applies to all pods which have the labels deathstar and empire, and this part says all pods with the label class=spaceship can talk to them. This is how you do connectivity, layer 3, pod to pod. Very simple; a sketch of what that file can look like follows below.
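Reconstructed from the description above, the label-based L3 policy from the demo can look roughly like this. It is a sketch in the CiliumNetworkPolicy form the project documents; exact field names and the API version may differ across releases:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-spaceships
spec:
  endpointSelector:          # which pods the policy applies to
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:           # who may talk to them
    - matchLabels:
        class: spaceship
```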
What do you do if you want to limit access to external services? For example, you have a microservice which uses stripe.com's services, and you don't want that microservice to be able to reach the entire world; you want to limit it to what it needs, least privilege again. In this case we say: this policy applies to all pods with the labels spaceship and empire, and you can only talk to the external IP 8.8.8.8, Google's DNS server. So this policy would allow the pod to talk to the Google DNS server, but nothing else.

All right, L4 policy: same principle. You have this selector, which selects the pods the policy should apply to, and then you say: port 80, TCP only, so you cannot, for example, use a different port. Actually, this one is ingress, so it's incoming: you can only be talked to on TCP port 80.

And then layer 7. This is the part where the demo would have been awesome, because it was Star Wars themed. This is an extension of the layer 4 rule, and it says: you can do these two API calls only. You can talk on port 80, TCP, and you can only do a GET to /v1, or a PUT to /exhaust-port if the HTTP header X-Has-Force is set to true. Some of you may get the reference; if the demo had been running, there would definitely have been some Death Star construction going on. This is how you can define layer 7 policy.
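A sketch of that layer 7 rule, again in CiliumNetworkPolicy form with illustrative names; the two HTTP rules extend the L4 port-80 rule, and anything not matching them is blocked:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: deathstar-api-access
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        class: spaceship
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/v1"
        - method: PUT
          path: "/exhaust-port"
          headers:
          - "X-Has-Force: true"
```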
How are these policies enforced? We talked about BPF; are we using only BPF for this? For the layer 3/layer 4 part it's all BPF; the kernel can do this today. Layer 7 policies we will eventually be able to do in BPF as well; right now we use a sidecar proxy for this. What is a sidecar proxy? If you have two services talking to each other, you run a proxy as a sidecar next to each of them, and all communication goes first through the local proxy, then proxy to proxy, then proxy to service. This is also what is called a service mesh; some of you might have heard of Istio, Envoy, or linkerd, and some people have done this with nginx and HAProxy before. What this allows you to do is provide networking functionality at layer 7: HTTP load balancing, retrying another backend if a request fails, visibility into latency data, tracing information, and so on. Right now this service mesh world is focused not on security but on load balancing, routing, and so on, but you can use the same technology to enforce security, and this is what we use it for.

How does this look at the networking level? All traffic goes out of the socket and down the TCP stack; down here is the kernel, and down here an iptables rule or a BPF rule redirects everything back up to the proxy. The proxy does whatever it has to do, sends it out, it goes over the network, and it goes through a sidecar proxy on the other side as well. So from service to service you're traversing the TCP stack six times. That is six times the memory resources, and this is non-trivial: if you're running on, say, a public cloud provider's instances, you may have to bump the instance size just because the memory needs are bigger, so your bill increases. The latency goes up because you're going through TCP stacks multiple times, plus context switches, the switching back and forth between kernel and user space. And there's a ton of complexity: most obviously, one connection is now three TCP connections just to have two services talk to each other.

Can we do something about this? Can we turn a sidecar into a race car? This is where kproxy comes in; kproxy is a kernel proxy. kproxy brings some of this sidecar functionality into the kernel, at the socket layer, the layer applications use to talk to each other when they run TCP. This is where we look at the payload with BPF and make a decision, or add load-balancing functionality, or anything else. If you look at this picture, it's very simple: one or two TCP stack traversals, you go over the network, done.

Then the question comes up: what about SSL and TLS? What if my application does end-to-end encryption; how can we handle that? This was not possible previously, but recently kTLS, kernel TLS, was merged. kTLS allows the kernel to take over the symmetric encryption part, which means OpenSSL, the library, still does the handshake. That is where all the bugs in the code are; basically all the exploits we saw over the last 15 years were in the control part, the handshake. Once everything is negotiated, you pass the key down to the kernel, and the kernel does the actual encryption, the expensive part. You gain about three to four percent of performance simply through this, which is why some static content providers are interested in it. More importantly, it allows the kernel to see the cleartext payload even if the application is doing end-to-end encryption.
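For reference, this is roughly how an application or a proxy hands an established TLS session down to the kernel. It is a minimal sketch of the documented kTLS socket API, assuming the AES-GCM-128 key material has already been extracted from the user-space handshake:

```c
#include <linux/tls.h>
#include <netinet/tcp.h>
#include <string.h>
#include <sys/socket.h>

#ifndef SOL_TLS
#define SOL_TLS 282   /* kernel value; define if libc headers lack it */
#endif
#ifndef TCP_ULP
#define TCP_ULP 31    /* kernel value; define if libc headers lack it */
#endif

/* Enable kernel TLS on a connected TCP socket once the user-space
 * library (e.g. OpenSSL) has finished the handshake. */
int enable_ktls_tx(int sock, const unsigned char *key,
                   const unsigned char *iv, const unsigned char *salt,
                   const unsigned char *seq)
{
    /* Attach the "tls" upper-layer protocol to the socket. */
    if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
        return -1;

    struct tls12_crypto_info_aes_gcm_128 ci;
    memset(&ci, 0, sizeof(ci));
    ci.info.version = TLS_1_2_VERSION;
    ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
    memcpy(ci.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
    memcpy(ci.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
    memcpy(ci.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
    memcpy(ci.rec_seq, seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

    /* From here on, plain write()/sendfile() on this socket is
     * encrypted by the kernel on transmit. */
    return setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci));
}
```

The split is exactly what the talk describes: the risky control-plane handshake stays in user space, while the kernel handles the bulk symmetric crypto and can see the cleartext records.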
Going back to this picture: the worst-case scenario is that you're doing end-to-end encryption and the proxy actually needs to decrypt. It decrypts, looks at the header, makes a decision, re-encrypts, sends it over; the other side decrypts, looks at the data, re-encrypts. You're wasting a ton of resources; your AWS bill basically at least doubles.

There's a question. So, yes and no: the question was whether the sidecar proxy is just becoming a control plane here. The kernel will have limits in terms of the complexity it can handle, so what we're doing is saying: if we can handle it in the kernel, we handle it in the kernel, and otherwise we punt it to the sidecar proxy in user space. It's more like an offload in that sense. Exactly; the statement, repeated for the video recording, was that you can opt in to what to handle in the kernel, and everything else still goes to the user-space sidecar proxy. And we can do this on a per-request basis, not just per connection: if you have, say, a long-lived HTTP/2 connection, we can do this per request, so even if just one of the requests inside the connection cannot be handled in the kernel, we can punt that one out to the user-space proxy.

It gets even more exciting, because we're introducing something called socket redirect. Going back here, it allows us to jump from here to here, socket to socket, which means that if we do have to punt to the user-space proxy, we can skip this very expensive hairpin down through the TCP stack and back up; we basically just copy over. So even when we cannot handle something in the kernel, we still save a ton of cycles. And we can say: this part is safe, we can delay the encryption, move it from here over to here, so we don't have to decrypt and re-encrypt again. This is how we see the future of the service mesh enforcement data plane, and of layer 7 functionality in general. And yes, this is the socket redirect I just talked about: you basically get rid of this part below.

Just to give you an idea of the performance, and I'm leaking company information right here, this is coming directly from our internal Slack channel: John Fastabend is the developer currently working on this, and he did a performance measurement. The number below, in case you cannot see it, is 5.5 gigabytes per second: one application talking to another locally, through TCP over loopback. The number above is socket redirect with a filter applied: around 6.7, 6.6 gigabytes per second. We're actually faster with socket redirect, because we're not going through the TCP stack. So you gain policy, you gain layer 7 functionality, and you're faster than you were before. Before and after; it's pretty simple.

So, to summarize: Cilium uses BPF to do networking, load balancing, and network security. On layer 3 we can do label-based policy, as we saw, defining policy based on labels. We can do CIDR-based filtering, ingress and egress: you can say I want this legacy Oracle database to be able to connect to my microservice, or I only want this pod to be able to contact stripe.com's IPs. We can do layer 4 policy. We can do layer 7 policy; right now we can do HTTP, and we're currently doing the work to transition over to using Envoy as the sidecar proxy that enforces this. Envoy already supports gRPC, so with this transition we'll support HTTP and gRPC, and then we'll add more protocols. The most likely candidate is definitely Kafka. If you look at Kafka, the potential for securing it is obvious: you have microservices sharing a message bus, and Kafka has a concept of topics, where the same message bus is used for different topics. It's obvious that you want a policy saying this microservice only needs access to this topic, so it is only granted access to this topic; you cannot steal data, you cannot steal messages. You can imagine a policy which says: this component is a producer, it only ever writes to the Kafka message bus; another component is a consumer, so it can only take things off the message bus. These are obvious reasons why you would want to secure at the Kafka level.
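Kafka rules were still future work at the time of this talk, so purely as an illustration of the idea, a topic-scoped policy of the kind described could look like this; the names, port, and fields are assumptions written in the same style as the HTTP rules above:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: job-announcer-producer
spec:
  endpointSelector:
    matchLabels:
      app: announcer          # hypothetical producer service
  egress:
  - toPorts:
    - ports:
      - port: "9092"          # default Kafka broker port
        protocol: TCP
      rules:
        kafka:
        - role: produce       # may only write, never consume
          topic: announcements
```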
We saw the load-balancing bit and the performance numbers; what we observed in our DDoS mitigation use case, which is very similar to being a load balancer, and in the Facebook use case, is that we can compete with user-space networking solutions, all in-kernel and all well integrated.

I didn't talk a lot about the dependencies, because there aren't many. The only dependency we have is an external key-value store; in the age of cloud-native microservices, this is how you share state between components. A key-value store is basically a database where you store keys with a value associated with each. You can use etcd or Consul; those are the two options we support. Yes, the question is: if I'm running Kubernetes, do I still need this? Yes, but you can use the Kubernetes etcd key-value store if you want, even though I would not recommend that beyond a certain scale; etcd has a certain scale limitation.

We talked about Kubernetes: we come as a CNI plugin, but we also have a libnetwork integration, so if you're running Docker swarm, you can also run Cilium. Mesos recently added CNI as well, so we support the Mesosphere ecosystem too. If you got intrigued and want to try this out, we have a getting-started guide, a tutorial using a Vagrant box: go to cilium.io/try and try this out, including layer 7, on top of Kubernetes, Mesos, or Docker. It's an open source project; we are on GitHub, feel free to star us, and we have a Twitter handle where we share news, feel free to follow. I think I saw a one-minute sign, but the coffee break is next, so I think we can use some time for questions. All right, here we go.

[Music]

So the question is: what is the relationship between Cilium and Envoy? Envoy will be our primary sidecar proxy, which means it will be the default for handling layer 7 policies. If we can do something in the kernel, for example Kafka, which is a very simple protocol, we will definitely do that in the kernel, which means lower cost, higher speed, lower latency. We already have multiple people working on Envoy to prepare it for this role as our primary sidecar proxy.

The next question is: can you use only the layer 7 bits? The thing about security is that you want to make it very hard, ideally impossible, to bypass. The way we do this is by taking over networking, because that way we can guarantee that we see everything. Right now there is no way to do layer 7 alone, but you're not the first to ask, and we're currently investigating it. In that scenario Cilium would not be installed as a CNI plugin anymore but would run on top of another CNI plugin. You can already say, for example, "I want to use Calico's routing daemon, but with Cilium", or "I'm using flannel and running Cilium on top"; that would also be possible, but right now it's not generically decoupled yet.

More questions? Yep. The question is: can I integrate this with nginx or Apache? What do you mean by directly? All right, so the question is: could I use the nginx configuration interface to configure this? Right now you can't. Right now we have an API that you can use to configure this, or you can use a Kubernetes resource file, as we saw in the example; those are the current ways. I'm happy to explore how that could look; I haven't looked into it yet.

All right, we do have t-shirts, feel free to stop by.
And please star us on GitHub. All right, thank you, thank you very much. [Applause]
Info
Channel: The Linux Foundation
Views: 4,683
Rating: 5 out of 5
Keywords: system containers, Open API initiative, techology, kubernetes, openstack, cloud open, open source community, containers, nfv, embedded systems, IBM, cloud, technologists, containercon, linux, Intel, sdn, CPU performance scaling, quantum computing, open community conference, open source, cloud computing, red hat, API, open source summit, Google, devops, containerization, apache spark, linuxcon, joseph gordon-levitt, decoding, Cisco DevNet
Id: CcGtDMm1SJA
Length: 43min 30sec (2610 seconds)
Published: Fri Sep 15 2017