Network security for apps on OpenShift

Thanks for coming. This is a session on network security for apps running on OpenShift Container Platform. Quick introductions before we get started: I'm Veer Muchandi, a principal architect working for Red Hat; I work on the OpenShift Tiger Team, and Shanna works in the same team. "My name is Shanna Chan, I'm a solution architect — so he just explained what I do." We both help our customers in North America adopt OpenShift Container Platform, so we have worked with most of the OpenShift customers who have adopted it so far. We get them started, we do education sessions, proofs of concept, deep-dive sessions on specific topics, and so on. One of those topics is this one.

This session is a little advanced — it's not an introductory thing — so there is an expectation that you know at least the Kubernetes concepts like pods, services, and all that. Quick question: how many of you are using OpenShift? Perfect, that's almost everyone.

All right, let's get started. Here's the agenda for today. This is going to be a packed session in terms of the amount of information you'll gather, so be attentive, because I will be going pretty fast, unlike what I generally do. First we'll go through an overview of OpenShift SDN, the software-defined network that comes with OpenShift, and after that we'll get into typical scenarios. Some of these scenarios might apply to you; they are derived from the kind of questions Shanna and I get asked on a day-to-day basis. For each scenario we'll talk about the questions we get asked and what solutions we have in OpenShift, or how you design around it on OpenShift. That's the topic of discussion.

First, OpenShift SDN. All you OpenShift users know Kubernetes CNI, the Container Networking Interface, right? That's the standard using which we plug in OpenShift networking.
The thing you see in green here, the OpenShift plug-in — that is OpenShift SDN. It's based on a technology called Open vSwitch. Like everything Red Hat does, Open vSwitch is an open source project; we have built OpenShift SDN on it, it is fully tested, and it is supplied with the platform. When you install OpenShift, the SDN is included; you don't have to do anything extra. However, in case you are interested in another networking technology, we have a bunch of partners — the things you see here like Flannel, Nuage, Calico, Contrail, Contiv — these are all validated plugins, so you can remove OpenShift SDN and use one of the other plugins if you want to. And there are other plugins under development, such as OpenDaylight. CNI is the standard which allows you to change these networking components as you desire. There is no specific favorite; we supply OpenShift SDN so you can use it by default, and the rest of this conversation is going to focus on it.

Now, OpenShift SDN comes in three different flavors. The SDN itself creates an overlay network — a VXLAN network on top of your physical network — and when you set up OpenShift, or even later, you can switch among these flavors. The first one is the ovs-subnet configuration: with ovs-subnet you get a flat network across all the projects, so if you are running pods, any pod can talk to any other pod on the cluster. That's ovs-subnet, the default we started off with. Later we introduced a second type of plug-in, ovs-multitenant. What does multitenant allow you to do? It allows you to create one network per project — an OpenShift project and a Kubernetes namespace are the same thing — so one network per namespace.
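For reference, in OpenShift 3.x the SDN flavor is selected in the master (and node) configuration; a minimal sketch, where the CIDR values are only illustrative defaults, not something from this talk:

```yaml
# master-config.yaml excerpt (OpenShift 3.x) -- illustrative values
networkConfig:
  # one of: redhat/openshift-ovs-subnet,
  #         redhat/openshift-ovs-multitenant,
  #         redhat/openshift-ovs-networkpolicy
  networkPluginName: redhat/openshift-ovs-multitenant
  clusterNetworkCIDR: 10.128.0.0/14   # overlay address space for pods
  hostSubnetLength: 9                 # size (in bits) of each node's subnet
  serviceNetworkCIDR: 172.30.0.0/16   # cluster-internal service IPs
```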
The way it does that is: every namespace gets a virtual network ID (VNID), and all the pods within that project get the same VNID. Pods with the same VNID can talk to each other; pods with different VNIDs cannot. That's how multitenant works. The third type, introduced about six months back, is ovs-networkpolicy. This gives us finer-grained control; we'll talk more about it a little later.

Let's understand how OpenShift SDN works. Since most of you are using OpenShift, you know there is an etcd, the data store where the entire configuration of your cluster is stored. Part of that etcd is a node registry, and the node registry is where the network-related configuration gets stored. As you add a new node to your cluster, the master allocates a subnet from the range of IP addresses you supplied: whenever a new node gets added, it takes a subnet from that range and allocates it to the node. If you remove a node from the cluster, the entry is removed from the node registry. The node registry is where the list of subnets allocated to each node is saved.

Now, what happens on the node itself? When you add a new node, OpenShift SDN sets up a bridge network. This OVS bridge, br0, has three different ports. br0 gets configured to talk to any pod on its eth0 interface. On the other hand, two other ports get opened: one is vxlan0 — any traffic within an OpenShift cluster where a pod is talking to another pod on a different node flows through vxlan0 — and if you are talking to anything outside OpenShift, that traffic flows through tun0.

Let's see this in a little more detail. OpenFlow rules are added to keep track of all this. Whenever you add new nodes to your cluster, every node in the cluster
learns about its new peer — we have a new member, so the rules are updated. If a node gets deleted, every node in the cluster knows about that too. The OpenFlow rules are set up in such a way that "if I see traffic for an IP address on that subnet, I need to redirect it to that particular node" — that is how the VXLAN rules are set up.

Now, whenever a new pod gets created, it is assigned one of the IP addresses from the subnet given to the node. The node has a range of IP addresses; one address from that range is allocated to the pod, and the pod's eth0 is mapped to the br0 network. OpenFlow rules are created here to decide what to do with incoming traffic. In case we are running in ovs-multitenant mode, say, the OVS OpenFlow rules are created such that traffic carrying the same VNID is allowed and everything else is not. Those configurations are taken care of for you.

Next, how does traffic flow? Let's say we have pod A that wants to talk to pod B. When the request goes out, it leaves pod A's eth0 and reaches the OVS bridge. If these two pods are on the same node, the traffic arrives at the bridge addressed to pod B's IP address; the bridge knows that this address is within its own subnet — it's on the same node — so based on the OpenFlow rules it redirects the traffic immediately to the other pod on the same node.

What happens if pod A wants to talk to pod B, but pod B is on a different node? The traffic again goes out to br0. Based on what I just said, every node knows about every other node in the cluster, so since the traffic is addressed to a different subnet — you see it here — the bridge knows, based on the OpenFlow rules, that it is destined for a different node, and it sends the traffic out
via vxlan0. The traffic goes over the overlay network and reaches the other node; there it goes again through the bridge network, where it finds the pod and is delivered. Very simple, right?

Next, what happens if your pod is trying to talk to an external system? The flow is the same: at the bridge, the node knows this IP address is not on the OpenShift cluster, so it has to be sent outside. It redirects the traffic to tun0, from where it gets NATed, goes out the physical eth0, and leaves the node. That is how OpenShift SDN works — a brief introduction.

Now let's talk about different networking scenarios and the solutions we have. The first one: how do I restrict traffic across tiers? Let's understand this a bit more. The traditional applications we have been building so far are divided into different tiers: a presentation tier, an application tier, a database tier. The presentation tier typically calls the application tier, and the application tier calls the data tier; this is all well known. And certain connections are not allowed — your presentation tier should not be talking to the database directly. So how do I take this kind of application and enforce these kinds of restrictions on an OpenShift cluster? That's where network policy helps. Remember the third kind of plugin we talked about, ovs-networkpolicy? That's the one you want if you have this kind of scenario. What it allows you to do goes over and above what we just discussed: it also allows you to do micro-segmentation. You can control the traffic going into pods at the individual pod level, not just at the tier level. You can say, "I will allow traffic to this pod only from that other pod, and only if it comes on specific ports." You get micro-segmentation at a very granular level.
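As a sketch of what such a tier restriction can look like — the labels and port below are hypothetical, not taken from the talk — a NetworkPolicy on the database pods admitting only the application tier:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: db-accepts-app-tier-only
spec:
  podSelector:
    matchLabels:
      tier: database          # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: application   # only the application tier may connect
    ports:
    - protocol: TCP
      port: 3306              # and only on the database port
```

With this in place, presentation-tier pods have no path to the database even though they share the cluster network.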
Let's look at an example of how this is done; when you see the demo next, you'll understand it better. We have a sample system like this — an application that might be similar to what you have, in a simplified form: an application with multiple services behind the scenes, and the services live in different namespaces (projects and namespaces, as Veer mentioned, are the same thing). By default, pods can communicate freely: my PHP pod can talk to my Node.js pod, PHP can talk across namespaces, and so on. But a lot of the time we want to restrict how traffic flows through the application; we might not want everyone talking to each other, or we want some kind of security for the back-end services.

Before we do that, let's check that my application is working. The service is very simple: I have a PHP application that calls the back end to do registration and save the information; it then sends an email to the newly registered user, who can log in and get their user info; and then we send another API request to a third service, fetch a poem, and show it on the screen. So, my application is working. But now, I could have a PHP file that hits my back end directly and gets any information I want — basically I just need the user and the service name and I can hit the page, as easy as that. This is not what I want; I don't want someone to randomly come in, hack into my system, and look around.

So we have NetworkPolicy objects to help us restrict that traffic. Here I have ingress policies that look at the labels on my pods, and I am saying that only pods carrying a certain label — here, the email service — may access
the pod that has the label mysql. So let's take a look at how we do this. I have already labeled all my services the way I want, in the deployments, and I have a policy that applies to my namespace — the one you saw on the slide — describing the ingress, the port, and where it goes. Once I apply the policy to my namespace, I am no longer able to hit my back end, because the network policy is restricting my access: the label on the pod where my PHP runs doesn't match, and therefore it is blocked.

But let's look at how you would do this for your entire application. Your application will normally have multiple namespaces, with multiple services and pods underneath. You want to deny all access across all namespaces first — isolate your namespaces, so there is no access whatsoever to start with — and then create policies specifically for the incoming traffic you want to allow. So let's take a look at how we do it. (Oh my, nobody wants me handling this... all right, thank you, I'll stay away from this thing.) I have a script that applies these policies across the different namespaces. (And right now I didn't touch it... oh no. I set up the system and purposely made a recording, thinking "great, nothing will happen" — and sure enough, something did.)

So I have these policies that look at the ingress traffic: I am not allowing anything in, and nothing will be redirected. If I go to my application right now, I shouldn't — there, you see — I shouldn't see anything, because I have isolated my entire application. I am not going to go through every single policy; the idea is that we create policies whose labels match what we describe in the policy spec.
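The "deny everything first, then open only what you need" approach just described can be sketched like this — the label values are hypothetical stand-ins for the demo's:

```yaml
# 1. Isolate the namespace: an empty podSelector matches every pod,
#    and with no ingress rules, nothing is allowed in.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
spec:
  podSelector: {}
---
# 2. Re-open only what is needed: port 8080 into the front-end pods.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-8080-to-frontend
spec:
  podSelector:
    matchLabels:
      app: ecouser            # hypothetical pod label
  ingress:
  - ports:
    - protocol: TCP
      port: 8080
---
# 3. Cross-namespace traffic: admit callers from a labeled project.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-frontend-project
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: frontend   # hypothetical label set on the namespace
    ports:
    - protocol: TCP
      port: 8080
```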
For example, for PHP I have a policy that allows port 8080 into any pod that has a label like app=ecouser. Once I apply that, I am able to hit my login page again. I have policies like this describing all the pod-to-pod communication within the namespace. And if you have services that need to be reached from outside their own namespace, you do something like this: you put a label on the project, so that traffic coming from that project — a project carrying that label — is allowed into port 8080 of the service you want to expose. Then obviously I apply all the policies across the different namespaces, because it's all organized around namespaces, and then you are able to hit the application again. Back to you.

All right, so to summarize what we have seen in the example: you can apply policies to restrict the traffic from a pod to a pod within a project, and from a pod to another pod in a different project. So you can restrict the traffic within a project or from outside the project. Network policy is a superset of what multitenant provides, plus more granular control on top. With this you can implement the dream of micro-segmentation on an OpenShift cluster. So that's one problem solved.

The next scenario: every enterprise has multiple network security zones. For example, you have a DMZ, which is the only touch point for any traffic coming in from the internet, from outside your enterprise; that's where things like reverse proxies, load balancers, and web servers run. Then there is a firewall, and behind it your application zone, which is firewall-protected. This is just an example; you might have more zones than this.
And then there will be a data zone where your data is hosted, again firewall-protected — so there is a double layer of firewalls. Holes are punched in these firewalls to allow traffic only from the DMZ to the app zone, and only from the app zone to the data zone. Now, if this is my setup, how do I run an OpenShift cluster here? Will the OpenShift nodes reside in one of those zones, or across all the zones? What do I do?

You have a couple of options. One thing you can do, if this is what your security policy mandates, is to have a separate cluster per zone — a completely isolated OpenShift cluster in each zone. Again, this is not a recommendation; it is just what is possible: if your security policy doesn't allow an OpenShift cluster that spans all three zones, this is what you would do. But now you have the headache of managing three clusters, and it's not even as simple as that: in the demilitarized zone you have a master and all those URLs you expose for the console, log aggregation, metrics, and so on, and you don't want those seen by the outside world, so you still have to protect them. So while this is possible, the maintenance cost is high.

What are the other options? Think about this: we can set up a single OpenShift cluster that spans all the zones. Using the network policies we already looked at, you can do micro-segmentation: the applications that are supposed to run in the DMZ can run on this cluster, as can those for the other zones, and if you want to separate them out, you use network policies on the same OpenShift cluster. That solves the problem to some extent. What about ingress and egress? That's where you set up infrastructure nodes, and these
infrastructure nodes run in the respective zones; on those infrastructure nodes you run the ingress and egress. That's how you can set up a single OpenShift cluster that works across multiple network zones, and this is the recommended solution.

One thing we should remember: once the cluster is set up and your nodes are spread across all three zones, as long as you have opened port 4789 you have this overlay network — and the traffic between pods has nothing to do with your physical firewall. Because you opened port 4789, all pod-to-pod traffic flows over the overlay network, so your physical firewall has no impact on it. The only way to do firewalling there is with network policies. Okay, let's move on.

I mentioned ingress and egress; so how do I secure egress? (We'll get to ingress after that.) First, let's understand how egress happens. Suppose I have an external system running outside OpenShift — maybe a database, or some other external service — that an app running as a container on OpenShift wants to call. What are the possible options? One: the pod can make the call directly, but then you are embedding the URL or access point of the external service into your application itself — you are essentially hard-coding it. The better way is to create something called an external service. What is an external service? You know how a service works — I was assuming you do: a service has a selection mechanism based on labels that selects the pods it is fronting. If you have a bunch of pods, you create a service; that service has a section called selectors, and the selectors section decides which pods the service fronts, based on the
labels you have put in there. An external service is a service where that selector is empty, so when the service gets created, it doesn't know which pods it is fronting. How do you tell it that it points to an external system outside OpenShift? You manually create an endpoint behind the service. Usually the endpoints of a service are pods; here, instead, you create your own endpoint, and that endpoint points to the destination IP address outside OpenShift. Now you may have multiple applications trying to reach this external system; all they have to do is call this external service, and it redirects them to the external system. The advantage of this approach: if for some reason the destination IP changes, this endpoint is the only place you have to change it. You can also do this with a fully qualified domain name — it doesn't have to be an IP address; create an external service whose endpoint is an FQDN.

In both cases, when this traffic goes out of the cluster, where does your external system see it coming from? That depends on the node the pods are running on. If you have a cluster with ten nodes and the pod making the call is on node number five, the traffic is seen as coming from the IP address of that node; if it comes from node number four, it shows up as node four. Now, what if you have a firewall there, restricting traffic to specific source IP addresses? You don't want to open up the firewall for all the nodes in your OpenShift cluster — you might have a hundred nodes. So how do I set up my firewall to allow traffic only from specific IP addresses? If that's your use case, we use a mechanism called the egress router.
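Before moving on: the external service pattern just described — an empty selector plus a hand-made Endpoints object, or alternatively an ExternalName service for an FQDN — might look like this (names and addresses are examples only):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: external-db
spec:                 # no selector: OpenShift will not manage endpoints
  ports:
  - port: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-db   # must match the service name
subsets:
- addresses:
  - ip: 192.0.2.50    # the external system's address (example)
  ports:
  - port: 3306
---
# FQDN variant: no endpoints needed, DNS does the work.
kind: Service
apiVersion: v1
metadata:
  name: external-db-fqdn
spec:
  type: ExternalName
  externalName: db.example.com
```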
What does the egress router do? The egress router is a special pod — and as with every pod, there is a service in front of it: the egress pod is fronted by an egress service. Just like the external service, this egress service is what your applications call. Your apps can be running as pods anywhere on the OpenShift cluster; they make a call to the egress service, the egress service redirects the request to the egress pod, and the egress pod sends the request out. But it does a little bit of magic here: when you bring up the egress pod, you set two configurations on it. One is a source IP; the other is a destination. The destination is your external system's IP address; the source IP is what you have opened up on your firewall. Your firewall is allowing traffic only from a specific IP address, and that address is what is defined in the egress pod. What the egress pod does is, irrespective of where the traffic is coming from, when the traffic goes out it rewrites the source to that source IP — so your firewall sees the traffic coming from that particular source IP. That's the advantage of the egress pod.

But if I have many different services, do I have to create that many egress pods? What are the other solutions? In 3.9 we recently introduced another option where you can assign an egress IP at the project level. If you are running a number of services within a project, you don't need to create an egress pod anymore: for all those services, when their traffic leaves the project it gets the same IP address, irrespective of where the pods live. Your pods can be scheduled on any node in the cluster; regardless of where they are, since you configured the IP for the project, the traffic goes out with that
particular IP address. So if you watch the traffic, you will see the IP that is configured on the project. The way it works is that you add an additional IP address on one of the nodes in the cluster; when you configure the project, if that IP address is assigned to any node in the cluster, that node is used to NAT the traffic out. Say the pod resides on some other node: from there the traffic travels to the node which holds that IP address, and from there it gets NATed out.

Good — we have a demo for this one as well; let's look at it. (Just wink at me, I'm watching this.) I have a small system: one master and one node. For this setup, I feel this is much easier to implement than the egress router, serving the same use case and purpose. I have a project called egress-project with a simple application running in it, scaled up to two replicas, one pod on each node. This application just makes a request to the external system shown in the corner, which displays the source IP the traffic is coming from. Right now node 1 has the IP ending in .55 and the master has .54, so if I make a request from these pods, the external system sees the source IP of whichever node the pod happens to be running on — by default your pods can run anywhere, and you can see at the bottom where each pod is actually running, so one request shows up with that node's IP and the other shows .54 instead. But this is not what I want: I want the external system to see the traffic coming from one single IP address, so that it can set a firewall rule for that one IP. That's the purpose of it — and Veer explained it, so I'm going to fast-forward a little bit, and I pick
one of the nodes — thank you, you guys are listening, very good. I have picked node 1, with the IP address ending in .55, plus another floating IP address that I attached to the same interface, in the same subnet. That will be my static egress IP address. If you look at the HostSubnet for the nodes, there is no egress IP yet; so the address I picked, ending in .34, I am going to add to node 1's egress IPs — telling OpenShift to use node 1 as the place hosting my egress IP. Then, remember the project name is egress-project: my namespace needs to know what its egress IP address is, so I do a similar `oc patch` and add the IP address to the egress-project namespace. Just like that — easy — and then you're done. You can see the egress-project now has the IP address I added; in this version it's only one IP per project.

Now I make the same call to my external system, and I expect the IP address from both pods going into the external system to be the same. Sure enough, the .34 shows up first, and then the second one — my eyesight is not so good here — you can see both of them show that IP. Basically, when a pod is running on a node that is not hosting the egress IP, what happens behind the scenes is that its traffic gets routed over the VXLAN overlay network to the node hosting the egress IP, and from there it is NATed out to the external system — so the external system sees the IP address it should come from. So thank you — that was using a static egress IP at the project level.
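The two patches narrated in the demo amount to something like the following — the node name, project name, and address are placeholders standing in for the demo's values:

```shell
# Host the egress IP on node1: add it to that node's HostSubnet.
oc patch hostsubnet node1 --type=merge \
  -p '{"egressIPs": ["10.0.0.34"]}'

# Tell the project's NetNamespace to use that IP for outbound traffic.
oc patch netnamespace egress-project --type=merge \
  -p '{"egressIPs": ["10.0.0.34"]}'
```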
Next: can I restrict the traffic going out of my project? You can — there is something called an egress firewall, to limit where your traffic can go. You can allow pods to talk to hosts that are within your network but not to the internet, for example; or set policies like "my pods can talk to the public internet but not to my external systems"; or control this at the subnet level, so pods can talk to subnet A but not subnet B, or to anything not in those subnets. You set up these specific rules at the egress firewall level to limit the traffic going outside.

Now let's come to ingress. If you have used OpenShift, you all know about the OpenShift router. How does it work? Very quickly: you set up the OpenShift router on one of the nodes — typically an infrastructure node. The OpenShift router is an HAProxy load balancer; it runs as a privileged pod and listens for any traffic coming in on port 80 or 443 on the host it runs on. As long as your DNS is set up so that the URL that should resolve to your application gets to that box on port 80 or 443, the traffic is picked up by the router, and the router redirects it to the respective pods, which could be anywhere on your cluster, over the overlay network — we saw how the overlay network works before; it uses the same mechanism.

Now, can I restrict access to a route? I know I can get traffic into my cluster on ports 80 and 443, but how can I restrict who gets in? That is where IP whitelisting comes into play: in the route configuration you can set up IP whitelisting and say, "I will allow traffic to this URL only if it comes from these specific IP addresses." It works on a route-by-route basis; when you create a route, you can set up this whitelisting.

We have been talking about ports 80 and 443, which cover HTTP, HTTPS, and WebSockets kinds of traffic. But what if I want to run some applications that are not using ports 80 and 443, and I don't
want to expose those applications on 80 or 443 — I want to use some other ports? In that case you have multiple options; I'll discuss them one after another. The first is the NodePort approach: when you create a service, you define its type as NodePort. When you do that, OpenShift allocates a port in the range 30000 to 32767 on every node in the cluster — one port number is reserved across all the nodes. When you make a call from outside, you can call any node's IP address; a call on that port number gets redirected to the service, and from there the traffic reaches the pods. That is one approach — but then how do I set up my firewalls? I would have to open the firewall on all the nodes in the cluster for these high port numbers.

If you don't want to do that, the next option is to use an external IP for your service. There is some administrative work to do here; it is not as simple as using a router. The administrator defines an external IP address range and assigns those external IPs to certain nodes in the cluster — typically the infrastructure nodes; they are extra IPs allocated to those nodes. Now, when you create the service with an external IP and leave it at that, OpenShift automatically picks one of those IPs and adds it to your service. If you make a call to that external IP, it reaches the host to which that IP is allocated; from there OpenShift knows that traffic arriving on this particular IP address is meant for this particular service, redirects it there, and the traffic reaches the pods.
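A sketch of the two service types just described — the names, ports, and external IP are hypothetical, and the external IP must come from the range the administrator configured:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 9000
    targetPort: 9000
    nodePort: 30900      # must fall within 30000-32767; opened on every node
---
kind: Service
apiVersion: v1
metadata:
  name: my-app-extip
spec:
  selector:
    app: my-app
  ports:
  - port: 9000
  externalIPs:
  - 192.0.2.10           # reaches whichever node holds this address
```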
You can leave it to OpenShift to assign the IP address from the list that your administrator configured, or you can specifically ask for a particular IP when you are creating the service; both options are possible. This allows you to set up your firewalls to allow ingress traffic to particular IP addresses.

What about failover? You can run IP failover, which is also available as a container and runs as a pod. You set up the external IP as a virtual IP, and if the node that holds the IP goes down, IP failover will automatically reassign it to some other node.

Now let's go to the next use case. I know how to do ingress, I know how to do egress, I know how to set up a cluster in a multi-zone environment, I know network security policies. What else? How do I secure traffic flowing between the nodes themselves? OpenShift allows you to use IPsec, which uses the OpenShift CA and the existing certificates configured across your OpenShift nodes. All the traffic going between the masters and the nodes, and from node to node, is secured with IPsec, so your traffic is encrypted. You will need three files: the cluster CA file, the client certificate, and the private key. These are configured into the libreswan database, and you set up IPsec (Internet Protocol Security) policies to encrypt the traffic between the hosts.

Next, coming to the application level: we talked about the lower layers, but what can developers building applications do in terms of network security for their apps? We talked about the router and routes. Can I do SSL for my applications? Absolutely, and there are three different possibilities with the OpenShift router that we discussed before. One: you can do no SSL at all.
In that case there is no problem; the traffic simply goes all the way through to your application. But if you are doing SSL, you can have the SSL terminate at the edge, where the edge is the router. The traffic from the router to your application, that is, within the OpenShift cluster on the overlay network, then goes unencrypted. So from outside to your router it is encrypted; within the OpenShift cluster it is unencrypted. That is one option. The second option is to use passthrough termination on your route, which means the router will not try to decrypt the traffic; it just passes it all the way to your application. The implication is that your application container needs to know how to handle SSL itself, so your certificates need to be configured at the application level. The third option is re-encrypt. This is the case where you have one certificate between the client and the router, and between the router and your app you use a different certificate. That is re-encrypt. This is how you do SSL for application traffic coming in through the router.

What else? The next question that we commonly get asked is about layer 7 application security. I want to monitor the east-west container traffic within the OpenShift cluster; can I do that? I want to set up web application firewalls; can I do that? Can I do granular traffic control? Can I do packet inspection? Can I detect things like denial-of-service attacks, or ransomware and viruses? Or can I do runtime security with forensics, capturing incidents, doing auditing, and so on? For these, OpenShift by itself doesn't provide a solution from Red Hat, but we do have partners whose solutions have been tested. These are all listed on the OpenShift partners list, and you have a choice here. So pick your favorite and work with them.
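The three termination modes map to the route's `tls.termination` field. A sketch of the re-encrypt case (hostname and certificate contents are placeholders; `edge` and `passthrough` use the same field, and passthrough needs no certificates on the route):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp-secure
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: myapp
  tls:
    termination: reencrypt        # or: edge, passthrough
    certificate: |-               # cert presented to clients (edge/reencrypt)
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    key: |-
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
    destinationCACertificate: |-  # CA used to validate the app's own cert (reencrypt)
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
```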
Figure out what they have to offer. Each of these partners has its own benefits in terms of what it offers, and you can select any of these solutions. All of them are containerized and run on the platform itself, so you have multiple options here.

What else is coming up in the area of application security? You will have heard the term Istio quite a few times in the past couple of days. Istio also has things to offer in the area of network security, so let's talk about it briefly. We have ten minutes, that's good.

The way Istio works is that it uses a pattern called the sidecar proxy. Your application developers build their code as containers that contain pure business logic and nothing else; they just take their business requirements, write their code, and create their containers. The rest of the things you want to do, for monitoring your applications or whatever else, you can do by injecting a sidecar proxy into the pod. You all know that a pod can run more than one container, right? So your business-logic container is supplied by your developers, and when the application gets deployed, an additional container, the sidecar proxy, can be injected automatically. What does it do? Any traffic going to your application container is intercepted by the sidecar proxy. If you are running multiple applications, say two different microservices talking to each other, the traffic between them doesn't go directly; it goes via the sidecar proxies. So the sidecar proxies themselves form a network across the different applications running within your cluster, and this network formed by the sidecar proxies, those dotted lines, is a mesh network called a service mesh. Istio is nothing but an implementation of this service mesh. So we have the sidecar proxies, and Istio uses Envoy.
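The two-containers-in-one-pod shape can be sketched by hand like this; in practice Istio injects the proxy automatically, and the proxy image and arguments below are illustrative assumptions, not the exact injected spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
    - name: myapp                 # the developer's pure business-logic container
      image: example/myapp:1.0
      ports:
        - containerPort: 8080
    - name: istio-proxy           # the Envoy sidecar; intercepts the pod's traffic
      image: istio/proxyv2:1.0.0  # version tag is illustrative
      args: ["proxy", "sidecar"]
```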
You know Lyft, the ride-sharing company? They developed Envoy and donated it, so it's an open-source project. So Envoy is your sidecar and you have the mesh network; but in order to instruct your sidecars on what to do, you have the Istio control plane, which includes a couple of components. One of them is Pilot; this is how you tell Envoy what to do. For example, using Pilot you can send commands across your cluster saying: I have two versions of microservice A; version 2 was just added and I'm still testing it, so if the traffic is coming from this particular user Joe, who is a tester, direct it to version 2, and direct the rest of the traffic to version 1. You can send such a command using Pilot, so you can configure the routing rules. What else can you do? Say I have a newer version of a service and I want to direct specific traffic, coming from a particular customer and matched on HTTP attributes, to the newer service, while all other traffic goes to the older one. You can configure things like that using Pilot. Pilot talks to all the Envoy instances across your OpenShift cluster and instructs them what to do; it configures the routing rules.

You also have Mixer, where you can plug in additional tools. For example, say you have application monitoring tools and you want to monitor the traffic going across the cluster, or you want to do log consolidation, or anything else you can think of. Mixer provides adapters where you can plug in your own tools; of course those tools have to be built to work with Istio, so tool vendors in the industry would have to comply with Mixer's plug-in mechanism and build adapters for their tools. Using Mixer you tell Envoy what data each tool will need.
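The Joe-goes-to-version-2 routing rule described above could be written, in Istio's v1alpha3 traffic-management API, roughly as follows (the service name, header, and subset names are assumptions; subsets are defined separately in a DestinationRule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: microservice-a
spec:
  hosts:
    - microservice-a
  http:
    - match:
        - headers:
            end-user:
              exact: joe      # the tester's traffic only
      route:
        - destination:
            host: microservice-a
            subset: v2        # the version under test
    - route:
        - destination:
            host: microservice-a
            subset: v1        # everyone else stays on the stable version
```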
So if you want to monitor traffic, you specify what you expect to collect, and those rules are configured into the Envoys. The Envoys are intercepting all the traffic going to your pods, so they report all that data back, via Mixer, to your adapters. For example, think about AppDynamics. Today, AppDynamics or New Relic or any of these tools are built into your application: you take the JAR file and bundle it with your application. Tomorrow you don't have to do that; you just plug the tool in here, tell Envoy what you want, and the data is reported back via Mixer to AppDynamics or New Relic or whichever tool, and it can do whatever it wants with that data. So your code remains pure business code, nothing else; it is clean. That's the architecture that is coming up.

But what about security? From the Istio security perspective, what can we do? That's the third component: Istio CA. The control plane includes Istio Auth, which has a certificate authority. What does it do? How many of you know about service accounts in OpenShift? You've heard of service accounts, right? Every pod on an OpenShift cluster runs with a service account; this concept has moved from OpenShift into Kubernetes. Now, Istio uses the service account as the identity of every pod. Using the service account as identity, a certificate is generated; Istio uses SPIFFE for generating the certificate, and Istio CA generates those certificates and distributes them to the Envoys across all the pods in your cluster. So now, using these certificates, you can set up mutual TLS between the services running on OpenShift. One of the things we talked about is that all the communication between the nodes can be secured using IPsec; but beyond that, now we are talking about any pod talking to any other pod, any service talking to any other service on the overlay network. Even that traffic can be encrypted using mutual TLS, because you have these certificates distributed.
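Mesh-wide mutual TLS of this kind can be sketched with a DestinationRule that tells client sidecars to use the Istio-issued certificates; the mesh-wide host pattern below follows the upstream Istio examples and is an assumption, not part of the talk:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
spec:
  host: "*.local"          # all services in the mesh
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # sidecars present the Istio CA-issued certificates
```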
Not just that: if I want to revoke the certificates, I just go to the control plane and revoke them. I want to add new certificates because something has changed, or I want to change the secrets across all my applications or all the instances of an application; you can do that from the control plane. Some of this is not there yet, it's coming up, but this is the future.
Info
Channel: Red Hat Summit
Views: 3,818
Rating: 5 out of 5
Id: dkPYdSs4EaA
Length: 52min 42sec (3162 seconds)
Published: Thu May 10 2018