OpenShift - the power of Kubernetes for engineers by Marek Jelen

So thank you for coming, and welcome. My name is Marek, I work at Red Hat on the OpenShift team, and today I would love to tell you something about OpenShift, especially from the perspective of how you can benefit from it as an engineer compared to using Kubernetes directly — and we will see the relationship there, which is why I'm speaking about Kubernetes and OpenShift in the same talk.

Before I go into the talk, I will ask you a few questions, so please raise your hand if you are OK with what I am saying. You have tried OpenShift already? Awesome. You are very familiar with Docker? OK. You are a bit familiar with Kubernetes? OK, that's good. You are an engineer — that means a person who builds cool stuff? You are operations — the person who runs the cool stuff? OK. You are a manager — that's the person who slows down the other two groups? OK, not many, so I can make jokes.

Because every presentation should start with an agenda, my agenda is: Kubernetes and OpenShift. We have to keep the formalities. So first something about Kubernetes, and then I'll move to OpenShift. What I'm going to show you is how to use both of those technologies, Kubernetes and OpenShift, and I am running a single-node cluster of each of them on my laptop right here. These are the commands that I used to start the clusters — it's the default configuration, I didn't add anything extra. I didn't add Ingress to Kubernetes, because it's not there by default, so essentially this is just what you get if you deploy the bare bones. I also pre-cached some of the Docker images and other things I'm going to use onto my laptop, so during the demos we should not be waiting too long — there has been some preparation for that.

I'm using these two tools called Minikube and Minishift. Minikube allows you to start Kubernetes in a VM on your laptop. I'm using Linux, so I'm using the KVM driver; there is VirtualBox, xhyve on OS X, VMware, and I don't know, maybe a
hundred other drivers. If you want to spin up a single-node cluster that doesn't pollute your own machine, it creates a new VM, downloads everything in there, and spins up the cluster inside the VM. `minikube start` and `minishift start` spin it up; `minikube delete` destroys the VM and cleans everything up afterwards. So it's a very convenient way to do stuff, and I'm using that.

We will also use an application. My application is written in Java EE, because this is a Java conference — it started as a Java conference, anyway — so it's written in Java EE. The source code is hosted on GitHub, and we will be using the source code as well; that's why I mention it. To spin up the application we will need one environment variable that points to a YAML file describing what the application should do. What the application does is use that YAML file to configure content for a workshop. When we are doing workshops, we use this to deploy the workshop content for the attendees: there are different modules, and you can go to these modules and follow the workshop. The YAML file describes which modules are supposed to be in the app. There is also a pre-built container on Docker Hub — when we give workshops we use that — and it's pre-cached in those clusters on my laptop already. So this is the application I'm going to use, and from the engineering perspective I wanted you to understand what it does and why it was built, because it's the example I'll be using during the demos.

One more thing I forgot: I have books here. If you are active during the presentation — when I ask something and you answer, or you ask questions — you can get a book. I think there are 18 or 19 of them, and they are for you, so be active and don't sleep. It's difficult after lunch, I know.
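The lifecycle I just described can be sketched like this — a minimal sketch, assuming the KVM driver flag as it existed in Minikube/Minishift releases of that era:

```shell
# Spin up a single-node Kubernetes cluster inside a fresh KVM VM
minikube start --vm-driver=kvm

# ... run demos against the cluster ...

# Destroy the VM and clean everything up, leaving the host untouched
minikube delete

# The OpenShift equivalents work the same way
minishift start --vm-driver=kvm
minishift delete
```

On OS X you would pass `--vm-driver=xhyve` or `--vm-driver=virtualbox` instead; the point is that the whole cluster lives in a throwaway VM.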
Kubernetes started as a project from Google, and Google has been using containers for many years, even before containers were a hyped technology, before most of us actually cared about containers. They have been using them since something like 2004. They have two tools internally, Borg and Omega, which schedule containers on Google's infrastructure, and they also tried to open source their container runtime. It was called "Let Me Contain That For You" — lmctfy. Difficult name; it never got traction. I think it's because of the name, but it doesn't have to be, and I think even Google is now phasing it out and moving to Docker containers or something like that.

How big is Google? That's a good question. Big. I usually think that Google is pretty much the biggest player we have right now in the infrastructure world. I would say they are bigger than Microsoft — maybe not financially, but that doesn't matter here — technically, infrastructure-wise, I think they are bigger than, or at least comparable to, Amazon. So they have been using containers at that scale, and in Kubernetes they are actually trying to open source that know-how, the lessons they learned the hard way, and put it into an open source project. They are not open sourcing the technology they have built; they are open sourcing the know-how, the ideas they learned while using containers, into the project.

Kubernetes is designed specifically for running containers. You have a container, you want to run it, and you want to run it in a highly available, resilient way. It should scale up, it should be deployed on a specific node, and there is a scheduler to decide which node. All of these things relate to the runtime only —
for running containers. Kubernetes doesn't care how you build the containers, and it doesn't really care how you store them. You just say "deploy this container" and it will be deployed; it can be built in any possible way and stored in many different repositories or registries, but Kubernetes doesn't care about that and doesn't provide that infrastructure. It provides an API, and on top of the API there is a CLI tool and a web console that allow you to communicate with it and use it for whatever you want to do. Maybe after every slide I will ask if there are any questions, so we don't lose the context for them — feel free to raise your hand. No? OK.

Kubernetes by design is not that difficult — it's quite simple, actually — and there are four main objects, or things, in Kubernetes that you need to understand to be able to use it. First, there are labels: key-value pairs that you can attach to the other resources, like pods, replication controllers, or services. Every resource can have several labels, and these labels are used for identifying those resources in the cluster.

Kubernetes doesn't work with the concept of a container; it works with the concept of a pod. A pod is a set of containers that has the same lifecycle, is bound to the same network interface, and is running on the same node — containers that are always tied together. If you have an application like WordPress and you have MySQL, should that be a single pod or two pods? Not a single one — these two have different lifecycles. If your MySQL goes down, your WordPress can still say "hey, I have a problem, come back later"; and when WordPress fails while MySQL is still running, there is no need to tear the database down just because the front end is not running. Different lifecycles, so it should be two different pods. But if you have MySQL and some monitoring daemon that just fetches
information from the database and sends it somewhere — to Grafana or something that collects those metrics — that could be a single pod, because when you start the database you want this daemon to start as well, and when you stop the database you want to stop the daemon as well. They have exactly the same lifecycle: when the database starts, the monitoring daemon starts; when you intentionally stop the database, you don't need the daemon running for the time being. There is an academic paper published by Google somewhere that describes different use cases for having multiple containers in a single pod. In my opinion, in 80 or 90 percent of the cases when you use containers you will have a single container in a pod; usually you don't have two containers there.

Next, a replication controller says how many pods with a specific label are supposed to be in the cluster. As I said before, labels are attached to these different resources and identify them. So if my pod has a label app=workshopper, for example, and I have a replication controller that says there should be one replica of a pod with the label app=workshopper, and my container dies — my pod goes down, so there are zero pods with that label — it will spin up a new pod to keep it at the level of replicas: 1. If I have too many of them — for example I want to scale down, so I have replicas: 2 and I change it to replicas: 1, and it sees there are two (or five) pods with that label or set of labels — it will kill some of those pods to keep the count as described in the replicas value. That's the concept of a replication controller: it always maintains a specific number of pods with a specific set of labels in the cluster.

And then there are services, because as I said, the
cluster can kill containers and spin them down. If you have two different pods that need to communicate with each other, they would need to know each other's IP addresses; if a pod is destroyed, or dies and is rescheduled somewhere else, the IP address assigned to it can change. An application that was consuming something from the pod that died would need to reconfigure itself to go to the new IP address instead of the old one. So services provide a stable IP address that works as a load balancer in front of a specific set of pods. What happens is you say: I am a load balancer, or a proxy, for pods that have this set of labels — and anything in the cluster that has that set of labels will be considered a backend for this proxy. Was that a horrible answer? Well, I usually use the word proxy when there is only one pod, and load balancer when there are multiple pods behind that service, but in this case I think they can be interchanged just fine.

So this is essentially how Kubernetes works. You create a replication controller that describes a pod; you set the number of replicas to one; it creates a pod; and you have a service that manages the traffic to that container. That's essentially it. There are newer concepts that extend these basic ideas. Just as a replication controller is a parent to a pod, there is a deployment that is a parent to replication controllers. You need that if, for example, you are doing a rolling deployment and want to redeploy the application. Say you have an application with replicas: 5 — there are five pods, and a replication controller that says there should be five pods of this type. If you want to do a rolling deployment — and by rolling I mean I spin up a new container, and if it works, I take one old one
down — then I will create a new replication controller with replicas: 0 and scale it to one. If the container works, I scale the old one down to four and the new one up to two; if that works, I scale the old one down to three — and so on, in a linear way or however you configure it, it's up to you. That way you get a rolling deployment of the application. This is maintained by deployments, which are parents to replication controllers — that's the parent-child relationship, because something manages the thing below it: replication controllers manage pods, and deployments manage replication controllers.

And alongside services, in Kubernetes there is Ingress. Was there a question? No? Yes, I heard something. So there is Ingress, and Ingress allows you to access something in the cluster from outside. It's usually HTTP-based: based on the Host header in the HTTP protocol it decides which container should be the backend for the particular request coming from the client.

That was a lot of words, so let's do something. I'll try to show you how to actually use Kubernetes in a very simple way, and we will deploy the application that I mentioned at the beginning. We need to deploy it using the container that has been pre-built, because Kubernetes itself doesn't allow building containers. Because I don't want to build containers myself — it's a pain, and I don't want to do the pushes and that kind of stuff — what I have in place is: when I do a git push to GitHub, it triggers a Travis CI build, the build builds my application and then creates a container, and the container is pushed to Docker Hub. So I have this workflow, but it's outside of the cluster itself. The same goes for the registry: as a registry I am using Docker Hub, and the registry is not part of the Kubernetes cluster itself.

Yeah, mm-hmm. So what you can do — well, I think in Kubernetes it's optional, in OpenShift it's part of the default deployment — is there is a DNS
server, or DNS service, running inside the cluster (or there can be), and it resolves a name to some IP address. You can have two types of services — one is called headless, the other is normal, or whatever — and they always have a name. If you are communicating from your front end to your back end, you will have a service that works as the proxy or load balancer for the backend containers, and you will name it "backend", for example. There is a synthetic DNS name — something like service-name.namespace.svc.cluster-domain — and you can use this well-known domain name to identify that service. If it's a normal service, it will choose one of the pods and proxy the request there. If you don't want to use the service as a load balancer, but you would like to use it as a discovery feature, you can use a headless service: when you do a DNS query against a headless service, it doesn't choose one IP address, but returns the IP addresses of all the pods behind that particular service. So you can discover all the backends just by doing a DNS query against the service, if it's headless. If it's not headless, it reports only one, in a round-robin fashion. Does that answer your question? Awesome. You've won a book — do you want to come get it now, or later? OK. Ask the questions — the books really are for you if you ask questions.

Yes — and what do you mean by automated? Yes, there is an auto-scaling feature — I think it's called the autoscaler, or something like that — that can monitor some of the metrics of the pods and decide, based on the metrics, whether it should scale up or scale down, and it should be available as a component that you can deploy with Kubernetes. So it can be automated. A book for you. I know you asked just for the book.

Yeah — I don't think so. I think in the current versions they don't do automatic
rebalancing of the cluster when there are new resources. New containers will tend to go to that particular node, because it has the most free resources, so everything new will go there, but I don't think it will automatically take something from the other nodes and reschedule it just because there is a new node. (The question was: if a new node is connected to a cluster, is Kubernetes going to rebalance the whole cluster and put some of the already-existing containers on that new node?) Three books down. OK.

Right, so we will need a command line, and then a browser. This is the console of Kubernetes when it's deployed, and it's completely empty. I prepared the commands in a text file here, because if I typed them myself I would definitely make some mistakes, some typos — it's safer this way. So this is the command that we need to run. Can you read it? Is it readable? OK — it's not readable for me from down here, because it's so huge. `kubectl` is the Kubernetes CLI; `kubectl run` runs something; I name it "workshopper"; it will use a Docker image from the hub; it will expose port 8080 from the container; and there is the environment variable I need, the one mentioned in the first slide. Because in the default setup there is no Ingress, no HTTP layer doing the translation, I need to expose the port — there is no nice entry point that would let me get to my service otherwise. If I run it — it's very fast, and because I already cached the image it shouldn't take long to deploy. You see that there is a new deployment and a new replica set. Replica sets are replacing replication controllers; they have slightly different behavior with respect to labels. So I created a new deployment, the deployment created a new replica set, and the replica set created a
new pod. Let's see if it's already there. It's there, it's running, it's green — we have a pod running, so our application is running. At this point I take the IP address exposed from my VM and the port that was assigned, and the application should be there. And it's running — if I click it, it works. That's my application, deployed on Kubernetes. It's quite easy, but there is nothing fancy: I still had to build my container myself somewhere, I couldn't do it in the cluster. Kubernetes runs the container for me, but that's pretty much it — it will not help me build the container, maintain the containers, or rebuild them if there is a problem. That's what OpenShift is putting on top of Kubernetes, and that's one of the features we'll look at. So this was just to get you started with Kubernetes, so you see how it works if you want to run a container.

Now I will move over to the slides again. Because I had a problem with the Wi-Fi — it doesn't work at all, so I'm connected through my mobile phone and everything goes through 4G — I put screenshots of the different steps in the slides, in case there was no signal, so I have some backup to show you. When you run `minikube start` with the KVM driver, it does these steps — that's what actually happens when you install Kubernetes using Minikube. Then I ran the command — that's my screenshot of the screen — and that's my application. So that's pretty much what I showed you as well.

Now moving over to OpenShift. OpenShift has some history. OpenShift is a project and a product by Red Hat, but today I'm speaking mostly about the open source part.

Yes — right now Kubernetes runs in a VM. There is KVM, a virtualizer, running on my machine, and in that virtual manager I have two virtual machines: mini-
kube and Minishift. In the Minikube VM there is a Kubernetes deployment, and I was communicating with it — it's not running on my machine directly, it's inside that VM on my machine. Yes — it has its own Docker in there, using its own internal infrastructure, so it doesn't pollute my system outside of the VM. I could do the same thing directly — you can use something like `kubeadm` and it starts on your own machine directly, so you don't have to go through the VM, and that's quite easy as well — but I like the containment of the VM, because I can just do `minikube delete` and it destroys everything, and I can start from scratch.

Yes — `kubectl` is just the front end to the Kubernetes API running in the VM. It's HTTP-based, so they just communicate with each other. If you use, for example, Google Container Engine — or whatever they call it — you can use `kubectl` against that too, or against some other cluster. There are different profiles, or contexts, so you can switch between different instances of Kubernetes and communicate with one specific one — it's just an HTTP endpoint that you talk to, so it can be wherever you want. The nice thing about Minikube is that it actually creates this context for me on my machine, so I don't have to log in or set anything up manually; it's created there and it's called "minikube". Very simple.

It's up to you — you can specify that. I think by default... where is it... yeah, this one: by default Minikube uses two gigabytes of memory and two of the four CPUs on my laptop. That's the default configuration, but using the command-line switches you can choose your own configuration of the VM.

How much memory does the application inside the container use? Well, there could be something reported here, but I'm not sure if it's right
there — no, I don't know, really. It's a WildFly application server running a simple Java EE application, but it's using pretty much only the Servlet and CDI specifications, so most of the subsystems are probably disabled. I would guess something around 300 megabytes, but that's just my guess. OK — it's still the same people; I don't want to give the same people a book again and again, so different people, ask questions as well if you have some. Yes — just wait a moment; if I start answering right away, I will try to remember the question.

So, going on: we started OpenShift about five years ago. Who knew about Docker two years ago? Who knew about it three years ago? Yeah — so we started way before Docker was there. At that time there was a company called dotCloud, which was trying to build a platform as a service, and they built this technology that allowed a layered approach to containers and distribution of applications. They eventually decided: hey, our business in the platform-as-a-service world is not going that well; let's just take the technology we use for the layers and containers, build a business around that, and call it Docker. So Docker was actually created as a side project, as a component of a bigger thing, and it somehow transformed the company into the Docker company.

At the time there was no Docker, really, so we created our own container technology. It was called "gears"; it was pretty much LXC with SELinux, which created a secure environment and allowed the application to run there. And we had our own scheduler. The whole thing was written in Ruby — I like Ruby, so I was happy — though some of the customers who were deploying the platform were complaining that it was pretty complex, and that Ruby isn't really stable, and so on. There were a lot of moving parts and there were some problems. So eventually, when Docker came along and our customers started asking for Docker — "we want to
use Docker with your platform, can we do it?" — we didn't have the answer at the time. We were debating: should we ditch the gears and use Docker, should we support both, what would be the future? At the same time, Google was starting to open-source Kubernetes, and we made the decision: OK, let's cut it here, let's adopt Docker for the container technology and add Kubernetes for the scheduler. So we actually ditched the previous code and used these two. In my opinion, Kubernetes is becoming the de facto standard, the most common way to deploy containers — although of course there are other tools — so it was a nice decision. And as for Docker: when somebody says "container", most people think of Docker, even though there is rkt and some other implementations, and some older ones.

What was the first container technology? This is my own opinion, and some people disagree with me that it even is a container technology, but I still think it is: chroot on Linux. It didn't let you really isolate resources, but you were able to switch into a different context of the machine and do something in there, so I think that was one of the first. Then we had Solaris Zones, then FreeBSD jails, then LXC on Linux, and we evolved into Docker, and eventually rkt and some other implementations as well. So that's just something about the history.

Now something about OpenShift itself. I like to call OpenShift a distribution of Kubernetes. If you want to use OpenShift as a Kubernetes cluster, you can: you can take `kubectl` and just run commands against an OpenShift cluster, and it will just work — all the things from Kubernetes are exposed back to the user. And there is the word "enterprise", because Red Hat's business is doing products for the big guys, the enterprises. So if a customer asks, "hey, I want a supported Kubernetes running on Red Hat Enterprise Linux, can you do it for me?" — yeah, we have OpenShift, and you can use OpenShift as a Kubernetes cluster as well. That's why I call it an enterprise distribution of Kubernetes.

It also adds some other features, like multi-tenancy. In a Kubernetes cluster everybody is pretty much equal and there is no role-based access; we have different projects that isolate all the resources, so different users can do different things — things that allow you to use the platform as a multi-tenant system, where multiple users can use it without interfering with each other. We also do this on the network level: every project can have a virtual network just for itself, so different projects cannot see the communication of other projects.

But what I want to speak about most today is the developer experience. We take Kubernetes and build the developer experience around it, so that you can use the platform for building containers, for managing containers — to cover the whole lifecycle of a container, from its inception to its runtime, or production, in the same tool. Kubernetes itself is only the last part of that process, but we try to cover the whole lifecycle of the container. OK, that's enough of the
interactions — let's do something again with a demo. If there are questions, I will remember to repeat them; if there are no questions, I won't have to.

What I have here is OpenShift Origin running. Origin is the open source project: if you want to get it, access the source code, or use it, you can go to github.com/openshift/origin, and right there you can download OpenShift — which in the simplest case is a single binary — and spin it up and use it for whatever you want. It's Apache 2 licensed, so you can contribute back, you can build something from the source code; everything is right there on GitHub.

So my project is here. I have one project only, and I can do things in there. When I'm inside the project, I am pretty much in the same scope as I was in the general view of the Kubernetes cluster before; this time I need to select a project to work in a specific namespace. So what shall we do? We shall deploy the container again — exactly the same thing as we just did: pull something from Docker Hub, spin it up, and get it running on this cluster instead of on Kubernetes. I could very much use the same command as before, but we also have our own command-line tool. One more thing: before I can use the command line, I need to log in with my username and my password and choose a project — something you didn't see with Kubernetes, because it's not needed there, everybody has access pretty much everywhere.

So I am logged in. Where is my... when Minishift comes up, it can just launch the browser with the console, which is at the IP address of the VM, so I don't have to remember the URL myself. For `oc login` I need the address of the cluster, so I'm using `minishift console --url`: instead of opening the browser, it prints the URL to stdout, and then I log in using
a username and a password. It's quite easy. So this is the command I need to run to deploy the application: `oc` — instead of `kubectl` — `new-app`, my container, and the environment variable. This time I'm not exposing any port, because I don't need to: I already have HAProxy running on the platform, doing the HTTP translation for me — it's there as part of the platform itself. When I switch over to the browser, there is my application; there is one pod being deployed, and now it's blue, so it's running. We are exposing port 8080, but right now the application is isolated inside the platform — it's not accessible from outside. What I need to do is create a route. So I create a new route, or just use the defaults, with port 8080. Nothing changed, except that I now have a DNS name, so I can open it, and the request gets translated. What's happening here is: I have a domain name, I use it and hit HAProxy, HAProxy decides which service is supposed to be the backend for that name, and then I hit the container itself. So I don't need to expose a port or anything; I can just use the translation layer. This is useful for HTTP, because the HTTP protocol is simple to translate using the Host header; other protocols are much more challenging.

So that's it — I have the application running. This is the same as we did in Kubernetes; again, very simple. Now I will clean it up: `oc delete all -l app=workshopper` — I'm deleting everything that has the label app=workshopper. When you create something, it automatically labels all the resources with a specific label, so this one command deletes everything related to my application. Nothing is here now.

This time I'm going to run a command I prepared — that's this one, Ctrl-C, where's my console — like that. `oc new-app`, the same thing, but this time I
have `wildfly~` followed by a URL that points to my GitHub repository, then `--name` — which is not required, because it would be guessed from the base name of the URL — and then the environment variable again. What's going to happen this time is that it creates a new build. And it did — my build is running here; it was not here before. I hope I picked the correct WildFly image — yeah, I guess I did. So it's cloning my repo into the platform, and it will use the WildFly container, which has also been configured for building stuff. It sees there is a pom.xml file — come on, is my Wi-Fi still up? it should work — so it will use the image, and the image knows how to build an application when there is a pom.xml file: it runs a Maven build, then from the target directory it takes the WAR files and EAR files and that kind of stuff and puts them into the deployment directory of WildFly. Then it takes this container, commits it as an image, and pushes it into the internal registry that's running inside OpenShift. When this happens, OpenShift detects that there is a new image for this application, pulls it, and spins it up — it starts WildFly, and WildFly automatically deploys the WAR or EAR file that came from the build. So I don't have to pre-build anything myself; I can use the platform to build the container for me.

Oh yeah, we are downloading the whole internet now, for Maven. The platform is pulling all these things, and then it runs the build. When the build finishes, it creates a new container — yeah, copying the artifacts, pushing the new container — and you see there is an internal IP address, so it's the registry inside the platform; I'm not pushing to Docker Hub or anything like that.

Yes — fabric8 replicates some of these features, yeah. Sorry — "this looks very similar to fabric8" — what's the question? Yeah, they have quite similar features;
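Stepping back to the build that just ran: the source-to-image flow can be sketched like this. The repository URL and the environment variable name are placeholders of my own, not the real ones from the demo.

```shell
# Source-to-image: OpenShift clones the repo, runs the Maven build inside
# the wildfly builder image, commits the result as a new image, pushes it
# to the internal registry, and deploys it.
oc new-app wildfly~https://github.com/example/workshopper \
  --name=workshopper \
  -e CONTENT_URL=https://example.com/workshop.yaml   # placeholder env var

# Watch the build log while Maven downloads the internet
oc logs -f bc/workshopper

# Make the service reachable from outside through the HAProxy router
oc expose svc/workshopper
```

The `wildfly~<git-url>` syntax is what tells `oc new-app` to do a source build with that builder image instead of deploying a pre-built container.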
Some of those features are actually things that we use in OpenShift, just exposed in a different way — they are trying to build a different user experience for engineers than we have in OpenShift itself. And this is mostly a matter of personal opinion, what you like, what you think is better: some people prefer this user experience, some people would say fabric8 is great and want to use that user experience. So, one book for you as well. Still pushing — everything's happening in my single VM, and `docker push` is quite demanding compute-wise, because we are calculating the hashes and validating them, so it takes some time before everything's pushed, especially for the big layers. Yes? — in OpenShift or in general? Okay. Oh, definitely — we have big customers in areas like, I don't know, airports, banks, government, military. Yes, it's deployed at large scale, across industries. And we put a lot of security constraints on top of the containers: by default everything is isolated with SELinux, and by default we never run a container as root — we always run it as a specific user, which in essence breaks a lot of containers from Docker Hub, but it also puts a security layer on top of the container, because you are not root, so you cannot do a lot of stuff. Sometimes people see OpenShift and say, hey, my container is not working — yes, because your container was built in a way that it probably shouldn't be, because there are still no user namespaces in the kernel in the way there should be. Once the technology is there to be seen as root inside the container but be a regular user on the machine itself, then it could be possible to use it this way. At this time, even with the user namespaces that exist right now, you can be seen as root in the container, but you will always be a single user for all containers on the machine.
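On OpenShift the non-root policy described above is enforced by the platform's security context constraints; in plain Kubernetes terms you can request the same behavior explicitly. A minimal sketch — the pod and image names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start containers whose image runs as UID 0
  containers:
  - name: app
    image: example/app:latest     # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false   # block setuid-style escalation inside the container
```

Images from Docker Hub that assume they run as root will fail under such a policy, which is exactly the "my container is not working" effect mentioned above.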
So if you break out, you would have access to many other containers, because it's still the same user — so there is still a security aspect that needs to be fixed before you can actually use containers in a root manner. Maybe some of those problems come from people using it in a way that it's — well, it's difficult to say "not meant to be", because everybody has a different opinion about how to do stuff, right? But we have customers who actually run it this way; they use it and it works for them, so we have the experience that it works. Yes? No, that's just a general Docker repo — if I go to the workshop repository on GitHub: yeah, I do have a .travis.yml and a Dockerfile there, because that's used for building the container and pushing it to Docker Hub as a Docker image, but it's not required for the application itself. The only thing is that the builder image knows that if there is a pom.xml file, it should run a Maven build, take the artifacts from there, and configure itself to use those artifacts — that's pretty much it. So the application is independent from the platform; the builder image needs to understand how to build that particular technology. So one builder understands how to build with Maven.
There are builder images for Ruby, Perl, PHP and similar technologies as well. And if you need to do something specific to your own company, to your own deployment, you can just do `FROM` the WildFly builder image, change something in the builder image, and use that new container as the builder image for your use cases — so it has more features than ours has, because there is something specific to your company that you need to have there. Makes sense? Cool. Yes — well, let me show something, because I think I'm slowly running out of time; I will run one more command that will do something, and while it's happening we can discuss things. So my build finished, my application is running, I can access it again, and I should see my workshop — so it works. It's not exactly the same as pulling the Docker image, but it's very much the same — except I am building it in the platform instead of building it outside of the platform. This is good, but you don't always want to push to a GitHub repo and then trigger the build from the GitHub repo; sometimes you want to build from your source code directly. So the last thing I want to show you: I will use the same configuration, the same thing, only I will trigger the build with `oc start-build` for workshopper, and from a directory — so let me change directory, like that. This is the source code of my application, and I want to build from this directory: don't use the GitHub repo we specified before, use the source code on my machine right now. When I go back to my console and check the log of the first build, there is the cloning, right? I'm cloning the GitHub repo — that was the build we did before. Now I run this, and a second build was triggered, so I can see the log over here — and this time there is no `git clone`; it says it's using an archive from stdin. So I actually compressed my source code on my machine and uploaded that archive.
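The `oc start-build --from-dir` flow above works because a build can take a binary source instead of a Git URL — the CLI uploads an archive of the local directory and the build consumes it from stdin. A sketch of a BuildConfig declared with a binary source (names hypothetical; `start-build --from-dir` can also override a Git source for a single build):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: workshopper
spec:
  source:
    type: Binary
    binary: {}                  # source is streamed in by "oc start-build --from-dir=."
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: wildfly:latest    # same builder image as the Git-based build
```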
OpenShift took the archive as input instead of the clone, so it's using the source code directly — not the GitHub repo, just what I had on my machine at that specific time — and then the same thing happened. So, your questions now. What the platform did is: it took the WildFly image, it put the repository into the image, and then there are two shell scripts, called assemble and run, at a well-known path inside the container. If you want to build your own builder container, you need to put these scripts in the container, and then add some labels to the metadata of the container to specify which scripts should be used as assemble and run. Assemble is used for the build process, and when the container is committed, the run script is set as the entry point — so when the container is started, it will use the run script to boot itself. Does that answer the question? Awesome. No, the person behind you was first. You want to redeploy the containers one by one, in a rolling manner? Yeah — so I'm not sure what's now the default in Kubernetes, but in OpenShift the default is the rolling strategy. Essentially, by default, if you deploy a new version of your application, it will be done in a rolling manner: it will spin up a new container, and if it works it will tear one down, and go on this way. That is the default we use; then you can have "kill everything, deploy everything", or "deploy everything, kill everything", or something custom, like some way of scripting — but the default is rolling. Now you — no, we are pushing into the internal registry; it runs as part of the platform, so inside the VM that OpenShift is in, there is also a registry running, and it's completely internal. You can access it, and you can pull or push from there if you want, but you don't have to. Can the registry be used for production? It can be used for production as well.
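The rolling strategy described above is declared on the deployment object. A sketch of the relevant fragment of an OpenShift DeploymentConfig — the name is hypothetical, and the pod template and selector are omitted for brevity:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: workshopper
spec:
  replicas: 2
  strategy:
    type: Rolling           # default: spin up new pods, then tear down old ones
    rollingParams:
      maxSurge: 25%         # how many extra pods may exist during the rollout
      maxUnavailable: 25%   # how many pods may be down during the rollout
  # template: ... (pod template and selector omitted for brevity)
```

Setting `type: Recreate` instead gives the "kill everything, then deploy everything" behavior mentioned above.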
In pretty much every deployment of OpenShift you will have a registry that's used for the images produced by the platform; it's also used for caching these images and this kind of stuff. You are next — my application over here? Yeah. So the thing that is translating the DNS name to some internal container is HAProxy, running in a container on OpenShift. You can scale the router as well, so you can have multiple nodes that will work as the gateway into the platform — you can scale the part that does the translation. You still need something in front of that that will work as a load balancer, and then the platform can be highly available: if you are for example on AWS, they have a load balancer that can be the publicly facing endpoint with some backends, and then you can scale the HAProxies, and then you can scale the containers. So this can be highly available — I don't know, decide yourself. Uh-huh, mm-hmm, okay — so essentially the question is how to do multiple stages of the application deployment, different stages of the lifecycle: QA, prod, dev, that's it. Ah, this could be a talk just by itself, because there are different patterns to do it. If you have a single OpenShift cluster: the container has something called a node selector that says what nodes it should be deployed to, and nodes can have labels — so this is again based on labels. You can say this container should be on nodes that have zone=dev, zone=prod or something like that, and by changing this selector it will move the container to a different zone. That means you can even specify different hardware to be used for production, for development, for QA. So this is the lowest level.
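The zone idea above is plain label matching. A sketch — the pod, image, and label values are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  nodeSelector:
    zone: prod                  # only schedule onto nodes labeled zone=prod
  containers:
  - name: app
    image: example/app:latest   # hypothetical image
```

Nodes would be labeled beforehand, e.g. with something like `oc label node <node-name> zone=prod`; changing the selector to `zone: dev` moves the workload to the dev hardware on the next deployment.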
If you move one level up — like you mentioned, how to structure access to a database that is used differently per stage — the simple solution is this: in QA, dev and these kinds of environments, the database will probably be running on the platform, inside the project, so you'll just have a service called "database". You access the database by that domain name, and it points to a container running inside the platform. But services don't have to point only at things inside the platform; they can also point to things outside of it. For production, for example, you can create a service that will again be called "database", but it will not point to a container — it will point to a database cluster running on different nodes. You connect to the local service and it forwards the traffic outside. So from the perspective of your application, you are always connecting to the host name "database", and you read the username and password from environment variables or something; as the application moves through different stages it will, without changing the code, connect to different databases and use different credentials based on information in the environment. Or you can have ConfigMaps, which create a file on the disk that you can read from — there are different ways to distribute configuration information into the application itself. Does that answer the question? Yes — there was somebody at the back. Yes, uh-huh — no, not really, it's not meant for that. There are two checks, two probes: one is called the readiness probe and one is called the liveness probe. The readiness probe checks whether the container has already started. An example is WildFly: when the WildFly process is started, it doesn't mean that the application is deployed and running — it's not available yet, so you get a 500 or 404, because the application is not there yet. So for example here I could add a readiness probe doing an HTTP GET to my application, and unless it responds with, I think, a 300-something or 200-something status, it will not add this pod to the load balancing of the service.
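The production pattern described above — a service that points outside the platform — is a Service without a selector plus a manually managed Endpoints object of the same name. A sketch with hypothetical names, port, and address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: database        # applications always connect to the host name "database"
spec:
  ports:
  - port: 5432          # no selector: endpoints are managed manually below
---
apiVersion: v1
kind: Endpoints
metadata:
  name: database        # must match the Service name exactly
subsets:
- addresses:
  - ip: 10.0.1.50       # hypothetical external database host
  ports:
  - port: 5432
```

In dev and QA the same Service name would instead have a selector matching a database pod inside the project, so the application code never changes between stages.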
So it's not yet available; once the probe returns a correct answer, the pod is added as a backend to the service and becomes usable. The liveness probe checks in the same manner, but it checks whether the application is still alive: if the liveness probe fails, the container is considered dead — it will be killed and rescheduled somewhere else. So there are two probes for this. It's not really meant for monitoring itself, like monitoring in Grafana or something; it is meant for checking the state of the container inside the platform. It can be an HTTP request, it can be a TCP connection to some port, or it can be a script run inside the container — if there is a script in the container, it will be run, and if it exits with an error the container is considered not okay; if it exits with zero, it's okay. So that's what the health checks are for. Not sure what you mean — let me see, we were over here, right? Yeah, so this just shows what was happening inside the platform, an overview of different things. What you would probably like to see more: if you go to the application's pods and open one of them, you have logs here, so you see what was printed to stdout and stderr of the container. You can use the terminal, so you can run commands directly inside the container. There can be two more things here: there can be metrics, showing memory, CPU and some I/O charts about the container — which is not deployed here, because it uses Cassandra and that's too big for the VMs running on my machine — or you can have logging, which uses Fluentd to stream all the logs from nodes and containers into Elasticsearch, or Splunk, or something like that, and then you can aggregate the information from there. So there are different ways to do monitoring, but the things you asked about are more metrics, or checks on the state of the container. There are so many questions — I think you are next. Uh-huh. I get this question quite a lot.
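The two probes described above are declared per container. A sketch of the relevant pod-spec fragment — path, port, and timings are hypothetical:

```yaml
containers:
- name: app
  image: example/app:latest     # hypothetical image
  readinessProbe:               # gate: pod joins the service only after this passes
    httpGet:
      path: /
      port: 8080
    initialDelaySeconds: 15     # give WildFly time to deploy the application
  livenessProbe:                # watchdog: failure gets the container restarted/rescheduled
    tcpSocket:
      port: 8080
    periodSeconds: 10
```

An `exec` probe running a script inside the container (exit 0 = healthy, non-zero = unhealthy) is the third option mentioned above.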
I would say that it's much more flexible and has many more features, though Amazon are moving rapidly forward as well — they are adding features all the time. I would say: if you are considering it, try both and decide yourself, based on your own experience, whether it fits your use case. For some use cases you may prefer to use their service natively; some customers run OpenShift on AWS themselves instead of the service that Amazon provides. There are different workflows, different features, and I think it's better to just do some hands-on testing and see if it fits your needs. One more question, at the back somewhere. Are we overflowing? A lot? Oh, that's cool. So if there are no more questions — no more questions? Sure, yeah. So there is a concept of persistent volumes. A persistent volume defines that there is, somewhere, a persistent storage system — it can be GlusterFS, EBS on AWS, some Google disk or something like that — so different technologies can be hidden behind a persistent volume. Then you, as a user, create a persistent volume claim that says: I need a persistent volume with these parameters. The platform will choose one of the volumes based on the parameters you chose and try to match them; then you have a persistent volume claim, and that claim can be attached to a pod — it says: mount this technology inside this particular container at a specific path. So then you can use that directory as a persistent directory, and the rest of the container is still ephemeral. Okay — so for containers, pretty much the best thing you can do is to stream everything to stdout from the application; the container technology usually streams everything from there, so whatever you print to stdout in the container, we will take and, through Fluentd, send to some technology that aggregates it. By default that's Elasticsearch running on OpenShift, but it can be Splunk or some other solution that you are already using in your company.
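The claim-then-mount flow described above can be sketched as a PersistentVolumeClaim plus a pod that mounts it — claim name, sizes, and paths here are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce          # one node may mount it read-write
  resources:
    requests:
      storage: 1Gi         # the platform matches this against available volumes
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: example/app:latest   # hypothetical image
    volumeMounts:
    - name: data
      mountPath: /var/lib/app   # only this path is persistent
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
```

Everything outside the mounted path remains ephemeral, exactly as described above.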
Okay, I think I am really, really, really overrunning, so thank you for coming — you were a great audience. These books are over here, so please come and take one; there were not enough questions, so if there are some books left you can take those as well — I'm not taking them, I don't want to travel home with them. And these are my screenshots — thank you.
Info
Channel: Devoxx
Views: 8,045
Rating: 4.9148936 out of 5
Keywords: VDV17
Id: l3tDV25JWQ8
Length: 66min 12sec (3972 seconds)
Published: Wed Aug 30 2017