KubeInit: Bringing good practices from the OpenStack ecosystem to improve OKD/OpenShift deployments

Captions
Good morning, good afternoon, good evening, everyone, wherever you're hailing from. Welcome. I am Chris Short, principal technical marketing manager here on the OpenShift team at Red Hat, and I am joined for a very special presentation today about KubeInit, a project from my fellow Red Hatter and friend Carlos. Carlos, please introduce yourself to the audience and let us know what you've been working on in your free time.

Sure thing. First of all, thank you, Chris, for letting me be here today. My name is Carlos, Carlos Camacho, and I'm going to speak a little bit about a different way to deploy an Origin Kubernetes cluster using Ansible. The title of this presentation is "KubeInit: bringing good practices from the OpenStack ecosystem to improve the way we deploy OpenShift." You heard the word OpenStack; I will tell you more about it later. So, who am I? I'm a Red Hatter. I finished my PhD in computer engineering a few years ago, but I'm still in touch with the people from the university, and you will see this is one of the whys of this project. I work as a software engineer at Red Hat, I've been an upstream OpenStack contributor for maybe five years, and for around three years I've been digging into the Kubernetes ecosystem. I usually share things on my blog, anstack.com, I hang out on Freenode and the Kubernetes Slack, and you can find my GitHub profile, ccamacho, on those networks. Chris, please feel free to interrupt me if you have any question or comment.

I just think it's great that you're on. I love this project; I love any project that looks at installing Kubernetes differently, or just with a different mindset. So I really appreciate you coming on, and I'm sorry it took so long to get you on, but I'm glad that you're finally here.

Thank you. So I'll give you some quick info about KubeInit. KubeInit provides Ansible playbooks and roles for deploying and configuring multiple Kubernetes distributions. It's basically an Ansible collection, and ideally it should be able to deploy different Kubernetes distributions, but right now I only support Origin, which is the upstream project for OpenShift.

Which you can download at okd.io.

Exactly. The main goal of KubeInit is to have a fully automated way to deploy, in a single command, a curated list of prescribed architectures. One of the things you will see if you try to deploy Origin is that it installs magically once you finish all the prerequisites; the installer is magical, but you need to do a lot of steps before the actual deployment. This is what I'm trying to solve: automating all those prerequisites so everything runs in one single command.

That's amazing. I love it.

For those here who have never heard of OpenStack: OpenStack is an open source cloud operating system that allows us to manage compute, storage and networking resources using APIs. It's an opinion, but from my point of view it was the default cloud platform, and it now has competitors like Kubernetes, for example, though they serve different use cases. In particular, with KubeInit what I'm trying to do is inherit some of the best practices from the OpenStack ecosystem to help the way I deploy Origin. I do this pretty much because I'm quite used to the development workflow of OpenStack, so for me it's really easy to take some pieces from OpenStack and use them to deploy OpenShift.

OpenStack is incredibly complex. Some people say that OpenShift, or Kubernetes in general, is simpler, but it's six of one, half a dozen of the other, you know? Pick your complexity.

Yeah. So, the why of this project. I'm used to working on different research projects, so I needed an easy way to deploy Origin, with one requirement: in a single command I just need to deploy the environment, run my tests, destroy it completely, and then do it again as many times as I need. I had to deal with all these manual steps: you need to configure Bind, you need to configure HAProxy, you need to configure NFS shares, you need to provision the infrastructure before you can deploy your cluster. But I needed something fire-and-forget. I needed this because I was writing a research paper for which I had to run some specific tests in this cluster; back then I needed to do this around 70 times, and each test took a day to run, so I had to invest more than two months in running a set of tests that might fail, forcing me to re-execute everything from scratch. In general, if you're writing a research paper, to be convincing you need to provide enough evidence that your results are reproducible. If you deploy, for example, a Kubernetes cluster and execute some tasks, you might leave the cluster in an inconsistent state, so when you rerun the test you cannot be 100% sure the cluster is in its initial state, where that state won't affect your test execution. This is the way I found to make my experiments reproducible. In this case we have a measure called the confidence interval, and you need enough confidence in the results you're getting to prove that your experiment works. I started working on this cluster automation around two years ago for a research project called PyStol, which is a fault injection platform for doing chaos engineering in Kubernetes clusters. I chose Origin because it's the distribution I'm used to, and the paper was submitted to the Transactions on Cloud Computing journal. It's still under review; with this pandemic, everything slows down.

Yeah, I get it.

The title is "Chaos software product line: a platform for improving open hybrid cloud system resiliency." It's under review, and that's the why of why I started with this automation. About the logo: some people gave me feedback like, "hey man, why did you do this logo for the project?" I thought it was cool; I'm a fan of a comic series called Dragon Ball. It's like focusing a lot of energy into one single point and then releasing it, which is like your deployment command: I just want one command to deploy my environment completely.

The way I wanted to make this presentation is as a live demo while I speak about the different components of KubeInit. So right now I'm going to start the demo. You will see that it's incredibly simple from a user's point of view; there is no complexity to installing your Kubernetes cluster, your Origin cluster. As long as you have enough computing resources, you should be able to do this super quickly. I'm saying this because if you do it manually you have to do a lot of manual pre-steps, they might be prone to failure, and they might differ from person to person. With this, you shouldn't be frustrated when you want to deploy Origin. If you look at my terminal, I will run the commands to deploy Origin using KubeInit. The first thing we will do is clone the repository from GitHub, which is something we can all do.

I'm just going to interrupt for one second to let the audience know: if you cannot read the fonts or anything, please let us know; your feedback tells us what needs to change. Carlos actually has a super wide monitor, which is super cool, and we're trying this new format: as you see on the right-hand side, those slides are live on our slide deck site, available to you right now, and his terminal is up on the left. I love this setup, but let us know what you think about it, and we'll continue on. Please, Carlos, and we'll take feedback as we get it.

Thank you, Chris. So yes, please tell us if you're able to see it.

JP Dave would like you to increase the font size.

Yeah, sure, let me see how to do that in the preferences. I will put it at 35.

Let us know if that's better; it looked good to me, and now it just looks better. But there is that fine line between it being usable and it being readable, right? So you tell us.

The important thing here is that the commands I'm running are the ones on the slide. First we clone the repository, we change into the kubeinit directory, and then we run this ansible-playbook command. It's super simple. What I'm going to do now is actually run the deployment command; if you go to the project repository, you will also find this command in the README. It's super easy to run, and that's all you need to do to deploy your Origin cluster. In theory we should let this run for around 20 minutes, and after those 20 to 25 minutes pass, we should have the cluster deployed. This is a very simple environment: a multi-master Origin cluster with one worker. It should be powerful enough for a development environment, if you choose not to use something like Minishift.

Yeah, I mean, you could use something like CRC, but this allows you to do it pretty much anywhere you want, right? You can use the Ansible configuration to deploy it bare metal or hypervisor based.

Exactly, that's the thing. If you want to run actual tests that can be compared to production environments, you need something that allows you to deploy a multi-master deployment. You cannot test something production-ready in a Minishift, right?
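The whole demo boils down to the commands on the slide. The following is a minimal sketch; the repository URL follows the project's current GitHub organization, and the inventory and playbook paths are illustrative and may differ between KubeInit versions, so check the project README for the exact invocation:

```shell
# Clone the KubeInit Ansible collection.
git clone https://github.com/Kubeinit/kubeinit.git
cd kubeinit

# Single deployment command: Ansible connects to the hypervisor,
# provisions the services node and cluster guests, and bootstraps
# the OKD/Origin cluster end to end.
# (Inventory and playbook paths shown here are assumptions;
# the README has the authoritative command.)
ansible-playbook -v --become \
    -i ./hosts/okd/inventory \
    ./playbooks/okd.yml
```

After roughly 20 to 25 minutes the playbook finishes and the cluster nodes come up Ready, as shown later in the demo.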
Yeah. By default I'm using three master nodes, sorry, and one worker, but you can extend this to as many workers as your hypervisor allows.

As your resources can withstand, exactly.

It's a funny story, well, not a funny story, just an ongoing saga here in the Short household, the Short office: I bought an old server, it came with four 10,000 RPM drives, and it could not handle OpenShift at all; the disks were not fast enough for etcd. So I replaced all of them with SSDs, and it's like night and day. Installing different clusters and different things like that is now something I can do; I can test these things, I can have these different environments. I have a small cloud in my house and it's very convenient, but it's all hypervisor based, all right there on one server, actually sitting on top of a refrigerator across the house.

Exactly, and, for example, you can get more servers and have your master nodes spread across your different hypervisors, and you can do a lot of powerful things with this. So now I'm going to speak a little bit about OpenStack, because it's where I took a lot of pieces from. These are the main repositories, or subprojects, where I got my references, references that I contributed to, which is why I use them. The first two repositories, tripleo-ansible and openstack-ansible, are two projects that let you automate the whole deployment of OpenStack. What I took from these two repositories was, for example: we render all the documentation automatically from the code itself; if we want to add more roles to our collection, that's already automated; and we try to split the features into different roles so the roles can be reused. In the case of KubeInit, imagine that you have a role to configure HAProxy: you can use that same role with Origin, with plain Kubernetes, with any Kubernetes distribution.

Nice, that's awesome. That's very cool.

Exactly. From tripleo-upgrade we have a pretty stable Molecule test architecture, so in KubeInit you are able to run Molecule tests; in some later slides I will speak about how this is integrated into GitHub Actions, because all the tests run in GitHub. From os-migrate I took some references about how we run the linters, how the unit tests are structured, and some downstream CI automation, because if you try to run things in GitHub Actions, depending on the resources you need, you might not be able to run them there. And in tripleo-validations we have a pattern where people can add roles that validate things. We validate things like: does your hypervisor have enough disk space or RAM? Are the required packages installed? Do you have nested virtualization enabled? Those are the validations we can have in the role. Why? Because I have a principle, reduce, reuse and refactor, from the OpenStack ecosystem; for me this is super quick because I'm used to these development workflows.

The components of KubeInit: the architecture is based on layers. As I told you before, this is an Ansible collection, so we have roles and modules that define how we architect these deployments. We can have a role to configure Bind, a role to configure HAProxy, NFS; we install on the services machine an Apache web server to, for example, serve the Ignition files to the master nodes while they are being bootstrapped; we can integrate other services like FreeIPA or KubeVirt; and there is the role for the validations. One of the good things is that all these roles should be reusable across different Kubernetes distributions. For example, we know that Origin uses these Ignition files, so that logic might live inside the Origin role, while plain Kubernetes deploys differently; we can use this as a puzzle and build it as we need. Right now I only support libvirt, because it's what I'm using, but ideally we should be able to deploy to Azure, to AWS, to any cloud provider we want. The hypervisor host operating systems I support right now are CentOS, Fedora, Debian and Ubuntu; those are the four OSes you can have on the hypervisor. At first I was only working with CentOS, but someone asked, "hey, I have an Ubuntu hypervisor and I cannot test the code, can you please make it work?", and I said okay, it's not much work, I'll make it work for you. We only have libvirt and Origin, and the external services are Bind for the DNS service, HAProxy for load balancing the traffic to the master and worker nodes, the Apache web server for hosting, for example, the Ignition files and the images we use to bootstrap the cluster, and the validations role for any validation you might need. Ideally we should be able to integrate AWS, other Kubernetes distributions, different SDN controllers, and, for example, provide something which is really useful: offline deployments.

Oh my gosh, that is such a topic on this channel, it's almost a trigger word. Disconnected installs, offline deployments, air gapped, you name it; that is always going to perk my ears up.

For example, I have a machine, not a big machine, that is connected over ADSL, and it takes like four hours to download all the things the cluster needs to work, whereas you will see that this deployment takes 20 minutes. So I'm working on a role to fetch everything required to deploy the cluster and put it on the services node, even creating a small container registry for the bootstrap.

Have you seen Christian Hernandez's helper node project?

No.

You should definitely check that out; I think it would jump start you on your journey there for sure. I just dropped a link in chat to the YouTube video; it's under the RedHatOfficial GitHub. I'll find the link in a second and send it to you in a DM too. It might actually help you jump start that process, because he's got bare metal, KVM, the whole nine yards figured out, and it provides PXE booting, DNS, HAProxy, the whole nine yards. You might be able to pick and pull pieces out of that for sure.

Yeah, I was speaking with Yolanda Robla, she's also a software engineer at Red Hat, and she already has some automation to do this offline deployment, but I will check it for sure, because one of the things you should know is that there are a lot of people doing similar things, so it's something we can use.

We can always learn from each other, right?

Exactly. The next component: documentation. I think the documentation for a project is very important. As you can see, I'm using Sphinx, which is the default documentation generator for all the OpenStack projects. The docs are available at docs.kubeinit.com, built with Sphinx and the Read the Docs theme, and rendered automatically from the documentation you write in the code. One of the usual problems in upstream projects is that the documentation gets out of sync with what's in the code; doing it this way, we make sure the docs are actually synchronized with the code. The docs are built on each merge, and to keep them up to date they are published to the GitHub repository on a separate branch, the gh-pages branch, via a GitHub Action.

Nice, so you've even automated your documentation. Good for you.

Yeah. Another good thing is for people who want to contribute: if you want to create a new role, you normally need to know all the different folders and files you have to create. We have that already automated: there is an Ansible playbook that will create the role for you. You just provide the handle for the new role, run the command, and it creates a full skeleton: a defaults file, variables, the tasks where you can start putting your automation, a module, and the Molecule test skeleton, so you already have one no-op Molecule test running. It links your code with the documentation, and everything lives in a single directory, so it's really, really easy.

These are the lessons learned from the OpenStack community you're bringing forward, and I love it.

Exactly, these are things we are already doing every day, so why not use them in the Kubernetes ecosystem? For those who can see my screen, in the terminal the cluster is already mostly deployed. As you can see, we already deployed the three master nodes, and what we're waiting for here is for the etcd cluster to form and the nodes to go Ready. Once we have the three nodes ready it will continue: it will deploy the worker node, check the certificates, and finish. Let's cross our fingers, but these are the last finalization steps here.

Moment of truth. [Laughter]

Another thing we have here is the CI. All the CI jobs are integrated with GitHub Actions, so you can see there that we have the linters, the unit tests, the end-to-end tests, the Molecule tests and the docs build running with every pull request. It's really good because if
you want to contribute a role, you will have quick feedback: you will be able to see if something is wrong, or what's happening with your code.

Yeah, that's awesome.

As you can see there, there is a job called e2e-okd-libvirt. That's a downstream job; it's not running in GitHub, it's running on a GitLab runner. What I have there is a script, which is also available in the repository; I will speak about it in a moment. So: we need tests. The code in general is something that moves really, really fast, and if we don't have enough tests we won't be able to see which parts of our code are broken. The tests are based on GitHub Actions; they run on each push or pull request, and you get results within two to four minutes depending on the job itself. In theory we are covering all the code in the repository: we have the Molecule Ansible automation for the functional tests, unit tests for the Python modules, code style checks running with linters, and the documentation is built on every PR.

Wow. Yeah, just wow, that's amazing.

And that's what we have for the regular code you can push. Besides that, we have the end-to-end test, which I cannot run in GitHub because there are basically no resources to do that. What I'm doing here is that I have an internal GitLab instance with a runner, and the jobs run on demand based on custom tags. As you see here, the tag is called e2e-okd-libvirt, and if the bot detects that you have a pull request with this tag, it runs the full end-to-end test on the GitLab runner. This runs outside GitHub for security reasons; remember that users can actually run malicious code in PRs or when forking the code, so it runs externally and is triggered by tags, and in order to add the tag you need to have permissions.

That's actually a great way to control that.

Yeah, if you have people maintaining the code, when someone pushes a pull request I can check that the actual code is okay and then add the tag on demand, so the jobs don't need to run automatically. How it works: there are two simple scripts, launch_e2e.py and run.sh. A check pipeline executes the Python script every minute; if it doesn't detect any tag in the repository, it does nothing, and if it detects the tag on a pull request, it fetches the code and launches the deployment. It's basically three steps: it configures the job, launches the job, and writes the results back to the job statuses. If we go back, you can see the job finishing correctly, executed in 23.6 minutes, and if you click Details you are taken to the GitLab pipeline, where you can see the whole execution.

Now, the validations. Ah, if you look right now, we already finished the deployment; what we're doing is waiting until all the CSR resources in the cluster are approved, meaning the certificates, because until they are all approved we won't see the worker node in the cluster. This is the final step; I think in two or three minutes we should have the cluster fully deployed.

I mean, think about it: you're deploying a cluster right now, live. Technology never fails to amaze me; it's just marvelous.

Yeah. One of the things I identified is that when you have a lot of documentation and new people trying to contribute, it's really hard for them to actually get something working. It doesn't matter if it's complex, but when you give them something working they can actually go inside the cluster, see how it works, try to debug things, try to hack it; whereas if it's broken, just getting started makes it harder for you to join in. The validations are one role: it runs some pre-deployment checks and some post-deployment checks, things like RAM availability and disk availability; you can check endpoints, you can check whether the nodes are Ready and usable, you can check the persistent storage status if you have persistent storage, and you can check, for example, an application that is deployed in the cluster. You can pretty much validate anything you want.

And now I'm going to do a deep dive. Chris, how much time do we have, something like 20...?

I've got 26 minutes; you can deep dive all you like in 26 minutes.

Awesome, sure. I'm going to speak about the three external things we do prior to deploying the cluster, because once the cluster is deployed, we don't touch anything else. Basically I will show you how I configure the network, the DNS service and the HAProxy service; it should take about seven minutes to break it down. The network for these machines is pretty much a virtual network switch. I'm creating a network called kimgtnet0, which basically creates a virtual bridge with some services enabled, DHCP and DNS, so when you plug a machine in, it automatically gets an IP and has internet access.

So it's a bridged interface.

Yeah, exactly. You can see it here in the slide: the gateway, and the CIDR of the network, which is 10.0.0.0/24.
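The virtual network described above can be sketched directly with libvirt. This is a hedged approximation of what the KubeInit role defines through Ansible: the network name matches the talk (kimgtnet0), but the bridge name, DHCP range and the MAC-to-IP pairs below are illustrative placeholders, not the project's actual values:

```shell
# Define a NAT-ed libvirt network with DHCP, mirroring kimgtnet0.
cat > kimgtnet0.xml <<'EOF'
<network>
  <name>kimgtnet0</name>
  <forward mode='nat'/>
  <bridge name='kimgtbr0' stp='on' delay='0'/>
  <ip address='10.0.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.0.0.2' end='10.0.0.254'/>
      <!-- Fixed leases: the inventory pins each node's MAC to the
           IP the cluster expects (MACs here are made up). -->
      <host mac='52:54:00:00:00:64' name='service' ip='10.0.0.100'/>
      <host mac='52:54:00:00:00:65' name='master-0' ip='10.0.0.101'/>
    </dhcp>
  </ip>
</network>
EOF

virsh net-define kimgtnet0.xml
virsh net-start kimgtnet0
virsh net-autostart kimgtnet0
```

Any guest attached to this network gets a 10.0.0.0/24 address from DHCP and NAT-ed internet access through the hypervisor, which is exactly the "plug a machine in and it works" behavior described above.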
In this slide... ah, as you can see, I was waiting for you to see that: 22 minutes.

Yeah, that's really good.

So give me a few minutes; I'll finish the deep dive and then run some commands in the cluster so you can see that it's actually deployed and working.

Awesome.

If we use one single network, the 10.0.0.0/24, you will be able to deploy your cluster, but you will not have external access to, for example, the Origin dashboard. If you need that, you need an additional interface connected to the service node, which is the load balancer able to reach the actual dashboard. In this case what you need to do is create an additional bridge, slaving an interface you have available on your server; here, the interface eno2 is a slave of the bridge we create additionally. If this machine is only available remotely, when you do that the interface will lose its IP configuration, so you will lose connectivity; you need to be careful when you do this.

This is a treacherous step. This is where you need to be very cognizant of your typing, because you could completely disconnect yourself.

Exactly. Once we have this, we can access the cluster internally, or externally using this additional interface; in this specific case it is eth1, whose IP is 10.19.41.159. And those are the two different networking scenarios that are supported right now: you can have only internal access to the cluster, or you can have an external interface to reach the dashboard or any application you want to deploy.

As you can see here, this is a piece of code from GitHub, the definition of the network we're using. We provide DHCP for all the nodes connected to this network, and since we know the MAC address of each machine, because it's defined in the inventory, we can assign the IP address we want to every node in the cluster. That's one thing. Then, we need to provide DNS service for all the nodes. In this case we have an external DNS server which forwards queries for the zone we defined for the cluster: if you look here, there is a forwarder defined for the domain used by the cluster, so when the gateway receives a DNS query it says, "I don't have an answer for this, but this server should know the answer", and it forwards the request to the service node, which runs the DNS service.

And here is the DNS configuration; it's super simple. We have two views because, as I told you before, there is one external interface and one internal interface. If you use the internal view, you get answers within the virtual network we defined, the 10.0.0.0/24 range, depending on the node; but if you want to reach the cluster services from outside the internal network, there is an external zone in Bind pointing to the external interface of the services node. So you can fetch the services by name if you point your machine at this services machine, if you configure it as your DNS.

Yeah. This is an example of the configuration we put into the cluster when we deploy KubeInit; as you can see, we have
that all the services point to the service machine, which is 10.0.0.100. We have the API addresses, and we have a wildcard, *.apps.<the domain of our cluster>, which answers queries with the load balancer. For example, the dashboard is an application, and it points to the service node, and any other application you deploy you will be able to fetch by name.

That's awesome.

The HAProxy configuration is super simple; we have just four endpoints: the OpenShift API server, listening on port 6443; the machine config server on port 22623; and then two ingress rules, for HTTP and HTTPS traffic, listening on 80 and 443. As you can see here, the traffic for ports 6443 and 22623 is redirected from the services machine to the master nodes, because it's internal traffic for the APIs running in our cluster, and ports 80 and 443 are redirected to the worker nodes, where we actually have the applications deployed.

Yeah, awesome.

And if we have internal traffic, it is routed directly, because all the nodes know the names of all the other nodes.

They're all on the same network; they're all aware of each other.

Yeah. And, well, ideally we can integrate any third-party application we want. For example, right now I am deploying KubeVirt, because the next step of the research article I'm working on uses KubeVirt to deploy VMs there.

Oh boy. That's gonna be awesome.

I already have a role, and it's deployed automatically; actually, if you look at my terminal, one of the resources is KubeVirt, so it's already there.

Wow. Yeah, it's already there. Okay, cool.

And well, that's pretty much the deep dive; I just wanted to show you the main three things, the services we deploy prior to bootstrapping the cluster. I told you about the why, which is pretty much to help me with the research papers I was writing and to have a reproducible way of deploying Origin in one single command; I need that to work, and that's the only thing I care about. The how: well, with KubeInit and the whole Ansible collection, which is provided there for anyone who would like to use it. We already have a prototype, it's working, and the idea is to iterate and fix the code, because the code is not perfect by any means; there are a lot of things that can be improved, but the code is there and it already works.

It's open source, and it's available for anyone to improve upon and hack away.

Yeah. The next slide will be the next steps, and questions if you have any.

If you've got questions, fire away; Carlos is here to answer them. This is a very, very complex operation that he just put into a 22-minute playbook, basically.

So what I'm going to do now is jump into the service node to show you that the cluster is already deployed. As you saw previously in the slides, 10.0.0.100 is our services machine.

Do you want to go back to that slide, just to give everybody the layout of what you're diving into, if you don't mind?

Sure, here you have it. What we're doing right now is jumping into the service machine, the one with the DNS and HAProxy services configured. If we want to check, for example, that the nodes are okay: we see that we have our four nodes working since 20 minutes ago.

Yeah, so they're already deployed.

And, for example, let's look at the containers for the services, which should all be up and running. There they are, already up and running.

Oh my god, yes, I'm so excited.

Yeah, I mean, I was pretty confident, because I ran the same thing two hours ago. And, for example, let me show you in the slides the content of the DNS zone, the one under /etc/named
And for example, you can see here in the slides the content of the DNS zone, that is, the /etc/named zone files for the external view, because we have two different views, the internal and the external, and we have here the configuration for our zone. Excuse me, because the screen is very small and the font size... DNS zone files are very... yeah, you have here all the information. So if you want to learn how BIND is configured, you can deploy the cluster and actually see how it is working, and make your changes. Hack it, break it, right? Like, if you want to learn how to use BIND or HAProxy or any one of these components, diving into any one of these playbooks will help you learn how to, A, configure these things, then, B, let you tinker with them on the other side, and then, C, if you want to reconfigure them any differently, allow you to learn how to use some of this tooling that is kind of, you know, table stakes for Kubernetes clusters, for lack of a better term. Exactly, that's the idea, and having fun. Something that is working, you know, out of the box: I don't want to know anything, I just want to have it working, and then let me dig into it. Yeah, that's awesome. And well, the next slide is about the next steps, and questions and answers. The next step for this project, pretty much, is that it would be really cool to find people interested in giving feedback, testing the playbooks, hacking them, raising issues, and pushing pull requests, if that's okay for people. I already got a lot of good feedback, and I created a lot of cards for future features. That's good for you, yeah. Another good idea is to support other Kubernetes distributions, like plain Kubernetes or any other in general, because the idea is that the code should be agnostic; we shouldn't be tied to any specific distribution. The offline deployments are already a work in progress. I have a lot of things kind of in place, but it's something that's taking some time. I'm doing this in my free time, so for me it's hard to, you know, keep everything up to date.
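The internal/external split Carlos shows in the zone files is the standard BIND "views" mechanism. A minimal sketch of what such a configuration can look like (the zone name, networks, and file paths here are illustrative, not the actual KubeInit templates):

```
// Illustrative named.conf fragment, not the actual KubeInit configuration.
// Clients on the cluster network get the internal view; everyone else
// gets the external one.
view "internal" {
    match-clients { 10.0.0.0/24; localhost; };
    zone "kubeinit.local" IN {
        type master;
        file "/etc/named/zones/internal/kubeinit.local.db";
    };
};

view "external" {
    match-clients { any; };
    zone "kubeinit.local" IN {
        type master;
        file "/etc/named/zones/external/kubeinit.local.db";
    };
};
```

The point Carlos makes holds either way: deploying the cluster gives you a working BIND setup you can read, break, and rebuild to learn from.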
I run a small project for Kubernetes on Raspberry Pis; I feel your pain. Yeah, I've almost given up on it, it's very hard. Yeah. Another thing is that it's always useful to improve the testing coverage, the unit tests and the Molecule tests, because right now what I have there is the architecture, everything works, but I don't have any specific unit tests working yet, so it would be really awesome to start writing all those tests. I did a big roles refactor to start splitting the code, because at first this was like a 2,000-line Ansible playbook, so now it's already modular enough for people to be able to read it correctly. And yeah, I also have something which is already done but not up to date in the slides, which is being able to run the playbook from a container. I had some feedback from people who were not able to run the ansible-playbook command because they didn't have all the prerequisites installed on their laptops, so what I did was put everything in a container, and having that, you can actually run it from a container. How to do it is already documented in the repo, so you can just jump in, and off you go. Yeah, that's awesome. And well, that's pretty much all. First of all, Chris, thank you for inviting me. Thank you for coming on, I really appreciate it. This is my first time doing this kind of streaming, so for me it's really interesting and really nice. Oh, well, thank you. I mean, we try to make it a great experience for everybody, right? Like, we connect, we talk, we interact with the audience, and off we go. Let me just double-check that we don't have any additional questions on any of the other networks that might be hiding from me because of technology problems, but I think we've answered everything. Unless folks have additional questions, now's the time to drop them in, please. I'm not seeing anything new here.
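The container-based run Carlos describes might look something along these lines. The image name, mounts, and inventory path are placeholders for illustration; the repo documents the exact invocation:

```shell
# Illustrative sketch: run the playbook from a container so the host only
# needs podman, not Ansible and all its prerequisites. The image name and
# paths are assumptions, not the documented command.
podman run --rm -it \
    -v "$(pwd)":/kubeinit:z -w /kubeinit \
    -v "$HOME/.ssh":/root/.ssh:z \
    quay.io/kubeinit/kubeinit:latest \
    ansible-playbook -vv -i ./hosts/okd/inventory ./playbooks/okd.yml
```

Mounting the SSH key and the checked-out repo into the container is what lets the containerized Ansible reach the hypervisor and cluster nodes the same way a host install would.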
If you don't have any questions, you can always ping me on IRC or on Slack, or raise an issue on GitHub. I will be, you know, listening, and if you put anything there, I will reply. And well, in general, if you liked this presentation, for the people seeing this, one of the best ways to start contributing is to just star the project and listen for any updates or anything I just wanted to tell you. Yeah, no, thank you very much. So follow ccamacho on GitHub and IRC, and go to his anstack.com blog. I am a subscriber to that blog, I read it, I thoroughly enjoy it, it's a great site. I mean, you put out good content, right? Like, it's very thoughtful, very well put together. So thank you very much. Obviously, go to kubeinit.com, head over to the GitHub repo, help out. I've dropped the GitHub repo there in the chat. If you ever have any questions, this is the place to reach out to Carlos. And yeah, this has been a great show, Carlos, thank you very much. Let's see, I've got to read the screen name, hang on: Maria Angelica B says, "great work Carlos, thanks for the great demo." So like I said, awesome job, and yeah, this is a fantastic project that I'm sure a lot of people will benefit from in the long term. Yeah, so thank you very much for being here together. Yeah, this is awesome, thank you guys. Well, have a good day, and thank you everybody. Thank you, yeah, thank you very much, y'all have a great day. The next show coming up is at 11am Eastern time, 1500 UTC: the one and only, most requested person that hasn't been on the channel yet, Dan Walsh, will be joining us, the Senior Distinguished Engineer himself from Red Hat, to talk about all things container security. So please join us in about an hour, and look forward to talking to you all soon. Thank you again, Carlos, for coming on, and stay tuned to OpenShift.tv for more updates. See you. Thank you, folks. Bye. [Music]
Info
Channel: OpenShift
Views: 242
Rating: 5 out of 5
Id: Jbe_zApGTFA
Length: 55min 10sec (3310 seconds)
Published: Thu Sep 24 2020