What's New in OpenShift 4.4 for Developers

Captions
[Music] Welcome today to Twitch, Facebook, and I hear we're also going live to YouTube, so it's going to be a technically interesting day and we'll see how it all works out. Wherever you're chatting with us from, welcome. Today we're going to do something kind of interesting from my perspective: we have members, actually almost the entire team, of developer advocates for OpenShift with us. We're going to talk about OpenShift 4.4, the latest release, from a developer's perspective. Jan Kleinert, who leads this team, is going to introduce everybody, and we're going to have demos galore and insights galore, as well as Q&A and AMA at the end, so ask your questions in the chat, we'll relay them back, and we'll open up the conversation at the end. Take it away, and thanks for coming, everybody. Good to see you all.

Thanks, Diane. As Diane mentioned, you've got a few members of the OpenShift developer advocates team here. Brian, Joel, and I will be demonstrating some of the different features that were added in OpenShift 4.4 that are primarily focused at developers. I believe Jay Dobies is also going to be joining us, and he may be participating in some of the commentary and questions as well. So I'll just go ahead and get started. I'm going to be covering some of the updates to the developer perspective and the developer catalog. I'm out of full-screen mode here and will hop right over into the developer perspective in the web console. If you're not familiar with the web console in OpenShift 4, there are these two perspectives by default: you'll often land in the administrator perspective, but you can toggle over to the developer perspective here. This has been around since, I believe, 4.2, but there have been a lot of features added to make application deployment even easier in 4.4. These include developer catalog updates to allow developers to filter and group items in the catalog, and labels to visually distinguish item types; I'll show you all that in a moment. We also have operator-backed services in the developer catalog now, which allow developers to run a variety of workloads that are installed and managed by Kubernetes operators. We'll also look at Helm 3 a little bit as well.

So let's go ahead and get started. If we look at the catalog here, this opens up the developer catalog, and you can see that in addition to the filter options that were here to begin with, we can now filter by type. Right now I have all of the items showing, but let's say I just wanted to look at builder images and operator-backed services: you can toggle these checkboxes on and off to narrow down the list of items available. Here we're looking at the operator-backed services, of which we happen to have nine installed on this cluster. You can see there's also this group-by drop-down menu; if you choose group by operator, it visually clumps together the items in the catalog that are related to different operators, and that can make it a little easier to find what you're looking for and to find the items related to particular operators. Now I'm going to install this Kafka operator here, just to show you what that process looks like. Also, you may have to bear with us: there are issues going on with Quay at the moment, which may cause us some problems deploying certain things, depending on whether they need to pull images or not. When you're installing these operator-backed services, you have the opportunity to manually edit the YAML here if you want to. I'm just going to click Create, and that takes me to the topology view. In topology view you can see that these are visually distinguished as operator-backed services: the little O here stands for operator-backed, and it has this dotted rectangle outline around it. In this particular case there's only one item in the block, but some
operators may have multiple components; they would all be there in that rectangle, so you can see what's grouped together. Now if I click on this, you can see there's also this "managed by" link here. That's helpful for operator-backed services, because you can see what is actually managing this particular item. Going back over to the developer catalog, I want to show you a couple of other things. When we have all the items showing, you now have these little labels, so you can see at a glance whether something is a builder image versus a template versus, you know, a Helm chart and so on. That, in addition to being able to filter, makes it a little easier to find what you're looking for when you've got a lot of items on this screen.

Now, Helm charts were added to the developer catalog in this version of OpenShift. Right now the Helm charts that are visible come from a specific repository; in future releases of OpenShift you'll be able to specify which repo of Helm charts you want loaded in the system. But I'll go through the process of showing you what it looks like to install a Helm chart from the developer catalog. This is a Node.js example. When I click in there, you can give it a release name; this here comes from the chart's values.yaml file, if you want to make any changes. I'll click Install, and then, similar to what we saw with the operator-backed services, Helm releases are distinguished with an indicator as well, and you can click on it if you want to follow the builds or any of those steps as it's getting deployed. You'll notice over here there's also a link for Helm in the left navigation; that's also new, and it lets you see all of your Helm releases. Now, you've been able to use the Helm 3 CLI with OpenShift for a while, and if you need the CLI and don't have it, you can get it here under Command Line Tools; you can download the Helm 3 CLI there if you need it.

I'm going to, if screen sharing cooperates, switch over to my terminal and install a Helm chart that way, so you can see that releases show up in this list whether they're installed from the command line or from the web console. Let me make that a little bit bigger. Okay, so let's do helm install example-mysql stable/mysql; if I typed that right, it'll get that installing for us. Okay, good. Now I can run helm list, and here you can hopefully see both the example-mysql release we just installed from the command line and the Node.js one we did from the web console. Switching back over here, you can see it coming up as well; you can also see the revisions and any of that information, and you can click into it to see more.

All right, let's deploy something using one of the builder images from the catalog, if we can get Quay to cooperate at the moment. Let's give it a shot: I'm just going to use the sample repo here for simplicity and click Create. I wasn't paying attention, so I added that into this application grouping here for Jaeger; that's not what I actually want, so I'm going to take that and edit the application grouping. Instead of having it grouped with Jaeger, I'll create a new application grouping; we'll call it node-example. I gave it a bad name at first; that's better. Now it's sitting here in its own application grouping, which is helpful. You can also, I believe, shift-click to drag this around if you want to move it that way too. What I wanted to show you next are these connectors: if I hover over this item here, you'll see the little dotted-line connector show up. Clicking on that, I can use it to make visual connectors between different items, and it's exactly what it says: just a visual indicator that there's some connection between two items. In certain cases you can create a service binding using these connectors, but here let's say, for example, that these
two components communicate: I can drag that connector, and then anyone looking at topology view will be able to see that there's some association between those two. If you do that by mistake, you can delete it pretty easily. The last thing I want to show you in the developer perspective is adding items to projects or applications in context. If you right-click on the topology view, you have this "Add to Project" option, where you get basically the same view you had from the +Add menu, right here in context. We'll do another one really quickly just to show an example: pick this one, this time we'll create a new application right from here, choose the example, and click Create, and then that's created here. That can be a time-saver if you're trying to add something into a project or application, doing it straight from topology view. I think that was most of what I wanted to show you. Basically, in summary, these are some new features in the developer perspective and developer catalog to make browsing and finding items easier, and managing and adding to your deployments easier from the topology view. All right, I'm going to stop sharing my screen and pass it over; I believe Brian is next, but correct me if I'm wrong, and Joel can go next if... great, either way, doesn't matter. Perfect, let me see if I can find their screen... I will share this one. Perfect.

All right, so I'm going to talk about my favorite feature in 4.4, which is OpenShift Pipelines, but most importantly the Pipeline Builder. OpenShift recently introduced OpenShift Pipelines, which is built on top of Tekton, which is what you see right now on my screen. Tekton is a Kubernetes-native CI/CD framework, as you can read: a powerful and flexible open source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premises systems. What's really nice about Tekton is that it gives you all of those
little basic building blocks, and you can build your big CI/CD pipelines out of them, and everything runs inside of your Kubernetes cluster. Just to make sure we're all on the same page, this is what a Tekton pipeline looks like. Basically you have your pipeline, which contains a bunch of different tasks; tasks can run either one after the other or in parallel, depending on your needs at the moment, and each task has one or more steps. Once you've created those pipelines, you can attach resources to them, so you could have some input resources and outputs. As an example, you could have a Git repository as an input, then through your pipeline a series of tasks and steps that build an image from that source code, and eventually an output, which would be an image that you can deploy to your internal registry on OpenShift. Once you have all of that, you can trigger a pipeline run, which is basically the execution of a pipeline. So this is what we're going to take a look at.

To build all of your different tasks, there's a catalog available in the Tekton catalog repository on GitHub, and you can see there are a bunch of different tasks you can start from. Say you want a task that performs something with the OpenShift client: you can find its YAML file, which is right here, and import it into your OpenShift cluster. You can use either version v1alpha1 or v1beta1; v1beta1 was just released a few days ago, but OpenShift Pipelines still uses alpha. I think this is going to change sometime soon.

Let's take a look at our pipelines and how it all works. If I look at my cluster, I've got a brand new project here, and I can check whether I have any tasks; currently I don't have any. So instead of importing one directly from the catalog, I'll actually go ahead and create a task manually, and we'll give this task a name: we'll call it the hello task, and you might guess what it will do. What it does is an echo of "hello" plus $(inputs.params.name), so it will output whatever we pass it as a parameter. We'll probably want to add a default value as well, so we can just add one. This is a task for Tekton, so we have it right here; I can now create it. Now that I have a task, I can go ahead and create a pipeline with that task. If I go to Pipelines, which is part of your navigation bar now, I can create a new pipeline; we'll give it a name, we'll call it the hello pipeline, and from here I can select the first task that I want to run. You'll notice I have a bunch of pre-populated tasks that were defined by my cluster admin; I could use one of those or the one I've just created, so I'll use the hello task for now. If I click on it, I can see all the different details, and you can see that the name parameter was already pre-filled with "world", because that's my default value. I'll just go ahead and create this pipeline. So this is a pipeline: one big pipeline has multiple tasks (well, only one in this case), and each of those tasks could have multiple steps, but this one only has one. If I go to my pipeline, I can start it here, and you can see the task is now running. If I click on it, I can see the logs; I'll have to be really quick, because it shouldn't take much time, and here we have it: it was successfully completed, this changed to a check mark, and we see the output "hello world". If I go back to my pipeline, I can go ahead and edit it a bit, and you can see that I can change my parameter to say hello to Joel instead. As I said, those tasks are reusable: you can very easily change them by using different parameters. So if I go ahead and trigger this pipeline now, as you might guess, the output will have changed.
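A minimal sketch of what the hello task created above might look like in YAML, using the v1alpha1 syntax the demo mentions (the container image is an assumption, not shown on screen):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: hello-task
spec:
  inputs:
    params:
      - name: name
        description: The person to greet
        default: world         # default value mentioned in the demo
  steps:
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal   # assumed base image
      command: ["/bin/bash"]
      args: ["-c", "echo Hello $(inputs.params.name)"]
```

In v1alpha1, task parameters live under `spec.inputs.params` and are referenced as `$(inputs.params.<name>)`; in v1beta1 they move up to `spec.params`.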
Now we have it: "hello Joel". The same task was performed, but we used different parameters to change the output. If we go back to our pipeline once again, there are a few things we can add, so I'll go ahead and change it just a little bit. I forgot to mention, but you can easily add more tasks if you need to by appending them to the pipeline, so you could decide to start with your hello task and then do some Maven builds, then use Buildah, for example. Now, typically you would use parameters as part of the pipeline and not as part of your tasks, so I can go here and add a pipeline parameter. I'll call it "name" again, its description will be "the person to greet", and we'll still keep "world" as the default value. I can save this parameter, and now I'll go into my YAML and change my task to use, not the value that was hard-coded in there, but $(params.name) instead. What this means is that my task will now use this pipeline parameter as its value across the full pipeline. So now if I start my pipeline again, you can see that I'm greeted by this nice little modal that says "person to greet". I can leave it at "world" for now; we'll just run this task. If you look at the logs, we see the "hello world" once again, but now, because this is a pipeline parameter, I can use it in each one of my tasks inside that pipeline. If I start my pipeline again, I can either start the last run, which reuses the same parameters, or I can start it fresh and specify a new parameter here. This can be very useful if you have, say, a GitHub repository with multiple branches: each time you start that pipeline, you might want to use a different branch, or some option that changes each time you run that pipeline.
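The pipeline-level parameter described above could be wired into the pipeline roughly like this (a sketch in v1alpha1 syntax; task and parameter names follow the demo):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  params:
    - name: name
      description: The person to greet
      default: world
  tasks:
    - name: say-hello
      taskRef:
        name: hello-task
      params:
        - name: name
          value: $(params.name)   # pipeline parameter passed down to the task
```

Because the value is resolved at the pipeline level, the same `$(params.name)` can be fed to any number of tasks in the pipeline, which is what makes the tasks reusable.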
These things are easy to change when you have them set as parameters, and resources can be used in kind of the same way. Why don't I go ahead and create a little more complex pipeline? What I want to do is something similar to this pipeline here: start with a Git repository, use a task to create an image out of it, and then output an image that I can deploy to my internal registry. Before I start that pipeline, I'll need to create an image stream inside my cluster, so I'll just paste that in here. I can go back to my pipelines and create a new one; we'll call it our deploy pipeline, and we'll use the s2i-nodejs task, since it's a Node.js application I'm going to use. You'll notice in my Pipeline Builder this little exclamation mark telling me some things aren't ready, that some required fields haven't been filled, and that's because I don't have any resources available right now. So I'll start by adding resources; I need to tell my pipeline what I'm expecting. I'll be expecting a Git repository, so we'll name that resource git-repository, and we'll be expecting an image as well, so we'll call it image-name, a resource of type image. Now I can go back here and specify the Git repository I'm going to use, as well as the image name. You could have multiple tasks needing multiple Git repositories; in this case it happens that I only have one task, and it's only using that one repo. I'll just need to change this one to false as well, and we'll leave it at Node 8; it doesn't really matter for this demo. And once again we have our pipeline, so everything is ready. Now that I have this, in theory I could take any GitHub repo, add it as a resource, and we'd be able to create our image and deploy it inside our cluster. Now, as I mentioned, we've been having a few issues with one of our servers right now, so let's see whether this will actually work.
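The two resources declared above can also be created as standalone objects. A sketch of what they might look like (v1alpha1 PipelineResources; the repository and registry paths are assumptions based on what the demo describes):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: git-repository
spec:
  type: git
  params:
    - name: url
      value: https://github.com/sclorg/nodejs-ex   # sample Node.js repo used in the demo
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: image-name
spec:
  type: image
  params:
    - name: url
      # assumed internal registry path: <registry-service>/<project>/<image-stream>
      value: image-registry.openshift-image-registry.svc:5000/pipeline-demo/demo-app
```

Declaring them once like this is what lets the web console pre-fill them the next time the pipeline is started.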
I can go ahead and start my deploy pipeline, and you'll notice that now I'm being asked to fill in those fields, because I never specified the resources. I can actually create my resources on the fly, so I'll tell it to use github.com, and I'll use the software collections Node.js sample application, also known as sclorg/nodejs-ex. Perfect. That just created a resource I can reuse in multiple tasks if I need to. Next is my image registry path, which I can never remember, and I need to remember which project I'm using: pipeline-demo. Let's start again. All right, this one was pre-filled, because now it's created, and I'm using pipeline-demo, and we'll use demo-app. Excellent. That also creates this image resource. Perfect, and we can now start our deploy pipeline. Let's take a look: you can see that this task has multiple steps; you can see the generate, the build, and the push, and they'll all be executed one after the other. Let's just cross our fingers and hope this works. It seems like it's taking a little too long. What would happen is that it would create an image, and I'd be able to create my application, specifying that I'll take an image from the image stream I created earlier, the demo-app, and once it's actually been deployed I'd have a latest tag here that I could use to create my application. I'd then give it a name and use a deployment config, so that each time this pipeline runs, each time a new image is pushed, it would actually redeploy my application and create those routes. This one is still running... oh, something's happening, look at that. Let's wait for it. As you can see, all the different steps are happening right now: we're creating the image, it pulled the Git source files, it then generated the Dockerfile, and it's now building the actual image. This should just take a few seconds, thanks to this amazing cluster we have, which goes really fast, actually. There it is. I think there are nine steps total, so it shouldn't be too long... that's good, cleaning up... perfect, perfect, perfect, everything is there, it's now being pushed, so in the next few seconds we should see it completed. And there it is, perfect: it successfully completed, and I can go ahead and create my application. We'll use an image stream; as you can see, we now have the latest tag that I can use. Let's keep all the defaults, and we'll create a route for this, and we have our application; in just a few seconds we'll see it deployed. But I never actually created that image: everything was taken care of by my pipeline, and I can now access our demo application here. If I were to start this pipeline again, the deploy pipeline, we could start the last run, so it would use the same defaults once again: the same GitHub repository as well as the same image name, so it would be published to the same image stream, and then you'd see that application being redeployed as soon as the task completes.

Now, this was a relatively simple pipeline, and with the Pipeline Builder there are many things you can do, so why don't I take this deploy pipeline and tweak it a little. In most cases you probably don't want to systematically deploy your application: as a developer you probably have some sort of process where you have unit tests you want to run, and you probably have some security auditing you want to do. So I'll go ahead and create a few tasks that I already have here; I'll just need to copy and paste them. I'm back, and I have two tasks. This is a Node.js application, so I'll use npm to run an npm audit, just to make sure we don't have any security vulnerabilities in there. Let's create this task, and my second one will run npm
run test, which will run all of the unit tests for my project. I can go back to my pipeline; let's see where it is, to check whether we can see the new application being deployed. We're almost there; remember, I triggered that pipeline as soon as the first one completed, as soon as this one was deployed, and you should see it in the next few seconds: it'll redeploy a new version. There it is, now pushed to the internal registry, so we can automatically redeploy our applications. That was actually very fast, and it's the same version as before, the same source code, because I didn't change anything in the meantime, but you saw that it really did redeploy the application automatically. As I was saying, you probably have some different processes in place, so we can go back and add our npm test to make sure all of our tests pass. We can also run things in parallel: the npm audit doesn't depend on the npm test, so we can run the testing and the auditing at the same time, just to save some time on the build. We'll use the repo that was provided as our source, and let's save this. Now we're getting into a little more complex pipeline; you can see we have multiple actions going on. Why don't I start this pipeline again; we'll use the same repo and image, and you can now see we have two steps: it will run the npm install and then the npm audit, and the other task will do the same kind of thing. When you have multiple tasks like this, they actually run simultaneously in two different pods. We can see the results coming in, and you can see that I just got an error here in my npm audit, while npm test succeeded. If I go to npm audit, I can see what happened at the bottom: I have one critical package, which caused an error in our pipeline. If I look at my pipeline once again, you'll notice that the status, instead of being all green, shows one failed task, and the other was cancelled: because a task failed, the tasks after it in line were cancelled too.
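The audit task described above could be sketched like this (v1alpha1 syntax; the Node image and workspace path are assumptions, and the test task would be the same shape with `npm test` as its final step):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: npm-audit
spec:
  inputs:
    resources:
      - name: source
        type: git          # the cloned repo lands in /workspace/source
  steps:
    - name: install
      image: node:12       # assumed Node.js image
      workingDir: /workspace/source
      command: ["npm", "install"]
    - name: audit
      image: node:12
      workingDir: /workspace/source
      command: ["npm", "audit"]   # non-zero exit on vulnerabilities fails the task
```

Because `npm audit` exits non-zero when it finds vulnerable packages, the failing step is what fails the task and, in turn, cancels the downstream tasks in the pipeline.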
That's why this one was never triggered. But now at least we know we have some vulnerable packages in our code, and this was never actually released, so we can make those changes before we deploy a new version of the application. And that's pretty much it; there's one last thing I wanted to show. Let me see if I can switch to my VS Code... there it is. The last thing I really like, and this was just released a few days ago actually: if you go to VS Code (if you're using VS Code, obviously) and go to your extensions, you can search for the Tekton extension, and there's a Tekton Pipelines extension by Red Hat that you can install. Once you have it installed, let me make this a little bigger, I can see all the different pipelines I have installed on my cluster and all the different tasks. We can see this hello task that I've shown you, and we can see the details of the last run, which was 13 minutes ago; the npm test was run a few minutes ago. I can see all the YAML related to it, and the same goes for all of my tasks: I can see all of the details of my tasks, like the hello world task I had earlier; this is what it is. We also have all of our pipelines and so on, so everything that has to do with Tekton on our cluster is available right there. But what's really neat is that if I open up a pipeline, I now have access to the pipeline preview: I can see that this specific pipeline has two different tasks, and I can jump from one to the other in my code. I can jump to my build task here, and I can see how they're dependent: these two will both be executed before the next one. You have all of that preview, very similar to the Pipeline Builder you have in OpenShift, but in your VS Code as you're doing that development work. So that's all I had.
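Putting the whole demo together, the final pipeline with the audit and test tasks running in parallel ahead of the build could be sketched as follows (v1alpha1 syntax; task and resource names follow the demo, and `runAfter` is what enforces the ordering):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: deploy-pipeline
spec:
  resources:
    - name: git-repository
      type: git
    - name: image-name
      type: image
  tasks:
    # npm-audit and npm-test have no runAfter, so they run in parallel pods
    - name: npm-audit
      taskRef:
        name: npm-audit
      resources:
        inputs:
          - name: source
            resource: git-repository
    - name: npm-test
      taskRef:
        name: npm-test
      resources:
        inputs:
          - name: source
            resource: git-repository
    # the build only starts once both parallel tasks succeed
    - name: build
      taskRef:
        name: s2i-nodejs      # assumed ClusterTask name
        kind: ClusterTask
      runAfter:
        - npm-audit
        - npm-test
      resources:
        inputs:
          - name: source
            resource: git-repository
        outputs:
          - name: image
            resource: image-name
```

If either parallel task fails (as the npm audit did in the demo), the `build` task is cancelled and nothing is pushed to the registry.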
I will be monitoring the Twitch chat, so if there are any questions, please go ahead and fire them away, and I'll now hand it over to Brian to talk about serverless.

Okay, there's also a whole lot of questions in the BlueJeans chat too, so check that out. Brian's been doing a good job in the interim trying to answer them, but I think you can weigh in, so thanks.

Thanks, I'll take over. Yeah, there's quite a bit of interest in Tekton, so that's pretty awesome; you got quite a few questions, but hopefully most of them are answered. All right, let me share my screen real quick; if somebody could just say that it looks good on their end... looks great, okay, cool. So yeah, I'm going to talk about one of my favorite things that just came out with 4.4, which is pretty much that serverless is now GA within OpenShift. It's now generally available: previously, serverless had been in tech preview and developer preview releases, and with 4.4 it's GA, so we can consider it stable, at least for the serving aspect of serverless. I'll get into some of the details here, but we do have a blog article that goes through a lot of them; I highly suggest you check it out. One of the things with serverless is that it allows you to deploy applications and have them do things that are generally pretty nice to have, that would be good recommended practices for deploying an application. For example, maybe we want a particular application to be able to scale down to zero pods running, so we're not wasting resources, instead of having, say, a deployment config with one pod that's always up and running. There are reasons to do both, but serverless allows us to scale down to zero, and it's pretty cool, pretty neat.

On the screen right now you can see that I'm logged in already; I'm looking at the topology view in the developer console, and I already have a serverless application, a serverless deployment, already done. I'll show you some examples of deploying a new one in a second, but first I want to look at some of the new things in the console that make it easier to work with serverless applications. The developer console has been getting better and better, it's still improving, and it's pretty awesome so far. We can see in this view the ability to look at our Knative Service; OpenShift Serverless uses Knative, and the Knative Service is the main aspect of a service running within serverless. We can click on that Knative Service within the developer console and see some of the details, the stuff that's really important to me: I can see that this hello service has nothing running right now, and it tells me that all these revisions are scaled down to zero. Maybe if I make this a little bigger, a little easier to see... I can see that everything is scaled down to zero, so there are no pods running. If I try to access this and click on Open URL, because there's nothing running right now, we can see that OpenShift says go start this pod, and now we have a pod running. So serverless allowed me to have nothing running, and it waited for a request to come in; once a request to that app came in, it hurried up and scaled up that pod and got it running, and me, as the client, I just saw the application. I didn't see anything crazy; it just took a second to load, but I can see "hello world version 1", et cetera. So that's cool. I can see some of the information here within the developer console: 80% of my traffic is going to this hello-v1 revision and 20% is going to the version 2 revision. I set this up beforehand, and I'll show you how to do it later on, but this allows us to do something like a canary release.
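The traffic split shown above lives on the Knative Service itself. A sketch of what it might look like (serving.knative.dev/v1 syntax; the image and revision names are assumptions matching the demo's hello-v1/hello-v2 split):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/hello:v2   # placeholder image for the newest revision
  traffic:
    - revisionName: hello-v1
      percent: 80
      tag: previous      # also reachable directly at the previous-... sub-route
    - revisionName: hello-v2
      percent: 20
      tag: current       # also reachable directly at the current-... sub-route
```

Adjusting the `percent` values (for example to a 50/50 split, as in the demo) is all it takes to shift traffic between revisions.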
Right now you can see the pod terminating and scaling back down to zero: there hasn't been any traffic coming in for about 60 seconds, give or take. That number is tunable, but it's basically the timing that says, hey, scale down, because nothing's coming in. Going back to the traffic distribution: serverless allows us to do things like canary releases, which typically might be a little harder to do when working with Kubernetes or OpenShift. You know, when we roll out a new version of our application, how do we vet that the application is actually good and stable and working the way it should? Well, serverless lets us set some of these percentages. I can roll out a new revision right here, and a revision is a snapshot, a point-in-time configuration of the service. An example would be the new revision referencing a new tag of our container, because these are just containers, and we can say that 80 percent goes to hello-v1 and 20 percent goes to hello-v2. I can change that if I want, so I can go in here and say, hey, this is a little better now: I've vetted it, I looked at the logs, things seem okay, so let's make it a 50/50 split. You can see I've got a couple of other tags here, and they're referencing different things; the tags are pretty important, because they allow us to access these revisions outside of the normal everybody-goes-to-this-route: I can go to current-whatever-the-rest-of-the-route-is, or previous-whatever-the-rest-of-the-route-is, to hit a specific revision, and I'll show you an example once I set this up. Let me go ahead and set these up too. Hmm, it looks like there might be a bug in that right now; we'll figure it out. It wouldn't let me save it with nothing there, and zero also wouldn't work, so maybe we'll figure out that traffic distribution, or maybe there's an update I need to get to my cluster.
console. But either way, I switched out the percentages here, and I can click on the open URL for hello-v1, and this goes to current-hello; and whenever I go to the other revision, I can go to previous-hello. So that's pretty neat: I can define a sub-route, or a traffic tag, they're interchangeable terms, and go directly to one of the revisions, or I can go to the main hello route of this tutorial and get all that stuff. So it's cool that serverless gives me the ability to do this complex traffic distribution and networking on the cluster for new deployments and new revisions without me having to really think about it. All I do is specify I want 20% to go here, 40% to go there, what have you. That's pretty neat. The other thing that this topology view gives us for a Knative service is the route. We can drill in and understand what's going on with the route, check out some of the configuration, look at the YAML if we want to, and see how that stuff's built out. But one of the things with serverless is it doesn't really require me to think about all of that; it makes it easier to deploy applications. I don't have to worry about the YAML aspect of Kubernetes; all I need to know is some of this stuff around serverless. So anyway, let me show you how easy it is to add an application and make it run as a serverless app. If I go into the developer console and hit Add, I can again choose, just like Jan showed, From Git, Container Image, et cetera. I could point this straight at my source code and it would build out an application for me, but I already have an image made, so I'm going to hit Deploy Image, and that lets me enter the image name. So here we go, I just pasted it in, a simple hello app, and this is version 2 of
that app. I'm just going to leave the name as the default here, I'm not going to change that, but I could. Jan showed you what the application context was in the developer console, and hello-app is what it's going to be called. And then under Resources here, I'm going to choose Knative Service. And yeah, you'll notice it still says tech preview; that tag shouldn't be there anymore, that's a console issue, OpenShift Serverless is definitely GA now. But either way, it's there, and we can choose Knative Service, and I can define some of the details around it. I can specify some of the scaling information, say that I don't want this to scale down to 0. Maybe that property doesn't make sense for this particular serverless application, but other aspects of serverless do, such as the autoscaling that's already set up. I can say that I always want one pod of this running, and I can change some of the concurrency details. Serverless gives us the ability to scale up whenever, I think it's a hundred by default, requests are concurrently coming into our application. So if we have more than a hundred, it will scale up the application automatically, I don't have to think about it, and I can change those limits here if I want to. But I'm just going to specify one pod always running and click on Create. What this is doing is kicking off a Knative service, so this is basically pulling in that image and building it out. We can see that it's running, I've got one available, and I can drill into this and see I've got one pod running. There we go, just like I specified, one always running. I can click on the route and see this is version two of that app. So that's pretty neat: I deployed an application just like I would deploy any other app on OpenShift.
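The scaling knobs set in the console here (don't scale to zero, one pod minimum, a concurrency target of about a hundred) map to Knative autoscaling annotations on the revision template. A minimal sketch, with an assumed service name and image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-app
spec:
  template:
    metadata:
      annotations:
        # keep at least one pod running, i.e. never scale to zero
        autoscaling.knative.dev/minScale: "1"
        # soft target of concurrent requests per pod; the autoscaler
        # adds pods when traffic pushes past this
        autoscaling.knative.dev/target: "100"
    spec:
      containers:
      - image: quay.io/example/hello:2.0   # hypothetical image reference
```

The target annotation is a soft goal for the autoscaler; a hard per-pod cap would be set separately via `containerConcurrency` in the revision spec.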
And I just specified OpenShift Serverless instead of the standard DeploymentConfig, and there we go, we're good. I can see the details within the developer console and get the information that I really need. If I want to, I can drill into the pod right from here and look at the logs really quickly, so I have access to debug things if I need to, from the developer console, quickly and easily. One of the other big things with serverless, like I said, is the ability to use this stuff without touching YAML, and I showed you how to deploy an app without touching YAML in the console. But what if I like the CLI better? There are command-line tools that let us work with OpenShift Serverless on the command line. In my OpenShift console I can click on the question mark at the top and see Command Line Tools. I click on that, and this is the repository running on my OpenShift cluster that has the command-line tools, signed by Red Hat. I can download the helm one, or the oc command, or odo, pretty cool, but I've also got kn available here. This is the OpenShift Serverless command-line interface; it works with Linux, Mac, and Windows, and lets me work with OpenShift Serverless on the command line. So let's see that in action. Let me switch over, and of course it's not going to let me share my screen. I don't think you can see my screen, or can you? A big black box? Yeah, give me a second, let me rejoin real quick. Well, while he's scooting in and out: this has been really awesome, guys, so thank you very much for taking the time to do this today. There are a ton of questions in the chat, and we'll try to read some of them out so that everybody on all the other channels gets them answered live at the end of this as well. So if you're on Facebook asking a question, or Twitch, or
wherever, we'll get your answers in there, and it will get on the recording, so I can edit it into something nice for everyone to watch in their evenings, as we know you all subscribe to our YouTube channel, OpenShift. I'll take a second here, while we're waiting for Brian, to answer a question that I had. A few people asked about webhooks and how to trigger something based on a change in GitHub. There is a project called Tekton Triggers, and I still have to play a little bit and dig into that, but I am preparing a demo sometime soon. I've just promised Chris that I will do a Twitch stream on that, so probably in the next few weeks I'll be showing it and will be able to provide you with more information. All right, here's Brian back now. Yes, sorry for the mix-up; normally I'm using Wayland on Fedora, and there's some weirdness with the browser and screen sharing. Anyway, let me go back here. I'm all set up now, let me make it a little smaller. I can do oc whoami and just double-check I'm logged in; I'm logged in, I'm good. And oc project: I'm on the serverless-tutorial project, so I'm good. It's pretty nice to get this stuff working, by the way; maybe there's a blog article we could do about how to get this in your prompt, that might be useful later on. Anyway, kn is the command-line tool. I already have it installed, and I have command-line completion set up as well. I just hit tab to get some of this detail, and I can see what the kn command-line tool allows me to do. I can work with the same things that you just saw me do within the console: I can do plugins, and I can work with revisions and routes and services, and those are the things that are really important as far as OpenShift Serverless and the serving aspect go. One of the things that I'm not talking about a whole lot within OpenShift Serverless is the eventing aspect, and that is, I think, in preview, I'm not quite sure, with OpenShift. It's coming pretty soon, and it allows me to act on events that happen, so I could do things whenever a database gets updated, or whenever a file gets added to an S3 bucket, something cool like that. The eventing aspect uses Camel K and all that stuff, stuff that I'll talk about later on, not on this call, once it starts getting a little more stable, and that's what really makes the serverless aspect shine. But what we're talking about today is the serving aspect and the service stuff. So, kn service: in here I can see that I can create something, list something, delete it, et cetera. Let's see what we have, and make it a little smaller so it all fits. Here we go: we can see that I've got that hello-app that I just created, running version two right there, with a revision name that's automatically generated, and then I have the hello service, which is the existing one that I had before. And I can do stuff that's pretty neat: I can do kn service describe hello, and in here I can get some of the details. I can see that I've got those percentages set for me, this much goes here, that much goes there, et cetera. I can see the tags that I talked about, latest, previous, and so on, and some of that detail. I can do kn revision list and see all the revisions that these services are pointing to. I've got the revision for version 2 and then version 1, I specified those names specifically, and I'll show you how to do that, and then I've got the hello-app one, which is the one that I created within the console. So we can see all that stuff there, and I could describe some of them and get more detail, but
we've seen most of it in the service describe, so I don't think that we need to. I can do kn route list and see all the routes available to me, and maybe kn route describe hello, because it's a little more complex. I've got the same stuff: I can see the traffic targets, and I can see the URL for each individual traffic tag. So I could go directly to the previous one, for example, and I can curl that. It'll take a second again to spin up that container, but there we go, it's not too long, and I can see the hello world. That stuff's pretty neat. So if I wanted to, say, create something, let's look: I've got kn service create, and let's just do hello1. I can specify a container image in here, and then specify the namespace that I want it to go into, and the revision name, and all of these things if I want to. Let's go ahead and create it, that's going. And if I want to see all the stuff that I can set, I can do kn service create -h, and in here I can see the help, and there are a ton of different flags and examples, like creating a service with multiple environment variables. This help page for the kn tool is really nice; it gives you a lot of what you'd want to create a Knative service on the command line really easily. And it's pretty neat: this one command, kn service create, deployed an application using an image and let me specify the stuff I care about, like I know in Kubernetes I need a namespace, and I want a specific revision name, so I set those. And if I wanted to specify, say, the minimum scale, I could do that here too, and I could specify environment variables using the -e flag; the help tells me what I need to do. So instead of me having to know all of the YAML associated with that, which is a lot, the kn tool just handles it for me.
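Pulling the commands from this part of the demo together, here's a sketch of a kn session. The image, service name, namespace, and revision names are illustrative, and flag names can vary slightly between kn versions; all of this assumes a cluster with OpenShift Serverless installed and an active oc login, so it's a sketch, not something to run as-is:

```shell
# deploy a service from an image, pin the revision name, set an env var,
# and keep at least one pod running
kn service create hello1 \
  --image quay.io/example/hello:2.0 \
  --namespace serverless-tutorial \
  --revision-name hello1-v1 \
  --env TARGET=world \
  --min-scale 1

# inspect what exists
kn service list
kn service describe hello
kn revision list
kn route describe hello

# adjust the canary split and name the traffic tags
kn service update hello \
  --traffic hello-v1=50 --traffic hello-v2=50 \
  --tag hello-v1=previous --tag hello-v2=current
```

The --tag flags are what create the previous-hello and current-hello sub-routes that kn route describe lists alongside the main URL.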
If I do want to see the YAML, I can do oc get, whoops, not service, it's the Knative one, services.serving.knative.dev, and then hello1 -o yaml, to see the YAML output of what that created for me. We can see all the stuff it created and some of the details, but I don't really care about most of it. If I were writing the YAML myself I'd be working under the spec, and I'd see this is the image I specified, I need to specify a name in here, and so on. But I don't need to know any of this to deploy my application; I'm just using the kn tool, specifying things on the command line, and tab completion helps me set up the flags. It's pretty nice, and the kn tool is really robust. It does quite a bit, and it's improving with each version of OpenShift Serverless, and like I said, as eventing gets more and more out there, I imagine there are going to be a lot of really cool things we can do just with the kn tool, with eventing and with serverless in general. And I'm done with that, yeah. Any questions, or anything else about the stuff we talked about? All right, back to you, Diane. Looks like you're muted. I couldn't get the mute button to show up, there you go. There are lots of questions here, and a couple of them: Jordi is asking, do we need to install the Serverless operator to use the serverless features in OCP 4.4? Definitely yes. With Tekton, that is OpenShift Pipelines, as well as OpenShift Serverless, both of those require a cluster admin to go onto that cluster and install the operator. You look in OperatorHub, which is on the sidebar of OpenShift in the admin view, and install the Serverless one or the Pipelines one into the openshift-operators namespace; there are instructions in our docs on how to set that up. With Pipelines you're done at that point, you can start using it. With OpenShift Serverless you then have to create a knative-serving project and deploy an instance of KnativeServing. You just look
into the installed operators, look at the OpenShift Serverless operator, and deploy that, and you can specify and customize the installation with it. But by default the defaults work and they're pretty nice, and in what I showed, I didn't customize anything, everything worked. Once those two steps are done, any user can deploy a pipeline or an OpenShift Serverless deployment. And I'm going to ask one question which is sort of a setup, because you kind of asked it and I think you wanted Joel to answer it: are there plans for following the Kiali approach and showing usage-based topology? Because we've been talking about that a little bit, and the Kiali view is pretty nice. Can I ask Joel to answer that? Joel, I think you might have answered it a little bit in the chat; there is a difference between the Kiali and the service mesh approach to things. Within that topology view, you can connect applications together, and it would be nice to maybe see the traffic flowing through there if, say, service mesh is installed. That's what I was thinking. I remember hearing some talk about that earlier this week, but I'm not quite sure; we don't do that now. One of the other things I was thinking about, Joel, is that somebody asked about using Tekton, or OpenShift Pipelines, with other CI/CD, like Azure and stuff like that; is that possible? Yeah, I saw that one. Using Pipelines with those, I don't have the answer for that actually, so I will need to do a little more research, and if whoever asked the question wants to get in touch with me, maybe on Twitch, we can establish a way to follow up afterwards. I'll definitely follow up on that.
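For reference, the KnativeServing instance mentioned in the install answer above is just a small custom resource created in the knative-serving namespace. A hedged sketch; the apiVersion varies by Serverless operator version (older releases used serving.knative.dev/v1alpha1), so check the installed CRD before applying anything like this:

```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
# an empty spec takes the defaults, which, as noted in the demo,
# work fine without any customization
spec: {}
```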
Right, there was one other question, which you may or may not have answered, but I'm going to repeat it: how to set up a webhook trigger for Bitbucket. Anyone want to take the bait on that one? Yeah, so with that, I posted a link, hopefully you got it, I'm not sure if it was on Twitter or not, but if not, feel free to reach out to me or Joel or Jan, any of us could help you out. Basically, look at the documentation for setting up a webhook on OpenShift; there are instructions in there on how to do it with Bitbucket specifically, so you definitely can set up a Bitbucket one. Was that a link to something on Stack Overflow, or was it the link in the docs? The docs one; I originally found a Stack Overflow link, and then I looked at the docs, and they should have looked at the docs first. Yeah, there's that whole thing about documenting by blogging or FAQs as opposed to the documentation, and for once we actually did it in the docs, so that's a really cool thing. We have a tendency at Red Hat to document by blogging, and it's the bane of my existence trying to keep those blogs fresh and up to date and current as each release goes out, so the more we can transition stuff into the docs, the better off everybody is. I'm looking to see if there are other questions coming in. Thank you, guys, you have done a really great job exploring some of these new features, and I'm really looking forward to getting you back again on a regular cadence, because I think this is a great way to educate the OpenShift community and get you some recognition for all the hard work you do demoing everything and making things understandable and comprehensible, especially with all the new features coming out in each of the new releases. It's a lot. If you look at our roadmaps and things like that, these are the folks on the front line all the time, trying to get people up to speed and educate them, so we're
really grateful for all the work that you guys do. And, whoops, let's see, is there another question? Oh, there was one other question while I was here thanking you, and I think it's an interesting one, about the pipelines: are they meant to be used with other CI/CD systems, like Azure, GitHub Actions, et cetera, or are these Tekton pipelines meant to be used just by themselves? That's the one for which I'll have to follow up afterwards. I'm not sure who asked it, so if it's internal, you can find me internally, and if it was out in the real world, well, people are trying to do hybrid cloud all the time and figure out how to make all of these systems mesh, which adds another layer of complexity on everything: lots of new features, lots of new platforms, and all the platforms have their own approaches and tools. So again, we'll have you back many times, I'm sure; it's wonderful to have you here today. Thanks, Diane. For follow-up, I think you can all see the slide, yeah; this has all our Twitter handles on it, so if you need to get in touch with somebody, this is one way to do it. Cool. All right, and with that, I will thank our producer Chris Short again for backing this up and streaming us live everywhere we can possibly find a stream to be on. And we'll be back again tomorrow: we have Andrew Clay Shafer from the global transformation office, who is going to talk about cloud-native operating models, and he's one of our gurus on DevOps, so if you're around, join us again tomorrow at 9:00 a.m.
Pacific, 12:00 noon Eastern, and I think it's 1600 UTC somewhere in the world, but you can check the calendar, and we'll be there too. So I'm looking forward to hearing other things that you want to talk about, Brian and Jan, and then maybe we can drag Jason Dobies out with Joshua Wood to talk more about Kubernetes operators, as much as possible, as they'd like to, and they can showcase the work they did in that wonderful book. So yeah, you'll get a couple of thumbs up on getting them to talk more about operators; we never talk enough about operators. Oh, there's the book, there's the plug, you can download it now. There you go. So, all right, with that, Chris, I think we're going to hang up our hat, the red hat, for the afternoon and let it rip, and we'll see you all again soon. I will also post this up on YouTube with some of the ins and outs edited out, so if you want to watch it again at your leisure, we'll have it all there, probably later this afternoon.
Info
Channel: OpenShift
Views: 1,536
Rating: 5 out of 5
Keywords:
Id: CDbu1id1FVI
Length: 75min 32sec (4532 seconds)
Published: Thu May 28 2020