Use Portainer, integrated with Git, to automate the deployment of apps in your DevOps workflow

Video Statistics and Information

Captions
Great pleasure to be here. As you said, I'm using Portainer on a daily basis. The company I work for, Cosmo Consult, created a containerized service, and we set it up using Portainer. I'm a CTO there, but I'm also leading the development team that is creating that service, and I'll try to show you what we're actually doing in a production environment. I don't want to run an update on a production environment live in a webinar, so I'll use a dev environment for that, but I'll certainly show you how it works in our production environment as well, and how we do it when the update actually happens.

To give you an idea of what we're doing: as I said, we are providing a containerized service. It is used by our product development teams as well as the teams that are implementing our products on the customer side, so it's both product development experts and project teams doing consulting, development, project management, and so on, and they're all using those services. The reason we're doing what we're doing with Portainer is that we're pushing out updates, but we don't want to update our users without their consent, so to speak. They might be right before a go-live, or right before a critical release they are creating, and we don't want to make that change at exactly that moment; instead, we allow them the control to make the change themselves. So all the companies that are using our services are able to decide themselves when they want to do the update. Until Portainer came out with this feature, we had an explanation of which machine they needed to log on to, which scripts they needed to run, and so on. It only took five or seven minutes, it was fairly easy, and it worked well, but it still didn't feel like an ideal way to do it, so we also looked into creating
something of our own. But then we noticed that Portainer was planning to build exactly this, so we decided to wait for it, and it actually works very well.

So the scenario is that someone who is an administrator for one of the companies using our service wants to go in and do an update for that service. What we, the development team creating the service, are doing is pushing the definition into a Git repository so they can take it from there. To show you what I'm talking about, I'll just log in to Portainer here. By the way, you can see the new SSO feature, which is also something that works great. You can see this is a Docker Swarm with roughly 100 containers up and running, so this is one of our production environments. If I select it, I can select a stack, because the feature we're looking at is a Git-based stack deployment. You can see here the stack I'm talking about, the Docker automation stack, and if I click on it you can see the new interface made for deploying a stack, and how it looks if the stack is already deployed and I can do an update. You can see that it references a particular Git repo hosted on GitHub; I'll just copy and paste that URL and open it here. You won't be able to access this yourself, because it's a private repo, as I said, built for our own services. In here I have a stacks folder, a Docker automation subfolder, and in there a Docker Compose file that defines the service we're creating. It has a couple of different services: what we call the Azure DevOps automation, then the Docker automation, and agents for the Docker automation. It has a lot of configuration, as you can see; we're using Traefik as a reverse proxy, we have a lot of volumes that we're mounting, we have environment
variables, networks, and so on. So it's not a terribly complicated, but also not a trivial, configuration file, and we can store it in Git and then share it with the different companies using our services, because they have access to the same repo as well.

Now if we do an update, let's say we move the Azure DevOps automation image from one version tag to the next, we would make the change here; if we move the Docker automation image to a newer tag, we would make the change here as well, commit that, push it into the repo, and let everyone know they can use it. What they would then do is go into Portainer and just click on pull and redeploy. They also need to authenticate: I would need to put in a username and a password for the re-authentication, and I'll show you in a second in our dev environment how this works. In a future version of Portainer the credentials will be stored per stack, so you won't have to re-enter them every time. If I then click on pull and redeploy, it downloads the new version of the Docker Compose file, that is, the definition of the stack with the new versions, and then deploys the new versions of the images into the stack, into our swarm, and we have the update up and running. As I said before, the process we had in place previously required them to log into a VM, set up SSH keys, run a script, and so on. Again, it wasn't terribly complicated, but now it literally is a couple of clicks, pull and redeploy, and they are up and running again on the latest version. This is really great, because on the one hand we can completely control the environments, since everything is built on top of that Docker Compose file in the Git repo and we always know what people are using; and for the local administrators who need to run the update, it really is only a couple of clicks plus entering a username and a password for now; in the future even that will be stored.
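Since the repo is private, the actual file isn't shown, but a compose file of the shape described (several services, Traefik as the reverse proxy, mounted volumes, environment variables, networks) might look roughly like this; every service name, image, tag, label, and variable below is a made-up placeholder:

```yaml
# Hypothetical sketch only; the real stack definition lives in a private repo.
version: "3.8"

services:
  azure-devops-automation:
    image: registry.example.com/azure-devops-automation:1.0.0  # bump this tag to ship an update
    environment:
      COSMO_INTERNAL: ${COSMO_INTERNAL}   # values are set per environment in Portainer
      EXTERNAL_DNS: ${EXTERNAL_DNS}
    networks:
      - automation
    deploy:
      labels:
        # Traefik acts as the reverse proxy in front of the services
        - "traefik.enable=true"
        - "traefik.http.routers.devops.rule=Host(`${EXTERNAL_DNS}`)"

  docker-automation:
    image: registry.example.com/docker-automation:1.0.0
    volumes:
      - automation-data:/data
    networks:
      - automation

  docker-automation-agent:
    image: registry.example.com/docker-automation-agent:1.0.0
    networks:
      - automation

volumes:
  automation-data:

networks:
  automation:
```

Shipping an update then amounts to bumping an image tag in this file, committing, and pushing; every consumer picks up the change with a single pull and redeploy.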
We can also very easily roll out different kinds of configurations and different kinds of deployments. If we do a canary deployment, or sometimes a blue-green deployment, we can simply enter a different branch here and let people have access to a preview release, or a different version, or whatever we need. If I click on the editor here, you can see the exact same file, so I can also make manual changes here if I want. But if I later go in and say pull and redeploy (I'll do this once, just to show you), the message tells me that any changes I made locally will be overwritten. So even if someone goes in and makes a change, or we tell them to make a change because we want to test something, as soon as they do the redeployment we know they're in a clean state, on the same version that we want them to have.

So, Tobias, this is an interesting feature, because if you're working in a very secure environment where you do not have access to the Git repository for whatever reason, you can make changes manually, store them separately in a different file, and reconnect them later on to your proper Git repository if you want, right? Yeah, exactly; this is very cool.

Something else I'd like to point out: as I said, it's all very similar, very standardized, but at the same time there are of course differences between the different environments. To give you an idea of what that looks like: if I go in here, you can see a couple of environment variables. For example, we need to know the name, we need to know the DNS, we have a configuration setting that says whether this is something internal to our company or something we're providing for a customer, and then we have a cleanup job that cleans up images and has basically a threshold amount of
gigabytes at which it starts running. So those are the kinds of configurations that you can have. If we check back in the Docker Compose file in the Git repo, you can see, for example, where such an environment variable is referenced; I'll try to make that a bit bigger. This is the Cosmo-internal environment variable, and in Portainer I have it set to true, which means that when the service is up and running it should be set to true. Let's check: this was the Azure DevOps automation current service. If I go in here and take a look at the services (there are a lot, so it's slightly slow), we have the Azure DevOps automation current service, and if I look at its environment variables, you can see the Cosmo-internal variable set to true. So basically we have a standardized setup that is still configurable, and that way we can make the necessary changes between the different environments: differentiating production from non-production, differentiating based on names, and so on. That also makes it very easy for us to make those kinds of changes.

So this is how it looks once it is deployed. You might also ask how we get this into the system initially. We could of course create a new stack: if I go to stacks, say add stack, and then say Git repository, I could put in all the information here. But when we create a new environment we're using infrastructure-as-code tools (we're using Terraform, but you could use something else) to be able to create or update an environment completely automatically, and of course we also want the deployment you can see here to run completely automated. Basically, we're just running a script or a build pipeline when we need a new environment, and the full deployment is fully automated, so we needed to find a way to do this initial deployment of the Git repo also in
an automated way. Fortunately, Portainer has an API that we can use for this, and I want to walk you through the steps we take to do that initial deployment. What I'm using here is Visual Studio Code with a plugin called REST Client, which allows us to make REST calls from the editor. In reality we're doing this with a PowerShell script, because, as I said, I want to run it automated, but to show you how it works I'm using the REST client so I can explain what I'm doing.

The first thing you need to do when you want to interact with an API is, of course, authentication. For that I'm calling the auth endpoint; the base URL you can see up here is where I can reach the API, and for the authentication I need a username and a password. I'll send the request, and as an answer I get a JWT token, which allows me to make the subsequent calls. The next thing I need in my specific case is the ID of the swarm. For that I'm making another call: I'm sending a request to a specific endpoint and asking it for the swarm information. If I send that one, I get an invalid token error; that's not good. What am I missing? Oh, I deleted something for clarity and obviously deleted too much. Sorry, that line was defining the token; I need it. Let me try again and get the swarm ID. Now I have it: I send the request to that endpoint and get the swarm information, and part of the swarm information is the ID of the swarm. Now I have all the information I need, and I can do the actual deployment of the stack. For that I need the information we provided before in the GUI: where the repository is, what branch to use, where to find the file in the repository, and what is needed for authentication.
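In the video these calls are made with the REST Client plugin (and a PowerShell script in production). Purely as an illustration, here is a rough Python sketch of the same two calls using only the standard library; the base URL, credentials, and endpoint ID are placeholders, and the endpoints (`POST /api/auth`, `GET /api/endpoints/{id}/docker/swarm`) follow the Portainer CE 2.x API, so check the API reference for your version:

```python
# Sketch (not Cosmo Consult's actual script) of the first two API calls,
# assuming Portainer CE 2.x endpoints. URL and credentials are placeholders.
import json
import urllib.request

BASE_URL = "https://portainer.example.com/api"  # hypothetical base URL


def auth_request(username: str, password: str) -> urllib.request.Request:
    """Build the POST /api/auth call; its response body carries {"jwt": ...}."""
    body = json.dumps({"Username": username, "Password": password}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/auth",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def swarm_request(jwt: str, endpoint_id: int) -> urllib.request.Request:
    """Build the GET call whose JSON response's "ID" field is the swarm ID."""
    return urllib.request.Request(
        f"{BASE_URL}/endpoints/{endpoint_id}/docker/swarm",
        headers={"Authorization": f"Bearer {jwt}"},
    )


# Sending them (requires a reachable Portainer instance):
#   jwt = json.load(urllib.request.urlopen(auth_request("admin", "...")))["jwt"]
#   swarm_id = json.load(urllib.request.urlopen(swarm_request(jwt, 1)))["ID"]
```

The JWT from the first response is passed as a bearer token on every subsequent call, and the "ID" field of the swarm response is the swarm ID needed for the stack deployment.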
I need to provide the same information to the API. So this is basically the path of the file in the repository, exactly as it appears there; I need a name; I need to tell it that I want to use authentication; this is a variable that holds my password; this tells it which branch I want to use; this is the repository; this is my username; and I also need to pass the swarm ID, the one we just retrieved. As you've seen before, we're also using environment variables, and I can easily plug them in as well, as you can see here: it's just an array where I put in the swarm name, the flag saying whether it's internal or not, the external DNS name, and that cleanup threshold I mentioned before.

Just to show you before I make the call: I'll log into the test environment I'm using here (as you can see, we're using the Business Edition over here) and select it. It has almost nothing running: only the base stack, not the Docker automation stack, and if I take a look at the services it's only Portainer and then basically Traefik, on a different Windows version. If I now go back and make that call through the API to deploy my stack in an automated way, I'll send it, and this typically takes a couple of seconds. Let's see; at some point it will start to appear in here. Not yet. Those are always the interesting seconds in a live demo, right? Like, oh my god, is it going to work? And here we get the 200 OK, so it tells me it has created a stack called "automation", with all the information that I provided. If we now go back to the list of stacks, we have the automation stack, so I can take a look, and now we have everything as defined previously. It's in place, it works, we have all the different services and all the different configurations, and if I go back into the editor you can also see that we have the environment variables with their names and so on.
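Again as an illustration only (the real thing is a PowerShell script), here is a Python sketch of the stack-creation call. The query parameters (`type=1` for a swarm stack, `method=repository`) and body fields follow the Portainer CE 2.x stack API, but field names have changed between versions, so verify against your API reference; every URL, name, and value below is a placeholder:

```python
# Sketch of the stack-creation call, assuming the Portainer CE 2.x route
# POST /api/stacks?type=1&method=repository&endpointId=N.
# All URLs, names, and variable values are placeholders.
import json
import urllib.parse
import urllib.request


def deploy_stack_request(base_url: str, jwt: str, endpoint_id: int,
                         swarm_id: str, repo_user: str,
                         repo_password: str) -> urllib.request.Request:
    query = urllib.parse.urlencode(
        {"type": 1, "method": "repository", "endpointId": endpoint_id})
    payload = {
        "Name": "automation",                      # stack name shown in Portainer
        "SwarmID": swarm_id,                       # from the swarm-info call
        "RepositoryURL": "https://github.com/example/services",  # placeholder
        "RepositoryReferenceName": "refs/heads/main",            # branch to deploy
        "ComposeFilePathInRepository": "stacks/docker-automation/docker-compose.yml",
        "RepositoryAuthentication": True,
        "RepositoryUsername": repo_user,
        "RepositoryPassword": repo_password,
        # Environment variables, passed as an array of name/value pairs
        # (the variable names here are invented for the example):
        "Env": [
            {"name": "SWARM_NAME", "value": "demo"},
            {"name": "COSMO_INTERNAL", "value": "true"},
            {"name": "EXTERNAL_DNS", "value": "demo.example.com"},
            {"name": "CLEANUP_THRESHOLD_GB", "value": "50"},
        ],
    }
    return urllib.request.Request(
        f"{base_url}/stacks?{query}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {jwt}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

A 200 response carries the created stack's metadata; switching `RepositoryReferenceName` to another branch is how a preview or blue-green variant would be deployed.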
Isn't this cool? I mean, I think this type of integration is amazing. It reduces the deployment time, and it makes the deployment of stacks fail-safe in an amazing manner. And I see that you're running this in a Windows environment, right? That is something else I think is really cool about Portainer: regardless of whether the underlying environment is a Windows environment or a Linux environment, the interface is the same and the functionality is the same. You could even, at some point, just by changing the proper parameters, migrate this to a proper Linux server if you needed to, right, just by using this interface.

Yeah, and the other dimension, of course, as I said, and as people have seen, is that we're using Docker Swarm right now, but maybe in the future we'll move over to Kubernetes, because it's kind of clear where the industry is going. We have a couple of blockers and things we need to change to be able to move there, but one thing we know we will just keep using as-is, and where the users will probably see almost no difference, is Portainer, because it just works the same, and they can easily get up and running again even if we change the orchestrator. I'm going to show right after your demonstration how this can be done with Kubernetes, very quickly, and again the impressive thing about it is that the interface is practically the same; there's hardly any difference. Actually, there's a little added bonus when it comes to Kubernetes, but I'll leave that as a surprise for when I show the Kubernetes part.

Okay, so one last thing I wanted to show you is the real update. We just deployed, so there's no actual change, but just to show you how it would look: I'll enter my username, and I need
to copy and paste my password, and then I can just say pull and redeploy and accept the update. With that, it's just running. We won't see a change, because it's exactly the same version as it was before, but this shows how easy it is for an administrator to fetch the update and get up and running again on a new version. It's extremely simple. As I said, right now they need to input the username and the password, but that will change in the future as well, and then it's really very simple for someone to do that deployment. And with that we're already (no, I don't want the browser to update the password), with that we are already up and running, and we would have new versions if the definition in the Git repo had changed.

Right, Tobias, this is amazing; this is really, really cool. Yeah, we got exactly the same feedback from our administrators: they were very happy when they understood they didn't need to fiddle with files and that kind of thing. And that's another really key thing about Portainer: there's hardly any code involved in using it, unless of course you have a YAML file, be that a Docker Compose file or a YAML manifest on Kubernetes, but that's another story. Within Portainer itself, even the administration of your services and containers is pretty much all done through a graphical interface, and not only quickly, but in an easy manner and in a safe manner. That's one of the key things I really like about Portainer.

One thing I also tried with the login here: because we are using OAuth, I tried to use an OAuth token with Azure AD, and that doesn't currently work, so at the moment you need to do a username-and-password-based login, but I think that's also on the roadmap for the future. That is something you need to be aware of if you get started with the API and you're wondering how to log in and
trying to use SSO with something like Azure AD: that won't work for the moment. But as you can see here, it's not that complicated.
Info
Channel: Portainer IO
Views: 6,770
Keywords: container management, open-source, low code, docker gui, kubernetes gui, portainer, portainer ce
Id: 7UqGVTWEzWs
Length: 19min 52sec (1192 seconds)
Published: Wed Aug 25 2021