f5 Automation via CI/CD

Captions
All right, thank you for joining today. Some basic housekeeping first: while we're using Zoom webinars, please direct all of your questions to the Q&A section; the chat won't be monitored. The Q&A is really where you can ask questions, and you can upvote the questions you think are relevant, that you're experiencing, that others are experiencing. If you have any audio issues, please let me know; that would be a good thing to ping on chat. And since we have a small group today, we may go ahead and do some interaction where I can unmute you and let you talk.

This is the third part in a three-part series; we'll cover that here in the presentation in a minute. We've covered Terraform, we've covered Ansible, and now we're getting into CI pipelines, so things are going to get a little more complicated. Not everyone's as far along with this, so I expected attendance to be lower this time. You're seeing there's no digital background; my application actually crashed and won't work, so we're going low-key today. You actually get to see where I live, and you may hear some background noise, hopefully not.

So let's kick off the presentation and get going. The agenda today: we're going to focus on why CI/CD. CI is continuous integration; CD is continuous delivery, or deployment. Jenkins is what we'll be using today, but it's not the tool you have to use; there are other things that would achieve the same outcome, but I'll tell you why I chose Jenkins and how we're going to leverage it in our labs. Pipelines and webhooks: what are they, and why are they necessary? Then there are build environments: your build environment is going to be very different from what your developer may be using, so it's important to understand the differences. And then finally, the F5 Automation and Orchestration Toolchain. We do have some registrants today that have not attended the first or second webinar, so it's important for everyone to understand what it is we're leveraging in this automation and in these pipelines to really allow us to achieve the automation of our BIG-IP in our environment. And then, what everyone probably wants to see is the demo, right? I'll be honest, guys, this is a work in progress, so hopefully the demo gods are with us and it will work. And then we'll open it up for Q&A.

Now, today's session will be shorter than some of the past ones, primarily because we're leveraging a tool that does a lot of the work for us. With Ansible and Terraform there was a lot of buildup we had to do: we had to cover basics, we had to go through some of the things we did in our environment. Now we're at the point where we get to see the outcome of the first two webinars, so there's definitely a lot of time for Q&A, and maybe I can give you a few minutes back towards the end.

If you're new to the series, this is a three-part series, one every month. We started with automation of the BIG-IP using Ansible. Why Ansible? Because for me and for a lot of my colleagues, we see Ansible as one of the most widely used automation tools in our customer accounts; it is extremely powerful and very good at configuration management. Session two was really about infrastructure as code; that's where we started to talk about Terraform, and why Terraform versus Ansible. And today we're going to talk about automating via CI pipelines, and you'll actually see us pull the two together: we will use Terraform and Ansible, and I'll explain why we think using both is the best method.
So, why CI/CD? If you've seen some of the series, especially the first webinar, you heard me talk about the automation lifecycle. For a lot of the people I work with, their target is starting with the application: how do I automate the app, how do I deal with deploying the app? When in reality we should take a step back and look at the full lifecycle of the environment. From a BIG-IP perspective, and even an app's, that's the building of the app and the bootstrapping of the image; deploying the app into the environment; obtaining monitoring and telemetry data from the application; and then finally change, for example if I need to scale or I need to change a security policy. These are the things we're trying to drive via an automation process. We've covered some of these aspects using Ansible and Terraform, but that change step is really where something like a CI pipeline starts to shine, because my developers can take that telemetry data and say, ooh, I need to tweak this, or maybe we should change the logic here, and they can deploy a new version of that app.

Now, when we talk about developers and deployments, we're really past the days of waterfall development. Many of us grew up in that era, where we wrote an app for several weeks or months and it was eventually pushed out to our customers. For our BIG-IP customers today, you're probably familiar with waterfall in the sense that BIG-IP is released twice a year; that is essentially a waterfall process: a lot of code is grouped together and released every couple of months. That's not to say the BIG-IP components themselves aren't moving from waterfall to agile; they are, and more and more of our customers are doing the same thing, moving from large monolithic apps to smaller applications, even to microservices, and using agile development processes to achieve this. The goal is to move faster and faster, and to do that we have to release constantly, and this is where that change step, that loop back, comes into play: my developers can take that information, write new code, and then I can either redeploy my environment or update it. In today's demo we're actually just going to update; we'll work through how to deploy it, and in your environment you can make that decision: do I do blue/green, do I do canary? We'll touch on some of those deployment methods as we continue.

For those that are new, CI stands for continuous integration. As a former developer, when I think CI, CI was how I was doing testing. It allows developers to work together on projects, and it ensures that code meets sprint requirements; this is how it's typically referred to in agile: we work on a sprint, a user story that tells me what I need to be writing towards and what I need to be focusing on. It also ensures developers don't break existing unit tests. I was really big on test-driven development, and what that allowed me to do is, when someone would tell me, hey, this is a feature we want in the product, instead of me going off trying to figure out what that should be, coming back in a week or so, and them saying, yeah, that's not what I intended, right, I just wasted a week, instead you would write unit tests. Say we're building a registration form, and this registration form needs to take a name, phone number, and email address, and then register the person for the actual newsletter. A unit test would say: is this form actually taking a name, and is that a valid input? Is the form taking an email address? Can I register the user for the newsletter? The unit tests allowed me to ensure the code I'm writing is always correct.
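To make that concrete, here is a hedged sketch of those registration-form tests as a Groovy/JUnit class. RegistrationForm, its fields, and its validation rules are illustrative stand-ins, not code from the webinar:

```groovy
import org.junit.Test
import static org.junit.Assert.*

// Minimal stand-in for the app class so the tests run; the real class
// would live in the application codebase.
class RegistrationForm {
    String name, phone, email

    boolean validate() {
        def digits = phone ? phone.replaceAll(/\D/, '') : ''
        name && email?.contains('@') && digits.length() >= 7
    }

    boolean register() { validate() }   // pretend newsletter signup
}

class RegistrationFormTest {
    @Test
    void acceptsValidInput() {
        def form = new RegistrationForm(name: 'Jane Doe', phone: '555-0100',
                                        email: 'jane@example.com')
        assertTrue(form.validate())
    }

    @Test
    void acceptsFormattedPhoneNumber() {
        // the kind of regression test added after a bug report: a valid
        // phone number with punctuation must not be rejected
        def form = new RegistrationForm(name: 'Jane Doe',
                                        phone: '+1 (555) 010-0123',
                                        email: 'jane@example.com')
        assertTrue(form.validate())
    }

    @Test
    void registersUserForNewsletter() {
        def form = new RegistrationForm(name: 'Jane Doe', phone: '555-0100',
                                        email: 'jane@example.com')
        assertTrue(form.register())
    }
}
```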
Now, there are other benefits of unit testing. Unit testing also allows me to say, when someone finds a bug in our registration form and says, hey, wait a minute, the phone number field is actually not accepting a valid phone number, I can go back into my code and go, ooh, I'll add a unit test for that, and next time we won't have this issue. So I'm always failing forward: I'm ensuring that any bugs or issues we find in production never creep back into my codebase. Because in large projects you're going to have a lot of developers all working on the project at the same time, and if we really want to get to the point where we're doing multiple deployments a day, we have to get to a standardized testing framework that allows me to ensure that my code didn't break someone else's. This is heavily used in agile development processes, and more than likely, guys, this is already deployed in your environments. If you're using GitLab, this is already there; it's part of it. If your company is, say, a Confluence shop, they have their equivalent of this. But we'll be using Jenkins, because it's very popular, open-source, and extremely extensible.

Then there's the second part of CI/CD, which is continuous delivery, or continuous deployment. Why the two different names? Well, continuous delivery is typically where most people start, and it's the initial goal. When I used to write Java applications, the Java application would be compacted, essentially zipped up, into a war file, and that war file was what we were always trying to build, so the ops team could then go deploy it on our Tomcat servers. Continuous delivery meant that I was able to continuously build the most up-to-date war file so the operations team could go grab it at any point in time. If you've been playing with our automation toolchain, then continuous delivery is us delivering AS3 or Declarative Onboarding to you via GitHub: inside of the distribution folder you see the latest version and you see the long-term support release. That is continuous delivery.
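As a rough sketch, that delivery flow maps onto a declarative Jenkins pipeline like the one below; the Maven layout and stage names are assumptions, not the webinar's code:

```groovy
pipeline {
    agent any
    stages {
        stage('Unit tests') {
            steps { sh 'mvn test' }      // CI: fail the build if any test breaks
        }
        stage('Package') {
            steps { sh 'mvn package' }   // produce the up-to-date .war artifact
        }
    }
    post {
        success {
            // continuous delivery: ops can grab the newest build at any time
            archiveArtifacts artifacts: 'target/*.war', fingerprint: true
        }
    }
}
```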
Now, when we talk about infrastructure as code, and primarily when we talk about BIG-IPs in CI/CD pipelines, we really mean continuous deployment. That's where, when the developer updates code, I'm able to push that code out into production and actually impact my production environment. This means maybe I'm building out new types of releases, maybe I'm building out new test environments, and I want to ensure they're all the same, so that when I'm moving through dev, test, prod, I don't have a lot of discrepancies between them. This is how you start to get to multiple releases a day, and it heavily, heavily relies upon infrastructure as code.

There are two primary methods I see used when we talk about continuous deployment: there's blue/green and there's canary. Blue/green, most of us are doing this whether we realize it or not. Think of it this way: the blue environment is what my production is today, and green is where I want to go. I don't mess with the blue environment; I deploy a new green environment and switch everything to it. For those that have been doing BIG-IP management, this would be you standing up new pool members and downing the old pool members. It can be as simple as that, and blue/green gives us the ability to say, hey, if something didn't work in the green environment, I can always fail back to what was working in blue. Now, the caveat to blue/green is that it's a little slower; there's still a lot of manual testing. That's where canary is starting to gain a lot of popularity, especially when we talk about things like Kubernetes and containers, which make it pretty simple for teams to do this. Canary takes a different approach. Canary says: I want to take X percent of my traffic and split it off into the new environment. So if I'm receiving a hundred requests per second, I could take one percent of that, one request per second, and send it into the new environment, and those users will tell me whether it works or not. Now, they're not actually calling up or filing helpdesk tickets; this is where that monitoring, that telemetry, is giving me a continual feedback loop, so that I know, wait a minute, the new canary environment is throwing, maybe, 500 internal server errors, right, those response codes. That internal server error is telling me something's wrong with the deployment, so let's steer that one percent back onto our normal production load, go figure out what's wrong, change the code, and try again. This allows most of our customers to move much, much faster; especially when they're trying to get to multiple releases a day, canary is typically where they're trying to go.
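Here is a hedged sketch of that canary gate as a scripted-pipeline stage. The route-traffic.sh and error-rate.sh helpers are hypothetical: one would shift a percentage of traffic to the new environment (for example via pool member ratios), the other would read an error rate from your telemetry system:

```groovy
stage('Canary') {
    sh './route-traffic.sh --new-env-percent 1'    // e.g. 1 of every 100 requests
    sleep time: 5, unit: 'MINUTES'                 // let telemetry accumulate

    def errorRate = sh(script: './error-rate.sh new-env',
                       returnStdout: true).trim().toDouble()
    if (errorRate > 0.01) {
        // the canary is throwing 500s: steer the 1% back to production
        sh './route-traffic.sh --new-env-percent 0'
        error "canary error rate ${errorRate} exceeded budget; rolling back"
    }
    sh './route-traffic.sh --new-env-percent 100'  // promote the new environment
}
```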
So when we tie them together, CI/CD gives our developers the ability to move at greater velocity; it allows them to start having an impact on how the business is operating. What do we mean by that? You may have heard our general manager over application delivery controllers, Kara, talk about app capital. A lot of companies have been moving from, let's say, infrastructure capital (think of Ford, with factories and lots of trucks and cars), then to the IBMs, which are much more heavily driven by people capital, and now we see that changing to app capital, whether that be applications like Facebook or Uber, or APIs, where banks are starting to interact with each other through open APIs, driving business or commoditizing those APIs. To move to app capital means I've got to be faster than everyone else, faster than my competition, and this is where we get into those multiple releases. We've got to remove the burden from the developer, so that he's not writing code and then going, okay, I've got to wait for production now, I've got to wait for someone to approve this task, or for security to actually go in and make the firewall changes, or, heaven forbid, the F5 admin takes two to three weeks to configure, or get to the point he can configure, the virtual server. Let's remove all that complexity, let the development team use the tool they're already using, which is the CI/CD pipeline, and actually automate that whole process at their speed. Now, this is the ideal, and if you noticed, sessions one and two had massive attendance, primarily because that's where most people are: most people are still figuring out automation. If you don't have the automation working yet, then this third step isn't going to be achievable, I'm just being honest. If you don't know how to automate every aspect of the infrastructure deployment and the creation of the infrastructure, then CD is always going to fall back down to human speed; it's going to be waiting on someone to do something. We want to get all of our automation working first, and then we can start to integrate it into continuous deployment.

So, Jenkins. Jenkins is one of the most popular open-source automation servers. There are hundreds of plugins, and these plugins are what make Jenkins extremely extensible and easy to use. If you've attended any type of automation session with F5, you've probably touched Jenkins; our Super-NetOps training class two heavily uses it. Now, class two uses Jenkins to do raw REST API calls into the BIG-IP: in that scenario you're actually creating AS3 declarations and using Jenkins to post them directly to the BIG-IP. In today's lab we're using Jenkins pipelines instead. Why pipelines versus the raw REST calls? There's not a wrong or right way, but pipelines are primarily where I see most customers going, and we're using a pipeline to call Terraform and Ansible, which is our build system for this environment. There are many other tools: we have examples of this working in Azure Pipelines, CircleCI is a very popular one, and I also use Travis CI for a lot of my open-source projects. So don't feel you have to use Jenkins. If your team is already using another CI tool, more than likely you just need to ask: does it support Ansible, does it support Terraform? And if it doesn't, most of the CI tools have the ability to spin up a Docker container, and you can run all the scripts inside of Docker.

For the demo today, I wanted to make sure this was something repeatable that you could easily use. You can definitely go off and build your Jenkins yourself, go grab an image, it's not terribly difficult, but I have put Terraform and Groovy scripts in the code repository for this webinar that will build out an AWS VPC with the security groups, a new EC2 instance for Jenkins, and then configure Jenkins and all the dependencies Jenkins needs, like Terraform, Java, Ansible, and the pip modules we'll need for Ansible. It then uses a Groovy script to apply some base configuration to Jenkins, so that when you go into the Jenkins GUI for the first time, you can close the wizard, because everything's been configured, and start building your pipeline.
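For reference, a Jenkins startup script of that kind, dropped into $JENKINS_HOME/init.groovy.d/, can be as small as the sketch below; the settings shown are illustrative, not the exact script from the repository:

```groovy
import jenkins.model.Jenkins
import jenkins.install.InstallState

def jenkins = Jenkins.getInstance()
jenkins.setNumExecutors(2)                                     // a couple of local executors
jenkins.setInstallState(InstallState.INITIAL_SETUP_COMPLETED)  // skip the setup wizard
jenkins.save()
```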
The other thing is, in the example code we're using today, we are defining the pipelines in Groovy inside the code repository as well. This makes it easier to get up and running without necessarily having to become a Jenkins expert. Don't get me wrong, there's nothing wrong with actually understanding the tool you're using, but a lot of times in demos, and especially when you try to reproduce this in your home environment, you want to move quickly; you want to be able to say, wow, this is cool, now let me go digest and break down what it's doing.

So, pipelines and webhooks. Think of a pipeline as an automated process. Primarily, what we're doing in the pipeline is pulling code out of a code repository and then building the required environment. For us today, that means building out an Ubuntu server, installing the NGINX demo application on it, and then building an F5 based upon an AMI, an Amazon Machine Image, so it creates its own EC2 instance; then we configure those using Ansible. Normally you'd be doing this in a test environment and then prod, and you could also have the pipeline building out blue/greens or canaries. Pipelines are also very powerful in the sense that you can have them doing unit tests or additional testing to make sure the environment is up and running. Ideally, we want the pipeline to run through a process that no one has to touch. If there is a step where we have to fall back and say, hey, Cody needs to actually go view the web app and make sure it's running, then the pipeline has the ability to pause itself and say, hey, you've got to come back and click Proceed. What we really don't want is to finish a pipeline while someone still has to do something, because, guess what, inevitably that won't happen: you'll see that a pipeline finished, you'll think you're in production and should be good, and we're not where we want to be.

The other thing we want to make sure of: if we have multiple developers, they could potentially trigger multiple pipelines at the same time. Jenkins can be configured so that only one pipeline runs at any given time, and if an older pipeline is in a waiting state, the newer pipeline will supersede it. This is really what you want in production. In development, that's fine, they can all be running different pipelines, but in production we want to make sure there's only one; with a milestone, we're only deploying one instance at a time, and the latest deployment typically always wins.
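Sketched in scripted-pipeline form, that production gating looks roughly like this; it assumes the Pipeline Milestone Step and Lockable Resources plugins, and the resource name and deploy script are hypothetical:

```groovy
stage('Deploy to production') {
    // if a newer build passes this milestone first, older builds still
    // waiting are aborted, so the latest deployment always wins
    milestone(ordinal: 100)

    lock(resource: 'production-bigip') {   // only one prod deploy at a time
        // optional human checkpoint: the pipeline pauses until someone
        // verifies the app and clicks Proceed
        input message: 'Verify the web app, then proceed with the deploy'
        sh './deploy.sh'                   // hypothetical deploy script
    }
}
```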
We also want to ensure our pipelines are primarily building off of master in production. When we talk about code repositories, most of the developers should be working in branches. If I'm writing a new feature into an application, I may create cody-branch-feature-123, but once I've gone through my CI testing and everything works as expected, I then merge that into the master branch, and that's what would kick off the pipeline.

That leads to the next thing (and I'm sorry, I kind of skipped ahead; let me hold off on the workflow for a minute). One of the things I do want to emphasize here is that Jenkins pipelines can be procedural or declarative, and if you're new to Jenkins and you're Googling around trying to figure out how do I do this, how do I do that, you have to make sure you're looking at examples that match what you're doing. I know this because I ran into the same problem myself. If you're basing everything on procedural, make sure the examples you're following are procedural; if you're doing declarative, make sure they're declarative. You can typically achieve this by just adding "procedural" or "declarative" to your Google search, but it's something we want to make sure we're doing.

I didn't mention: you can configure Jenkins through the GUI or through Groovy scripts. In this particular demo we've already gone through the GUI: I used the GUI to quickly iterate on what the pipeline configuration should be, and once it was somewhere I liked, I took that pipeline configuration, as a Groovy script, moved it into the code repository, and then configured the pipeline to go look in the repository for its configuration. This makes it very easy for others to import not only my code but the actual Jenkins pipeline I was using.

Now, I talked about the code being updated. When we update the code, how do we fire off a new pipeline run? This is where webhooks come into play. For this demo I'm using GitHub, and you can use GitHub or GitHub Enterprise; almost every major code repository today supports some form of webhook. Think of it as a way to remotely trigger a pipeline, or trigger some type of event. The goal is that when we check in new code, or update code, inside of our application repository, the webhook tells Jenkins: hey, go start this process. The nice thing is, Jenkins can actually automate all of this for you: if you're unfamiliar with how to configure webhooks and you're using GitHub, you can provide your GitHub credentials and Jenkins will go in and set up the webhooks itself.
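On the Jenkins side, the receiving end of that webhook can be as simple as a trigger in the Jenkinsfile. This is a minimal declarative sketch, assuming the GitHub plugin is installed:

```groovy
pipeline {
    agent any
    triggers {
        githubPush()   // fire when GitHub's webhook delivers a push event
    }
    stages {
        stage('Build') {
            steps { echo 'triggered by a code change' }
        }
    }
}
```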
The other thing is, in this particular example I have two pipelines. This is my personal preference, and it's really up to you how you deploy it, but I like my infrastructure pipeline to be separate from my code pipeline, and one of the things this allows me to do is reuse that infrastructure pipeline across multiple projects. In today's demo I'm actually using a single-NIC BIG-IP with the NGINX demo; that's my pipeline for infrastructure, and then the app pipeline simply has the configuration and the actual code for NGINX. I'll show you an example of that real quick, so I'm going to stop sharing, switch my window, and skip to my CI/CD demo app. In this example I have my HTML, and this is just the NGINX demo application you can find in their GitHub repository. They have it as a Docker container, but we do some labs in what we call the Unified Demo Framework, and UDF doesn't support Docker or native ECS instances in AWS yet, so I'm just building this inside an EC2 instance, pretty easy. I also have my configurations there; again, it's just telling NGINX to set these variables so we can display them on the page. The other things we see in here: I have my Jenkinsfile, which is how I'm configuring Jenkins, my build process, and I also have a pipeline. One of the things I'm hoping to provide y'all, probably in the next week or two, is a fully automated test lab: I want to have you walk through Ansible, walk through hands-on with Terraform, and then walk through hands-on with Jenkins, giving you a more detailed lab over this three-part webinar series. So let's skip back to the presentation.

Now, the build environment. When you say "build environment" to your developers, they're probably thinking of something like Maven, building their applications. For us it's really Terraform and Ansible, and we're using both because they shine in different areas. Ansible is really, really good at configuration management; there are hundreds (I don't know if there are thousands, but there are hundreds) of modules that make it easy to integrate Ansible with other services. Terraform, on the other hand, is really good at state, and this is why, when you talk about infrastructure as code, you often hear people reference Terraform: it builds out the infrastructure and maintains the state of that infrastructure. Terraform is always looking for a state file, whether that's local or stored inside something like an S3 bucket, which is what we're doing in this demo. That S3 bucket contains our state data, so Terraform always knows: hey, I've already got an NGINX server built, I already have a BIG-IP built. And if it detects drift, for example if I say I need to add a new IP to my security group, Terraform will detect that there's drift and update it. Ansible, on the other hand, typically goes to the device every time and, think gather facts, tries to determine whether it should run a task or not; there's no state that Ansible is necessarily maintaining itself.
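One way to see that difference from inside a pipeline is Terraform's -detailed-exitcode flag. This is a sketch, assuming terraform is on the PATH and the backend and credentials are already configured:

```groovy
stage('Detect drift') {
    // -detailed-exitcode: 0 = in sync, 1 = error, 2 = changes/drift pending
    def rc = sh(script: 'terraform plan -input=false -detailed-exitcode',
                returnStatus: true)
    if (rc == 2) {
        echo 'Drift detected; terraform apply would reconcile the state'
    } else if (rc != 0) {
        error 'terraform plan failed'
    }
}
```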
So why do we use both? Well, Ansible is one of the most popular automation tools I see on the market today. It's agentless, which for the BIG-IP is very powerful, because we don't want something installing an agent on the BIG-IP itself, and it makes automation tasks very easy. In the lab today we're using the NGINX Ansible role: we define the role and it takes care of everything else; it installs NGINX and configures the base deployment, and all we had to write was a few lines. That's pretty easy, and that's really what I want to be doing; I don't want to be writing a lot of raw shell commands, which is what I'd have to do in Terraform. Terraform, however, is much better at building out my infrastructure, so in the demo today we're really using Terraform to build out the EC2 instances, the VPCs, the IAM roles, all of the stuff that really needs to be maintained as state; that state is stored in an S3 bucket and is something I can always reference. The other thing I'm having Terraform do, because this is an ephemeral environment, is maintain all my passwords. Now, in the demo today the passwords live in the Terraform state. My S3 bucket is encrypted, but theoretically anyone with admin rights to my account could go read it. For the demo that's acceptable; in a production environment you may want to look at things like Vault, so that anyone with an admin account couldn't read the encrypted S3 bucket, or you could store your credentials somewhere else.

For those that are new, today we're achieving the automation of the BIG-IP using the F5 Automation Toolchain. The F5 Automation Toolchain is a free offering from F5, part of your maintenance contract, and it gives you configuration tools to both deploy and configure an F5 using declarative APIs. There are a couple of components. Cloud templates are templatized methods of deploying the BIG-IP in AWS, Azure, and GCP. Then there's Declarative Onboarding: a declarative API call to configure things like the networking interfaces, licensing, and clustering, making it very easy to get the BIG-IP up and ready to accept an application configuration. The one that's most popular, that many of you may already be familiar with, is AS3, our Application Services 3 extension. AS3 is a declarative API to actually configure your application: this is where we can make one API call and configure all components of our app. Then there's the API Services Gateway: if you want to run all of these in a Docker container, or maybe you want to abstract them off of the BIG-IP, the API Services Gateway gives you that capability; essentially it's a proxy for those declarative API calls. This is advantageous because, if you have SLAs and maybe you're only allowed to install or update software on the BIG-IP during maintenance windows, that impedes how fast you can add new features and automation; whereas if we can run the toolchain and the API gateway in Docker containers, those can be upgraded at will, without going through the same SLAs as the BIG-IP. And Telemetry Streaming is how we get data off the BIG-IP and provide that continuous feedback loop to the developer.

One of the key benefits of using these declarative APIs, especially AS3, versus imperative APIs, is that AS3 gives you a schema contract. What we're saying is: for X number of years, we guarantee that this schema, the declaration you're providing us, will deploy the same application across any environment. For a lot of our customers, one of the biggest pain points is upgrades, because the BIG-IP is the source of truth for that configuration, so they have to be very careful with it. When we abstract out the source of truth, say we store it in a Git repository, and we use things like AS3 to configure the app, upgrades become very, very easy, because the AS3 schema contract guarantees the same configuration will deploy on any version of TMOS that AS3 supports. If I'm on 12.1 and I move to 13.1, then to 14.1, then to 15.1, that declaration will deploy the same app every time, and that is very nice when we're talking about automation, especially when we're trying to integrate with a CI pipeline.

So let's jump into the demo and see what's going on; and again, if you have any questions, please ask them in the Q&A section, and I'll pause periodically to answer. I'm going to share out my browser again, and let me make this a little bigger for y'all. What I have here, if you've been following along, is the automation webinar GitHub repository; inside I have code, and since we're on the third webinar, that's our CI/CD process. There are a couple of things going on here. We have our Jenkinsfile, which tells Jenkins how to build out the pipeline; if I were new to Jenkins, this is how I would easily tell Jenkins to go grab its configuration from my code repository. What are we doing in here? We are defining our path, and we're setting a couple of variables. In this case I'm getting the name either from an environment variable or I'm setting it to a default; because the name is used by some of my Terraform and Ansible scripts, I need to ensure that it's always set. The other thing I'm doing is the GitHub repository: I'm either receiving the GitHub repository as a parameter to the pipeline (that's what the params app-name reference is doing) or, if I don't, it falls back to my demo repository as the default.

Now, the first stage we perform here is actually cloning the repository. I want to check out the repository, and you'll notice I hard-coded that in; I shouldn't have, this should be the GitHub repo URL. The fun stuff you notice when you're demoing, right? I will update that. But the goal here is to grab the code so we can then execute all of our Terraform and Ansible scripts. The next stage is to deploy our web app and BIG-IP. I'm issuing a directory command, because when Jenkins talks to the underlying operating system, if I were to do a cd in one of these shell commands, the next shell command doesn't go back into that folder; Jenkins always runs its commands inside the workspace for this particular pipeline. If I provide a directory command up here, then all steps inside of that directory block follow it and work inside that same folder.
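In miniature, the behavior just described looks like this (the webinar3 folder name is illustrative):

```groovy
// each sh step gets a fresh shell in the job workspace, so a cd is lost:
sh 'cd webinar3'       // the directory change dies with this shell
sh 'terraform init'    // runs in the workspace root, NOT in ./webinar3

// dir() scopes every enclosed step to the folder instead:
dir('webinar3') {
    sh 'terraform init'               // runs in ./webinar3
    sh 'terraform apply -auto-approve'
}
```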
So what we're doing is exporting the name, and then running some of our Terraform commands. One thing that's new, that you didn't see last time, is the terraform get update; I'll show you that here in a minute. In webinar two we were using Terraform modules that lived in the same code repository we were working out of; this time we're referencing Terraform modules that live in other GitHub repositories, because I really don't want to be copying my modules around. I'd rather have a central place where I write and update my modules, with all my independent Terraform scripts referencing it; for example, if I fix a bug in my EC2 deployment, they all benefit from it, versus me having to go to each repository and fix the same bug. Then terraform init pulls down all the modules and providers we need for this particular instance, and then, of course, same as last time, terraform apply -auto-approve. So in this particular stage we deploy our Linux servers and we deploy our BIG-IP.

Once they're up and running, we want to configure our web app. This is where we're again changing into the webinar 3 directory and then running our playbook. Our playbook needs some variables we have to define, so in this case we're adding in some extra variables: the name we defined earlier in the pipeline, and the GitHub repo we defined earlier in the pipeline. This makes it easy to pass variables that the pipeline either created or was given down into Ansible and Terraform. And then the last stage is configuring the BIG-IP now that my apps are up and running. I'll be honest, I ran into a couple of challenges here and had to play around with some of the configurations; if you have some experience and think there's a better way to do this, I'd love to hear it: make a comment on GitHub or open an issue, because I'm by no means saying this is the only way, it's just the way I found that worked.

Now, in this particular environment, because the EC2 instances are all spun up dynamically, there's one problem, and that's the SSH host key. I'm not fond of this, but there's no way to pre-accept the key from the server, and most of the examples I see about running Ansible inside a CI pipeline tell you to ignore host key checking. Again, that's perfectly okay for a demo environment, but this is one of the things I want to ask some of my counterparts at Ansible: is there a better way to do this? We're then calling our Ansible playbook, and we're passing in some new variables this time: the BIG-IP password, the BIG-IP address, and our SSH key file, because Ansible needs to SSH into the BIG-IP to change its password. In this particular scenario we're using a shell feature, command substitution, that lets me escape out of one command and run another, and we're doing a terraform output: my Terraform scripts are storing, as outputs, the BIG-IP's private IP address and the BIG-IP's password. You'll see up here I was trying to set those through shell commands, but Jenkins doesn't easily let you set new environment variables inside a step, so this is one of those things I want to follow up on and see if there's a better way. While this does achieve what we're after, it is not declarative, and it could potentially cause problems; we'd rather be using variables, because a variable can always be overwritten.
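Pulled together, that pattern looks roughly like the sketch below. The output names, playbook name, and key path are illustrative, and ANSIBLE_HOST_KEY_CHECKING=False is the demo-only host-key workaround just mentioned:

```groovy
sh '''
    export ANSIBLE_HOST_KEY_CHECKING=False   # demo only: skip SSH host-key prompts
    ansible-playbook bigip.yml \
        --key-file ~/.ssh/demo-key.pem \
        --extra-vars "bigip_host=$(terraform output bigip_private_ip) \
                      bigip_password=$(terraform output bigip_password)"
'''
```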
This particular pipeline is built out over here. If I go back into Jenkins, I can see that I have two jobs. The first is "tests", and for this one, if I go to, let's see, Configure, what we did is we told Jenkins that the repository is our automation webinar repo, and then we told it to configure itself via the Jenkinsfile. Now, if we go back into our actual web app job, this one also uses the automation webinar repo... let's see, there's a question, one second. "Jenkins calling the shell commands is a bit long, but you could just make it all one call with semicolons between calls." Yeah, you can, right; it's a demo, so some of the things are stretched out so Jenkins looks a little prettier. But in this case we're telling it, okay, pull the Jenkinsfile, but look in this particular script path, which is helpful when I have a repository that stores multiple projects. Having done development with GitHub for quite a few years, I am really on the fence: when you have a big project, do you store it in one repo or break it up into multiple repos? That really depends on your organization and what you feel comfortable with, but this is how I would do it if I were in a large repo and my project was a subfolder of that repo: I just tell Jenkins where to find it, but keep in mind Jenkins is still checking out that root folder.

All right, if we go back to this and dig into our branch, I've got a couple of builds that have been failing and a couple that are correct. Number 70 is actually the one that built out my environment, but number 70 failed, and I'll show you why in a minute. As we come back into the pipeline configuration, you see we have a stage "deploy web app"; if I jump over to Jenkins, we have "deploy web app and BIG-IP". We come back into this, it says "configure web app"; we come back into Jenkins, and you see how those stages are being processed. You can also, at any point in time, come in and view the logs for a particular stage, which is helpful when you're troubleshooting.

Now let's actually see what Jenkins is doing. If I come back in: all right, it's going to run my Terraform, that's one of the first things it does, and main.tf. In main.tf there are a couple of things we're doing. We're storing our state file in an S3 bucket. Then I need to get my public IP address so I can ensure my security groups allow it. I'm creating a password for the BIG-IP, because this is an ephemeral environment; there's no reason I should be setting the password to a common admin password. We're past the point of pets; these are cattle, right? We're not naming them, we don't care for and feed them; if it doesn't work, we simply blow it away and redeploy. I need to find my VPC: because I've already built the VPC as part of my setup, this script looks for a pre-existing VPC, that's what's happening in these two blocks; it's telling it the region, then the CIDR block and the name the VPC should be called, and mind you, that's a variable, so it's getting pulled from the pipeline.

Then we're deploying our demo app. In this scenario I am pulling the compute module from another GitHub repository, and this is the benefit of being able to reuse my Terraform code. What's happening here is: the cody green terraform repo is a GitHub repository for all my Terraform scripts, and this double slash tells Terraform not to look in the root of that repository, but to go look for a particular folder inside of it. Now I can easily reuse this compute EC2 module any time I want; you can as well, you can actually pull these lines right here and it will deploy out an EC2 instance for you. There are a couple of things I'm defining: we need the name, the VPC ID, the SSH key that's built out inside my AWS environment, and then how many instances I'm looking to deploy; because it's a demo environment, we're just doing one.

The next thing we're doing is deploying our BIG-IP. I fell back to CFT, a CloudFormation template; I do have examples that don't require CFT, commented out right here, and the reason is there is a bug in 14.1 when deployed through AWS that I was running into. That bug is being addressed, it's getting fixed right now, so the CloudFormation still deploys a 13.1 image. What we're doing here is setting our image name and our instance type; we're restricting the addresses for the management interface, so only my VPC is allowed to make API calls into the BIG-IP, but for my applications I'm going to allow anyone. Then there's my declaration URL: this is where I can pull in an AS3 declaration. If you look at this example right here, this is how I could build out, with Terraform, an EC2 instance for the BIG-IP, and when using Terraform this is actually my preferred method. I prefer it because Terraform and CloudFormation templates kind of step on each other; they're trying to achieve the same thing, and there's very little I couldn't do in Terraform that CloudFormation would give me. There are some differences, CFT has some capabilities Amazon has not put in their Terraform provider, but for the most part I prefer to use Terraform natively. We also have, inside Terraform's GitHub for the F5 provider, an AS3 resource now, so if you want to play with the latest version, you could grab that and actually deploy AS3 declarations through Terraform. It's not available in the public release of our provider, but it is available if you download the code and compile the provider yourself.

All right, so we've gone through the pipeline, we've gone through the Terraform; let's see what Ansible is doing. The demo app playbook comes in and installs NGINX, via the NGINX role. Then, in post tasks, it checks out my code repository and puts it in the /srv/checkout folder. I'm then telling Ansible to copy the website: remember we had html/index.html, which is back here, index.html, and Ansible copies that over the NGINX-configured website's index.html; the stock demo app is already there, we're simply overwriting it, because this demo app shows you things like the client IP address and can do auto-refreshing. We're also sending in the NGINX configuration; again, if you remember, that configuration is pretty basic, but it sets some variables the demo app needs, such as server name, address, URL, date, and ID. And then finally it restarts the process. So not a very difficult script, pretty simple, but trying to do all of this with Terraform, you would essentially be issuing shell commands, and I'm just not a huge fan of that; this is a much cleaner way to do configuration management.
Now, when we look at the BIG-IP, we configure the BIG-IP with Ansible, and there are a couple of things happening here. We're changing the password of the BIG-IP, and we're passing that in from Terraform, because Terraform is maintaining it in the Terraform state. We're installing the AS3 extension: on anything pre-13.1 or 14.1, the AS3 long-term-support release had not yet shipped, so it's not part of the base image. In this scenario we're actually installing the AS3 iControl LX extension, and we're doing that via get_url, so we can download it, and then we use the bigip_iapplx_package module to install it. Then we're pushing our declaration out there.

Now, earlier, if you remember, this was my deployment; why is it red? Man, something didn't work. I can come over to this job, number 70, and look at the console output; if I scroll all the way down, this is everything Terraform and Ansible are doing, but down here at the bottom, what I'm seeing is that my call to AS3 failed. I've got an error here, and this is something I need to fix. What it is: if I come back over to my Ansible script, we are trying to post the declaration right after we installed AS3. Well, guess what, the BIG-IP needs a couple of seconds to actually restart its Node instance and have AS3 accepting declarations. So one of the things I need to do, before the lab and the demos, is add another step that goes and queries AS3's API and says: hey, are you ready, are you good to go? Once I add that in, you'll see that this won't fail.
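A hedged sketch of that readiness check as an extra pipeline stage: it polls AS3's /mgmt/shared/appsvcs/info endpoint, which only answers once restnoded is back up, before the declaration gets posted. The host and credential variables are illustrative:

```groovy
stage('Wait for AS3') {
    retry(12) {                                // up to ~2 minutes of polling
        sleep time: 10, unit: 'SECONDS'
        // -f makes curl exit non-zero on HTTP errors, so retry kicks in
        sh 'curl -skf -u admin:$BIGIP_PASSWORD https://$BIGIP_HOST/mgmt/shared/appsvcs/info'
    }
}
```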
We have a question from Adam: "So just to clarify, Ansible isn't calling Terraform and Terraform isn't calling Ansible; Jenkins manages both of these pieces and calls them separately?" Yeah; for me that was the better method. I've got Jenkins doing the automation, so why have Terraform call Ansible or Ansible call Terraform? If I weren't using Jenkins, and if you attended webinar two, we actually had Terraform calling Ansible, but with Jenkins being that overarching control, I'd rather have it do that. It also allows me to do key management: if Jenkins is storing my passwords, then Jenkins can supply them to both Terraform and Ansible. There's no right or wrong way here; it's what works best for you.

So, you saw why I was getting the error: I needed a couple of seconds between the point where we installed AS3 and the point where we posted the declaration to AS3. What we did is run the pipeline again, and now if we come to 71, we see that everything was successful, and if we scroll to the bottom, we see that our declaration succeeded. So what we can do now: this is our BIG-IP, I'm just going to copy its IP address... and of course the demo gods are not with me, so let's rerun the declaration and see what happened; that should kick off 72, we'll give that a minute. Any other questions while this goes through a deployment? All right, I'm going to check my EC2 instance while that's happening. We've got our BIG-IP; let me make sure my IP address is correct... yep, that's the problem, I was talking to the wrong IP address. So there's that, and if you want to make sure it actually is the BIG-IP, we add on port 8443 for the single-NIC management, and I need to give it HTTPS.

A question came in: "If we just wire webhooks to our use case, to connect Git with Ansible and Terraform, then Jenkins won't be needed?" You're absolutely correct. There is an example on F5's, I believe it's the DevCentral code repository, where we do webhooks from GitHub directly into the BIG-IP. It uses an iControl LX extension that one of our architects wrote, and it's community supported. The webhook calls that iControl LX extension, which then goes back to GitHub and says, hey, show me what's changed, and checks whether one of those changes is an AS3 declaration; if it is, it runs that declaration and updates the app. So Jenkins is definitely not required. Jenkins is really about the CI, and then the CD deployment; if you can do continuous deployment another way, by all means use what's best for you. This is just one of the more popular ways we see.

And the NGINX app is working. I cannot get to my BIG-IP, because, if you remember, inside of my security groups I'm locking it down to my VPC, and this is an external IP address, so the AWS security groups are blocking access to my BIG-IP. Ideally that would be okay, because, sorry, this is not a pet, I'm not naming it, I'm not caring for and feeding it; I really shouldn't be logging into the GUI. It should be a cattle approach: automated, and always updated through automation. But the NGINX demo is telling us that the IP address is 10.0.1.180, plus the date and the URL we're requesting, and I could of course auto-refresh this; if you had multiple NGINX demo apps with round-robin configured, you could see it go between them. The nice thing is you could now come to your code repository, and if I skip back up to the root, I could come into main.tf and change the number of instances; I could also come back over here, and this is the AS3 declaration, and change how I'm doing persistence.

Now, one thing I do want to point out real quick, and then I'll answer the questions coming out of chat: I did not ask Terraform for the IP addresses of the NGINX demo app. The reason is I wanted to show you the service discovery capability inside of AS3, which works for AWS, Azure, and GCP. What we're doing here is telling the BIG-IP: go query the AWS API and look for a key tag. For me, when I deployed my app, I set the tag key, a scale group, and I set the value to "lab", so any EC2 instance that spins up with that key and that value will automatically be added to the BIG-IP pool. This gives me a way to dynamically put pool members in without always having to go figure out what's happened. Why is that advantageous? In this CI lab we manually deployed an EC2 instance as our NGINX demo. What I really see in production environments is customers having Terraform or Ansible take that single image, create a unique AMI out of it, and then create an auto-scale group from that AMI. In that type of scenario, Ansible and Terraform really don't know the IP addresses anymore, because we're relying upon AWS to scale out based upon demand or failure; but if we use the tagging, then as the AWS scaling groups add or remove members, we learn about those automatically.

So let's get to some of the questions. William asked: "Going back to my comment earlier, looking at the bottom of the Jenkins shell command calls, was the issue with the commented commands that each call is starting a new shell?" Oh, so I'm losing my variables; yeah, William, that's something I probably need to look into. I did try putting them all on one line and didn't have a lot of luck, so by all means, if you have some recommendations, please let me know; I'm not trying to proclaim I'm a Jenkins expert, so I do appreciate the input. Christopher said: "I'm getting jitter and delay issues here; I'll have to listen to the recording later."

Christopher, sorry about that. There is a bug that Zoom has been working on for the webinar series; they found that some people were getting downgraded to 320p instead of 720p quality, and we were told that should have been fixed. If you're seeing issues, I apologize; maybe it was not.

Clem is asking: "Are there any best-practice guidelines on how to structure the various Git repositories when managing hundreds of applications that may be deployed over 50-plus different F5 pairs?" Clem, the problem with repository structures is that it really depends on your environment and what works for you. What we could do is take a look and try to give you a recommendation of how I would do it, but we haven't come out and said "thou shalt structure it this way," because we ultimately find that everyone's a unique snowflake, and it's easier to simply say: figure out the structure, and then we can figure out the pipeline for that structure. So, sorry, it's not the answer you're looking for, but it's not one-size-fits-all.

Jason asks: "Is the AS3 polling of AWS better than using FQDNs with auto-populate to detect pool member changes?" Jason, in a cloud environment I would say yes, because of the TTL of the DNS record: when we're doing fully-qualified-domain-name service discovery, we're polling DNS, so we're really reliant upon whatever the DNS TTL is between us and that environment. Now, in AWS, if you're using something like Route 53, you can set that TTL very low, so you could probably achieve the same type of SLA we have with service discovery; but we find that service discovery picks up new or removed instances faster than FQDN or DNS-based discovery does. That being said, tagging doesn't work inside of, say, AWS ECS, which is their managed container cluster; in that scenario I have to use FQDN, and I can typically find a node that's been added or removed in about ten seconds. We're talking ten seconds, right? If everything boils down to something breaking in ten seconds, then I would say we want to lean towards the API.

Adam asks: "I didn't notice any use of the BIG-IP Terraform provider; can you briefly compare and contrast the Ansible modules for managing the F5 versus the Terraform provider modules?" So, you also won't notice me using any Ansible F5 modules other than to install the iControl LX extension. The reason is we're really trying to push our customers towards using our declarative APIs; the Terraform provider and the Ansible F5 modules are all based upon our imperative APIs. That being said, in the next release of Terraform (well, hopefully the next release of Terraform, but definitely the next release of Ansible) you will see some of our declarative APIs included in either roles or the provider. What we want is for you to start using that schema contract: with the declarative APIs we can guarantee the app deploys the same way every time, whereas with our imperative APIs it's still on you to validate that, between version 13 and 14, your commands still work and your configuration of the virtual server didn't deviate. So we're really pushing towards those AS3 declarations. I'll be honest about where we do see people using the Terraform provider and the Ansible modules: heavily in brownfield, so we're working on ways to help customers move brownfield into declarative APIs, and hopefully early this summer I'll have an update for you there.

All right, let's see... any more questions? All right, well, everyone, I appreciate your time today. If you do come up with a question, you all have my email address; please feel free to shoot me an email, happy to answer. I will be updating the code repository to fix some of the issues I saw going through this webinar, so feel free to run it yourself, and if you find a bug, hey, open a GitHub issue or do a pull request, and we'll work through it using the usual development processes. "Can you get a copy of the webinar?" Yes, there will be an email that Zoom sends out tomorrow, 24 hours after this, with a link to the recording, a link to the GitHub repository, and a link to the presentation. I'll also include the same information for the first and second webinars. All right, everyone, thank you for your time and have a great day. Bye-bye.
Info
Channel: Cody Green
Views: 1,103
Rating: 5 out of 5
Keywords: devops, automation, f5, jenkins, ansible, terraform, cicd
Id: 3h2mg-lk3PM
Length: 55min 42sec (3342 seconds)
Published: Thu Apr 25 2019