CI/CD Pipelines for Microservices Best Practices

thank you for joining us for today's webinar, "CI/CD Pipelines for Microservices: Best Practices." Today's session is brought to you by Codefresh Live. Today we'll see how you can create CI/CD pipelines designed specifically for microservices, and how you can reuse the same pipeline across different applications. Our presenters are Dan Garfield, Chief Technology Evangelist, and Vidya Subramanian, Principal DevOps Evangelist at Codefresh. Please remember to reference codefresh.io/events for all of our upcoming Codefresh Live webinars; we have fresh and informative webinars for you every month. So that's it for the housekeeping items. Dan, the floor is all yours.

Excellent, thank you very much. Thanks for filling out this poll, by the way; it helps us cater the presentation, so we know what people are doing. It looks like most people are already using microservices in some capacity, so we're not going to spend tons of time on "why microservices," I think that's pretty clear, and we're going to get mostly to the CI/CD. So, just to introduce myself: as Taryn said, I'm the Chief Technology Evangelist for Codefresh. I'm a Google Developer Expert focused on cloud, Kubernetes, and microservices, and I'm also a member of the Forbes Technology Council, which basically just means that I write about automation and microservices and how to make engineering teams more effective, that kind of thing, as well as creating tutorials around things like canary releases, Istio, all that kind of stuff. And I have a special guest joining today, Vidya, who is a fairly new member of the Codefresh team. Vidya, do you want to introduce yourself?

Sure. I was responsible for building a CI/CD platform at Expedia corporate travel; after that I started my own consulting business. I believe strongly in DevOps, and believed before the word existed, and I'm excited to be a part of continuing that evangelism
through Codefresh: DevOps as a way of thinking for all of software engineering.

Excellent, thank you, Vidya. As we go through this, if you have questions, use that Q&A button, and feel free to use the chat. I'd say "smash that subscribe button and hit that like," but this isn't YouTube, so we won't.

So first off, let's talk about microservices, and my favorite example: I've been on a huge kick with the Apollo missions lately, because of course last Sunday was the 50th anniversary of when we landed on the moon, which was an amazing engineering accomplishment. So, thinking about microservices and monoliths, the example of Apollo 13 came to mind. Now, Apollo 13 was the third mission meant to land on the moon, and while they were on their way they had an explosion in the command module. It basically meant the command module was dead, and they had to shut everything down really quickly because the batteries were going to die. They all went into the lunar module, which was only meant to support two people for just enough time to go down to the moon, walk around, and come back up. They ran into this problem where they didn't have a way of getting rid of the carbon dioxide in the air, and if they didn't find a way to filter it, they would all asphyxiate. Now, the lunar module had a way of scrubbing the carbon dioxide from the air that was specific to the lunar module, and the command module had the same function. They had plenty of filters for the command module, because it was meant to last the entire trip, but they didn't have a lot of filters for the lunar module, which meant they didn't have the pieces they needed to get the carbon dioxide out of the air, and that was going to cause them to asphyxiate. So I like to think of these as basically two different monoliths. They are two different spaceships docked together, and they have a very monolithic approach to design,
which means each ship did the same jobs in its own incompatible way. Now, if you haven't seen Apollo 13 the movie, it's super good, you should totally watch it, and my favorite part is when an engineer at NASA says, "we need to make this fit into a hole made for this, using only this." What they're doing is this: they have all of these filters for the command module that they could use to scrub the carbon dioxide out of the air, but they don't fit inside the system the lunar module uses. This is an example of monolithic thinking, right? They built a specific, different oxygen-scrubbing system for each monolith, and when they needed to use one with the other, they couldn't without improvising. Of course they saved the day; the brilliant engineers figured out how to make an adapter between these systems. But in the software world, ideally, let's use the same component to do all of the jobs; let's not reinvent the wheel for every different service that we have. So this is my analogy for microservices, and I think it's a good way to set up Vidya's experience at Expedia: the reason that you and the whole team decided to move to microservices, and how you tried to approach it from a CI/CD perspective. So Vidya, why don't you take us through your experience?

Yeah, I love that example, because while we were not doing spaceships or trying to get to the moon, it felt like it at the time. This project was pretty difficult: bringing together multiple monoliths, which were all mergers and acquisitions of various travel products. I'm just using the example of cars here, but it's equally applicable to flights and hotels. There were three different products all trying to do the same thing; they all had car search, as an example, but a different UI and a different user experience as a result. That was a great reason to move to microservices, because now these were all part of the same company; there was no
reason for the duplication, as well as, you know, supporting phone apps and so on. So the microservices were split up: for this car search there was a different service for search and for sort, and the booking service was built so that it could be used across businesses, not just for cars. In this process, if you look at the approach we took, which is on the next slide, there is the consolidation of code bases going on on the one side, but what we really needed to do was bring together the platform, whether for continuous integration, continuous deployment, or everything else that goes with a software organization, like logging and monitoring. We had to build shared services, and remember, this was a few years ago, before Spring Boot even had some of these capabilities built in, so the standardization required a lot of in-house work to build those shared libraries. At the same time we were also trying to migrate to the cloud from on-prem, and we relied heavily on manual integration testing throughout this process. That was the approach being taken to bring all of these monoliths together and split them into microservices. Obviously, along the way we faced a lot of issues. The teams were geographically distributed; that's always a problem, but at the same time you need to build tool sets that alleviate that problem. I don't know how many of you in this session are in situations where the build is happening, the CI is happening, but the consumers or the integration testing are in another zone, and you need more automation to bring all of this together; that's the only way for geographically distributed teams to be successful. So we'd love to hear some feedback on your situation there. The tools also had to be consolidated: even though people were generally on Jenkins, in some cases there were different instances, different versions of Jenkins, different versions of Nexus, and one team was entirely using homegrown proprietary tooling. I mean, this is
we are talking about the original Expedia codebase from back when it was part of Microsoft, so all of these needed to be consolidated. And the pipelines, as a result: in this new world, when we were moving to microservices, with each microservice having three to four branches at least, there was an explosion of CI/CD pipelines. Even a hundred is a lot, but at least that was manageable across the teams; with the move to microservices it meant that we were constantly needing to update, because these pipelines were not built to be modular, and they were not as reusable as we would have liked them to be. On top of that, there were the general Jenkins issues with master and slave, the slowness, and the contention of everything running within the Jenkins master for the CI pipelines, slowing us down. Many of the pitfalls were caused by what we used to call the "copy-pasta" problem: bad patterns of copy-pasting, and copy-pasting works only to a certain extent across microservices. With the teams all trying to migrate to the new code base and build the CI/CD pipelines at the same time, this was becoming really hard for a central team, which was my team's responsibility. As a director of engineering, I was staring at this global organization of engineers constantly asking my team to keep up, and we were becoming the bottleneck, because the pipelines were not able to scale to the needs of the global migration. We took the approach shown here, the main part being: let's rely on Maven's modularity, which helped a lot; Maven does help bring in some best practices. Some of this was happening in the pre-container era; we weren't on any form of containers, since Docker wasn't popular yet, even though containers existed before it, so we weren't able to leverage some of the best practices that have become available since then, and later Dan will be sharing some of those best practices with containers as well. The net-net is that there were a lot of lessons learned, which we have on the next slide, through these issues we faced. The
lessons really made us realize that the engineering teams building the microservices should have been able to prioritize CI/CD templating as a top business need. It's no different from building out the car search; they are not two separate problems, and it doesn't belong in another organization. To me that's a very key lesson, because relying on a central team to bring all of that together for the rest of the organization was significantly slowing down the adoption and standardization of CI/CD. And later, as part of what Dan is sharing, you will see that at the time we were building all of this, which was a few years ago, either the tooling wasn't available or the best practices weren't established. It was a chicken-and-egg problem of build versus buy: we were trying to build this whole CI/CD platform while all the microservices were being stood up by a variety of teams globally. So, to really bootstrap these projects, which is my second bullet point of recommendations, you need to externalize that CI/CD thinking to the best of your capabilities, but at the same time give it very high priority. Don't let it struggle while you say, "I need to take care of my backlog item, which is to build the sort service, so let me wait on building out CI/CD." In hindsight, a reusability approach through a buy model could also have been quite helpful, because instead we were trying to keep up with the rest of the teams by building all of this ourselves. I'm hoping that through the rest of the webinar today, Dan will show some of the best practices which we approximated in Maven but are now able to provide through Codefresh.

Perfect, thank you, Vidya. Yeah, I think the experience you had at Expedia really mirrors what we've seen from tons of companies, and it makes intuitive sense,
right? As you go to microservices, using shared libraries is pretty standard procedure, but as you mentioned, it does lead to a number of problems. So I'm going to show you how we've actually done this for Codefresh itself, which runs on microservices; this is going to be a look at our internal engineering processes as the backdrop. First, just to reiterate: when you're using a monolithic application, you typically have a pipeline, or several pipelines, for just that application, and it's okay that it's complex or difficult to maintain. It's okay that it's fragile, because at the end of the day you only have one of them; you only have one monolith. It's not a big deal for the process to not be scalable and super easy to use, because you don't need it to be; you only have one monolithic application. So you usually have it led by a single team, and I marked here that this is actually kind of anti-DevOps. I marked that because it basically means the engineers don't have as much input into the deployment process as they should; it's essentially one team that dictates how everybody is going to work. That's not a great way to work, and ideally you can find a better way when it comes to microservices. Now, to understand the problem: if we took this same approach and carried it into microservices, we would get into big trouble really fast. Let's say we have three monolithic applications and we're going to split each of those up into several microservices; all of a sudden we have exponentially grown the number of pipelines that we need to maintain. With our original approach, where a single team manages the pipeline for everybody, is this going to work? No, it's going to be a nightmare, right? That's what Vidya found when this was the approach at Expedia, as she said: thousands of pipelines, and if something broke it was very troublesome, hard to maintain, hard to scale, hard to keep
everybody working. Now, the whole point of CI/CD pipelines is that we want to keep people working smoothly, so they can just focus on writing code and don't have to worry about the process. So this shared pipeline segments approach is not a great solution, because it relies on shared libraries. Many of you, based on the poll results here, have implemented Jenkins, and you've gone through the process of adding plugins and such. The problem with those things is that they all run globally. So if you have something like, let's say, kubectl, and you need to have that installed, well, it relies on Python 2, and then you have a test suite that relies on Python 3, and all of a sudden you have a version conflict; you have a problem. What often ends up happening with these shared libraries is that they rely on each other in complex ways, and different teams need different versions, so one team upgrades their library and it breaks everybody else's pipelines. And everybody knows this: if you've ever upgraded a plugin in Jenkins, you've experienced this problem, where you go to upgrade a plugin and it turns out several other things rely on it but don't work with the new version, and whoops, we just broke somebody's pipeline over here. This is a really big problem, and what we end up seeing is that teams that try to keep the monolithic approach to pipelines, if they're doing it on Jenkins, end up spawning Jenkins instances. You proliferate Jenkins all over the place: instead of having one Jenkins instance, all of a sudden you have 200 Jenkins instances, one for each different microservice or team, because they have different tools and different configuration. And it's because the shared library system is a broken and bad model; it's a really, really wrong way to do it. Now, the idea of having reusable
components, that's a great idea; it's just that shared libraries are a bad way to deliver it. This is what Vidya was sharing earlier when she talked about how plugins would break all the time and the central team couldn't maintain the libraries. If you needed to upgrade a library, you had to go talk to an admin, they had to approve it, then they would upgrade it, and this would cause other people's stuff to break. It's a bad way to do it; it doesn't make sense. The shared library approach looks like the idea of microservices on its face, but really it's a monolithic approach: it says everything needs to live in the same application. That's not the way microservices are meant to work. Microservices are meant to do everything they need to do within their own box and nothing else; they don't affect other things, they just do their one job. So that's something we can actually solve. Organizing pipelines the way you would for monolithic applications is a big fail: it requires that everyone use the same versions of the libraries, it leads to all the issues we just talked about, it causes a lot of stability problems, and at the end of the day your productivity falls in the toilet. It's really, really hard to pull off. If you disagree, shout it in the chat. All right, so how do we do this? We're going to talk about three lessons that we took when we looked at the model that Expedia had followed, did a lessons-learned, and then built Codefresh for microservices from the ground up. Now, the nice thing about Codefresh is that when we built our platform, we built it for microservices from day one, so we never had to go through the migration pain, which obviously causes some of these extra issues. We're going to show three essential items. Now, to understand the usefulness of the
items, I just wanted to share this: this is the Codefresh stack. Now, this actually isn't everything; we have a lot more microservices running than just this, but it gives you an idea of how we work. We have a complex microservice-based architecture, we have dozens and dozens of microservices, we support a ton of different runtime environments, and we've got tons of integrations. We have a built-in Docker registry, even a built-in Helm chart repository. We have services for authorization, user management, role-based access controls, audit logs; we have all these different systems and services to maintain, and as a team we don't have thousands of engineers to work on all this, so we need to be really efficient in how we build and maintain this service. So the first trick I want to share as a best practice is that you should use container-based pipelines. What does this mean? It means that for each step in your pipeline, you have all of the essential components boiled down into a single container. Those containers can be self-service, so you can grab any image you want to use, stick it in your pipeline, and run it, and these images do not rely on each other. The big benefit of containers is isolation, so we have isolation built into each step: each Docker image does only the job it's supposed to do and nothing more, then moves on to the next one. This is a microservice-based approach to pipelines: rely on container-based pipelines. Now, to pull this off in a really great way, the thing we actually built into Codefresh as a platform is a shared volume, so that every container, even though it's operating in isolation, accesses the same shared volume. If I do something like a Git checkout in one step, in the next step I can do my build, because both steps have access to the same volume even though they're separate containers.
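As a rough sketch of what that looks like in practice (the step names, images, repository, and script below are illustrative assumptions, not our actual pipeline), a container-based pipeline with a shared volume might be written like this:

```yaml
# Illustrative container-based pipeline: each step runs in its own image,
# so a Python 2 tool and a Python 3 test suite never conflict, while the
# shared volume lets later steps see the code checked out earlier.
version: "1.0"
steps:
  clone:
    title: Checking out source
    type: git-clone                   # a typed step: a Docker image that knows one job
    repo: my-org/my-service           # hypothetical repository
    revision: "${{CF_BRANCH}}"
  legacy_tool:
    title: Running a Python 2 utility
    image: python:2.7                 # only this step sees Python 2
    working_directory: "${{clone}}"   # the clone from the previous step, via the shared volume
    commands:
      - python legacy_check.py        # hypothetical script from the repo
  tests:
    title: Running the test suite
    image: python:3.8                 # a different runtime, no version conflict
    working_directory: "${{clone}}"
    commands:
      - python -m pytest
```

Because isolation lives at the step level rather than in globally installed plugins, upgrading the image in one step cannot break any other step or any other team's pipeline.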
This is a fantastic setup, because it also means we have caching built in with that volume: we just cache the volume, and we've built a lot of rules around how the volume is handled, so you don't have to write anything to handle caching. It's done automatically; all the optimization is done automatically, which is nice. Now, to support this we actually have an open-source library of steps that you can use; it's at steps.codefresh.io. I just want to show you some of these steps super fast. These are all Docker images with a schema; that's what they are. If we look at some of these, you can see there are some official ones; these are built and maintained by Codefresh. Now, to be in this library at all, a step does need to go through a qualification process and a security scan, so these should all be considered safe. The official ones are ones that Codefresh builds and maintains, but looking at a community-maintained one, for example, this one is a Kubernetes canary deployment; it does canary releases to Kubernetes. You can see it's listed as a typed step, and it just takes some arguments to do its job, and if we look at the source code you'll be able to see what image it's using. You can actually just pull and use this Docker image, and the same goes for all of these: you can pull these Docker images and use them directly. If you look at this example here, it shows the Docker image being used; it's coming straight from Docker Hub, so you can go and examine it yourself. What we've done here is that in this step you have everything you need in order to do a canary release to Kubernetes, so I can reuse this step over and over again, and I can select the version that I want to use independently of anybody else. So instead of referencing a shared library, which is being maintained and changed and has to work well with all of the other shared libraries, I'm
referencing an artifact that knows how to do a job, and it runs completely isolated. This means it's so much easier to build my pipelines, because I'm just cobbling together containers, plugging them together, and it's good to go. If I needed to maintain thousands of pipelines, this would make my life a million times easier. Now, I'm not saying you should, because we actually have a way to eliminate a lot of those pipelines, but if you did need to, this would make the maintenance and care of those pipelines much, much easier. So you can see there's this whole library; you're welcome to use these steps, they're free, they're open source. Most of them don't even rely on Codefresh; they're basically just Docker images that take arguments, so you could use them locally, you could use them in your own tools, and we encourage you to do that. We're a big believer in open source, and that's one of the contributions we've made. All right, let's look at the next component. I see somebody asked a question; my Q&A isn't pulling up for some reason, it's hiding out somewhere. Yes: each pipeline step is its own separate container, and you can create your own custom steps and use them in your pipeline, which I'll show you in just a minute. The only knowledge you need to make your own custom step is how to make a Dockerfile; that's basically it. There's nothing special, and you don't need to know a Codefresh API; it's just knowing how to make a Docker image that does a job. So let's come back, and I will show you this a little more in depth in our demo portion in just a minute. Okay, so the second component is that rather than making pipeline templates, you can use a single pipeline that operates in different contexts. Now, what's the difference between a single pipeline and a single template?
Well, a template is something you give to people, and they take it and adapt it, and the problem with that is you get version creep: people start doing different things, they drift off base, and when you try to update the template they don't get the changes. Templates are really just for getting started. In this case, we're actually going to use a single pipeline to do all of our CI tasks, and the great thing is that this works really well for any microservices that are uniform; as you scale, you actually do want to make your microservices uniform. Now, when I say uniform, does that mean they have to use the same language? No, not necessarily. But if you use a container plus something like a Helm package, etc., you actually have a very standard way of operating with these artifacts, and then we just change the behavior of the pipeline based on the context. So what comes with the context? Here I've got a pipeline with a number of steps that are going to run, and you can see the way the trigger works: it brings a code base, and that code base can contain tests, a Docker Compose file, a Helm chart, dependencies; it can contain everything we need, so that when the pipeline checks out that code, it knows which tests to run, it knows where it's going to be deployed, it knows all those components. This carries the context, and I'm going to show you this in the demo. Now, I'm actually going to open up Codefresh, and we use Codefresh to build Codefresh, so this is Codefresh on top of Codefresh; it's a circular thing, but that's how it works. I'm going to pull up one of our most popular pipelines. This is a project we have called codefresh-io/pipelines, where we keep a lot of shared pipelines, and this is a single pipeline that's used for lots of different microservices.
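A minimal sketch of such a context-driven pipeline might look like the following (the variables, image, and commands are illustrative assumptions, not the exact pipeline being demoed):

```yaml
# One CI pipeline shared by many microservices: only the owner is
# hard-coded, and the repository name is injected by whichever trigger fired.
version: "1.0"
steps:
  main_clone:
    title: Cloning the triggering repository
    type: git-clone
    repo: codefresh-io/${{CF_REPO_NAME}}   # repo name comes from the trigger context
    revision: "${{CF_REVISION}}"
  unit_tests:
    title: Running whatever tests the repo carries
    image: node:12
    working_directory: "${{main_clone}}"
    commands:
      # the checked-out code base supplies its own context:
      # its tests, its docker-compose file, its Helm chart
      - yarn install
      - yarn test
```

Adding a new microservice then means adding a trigger, not writing a new pipeline: the same steps run, and the checked-out repository decides what actually happens.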
We actually have two of them: one for CI and one for CD, one for continuous integration, one for continuous deployment. I'm going to pull up the CI pipeline and show you how it operates, and this really is a fantastic way to build a pipeline, because it's so much easier to maintain, and it makes it really easy to add new projects and get them up and running very quickly. My computer's been running a little slow today, so any slow loading I think is just on my end; let me know if you have a different experience. I had some weird background tasks running; I think I might be Bitcoin mining, I don't know, but I've got to figure that out afterwards, sorry about that. All right, so this is the view of a pipeline, and what we're going to see over here on the left is the definition of this pipeline. This is actually coming from Git, so you can see it's referencing a code repository that we have; it's a private repository, and we have a ci-node YAML. This is CI for Node applications. Now, does this mean all of our services are Node? No, but we managed to group a few dozen of our microservices into a single pipeline, and that eliminates a whole bunch of pipelines we would otherwise have to maintain. Now, the first thing you see in this pipeline, the first step, is a typed step: that git-clone step. We saw that in the Codefresh steps library a minute ago, so this is a standard step; it's basically a Docker image that knows how to do a Git checkout, and it adds variables and things like that. You can see one of the arguments we have is the repo, and the repo owner we have hard-coded, saying only check out stuff from codefresh-io. That's a good little security measure, though we could probably just always infer it because of the triggers and the way they're set up. Then it specifies the repo name, and this repo name comes from the trigger, so over here on the
right-hand side, you can see I've got a whole bunch of triggers, and basically these are all push commits to different repos, different applications; these are all just different microservices that we have. Each one that pushes in pulls the context automatically and checks out the correct code base, and from there all of the other components are sitting in that codebase: all the tests, the Helm chart, all those components are sitting in the codebase for each of the different microservices, and they can operate the way they need to automatically. So when we go to add a new project, the only thing we have to do is add a trigger; once we have the trigger, it'll automatically check the project out and follow the rest of the process. Obviously, every project that uses this shared pipeline needs to have a Docker image, it needs to have a Helm chart, and it needs to have tests that can run; I'll show you some of the standard steps we use here. So this shows all these different triggers feeding in. Now, if you were using a monorepo, that's totally fine, because we would still have separate triggers for each project, except, let me pull up my interface here, what I was going to show is that when you create a Git trigger there is actually a little option where you can set a filter for a monorepo. You can basically say: only trigger this when something in this subfolder changes. So that's a nice thing you can do if you're using a monorepo, and if you're not using a monorepo that's fine too; either way works. Okay, so this CI pipeline, we're going to go look at a build; it handles the job, it's one pipeline that handles tons of microservices, which is a really fantastic way to do this. Now, this platform actually runs really fast, but right now Zoom is literally taking 150 percent CPU, it's crazy. All right, so I can see all of my running pipelines.
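For the monorepo case just mentioned, the filter lives on the trigger rather than in the pipeline itself. As an illustrative sketch only (the field names and repository below are assumptions about how such a trigger definition might look, since this is normally configured in the UI, and they are not exact Codefresh syntax):

```yaml
# Hypothetical trigger definition: fire the shared CI pipeline only
# when files under one service's subfolder change in the monorepo.
trigger:
  repo: codefresh-io/travel-monorepo              # hypothetical monorepo
  events:
    - push
  modified_files_glob: "services/car-search/**"   # only this subfolder triggers a build
  pipeline: ci-node                               # the shared CI pipeline
```

Each service in the monorepo gets its own trigger with its own glob, so one repository can still feed many independent build contexts into the same pipeline.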
Here you can see that we have tons that have been running today; we're pretty busy, we've got all of our engineers working. So I'm going to open up this random pipeline; I haven't looked at this one yet, but it's the same pipeline, just operating with a different context. I can see this one is actually working on end-to-end tests as part of the CF-DYA repository, so that tells me the context the pipeline is operating in. The first thing it does is that git-clone checkout, and then these next two steps basically just pull variables out of that repo and load them in, so it knows what to do. It validates the version, and it can optionally create a pull request if that's needed. If we look at the YAML here, it's going to tell us what it needs to run: it's going to install the test dependencies, and this is using yarn because these are Node services. If you were using Java, of course, you would have a Java test suite here, and you could have these all sitting in the same pipeline; it just needs to know to look for each one, so we can say, "is this a Java project? Is this a Node project?" and still use the same pipeline. Then we run some linting and some unit tests, and here we actually execute a composition. In this particular run it's skipped, but the composition will take the Docker Compose file sitting inside the repository and spin up the required services to do integration testing, so this is a great way to handle microservice dependency testing. We do a security scan; in our library we have Twistlock, we have Aqua, we have several others, and we strongly recommend doing security scans by default on all the images that you create. In this case we actually have a flag set up, and it's flagging that there is a minor vulnerability in this image that should be addressed, but it's not enough to keep
us from deploying we generate some test reporting we build our docker image and we push the image up to a home registry and we build a home chart now these last two steps are especially important because these are the things that are going to trigger our deployment pipeline now I want to back up really quick I'm gonna just look at another one because as you can see on all these different pipelines if they have a test suite you can actually see the test report right here so in this case we're gonna jump in and look at what tests are running for this service again I haven't looked at this specific service so we'll see what's in here so this one has about 3,000 different test cases excuse me that it runs if we look at the test Suites we'll see that there are a whole bunch of different test Suites running here things to test different api's things to test different integrations all these different components basically making sure that everything is working smoothly and if we look at our graphs we can actually see that the the trend has been that the time for these tests has typically been pretty fast and and it's been running smoothly so this is a this is a great test report we're pretty happy with this service now coming back over so we've created this single CI pipeline we have all these different triggers that feed into it now when it comes to deployment we actually handle that with a different pipeline so at the very end of this we built a help akka jand we push it up into we actually save it in our our help chart repository now if you're not using help that's fine but these principles are gonna stay the same regardless so I'm gonna come back to my pipeline view and we're gonna go and look at the CD pipeline I'll show you how we do this now are all of our micro services in this one pipeline no but it does cover a large swath of micro services and I have I've seen basically customers using this where they have 6070 micro services and they always always always use 
one pipeline, and they just maintain it. Now, that's really cool, because unlike the monolithic pipelines that we saw, these are easier to maintain, branching is easier, and if somebody did need to pull out and replicate functionality, they can do that with the Docker-based pipeline approach, which is going to be more scalable.

So I'm going to switch to the CD pipeline here, and what you're going to see is that the triggers change: instead of looking at Git commits, it's going to look at artifact changes. In this case we're building a Helm chart and pushing it up into the Helm chart registry, and once that Helm chart is pushed, that triggers the continuous delivery pipeline to actually go out and deploy. So here on this side, these are actually triggers on our Helm chart repositories, and that's what's going to trigger the deployment. If we go and look at the builds, of course we have everybody using the same pipeline again. Now let's look at a build and I'll show you what's actually going on in the pipeline. You can see the last time we deployed to production was two hours ago, and there's also a test report for this; if we looked at it, it would show us the results of the tests that ran after we went to production.

I'm going to open up this pipeline, and here we can see the first thing it does is spin up a test environment. What this basically does is deploy a one-off instance of the Helm chart with all its dependencies, so we can run tests on it. It has a little wait step and a deployment status check, where it double-checks to make sure there are no deployments going on; if there are, it won't start a deployment, because you don't want to deploy over the top of something that's being deployed at the same time. Then we have a manual approval step in some cases: if it's required, this will trigger a Slack alert and we'll basically get an approve or deny. In this case it wasn't needed, because it didn't meet the conditions. If we look at the YAML, we can see what that condition was (I think I'm zoomed in a little bit and it's causing a little bit of grief; there we go). Okay, so you can see in this case that it only fires when it's going to production, and only if there's some sort of invalid deployment state from the last run. This prevents us from trying to deploy over something that's broken and making it any more broken. Then we do our deployment, we send some notifications, and we run our end-to-end testing. Each one of these steps that you see here is running in its own container, so this is a fantastic way to work. You can see that we're running end-to-end tests on our deployed environment, so we can validate that everything is running properly. So we have a CI pipeline, we have a CD pipeline, and this covers a large swath of our microservices at Codefresh.

Now, one of the things I was going to show really quick, if I go back to my CD pipeline: one of the questions asked earlier was about creating your own custom steps, your own custom images, so I'm just going to show you how this looks in Codefresh. If I go over here, there's this Steps tab, and this shows me everything that's available at steps.codefresh.io, as well as any custom steps that I've made. So if I open that up (give it just a second here; what a bummer to have a slow computer), here are the steps. This shows me my whole library. Let's say that I wanted to pull something from Vault: I can just filter this, and it shows me that I've got Vault right here. I can grab this and drop it in, and this is basically a Docker image with a schema; it will create my steps here for me, and then I have this "my steps" section here
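As a rough sketch of what such a shared pipeline can look like, here is a minimal Codefresh YAML spec. The step names and exact steps are illustrative, not the pipeline from the demo, but the `${{CF_*}}` trigger variables and the `git-clone`, `build`, and `pending-approval` step types are standard Codefresh constructs:

```yaml
version: "1.0"
steps:
  main_clone:
    title: Clone whichever repo fired the trigger
    type: git-clone
    repo: "${{CF_REPO_OWNER}}/${{CF_REPO_NAME}}"  # filled in from the trigger context
    revision: "${{CF_REVISION}}"
  unit_tests:
    title: Run unit tests
    image: node:12                                # swap per project type (e.g. Maven for Java)
    working_directory: "${{main_clone}}"
    commands:
      - yarn install
      - yarn test
  build_image:
    title: Build and push the Docker image
    type: build
    image_name: "${{CF_REPO_NAME}}"
    tag: "${{CF_SHORT_REVISION}}"
  approve_production:
    title: Manual approval before production
    type: pending-approval
    when:
      condition:
        all:
          isMaster: "'${{CF_BRANCH}}' == 'master'"  # only gate deploys from master
```

Because every repo-specific value arrives through trigger variables, the same spec serves any number of microservices; onboarding a new service really is just adding a trigger.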
Now, I actually don't have any in here, but I can create my own custom steps to share with my team, and they will all sit inside of here, so when they go to create a pipeline, those steps are available. So this shows both of those principles reasonably well.

The third thing I'll talk briefly about before we come up on the end here: the other thing we recommend as you move to microservices is that you start doing things like canary releases. The principle of a canary is fairly simple: basically, you deploy a new version of your application alongside the current production version, and you send a portion of your traffic to that new version in order to validate and test it. The reason you do this is that testing early, running unit tests and even integration tests, is most useful when your infrastructure is less complex. As your infrastructure gets more complex, say you have 300 microservices, it becomes difficult to spin up the entire stack and run all of the tests for every single change, because it just gets costly; it's prohibitive. So what you can do instead is use canary releases. You'll do your normal testing and validate everything you can before you go to deployment, but when you do deploy, you add the additional step of only giving the new version to a percentage of your traffic, and if there is an issue, you automatically roll back. This prevents downtime, allows you to move a lot faster, and gives you a lot more confidence when you deploy that you're not going to have an outage or some big regression that causes big pain for users. If you want to learn more about canary, we have a whole webinar just on this at codefresh.io/events; search for "canary" there and you'll see there are a couple of videos that go through this really in depth, from a technical perspective, on how to pull it off. But that's the principle.

So, in summary: shared pipelines work a lot better than shared libraries, because you can maintain one pipeline and everybody can use it, pulling in the context of whatever it needs to operate on, whereas shared libraries are always stepping on each other or causing issues; it's a big problem. Reusable Docker images are way better than the copy-paste problem, because you're not copying and pasting code; you're reusing a step that's already been defined, and you can handle the versioning of that step independently of anybody else. So if I want to use one that's two versions behind and lock it there, I can do that, and somebody upgrading isn't going to cause me downtime. And lastly, deployment validation with canary: very, very valuable.

Now, this webinar covers a lot of topics, but if you want to go a little deeper on continuous deployment and CI/CD pipelines for microservices, there is a full blog post on this; just search for "Codefresh microservice CI best practices" and it'll come up. The link is here, and it's in the chat as well, so you can grab it there. Fantastic. So with this, let's get into questions, and I want to invite Costas onto our panel here. Costas, if you want to join, that'd be fantastic; you can help us answer some questions, and I'll emcee. We would encourage you: the principles that we talked about today aren't unique to Codefresh, and you can do them with other platforms; however, we do make it really, really easy, so please go create a free account at codefresh.io to try this stuff out. You can request a demo and you'll get a one-on-one with our solutions architects. I'm also going to put up a questionnaire while we're answering questions, basically asking what you thought about this webinar and whether it was useful for you. That helps us know if we did a good job; if we didn't, then the next time we do a webinar, hopefully we can do better
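To make the canary pattern above concrete: assuming a Kubernetes cluster with Istio installed (the tool the canary tutorials mentioned earlier use), and with hypothetical service and subset names, the weighted traffic split can be sketched as an Istio VirtualService:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service            # hypothetical service name
spec:
  hosts:
    - my-service
  http:
    - route:
        - destination:
            host: my-service
            subset: stable    # current production version
          weight: 90
        - destination:
            host: my-service
            subset: canary    # newly deployed version
          weight: 10          # send 10% of traffic to the canary
```

The `stable` and `canary` subsets would be defined in a matching DestinationRule keyed on a version label; rolling back is just setting the canary weight back to 0.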
So with that, let's take some of these questions. The first question is for you, Vidya: do you think that your pain points at Expedia would have been resolved if you had followed this shared-pipeline, container-based approach? The main pain point was that the best practices a CI/CD platform like Codefresh provides, some of which we were in fact using, had to be hand-built using Maven. Maven has a lot of modularity, so a lot of these best practices were worth following, but the effort it took for those best practices to actually be built out was really the issue we were facing. Codefresh is literally making it all available out of the box, so you can hit the ground running faster. Yeah, very fair, very fair.

The next question is for you, Costas: do you have any tips for making these Docker images, for making these steps, so that they're scalable and reusable? Hello everybody, I'm Costas, I'm also a developer advocate at Codefresh; I have been trying to answer all the questions that have been coming into the Q&A. Now, to answer this particular question: all these Docker images that we use are just generic Docker images. You can actually find the source code for all the plugins that you have seen, and you can see that they are not specific to Codefresh, so even though it's very easy to use them in Codefresh, they are not tied to the platform. You can create a Docker image with any programming language that you want, any tool that you want; you could write a Codefresh plugin in Haskell if you'd like. My general advice would be to keep it simple: maybe package a single executable. Maybe you already have a CLI for the tool that you want to integrate, so the very natural candidate is to just package the CLI in the Docker image and execute it from there. We see that most external platforms either have a CLI already there for you or they have an API, so try, let's say, the simplest thing possible first: just package it and use it. But of course the Docker image can be as complex as you want, so if you have many dependencies you can package them there.

Yeah, perfect. We've actually seen this; this is a model that we've been building on for the last three years, and it is just night and day, so much better, to use container-based pipelines. I always say it's like building pipelines with Legos instead of trying to fashion them out of clay. Once you've created that artifact, you can just use it, and you know it's going to be reliable; it's going to be what you want.

So the next question: shared pipelines are better than shared libraries, but how do you maintain your pipelines? Are there some auto-updating mechanisms? Well, as far as I know, we don't have anything automatic there. If you have seen the demo, most of the services actually share two pipelines, the CI and the CD pipeline, so even though the number of microservices is big, and we have many microservices as a platform, the number of pipelines that we have in the end is just two, maybe with some additional pipelines. And that's the whole point of all this discussion: to reduce the number of pipelines that you have to manage. If you have only two, there is nothing to auto-update; you can just go there and change it. Yeah. Now, from the perspective of maintaining steps: you can specify a version of a step that you want to use, and if you don't specify the version, of course, you'll be using the latest. So depending on what you're doing, what the step is, and who's maintaining it, you may want to version those, but in general I think people will just use the latest version, and that works pretty well, because those Docker images are so tied to the specific job that they do. The one use case I can think of where it is really different is with Helm or with Terraform, because they don't like mixing versions. So if I'm using the Terraform image, I want to specify the version that I want to use, and the same thing with Helm, so it doesn't change; it's just going to work better if I can specify the version. That's something you can do and maintain for your own pipelines.

The next question was: do you have any examples of pipeline step templates that use variable replacement for code reuse? I want to say that on our home page there are, I think, seven or eight different examples of pipelines, and they use variables pretty heavily, because they're all designed so you could just throw them into your own workspace and have them operate for you, as long as the triggers are there. So that's a great question. Anything you guys want to add to that? No? Okay.

Another question: do you keep static Docker images, since we have been seeing changes in Docker Hub, Python images that change within the current release? I think the answer here is that the plugins themselves are open source, they are public, and all of them are on Docker Hub, but you are free, in your own organization, to use your private Docker images instead, and there you can version them as you want and keep whatever versions you need, and you can mix and match. So you can have a pipeline that is using public Docker images, maybe for Python, but then for Node you use your own Docker images that exist only in your private registry. And if you have seen the slide: a Codefresh account automatically comes with a private Docker registry, so it's very easy, out of the box, to create your own steps, store them there, and keep them private, just for you. Yeah, very good. And when it comes to the versioning of the steps and images, I think there may be two different strategies. For something like Terraform, when you set a version on that image, if the image were "terraform" and the tag were, say, "1.2", what you're really saying is "this is the image to use if you're using Terraform 1.2," and so that one could be updated; you could have additional point tags underneath that if you really wanted. But then there are other cases, like for example with the canary release step, where new versions we would make would be specifically around functionality or bug fixes or things like that, so they would typically be upgraded based on that. That's a little bit of an incomplete answer; hopefully it's enough, and if not, shout in the chat.

The next question was: will Codefresh work with other tools, like Jenkins, like TeamCity? So, Codefresh is a CI and CD platform. We actually do have an integration with Jenkins: if you want to trigger Codefresh jobs from Jenkins you can, and if you want to trigger Jenkins jobs from Codefresh you can; truthfully, that's not very popular, and typically people replace Jenkins with Codefresh. With TeamCity, there is an integration with the Git repository component, so if you make changes you can set up your triggers to the Git repos that you house. I think they're now called Azure DevOps Repos or something; I can't keep up with their naming conventions. Of course it works with Bitbucket, with GitHub, with GitLab; it works with all those things, so there are a ton of integrations for Codefresh that are going to work pretty seamlessly.

There was a good question, and I think you already answered it in the chat, Costas: what's the best way to get started if I want to try Codefresh, and what should I look for? My first reaction is that if you've got more than ten engineers in your company, you actually qualify for a solutions architect, so you can have a one-on-one meeting with our solutions architects, who will help guide you through the onboarding process.
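On the version-pinning point from a moment ago: in a Codefresh freestyle step, pinning the tool is just a matter of referencing an explicit image tag rather than floating on latest. A small illustrative sketch (the step name, working directory, and chosen tag are assumptions, not taken from the demo):

```yaml
version: "1.0"
steps:
  plan_infra:
    title: Terraform plan with a pinned tool version
    image: hashicorp/terraform:0.12.29   # explicit tag, never :latest
    working_directory: "${{main_clone}}"
    commands:
      - terraform init
      - terraform plan
```

The same applies to a Helm step: pin the tooling image to the version your charts are written against, so someone else upgrading the shared step can't break your deploys.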
Costas, what would you add to that? We also have some quick-start guides, and I actually linked one, so if you want to start exploring on your own before talking to your team, you can do that as well. We have tried to add guides for several programming languages, so you can ask your developers what exactly they are doing and find a guide that is specific to the programming language that you know and love. But of course you can contact us and ask for more details if you have an unusual use case that we are not documenting yet. Yeah, very good.

Okay, now there's another question: will Codefresh add an integration with Terraform Cloud for state file storage? I actually haven't set that up before. You can definitely do state file storage with Codefresh; using the built-in storage mechanism, so the state is stored on Codefresh, would be my first thought. And Terraform Cloud, I believe, is accessible via the Terraform CLI, for which there is an integration in Codefresh, so I believe you'd be able to just pull it in. But do either of you have more to add on that question? I haven't checked if the CLI has it, but if it exists there, you can use it right now, today. There is already a tutorial on using Terraform with Codefresh, and I will link to it; you can see that you can run the CLI as you would run it normally. So if you have some specific requirement regarding the state file and where it's stored, you can just use your own infrastructure. Whether Codefresh will add infrastructure for this itself in the future, I don't know; maybe we could do that as well. Yeah, good question; we'll take that as some food for thought for us to look at.

Let's see, it looks like we've answered most of the questions. I'll call out one more before we end here: there was a question about whether this would work with OpenShift, and in fact, everything we showed you here today would work just fine with OpenShift. Codefresh has excellent integrations into Kubernetes, into serverless, into all the cloud-native stuff, but we also have people that build Android apps using Codefresh, we have people that deploy plain Terraform, we have people that deploy to ECS, we have people that deploy to DC/OS and Mesosphere, so it's very, very flexible. The great thing about this container-based pipeline approach is that adding functionality is incredibly easy. I'll tell you a story and we'll end here. I was at a meetup giving a talk one time with JFrog, and beforehand the JFrog guys said, you know, it'd be really cool if your demo pushed to Artifactory (this was a few years ago; we have a fantastic Artifactory integration now). I said, well, we've got ten minutes before we're supposed to talk; let's go build a Docker image and see what happens. We built an integration for Artifactory in less than ten minutes, added it to the presentation, and then demoed it right there, because all I had to do was take the JFrog CLI, stick it in a Docker image (super easy), throw it into my pipeline, and it was good to go, and now people can use it.

So thank you so much; I think we're going to end here. We put a lot of links in the chat, and of course, because you came today, we're going to send you a copy of this presentation; you'll get that, and all the links will be there. As far as the Q&A capture, there was a question about whether we were going to be able to capture the Q&A. Taryn, let's see if we can add some of that Q&A to the follow-up blog post where we post the video, once this is complete, so that people can refer back to it. But with that, Vidya, Costas, any parting words from you?

I think it's important to understand that this one-to-one mapping between a pipeline and a repository is something that Jenkins, let's say, introduced, but it doesn't have to be this way. You need to wrap your mind around this and say, no, it doesn't have to work like this: you can have a single pipeline that works with many repositories, and for microservices this is the best way to do it. Yeah, and I'd say really think in terms of how to reduce bootstrap time, to maximize the resources you have and leverage as much as possible out of the box; some of these things, the steps, the canary deployment, blue-green deployment, those are all available no matter what your business is doing. So try to think in terms of reducing bootstrap time and leveraging that.

Perfect, excellent. Thank you, Vidya; thank you, Costas, for joining the Q&A panel; thank you, Taryn, for organizing everything, you're the best; and thank you, everybody, for joining. This was a really great session and really great Q&A. By the way, by implementing this stuff that we talked about today, we've seen organizations speed up their engineering in many cases literally 20 times. I know that sounds crazy, but that's a true stat, and that's actually underselling it, if anything. So this stuff makes a huge, huge difference in how effective your engineering organization is. With that, we'll sign off. Thank you very much; we'll see you soon. Bye.

[Music]
Info
Channel: Codefresh
Views: 3,784
Rating: 4.46 out of 5
Keywords: CICD, Microservices, Containers, DevOps, Kubernetes, Codefresh, CodefreshLive
Id: TAP8vVbsBXQ
Length: 61min 22sec (3682 seconds)
Published: Fri Aug 09 2019