Cloud Native DevOps Explained

Captions
I want to start by laying out an example cloud-native application that I've architected and know how to build out. So, let's start with the front-end; we'll call this the UI portion here. Below that we've got the BFF ("Back-end For Front-end"), which serves the APIs that the UI uses to surface information. So, the UI accesses the BFF and that, in turn, accesses the microservice or back-end layer. So, in here let's say "back end". Now, for higher-value services, let's say that back-end goes out to something like AI capabilities and, in addition, maybe a database. So, Matt, as the expert, I'm going to hand this off to you. This is the application architecture that I want. How do I start migrating this over to a cloud-native approach, and what are the DevOps considerations that I need to take into account?

OK. So, you've already laid out some of the separation of concerns. You've got a component that is focused on delivering a user experience, which, again, can be containerized and packaged. You've then got a back-end for front-end which is serving UI-friendly APIs and abstracting and orchestrating across a number of back-end services. So, you've got your three logical points. Moving forward, what you typically do is take each component and run it through a pipeline that gives you some discipline around how you build, deploy, and test. So, what we typically do here is use DevOps to create a pipeline, and this pipeline consists of a number of stages that take us through the lifecycle of building and packaging the component. Typically the first step is to clone the code from your source code management, which is usually Git or a Git-based service such as GitHub or GitLab, and then the next step is to build the app. So, "Build App". When you're actually building the application, there are considerations per stack: for a Node.js app you have things like npm; for Java you have to figure out the corresponding build process.

So, the pipeline is configured to build each one of these components based on the programming language? Right. Typically you have one pipeline per component and, as you correctly stated, if you're building a UI and it's got React in it, you're going to use webpack to build the UI TypeScript code and package it into a form that's ready to run. Again, with a Spring Boot app you'll package it using Maven or Gradle, and with Node.js you'd use npm, and so on. So, this part of the pipeline is about packaging the source code in the way that it's needed to then be run. Then, typically, the next step at this point is to run a set of tests. So, you run a set of unit tests against the code and you validate code coverage. This enables you to determine whether the code changes moving through the pipeline are valid. These steps run sequentially, but if any one of them fails it stops the build, you're informed as a developer, and you go back and fix the code or fix the test. So, just to clarify: at this level we're doing unit tests, so tests within the app context, not really considering connections between the different components? Yeah.
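To make those first stages concrete, here is a minimal sketch of what such a pipeline could look like for the BFF component, written in GitHub Actions syntax (the video doesn't name a specific pipeline tool, and the workflow name, Node version, and npm script names are all illustrative assumptions):

```yaml
# Hypothetical CI pipeline for the BFF component: clone -> build -> unit test.
# Tool choice (GitHub Actions) and script names are illustrative assumptions.
name: bff-ci
on:
  push:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Clone code from source code management
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Build app (npm for a Node.js component)
        run: |
          npm ci
          npm run build
      - name: Run unit tests and validate code coverage
        run: npm test -- --coverage
```

A failure in any step stops the run, matching the fail-fast behavior described above; a Java component would swap the build step for Maven or Gradle, with one such pipeline per component.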
Today we're not going to cover the integration story or performance testing, but typically when you're building a pipeline you need to test the code you've written using various techniques. You can use test-driven development, which is a concept we use in the Garage: you write the test first and then create the code that satisfies it. You can use other frameworks too; most of the major programming models have good test frameworks around them, whether it's Java, Node, or other languages. So, next step: one of the key things to drive for is to get to a point of continuous delivery. This is a continuous integration pipeline, but if you fail the tests, that's going to prevent this package of code moving into a test environment. Another common technique we use is code scanning, or vulnerability scanning, or security scanning. What we do here is look for vulnerabilities, test coverage, and quality gates. So, if your code isn't of good enough quality, from a code-analysis perspective, we can actually stop the build and say we're not going to move this microservice any further along the build process.

Right. So, if we were building out this - let's say the BFF application was a container-based application running in IKS (IBM Cloud Kubernetes Service) - we have some capabilities to allow you to test for that, right? It's the Vulnerability Advisor. So, would that exist in this phase then? So, you tested the code, then you... Yeah. Again, I'm lumping one or two different stages together here: you can do a vulnerability scan, you can do a code scan; it's a common technique to make sure. The good thing about vulnerability scanning is you're validating that there are no security holes in the Docker image, or container image, as you build it. Got it. OK. So, now that we've got up to the scanning phase, what's our next phase - where are we going?

The next step is to take the application that we built, tested, and scanned, and build it into an image. So, we call it "build image". What this is doing is using the tools to package up the code that we built and put it inside a container. And once we've built the image, we store that image in an image registry with a tagged version that goes with it. Right. So, I guess I got ahead of that right there - that's where we would actually do that vulnerability scanning: once we've tested the code itself and done some scanning at that level, once we build the image, something like Vulnerability Advisor... Right. So, you could have that as another stage, but, again, if the vulnerability scan result is poor, you can prevent this from moving forward, and that informs the developers to either upgrade the base images they're using or fix some of the packages they've included. So, basically, every step of the way, if anything fails you're notified and you can go back and fix it. Right - and at the next stage, now you have an image, and the next thing is to deploy it. What we're looking to do is take that image and deploy it inside an OpenShift-managed platform, so it will move the container from the image registry and deploy it. There are a number of different deployment techniques in use: some developers use Helm, but the more modern approach is to use operators, so there's a lifecycle around that component when it gets deployed.
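Before moving on to deployment, here is a rough sketch of the scan-and-package stages just described, continuing the same hypothetical workflow file: it builds the container image, runs a vulnerability scan (using the open-source Trivy scanner as a stand-in for something like Vulnerability Advisor), and only pushes the tagged image if the scan passes. The registry URL and image name are placeholders, and registry login is omitted:

```yaml
# Continuation of the hypothetical pipeline: build image -> scan -> push.
  package-image:
    needs: build-and-test          # only runs if build and unit tests passed
    runs-on: ubuntu-latest
    env:
      IMAGE: registry.example.com/team/bff:${{ github.sha }}   # placeholder registry and tag
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t "$IMAGE" .
      - name: Vulnerability scan (Trivy as a stand-in for Vulnerability Advisor)
        run: |
          curl -sSfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh \
            | sh -s -- -b /usr/local/bin
          trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE"
      - name: Push tagged image to the image registry   # registry login omitted for brevity
        run: docker push "$IMAGE"
```

The `--exit-code 1` flag makes the scan step fail the job when high or critical vulnerabilities are found, which is the "stop the build and inform the developers" gate described above.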
So, and then this deploy - let's say I have a Kubernetes environment - you would deploy an application, say the BFF application, into that Kubernetes environment, right? Yep. OK, and I'm guessing at this phase this is still part of the developer flow - would this be the development environment that you're pushing into, or the test environment? So, typically a continuous integration flow builds and packages the code for the development environment. In a few seconds we'll talk a bit more about how we move that package of code from the container registry out into a test environment. Got it, so right here, like that. Yep.

So, the final step is to validate the health. What you're really asking here is, "Is the container running?" - is it sending back operational information so you can determine that it's healthy: not only that the tests have run, but that it actually started, it's communicating with its dependent services, and it's going to operate the way you'd expect. Of course, yeah. So, this is where you connect it up to the different components and make sure they're all working together seamlessly. This is where you would probably find issues with integration, how the teams are connecting up with each other, API contracts, and those kinds of things; those issues will start to bubble up in this space. Yes, and again, the health endpoint is important because you can hook it into operational tools like Sysdig and LogDNA and other monitoring that will give you a better feel for the current state of your applications as they run.

So, this has got us as far through the development cycle. The next step - and this is starting to become common in the industry - is to use a technique called GitOps, where you now say: I've got my application; I built it, I packaged it, I tested it, I validated it. What I'm now going to do is update a Git repo with the build number, the tagged version, and the reference to the image registry. GitOps can then trigger a deployment of that image out into a test environment with all the other components that go with it. There are a number of GitOps tools in the market, and one of the ones we use in the Garage is Argo CD, which allows you to monitor a Git repo via a webhook; it will pull the image, pull the deployment reference, and then package and deploy it ready for use in testing. So, basically, the same discipline developers have applied forever with SCMs to manage different versions of their code, operations teams are now taking advantage of to operationalize the deployment of these actual images, containers, and applications. Absolutely, and it comes back to a point we made earlier: this is about discipline and repeatability. No humans are harmed in this process as you go through it, and the fewer humans touching these steps, the better. Again, one of the things we often do with clients is work with them and discover that there's some human process in the middle, and that really slows down your ability to execute. So, it's about automation, discipline, and repeatability, and if you can get to this point and prove that this code is good enough to run in production, you can then start to move towards that golden milestone of continuous delivery. Right. So, once you've automated all of this, that's when you can truly say you have CI/CD.
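To illustrate those last two ideas, health validation and GitOps, here is a hedged sketch: a Kubernetes Deployment excerpt with liveness and readiness probes (the /health and /ready paths are assumptions about the BFF), plus an Argo CD Application that watches a hypothetical config repo and syncs it into a test namespace. The repo URL, paths, and names are all placeholders:

```yaml
# deployment.yaml - lives in the GitOps config repo; the pipeline updates
# the image tag here after each successful build. All names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bff
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bff
  template:
    metadata:
      labels:
        app: bff
    spec:
      containers:
        - name: bff
          image: registry.example.com/team/bff:1.0.3   # tag bumped by the pipeline
          ports:
            - containerPort: 8080
          livenessProbe:            # "is the container running?"
            httpGet:
              path: /health
              port: 8080
          readinessProbe:           # "can it reach its dependent services?"
            httpGet:
              path: /ready
              port: 8080
---
# Argo CD watches the config repo and deploys any change into the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bff
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/bff-gitops   # hypothetical config repo
    targetRevision: main
    path: environments/test
  destination:
    server: https://kubernetes.default.svc
    namespace: bff-test
  syncPolicy:
    automated:
      prune: true
```

In this model the pipeline's only deployment action is a Git commit that bumps the image tag; Argo CD notices the change and reconciles the cluster to match, which is what makes the process automated, repeatable, and auditable.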
That's when you can finally get to that level. OK, honestly Matt, this was a great overview of all the concepts we've discussed. If you've enjoyed this video or have any comments, be sure to drop a like or a comment below. Subscribe, and stay tuned for more videos like this in the future.
Info
Channel: IBM Technology
Views: 18,302
Keywords: cloud native, kubernetes, containers, cloud computing, devops, IBM, IBM Cloud, Red Hat, OpenShift, Helm, node.js, java, GitHub, GitLab, vulnerability advisor, code scanning, code, developers, operators, IBM Cloud Kubernetes Service, Kubernetes service, Sysdig, LogDNA, GitOps, Argo CD, webhook, continuous delivery, CI/CD, devops lifecycle
Id: FzERTm_j2wE
Length: 11min 11sec (671 seconds)
Published: Wed Apr 29 2020