Intro to Cloud Native Buildpacks - Javier Romero, VMware; Sambhav Kothari, Bloomberg

Captions
Hello everyone, thanks for attending our talk on Cloud Native Buildpacks. My name is Sambhav and I work at Bloomberg; I'm also a maintainer on the Cloud Native Buildpacks project. I'm joined today by Javier, and together we'll be presenting an overview of what buildpacks are and how they can help. Later, Javier will be ending this talk with some really cool demos. Without further ado, let's get started.

So what exactly are buildpacks? Cloud Native Buildpacks transform your application source code into runnable container images, without Dockerfiles. Let's do a deep dive into the buildpack API that makes this possible. First up, we have buildpacks themselves. At its core, a buildpack is just two executables: one called detect, which detects whether a buildpack is needed or not, and the other called build, which does its part in building the final runnable image. For instance, while a Java buildpack may look for the presence of .java files, a Node buildpack can look for the presence of package-lock.json. At build time, these buildpacks may download dependencies as needed, compile from source, generate build-time or runtime bills of materials, or set start commands or entry points.

Interestingly, multiple buildpacks can work together. For example, you can combine your Node buildpack with your Ruby buildpack, and this combination allows you to utilize a variety of different buildpacks in building separate parts of your final application. The Cloud Native Buildpacks project, interestingly, doesn't produce any buildpacks itself; rather, we define a specification and the tooling, which are then utilized by a variety of different vendors to create the actual buildpacks. We also maintain a registry that allows developers to discover these buildpacks from various vendors. At this point, the most well-known buildpacks are produced by Google, Heroku, and the Paketo project.

Speaking of discovering and reusing buildpacks, this brings us to the concept of builders, one of the key ways we distribute buildpacks. Builders are an
ordered combination of buildpacks with a base build image and a run image. They're a convenient way of distributing all the build logic for buildpacks in the format of a normal OCI image. The build image provides the base environment for the builder (for example, an Ubuntu Bionic image with all the build tooling), and the run image provides the base environment for the application at runtime. The combination of a build image and a run image is called a stack. It can be really helpful to have these two things be different: build-time dependencies can be left out of the application image, making it smaller and lowering the attack surface area. As a platform operator, you can choose which builders are safe to use, and you can construct them as you'd like, precisely defining what sort of applications or language versions you want to support; you can also inject any necessary environment variables, settings, or certificates as the case may be.

Finally, we have the platform. A platform is any tool that takes a builder together with application source code to produce the final image. A platform can range from a local CLI tool like pack, to a cloud-native platform like kpack, or it can even be built on top of existing CI/CD platforms like Tekton. App developers don't really have to know how any of this works; they just have to write their application. Under the hood, the platform uses the lifecycle bundled in the builder to run and orchestrate all the buildpacks: running their detect phases, then running the build phases of all the buildpacks that passed detection, and exporting the final image to the registry. This allows us to have a single tool that can take different builders and build all sorts of applications automatically.

Now, this is how the image build operation typically works in the buildpacks ecosystem. However, we also expose a special kind of image creation operation, unique to the project, called rebase. Rebase allows app developers or platform operators to rapidly update an
application image when the stack's run image has changed, by using image layer rebasing. This command avoids the need to fully rebuild the application. At its core, image rebasing is a simple process: by inspecting the application image, rebase can determine whether or not a new version of the app's base image exists, either locally or in the registry, and if it does, rebase updates the app's layer metadata to reference the new base image version.

Now that we have a better understanding of the buildpack API, let's take a look at how these abstractions, and getting rid of Dockerfiles, help us. We'll focus on three main benefits. First, it allows app developers to focus on what they're building, and not on how to support it in production or build a container image out of it. On top of that, buildpacks also have the added benefit of building and packaging the application better and faster than the app developers may have been able to do by themselves. Buildpack authors can take care of internalizing both container and language-ecosystem-specific best practices, and produce buildpacks that minimize the image's attack surface area, handle caching for you, inject appropriate build and runtime dependencies, and more. Second, it gives platform operators precise control over what build inputs are permitted, and enables them to enforce policies on what the app images should contain, using the builder concept we just talked about. Lastly, the abstraction of built applications as a collection of distinct layers, stitched together into an app image, allows your DevSecOps team to detect and patch your images at scale. This is because of the rebase operation, which allows them to precisely switch out one layer (for example, the OS layer) from the image without disturbing any of the other application layers. As we will see, this can have dramatic consequences for large-scale reactions to serious vulnerabilities.

Let's take a look at that last point in more detail via two example scenarios. First, let's say we have a
bunch of applications built by buildpacks. Because the layers have semantic meaning and are enriched with metadata through an accurate bill of materials, we can have a good idea of the exact dependencies each app has. We can use this to identify vulnerable images. After we have identified these images, we can selectively patch the application dependencies by updating the relevant buildpack or builder. For example, let's say we have a Python buildpack that provides the interpreter, and the interpreter has a security issue. We can update the buildpack's logic to provide a patched version of the interpreter, and we can use this new buildpack to rebuild just the Python-related layers of our app images. If you're using a cloud-native platform like kpack, you can declaratively update your builders and buildpacks, and it will automatically handle finding the affected applications and rebuilding the appropriate layers.

We can imagine a similar workflow with base images and OS vulnerabilities. The platform operator can identify and patch the base images declaratively, and the plus side is, because of the way the rebase operation works, changing the base image, which can be a particularly expensive operation in the Dockerfile world, suddenly becomes a simple change in the registry. Apart from the build-time implications, it also has runtime implications: since you don't need to re-download all the app layers on each node, you just need to download the base layer once, and poof, all your applications are able to reuse it and have been patched.

Next up, Javier will be taking over, and he'll be talking about how you can use all this cool tech in practice, along with some amazing demos. Over to you, Javier.

Thank you, Sam. As Sam mentioned, my name is Javier. I'm a software engineer at VMware and one of the maintainers of the Cloud Native Buildpacks project. I primarily focus on platforms that run buildpacks. Let's answer the question of where they can be used. As you can see on the slide, there are
many platforms; this is just a small subset of many more. What you'll notice is that there are different types of platforms: buildpacks can run locally on your machine with the help of anything that can run containers, as well as built into large cloud platform providers and various CI/CD systems. The ones we'll be looking at today are pack and kpack. pack is the Swiss Army knife of all things Cloud Native Buildpacks. It is able to build and rebase app images, as well as provide a plethora of utility commands to help you inspect, create, and publish buildpack components. pack was intended primarily for local development, but it quickly made its way into many CI/CD pipelines. kpack is a Kubernetes-native implementation of a Cloud Native Buildpacks platform. It works by allowing users to declare their images, as well as other components, as Kubernetes resources. These declarative resources are then managed by kpack itself; images may be automatically updated as new buildpacks or base images become available.

First, we'll take a look at pack for local development. We'll clone our app repository (in this case, a vanilla Spring Boot Java application) and cd into the app directory. Next, we're going to set a default builder. As previously mentioned, the builder has all the information necessary, as well as all the bits, to build our application. In this case, we're going to use a sample builder not intended for production. Now that that is set, we won't have to declare it every time we build our application. Next, we build our application. Simple as that, right? We run pack build, and we specify that the image name we want created is petclinic-demo. As you'll notice, we don't pass a source directory; that's because, by default, it uses the current working directory as the application source.

Now, step by step: the first thing it does is pull the latest version of the builder image. It then proceeds to execute the various lifecycle phases, the first
one being detection. As you can see, it has detected that this is a Java Maven application. Next, during analyzing, it will see that the image has not been previously built; it tries to gather information about previously built images in order to provide potential optimizations during the build process. An example of this is using various forms of cache. If this were our second build, we would see certain forms of cache restored during the restore phase. Next, the actual build occurs. Because this is a Java application, it provides the JDK and executes the necessary Maven tasks. Now we'll skip past all these dependencies. After waiting for the Java application to compile, we can see that it did so successfully. On to the next phase: during export, all the layers created by the buildpack are either cached or added to the app image, the container execution command is set, and the image is sent to the Docker daemon or registry. As you can see, the image was built successfully.

Let's go ahead and try to run it. We're going to run this just like any other Docker image: running it in the background, binding it to port 8080, setting the container to be deleted when it's stopped, and giving the container a name of petclinic-demo, just like our image (because I'm not creative). Now that it's running, we'll open up the browser and check it out. Okay, there we have it: the application, the Pet Clinic. You can click around; there's not much to see here, and we really don't care too much about what the application does for this demo. But what I do want us to look at is this welcome message here.

Okay, we're going to do a rebuild by changing that message. Going back to the terminal, we're going to stop our container. Now that that's stopped, we're going to look for the welcome message we just saw; I happen to know it's in this messages properties file. There you go. Now we're going to go ahead and use
some sed magic to replace that with "Welcome back." Okay, now that we've got that, we're going to run a pack build again, and what you'll see is that it still tries to pull the latest version of the builder, but a couple of other little things change. For instance, in the analyzing phase, it now finds a couple of items it can retrieve from cache, whether the app image or a volume cache, which is what we're using here. Once it does that, the restore phase is what pulls those artifacts and metadata down. And we'll see that this time the compiling, or building, of the application is a lot faster because it uses that cached information. All right, there you go: we have successfully built our application in a couple of seconds instead of minutes, as before. You'll also see during the exporting phase that we're reusing a couple of layers, so these layers are not pushed to the daemon; they're reused. We still set the process type, which is the startup command of the container, and then we put the image back into the daemon itself. All right, now let's go ahead and run our application again. We'll open up our browser, go back to the application, refresh, and hopefully we should see "Welcome back." There you have it. Now that we're done with that, we can move on to our next demo; we'll go ahead and stop this.

For this demo, we'll take a closer look at kpack and how it can help keep a fleet of app images up to date, resolving the concern of unpatched vulnerabilities. Before we begin, I want to reiterate that these images and other components are registered just like any other Kubernetes resource. If we look at this samples repository, we'll see a couple of resources. We'll take a look at the builder, and we'll see that it has a couple of things defined: it has the stack, which is a cluster stack, and the store, where the buildpacks are
stored. It also defines the order in which the buildpacks from the store will be used. We can then take a look at the stack: the stack defines what build image and what run image it will use. From there, we can look at the store: the store tells us where these buildpacks come from and what buildpacks are available. Next, we'll look at an image resource. This image resource defines the builder that will be used for this image, as well as where the source for the application comes from.

Now that we have that understanding, let's see it in action. We'll jump over to the terminal and open a nifty little demo UI, which allows us to see what images are already registered and managed by kpack. We can see that this tool displays what stack and buildpacks are being used to build each one. If we go back to the terminal, we can see that while we were distracted with the previous page, we got an alert that our nginx applications may have a vulnerability. We'll go back to the UI and simply mark the ones with a vulnerability; this is strictly so we can see which ones should be getting updated when the buildpacks provider patches the vulnerability. In the terminal, we're going to simulate that the buildpack is getting patched. In the real world, our buildpacks provider, which could very well be our own DevOps or DevSecOps peers, would push an updated buildpack image. This triggers a rebuild of our application images. Going back to our GUI, we can see that the highlighted images are getting rebuilt; in this case, we only have one, and it should be updated shortly. There we are. As you can see, it is being rebuilt, meaning it is pulling down the latest version of the builder and buildpacks and creating a new app image. We'll give it a few seconds to finish; it shouldn't take long. And there we go: we can see that our nginx dependency has been updated. Back on the terminal: we've
done that, and we've gotten another alert: a vulnerability has been discovered in our run image. Okay, we're going to go back to our UI and mark our stack as having a vulnerability; we're going to use the SHA to mark it. As you can see, it's practically every single image we have here. We're going to go back to the terminal and simulate again that our buildpack provider has updated the stack. Because this is the run image, we will be taking advantage of the rebase operation mentioned before; this should take less than a few seconds for each image. Once we get that update, as you see, every single image has been queued, and we'll see how long it takes to update. The first one has just started... we're updating... we've updated five... we've updated all of them. So as you can see, we've just updated a whole fleet of images, and it took less than a few seconds per rebase operation; that's the base-layer aspect of it. That concludes our demos; let's go back for just a few more slides.

Now that we've gotten a better understanding of what buildpacks are and how we can use them, let's talk about the future. This year, the project has identified a few key areas it would like to focus on. These are a few highlights of what's top of mind. Configurability: as users migrate from Dockerfiles to Cloud Native Buildpacks, they see the value, but they also miss the flexibility and extensibility available in Dockerfiles. The project has a couple of good ideas on how to safely provide some of that desired flexibility while maintaining the same level of core functionality: things like inline buildpacks, which allow users to create ad-hoc buildpacks as part of configuration, along with additional OCI-specific configuration and more extensive modifications to runtime images during the build process, to enable more advanced use cases. Supply chain security: security is a core value proposition of buildpacks. While
buildpacks may already provide a bill of materials, we want to do more: we want to make it core to the project and align it with existing standards. We also want to enable better image signing workflows, something we are working with cosign on to achieve. More cloud-native integrations: as the ecosystem evolves, we want to make sure we continue to align with it. We want to take advantage of the great projects in the ecosystem and enable users to pick and choose what tools they want to use alongside Cloud Native Buildpacks. As I mentioned before, this is just a peek at what's in the works; to learn more about these and other items, check out the official roadmap on the buildpacks.io website.

That concludes our talk for today. You can go to buildpacks.io; you can find us on Slack, along with the rest of the community, at slack.buildpacks.io, and on Twitter and GitHub. We have two GitHub locations: one for the buildpacks project as a whole, and another for kpack, which is at this moment a separate repository, or project, altogether. Thank you for coming, and see you next time.
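The detect/build contract described at the start of the talk can be sketched as two small executables. This is an illustrative sketch only, not a spec-complete buildpack: the real Buildpacks API also involves a build plan, a buildpack.toml, and environment variables that are omitted here. The file names, the `layers_dir` handling, and the Node.js detection rule mirror the talk's package-lock.json example; exit code 100 is the spec's "detection failed" code.

```shell
# Minimal sketch of a buildpack's two executables (detect and build).
mkdir -p demo-buildpack/bin demo-app

# bin/detect: opt in (exit 0) only when the app looks like a Node.js
# project; exit 100 means "detection failed" per the spec.
cat > demo-buildpack/bin/detect <<'EOF'
#!/usr/bin/env bash
[ -f package-lock.json ] || exit 100
EOF

# bin/build: contribute a layer and mark it for inclusion in the run
# image (real signature: build <layers> <platform> <plan>).
cat > demo-buildpack/bin/build <<'EOF'
#!/usr/bin/env bash
layers_dir="$1"
mkdir -p "$layers_dir/node"
printf 'launch = true\n' > "$layers_dir/node.toml"
EOF

chmod +x demo-buildpack/bin/detect demo-buildpack/bin/build

# Simulate the lifecycle's detect phase against an app directory.
(cd demo-app && ../demo-buildpack/bin/detect) && echo "detected" || echo "skipped"
touch demo-app/package-lock.json
(cd demo-app && ../demo-buildpack/bin/detect) && echo "detected" || echo "skipped"
```

In a real build, the lifecycle runs every buildpack's detect in order, then runs build only for the group that passed detection, which is how the Node-plus-Ruby combination mentioned above works.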
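The pack workflow from the first demo can be summarized as a handful of commands. This is a command sketch, not a runnable script: it assumes the pack CLI and a Docker daemon are installed, the builder name is the project's public sample builder (the talk does not name the exact one used), and your app source sits in the current directory.

```shell
# Set a default builder once so later builds don't need --builder.
pack config default-builder cnbs/sample-builder:bionic

# Build an app image from the current working directory.
pack build petclinic-demo

# Run it like any other image: detached, port 8080, removed on stop.
docker run -d --rm -p 8080:8080 --name petclinic-demo petclinic-demo

# After editing the source, rebuild; cached layers make this much faster.
pack build petclinic-demo

# If only the stack's run image changed, rebase instead of rebuilding.
pack rebase petclinic-demo
```

The second `pack build` is where the analyze/restore optimizations described above kick in, and `pack rebase` is the layer-swap operation used in the fleet-patching demo.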
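The kpack demo registers images declaratively. A sketch of such an image resource follows; the resource names, tag, service account, and git URL are all placeholders, and the field names follow kpack's v1alpha2 API, so check the kpack documentation for the version you run.

```shell
# Register an app with kpack; kpack then rebuilds it when its builder's
# buildpacks change, and rebases it when the stack's run image changes.
kubectl apply -f - <<'EOF'
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: petclinic-demo
spec:
  tag: registry.example.com/apps/petclinic-demo   # placeholder registry
  serviceAccountName: kpack-sa                    # placeholder SA
  builder:
    kind: ClusterBuilder
    name: demo-cluster-builder                    # placeholder builder
  source:
    git:
      url: https://github.com/example/petclinic   # placeholder repo
      revision: main
EOF
```

This is what makes the fleet patching in the demo automatic: updating the ClusterStack or ClusterStore resource is the single declarative change, and kpack finds and updates every Image that references them.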
Info
Channel: CNCF [Cloud Native Computing Foundation]
Views: 262
Id: bpshvqQMYM0
Length: 23min 34sec (1414 seconds)
Published: Fri Oct 29 2021