Deep Dive: Cloud Native Buildpacks - Joe Kutner, Heroku & Stephen Levine, Pivotal

Captions
Okay, welcome to the Cloud Native Buildpacks deep dive. Thank you for burning the midnight oil with us — this is one of the last sessions of the day and everybody's probably pretty tired; I'm tired. My name is Joe Kutner, and I'm going to be talking about Cloud Native Buildpacks, which is a sandbox project in the CNCF. It was founded by Pivotal and Heroku, but we now have contributions from many organizations, including Google, Microsoft, and others.

I work at Heroku on the languages team. Heroku is a platform-as-a-service; we host more than 10 million applications and handle about 23 billion requests per day. On the languages team we do the engineering work that provides support for our official languages, and one of the tools we use to do that is buildpacks.

If you've ever used Heroku, you've used buildpacks: they're the mechanism that prepares an application for running on our platform. We originally created them about eight years ago as an extension point for the platform, so that our users could customize the images and add support for ecosystems and languages we didn't support. Over time other organizations adopted the buildpack API, and then we started diverging — implementing Heroku-specific things and Cloud Foundry-specific things. Cloud Native Buildpacks is an effort to bring those APIs back together and to join them with the Docker container ecosystem.

At a very high level, buildpacks take source code as input, detect what kind of application you have from your source code repository, and then prepare that application for production depending on what kind of app it is. If it's a Java Maven app, a buildpack will install the JDK, run Maven, download your dependencies, and compile your source code; if it's a Node.js app, it will run npm and download your node modules. The output of a buildpack build is a Docker image whose layers map logically to your application. This is different from an image created from a Dockerfile, where the layers are arbitrarily derived from directives in the Dockerfile. Because of this well-defined structure, and the metadata that goes with it telling us what's in the layers, we can perform operations that would otherwise not be possible.

The buildpacks themselves are just one part of the ecosystem. In fact, the Cloud Native Buildpacks project does not maintain any buildpacks; the buildpack implementations come from third parties like Heroku, Cloud Foundry, and others. What the project does maintain is what we call a platform — essentially an environment capable of running buildpacks. The pack CLI is a reference implementation of a platform; you can use it to run buildpacks on your local workstation or in a CI environment. We also maintain a Tekton template so you can use Tekton as a buildpacks platform. The lifecycle — the buildpack execution environment — is maintained as part of the project, along with libbuildpack, a Go library buildpack authors can use to make it easier to implement the buildpack specification. And of course the project maintains the spec itself: the core APIs that define how buildpacks work and how they interact with the different components. We also have an RFC process that we use to propose and discuss changes to the project.
To visualize how these components work together: at one end of the spectrum we have the buildpacks themselves, which are the part that actually touches your code and runs against your app. The lifecycle runs the buildpacks — it's their executor — and beyond that we have the platform, the environment that knows how to put all of this together and run the whole buildpack system. Between each of these layers we have a well-defined API, the specification, which describes how the layers interact with each other, and that essentially lets us mix and match or swap out different implementations of each component. There are different buildpacks for different languages — Java, Node, Ruby — from different vendors like Heroku and Cloud Foundry. On the platform side I mentioned Tekton and the pack CLI; there's also Pivotal's kpack project, Salesforce Heroku, the Google Cloud Run Button, and many other platforms appearing, so there are a lot of options.

The lifecycle executes the buildpack API. To recap what we covered in the intro talk yesterday, this API has several phases. The first is detect, where a buildpack determines whether it should or should not run against a given source code repository; in most cases this is just looking for particular files — a package.json, for example, would indicate a Node.js app. The restore and analyze phases are run by the lifecycle, the buildpack execution environment; they analyze the cache and the metadata from previously built run images, if available, to determine which layers can be reused. The build phase is the heavy lifting of the buildpacks; each buildpack implements it differently, specific to the ecosystem it supports — a Node.js buildpack, for example, would install Node.js, run npm, and prepare the Node application. One of the outputs of the build process is artifacts — essentially layers — and in the export and cache phase the exporter, part of the lifecycle, takes those layers and compiles them into an OCI image. It also takes some of the layers and puts them in a cache, which can be stored as a cache image or in a persistent volume.

So that's recapping the intro. Now we're going to take a very close look at some actual buildpack implementations from Heroku, and then Stephen is going to talk about some Cloud Foundry buildpacks. Nothing we cover in this deep dive is something you need to know in order to use buildpacks. If you just want to use buildpacks, you run a command like pack build against your source code repository; it produces an image, ready to run with commands like docker run or to be pushed with docker push. That's how most people will use buildpacks — but we're going to get into what's going on when you run that pack build command.

The first thing you'll notice is that we pass a flag to the command: the --builder flag with the Heroku buildpacks image, and that ends up running the Heroku Java buildpack. The builder image is an important construct in Cloud Native Buildpacks language: a builder image encapsulates all the artifacts a platform needs in order to run buildpacks. That includes the buildpacks themselves, the lifecycle that executes them, and a base image used to create the container everything runs in.
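Here's a sketch of that workflow with the pack CLI. The image name is a placeholder, and the builder image name reflects what Heroku published around the time of this talk, so check the current docs for exact names and flags:

```bash
# Build an OCI image from the source in the current directory using a builder.
pack build registry.example.com/my-team/my-app \
  --builder heroku/buildpacks:18 \
  --path .

# The result is an ordinary image you can run locally or push to a registry.
docker run --rm -p 8080:8080 registry.example.com/my-team/my-app
docker push registry.example.com/my-team/my-app
```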
We define all of those builder components — the buildpacks, the lifecycle, and the base image — in a file we typically call the builder.toml, and we pass that builder.toml to a create-builder process that produces an OCI image: the builder image.

If we look at the builder.toml for that Heroku buildpacks image, it first defines the lifecycle version it's compatible with, and then the stack it runs on; this builder image runs on the heroku-18 stack. A stack is another important term in buildpack language: a stack represents two images, one used at build time to create the container the buildpacks run in, and another used as the base image for what we call the launch image — the image that's output from the buildpack execution process.

The Heroku buildpacks are based on the heroku-18 stack, a base image maintained by Heroku as part of the Heroku platform; if you've ever used Heroku, you've used this stack, because it runs as part of the platform. It's based on Ubuntu 18.04 (Bionic) with some system packages pre-installed that we provide updates for. You can download it from Docker Hub — there are heroku-18 and heroku-18 build images. The build image contains dev headers and other tools you need to compile and prepare your app for production; the heroku-18 run image is slimmer, with only what's necessary at runtime. These two images provide the foundation for the Cloud Native Buildpacks stack.

To create that stack we add a few more layers on top: we define a few CNB_ environment variables, which are part of the buildpack API specification. We define the CNB user ID and group ID for the user the buildpacks will run as — buildpacks do not run as root; it's a feature — and the CNB stack ID, so the platform knows which stack it's actually working with. If you were to create your own stack from whatever flavor of Linux or whatever base image you want, you'd do something similar: your FROM directive, plus just those essential environment variables. That's the Dockerfile for the build image; the Dockerfile for the run image is pretty much identical, just with a different FROM directive.

Coming back to the builder.toml: after we've defined our lifecycle and our stack, we define the buildpacks that go into the image. The Heroku builder includes about ten of them, mapping to the languages we officially support — Java, Node.js, Ruby, some others — plus a few utility buildpacks like the Heroku Procfile buildpack, which I'll describe in a minute. In addition to the buildpacks themselves, we define what order they run in: when the lifecycle executes the detection phase, it runs the buildpacks in the order defined here, and the first one that passes detection is the one used during the build phase. We can define individual buildpacks in the order, or groups of buildpacks that must all pass for the group to succeed, or mark some buildpacks as optional, so that if they fail detection the rest of the group still executes.
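As a rough sketch, a builder.toml along those lines might look like the following. The field names follow the pack builder configuration format as I understand it, and all IDs, versions, and image names here are placeholders rather than Heroku's actual values:

```toml
[lifecycle]
version = "0.5.0"                      # lifecycle this builder is compatible with

[stack]
id = "heroku-18"
build-image = "heroku/pack:18-build"   # where buildpacks execute
run-image = "heroku/pack:18"           # base for the launch image

[[buildpacks]]
id = "heroku/java"
uri = "./buildpacks/java"

[[buildpacks]]
id = "heroku/procfile"
uri = "./buildpacks/procfile"

# Detection order: the first group whose required buildpacks all pass wins.
[[order]]
  [[order.group]]
  id = "heroku/java"

  [[order.group]]
  id = "heroku/procfile"
  optional = true
```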
That's everything that goes into the builder.toml for the Heroku builder image. We then pass it to the pack create-builder command, which handles creating the image for us: it adds the buildpacks to a well-defined directory, injects the lifecycle binaries, validates that those CNB_ environment variables are set up correctly, and does some other work to prepare the OCI image so we can run it. The output is the heroku/buildpacks:18 builder, built on top of the heroku/pack:18 and heroku/pack:18-build stack.

As I mentioned, the Heroku buildpacks image contains a number of buildpacks supporting different languages, but we're going to focus on the Heroku Java buildpack. This is a buildpack that's been running in production for many years on the Heroku platform, but implementing the older buildpack API; we wanted to bring it into the Cloud Native Buildpacks ecosystem, so we modified it to support both APIs.

A few files are essential: any Cloud Native Buildpack must have at least two executables, bin/build and bin/detect, which map to the two entry points in the lifecycle's detect and build phases. Those executables can be bash scripts, binaries written in Go, Python scripts — nothing in the buildpack specification defines how you write them; they just have to be executable. The third file is a buildpack descriptor called buildpack.toml, which defines metadata about the buildpack. The buildpack.toml for the Heroku Java buildpack defines its API compatibility version, its globally unique ID, its version, and the stacks it's compatible with — the Heroku buildpacks are only compatible with the heroku-18 stack, and each buildpack author decides which stacks they intend to support.

The bin/detect executable is responsible for the first part of the lifecycle execution, and it's very simple: it looks at the source code repository, which is provided as one of the inputs to detection, for a pom.xml. If it sees a pom.xml it passes; otherwise it doesn't. A pom.xml indicates this is a Java Maven project, and the buildpack knows how to build those. One of the outputs of the detection phase is a build plan: the detect script adds to the build plan the components it either provides or requires as part of its execution. The Java buildpack is going to provide a JDK, so it writes to the build plan that a JDK is something it provides.
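A minimal bin/detect along those lines could look like the sketch below. This isn't Heroku's actual script; the argument positions follow the buildpack API (platform directory and build-plan path), and the provides/requires build-plan syntax shown here may differ between buildpack API versions:

```bash
#!/usr/bin/env bash
# bin/detect: decide whether this buildpack applies, and declare what it
# contributes to the build plan. Working directory is the app source.
set -euo pipefail

plan_path="$2"   # $1 = platform dir, $2 = path to write the build plan

# Only opt in for Maven projects.
if [[ ! -f pom.xml ]]; then
  exit 100   # non-zero exit: detection fails and this buildpack is skipped
fi

# Declare that this buildpack provides (and uses) a JDK.
cat >> "$plan_path" <<EOF
[[provides]]
name = "jdk"

[[requires]]
name = "jdk"
EOF

exit 0
```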
Then on to the bin/build executable. It takes as input the same source code repository that was passed to the detect phase, plus the resolved build plan. The resolved build plan contains the components provided by this buildpack's detect script as well as by other buildpacks that may have run alongside it — if we were running a Node.js buildpack with our Java buildpack, for example, the resolved build plan might have a Node.js entry in it — so the buildpack can look at the plan to determine what it needs to install or what is available to it.

The heavy lifting of bin/build is installing the JDK, running Maven, downloading your dependencies, compiling your code, and producing the Java artifacts that represent the application. One of the outputs of that process is several layers: the JDK, the Maven dependency cache, and the JRE. Each of these layers has a different scope, a different visibility. Some layers go into the cache because we want them available on subsequent builds, but some we do not want in the final launch image. In fact the JDK — which contains the javac compiler and other development tooling — is meant for the build image only, while the JRE, the Java runtime environment, is what we actually want in the launch image; so we give each of those layers a different scope. Similarly, we don't want the Maven dependency cache in our production image — it's very large — so we only want it in the cache image. The build scope is a little different: it isn't a layer that gets put into any image; it just means the layer is visible to subsequent buildpacks that may need to use the JDK as part of their execution.

Each layer has a standard POSIX file structure, which lets buildpacks do some things almost magically for you: each layer's bin directory is put on the PATH, its lib directory on the library load path, and we can also set up environment variables and profile scripts that are provided as part of the launch image, so that when your application starts up they're there for you — things like JAVA_HOME.

The last file is what we call a layer descriptor, a layer TOML file. Each layer has one of these descriptors that goes along with it, and it tells us which scopes the layer is for. In the jdk.toml we see launch is false, because we don't want the JDK in the final run image, but build and cache are true, because we do want to expose it to subsequent buildpacks and cache it. We can also put metadata in this file, like the version of the JDK that's been installed, and a cache ID that lets us easily expire it. Comparing that to the jre.toml, it's very similar but with a different scope: launch is true, because we do want the JRE in the final launch image, and we don't need it available to subsequent builds (a sketch of both descriptors follows below).
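The exact keys depend on the buildpack API version (newer versions nest these flags differently), and the metadata values are placeholders, but the two descriptors look roughly like this:

```toml
# jdk.toml — descriptor for the JDK layer
launch = false   # keep the compiler out of the final run image
build  = true    # visible to subsequent buildpacks during the build
cache  = true    # restored on the next build

[metadata]
version  = "11.0.x"      # placeholder JDK version
cache-id = "jdk-11-v1"   # placeholder key used to expire the cached layer

# jre.toml — descriptor for the JRE layer, the inverse scope:
# launch = true, build = false, cache = true
```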
The last thing the Heroku Java buildpack does is try to determine the Java command to use to start the app, which it can set essentially as the entry point for the image. It does this by looking at the pom.xml and your dependencies: if it sees a Spring Boot dependency and an executable jar file in the target directory, it knows that some java -jar command is probably the process you want to run. Sometimes it doesn't get it right, or you want to override it, and that's where the Procfile buildpack comes in.

The group in the Heroku builder that has Java also includes the Heroku Procfile buildpack as an optional member. The Procfile buildpack supports the Procfile, which is part of the original Heroku buildpack API but not part of the Cloud Native Buildpacks API; many of our Heroku customers already have a Procfile in their repositories, and we wanted to provide that same experience for them. If you're not familiar with it, a Procfile is just a very flat YAML-like file with a key that is the process type, or process name, and a value that is the command you want to execute for that process type — a simple Java app would have a web process type and a java -jar command to start the app.

The detect script for the Procfile buildpack is very simple: it looks in the source code repository for a Procfile; if it sees one it passes, and if not it doesn't. It contributes nothing to the build plan because it isn't creating or providing any components. Its build phase reads the Procfile and converts it into the process table in launch.toml, so the launch.toml derived from a Procfile like that would look something like the sketch below. Because this buildpack runs after the Java buildpack, if it creates a process type the Java buildpack already defined, it overrides it — the last one wins.
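As an illustration of that translation, with a placeholder jar name and the launch.toml process syntax as I recall it from that era of the spec:

```toml
# Procfile (one process type per line):
#   web: java -jar target/my-app.jar
#
# launch.toml — the process table the Procfile buildpack would emit:
[[processes]]
type    = "web"
command = "java -jar target/my-app.jar"
```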
You can use the Heroku Java buildpack by running pack build with the Heroku buildpacks builder — just pass that --builder option. If you run it against a Java repository it should detect that and run the Java buildpack, but if you want to be explicit you can also pass the --buildpack flag.

Coming in the next version of the Java buildpack, we'll take advantage of a new Cloud Native Buildpacks feature called slices. Slices let us carve up the application directory into layers without creating the separate POSIX file structure for a layer — we could, for example, put your dependencies, your target directory and classes, and your config each into their own layers. We're also going to decompose the Java buildpack into at least two buildpacks: a Heroku JVM buildpack, which just installs the JDK and JRE, and a Maven buildpack, which does all the Maven-specific execution. Separating them lets us use the Heroku JVM buildpack with other buildpacks — for example a Gradle buildpack — so they're more modular and fit together in different ways.

Now I'm going to pass it on to Stephen, who's going to talk more about buildpack modularity and how buildpacks are run in distributed environments.

[Stephen Levine] I'm Stephen Levine; I work on things related to Kubernetes and Cloud Foundry for Pivotal. I'm not going to talk about the Cloud Foundry buildpacks directly, but more about how we do builds with the Cloud Foundry buildpacks in distributed systems like Kubernetes, using tools like Pivotal kpack.

For some background — Joe covered some of this — stacks, builders, and buildpackages are composed of layers. A stack is a build base image, where the build happens, and a run image, the runtime base image for the application. If you take the build image and put a lifecycle and some buildpacks on it, you get a builder. A buildpackage is the artifact format for a buildpack: when we talk about the Cloud Foundry Node.js buildpack, it's actually a series of three buildpacks and a particular build configuration they compose into — it isn't implemented as a single buildpack itself; it just points to the others. A buildpackage lets us take all of those buildpacks as layers and package them up in an OCI image that you can either push to a Docker registry or save as a .cnb file; it's a fully self-contained artifact. If you take that buildpackage and put it on a build image with the lifecycle, you get a builder, and if you run the builder against source code in a platform like pack, you get an app image out the other side.

I'm going to run through what it looks like to rebuild an application in a distributed system, focusing on rebuild time and the data transfer required, because that exposes why we think this model is so interesting for fast iteration against a cluster. There will be a second example afterwards showing how we can roll out CVE patches really quickly for lots of applications, but we'll do the first one first.

Imagine you have a Docker registry. A registry stores image manifests and layers — also called blobs — the things that get layered on top of each other when you do a docker run, for instance. So you'd have a registry with a run image, a build image, and a Node buildpackage, plus all the layers those manifests point to. The layers aren't linked to each other; the manifests just define an ordering of those layers. That's a common misconception: in the Docker v1 format the layers were linked, but in the OCI or Docker v2 format a manifest just refers to a particular ordering, so we can swap layers individually — and you'll see what that looks like.

We can make a builder by creating a new manifest in the registry with strong references to the layers of the build image and the buildpackage, and a weak reference to the run image, so we know where to pull the run image when we do a build. Because this is just a manifest upload, it involves virtually no data transfer: we're uploading less than a kilobyte of JSON configuration to the registry to say this builder is composed of these parts that already exist.

When we want to do a build, we need to download the builder and its associated layers to the VM that's going to run the build pods. If a Node.js build already happened there, that involves no data transfer, because the VM is already hydrated with those bits; if it's a fresh VM that's never done a Node.js build, you might have to download some of the buildpacks. Now, setting the builder aside, say you did a previous build of this application: there's an old version of the app image in the registry, pointing at old application layers and the operating system packages, and there's a local cache on the VM from building the same application — in a distributed system you'd want to rebuild on the same VMs you built on, so you get more cache hits. In that configuration we start a pod on the VM that runs the builder — which has the lifecycle inside it — against the source code, which we do have to upload. But the buildpacks, when the lifecycle runs them, can look at the old layers from the previous application build in the registry and say: I only need to rebuild these individual layers; I don't have to rebuild, say, all the node modules. During the build the lifecycle runs the buildpacks to regenerate just the new layers that need to change, and it updates the cache with whatever new things need to be cached. This is different from Docker, where if a lower-level layer changes you have to rebuild everything on top of it; here we can individually select which layers we want to rebuild.
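To make the "a manifest is just an ordered list of layer references" point concrete, here is a trimmed Docker v2 style image manifest with placeholder digests and sizes; a rebuild or a rebase only has to swap some of these digest entries and upload a new, very small manifest:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 1469
  },
  "layers": [
    { "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "digest": "sha256:<os-packages-layer>", "size": 27092228 },
    { "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "digest": "sha256:<jre-layer>", "size": 48211005 },
    { "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "digest": "sha256:<app-layer>", "size": 1830210 }
  ]
}
```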
After that happens, we update the cache on the local VM and upload the newly built dependency layers to the registry, along with a new manifest for the application. Notice that the manifest reuses a lot of existing dependency layers in addition to the new ones that were uploaded — we've really minimized the data transfer to rebuilding only what needs to be rebuilt and uploading only the new stuff to the registry. Then, when we switch over to the runtime VM that's running the old copy of the application, we apply the new image manifest, which causes only the new layers to transfer to the runtime VM — no base layers, nothing that was already there from the previous version — and we restart the pod so it's running the new application.

That was an example of rebuilding an individual app. There's another kind of update we can do with Cloud Native Buildpacks, where we want to patch CVEs for lots of applications at the same time. We can do this at the operating system level because we rely on a strict ABI-compatibility contract, and we call it a rebase instead of a rebuild: we replace the lower layers, and the strict contract gives us a guarantee that the new image is still a valid image and will run, but now contains the security patches from the operating system vendor.

Here's how it works. Imagine you have a Docker registry with three apps in it, plus the run image they were built with, pointing at the operating system packages — and then a CVE hits. Now all your apps in the registry are vulnerable; maybe you have three, maybe you have five hundred. To patch that CVE, we upload a single copy of the patched operating system packages to the registry — think of uploading a new Ubuntu 18.04 layer with a patched OpenSSL or something like that — and then we make a really small update to each of those image manifests to point at the new operating system layer, without incurring any data transfer beyond that initial upload. So now we've patched all the apps in the registry essentially instantaneously.
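The pack CLI exposes this rebase operation directly. A hedged sketch, with placeholder image names — the exact flags may differ by pack version:

```bash
# Swap the app image's run-image (OS) layers for patched ones by rewriting
# its manifest, without rebuilding any application layers.
pack rebase registry.example.com/my-team/my-app \
  --run-image registry.example.com/stacks/run:patched
```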
Of course that isn't the whole story: we've patched the applications in the registry, but they still have to run in a cluster. The way we've done this lets us deploy the patches really efficiently too, as long as we're using containerd and not dockerd. dockerd is very inefficient here and will re-pull layers that sit on top of base layers if those base layers change, even if it already has them; containerd can do this really efficiently. So imagine you have several runtime VMs with different apps distributed across them, all vulnerable to the CVE. We essentially do a pull, via containerd, on each of those runtime VMs, which causes the new operating system packages to be downloaded and the apps on each VM to swap over to them automatically. Note that we only had to download one copy of the operating system packages per VM — not per application — so it's maximally efficient: it transfers just the layers containing the CVE patches to each VM that needs to run them. You couldn't do this in a model like Docker, where you'd have to rebuild on top of a new base image and then re-transfer all the newly built layers. So this is something you can do really fast, and it even extends into deployment.

To wrap up: we really like Cloud Native Buildpacks because they're reusable — the buildpacks are reusable, they reuse dependencies, they reuse layers, and they're very efficient at data deduplication; they're fast, because they only rebuild what's necessary; they're modular, because you can create small, single-purpose buildpacks; and they're safe, because you can get buildpacks that are trusted by a vendor who knows what they're doing and configures your app in a secure way.

That's it. Please try out the pack CLI, which is up on buildpacks.io; we have docs available. You can also join us on Slack — we're really active there — and we have a mailing list, though it's pretty quiet. There will be some buildpack demos later today at the Salesforce booth, too, if you want to talk to us more. Thank you. Questions?

So the question is: we're changing these layers out, but to get the new layers running you'll have to restart your pod with them, right? Right — Cloud Native Buildpacks doesn't handle anything after the updated image is in the registry; we want to solve that one problem really well. But what we do do, by only updating the layers that need to change in the registry, is that when you do restart the pod, only those layers get downloaded onto the VM. So by the nature of how we publish to the registry, we make that deployment process really fast.

Sure — you could do that with a CI/CD system. We've deliberately tried to solve just the problem of getting to the registry and stop there, but the way we do it makes it really easy to use other tools to promote the image through a CI pipeline, restart your pods, and get it deployed quickly, because of the data deduplication.

[On configuring builds] It depends on the buildpack. The Heroku Gradle buildpack uses well-defined environment variables that it picks up and applies as Gradle options or something like that — I can't remember the specifics — and the Maven buildpack has similar functionality but with different environment variables.
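Configuration like that is typically passed into the build as environment variables via the platform. A hedged sketch with the pack CLI — the variable name here is purely illustrative, not a documented setting of any particular buildpack:

```bash
# --env makes the variable visible to buildpacks during the build phase.
pack build my-app \
  --builder heroku/buildpacks:18 \
  --env MAVEN_CUSTOM_OPTS="-DskipTests"
```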
Beyond environment variables, it's really up to the buildpack; some buildpacks have their own manifest or descriptor they read — the Java buildpack actually does this with a properties file that defines the JDK version to install.

[On what buildpacks are written in] The Heroku buildpacks are primarily written in bash — using bash as a controller to drive smaller, sharper binaries in most cases. I think the Cloud Foundry buildpacks are one hundred percent Go.

[On separate build and run images] So the question is: in the old buildpack model we built on the same image we ran on, and now there are separate build images and run images. There is a little bit less of a guarantee in doing that: if you're not careful about what those images contain, you could end up with something missing at runtime. The guarantees we care most about are that updates of the run image are ABI-compatible, and that updates of the build image to newer build images are ABI-compatible. Compatibility between the two images doesn't matter quite as much, for a few reasons: one is that if your app doesn't start, you get fast feedback right away; the other is that we can be careful to exclude things like compilers from the run image, and to make sure we don't have C libraries on the build image that something might link against but that aren't on the run image.

[On the size of the base images] That's whatever the platform providing the image decides. For Heroku I think it's about 500 MB; we've chosen to make it larger and include more things, because it's part of our platform and it makes a better experience for our customers to have those things available. Stephen has some work on smaller images: on the Cloud Foundry side we have a choice of three images. One is a large image with headers and so on, for things like linking node modules and native dependencies. We also have one around 20 MB that's essentially just Ubuntu Bionic, with compilers on the build image. And we now even have a distroless run image — our own roll of distroless on top of Ubuntu — that's almost as small as scratch; I think it just has glibc in it. So you can make really small images, especially if you're building with Go — we've really optimized that use case on the Cloud Foundry side — and end up with something scratch-like in the end, so you shouldn't have problems there. In addition, you can also create your own custom stacks with your own choices baked in; you just have to make sure they maintain compatibility with the buildpacks.
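A custom stack is mostly a pair of Dockerfiles following the pattern Joe described earlier: a FROM directive plus the essential CNB_ environment variables and a non-root user. A minimal, illustrative build-image Dockerfile — base image, stack ID, and UID/GID are placeholders:

```dockerfile
FROM ubuntu:18.04

# Required by the buildpack API so the platform knows the stack and build user.
ENV CNB_STACK_ID="example.stacks.bionic"
ENV CNB_USER_ID=1000
ENV CNB_GROUP_ID=1000

# Buildpacks do not run as root.
RUN groupadd cnb --gid ${CNB_GROUP_ID} && \
    useradd --uid ${CNB_USER_ID} --gid ${CNB_GROUP_ID} -m -s /bin/bash cnb

USER ${CNB_USER_ID}:${CNB_GROUP_ID}

# The run-image Dockerfile is nearly identical, typically with a slimmer base.
```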
[On buildpack compatibility between platforms] The question is whether you can only use Heroku buildpacks on Heroku and Cloud Foundry buildpacks on Cloud Foundry. If you look at the new Cloud Native Buildpacks (v3) lifecycle, buildpacks written to run on that lifecycle are compatible across all the different platforms, so there's complete compatibility: you can run the Cloud Foundry buildpacks on Heroku and the Heroku buildpacks on Cloud Foundry. We're even starting to invest in defining our stacks so that you could mix and match Cloud Foundry and Heroku buildpacks within the same build, potentially, in the future — although there are still a lot of discussions about how we want to do that. So the compatibility is really good, and we're pushing in the direction of even more. We do have some different use cases for our buildpacks — on the Cloud Foundry side we've always really valued the ability to do completely offline builds, while on the Heroku side they offer more versions of the dependencies — so there are reasons we maintain different buildpacks, if that makes sense. And it depends what you mean by "on Cloud Foundry" or "on Heroku": that could mean running on the Heroku cloud platform, but it could also mean using the Heroku buildpacks with the Heroku builder image on some other platform, with pack or something else.

[On custom stages] So the question is: we listed these stages — detect, build, analyze, export — can you inject custom stages into that process? The buildpacks themselves kind of are those custom stages: during the build process each buildpack has its bin/build, and during the detect process each has its bin/detect. We have thought about additional hooks into the process, for things like expiring the cache early if you're on a platform that implements a remote cache instead of a local cache like in the examples. So we're definitely open to more extension points, but it's already very extensible — that's kind of the point. Good question.

[On CNAB] Those projects, even though the names sound similar, don't really overlap. Cloud Native Application Bundles are about packaging and deploying images and configuration — a bunch of YAML and a bunch of images you've already built — into a cluster. Cloud Native Buildpacks are about building application images. They could interact really well: you could use Cloud Native Buildpacks to build images which you then ship with CNAB. We actually do that right now for kpack: kpack itself is built with Cloud Native Buildpacks and then distributed via CNAB.

[On immutability] So the question is: OCI images are immutable, so how are we making these swaps within those immutability constraints? I'll try to address that — let me know if I'm covering it. When we generate a new manifest that points to a different selection of layers, it's essentially a new image. The old image continues to exist and refer to its selection of layers; we create a new manifest, with a different digest, that refers to some of the layers the previous one did plus new additional layers. So it does preserve the immutability of the model, while being careful to bring along layers from previous builds so we get that data deduplication. If you're asking how we handle immutability when referencing buildpacks: the buildpackage construct is an OCI image, and its manifest has hard references to the individual buildpack layers — an ID and version, yes, but also a SHA-256 digest describing each layer — so the buildpackage acts as a distribution format where you get strong guarantees: this is an immutable thing, and if it changes, you'll know about it. Does that answer the question? Sounds good.
[On a declarative workflow] So the question is: how do you use Cloud Native Buildpacks in a more declarative way, where updates stream out to your applications for you, instead of you having to know when to rerun pack build? That's what Pivotal kpack is: a project that provides a declarative API for saying "this is where my source code is, this is where the image should live", and it monitors all the buildpacks and stacks and does rebuilds automatically. It provides a declarative interface to Cloud Native Buildpacks, so you should check it out if you're interested in that problem — it's open source.
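For a sense of what that declarative interface looks like, here is a rough sketch of a kpack Image resource. Field names and the apiVersion are reconstructed from memory of the kpack of that era and have changed across releases, so treat every line as illustrative and check the kpack documentation for the real schema:

```yaml
apiVersion: build.pivotal.io/v1alpha1   # has since moved to the kpack.io group
kind: Image
metadata:
  name: my-app
spec:
  tag: registry.example.com/my-team/my-app    # where rebuilt images are pushed
  serviceAccount: registry-credentials        # placeholder; grants registry access
  builder:
    name: default-builder                     # a builder resource defined elsewhere
    kind: ClusterBuilder
  source:
    git:
      url: https://github.com/example/my-app
      revision: main
```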
[On providing a jar instead of source] So the question is whether there's a way to use buildpacks to do less of the build process — for example, build a jar file yourself first and then provide that. As an example, the Cloud Foundry Java buildpack is actually a selection of somewhere between ten and forty different buildpacks, depending on the distribution. If you provide source code, it turns that into a jar file first; if you provide a jar file, the buildpacks that compile source code switch themselves off during the detection process — they say "nope, I'm not necessary", and because they're marked as optional they don't fail the build — and the other buildpacks take it from there. So you can throw whatever you want at it and it will figure out what to do with it, but in a modular, transparent way, not some big blob of complexity where it's hard to understand what's happening.

On the Cloud Foundry and Heroku side we want to provide a rich variety of buildpacks covering lots of language ecosystems, so if you're okay with using those buildpacks, there are usually only small tweaks you need to make to your build process, and that should be easy. There are also a lot of tools out there — and more being written — that use Cloud Native Buildpacks to provide solutions that make them really easy to adopt. The goal is that it should be much easier to adopt Cloud Native Buildpacks in a new organization than to create your own collection of pipelines and Dockerfiles and base images; it automates that process, gives you a lot more visibility into what's going into your applications, and solves that problem for you with a managed system.

On modularity: like I said, the Cloud Foundry Java buildpack is a lot of modular buildpacks, which lets you do things like write a small buildpack that just overrides some behavior, or swap one out. It's a JDK buildpack and a Maven buildpack and a Gradle buildpack; it's an npm buildpack and a yarn buildpack, not a giant Node.js buildpack anymore. That was a problem we really had with the old-style buildpacks, and we think we've provided a nice solution here — but we're definitely looking for feedback and want people to try it out and tell us.

I think it depends on how you want to adopt buildpacks. You can use the third-party buildpacks from Cloud Foundry or Heroku, and then you accept their constraints — this is the stack that Heroku works with, and so on. You can also create your own buildpacks: if you have your own constraints, like "we only work with this JDK vendor on JDK 8", the buildpack might be a hundred lines of bash, and by comparison to something like a Dockerfile, you get to write that buildpack once and use it on all of your apps, as opposed to having snowflake Dockerfiles. So the answer is: it depends. Any other questions? All right, thanks.

[Applause]
Info
Channel: CNCF [Cloud Native Computing Foundation]
Views: 2,300
Id: j9Ak5YLrihU
Length: 46min 25sec (2785 seconds)
Published: Thu Nov 21 2019