Intro to Cloud Native Buildpacks - Terence Lee, Heroku & Emily Casey, Pivotal

Captions
Welcome to the Cloud Native Buildpacks intro talk. Today we're going to be talking about the Cloud Native Buildpacks project, which is currently in the CNCF sandbox. We're going to talk about the tooling this project provides and how it can help you turn your application into a container image. I'm Emily Casey, I'm an engineer at Pivotal and I work on the Cloud Native Buildpacks contributor team at Pivotal; I'm also a maintainer on the platform and implementation sub-teams of the project. Hi, I'm Terence Lee. I work at Heroku as an engineer, helped create the original buildpacks API, and I'm one of the founding members of the Cloud Native Buildpacks project, along with some of the other fine folks in the front of the room.

So with Kubernetes we have a great way to orchestrate and run images, but you actually need images to start with to run anything in a Kubernetes cluster, and buildpacks provide a great way to take your application and turn it into a Docker image that you can run in your cluster. It maps layers logically to the layers that make sense in your application itself, with essentially zero Cloud Native Buildpacks-specific configuration that you have to do to make that application work. It's built on top of the older buildpacks idea that was invented in 2011 by Heroku, and the ideas that made Cloud Native Buildpacks what it is today, which started in 2018, come from concepts that both Heroku and Pivotal have been running in production over the last seven or eight years. We wanted to bring all of those great ideas and methodologies to the container ecosystem. At the end, when you actually build an image, you get an image that gives you reproducible builds, lets you inspect the image without unpacking it, and has these logical layer mappings.

As a project there are a few important parts to Cloud Native Buildpacks. At its core is the specification itself, and we like to call the lifecycle the implementation of the spec. There are two different contracts there: the buildpack API, which is the part where the lifecycle will actually go ahead and run the buildpacks and execute them to produce that image, and the platform API, so a platform can take the lifecycle, run it, launch the resulting images and processes, set up the environment, and things like that. The platform that we provide out of the project is called pack, which is geared towards local development; it's our reference implementation of what a platform could look like for local development with a local Docker daemon. This is a high-level diagram that shows taking source, running it through pack, and getting the resulting image, and Emily's going to run through a demo that walks through what that looks like, so you'll have a much better understanding of what that diagram means.

All right, so let's dive right in. I have a sample Node application here and I have the pack CLI installed. I'm going to run pack build and provide it with a tag in a registry where I can export this app. I can provide the --publish flag, which tells pack to export the image directly into the registry, which is the most performant way to run pack. So let's give this a go.
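(For reference, the command being run is roughly the following; the registry and image name are placeholders, and flag spellings can differ slightly between pack versions.)

  $ cd sample-node-app
  # build the app and push the resulting image straight to the registry
  $ pack build registry.example.com/demos/sample-node-app --publish

  # without --publish, pack exports to the local Docker daemon instead
  $ pack build registry.example.com/demos/sample-node-app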
What pack is doing here is taking the lifecycle and running it in a series of Docker containers in my local Docker daemon. In a minute we're going to dive into the details of what is specifically happening in each of these lifecycle steps; you can see the headings for the different lifecycle phases. But zooming out at a really high level, what we've done here is select a group of buildpacks during this lifecycle phase called detection. So here we have the node-engine buildpack and the yarn buildpack and their versions. You might notice that this version is called "old", which is something we're going to come back to and update later — if you're clever you might have guessed that. Now, in the building phase, each of these buildpacks executes and contributes dependencies that end up in the final image: the node-engine buildpack is contributing Node.js, and the yarn buildpack is contributing node modules. They are writing these dependencies to specific locations in the filesystem, and at the end the exporter wraps all of this up to create an image.

So now that I've built this image, I can use pack to learn a little bit more about it. Let's use this pack inspect-image command, and we can see some info here: we see which buildpacks were used to create the image and the dependency layers they added on top of what we call a run image, which provides the operating system. You can see information about the tag where we found the run image and the specific run image we used at build time. Finally, if we provide the build-materials flag to this command, we can see more details about the specific dependencies that were installed into this image. So, for example, we can see that we installed Node 10.16.3, we can see the URI it originally came from and the licenses associated with it, stuff like that.

Zooming back out, now we can look at this diagram of what's going on in pack and understand it a little more deeply. When pack executed those lifecycle steps, the containers it ran were created from an image called a builder image, and the builder image is a packaging of a set of buildpacks and a lifecycle they're compatible with. So pack adds the source code to containers created from the builder image, runs the lifecycle phases, and in the end you get an app image that starts with a run image — which comes from our stack and represents the operating system — with semantically meaningful layers on top: one for each dependency, the application layer, and any modifications the buildpacks might have made to your application.

Diving a little into what happens in each lifecycle step: during the detect phase the lifecycle executes the detect binary in each buildpack, and what happens during detect is the buildpack looks at the application source and determines whether and how it should run. So for example, if I have a package.json and a yarn lock file, the node buildpack is going to say "yes, I can build this app", and the yarn buildpack will too, and together they'll make a buildpack group that will execute at build time. In a case where, for example, we didn't have a yarn lock file, we might get the node and npm buildpacks collaborating to build the application. Detection happens in parallel in order to be performant, but we still want to allow the buildpacks to collaborate with each other, because one buildpack might need a dependency from an upstream buildpack in order to execute its piece of the build. This is facilitated with a concept called the build plan.
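(As an aside, here is a minimal, hypothetical bin/detect script for a yarn-style buildpack, just to make the build plan idea concrete; the entry names and the exact plan format follow the general shape of the buildpack API as described in this talk, not any particular published buildpack.)

  #!/usr/bin/env bash
  # bin/detect <platform-dir> <build-plan-path>
  # runs with the application source as the working directory
  set -eo pipefail
  plan_path="$2"

  # only opt in if this looks like a yarn project
  if [[ ! -f yarn.lock ]]; then
    exit 100   # detection "fails" for this buildpack; other groups may still match
  fi

  # declare what this buildpack requires from earlier buildpacks, and what it provides
  cat > "$plan_path" <<'EOF'
  [[requires]]
  name = "node"

  [[provides]]
  name = "node_modules"
  EOF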
As the buildpacks detect, they can write into the build plan dependencies that they either provide or require. After detection has run on all the buildpacks, the lifecycle looks at the output to find a build plan where all the provides match the requires, and the first group where that is true is the group that's selected; those buildpacks will run.

On this first build we're going to skip over restore and analyze, because they become important for efficiency on the rebuild step, so we're going to move ahead to the build phase of the lifecycle. When the lifecycle executes build, it will call the build binary in each buildpack, and this is really the meat of the build, and what most people think about when they think about buildpacks. The buildpacks will run, they'll look at the source code, and they will provide dependencies like Node.js or yarn or node modules in layers, which at this point are directories on the filesystem. Once we have those directories on the filesystem we can turn each of them into a layer, and so the export phase basically converts all of that into the resulting image that you can run at the end of it. Once we've done export you actually have a runnable image that you could docker run, run in your Kubernetes cluster, and what have you. And finally there's the caching phase. This is done to take anything that a buildpack author deems useful for saving steps or processing time in a future or subsequent build; an example might be the node modules — once we have those, you don't have to do a fresh install every time.

Taking a different view and slice of what this looks like, if we look at what the Docker image layers could look like here on the right-hand side: in the export phase, in the buildpack example that Emily showed off, we have the node_modules directory with all the Node library dependencies included in it; the node engine, which represents the runtime itself, like the version of Node you're going to run on — that was 10.16.3 in that example; the application-specific code, which lives in the workspace directory, so basically the source code and any mutations the buildpack needs to make on that directory to make the app runnable; as well as the stack image; and then any of the layer and build configuration sits in a config layer. Those make up the OCI image that gets run. In addition to that, we have these cache layers, which can be used and kept separate from the actual exported image. So as a buildpack author you can have, say, the npm cache live in a cache layer, because you don't actually need it to boot or run your application; we have this separate set of things that we want to cache that we may or may not want in the final exported image.

And so with that, we'll look at a rebuild — in this case not because I made changes to the app. So let's come back to our app. We pointed out that we installed Node 10.16.3, so let's say I want to update the minor version of Node. When I first ran this pack build I didn't supply a builder, because I have a default builder set on the system, but let's take a look at the default builder that we're using. We can use the pack inspect-builder command to look at all the buildpacks on this builder and the order they detect in, and we can see this old version of the node-engine buildpack. I have an updated builder here that we can look at, and it's almost exactly the same, but it has the new version of the buildpack.
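(On the command line, inspecting a builder and rebuilding against it looks roughly like this; the builder names are invented for the example and flags may vary by pack version.)

  # list the buildpacks, detection order, and run image a builder carries
  $ pack inspect-builder registry.example.com/demos/node-builder:new

  # rebuild the same app against the newer builder and push to the registry
  $ pack build registry.example.com/demos/sample-node-app \
      --builder registry.example.com/demos/node-builder:new \
      --publish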
So let's say I want to rebuild with this new builder: I can add the --builder flag here and supply the name of this builder image. Now, generally you wouldn't update your builders by pointing at a different tag; you'd be pointing at one builder tag that a group like Cloud Foundry or Heroku would constantly be publishing buildpack updates to, but we're going to do it like this for the sake of the demo. We can see on rebuild that our new node-engine buildpack was detected, and that we're restoring cache layers and metadata — we're going to describe a little more about what's going on there in a second. During the build phase, the node-engine buildpack sees that the layer is out of date compared to the version of this dependency it wants, so it will install the newer version of Node. The yarn buildpack can actually go ahead and just reuse the cached layer, because this version of Node is ABI-compatible with the previous version. When we come down to the exporting phase, we can see that only one new layer with an actual dependency in it had to be added and uploaded; the config layer gets regenerated every time. We can pack inspect our image here and look at the dependencies, and we can see that just by pointing at a new builder we were able to get updates to our dependencies.

So let's dive into a little more detail about what was happening in each of those lifecycle containers. Detect is the same as last time. During restore, layers that we put into the cache are returned to the filesystem in order to help buildpacks build more quickly, either because they need those dependencies at build time or because it's easier to make a small change to the cached layer than to regenerate it from scratch. During analyze, the analyzer looks at the actual OCI image that was generated last time and writes metadata about the layers to the filesystem. An example of this might be a node.toml file describing the version of Node that was in the previous image, including metadata like the version that was in that layer. The reason we do this is so that buildpacks can use this information to decide whether they want to bother regenerating a layer or not: if a buildpack says "this version is the version I want, and this is an exported layer", it can just leave that file and do nothing, and the layer will be reused from the previous image. Finally, the build phase executes like we saw in the first build, except that now the buildpacks can use the metadata to only rebuild layers that they want to change, and they can use the cache to speed up the build. What's different this time with export is that we only have to export or upload layers that have actually changed, so anything that hasn't changed — like the node_modules directory from before — doesn't need to be re-pushed to the registry, because we're reusing that existing layer from the registry itself; we can speed up export by only doing the work we actually need to do. And then similarly with caching, there are checksums for each of the different cache layers we want to use, and in this case we only need to update the node-engine cache layer, because that's the only one that actually changed — we didn't even run yarn install to update those dependencies, we're just reusing that layer.

So, going through the layer breakdown example again: as Emily was saying about restore, we actually go ahead and restore those cached layers, if they're available, and make them available to build.
With analyze, we're going to go ahead and read that configuration — that node.toml file Emily was talking about — and provide it so the buildpack can make intelligent decisions during the build process about whether it needs to recalculate those layers. Then, once we've set up and done the build work we need to do, we can export, and in this case, for example, we're just updating the configuration and the node engine, because in the yarn buildpack we're not actually running yarn install, we're just reusing that layer. And then with caching there's a similar optimization: since we're not changing any other cache layers, we're only going to update the node-engine one in the cache, so an updated version of it can be used in future builds.

So that's what it looks like to just do development, where you're building images and running them, but once you're doing that, you're probably going to be putting this into production and running it. One of the easiest ways to illustrate some of the day-two, or production, operations you're going to do is patching this Node.js application in case there's an operating-system-level CVE you have to handle. Snyk released an article this year about how the top 30 Docker images have all these vulnerabilities in them, and you can actually just swap out the underlying base image we've been talking about to mitigate a good chunk of those CVEs. Just keeping your operating system up to date handles a good amount of the vulnerabilities that people have been finding in these very popular Docker images. So Emily's going to walk through what that looks like with Cloud Native Buildpacks, using pack.

All right, so let's come back to look at our image here. The operating system and the operating system packages are in what we call the base layers, which come from the base image. Here we can see that we originally looked at one of these run-image tags to find a run image to build on top of — the tool will select the run image that's on the registry you're exporting to, for maximum efficiency — and this metadata includes the specific run image we built on top of, like the image digest. I actually have an updated version of this run image here, and I'm going to push it up to my registry, and now we can rebase this application, and it should replace the base layers on this image with the new base layers we pushed to the registry. I'm going to do it with the publish flag — oh my gosh, okay — you can see it's much faster when you're operating directly against the registry. So now that we've rebased this image we can take a look, and — I don't know if you remember the SHA from last time, you probably don't — but the base layers have changed. And I'm realizing we forgot to show that this image actually runs; now that we've made a couple of changes to it, let's actually pull it down and run it. Not that one. I'm just giving it an environment variable for the port to run on, so let's give this a go. You probably have an existing thing running on that port — or stop it — let's just do a different port here for a second. All right, I didn't clean up from practice. Okay, there we go. So let's come over here — this application is not very interesting, I just grabbed a Cloud Foundry sample application — but you can see that just by running pack build we were able to generate an image that we can run, and these updates have allowed us to keep the image working.
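(The rebase and the quick smoke test amount to roughly the following; image names, tags, the port, and flag spellings are placeholders and may differ between pack versions.)

  # push the patched run image, then swap the app's base layers for it,
  # working directly against the registry
  $ docker push registry.example.com/demos/run:bionic
  $ pack rebase registry.example.com/demos/sample-node-app --publish

  # pull the rebased image and make sure it still runs
  $ docker run --rm -e PORT=9090 -p 9090:9090 \
      registry.example.com/demos/sample-node-app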
So let's talk about what is happening during rebase. An image normally has OS layers, language runtime layers, and application layers on top of that, and we maintain a logical separation between the application layers and the operating system layers and operating system packages. So imagine there's a CVE in the operating system: we can upload to the registry a new run image that contains a patched operating system, and these two images have a guarantee between them called ABI compatibility — application binary interface compatibility. What that means is that all of the application layers we've built can run exactly as they are on top of the new base layers; because we're building on top of Ubuntu Bionic, this is a guarantee provided to us by Canonical. So what we can do is create a new image directly against the registry, not by rebuilding or re-uploading anything other than a config file that describes an image combining the new base layers with the application layers, and a manifest file for that image.

So that's a great example of doing it for a single app, but if you're rolling this out in production you're probably not doing it for a single application, you're probably doing it at scale with a whole cluster of applications. Potentially, if you run into a CVE that you have to patch, you have to deal with that for every single application in your fleet, and that means you have to figure out some mitigation strategy for doing this with Docker. If you're doing this with Dockerfiles, traditionally that means for every different base image you have to figure out a strategy for mitigating the CVE, per base image. Then, in the best-case scenario where it's all the same base image, you still have to do a rebuild across the entire fleet for every type of application you're running — which, even with a small fleet, might be a few hours to a few days just to do the rebuild before you actually roll it out. In the worst case, if it doesn't rebuild cleanly, you have to find engineering time to go fix your application to account for the changes in those patches. So we're going to take a look at what that looks like with kpack and Cloud Native Buildpacks.

We talked about how, by implementing the platform API, any platform can take advantage of the core functionality of the lifecycle and the Cloud Native Buildpacks building technology. We talked about one platform, which was pack: pack provides a UX optimized for a developer workstation; it's a local CLI with an imperative flow. Now we're going to talk about kpack, which is a Pivotal open-source project that uses a set of custom Kubernetes resources and controllers to manage many images. It provides a declarative API, and it's well suited to rebuilding many images with the Cloud Native Buildpacks lifecycle. So it's the same core building technology, but wrapped in a UX optimized for a different use case. Let's take a minute and look at an architectural overview of kpack. The boxes highlighted in teal are custom resources in kpack. At the heart of kpack is the image resource: in the image resource you can declaratively describe the image you want to see living at a particular tag in the container registry.
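(A rough sketch of such an image resource, applied with kubectl; the apiVersion, group, and field names below are assumptions for illustration — they have changed across kpack releases — so check the kpack docs for the real schema.)

  $ kubectl apply -f - <<'EOF'
  apiVersion: build.pivotal.io/v1alpha1    # group/version is an assumption; verify against your kpack release
  kind: Image
  metadata:
    name: demo-app-1
  spec:
    tag: registry.example.com/demos/demo-app-1   # mutable tag the built images should live at
    builderRef: demo-builder                     # field name is illustrative
    serviceAccount: registry-credentials         # credentials for pushing to the registry
    source:
      git:
        url: https://github.com/example/demo-app-1   # placeholder repo
        revision: master
  EOF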
So you can say: I want source from this branch of my git repo, and I want it to be built with a specific builder. Then, every time reality falls out of date with the desired configuration — either because a buildpack gets updated or the source changes — kpack will schedule a build. So an image is a mutable config, and every time the system reconciles, it makes an immutable build; builds map one-to-one to digests in the image registry, and the image describes the desired state at a mutable tag.

So let's go ahead and take a look at — I'm going to repeat the question for the recording. The question is: what goes to the right of this diagram? This system is generating builds, and that's great and all, but what is responsible for knowing what to do with certain builds, and do I want to do something different because a build happened for a different reason, like a rebase? We sort of think of that as outside the scope of the Cloud Native Buildpacks project, but there definitely are people building tools that use things like this as a building block in the process of creating a higher-level abstraction. The riff team, for example, uses kpack as a component in their function service: they plug this set of CRDs in to generate a bunch of images, and then a larger set of CRDs describing higher-level concepts, like a function or an application, takes care of deploying them. But the buildpacks side takes responsibility for providing all the information a platform would need in order to make those determinations — so, for example, creating a derivative image config: "I want to rebase this one particular image that I have to put into prod, not just what's on master", stuff like that. Yeah, the build resources are annotated with the reason for the build. No problem, you're making us jump ahead here — we're going to get to other platforms later.

All right, sorry — let's rebase multiple images. I have kpack installed here, and I've taken the liberty of installing a builder image: I created a configuration that points at a tag, and kpack has populated it with the buildpacks it found on that image. The builder image has a reference to a run image tag, and we can also see the particular digest reference of the run image at that tag. I've also created a couple of images here, demo apps 1 through 3; they're simple Java apps, and we're going to see what it would look like to rebase all of your images. Right now it's only three, but this could easily scale on a large cluster to many images being managed this way. First, let's talk about how these images got built, because I think that's interesting. If we come over here, we have this logs utility, and we can look at the original build that was used to create the first version of this image. This should look very familiar, because it's almost exactly the output you were getting from the pack CLI: kpack ran the same lifecycle in a series of containers in a pod on Kubernetes, but the core technology being used is the same. This is a Java app, not the Node app that we built originally. Now, because this app is being built with the same lifecycle, it's annotated with the same metadata, so we can inspect this image and see the Java buildpacks that contributed to this build and the specific run image we have here. Like last time, I actually have an updated version of this run image that I'm now going to push up to the registry. Because kpack is watching the builder image at the builder tag,
and that includes a run image tag, it should see that the digest behind this run image tag has changed, and it will update the builder for us. It's going to take a couple of seconds, so we're just going to watch — there it goes. All right, so now that this run image has changed, kpack knows that all of these images need to be rebased in order to match their desired config. If we had listed the builds before we ran this, we only had one build for each app, but now we have extra builds — these are rebases. And since the earlier question was about seeing the reason for the build, I wasn't planning on doing this, but I'm going to show you, if I can type. All right, you can see the reason for this build was STACK, meaning that it's a rebase, and if we inspect the image we can see that we have updated the run image. So simply by pushing this one run image, kpack was able to rebase all of the appropriate images on my cluster. For this example it was a stack update, but this would work with a buildpack update as well.

Thanks, Emily. So I guess I'm just recapping a little bit — we were talking about the context before the demo and the questions — and, you know, we were talking about how long it would take to mitigate this at scale if you're using Dockerfiles in production. Emily showcased doing that rebase: for her it only took a few minutes, but you can imagine it takes a little longer if you have a bunch of different builds to do across an entire fleet. This is something that scales relatively well and can be done relatively quickly, especially in comparison to what you have to do with a rebuild per app. So this is rebasing with kpack — just another platform that is leveraging Cloud Native Buildpacks with a totally different UX, like we were talking about.
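(Since image configs and builds are plain Kubernetes resources, checking what kpack did is ordinary kubectl work — something along these lines, where the build name is made up and the exact place the rebase reason is recorded can differ by kpack version.)

  # list the image configs and the builds kpack has created for them
  $ kubectl get images
  $ kubectl get builds

  # dig into one build to see why it ran (a rebase shows up with a stack-change reason)
  $ kubectl describe build demo-app-1-build-2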
And so really, at the end of the day — going back to some of the earlier questions — Cloud Native Buildpacks are meant to be building blocks, where different platforms or services that want to take on image management can have their own UX and parameters for what they want to do; the project just provides the primitives to actually do image building for things like applications. Both Pivotal and Heroku are working on various Cloud Native Buildpacks platforms to take advantage of some of this. With the Google Cloud Run project, there's a thing where you can click a button and it will actually build an app using buildpacks or Dockerfiles. Project riff is a functions-as-a-service project that basically uses Cloud Native Buildpacks as a building block to provide a higher-level abstraction for building functions — at the end you still get an OCI image that you can run, but it's using their own set of buildpacks underneath to make that happen. So buildpacks are being used as a foundational piece to provide this image-building capability across a bunch of different platforms.

Hopefully, through the demos in this presentation, you were able to see that buildpacks have a lot of pretty great properties around them. When we talk about reusability, one thing that was nice in the kpack demo is that there was the same unified build pipeline — the lifecycle — using a very small set of buildpacks, whereas if you've ever deployed Dockerfiles in production at your company, you probably have this sprawling snowflake of Dockerfiles per app: you're checking in a Dockerfile per repo. In all the demo apps, and in the apps people are using buildpacks on, you're not checking the actual buildpack into your app — it's a separate thing that can be reused across multiple applications. Through the build configuration and the caching, we can choose what work actually needs to be done when rebuilding, both for the image itself and for pushing to the registry, so things that can be fast should be fast. You saw through the Node example that we're using multiple smaller, modular buildpacks to compose the Node buildpack, which means that if you need to replace a certain component — like a few years ago, when io.js was a thing and maybe the platform you were working on didn't support it, you could write your own io.js engine buildpack and swap that in; or, a more realistic example, if you're working with Java and you need to replace the runtime, you can replace the JDK or JRE buildpack that installs it — say you want to use Amazon's own JDK — and just write a buildpack that leverages the rest of those components but replaces that one piece. And finally, they're secure out of the gate, with authors who are maintaining and patching this stuff. With the metadata that's provided you don't have to unpack the Docker image itself to look at it for compliance or other reasons, and with features like rebase you can roll out day-zero patches relatively quickly.

So you should check it out: go to the buildpacks.io website. The easiest way to get started is through the pack CLI that we've demoed throughout this presentation; the latest version is 0.5. Most people don't have to go and write a buildpack to get started — you can use one of the builders that are suggested in the pack
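(Getting started locally is basically installing the CLI and picking a builder; the subcommand names below are from roughly the 0.5-era pack CLI and may have been renamed since, and the builder shown is only an example.)

  # list the builders the pack CLI suggests
  $ pack suggest-builders

  # set one as the default so a plain `pack build <image>` works without --builder
  $ pack set-default-builder cloudfoundry/cnb:bionic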
CLI: you can use the Heroku or Cloud Foundry builders, which support most of the languages you'll probably care about or work with today. If you go to the buildpacks.io site, you can check out the documentation for getting started, which walks through a lot of the stuff we've talked about in this presentation. We're pretty active on Slack, working with various contributors, so if you want to get involved or have questions, we're fairly responsive on there; there's also a mailing list. And we have a deep-dive talk being done by Stephen and Joe tomorrow at 5:20, where they're going to dig into production buildpacks and how they work, so you can get a better feel for what it actually takes to write this stuff. Thanks. Looks like we have about seven minutes, so does someone have questions? If you do, please come step up to the mic so it can get recorded for the video.

This is a relatively quick one: some of this seems to overlap, at least conceptually, with the Cloud Native Application Bundle spec. Do you know if anybody's working on combining this with that, or do they conflict with each other, or complement each other, or are they totally unaware of each other? Right now we're aware of the CNAB project. I don't think we're doing anything that conflicts with it; we're also not doing anything that makes a CNAB out of it — you can't do a pack build and get a CNAB out of that. But I think CNAB has a significantly bigger scope as far as packaging concerns go, because it talks about both what you're installing locally on your machine and how — it's a huge specification with a pretty generic scope of what it can contain — and I think the buildpacks project is much more concerned with deploying services and applications, in those use cases, to production.

All right — could you explain how this might integrate with your CI/CD server? Do you just run pack commands in there, or how do you run it? You can, as a simple first pass, start by running pack in a CI/CD server, but the more robust way is to integrate the lifecycle directly with some of these CI/CD tools. One example is that we provide a template in the Tekton catalog — Tekton is an example CI/CD tool — and it will just execute the lifecycle directly in a series of containers, so the Tekton tool itself does the orchestration. That's nice because you don't have the problem of running Docker-in-Docker like you would if you used the pack CLI, so directly integrating the lifecycle is better where it's possible. That being said, it's pretty easy to leverage and run it on Circle or Travis or some of the well-known third-party CI services.

Hey — it seems like the process would be, if I'm a developer, that I'm still coding locally on everything, then using pack or kpack to deploy it out to my cluster. Have you looked at — not integrating with, but taking the same approach as — some of the other tools for local development? Things like Okteto and Telepresence, things that can sync changes over to a running container — is that completely out of scope of what you're trying to do with buildpacks? Does that make sense, what I'm asking? So you're talking about, as you're developing, syncing and updating on the fly? Yeah, it's definitely something we've talked about; I don't think we have a proposal yet. We've talked about it — in the spec there has
actually been some development around things we've discussed, but it's not implemented yet, and there's probably still much more work to be done on it. So I don't think we have anything that does exactly what you're talking about, but it's definitely not out of scope; it's just that a lot of the focus today, since we're still in beta, has been on production use cases and productionization — getting production-level images, running them, and having a good process for maintaining those things — but we're definitely interested in better stories around testing and development as well.

Going back to the CI/CD question: does the lifecycle also include QA automation scenarios and all that, or is the lifecycle only about packing and exporting the image? From a CI/CD perspective, these are called pipelines, right — you have the build stage, you have an automation test stage, acceptance test or whatever — so is that incorporated into the lifecycle? That's my question. No. Okay. So one example of how you might fit into a pipeline is: say you have source code, you might want to run your unit tests first, then you use the lifecycle to build an image, and that image becomes your immutable artifact; then you can deploy it and run acceptance tests against it, and that artifact is what moves down the pipeline. But I don't think we're thinking, in the near future, about running tests as part of the lifecycle.

How do we distribute builders and buildpacks within an organization? How do I make sure that everyone — what's the distribution mechanism to make sure people in my organization don't have an older version or the wrong version of the buildpack? So if you have, say, a builder image, and you have this image on a registry, all the tools we've demoed today will always try to pull the newest version from the registry before running, so it does that by default. You can also distribute buildpacks in what we call buildpackages, which is our new specification for distributing them, and that all works by the same mechanism — packaging buildpacks in an OCI image — and because we always pull, it checks for new ones on each run, so you never have to worry.

Yeah — I noticed that your version is beta; how mature is this project for usage? Is it production ready? I mean, yeah, so we're getting to a point where stuff is, for the most part, not backwards-breaking. I think we just made a significant backwards-breaking change relatively recently, but for the most part, going forward, we're not doing a lot of breaking changes, and any breaking changes tend to fall on the buildpack API side. So I guess, for production-level usage, it kind of depends: if you're using the Heroku or Cloud Foundry buildpacks, they've been used for a while, so it depends on which buildpacks you're using; if you're writing your own, it's much more dependent on whether you feel the buildpacks you're writing are also production-ready in that sense. As far as the lifecycle and other things, we definitely have cases where people are using it in production and leveraging it. So yes, we've had to make some breaking changes, but we're also conscious about those things, and we do versioning and err on the side of not
wanting to actually break users that are running it now. Also, if you're using a maintained platform, that can insulate you from the breaking changes: we might break one of those APIs right now, at this stage in beta, but the platform is capable of knowing which API it's running against and doing the right thing, so for most use cases you wouldn't even feel many of the breaking changes — it mostly falls on platform authors.

And if I write custom buildpacks, can the APIs that they rely on change in future versions? Yeah — well, one of the things we provide is a library called libbuildpack for Go, and for most breaking-style changes we will deprecate things, so the library itself will continue to work, for the most part, in the next release, and give you time to update your buildpack to account for those changes. I think there's only been one time where we haven't done that, because it was really hard to maintain both of those things at one time, but oftentimes, if we deprecate an environment variable in favor of a new one, libbuildpack will just account for both, so you don't really have to make those changes.

Thank you for the talk. A question: what accounts for the security modules, like AppArmor or seccomp, in the build process — or is that the responsibility of whoever provides the container image? I think that's definitely something that is taken into account by whoever provides the container image. So when you're providing the stack, or the base or run images — the buildpacks themselves don't, out of the box, provide an app-security mechanism; how that's handled sits with the stack rather than the project — but yes, the container image should have those rules and policies in place.

All right, sorry — hi, you said that your base images are based on Ubuntu; do you update these images? So in the example we were running, the base image was based on Ubuntu. The project itself doesn't provide maintained stacks, but both Cloud Foundry and Heroku do provide maintained stacks that push updated base images within a certain window after every CVE notification. Yeah, and those are kept up to date because we maintain them for our own customers, independent of the Cloud Native Buildpacks project, but they're free to use.

All right, looks like that's it — thank you. [Applause]
Info
Channel: CNCF [Cloud Native Computing Foundation]
Views: 3,983
Id: SK6e_ZatOaw
Length: 47min 42sec (2862 seconds)
Published: Fri Nov 22 2019