Looking Under the Hood: containerd

Captions
Welcome to this next talk in the Edge track. It's a pleasure for me to welcome Scott Coulton. He is a principal software engineer at Puppet and also, which makes me really proud, a Docker Captain. My name, by the way, is Gabriel Schenker; I'm a principal content developer at Docker. Please take it away and talk about containerd.

Good afternoon, and thank you very much for coming to my talk. I know there is actually another awesome talk on at the moment with Laura and Steven, so thank you for coming. When I originally wrote this talk I called it "Looking Under the Hood: containerd", but what I really should have called it was "containerd: what does it mean for me?", because the talk is actually going to be about how you can consume containerd in a software sense: how you can build tooling around it, how you can develop software on it, and how you can deploy and deliver software on the bare-minimum product that is containerd. A little bit about me: I'm one of the Docker Captains, I work at Puppet, you can find me on GitHub and on Twitter, and as you can tell from the accent I'm actually from Sydney, Australia, so it's a long way from home.

The agenda today: first of all, before we can deliver software on top of containerd, we need to know what containerd actually is and what it gives us; then we'll find out why containerd is a critical part of the container ecosystem, what upstream projects use containerd, and how containerd is changing the way we build container products. We'll go through all of this and do various demos throughout the talk.

So, what is containerd? I just plagiarized this straight from the containerd website, but basically what it boils down to is that containerd is the industry-standard core container runtime: all the other, higher-level container tools, like Docker itself and Kubernetes, are built on top of containerd. But what does containerd actually give you? Once you have containerd running, what is there to see? There is a daemon, and a CLI called ctr.
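Since the whole point of the talk is consuming containerd in a software sense, it may help to sketch what hooking into that daemon from code looks like. This is a hedged sketch using containerd's Go client, roughly as the API stood around containerd 1.0; it assumes a containerd daemon is running at the default socket path, so treat it as illustrative rather than something you can run anywhere:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the containerd daemon over its local UNIX socket;
	// no traffic ever traverses the network stack.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Every containerd operation is scoped to a namespace.
	ctx := namespaces.WithNamespace(context.Background(), "default")

	// Pull an image by its fully qualified reference and unpack it,
	// ready for a container to be created from it.
	image, err := client.Pull(ctx, "docker.io/library/redis:latest",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", image.Name())
}
```

The ctr tool used in the demos that follow is essentially a thin CLI over this same client and the same gRPC contract.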
How many people know what gRPC is? Awesome. The last time I did this talk only one person put their hand up, so I spent the next five minutes talking about gRPC. So, there's a daemon that exposes the gRPC API over a local UNIX socket. That's really important, because there's no traffic traversing the network stack; it's localized only. And there are protobuf specs for the components, so if you want to hook into containerd through code, you've got a set of specs, a contract, that you can start writing your code on top of. This is basically what containerd does and what it looks like; this is its architecture. As you can see, there's a gRPC interface, you've got metadata around what the containers look like, you've got tasks and events, you've got the runtimes, which could be runc or whatever else, underneath it, and then you've got the lower-level primitives in the OS, the kernel-level things that you would boot up depending on what the application sitting on top of containerd needed. So this is a very high-level architectural view of what containerd actually is.

This is ctr. Don't get it confused with the Docker CLI. It's a tool that is currently unstable, but it is really, really handy if you're debugging or building tools on top of containerd. It allows you to have a look at what is actually happening inside containerd: you can look at the processes that are running, what containers you've got, you can pull images, you can modify namespaces; a whole heap of the things you can do with code against containerd. So it's a really handy troubleshooting tool. Now, I was going to write code to do everything today, but I'm actually going to use ctr, because it makes it a lot easier to explain what I'm doing than putting Go examples up on the slides.

So, what does containerd give us? Why would we want to build tools on top of containerd? Why were we looking at it? We were actually
looking, at Puppet, at a few different proofs of concept. We wanted to build a cloud-native application; it could have been Puppet open source, it could have been anything, we were just doing R&D on the minimum viable product we could build on a container runtime without being tied to, say, the Docker release cycle. So we needed something lower-level and more LTS-like, like containerd; there are obviously not as many changes going into containerd as into the Docker runtime. We wanted to see what the minimum viable product we could ship to a cloud image was, so containerd was what we started doing our R&D with. What we wanted was OCI image-spec support, so we could use any image that was already upstream or that we wanted to build. We wanted to use runc; I know there are other runtimes out there, but it just made sense to use runc if we were already using containerd. We needed image push and pull support, because we thought at the time that if we were going to deploy this onto an OS someone already had, we could install an RPM or an apt package that would install containerd, and it would then pull the images from an upstream registry somewhere. These are all the sorts of things we were toying around with. And obviously we wanted management of namespaces, because we were thinking: can we use user namespaces, and, if it didn't need networking as such, could we get away with running rootless containers? This is all, as I said, a theoretical proof of concept, not something we're going to release, but these were the sorts of things we were playing with at the time, to find out what minimum viable product we could get away with, and what we could do in the future if we needed to deploy any real-life applications this way. So I'm just going to show you a quick demo now, and there are
actually going to be a few demos. Of course, it's really late back home, so I've cheated and put all my commands on the side here; the jet lag has been bad. Basically what we're going to do here is just pull an image, using ctr pull. You'll notice that the image name is the fully qualified name. That's because when you do a docker pull, Docker appends that for you, but containerd is not opinionated about which registry you're going to. It could be in Kubernetes using something like the GCR registry, or it could be Quay, or any other registry, so you have to give the fully qualified name of the image. If we just do the pull... there you go, we've just pulled the Redis image. That was pretty basic stuff, but as you can see, you've got all the low-level primitives you need to pull something from upstream, so your CI process wouldn't change whether you were deploying onto the Docker runtime or the containerd runtime, as long as you're using an OCI-spec image. And we can just run that now. I'll make this bigger so you can see, and then we'll just run that image. There you go: we've just run Redis with no Docker
Engine, just on top of containerd. So now you can see that we've got the low-level primitives we need to run an application; that's a fully working version of Redis, and as you can see there, it's ready to accept connections. That gives you the building blocks to start to deploy and ship applications that may need to be immutable; it might be a complete package that's immutable. There are a lot of use cases for building applications on top of containerd and shipping OCI-spec images for things that are totally immutable. Let's just go back to the slides. Basically, what I wanted to show there was that you have the low-level primitives to pull an image and run a container from that image. So now we know you can do the basics of what you have in Docker; you don't have any of the fancy stuff, obviously, like a remote API over REST or any of the other things that come with swarm mode. I actually almost had a demo working last night where I was going to run SwarmKit on top of containerd, and you can actually cluster it; it's kind of stable, kind of not stable. No, I didn't have enough time to get it ready for the demo; maybe the next time I do a talk I'll have that ready. But yeah, you can do some really interesting things.

So what other upstream projects use containerd, and how does the architecture allow it to be pluggable? You've got here the API client, which is a gRPC client: cri-containerd for Kubernetes, Moby, and more. I actually saw a really interesting talk today where Microsoft had HCS, I believe it was, as the shim in front of containerd; that was some really interesting stuff they're doing, so they can plug in the applications they're building to run Linux containers on top of Windows, because containerd is so pluggable. And that's what I'm
trying to show you here with the API client: building tools around containerd is so accessible that it makes the container runtimes pluggable when you're building applications on top of containerd. As you can see there, it could be the Microsoft plugin, it could be cri-containerd, which is Kubernetes, or it could be Moby, which is the Docker Engine itself. There's containerd, then you've got runc down the bottom, and then all your storage, file system, and OS primitives below containerd. This allows other projects, as I spoke about, to sit on top of containerd. As you can see there, you've got Docker itself; SwarmKit, and as I mentioned before I was trying to get a demo of SwarmKit working without the Docker Engine, with the clustering up and running on top of containerd, and it is completely possible, just a proof-of-concept type of thing; and we've got Kubernetes. As we saw in the announcement yesterday, Docker is going to be supporting Kubernetes, and that will most likely be running on top of containerd. You've got all the others, like Microsoft's ACS and OpenShift's new container engine. This is what people are building today on top of containerd, and if you have a look at everything there, that's a lot of software being built on top of containerd. So if you think it's weird that we were doing R&D on what we could do with containerd, it's because there are so many other frameworks built on top of it that we really wanted to see what we could do. The other great thing about it, as you can see: it is still in beta at the moment, but it mostly supports multiple OSes, which makes it powerful when you're shipping applications to other people, because they might have a preference of OS, whether that's Linux or Windows; as I said, there was a really good talk earlier in this track about Linux containers on top of Windows. So, as for what containerd is mostly used in, we're going to talk about
Moby today and what Moby actually is, then LinuxKit, which we'll talk about separately because we're going to delve into it, and last of all we're going to talk about Kubernetes. How many people actually know what the Moby project is, and, when it dropped last year, what it meant for the Docker Engine? No? Not too many, just a few. The Moby project is all the pluggable parts it takes to make the Docker Engine, and containerd would definitely be a main part of that. There's the Docker Engine itself, you've got SwarmKit, HyperKit, which is the virtualization platform for macOS that runs Docker for Mac, you've got LinuxKit, which we'll talk about later, and you've got runc; all these parts make up the Docker Engine. Going forward it might even be the kubelet service inside the Docker Engine for Docker EE that will be part of the Moby project, because it would be shipped as part of the Docker Engine. So Moby is a whole heap of tools that mostly make up the Docker Engine, but you can take bits and pieces of the Moby project and start to build tooling on top of them, because it's fully plug and play. As I spoke about, SwarmKit runs on top of containerd, HyperKit runs Docker for Mac; all this stuff is plug and play. That's what the Moby project is. And after seeing the announcement of Kubernetes and Swarm coming together, you can see why the Moby project and the Docker Engine got broken up into pluggable bits: it now gives people the choice of whatever runtime they want and whatever orchestration method they want. And it's all broken up really nicely, not only for building software, but from a contributing point of view it makes things really simple, because all the services are clearly broken up into their own little pieces of the world, and then they come together to build Docker, mostly, or now it could be Kubernetes. So how does containerd interact with Moby? This is
containerd 1.0; I'm using 1.0 beta 2. Basically you can see it's the lowest level above the OS, and you can see there's the API, and then you've got things like the Docker CLI, SwarmKit, the storage system, the distribution system, and the networking. That networking could be libnetwork, which Docker uses, or it could be the CNI plugin for Kubernetes; then there's DataKit, InfraKit, any of those sorts of things. You can see down there that containerd is talking to the host storage, the host distribution, and the network interface management once the network namespace has been created; and you've also got the supervisor and the executor, and then the runtime for the container itself. So I hope that shows the Moby project as a whole. After yesterday's announcement you can start to swap things out in there, so SwarmKit might be swapped for the kubelet service or the Kubernetes framework, but it still gives you the end-to-end container runtime and orchestration layer.

So how is containerd different from Docker? You've seen me do a run and a pull, so what is actually different? Docker has a whole heap of stuff on top; there is value-add in Docker that containerd doesn't have. Things like Compose and build all sit on top of containerd; containerd is not worried about them at all. containerd needs to ingest an image that's already built; it won't do the build for you. The access controls, around trust and things like that, are built into the Docker Engine; the Docker REST APIs, and the clustering that goes into running on multiple nodes, all sit above containerd. So as you can see, there's a lot of difference between Docker itself and containerd, even though I'm using the CLI for containerd in this particular talk. So we're just going to do a demo of this. Basically, what I'm going to show now is that I've got Docker running right
now, but if I type docker, it's not found; okay, but it's actually running in a container. So if I just use this command, I'm just going to see if I can hit it... and there you go: docker ps. So Docker is running. Where is it running? Docker is a service running inside a container on top of containerd right now; containerd is actually supervising a container that's running a full version of Docker. So what we're actually shipping is a cloud image that has Docker running. This is very similar to the way Docker for Mac and some of the other tools are set up, but it's a fully functional version of Docker. We'll just explore this a little bit more: we can run docker version, and as you can see it's the latest version of Docker CE. But we might want to go further and actually run an application, just to prove it's working, so we're just going to run Redis again. Now you'll see that the pull is actually different: it's using Docker, because Docker is segregated into a different namespace from containerd. containerd has created the namespace on the underlying OS, and as you can see there, Redis is running, and it's actually running in the foreground, so if we do that you can see it. This starts to show you, and it's a bit of Inception, that containerd can create a namespace and then you can run an application inside of it. The fact that I'm running Docker is just something easy for you to see, because everyone here knows Docker. So containerd is running a version of Redis, and then on top of that I'm running another container that's running Docker, and inside that container it's running Redis in a container. The reason you can do this is that it is all just processes in namespaces. There you can see the containerd daemon, you can see the stuff that's running, you can see the process of the Redis server, and you can see that it's
running as not root, which is good, because we don't want it to run as root. Okay, cool. So that was kind of a weird demo, but it shows the sort of Inception of what you can actually do inside a namespace with containerd; instead of running Docker, that could be your business application. But we'll go a little bit further now and look at LinuxKit, and I really wanted to go a bit further on this with LinuxKit. Well, that looks horrible, but it's okay, I'll talk through it; sorry about the formatting. Why would you actually use LinuxKit? LinuxKit is a toolkit that allows you to build your own OS and ship it, and it allows you to create immutable infrastructure. It allows you to use things like containerd to build things. Every single thing is a service, and every single service is a container, even the kernel, so it's a fully pluggable OS. What that means from a security point of view, if you're shipping software to other people, is that you now have a very small attack surface; the attack surface would probably be down to just the application that you're exposing, because you're not running any other services in the OS that you don't need. It's a lean, minimal-size OS, it boots almost instantly, it's running the stable 4.9 kernel with newer kernels available as experimental, and it allows you to run any container runtime. So LinuxKit lets you create an OS that runs anything you want; it's fully plug and play, and we'll go through how to build it in a second. Batteries are included but can be replaced: what that means is that the people on the LinuxKit project have a whole heap of containers pre-built for you, for an init system, for containerd, for the kernel. You don't have to use them; you can build your own, or you can take the ones that are already built. As I said, the
formatting is bad, but: all system services are containers. So how is this different to a traditional OS, and why would you choose LinuxKit? If you're shipping software down to a consumer... one thing we found when we were doing this: we were going to ship a cloud image, we were going to use Packer, and we were going to ship it on, say, Red Hat or Debian, it doesn't really make a difference, and it was going to run PE. It seems like a fairly simple process, but it takes a couple of hours, because you've got to use Packer, you've got to harden the OS, you've got to patch it, you've got to do all these operational tasks for something where you only really want this much of the OS, but you have to take the rest of it. And you just don't want to ship vulnerabilities to your customers, because that's horrible and not a good look. So we spent more time securing the parts of the OS we didn't need than on what we actually needed to ship. So one of the things we wanted to do as part of our R&D was decrease the attack surface and make it immutable. For example, if you wanted to ship the Puppet server, you might want to make that version immutable: lock it down with something like seccomp, make sure that process is the only one running, make sure it's fully immutable, and then create a data drive where the customer's Puppet data sits. That could be a perfectly good solution. So immutability was something we were looking at: is truly immutable infrastructure actually something you can do? With LinuxKit, yes. Sandboxed system services are awesome, because even if there is a vulnerability in one of the services, it's sandboxed: you've got seccomp preventing containers from talking to other containers, and you've got kernel-level protection, so all the way down to the lowest level of the Linux kernel you can protect services from each other, or from themselves. That's
something that's very hard to do in a traditional OS. Specialized patches and configurations: if there's something special that you want, build it and put it in there, and it will let you have it in there; you have full control over the build. Have you ever thought about deploying your own OS through a CD process that ships your software, and being able to ship it to, say, Azure and AWS at the same time with no different code? You can do it with LinuxKit, and all the configuration is YAML, which makes it super simple. So how is containerd integrated? This is the init system for LinuxKit. As you can see, there's an init system, and you'll notice the containers are trusted by Docker's upstream public Notary service, so you know there's no chance of a man-in-the-middle attack on these particular containers that you're getting. As you can see there, there's an init system, containerd, runc, and CA certificates there on boot, which lets it boot almost instantly, and that is the OS; that is all you need, which is amazing. Then you might want to add your own container to start at runtime, and then it's an immutable entity: an OS that's only running your application, which is super powerful. Every containerd demo we've done so far today has been running on LinuxKit; everything I've run right now has been running on top of LinuxKit. So I just want to show you a demo at this point of how quick the build is. Okay, cool. You just use moby build. While it's building, we'll have a look at the YAML file. As you can see there, I've got the Linux kernel 4.9.x, which is fairly up to date; I know we're further along now, but for the moment 4.9 is the stable one. While we're watching that build, this is basically the whole image of what I'm using to run containerd. That's it. I've got DHCP and a few other things
running, but that's it; that's the whole OS. We just looked at the YAML file and built the whole OS from it, and I'll show you how quick it is to boot, sorry. Now I just run linuxkit run docker, and that's it. In that short space of time, maybe a couple of minutes, I just built my own OS and got it up and running. That's pretty amazing, I think, and it really shows the power of minimal; sometimes simple is better. Jerome actually said something really smart in the workshop we did together on Swarm. He said Raft is not better than the consensus algorithm in ZooKeeper, it's just simpler; there's just less, and less means fewer bugs, and simpler to understand means it's easier to get working. I think that's what LinuxKit is: it's just simple to get working. As you can see here, I can go back and I've got my ctr commands, they're all there. I won't have any containers at the moment, because we rebooted, so I've just got the standard ones, but I have got Docker installed, because I put it in the init system. So we can just type docker ps, and there it is. Automatically, when it booted, it brought up the version of Docker that's running inside the container. So straight away we built an OS with containerd on top of it, with Docker automatically started as soon as it came up, and it all booted in just a couple of minutes. That could be your custom-built application on a fully immutable piece of infrastructure, and LinuxKit lets you build that for any cloud; I believe OpenStack is supported. I'm running it on HyperKit, the edge version of HyperKit for Mac, but you can use any of the Linux virtualization platforms as well. That just shows how quick and easy it is, and you could take just an OCI-spec image of the application you've already built, put it into LinuxKit, and have a fully immutable version of that. So even if you're
running something like nginx, and there's some sort of vulnerability for it and someone breaks into the nginx container, they're not going to be able to break out and do anything else, and you can just re-ship, as part of your CI/CD process, a new version of not only the OS but the image of your application itself. So there's no operational overhead of patching, none of those mundane SRE tasks that used to be there. And going back to the original part of the talk, this is actually all built on top of containerd. We've got multiple layers of what's happening here, but LinuxKit really shows the power of containerd, because basically this OS is just running a kernel, a few init services, DHCP to get some networking, and then containerd. As we saw before, we were able to pull Redis and run two versions of it: one on top of containerd natively, and one on top of a Docker Engine running inside a container on top of containerd. It's kind of Inception, yes, but take these proofs of concept and put them into the lifecycle of your application. Think about how you could put a version of an API, perhaps, inside LinuxKit and ship it straight as an AWS AMI; it's totally deployable straight away. It's still using containers; it's just taking the operational overhead of deployment out at the end. It's a different way to look at the world: instead of running an OS and then running containers on top of it, you bake it into the build process and then ship. You're still using the same tools, just a different workflow. And if you're in a highly secured or highly regulated environment, it's definitely something that's worth looking at; you can fully pen-test it and do all your security checks, whatever you need, before you ship it. If you have a look at how
we used to do that in the old-school way, we'd have to deploy the application into some sort of pen-test environment that replicated the production environment, and then start pen-testing there. Now you can do the pen-testing, and all the security compliance work you need, as part of the CI/CD process in the build phase; as soon as it's built you can do all your security checks, which is a completely different mindset from the way we used to do things. It's magic. All the guys at Puppet know that I use this meme way too much.

On to Docker and Kubernetes. It was really good that yesterday's announcement happened, because it made my talk so much more relevant; of course, that was the best thing to come out of the whole thing, my talk becoming more relevant. I think running Kubernetes on top of Docker is pretty awesome, actually. I have spent the last four months or so at Puppet working on one-hundred-percent Kubernetes stuff, and I was totally a Swarm guy; I used to go and do talks and call out the complexity and so on. I still think Kubernetes is complex, and I still think Swarm is the better use case for some people, but I actually appreciate Kubernetes a lot more now: I understand the complexity a lot better and I understand what they're trying to achieve, so I have a lot of respect for it. So yesterday's news, I thought, was really good. But how does containerd plug in with Kubernetes? There's probably a whole other talk that could be done on the translation API that Docker built, and all the hard work they've done to make Compose work with Kubernetes; when I was shown the tech preview of that I was absolutely blown away, there's some really, really good engineering being done there. And perhaps after this Gordon might have something to say about it; if you go to the Black Belt track after this, I think Gordon might talk about it a little bit more.
I've seen him running around the hall; he was talking about Kubernetes. But this is how containerd interacts: you've got the CRI, you've got the kubelet, which is the service that looks after Kubernetes on the node, you've got the CRI protobuf, you've got the CRI shim, and the container runtime is containerd. That's how the Kubernetes ecosystem bootstraps onto containerd, and why containerd is so important: if you have a look, it's most likely going to be the core foundation of everything in containers, and if you were building a house, you'd want a good core foundation, because as soon as something gets rocky, it's the bit you can rely on the most. That's why containerd is probably the most important part of the container ecosystem, and I think it doesn't get enough love; the people who work on it should get a bigger shout-out, because they're doing some really, really good work. If you want to follow the Kubernetes work, this is the project in the Kubernetes incubator, cri-containerd. If you've got any interest in how Kubernetes is going to run on top of containerd, there are also the SIG Node groups, with meetings and things for the Kubernetes project, or you can just follow it on GitHub if you're interested in what's happening. And as I've spoken about, yesterday was some really big news, and I'm really happy. If you were following the open-source stuff, following containerd and all the bits that made up Moby, it probably wasn't much of a surprise to you, because you would have seen a lot of Docker people in the Kubernetes ecosystem, and vice versa, working together on a lot of the lower-level primitives to get to yesterday's announcement. But it is really exciting for everyone in the container ecosystem, because we can become one ecosystem, and a lot of smart people are going to be combined from the Kubernetes ecosystem and the Docker ecosystem. So it's a super
exciting time to be part of containers in general, and I'm just on time for questions.

Thank you so much, Scott. If you have any questions, please step forward to the mic over there, or if you're in front, raise your hand so I can bring you the mic.

Thanks for the talk. I was just wondering if you could give us some examples of containerd competitors. I know containerd has been picked up by Docker and Kubernetes, so that makes up maybe 99 percent of use cases, but what about the others?

So there's rkt, which has its own runtime and has been donated to the CNCF, and I think there's CRI-O, a container runtime that the Kubernetes community is also working on, so there's an alternative there too. They'd be the two other ones besides containerd that I know of off the top of my head, but they definitely don't have as much community support as containerd does at the moment.

More questions? Well, Scott, I'm sure you will be around here for another 15 minutes or so if somebody has a late question.

Thank you. You alluded to a sort of Venn diagram of use cases for the orchestrators; could you elaborate just a bit more?

Sorry, for containerd and the orchestrators? Right, the use cases. Okay. I'd say Kubernetes is hard to deploy and it's also hard to maintain. One of the things I found with Kubernetes, especially building a tool for Puppet to deploy it, which we released last week at PuppetConf, is that some things like logging are not easy. It's not straightforward to find out what's gone wrong, especially with things like an SSL mismatch: it will just say it can't list the nodes, and it's not straightforward to know that you've got an SSL mismatch between the kubelet and the kube-apiserver, perhaps. So I've said to people: if you're going to that point and you're only running ten nodes and you've only got eight containers, do you want to take on this complexity for that workload? Wouldn't it be better to start with something like Swarm, which is one command to run, where all the
raft and everything's looked after out of the box, and you can actually work out how to schedule your applications and how the clustering works while you're starting out small. That's the sort of use case: when I go out and speak to customers, that's what I hear. They've heard they need Kubernetes, but they're really starting small, and at that point it's not that they can't start with Kubernetes, it's more that they're probably too early on their container journey to even be looking at that complexity. They probably need to work out how to get their application working in HA, they have to work out things like ingress routing, all this stuff they haven't got figured out for their own application yet. They're more worried about the container orchestrator, because that's kind of cool to be talking about at the moment. And then the conversation I'd be having with them is: let's look at your application, let's get this running on something like Swarm. It just works; you might not have all the functionality, but it's just rock-solid. Let's look at how your application runs on that platform, how you do things like layer seven routing, all these sorts of security controls around containers. Do you want to do anything like seccomp and things like that? All those bits they've forgotten about; they're just looking at the orchestrator. So I'll just go back to them and talk about the stuff that they should be worried about, which they've got more control over, and then say, as you grow, grow with the orchestrator, and if you want to grow and go to Kubernetes, then do it, but learn about your apps first. Yeah, that's more of the conversation I have. It's more that they're early on and they don't understand, for example, containers. If I'm going to put something out there on the Internet, it would be much better to orchestrate that on an easier orchestrator and start learning about the security processes that you need than worrying
about running a more complex orchestrator and then having it be unsecured. So those are the sorts of conversations that I have where I think that might be a better use case, and, as I said, as they grow they might want to go to Kubernetes, and there's nothing stopping them, but there are other things that they should be worried about before just using something because it's a buzzword or it's cool. Because there is an overhead to running Kubernetes. I mean, once the Docker version is out, it might be completely different, because Docker has come forward and said it's supported, and they're going to look after all the security controls and stuff like that. Because one thing with Kubernetes deployments is that most people haven't thought about security first when they deploy it. It is getting better, but it's not as good as the built-in security for Swarm. So, yeah, that would be... Okay, that's it folks, time is over, sorry, but Scott will be around, I'm sure. Thank you. Give a big hand to Scott. [Applause]
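The seccomp control the speaker mentions can be sketched in shell. The `--security-opt seccomp=` flag is the real Docker CLI option, but the profile below is a hedged, illustrative sketch only: a real profile needs a much longer syscall allowlist (Docker's default profile allows several hundred syscalls), and the file path and syscall names here are assumptions for the example, not a production policy.

```shell
# Write a deliberately tiny deny-by-default seccomp profile.
# defaultAction SCMP_ACT_ERRNO rejects any syscall not explicitly allowed;
# the allowlist below is illustrative and far too small for a real container.
cat > /tmp/demo-seccomp.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "exit", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF

# Applying the profile to a container (commented out: needs a running
# Docker daemon, and this minimal allowlist would block most programs):
# docker run --rm --security-opt seccomp=/tmp/demo-seccomp.json alpine echo ok
```

Starting from Docker's published default profile and trimming it down is usually a safer workflow than building an allowlist up from nothing, which is part of the "learn your app first" point made above.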
Info
Channel: Docker
Views: 4,351
Rating: 4.8709679 out of 5
Keywords: Edge
Id: fIRaPGxhsH0
Length: 40min 32sec (2432 seconds)
Published: Thu Nov 09 2017