How to run Kubernetes on your spare hardware at home, and save the world

Captions
All right everyone, this talk is going to be on how to run Kubernetes on your spare hardware at home and save the world, by Angus Lees. Now, I want to tell you a little bit about Angus. Angus has done all sorts of things with computers, much of it surprising, all of it revolving around Linux and free software. He was on call during Google's second biggest web search outage. ("Did you cause it?" "No, that was the malware. Everything got marked as malware. That was a fun weekend.") Gus has driven through the Great Sandy Desert to install Linux onto satellite routers on poles. He uploaded one of the first native apps, ScummVM, to the Android Market. His home was the first place in the world to receive a AAAA response from google.com. He was interviewed by Keanu Reeves and Laurence Fishburne in preparation for the first Matrix movie. And while working on OpenStack, Angus accidentally became one of the earlier upstream Kubernetes contributors, and he currently works full-time on Kubernetes and related tools as a senior developer with Bitnami. Gus is also a really nice guy, he's a personal friend of mine, and more often than not he is the smartest guy in the room. So can you put your hands together and give a warm welcome for Angus Lees.

Yeah, it's a bit of a funny story, mostly an accident. I was working for Google for quite a while, and then I got bored of that and wanted to do something else, so I went to Rackspace to work on OpenStack. Right at the time I came out of Google, I said, right, well, I know how to run programs on multiple machines: you do something like this. And I wanted to learn about OpenStack; from a distance OpenStack looks like a similar set of problems, it's a bunch of Python web servers that talk to MySQL databases, and I was like, well, I know how to run a bunch of programs on multiple machines, you do something like this. It just happened that at that time Kubernetes had only just sort of come out, it was just making a name for itself around the place, so I'd heard of this project. So I tried it out, and tried to get OpenStack deployed on Kubernetes, way before either project was ready for it. For me the goal was really to learn about OpenStack and its architecture, and on the side to learn a little bit about this Kubernetes project that I'd heard about. So I wrote a bunch of patches and did some things to help that work. And because I was working for Rackspace, I had access to the Rackspace public cloud cluster; that was the infrastructure I had to work with, and that's an OpenStack instance, so that was my cloud. So I was trying to run OpenStack on Kubernetes on OpenStack, way before that was a thing people talked about. I wrote a bunch of code in Kubernetes that knew how to talk to OpenStack, I wrote some flannel plugins, a whole bunch of things to try to make that work, and it didn't go anywhere. I learned what I wanted to learn, the whole thing didn't work, that was okay, and so I left it.

Then about a year later I was at something unrelated, talking to some people: "oh yeah, we've been using this code". And I started getting bug reports, and I'm going, what, people are using this? This never worked, what do you mean? And so then I had to fix it up. So for a long time I was responsible for the part of the Kubernetes codebase that knows how to talk to OpenStack. It's got a cloud provider plug-in sort of architecture, and it knows how to talk to Amazon and Google Compute Engine, and then
OpenStack was the third one, and there are a bunch of other ones now, a lot of them based on the OpenStack code module. Up until about the last twelve months I was pretty much solely responsible for that bit of code; I maintained that little bit of code. So it's just an accident of timing that I happened to be involved really quite early on in Kubernetes; it was released in version 0.5 in late 2014, and that was where that chunk of code got merged.

OK, this talk is really not actually a talk about Kubernetes. (Where's the next slide... I said next slide... there we go.) This is not really a talk about Kubernetes; it looks like it is, but really it's a talk about free software, and I'm really glad I'm following Karen's keynote this morning, because it's very much those ideas applied here. So I want to talk a little bit about the context, about what the subtext of this talk really is.

Once upon a time, computers looked like this, and we had a model where you had multiple users logging into one machine. You had mainframes, you had shell accounts, you had X Windows as an actual remote network protocol, all these sorts of things where you had multiple users, typically in a university setting, sharing a small number of large computers. This was really the Unix heyday, the classic era of Unix, and we had all of those brands we know about, you know, Digital Equipment and Solaris and all these sorts of things: big machines, lots of architectures, lots of Unix.

Then along came PCs. Hardware got cheap; it got to the point where everyone had their own computer, we got to the thousand or two thousand dollar price point, and so we went to this model where there was one person per machine: the personal computer. This is where Linux really came in, because at this point we were driving the price down and everyone could experiment with their own hardware, they could reinstall it themselves, all these sorts of things you couldn't do with a big shared computer. So this era of cheap, plentiful computers was really where Linux suddenly took off.

Then, as businesses kept wanting to do more with their computers, they wanted more reliability, more availability, it had to keep working all the time, they had bigger and bigger data sets, and the computers got bigger. We kept the PC model, but we just made the computers bigger and bigger; we kept adding more stuff to them, and as we wanted them more reliable we added more redundant stuff to them. This is a marketing brochure for a Lenovo server, nothing very interesting, but you can see it's got four hot-swap power supplies, and this is pretty typical: one can fail while the computer's still running, you can take it out, plug in a different power supply and keep going. Lots of power supplies, hot-swap fans up here, this thing on the left is all hot-swap, these things in the front are all hot-swap. So basically you've got a precious thing that has to keep running, and all around it we have this growing redundant infrastructure, so that you can hot-swap all the things, plug them in and out, and keep the core, the soul, of the computer still running. (This is a Lenovo System x3850 X6, whatever that is; I made a note of it when I copied the photo.) So we're hot-swapping
all the things, we're adding more redundancy at the hardware level. This worked okay, but you've still got a problem: this still might fail. It doesn't matter how many things you put in there, you still have to deal with the case where this breaks. There's still going to be a fire: if one of these catches fire and fills the chassis with smoke, it's not going to come back. If it's in a rack and the whole rack loses power, it doesn't matter how many redundant power supplies you had at this level. If it's in a building and the building has a fire, you've still lost that whole set of computing equipment. So you still have to deal with the case where it fails; it's just a numbers game, it just fails less often. And also, it doesn't matter how big you make this computer, at some point you're going to have a data set that doesn't fit in one computer, so you still have to deal with sharding your data, spreading data across multiple machines. So as companies got bigger and bigger, and the demands got bigger and bigger, more data, more availability, this approach just became infeasible. Lots of companies tried really hard to sell you a device that would meet whatever your needs were, but it just became impossible to do with a single device.

So, in the 2000 to 2005 kind of era (we used to call it grid computing a long time ago, remember that term coming out) we had loosely coupled machines. They weren't the Cray supercomputers; they were loosely coupled machines that could fail separately without affecting each other. They weren't doing shared memory, they were doing, what do they call it, MPI, that thing we used to use for distributed computing across machines; they were much more loosely coupled, much more independent from each other, and this allowed us to scale. We had to deal with these problems anyway: large data, scalability, and hardware failure. And once you've written software to solve those problems, you no longer need really redundant, expensive hardware. So we kind of flipped: we went from putting more things into this precious box to doing the opposite. The boxes are now not important at all, and we've got clever software. Google was a big player in this, Google really took this approach and ran with it, but so did other people.

Part of this was that we developed a whole set of new building blocks. We used to use the ideas on the left: libraries, in-process linking. Now we were talking over the network: we now had microservices, we had remote procedure calls. Now when we talk about APIs, I think it's funny, people just assume that means a remote API. And these new building blocks, particularly the key-value stores and object stores, became really important. Traditionally you would have used something featureful like a SQL database, an Oracle database or MySQL, with ad hoc queries, and that idea of transactions and a table schema; they're very attractive features, but they're very hard to do at scale. So instead, when we go to big-scale computing, lots and lots of machines, we break that into two separate camps. One has coherence and atomicity, and that's the key-value stores: I can put a value in and I can get it out again, and that's about the only feature it offers. I can't do a search, I can't do an index; I put a thing in with a key, and I get the value back
again later on. But it's reliable, it's spread across multiple machines, it uses Paxos or Raft, and it's very good at putting a value in, getting a value out, and making sure you don't lose that data anywhere along the way. At the other end of the scale we have object stores: you know, S3, the big Amazon one everyone knows about, but lots of others. They are dumb and simple and scale to ridiculous sizes, because all they do is: I store a thing, I'll probably get it out again later on, probably not straight away, because there's lots of caching and stuff going on. Very, very relaxed consistency guarantees; it's really all just about scale. You couldn't do this with things like NFS before; NFS and traditional file systems generally have these ideas where I can write to a file, close the file, immediately open the file and expect to see the thing I just wrote, which is quite a reasonable statement to make, but at a certain scale that read-after-write becomes really hard to guarantee. So object stores are very relaxed; you don't necessarily get that.

These new building blocks allow us to scale really large, even to the point where whole administration practices changed. We no longer upgrade in place: I don't SSH in and do a package upgrade, instead I replace the whole machine image, because the image is an absolute statement of what should be on the disk, and my machine never deviates from that; it's always exactly what's running on that server. If I want to upgrade it, I effectively delete it and replace it with a whole other version, so I always know what's on there. If I've got a thousand machines in my data center and I'm using this approach, I can say every single one of those machines is running pretty much bit-for-bit that image. Whereas if I'm using the older, smaller-scale version, it's easier to get deviation: if a machine was offline at the time I upgraded all the others, and then it comes back, and then I do a follow-on upgrade, I no longer quite know what's running on there, some slightly different combination of upgraded packages. So whole administration practices also evolved with this. This is a big industry change.

And of course, now we have data centers that look like this; this is a typical Google picture of the Google data centers, lots and lots and lots of machines. The machines themselves are so unimportant that these don't even have cases on them, and they certainly don't have redundant power supplies. If you're the Google data center, you don't even turn on journaling on your file system, right? Because if you've got a journal, you're just doubling your writes, and if the data is only on one machine, it's not important, so who cares about doubling my writes to save one copy of the data? That's not where my redundancy is; my redundancy is at the object store level, where it's already spread across multiple machines. I upset people a lot by saying, what, can we turn this server off? Yes, of course, just turn it off. But what if it's important? The point is, no one server is.

So we've kind of gone back to this model: we now have one corporate user who's using multiple machines. We've gone through this whole progression, from multiple users sharing one machine, to one user per machine, and now the inverse version, where we have multiple cheap computers, PC-class computers, because they're the cheapest you can
buy, and we have effectively one corporate user, although there might be something like Google or Amazon that then on-sells that. So this is the evolution, this is the background of what's happened.

Now, the software we write at home, the free software, the scratch-your-own-itch software, kind of got stuck back in that Linux PC era. If you look at most of the things that are packaged in, you know, a Debian distribution, most of that software is still written using those old building blocks. It's still reading and writing local files, it still has a setup process that involves running multiple steps sort of interactively. We have all those building blocks on the right-hand side in the free software world, but they're not used very much; they're used by the free software that we write at work. So when I go to work every day, I spend most of my time working on Kubernetes, and I'm working with key-value stores, I'm working with object stores; that's the architecture of the software I work on in my eight hours of work free software. And when I go home, in my eight hours (I wish) of home free software, I'm writing traditional, like, real 1970s-architecture software.

And, I don't know, I feel like this is getting worse. It's a success, in that we have lots of wonderful free software that we can work on at work now, but it's different, and it's diverging more and more, I feel, from the classic home free software. And this has a lot of effects. I had trouble putting words around this, so I'm really happy I'm following the keynote this morning, Karen Sandler's keynote, because she did a much better job articulating what I was trying to say here. There's a bunch of flow-on secondary effects from this which I feel in the long term are a problem. It's partly a success problem: it's wonderful that we can go to work and work on open source software at work, but the fact that we've now got two camps of open source software, and the one that has most of the resources behind it, being at hours of the work day, is going off in a direction that's a little bit different, means the features that we're getting as a result are different.

So we're seeing more Apache rather than GPL, for example, because companies tend to be nervous of the GPL and tend to be pro-Apache, so now a lot of projects are Apache-based, and there's a decreasing amount, I feel, of GPL-based software now. And you see it in federated protocols: one of the last big successful federated protocols was probably git, and it's done pretty well, but there haven't been a lot of others. We don't have a good chat network anymore; these sorts of things are now becoming centralized services. And there are a number of other problems that free software has typically been good at solving but that are very hard to put a business case around, very hard to turn into money. When you ask an open-source-friendly company to choose what open source projects it's going to work on, they're slightly different projects to what you would work on if you were at home without a profit motive. The two that I find particularly in the Kubernetes world, the Docker world: multi-arch support exists in Docker, but it's rough, and if you compare that to something like Debian, there's wonderful support for multiple
architectures, huge infrastructure, strong support from the ground up within the Debian community for other architectures being important, even though you can't make a business case around that; your business case is always going to be basically Intel, and maybe ARM, and then nothing else. So there's a bunch of those extra parts around the workflow which, if this was home free software, would be being worked on, but they're real weaknesses in a lot of these new free software projects, because they're corporate driven and they have a different set of priorities.

So that's kind of where we are. This slide is a gross oversimplification, but it's trying to capture what I'm thinking about, where we're up to with this. We have a bunch of challenges in the software space; for a variety of reasons, that work free software architecture, that new world, that cloud-native architecture, never quite caught on at home, and so we have these two diverging camps of free software, and this leads to a bunch of secondary, kind of difficult-to-put-your-finger-on, issues. So this is the historical context, and I look at this and go, argh, it'd be nice if there was some way to unify this and bring it all back together. I remember having a discussion with Josh Hesketh quite a few LCAs ago about maybe coming up with a new distribution that was all based around "as a service": we'd be providing free-software-based services over the Internet, but we'd run it like a distribution, a community-driven sort of project, trying to come up with a way to tackle this problem of diverging camps. I didn't have good answers. All right, so I want to park this topic, this part of the discussion, and now (if I can get the next slide) we're going to talk about Kubernetes. I want to park that, and I'm going to come back to it.

All right: Kubernetes. Kubernetes is a project that came out of Google, originally, as a free software project; it's now owned by an organization known as the CNCF. It's their flagship project, although they have a few others as well under that banner, Prometheus and a lot of things. The name Kubernetes is Greek; it's got something to do with pilot or captain, it's that sort of word. It comes into Latin, giving us some quaint words like "gubernatorial", if you follow the U.S.
elections, and "governor" is the more normal English word; and also, apparently, Kubernetes comes through French into "cybernetic", so it's actually quite a good word for a computer project. It's often abbreviated to k8s, because it's annoying to type.

I like to think of it as Unix process as a service. You give it the things you want to run, and it runs them. You give it effectively a Unix process: you say, here's the executable I want to run, here's the command-line arguments, here's the environment, and it takes care of where and how that's actually going to run. People get all excited about Docker and containers and what does that mean, and I have to sit them down and explain: it's just a process. It's just a forked child process, just like we're used to. When you run something under systemd, what's running on the host at the end of it is pretty much identical to what happens when you run it through Docker. There's nothing magic, it's not different, it's not VM layers, it's not more or less stable or more or less reliable. It is just a forked process that, instead of just a chroot, has done a namespace chroot-equivalent; it's really old-school Unix.

Something that Kubernetes brought in is the idea of pods. When it runs a container, a Docker image or one of the Docker equivalents, it doesn't run it in isolation; it can run a group of them, called a pod, and Kubernetes guarantees that those containers always run together, always run on the same host, and are started and destroyed together. Because they're always running on the same host, they can share filesystem: you can mount a volume into any one of the containers in that pod. And the way Kubernetes sets them up, they're all running in the same network namespace, so they can talk to each other via localhost; they share the same idea of what localhost means. A typical example would be logging: you've got your main Apache server, for example, and then in another container you might run a log fetcher or something, a process that would pull the logs from a shared volume and then push them off to your log-saving infrastructure. (And yeah, I've got a nautical theme; all the logos and project names are nautical now.)

One of the things I like about Kubernetes is that it's conceptually very simple. I have trouble thinking about complicated things, so I try very hard to keep things simple, and I like projects that I can understand, that can fit in my head. So here's the general idea. We have one API server, or several replicas, but this is basically a web server: it takes in REST HTTP requests and responses and translates them into etcd operations. You send a PUT, "I want to create an object here", and it turns it into an etcd store operation; later on you come along and do an HTTP GET, and it turns into an etcd get operation. Real simple. And then, asynchronously, you have a whole bunch of controllers, and these controllers are just loops. They're big loops that go to the API server and say: okay, what's the latest version of this particular object? I'm going to look at all of the objects of this particular type, and then I'm going to try to make the world closer to what's described in the object. So I might say I want to run a service of this particular name, and somewhere in here is the service controller that looks at that and tries to make that happen, and it's always just looping, trying to make that happen.
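To make the pod idea described a moment ago concrete, here's a minimal sketch of that Apache-plus-log-fetcher pattern. The names, images and paths are illustrative, not from the talk; the point is two containers that share a volume and the same network namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-shipper    # hypothetical name
spec:
  volumes:
  - name: logs
    emptyDir: {}                # shared scratch volume, lives as long as the pod
  containers:
  - name: apache
    image: httpd:2.4
    volumeMounts:
    - name: logs
      mountPath: /usr/local/apache2/logs   # Apache writes its logs here
  - name: log-shipper
    image: busybox:1.28
    # stand-in for a real log shipper; both containers see the same files,
    # and they could equally talk to each other over localhost
    command: ["sh", "-c", "tail -F /logs/access_log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
```

Both containers are scheduled, started and destroyed as one unit, which is exactly the guarantee the talk describes.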
So it's very fault tolerant. If this doesn't run right now, or it's crashed, or there's an error, then you can fix it, or roll out a new version, and it'll just pick up where the last one left off: it'll just look at the latest version and go, right, that's what I should be doing, and try to make the world like that. I've highlighted a few particular controllers here: the controller manager and scheduler are the important ones, the ones that are needed to run all the other controllers. And then on each host you run a kubelet, which is basically an agent that talks to the local Docker daemon, or containerd, or CRI-O server, or rkt, or whatever technology you're using, and tells it what to run. But from an architectural point of view it's just like the other controllers: it's just looking for a particular type of record that the API server is storing in etcd, and making that happen; in its case it's the "this pod should run on this host" record, and it just makes that happen.

etcd, if you're not familiar with it, is a key-value store, very much like Consul or ZooKeeper; they're the three big free software key-value stores. It uses Raft, which is an evolution of, a rethink of, the Paxos algorithm, but otherwise conceptually very similar, which just means you can run it on multiple hosts: you store something in there, it stores it on all the hosts reliably, it waits until more than half of the hosts have acknowledged that write, and then it tells you, yes, this is now safely stored. Typically you run etcd with three or five nodes, and as long as a write goes to more than half, you know it's safe within that etcd cluster. Very simple, very reasonable: one state store. If you've looked at something like OpenStack, you'll appreciate just how simple this is: OpenStack has a dozen or so API server equivalents and a dozen or so different places where data is stored; they might all be on one MySQL server, but they're all treated as separate tables and separate stores. And these other controllers, interestingly, are able to run in Kubernetes themselves, so there are lots of value-add controllers, even important ones like the internal DNS server, which actually run as regular Kubernetes jobs, which makes them very easy to manage.

It also has a very simple network requirement: it only wants you to be able to forward layer 3, so IPv4 or IPv6 packets, from a pod running somewhere on this machine to a pod running somewhere on another machine. It doesn't care how that happens, and there are lots of different ways of forwarding packets. The simplest, of course, is just a kernel route-forwarding rule, but you might have tunnels, you might have fancy layer-2-segmented something-or-others with VLANs, whatever. It doesn't matter; so long as you can forward a packet from here and have it go out and arrive over there somewhere, Kubernetes is happy. This is again quite different to OpenStack, which demands, you know, layer 2 and these sorts of things. Very simple, very easy to provide in lots of different ways. In fact, if you were a large data center, at Google scale, you would pick static routing, the simplest, dumbest approach. As part of your machine lifecycle, every time you put a machine in, you have a whole lot of processes that you follow to make sure it's in the right place in the rack and recorded in your hardware inventory; part of those processes would be "I allocate this pod subnet to that machine", and you would just make sure that your network fabric had static routing rules set up, so that whenever a packet was destined for this prefix, it was routed down through the fabric to that machine. Very, very simple.
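As a sketch of what that subnet allocation looks like on the Kubernetes side: each Node object can carry the pod subnet allocated to that machine in its spec.podCIDR field, and the network fabric just needs a matching static route. The node name and prefix here are made up for illustration:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: rack3-machine07     # hypothetical host from the inventory
spec:
  podCIDR: 10.244.7.0/24    # pod subnet allocated to this machine; the fabric
                            # carries a static route for 10.244.7.0/24
                            # pointing at this host
```

Any pod scheduled onto that machine gets an address out of that prefix, so plain layer-3 forwarding is all the fabric has to do.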
So this is good because, as I said, it's flexible at the host level. As I was saying, the individual machines are now not important; we expect them to fail, so we've inverted the stack from what it used to be in the days of redundant-everything servers. It used to be that the kernel was important, the host file system was important; now we've flipped it, and the host is the least important part. There's no reason you would ever do a backup of the host file system in a Kubernetes cluster; it's just not interesting. The only thing you care about is the application data, which is stored somewhere else, on a network storage device of some sort, or in some sort of redundant store across multiple nodes. The hardware itself? You can turn the machine off without even telling anyone, and Kubernetes will cope. If you need to reboot into a new kernel, that's easy: you just take the machine down, reboot it, bring it back up again. These are easy problems down here. If you want to replace the application, that's also okay, because the data is stored somewhere else. So the importance goes the other way now.

This is really, really nice for upgrades. I can do upgrades of pretty much any magnitude at the host level: drain the machine, do whatever I want to it, replace the file system, reboot it, put a new kernel on there, whatever, and then when I'm finished I undrain it and it goes back in the cluster. This makes my admin tasks stupendously simple: everything looks like a rolling upgrade across the cluster. It doesn't matter whether it's a new version of some unimportant package, or a new libc, or a whole new kernel; they all have the same cookie-cutter process.

And it's application focused: the container basically captures the application we want to run, so it's an executable, its environment variables, its command-line arguments, all those things that are application concerns. Now, getting a little bit more detailed, this is what the Kubernetes objects look like. They're in JSON, but YAML is shorter to put on the slide. They all have this apiVersion and kind at the top, which tells you what sort of object it is; in this case it's a Deployment object. This is a Deployment object for nginx: it's going to have two replicas, there's the image version I'm running down there, nginx 1.7.9, and it's going to listen on port 80. There's nothing really revolutionary there, but have a look at what I'm not describing: I haven't talked about what operating system I'm putting it on, I haven't talked about kernel versions, I haven't talked about where I want to run it. All of those host-level concerns are taken care of automatically by Kubernetes; this is really application level. There's the equivalent command on the right there, kubectl, so it looks like I can just write commands and that happens; but if I'm doing this as team-based admin, I'm going to do it using the left-hand approach, where I store it in git, GitOps workflows and all those wonderful things. Either approach is fine, though. There are a bunch of other useful commands you'll want to look into if you're doing this: describe shows you what's running, and logs tells you the logs of the container; these are useful debugging tools. And then there's run, which is good for just quickly testing things out.
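The slide itself isn't in the transcript, so this is a reconstruction of the deployment being described (nginx 1.7.9, two replicas, listening on port 80), written against the current apps/v1 API; the original slide would have used an older apiVersion:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                  # run two copies of the pod
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9     # the application-level concern: which image...
        ports:
        - containerPort: 80    # ...and which port it listens on
# Applied with:   kubectl apply -f nginx-deployment.yaml
# Inspected with: kubectl describe deployment nginx-deployment
#                 kubectl logs <pod-name>
```

Note that nothing in here names an operating system, a kernel version, or a host: exactly the point being made in the talk.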
So that's kind of what it looks like. There's a bunch of these objects that you create for different tasks. I'm not going to go through them all, because that's what the documentation is for, but that gives you a feel for what's out there, and there's a whole lot of other objects that Kubernetes knows about. And from that earlier slide, these are all just controllers: somewhere in Kubernetes there's a controller looking for that type of object and making that thing happen when it sees one.

Now, at the machine lifecycle level, hardware failure is not an emergency. It boggles my mind: we've conducted this experiment for decades now, and I've got news for you all, the conclusion is hardware fails. Yes, that's right. And I've got some bad news about Santa Claus too. What still surprises me is that, as software developers, we treat hardware failure as an exceptional situation, and it's not; it's normal. On your website, every time a query comes in and you serve back a successful 200 response, you don't page someone; that's not an emergency, that's normal. In the same way, if a server dies, you shouldn't page anyone, because that's normal. You need to have built a system where that becomes normal. And Kubernetes is one of the first big projects I've seen in the free software world that makes that possible. We had tools before, orchestration tools, Puppet, Chef, Ansible and other things in that class, but you ran them once and then they exited; they helped you set up, but then they stopped, and they didn't deal with the hardware-failure-is-normal case. Kubernetes, on the other hand, is always actively managing: because it knows what applications I'm running, it's always health checking them, it knows when they die and come back, it knows to reschedule them somewhere else. So you still have to replace the hardware eventually, but it's no longer an emergency. You don't have to get paged for it; you just turn up on Monday at 9:00 a.m., go "oh look, three machines died over the weekend, I've got a few to replace", and it's just part of your regular work now.
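The "always health checking" part is driven by probes you declare on the container. Here's a minimal sketch (the name, path and timings are illustrative, not from the talk): if the HTTP check keeps failing, the kubelet kills and restarts the container, and if the whole node dies, the pods get rescheduled elsewhere.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-web             # hypothetical example
spec:
  containers:
  - name: web
    image: nginx:1.7.9
    livenessProbe:
      httpGet:                 # the kubelet polls this endpoint...
        path: /
        port: 80
      initialDelaySeconds: 5   # ...starting 5s after the container starts,
      periodSeconds: 10        # every 10s; repeated failures trigger a restart
```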
It also separates the concerns, and this is a big deal in corporate environments: separate teams can now work very efficiently without stepping on each other's toes.

And this brings us back around to the home Kubernetes project. So, restoring that previous conversation about the diverging free software camps: I'd think about this problem, think about where we're up to, and wish I could see some way out of this progression. And then I looked at this. This is my little home file server machine. Pretty much everyone here, I'm guessing, has some sort of RAID setup at home; it used to be a high-powered enterprise feature, and now it's a very normal power-user feature. If you're a technology-savvy person with some slightly advanced infrastructure at home, you've probably got something like this, and it's so boring and simple and obvious. And... whoa, hang on, why don't we do the same with compute? The same things, the same reasons that drove companies to follow that model, also apply at home. We now have decent hardware that's cheap, we have the same issues of hardware failure, and elastic hardware: I want to add and remove machines easily, my hardware dies and needs replacing. All these same pressures also apply at home.

So my idea was: let's do this. Let's make a distro sort of model. We'll use that separation-of-concerns approach to get a project, a community, a distribution idea, that provides the underneath layer, up to the Kubernetes layer; and then from Kubernetes up, it's whatever I want to run at home. And this is actually quite achievable. But it's got to be automatic, and maintenance and upgrades have to be very simple, or else it's just not going to catch on.

So here we go. I've got a budget of about a hundred bucks; that's in the "too cheap to care too much about" range, and this necessarily means ARM-class computers. I need three machines for the etcd cluster, at least three, to be able to deal with a failure of any one of them, and when you multiply everything by three, suddenly you care a bit more about the price. If I got a NUC device or something, a cheap Intel device, that's one or two hundred dollars, times three; that's actually real money now. So I have to look at ARM-class hardware. I bought Banana Pis, because I started this a few years ago; nowadays I'd probably pick a Pine64 or something, but they're about thirty bucks each, and that's about the budget. And I assume at home you've got Ethernet. I also try to be very normal, because I don't want to put much work into it; I want to just draw on what the standard Kubernetes people are doing.

I also have high-school-aged children, which means I have to keep buying laptops at about the $500 price point for them, and they keep breaking them, so I have a growing supply of laptops with dead screens and missing F12 keys (apparently that was a thing: you'd steal each other's F12 key, whatever). They're all x86-class, none of them are great, and so I run CoreOS on those. I really like the CoreOS model: they've got the A/B upgrades and automatic upgrades, so it downloads a new version and just flips over to it automatically. And then my control plane, as I was saying, has to be ARM-class.
This photo didn't work for a wall because the LEDs are really bright and they confused my camera, so it's quite dark. This is literally a photo of my desk; you can see I put a lot of work into making it neat before taking the photo. There are my three Banana Pis that have my etcd cluster across them, and they're running those core Kubernetes jobs: the API servers, the scheduler and the controller manager. I discovered that CoreOS didn't work on ARM32; I had a go at porting it and decided that was too hard, so I wrote my own operating system instead, containos. It's built on OpenEmbedded; OpenEmbedded is amazing, I'd love to give a whole talk about that one day. It's smaller, a lot better, faster than CoreOS, and more portable. You're welcome to use it, it's cool.

And then there were a couple of things I had to adapt. If you're familiar with Kubernetes, there are a few core building blocks. One of them is a persistent volume: whenever you have data you actually care about, you store it in a persistent volume object, which is an abstraction for whatever remote network storage thing you have. If you're on Amazon, it turns into EBS volumes; if you're on Google, it turns into, what do they call them, persistent disks. In my case I've got my RAID NFS server thing, so I just turn them into NFS storage. So I create this object that tells Kubernetes: by the way, there's a storage class, which I called managed-nfs-storage (I should have picked a shorter name), and I'm making it the default, so I don't have to change my other bits of config; I don't have to say what it is, I just say I want a persistent volume, and it knows this is the default and it should just do that. And on the right (don't look too closely) is the deployment, an actual controller that runs, that looks for anything that wants NFS storage, and on the fly it creates a subdirectory, mounts it, attaches it, and my job uses it. Everything is wonderful, really nice, really simple, and not anything revolutionary.

One of the other building blocks is a Service. For this I chose keepalived; someone's written a thing called keepalived-vip, which takes the config file on the left there, in the form of a ConfigMap, and says: any time you see something coming into that address, you should forward it on to that service. Again, the point is, if you want to learn about these, go look up the names; you'll find more documentation than I'm going to try to put on a slide about these projects. And on the right there is a more interesting example of what you can do with a deployment: this one is running in the host network namespace, in a privileged context, a slightly more involved example of what you can describe.

An Ingress object is kind of a reverse proxy; it's the nginx abstraction. I use a combination of keepalived and nginx, and I've got Let's Encrypt running, so I get TLS certificates automatically. And I have two separate setups, one for if you're coming in from outside my network, so I can run webhooks and things. I have it set up so that I can say "hey Google, something something something", and the Google Assistant will talk to If This Then That, which will talk to something else, which sends a webhook to my thing, which comes into my home network via here, which then hits the nginx, via keepalived and Kubernetes, and then it turns my television on.
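A sketch of what that storage setup plausibly looks like as objects. The talk names the class managed-nfs-storage and makes it the default; the provisioner string here is a guess, and it has to match whatever name the NFS provisioner deployment actually registers:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    # makes this the cluster default, so claims don't need to name it
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.net/nfs   # hypothetical; must match the NFS provisioner
---
# Jobs then just ask for storage and get an NFS subdirectory on the fly:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data            # hypothetical claim
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
```

The keepalived-vip ConfigMap and the Ingress objects mentioned above play the equivalent role on the service side: small declarative objects, with a controller looping to make them real.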
And then I have a separate internal one for just simple stuff, like if I want to go to the Prometheus dashboard or something, where I don't want to have to worry about security and stuff.

So, last couple of slides. The good news is it all works. This is in production. Again, high-school-aged kids, my wife's a teacher, so "production" for me means printing: printing works, and has been running through this system for a couple of months now. I have a really nice install setup for the laptops: I run all of that in Kubernetes, I run a DHCP proxy and TFTP server as Kubernetes jobs, so I can just plug a laptop in, PXE boot it, and bam, I'm into a CoreOS install. And in a nice demonstration for me, the Meltdown upgrade rolled out across my Kubernetes nodes without me having to do a thing, which was really nice.

There's some stuff we need to work on still, mostly around security. The big thing with the Banana Pis is that the CPU couldn't keep up with the TLS encryption on the gRPC connections, so I had to turn off a significant chunk of that, which is actually a pretty big hole; there are ways we can go about fixing that.

So, I'm out of time; I don't have time to deal with questions, but I'm really happy to talk about this stuff, and there are a bunch of links to other things where you can find out more. I have all of my home network config up here; this is all of the Kubernetes config for all the jobs I'm running, and you can just go and look at it. It's in a tool called kubecfg, which is what I actually write in my day job; it's just a generator for the JSON objects, rather than having to write them out from scratch. Thank you very much.

OK, you have nine minutes and then our next talk will be in here, with Casey Schaufler, so thank you very much. (And I'll be wearing Hawaiian shirts all week, so find me during the week; I'm very happy to talk about this stuff, and if people want to know about next steps or anything like that, I'm very happy to talk.)
Info
Channel: LinuxConfAu 2018 - Sydney, Australia
Views: 6,741
Rating: 4.7192984 out of 5
Keywords: lca, lca2018, linux.conf.au, linux, foss, opensource, AngusLees
Id: O--bzZ9ker0
Length: 44min 41sec (2681 seconds)
Published: Wed Jan 24 2018