Kubernetes 101 - Episode 2 - Containers

Captions
Welcome, everybody, to Kubernetes 101, Episode 2. Today we're going to talk about containers, which are the building block of all Kubernetes clusters. We're going to talk about how to build a container, how to build a custom Go application and put it inside a container, and how to push that container up to a container registry. All things containers.

As with all of these live streams, this is a little more informal than my normal YouTube videos, so please feel free to say hi in the live chat and tell us where you're from. It's always fun to see where people are watching from; I found out last week there's somebody else watching from St. Louis, in Crestwood, which is only about eight or ten minutes away from me, so it's great to see there are other people doing tech in the Midwest of the US, "flyover country." If you saw my video from a few days ago on the Raspberry Pi webcam, I was wearing a hat that said Flyover Camp, from Kansas City. There's a joke, in the United States at least, that the middle of the country is flyover country, because people from the east coast fly over to the west coast and vice versa and never land in the middle. As those of us who live here know, that's not true: a lot of people live here, and there are even a few of us doing tech stuff in the middle of the US.

I want to say a couple of things about this streaming series, because it might not be obvious to everyone. My YouTube channel has exploded in popularity far beyond anything I ever imagined this year, which is awesome; thank you to everybody who subscribes. If you come into one of these videos, the first three or four minutes is usually just me babbling about things that aren't really teaching Kubernetes. As soon as possible afterwards (I have an extremely awesome production assistant who helps me do this quickly) we put chapter markers on these videos so you can skip the boring babbling and go straight to the part you want to see. On screen there's a little GIF of me looking dumb while hovering over the chapter markers to show how to get to those sections, and they're also in the description of the video. If you try this right now you'll just see the live playhead, because this is streaming live at the moment, but within a few hours after the video is posted and YouTube processes it, those chapter markers will be up, and they're really helpful for people who come along later. If you're watching this a year or two from now, it will probably all be irrelevant, because Kubernetes will have been completely replaced by some new technology; that's our industry. But if you're watching a month from now, it might still be relevant, and you might like those chapter markers.

Another quick note: Ansible for Kubernetes. I was only going to do this for the first couple of weeks, but I've decided to keep it going for the duration of the series: there's a coupon link in the description to get the book for $4.99.
This book focuses more on automating Kubernetes with Ansible, but a lot of the examples and tutorials in this streaming series come out of that book, because it doesn't matter whether you automate with Ansible or some other tool. The point of the book is that Kubernetes itself needs more automation wrapped around it if you want a truly infrastructure-as-code kind of lifestyle; I choose to use Ansible for a lot of that, but you could use other things as well.

You might also notice that today I'm wearing a purple shirt. It has a roll of toilet paper on it and says "Save toilet paper, cure Crohn's." Next week is Crohn's and Colitis Awareness Week, and in the United States we have an organization called the Crohn's and Colitis Foundation that helps patients and funds research for Crohn's and colitis, which are chronic diseases that I happen to have. I have Crohn's disease, I've had a few major surgeries, and if you've been following my YouTube channel, there have been a couple of periods this year where I've had to deal with Crohn's issues. I'm helping raise funds for an event next month called Spin for the Cure, and there's a link in the description if you'd like to support the Crohn's and Colitis Foundation through my page. If not, that's fine; I know there are so many different causes we can help right now. At the beginning of the pandemic I also asked people to please donate to local food pantries, which are all over the place (Feeding America and other organizations can help you find yours), and a lot of them are still trying to get fresh food. Luckily toilet paper is a little easier to get right now, but it's still better to save it, so donating to the Crohn's and Colitis Foundation might help put a dent in the demand for toilet paper. I won't get into the details of why that can help.

Anyway, let's get back to it. We're going to talk about Kubernetes and containers today, and it's important to understand the foundation of why Kubernetes uses containers. (I'm realizing now I probably shouldn't use the same background on the slides that I have behind me in this video, because it all blends together; maybe we'll change that next time.) Why does Kubernetes use containers? What's so great about containers that makes them a cloud native technology at the core of all this cloud native stuff?

I have to go back to a personal anecdote from a job I had in the early 2010s. We were a team of developers building a website and web application suite that worked together with search through a back-end system, and that search was having performance issues. We were trying to do things like geolocation, right at the time when web browsers started offering geolocation APIs, so we were integrating all this new stuff, and our tech stack was getting more complicated. It used to be just the LAMP stack; then we added Varnish on top for caching, proxying, edge-side includes, and other things, and Solr for the search, so it was faster and had better text indexing.
So we kept adding parts to our stack, and they became essential to the application. The problem was this: I was kind of the team expert on Varnish, so I'd work on the Varnish configuration, get Varnish running on my computer for quick testing, feedback, and debugging cycles, and get everything working. It was awesome, super fast, amazing. Then I'd hand it off so someone else could work on the search part on their machine, and there were problems, because the process for getting Varnish working on a different machine wasn't fully documented. Another problem was that I was on my Mac at the time, another developer was on a Windows laptop, another was on a Linux machine, and those machines all had different ways of installing the same software, or it was hard to get the same version of it.

This was, and in some places still is, a meme: "it works on my machine." You might still hit this, especially with someone who says, "I'm doing it this way, you're never going to get me to change, it works here, and if you can't get it working on your machine, that's your problem." We went down that path, and in our case (this was before Docker existed) we used virtual machines. Docker has a similar philosophy: it works on your machine, so instead of trying to centralize all the configuration, we'll just ship your machine. That's a bit of a joke, but in essence it's what we're doing, even if we're not literally shipping the whole machine; it is pretty much what we did with the VM-based approach on that project, where we had Solr, Varnish, the LAMP stack, and a couple of Java applications running in the background. With Docker it's a little different, and I'll get into those differences.

A note on terminology: when I say "Docker," sometimes I mean Docker the actual application or company, and other times I just mean containers; in my brain they're kind of synonymous. You could also substitute Podman for Docker in many cases, or CRI-O, and we'll get to that in a bit.

So, the main things that matter about containers. First, containers are made to be portable: you can build one in one place and move it to another environment altogether. I can build an image on my Mac, move it to a Linux machine running in the cloud, and somebody could pull it from there and run it on their Windows machine or on another Linux machine locally. Containers need to move from machine to machine and still have everything they need to run.

They're also isolated, and this is really the heart of what a container is, technologically: it uses kernel features in Linux, like control groups (cgroups) and namespaces, to isolate one container from another. We're not going to get super deep into the technical details of how a container runs, but the basic idea is that it uses Linux kernel features to build its own little island, with its own resource limits, resource usage, and permissions. Other containers can run alongside it without knowing anything about what's going on inside it, or you can explicitly link containers together. The point is that, by default, containers are isolated, and you have to explicitly say "I want this container to see that one" or "I want this permission granted."
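To make that a bit more concrete, here's a small sketch (the image and the limit values are arbitrary examples, not from the episode) of running a container with explicit resource limits, enforced via cgroups, and peeking at its namespaced view of the system:

```sh
# Run an nginx container with explicit memory and CPU limits (cgroups).
docker run -d --name isolated-demo --memory=256m --cpus=0.5 nginx:alpine

# From inside the container, the process list and hostname are namespaced:
# the container only sees its own processes, not the host's.
docker exec isolated-demo ps
docker exec isolated-demo hostname

# Clean up.
docker rm -f isolated-demo
```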
They're also consistent: whether you deploy a container in one place or another, or delete it and recreate it, it's the exact same thing every time, because of the way the container image is structured.

And finally, they're lightweight, which is another reason they're so great for cloud native computing, where we might have servers all over the place, multiple clusters, maybe some edge clusters in locations with low internet bandwidth: they don't need to ship a whole lot of stuff with them. That brings up the difference between containers and virtual machines. We might be familiar with both concepts, but the big difference is that a container is essentially the application itself plus its own dependencies, whereas a virtual machine includes an entire operating system. There are some containers that emulate a virtual machine and ship an entire operating system inside, but that's not the norm, and it's not what we want to target when we build containers.

The other thing these four principles enable is the idea of building something once and running it anywhere, with clear boundaries. This is something that forces you to become a better developer and think more about the architecture of your applications. When you deploy a container running your application, the application has to write to certain places, maybe a folder on the system or a temporary directory, and you need to know: does that directory have to be persistent? When I delete the container and bring it back up, does the application expect data to already be there the next time it starts? Does it expect certain configuration options to be present in the environment? If so, you have to declare them: "these are the configuration options my container needs." It makes you think a lot more about the boundaries between your application and the infrastructure, and sometimes people don't do that.

It also helps you clean up your application's entire deployment process. To use an example (raise your hand if you've ever used Magento), there are sprawling applications like Magento that write to five or six different directories just to run and cache things, that need a database, and that might have caching layers and search layers. That's not how I'd want to build a cloud native application, with lots of directories that have to be persistent, or that need high-performance reads and writes shared across different instances of the same application. Instead, you find ways to use caching mechanisms like Redis, or build caches (theme caches and so on) in your code so they ship with your application instead of being written at runtime.
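As a small illustration of declaring those boundaries explicitly (the image name, variables, and volume here are made up for the example, not from the episode), configuration comes in through the environment and persistent data goes to one declared volume, while everything else inside the container is throwaway:

```sh
# Hypothetical app: config via environment variables, persistent data in a
# single named volume, external cache reached over the network.
docker run -d \
  -e APP_ENV=production \
  -e CACHE_BACKEND=redis://redis:6379 \
  -v myapp-data:/var/lib/myapp \
  example/myapp:1.0
```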
So containers do require a little more diligence. If you've never used containers before, you're not going to pick them up in a day and start thriving; it takes some time to figure out how to containerize your applications. But containers aren't VMs. There are ways to run full VMs in Kubernetes clusters; there are technologies like KubeVirt, and I think VMware has tech that lets you run VMs inside a Kubernetes cluster using Kubernetes-native terminology, CRDs, and so on. But I'm not going to go deep into VMs inside Kubernetes in this series, partly because I don't have the time, and partly because for most people the reason to run a VM inside (or managed by) Kubernetes is to run legacy software. Most of us are not going to be migrating a huge cloud of 300 applications into Kubernetes; some of us are, and that's great, but there are other resources for learning about that, and it's not a 101-level topic. VMs can also be useful for high-security applications, where the security of the hypervisor and its certifications matter, but again, that's not a 101-level topic, and this is not a course about security in Kubernetes. We'll touch on security, but it's not a deep dive; if you're an infosec professional, there are better places to go for that.

The other thing we need to cover, before we actually build a container and use it in Kubernetes, is the history of containers, and it's complex. I put a note at the bottom of the slide that you might not be able to read; it says "yes, I'm leaving a lot out," and that's because there is a lot. Politics is messy, and the container ecosystem is full of politics. There are two reasons I'm leaving things out: one, I don't want to offend too many people if I can avoid touching political hot topics; and two, there's just so much that has happened since 2013, 2014, 2015, when container technology started forming into the container products we see and use today. So I'm going to skip over some of the earliest history and start with the thing most of us actually saw first, because there was marketing behind it: Docker.

The technology behind Docker actually started before 2013, but Docker was the first really popular container technology to come onto the market, and it was the inroad for a lot of us to start learning about containers. In the beginning, Docker was a company, and a technology, and a standard, and software that ran a daemon and built containers; basically one giant thing that did everything for containers. Over the years Docker has evolved quite a bit: they broke out some of the enterprise stuff to a separate company, and the development tools have been broken up a little, so now there's a daemon that runs in the background (containerd) and a component that actually runs containers (runc, I think), and they're part of the Open Container Initiative. It's basically an ecosystem now.
Docker is no longer just one giant tool that does everything, although when you install Docker for Mac or Docker for Windows (I have Docker for Mac installed here), it installs these pieces together. Docker Desktop even includes a built-in Kubernetes cluster; I don't use that one myself, I use minikube or kind, but it's in there. So it's still like a monolith if you install the desktop version, but you can also install the components separately.

Around the same time Docker was on the scene, CoreOS (which is no longer a thing) came out with an operating system and with rkt, "rocket." They started to get popular as well, but the mind share for containers mostly went to Docker: when people thought "I'm going to use containers," a lot of them reached for Docker. That was a problem, because rkt used its own layer of interaction with the system, so when Kubernetes came into existence around 2014-2015 it originally worked through plugins for Docker or rkt. Eventually people decided this was too complicated: both tools do the same things with the same underlying technologies, just slightly differently, so why not make a new layer that translates between different vendors' technologies and the containers they run? Out of that, the Open Container Initiative (OCI) was born, and the projects merged their work into building open standards for how containers run. As part of that, Docker moved its code for running containers into runc. (Some of what I say here might be contentious to people who worked on this, because I'm leaving out a ridiculous amount of detail. If you've followed everything that's happened in this part of the industry over the past few years, I'm sorry if I'm leaving out your pet project or the background behind some of these things.)

Don't confuse runc with crun, though, which is a newer container runtime written in C that was introduced last year. Lots of other tools have come and gone too, like bwrap-oci, which runs containers through Bubblewrap; it appeared around 2016 and seems to have petered out. There's a bunch more I'm leaving out. There's also CRI-O, an initiative that came out of Red Hat and, I believe, OpenShift, which implements the Container Runtime Interface (CRI). So there's the Open Container Initiative and the Container Runtime Interface, which Kubernetes uses together as a layer of interaction that allows different back-end tools to run the containers. All of these runtimes (runc, crun, Docker's own implementation) have different advantages and trade-offs in usability, performance, speed of downloading containers to your cluster, and so on. CRI-O tries to make a lot of that go away by being more pluggable and more performant, and by focusing only on running containers inside Kubernetes clusters, with no other use case.

Anyway, there are all these different technologies, and if you want to read the best article I've seen summarizing all of it, there's one on capitalone.com; I have a link in the description.
It's called "A Comprehensive Container Runtime Comparison," written by Evan Baker; it's an excellent article if you want to read up on that history.

I also want to say that the sponsor of this series, amazee.io, is one of the companies that can help you wade through these waters, because there is a lot out there. In the last episode I talked about the CNCF and its landscape of hundreds of projects and companies. A company like amazee.io helps you choose the best tech for your Kubernetes cluster and runs that technology, so you don't have to worry about which company bought which, or which new technology is being introduced. They give you the best-running Kubernetes cluster so you only have to worry about making your applications run as well as they can and do the things your company needs them to do. Check out amazee.io's products at amazee.io and take the pain out of managing your own Kubernetes cluster.

So: we know what containers are, we know their history, and we know they give you a lightweight, portable, isolated environment for running an application (or multiple applications) on one machine. How do we build one? There are actually a lot of ways to build containers. (It sounds like some people are having audio issues; try turning up the volume and unmuting, I guess.) Docker and Buildah are two of the most popular. Buildah is a bit newer on the scene; Docker has been around for ages, and most people are probably still using Docker for container work, at least locally. Buildah is a newer project that comes, I believe, out of Red Hat, through one of the container initiatives. If you look around the industry, a lot of companies have their hands in the mix on these container tools: Red Hat and IBM, Amazon, Google, Microsoft, plus tons of smaller companies that aren't worth billions of dollars. In my opinion, the right philosophy over the long term is: use what works, use what other people are using, and don't jump to some new technology just because you saw it posted on Hacker News and people are excited about it. That's why, today and for this series, I'm still using Docker. There's no real issue with that; all of these tools build compliant container images that will run in your infrastructure, so the tool matters less than whether it's easy to use, whether you can install it without worrying about drivers or bugs, and whether you can get help with it. The Docker community and the Buildah community both have a lot of great help and guides out there, but I'm more familiar with Docker, so that's what I'll use here.

Enough with the slides; slides are boring. Let's go build some things. In the previous episode we deployed a little container to our cluster, and that container was actually built from a Dockerfile that's in this series' example repository.
You might not have noticed this, but let me bring up a web browser: if you go to kube101.jeffgeerling.com, that's the website for this streaming series. The day each episode comes out, I post an article about it with links to all the examples from that episode. For episode one there's a link to the minikube instructions and to the deployment we created, the kube101 intro deployment, which served a web page with a video on it: "Charlie Bit My Finger," one of the most popular videos on YouTube of all time. That web page is part of this GitHub repository, which has all the examples in it, including today's.

And this is how I built that Docker image: with this Dockerfile, which is probably one of the simplest Dockerfiles you'll ever find. A Dockerfile describes to Docker how to build an image. First there's a FROM line, where you name an image to build on top of. Docker builds things in layers, so it takes the httpd image at tag 2.4, pulls it down, and uses it as the layer I start with; then I build on top of it with extra build steps. In this case there's only one build step: it copies an index file into the container.

Now, this isn't the only way to build containers. I sometimes pull out my golden hammer, Ansible, and I have built container images using only Ansible, with Docker as the back end: Ansible does all the work inside the image, and then I use Docker to tag and push it. So there are other ways to build images besides Dockerfiles, and sometimes I don't like the idea of a huge, sprawling, messy Dockerfile with tons of COPY and RUN steps. But a Dockerfile is the standard way to build a Docker image, and the idea is to make your image build process as simple as possible. For this one, it's just copying an index file.
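For reference, that whole Dockerfile looks roughly like this (reconstructed from the description above; the file name assumes index.html, and the document-root path comes from the httpd image's documentation):

```dockerfile
# Start from the official Apache httpd image, tag 2.4.
FROM httpd:2.4

# Copy our single index.html over the default Apache document root.
COPY index.html /usr/local/apache2/htdocs/index.html
```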
If you go to the Docker Hub documentation for httpd, it even includes an example of how to do this; that's basically how I built this image, copying that example and, instead of copying an entire folder of files, copying this one file. There are other things you can do if you want to run PHP or a Java application or whatever; there are other layers you can put into an image. But that's beyond the scope of this little Docker image. I just want to show how I built it and pushed it up to Docker Hub, because if you go to hub.docker.com and look at kube101-intro, you can see the image is up there, and it was last pushed an hour ago.

To build this image, I have Docker running on my Mac (you can install Docker on Mac, Windows, Linux, whatever), which gives you the docker command, and that command lets you do pretty much everything: run images, build images, all that kind of stuff. I should note that over in the land of Buildah, buildah just builds images; it's one tool for one task, which is kind of the Unix philosophy of not having one tool that does five things (although if we're talking about systemd, you might wonder what happened in that case). With that toolchain you'd use Podman to actually run containers, and something like runc or crun to run the containers in a Kubernetes cluster. The point is that Docker does all of this in one tool (you can see there are a lot of docker subcommands), but you can also pull out components of Docker and use them in a Kubernetes cluster, or on their own. That's not typical, though; most people who use Docker just install Docker and do everything with it.

So, I want to build this image locally. Right now I don't have any Docker images available locally; docker images shows nothing, because right before this presentation I ran docker system prune (you could also reset Docker to do the same), which deletes basically everything off your local system. So I'm going to build the image: docker build, then -t geerlingguy/kube101-intro, which is the tag I'm using for this introduction image, and then the build argument I forgot at first, the path to the Dockerfile. Right now I'm in the episode 01 directory (sorry if that's hard to see; maybe I should bump up the contrast) and there's a Dockerfile right there, so I just pass the current directory as the path. Docker downloads the httpd image, which is the first layer, and then copies the index file in. If I run docker images, I can see it just built that image for me.

This is a pretty massive image, honestly: it's literally a web page, and the web page plus Apache weighs 138 megabytes uncompressed. Still, that's smaller than an equivalent VM running, say, Debian 10 plus Apache plus this web page, so we're a little smaller there. And the nice thing is I can now take this image, push it somewhere and pull it somewhere else, or even archive it in a tarball and copy it across that way, and anywhere I have this image I don't need Apache or the web page installed; I can just run it.

So I run docker run, with --rm so the container is cleaned up when I'm finished with it, -p 80:80 to connect port 80 on my local computer to port 80 in the container, and then the image name, geerlingguy/kube101-intro. The first time I ran it, it started downloading an image instead of using the one I'd just built, because I typed the tag with an extra dash; with the tag spelled the same way I built it, it uses the local image, starts up Apache, and if I go to localhost you can see it's serving the same web page from the previous episode. That's how you can run and test an image with Docker directly. I'll Ctrl-C to quit, and that's it for that application; but that's an overview of how a Dockerfile translates into building a Docker image and running it.
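Collected in one place, the local build-and-test loop from this section looks roughly like this (the image tag is the one used in the episode; adjust if yours differs):

```sh
# Build the image from the Dockerfile in the current directory.
docker build -t geerlingguy/kube101-intro .

# Confirm the image now exists locally.
docker images

# Run it, mapping port 80 on the host to port 80 in the container.
# --rm removes the container when you Ctrl-C out of it.
docker run --rm -p 80:80 geerlingguy/kube101-intro
```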
Now we're going to do the same thing, but with an application that we build ourselves, and at the end we'll push it up to a registry.

One thing to note: you might have seen that the image was "pushed an hour ago," but I didn't push it by hand. I have continuous integration running in GitHub, using GitHub Actions, that builds and pushes the Docker images. If you look at the intro image's tags, there's an amd64 build for Intel processors and an arm64 build, so once Docker is running on Apple silicon this could run natively on an Apple silicon Mac, or it can run on my Raspberry Pis. I'm using Docker's buildx for that, and I'm not going to cover it in this episode, but if you want the details there's a GitHub workflow called "build episodes" in the Kubernetes 101 repository that builds and pushes all those images automatically every time I push a commit; go check that out if you're interested.

Now let's hop over to building a Go application. A caveat first: you're not going to be a Go programmer at the end of this episode. I certainly wouldn't call myself one, even though I've written a couple of very small Go apps and submitted some patches to Go projects. A lot of people say "I used this technology for a hello-world thing, and now I'm a such-and-such developer," and I don't think that's how we should be as an industry; it causes a lot of strife, because somebody gets hired as a whatever-developer and it's not honest. (I also think our hiring practices are kind of crazy, but that's a tangent.)

Before we look at the application, you'll want Go installed on your computer so you can play around with it, test things, and build applications locally. In my case I used my handy brew install go, but if you go to golang.org, the documentation has instructions for installing Go; they recommend installing their package, and that's fine too. I just usually use Homebrew for everything because I like using a package manager.

Somebody asked about --rm, "remove after exec." If you use docker run --rm, the container starts in my terminal and stays in the foreground with its log output dumped to the screen until I hit Ctrl-C, and once it's done, the container is removed. If instead I run the container in the background with -d, basically daemonizing it, then it keeps running, and when I stop it, the stopped container just stays there on my system until I remove it. So I usually use --rm for containers I'm only testing locally; that way it cleans up after itself. The other thing I do is often just reset Docker completely. Some people say "but I have all these things running in Docker that are so important and critical," and if that's the case, why not run them on a server somewhere instead of on your local computer where they're somehow super critical? I love to reset all of Docker on at least a weekly basis, if not daily.
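To illustrate the difference between --rm and detached mode described a moment ago (using the intro image from earlier as the example):

```sh
# Foreground, with cleanup: logs stream to the terminal, and the container
# is removed automatically when you stop it with Ctrl-C.
docker run --rm -p 80:80 geerlingguy/kube101-intro

# Detached ("daemonized"): the container runs in the background, and even
# after being stopped it stays on the system until you remove it yourself.
docker run -d --name intro-test -p 80:80 geerlingguy/kube101-intro
docker stop intro-test
docker rm intro-test
```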
Anyway, on to the little example we're building. I was going to live-code it in front of you, but after doing that a few times on live streams: if this were the only thing on the agenda, I would, but there's always something, like forgetting one character somewhere, and then you spend ten minutes watching me dig through Go documentation only to find out I forgot to print one character. So instead I'll show the basic structure of this Go command, hello.

In Go you basically build two kinds of things: packages and commands. Packages are things you can import from other Go packages or commands (the ones imported here ship with Go), and I'm building a command called hello, in hello.go. The way I know it's a command is that it has a main function, which is what Go calls whenever you run the command itself. This main function does just one thing: it sets up a web server that listens on port 8180. I set up a variable for the address; it's a string that basically says "listen on all interfaces on port 8180" (I don't specify 0.0.0.0 or 127.0.0.1, just the port), and I use it down below when I start listening and serving. Then I set up a handler, using Go's built-in http package, that responds to requests: the helloServer handler gets a request, can do whatever it wants with it, and writes a response back out through the http package, and Go does all the back-end work of sending the response to the client.

It also logs a little message saying the web server is starting, and this is something I think you should start considering. Most people don't do an amazing job with it, but when you run applications cloud native in a Kubernetes cluster, you want to think about how you log things, because Kubernetes makes it so easy to scale up to 2, 4, 10, 25, 100 instances of your application, and when you do that you need a good logging structure. When your app is doing stuff, or thrashing, or something weird is happening, the log data should go out to Kubernetes, and Kubernetes can aggregate it somewhere, whether that's an external service like Datadog or an internal service you run in your cluster, like Elasticsearch and the ELK stack. You need to make sure your app logs the right things. Some applications don't log well at all; some don't have a pluggable logging back end and only log to files, which is harder to deal with, because you need something like a sidecar container that watches the file and ships it out to a logging system. So when you're building applications for cloud native deployments, try to make sure you have good logs and the log data you'll need to diagnose and debug things.

So anyway, we have this log message saying the web server is about to start, and then it starts listening for requests and serving responses with ListenAndServe. That's pretty straightforward: you pass it an address to listen on and a handler for requests, and if there's a problem it logs the failure (there won't be here, because nothing else is running on this port, so it should start up fine). And that's our whole hello command; now you know a tiny bit of Go.
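Here's a minimal sketch of what a hello.go like that looks like. This is a reconstruction from the description above, not necessarily the exact file in the repository; the handler name, log messages, and response text are illustrative:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// helloServer handles every request: it logs the requested path and
// writes a short response back to the client.
func helloServer(w http.ResponseWriter, r *http.Request) {
	log.Printf("request for %s", r.URL.Path)
	fmt.Fprintf(w, "Hello, you requested: %s\n", r.URL.Path)
}

func main() {
	// Listen on all interfaces, port 8180.
	listenAddr := ":8180"

	http.HandleFunc("/", helloServer)

	log.Println("Starting webserver on", listenAddr)
	log.Fatal(http.ListenAndServe(listenAddr, nil))
}
```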
There's also a test file next to it. I'm not going to talk much about Go testing, but it lets you test everything, and Go is kind of cool in that testing and formatting are built into the core of the language. A lot of languages have arguments about spaces versus tabs; you might not be able to see it on my monitor, but I have invisible whitespace showing, and Go uses tabs instead of spaces. I'm a spaces guy (I probably just lost a few subscribers for saying that), but Go wants tabs, and the way I know that is that after typing up some Go code I can cd into the command's directory (whoops, I was in the episode 1 directory; over to episode 2, then into cmd/hello) and run go fmt, which reformats my code the way Go expects. It's like built-in linting, and you can integrate it with your editor; I haven't set that up, since I'm not a Go programmer by nature, but you can use Go's built-in tooling to keep your code formatted and happy. Go also has built-in testing, so if I run go test, it picks up the test file and runs it, and surprise, surprise, it passes; that's because I didn't live-code it. (If I had, we'd be spending the rest of this episode figuring out why it doesn't work.)

So here's our Go app, and I'll run it using go run, which in some ways is like docker run: it just runs the application. Let me cd back to the main directory and run go run cmd/hello/hello.go. That runs the command by calling the main function, which starts the web server, and now if I go to a browser and visit localhost:8180, you can see that Safari requested the path /, and if I put in another path, like /test, it requests that path and the server logs it. It's not a very full-featured web server, but it works, and it shows that we're compiling a Go application and it's running.

That's not super portable, though: if I want to send this application to somebody else, they'd have to install Go on their system just to do go run. But Go can also build an artifact, a single executable file you can run on a system just by calling it. To do that, you say go build, with basically the same arguments as go run, and it builds an artifact named after the command: hello. Now I can just call that hello binary, and I don't even need Go installed on my computer.
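The Go commands used in this section, collected in one place (paths assume the episode layout with the source at cmd/hello/hello.go, run from a checkout of the examples repository):

```sh
# Format and test the package (run from the cmd/hello directory).
cd cmd/hello && go fmt && go test && cd ../..

# Run the server directly (Ctrl-C to stop), then visit http://localhost:8180/
go run cmd/hello/hello.go

# Build a standalone binary (about 6 MB) and run it without Go installed.
go build cmd/hello/hello.go
./hello
```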
If you look at the size of that binary, it's six megabytes, and that's because it has all of its dependencies built in: the HTTP server and everything else. The code itself is probably a kilobyte or so, but the binary packages up everything the Go application needs to run in any environment where it can run. I can call ./hello, and I can distribute this hello binary to anybody; they can download it, run it, and have the exact same thing on their system, and you can see it behaves just the same as when I ran it with go run. That's a nice thing about Go: it packages up binaries that are a lot easier to distribute. With PHP, which is the language I'm most familiar with, or with Python, you have to have the language runtime and an environment on your system to run the applications you build; with Go, I can build this binary, pass it to someone, and they can run it without having Go installed. Pretty cool.

Now we want to deploy this into a Kubernetes cluster, and you can't just throw a binary into a Kubernetes cluster; you need a container. But there's a little problem we could run into. I'm going to delete that hello binary, and I have a Dockerfile here that's a bit different. If you know the basics of Docker, you might see something alarming: there are two FROM statements in it. This is what's called a multi-stage build. The first stage builds the artifact, the little hello binary, inside a container that's only used for the build and then thrown away. The reason is the size of the golang image: if you look at the golang image's tags on Docker Hub, the alpine variant is 103 or 104 megabytes compressed. That's a pretty big image, and the binary we built is only six megabytes, so it would be silly to ship a 103-megabyte container just to run one little binary that already has everything it needs. It's not like Apache, where I needed Apache plus a web page; this binary is everything. I could build it that way, have the image run the hello binary on startup, and it would work, but then I'd have a 100-megabyte image.

So instead, the second stage builds the production container, the end result. The first stage builds the binary, then the second stage starts from Alpine Linux, which is tiny, a few megabytes, and copies the six-megabyte artifact out of the first stage into the final container. Then it gives Docker a couple of bits of metadata: EXPOSE, for the port we want the application to run on, and the ENTRYPOINT, which is what Docker executes when you run the container without any arguments; in this case, the hello binary.
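The multi-stage Dockerfile described above looks roughly like this. It's a sketch rather than the exact file from the repository: the builder tag, source path, and CGO_ENABLED setting are my assumptions.

```dockerfile
# Stage 1: build the Go binary inside the (large) golang image.
FROM golang:1-alpine AS build
WORKDIR /src
COPY cmd/hello/hello.go .
# CGO_ENABLED=0 produces a static binary that runs fine on Alpine.
RUN CGO_ENABLED=0 go build -o /hello hello.go

# Stage 2: copy only the ~6 MB binary into a tiny Alpine image.
FROM alpine:3
COPY --from=build /hello /hello
EXPOSE 8180
ENTRYPOINT ["/hello"]
```

The build stage and everything in it gets discarded; only the final Alpine-based layer is what ends up tagged and pushed.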
So we can use Docker to build this image: docker build -t geerlingguy/kube101-go, passing it this directory. It first grabs the golang image and builds the application with Go inside that stage; then it pulls down Alpine, which is only about three megabytes, pretty small; then it copies the built app into that container and sets up the metadata, the EXPOSE and ENTRYPOINT. Now if I run docker images, you can see the kube101-go image I just built is only 12 megabytes. Instead of shipping 100-plus megabytes with the entire language toolchain inside, I build the thing in one stage and copy it into a separate, very small container, and the result is 12 megabytes.

Why does that matter? Say I'm deploying this to a bunch of Raspberry Pis, which I do quite often. With a 200-megabyte image, it takes those Pis a few minutes, especially over Wi-Fi, to download everything. A small image makes that much quicker, it uses less bandwidth (which matters on cloud providers that charge a lot for bandwidth), and it saves storage. We want to make things as lightweight as possible, and it's not too complicated to do. This episode can't go deep into how to do this with every programming language under the sun, but this is one reason a lot of Go developers benefit from deploying just the Go binary: they can get images a lot smaller than with stacks where you have to install the full language runtime alongside your application.

So we have the image, and now I want to push it up to a repository, because right now it only exists on my local computer. If I want to deploy this little hello application into a Kubernetes cluster in a cloud, I need a place to put the container image that isn't my laptop; I can't have Amazon pulling from my computer. What if it's asleep, or what if my internet connection only has a 20-megabit uplink (thank you, Charter, or Spectrum, or whatever it is, in St. Louis, for not offering symmetric upload and download speeds)? So I need to put it somewhere, and in this example I'll use Docker Hub. I have their cheapest paid plan, five dollars a month, but you can use any registry; you can even self-host one if you want. I don't like doing that, because there are tons of cloud services and five bucks a month is an easy price to pay for a Docker registry. If you're on Amazon you can use ECR, and GitHub has its own container registry you can integrate with. I'm going to create a repository for this and call it kube101-go, because that's the tag I used, and I'm going to make it a private repository in Docker Hub's registry, just because this is a very top-secret application I don't want leaking out to the world.
I'll create that repository, and then push the image up with docker push geerlingguy/kube101-go, with the latest tag. Now, if you tried pushing to this private repository yourself, you'd get an authentication error (and ideally you wouldn't be able to hack your way around it). For me, all I did was log in through Docker Desktop; it says I'm on the free plan even though I'm on a paid plan, which is maybe a bug, but the push went through. If I go back to Docker Hub and refresh, the latest tag is there, and under tags you can see that, compressed, it's only six megabytes. So it's nice to be able to get that Go application, or whatever application you're building, into a small Docker image and put it in a repository; now anywhere with an internet connection that can reach Docker Hub can pull this image down and run it.
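The push itself is just a couple of commands (assuming you've created the kube101-go repository on Docker Hub under your account and can authenticate):

```sh
# Log in to Docker Hub (logging in via Docker Desktop works too).
docker login

# Push the image we built and tagged earlier.
docker push geerlingguy/kube101-go:latest
```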
Note that for this one I only built the image for amd64, so unfortunately if I switched to an M1 Mac it would not run natively; hopefully someday there will be a way to run amd64 images on an M1 Mac in Docker, but we'll see if they can pull that off. For most of my images nowadays I build both Intel and Arm versions (amd64 and arm64 are one letter apart, so my brain flips them sometimes), partly to future-proof and partly because I like running these things on Raspberry Pis. Unless you need tons of RAM or CPU, a lot of things run just as well on a Raspberry Pi 4.

All right, now that we have that set up, there's one other thing I want to show quickly, if you're using minikube locally; let me see if my computer survives bringing a cluster up. If you want to get images into minikube, there are a couple of ways. One is to use an external repository like this one and connect minikube to it so it can pull the image down; I'll talk about connecting a Kubernetes cluster to a registry using a pull secret in the next episode. You can also have minikube set up its own registry locally on your computer and push to that, or you can use minikube's built-in Docker context (or a Podman context) so you can build images inside the minikube cluster itself. While it starts up: my CPU is at 70 degrees Celsius, and usually around 71 or 72 is when kernel_task on my Mac goes to 100 percent on all cores, which means it's throttling the CPU, so hopefully we don't get there.

Okay, minikube is running. Right now docker images is using my local Docker, Docker Desktop on the Mac, but if I run eval $(minikube docker-env) (with Podman it would be podman-env instead), it switches my Docker context to the Docker daemon built into minikube, and now docker images shows me what's inside minikube. I can then build the image with docker build -t geerlingguy/kube101-go . and it builds inside the minikube environment; it redoes everything I did on my computer, but inside minikube. That's one way to do your testing entirely locally inside minikube, disconnected from the internet if you need to be, and docker images now shows that kube101-go image in there.

All right. Sorry I didn't get into the live chat much; this episode had a lot of content, and I wanted to get through it, because you have to know how containers work at least at a basic level before you can really understand how they run inside Kubernetes and how pods work. Next episode we'll talk about connecting Kubernetes clusters to container registries, which is something most people have to do (if you use a completely managed instance that might be handled for you, but it's still good to know how the connection works in case you have issues, or if you want to use minikube with a private registry). We'll also talk about running this Go app inside a Kubernetes cluster and different ways of managing it: deploying it, scaling it, rolling back releases, things like that. So please subscribe to the channel, please consider supporting the Crohn's and Colitis Foundation using the link in the description, and until next time, I'm Jeff Geerling.
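For reference, here's the minikube workflow from that last demo as a condensed sketch (the docker-env behavior and exact tags may differ on your setup):

```sh
# Point the local docker CLI at the Docker daemon inside minikube.
minikube start
eval $(minikube docker-env)

# Images built now live inside minikube, so the cluster can use them
# without pulling from an external registry.
docker build -t geerlingguy/kube101-go .
docker images

# When finished, point the docker CLI back at the local daemon.
eval $(minikube docker-env --unset)
```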
Info
Channel: Jeff Geerling
Views: 36,419
Rating: 4.971806 out of 5
Keywords: kubernetes, devops, introduction, intro, beginners, guide, k8s, k3s, eks, gke, aks, geerlingguy, kube101, kube, kubectl, docker, containers, podman
Id: AHDrejEv0SM
Length: 56min 36sec (3396 seconds)
Published: Wed Nov 25 2020