Getting Started With Kubernetes on DigitalOcean

Captions
All right, welcome everybody. This is a DigitalOcean Tech Talk entitled "Getting Started with Kubernetes on DigitalOcean." It is October 13th, 2021, and I'm super excited that I get to do this Tech Talk with my colleague Mason. Let's introduce ourselves — Mason, will you go first? Yeah, my name is Mason Egger and I'm one of the developer advocates at DigitalOcean. I focus on the infrastructure-as-a-service side of things, so you may have seen some of my Tech Talks before; Python and Droplets seem to be my favorite subjects. Absolutely, thank you. And I'm Kim Schlesinger, a developer advocate at DigitalOcean on Mason's team, and I focus on cloud native technologies, especially Kubernetes. Right now it is the week of KubeCon + CloudNativeCon North America 2021, and we wanted to have some content for people who are seeing all of the discussion about KubeCon but haven't had an opportunity to give Kubernetes a try yet — if you're watching this, we hope that's for you. We have some viewers in our live stream, so if you're there, let us know: what's your name, where are you watching from, and what brought you to today's talk? So this is called "Getting Started with Kubernetes on DigitalOcean," and Mason and I are going to be working together. Let me show you our goals. I'm sort of the Kubernetes expert; Mason is the Python expert and knows a lot about Docker, so if you are new to Kubernetes, this is a good set of steps to go through to get an application up and running. First, Mason wrote a very simple Python application, and we're going to make sure it's containerized. Next, we're going to push an image of Mason's application to a container registry. Then we're going to set up a Kubernetes cluster, then deploy at least three replicas of the Python application inside the cluster, and then, if we have time, we're going to try to expose that application to the internet. Those are our goals, and we have lots of people watching, so I just want to say hey to some of you. Hello Lawrence from Amsterdam, looking forward to learning a little more about Kubernetes — great, welcome. We have Abhinav saying hey; we have Michael from Cape Town, South Africa — welcome; Gopesh from Gainesville, Virginia in the US; Turkey from Istanbul, Turkey — oh my gosh, we have a lot of folks; Gillet from Belgium; Willington from north Alabama in the US, welcome; Bernd from Dornbirn — I'm not sure where that is. We've got a lot of people. Martin, who I think we've seen before, from Sweden, is curious about Kubernetes on DigitalOcean — we will show you some of that. Cesar; Jan from Amsterdam; lots of folks in Europe; Diego from the UK; Eric from Boston; we've got Ghost from Texas — that's where Mason is — and Caesar from Indonesia. All right, we've got a lot of people, and I don't think we're going to be able to say hello to everyone, but we're so glad you're here. We will pause and answer questions as we go, but let's get started. Our big goal is that we want to deploy an application to a Kubernetes cluster, and the first step is to have a containerized Python application, so I asked Mason to prepare a very simple one. Mason, I'm going to stop sharing my screen — will you share your screen and show us your application? What does it do?
Yeah, it's a very simple Flask application — a hello world application. The only difference is that instead of saying "hello world," it's going to say hello from the hostname, because I want to be able to demonstrate Kubernetes pods and all of that fun stuff. When you're dealing with containerized applications and distributed systems, it's nice to actually see the change, because if you don't, it all looks like the same app and it all feels magical — you don't get that "oh wow, this is running across three different nodes and it's completely seamless to me" moment. So I decided to do that, but it really is just a hello world Flask application, nothing special about it. All right — can you run it locally? Yeah, I've already built it locally. Awesome. It should be running, and if I come over here, it should be on localhost:8080, I think — yes, that's what I saw. So it says hello from — and then this weird hash. I'm assuming that's just how Python is getting the hostname of my Windows Subsystem for Linux, since I'm doing this on Windows. Excellent. All right, so we have a Python application, it's a hello world application, and you've already got it containerized, so it looks like that goal is done. Can you show us the Dockerfile real fast and say a bit about it? Yeah, the Dockerfile is also relatively straightforward. It's FROM python — and because Python 3.10 just came out, this is running Python 3.10. If you are concerned about Python versions, I would definitely pin it; I'm normally not the person who does a bare "FROM python" kind of thing, but for this example I figured we'd just get what we get. Then basically we create a directory called app, we switch our WORKDIR into /app, and we add all of the files from our local directory into it — so we add our main.py and our requirements.txt. The requirements.txt has one thing in it, which is Flask; that's the only thing we need. What you could do is install this in a virtual environment and then do a pip freeze, so you can pin the versions — again, for this short little demo I chose not to do that, but it's totally an option. Then you install your requirements. The EXPOSE 5000 is actually not right anymore, but it doesn't really matter too much — normally Flask defaults to port 5000, but as you can see, I changed the port down here to 8080.
So in reality I should have changed that EXPOSE to 8080, but because we're using the -p flag on the docker run command, it really didn't matter that much. And then we just run our Python app, basically running it in debug mode with plain "python". If you wanted to run this in production, you could install a WSGI server — something like uWSGI or mod_wsgi; Gunicorn is one that I use a lot. There's a big debate about whether you need to do that inside of Docker, and the answer is: it depends on what you want to do. Sure. I actually don't know what a WSGI is — what is that? It's the Web Server Gateway Interface, I think — something like that. Because Python has the global interpreter lock, there's basically no parallelism within a single process; everything goes through this one global lock, so it all runs in line — it's not parallelized. When you use a WSGI server, it allows you to spawn up different processes, and that way you're getting around it: by running the application as multiple processes in the background — you could run two to as many as you want, though in a Docker environment I'd probably run two to four — you get that multi-processing where it handles more than one request at a time. It's a long-standing thing we've dealt with in Python for a very long time, and at this point it's all second nature to me; I didn't even think about it. So, single-threaded, I guess — serial, that's the word. Ghost got it: it's serial. I could not think of it. And Ghost also said, yes, Python 3.10 came out last week. Yes, Python 3.10 came out — I think it was last week; the time blends together, but it was within the last two weeks for sure. We're running Python 3.10 now. Very cool.
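For anyone following along, here is a minimal sketch of the kind of app and Dockerfile described above. The file names (main.py, requirements.txt) and the image name python-k8s come from the talk; the exact file contents are an assumption, since they are only described verbally.

```bash
# Sketch of the demo app and image described above (contents are approximate)
mkdir -p hello-k8s && cd hello-k8s

cat <<'EOF' > main.py
import socket
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Report the hostname so each container/pod identifies itself
    return f"Hello from {socket.gethostname()}\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080, debug=True)
EOF

cat <<'EOF' > requirements.txt
flask
EOF

cat <<'EOF' > Dockerfile
# The talk uses an unpinned base image; pin a version (e.g. python:3.10) in real projects
FROM python
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
# The demo Dockerfile still said EXPOSE 5000; the app actually listens on 8080
EXPOSE 8080
CMD ["python", "main.py"]
EOF

docker build -t python-k8s .
docker run --rm -p 8080:8080 python-k8s   # then visit http://localhost:8080
```

Visiting localhost:8080 should print the container's hostname, which is what makes the pod-to-pod differences visible later in the talk.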
All right, well, thank you, Mason — I'm going to go back to sharing my screen. We are moving through this first goal pretty quickly. One of the prerequisites for getting started with Kubernetes is that you at least want to know what a containerized application is, because that's what Kubernetes is: a container orchestrator. DigitalOcean has a lot of materials about containers and Docker, so if you need to dig into that, we encourage you to do so. But right now, since we're focusing on getting Mason's hello world app deployed into a Kubernetes cluster, we're going to move on to the next goal, which is to push that container image to a registry. A registry is a place where you can store container images — there are things like Docker Hub and Quay — and DigitalOcean has a container registry called DOCR, which stands for DigitalOcean Container Registry, and we're going to push Mason's image to that registry. Mason, do you want me to share my screen and pull up the docs, or is that something I should have you do? I can pull up the docs real quick — DOCR, the DigitalOcean Container Registry. It's a very common Docker command, but I never remember it; I have some notes on it, and I find myself at this documentation link a lot. So we're at the DigitalOcean Container Registry docs page, and we're just reminding ourselves how to push an image to the registry. If you haven't seen the DigitalOcean Container Registry, it's on the DigitalOcean cloud dashboard — it's called Container Registry, it comes as a subscription alongside Spaces, and there are different plans: a free plan that lets you store a certain amount, all the way up to a professional plan that lets you store a lot. The main thing we need out of this is the registry address, registry.digitalocean.com/sammy — we've already created our registry, so that already exists. So we need to use doctl to log in; doctl is the DigitalOcean command line tool, which lets you interact with your account from your terminal instead of through the cloud console. Now the fun thing is: which registry am I logged into? That's because I have so many different DigitalOcean accounts now. We will see — if this command doesn't work, we'll have to try again. Also, what did I call this thing? docker images — okay. So: docker tag python-k8s registry... actually, that's where we paste this — thank you, WSL, for having that on the clipboard, that was nice of you — ...sammy/python-k8s, and now we can do docker push registry.digitalocean.com/sammy/python-k8s. All right, that means we're logged into the right one. Excellent. So we tagged that image with the DigitalOcean Container Registry information — our organization in DOCR is called sammy, and the name of the image is python-k8s. Is that correct? Yes.
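A rough sketch of the push sequence just described; the registry name sammy and image name python-k8s are the ones used in the demo, so substitute your own registry name.

```bash
# Authenticate Docker against your DigitalOcean container registry, then tag and push the image
doctl registry login

docker tag python-k8s registry.digitalocean.com/sammy/python-k8s
docker push registry.digitalocean.com/sammy/python-k8s
```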
Okay, while that's happening, I'm going to pull up a couple of questions. This is not about DOCR or anything, but Michael Potter asks: what software is being used to create this live stream? We use a tool called StreamYard, which lets us stream to multiple platforms, which is really convenient — so check it out, Michael. And then, let's see — I'm not sure what the answer to this one is, but: if main.py takes a while to execute, do I need Kubernetes to be able to answer multiple HTTP requests? I don't know — Mason, what do you think about that? That's an interesting question. Python's not as slow as people want to say it is; people who say Python is slow and can't handle things are talking about Python 2.6 and are about 15 years out of date. Python is actually extremely fast, and the idea that a Flask application can't handle multiple HTTP requests is just completely false — it's fast, it handles it. Now, if there's a lot of waiting going on, or a lot of processing behind the scenes — if you're not responding instantaneously, maybe because you're opening a file or doing something long-running on the back end — you should probably use a WSGI server for it, and then with that WSGI server you could use Kubernetes. I guess there's an argument that you could replace the WSGI server with Kubernetes, and the answer to that is kind of yes, but if you have one core allocated to a pod and you're only using one thread on that core, it's kind of a waste of resources — multi-processing is extremely powerful these days and we can do a lot with it, so there's no reason not to use it. So I would say: eh. It's a weird answer, but no, I don't think you need Kubernetes just to handle multiple requests. If you're dealing with 10,000 requests per second, yes; but if you're dealing with five or ten requests per second, no — you could probably run it on its own, without any WSGI server or Kubernetes, and not see any degraded performance. Okay — that word I can't say, degradation. So it sounds like Python 3 is faster than Python 2, and maybe don't worry about it until you actually get data showing it can't handle all those simultaneous requests, and then add some tooling around that. All right, I'm looking at your screen and it looks like that image might be in the registry — let's look back at the cloud console and confirm. Here it is: python-k8s. Excellent. So the application that Mason wrote, and that we saw him run locally — he containerized it, made an image of that container, and pushed the image to a registry; we're using the DigitalOcean Container Registry. That's the first step to getting something onto Kubernetes. Let's go back to our goals — we are moving through these, this is great. We containerized a Python application and we pushed the container image to a registry, so you have an image you can pull from a container registry, and now we're going to set up a Kubernetes cluster. DigitalOcean has a product, DigitalOcean Managed Kubernetes, which will spin up a cluster for you. So Mason, would you like to do this from the cloud console or from doctl, on the command line? Oh, I've never done it from doctl before — do you have the command? I do. Okay, back to the screen; I have to shuffle some windows around to see it. Actually, let's go to the DigitalOcean Kubernetes documentation — that's where all those commands live; I imagine it's in the quick start. You can either create a DigitalOcean Kubernetes cluster from our user interface, the cloud console, or you can use doctl if you're someone who likes to live on the command line like I do. So what do we see here — create clusters? I see a whole bunch of create-cluster material, but where's the part about how to use doctl for it? I'm not sure, let's look around; we'll just search for doctl — nope, not easily showing up. All right, I think what I've done in the past is just look at the doctl documentation built into the command itself: doctl kubernetes. Yep — okay, so there's the subset of doctl which is doctl kubernetes, and then there are all of these different options. We're trying to create a Kubernetes cluster, so why don't you do doctl kubernetes cluster and see what our options are: create and delete. So we can do a doctl kubernetes cluster create. Yeah — "command is missing required arguments" — so let me look at my personal documentation real fast. It probably needs a name or something, right? Yeah, definitely, we want to name our cluster. What can we do, -h on it? Oh, that doesn't display well. [Laughter] It at least needs a name; I think we can get away with just giving it a name. Okay, let's try that, and we'll call it tech-talk. Excellent, love it. All right, if you hit enter, what happens? I did — we are waiting. These Kubernetes clusters do take a little bit of time to come up, so I guess we can go back to the chat while we're waiting. Absolutely — and it looks like we've got a nice notice: it says the cluster is provisioning, waiting for the cluster to be running.
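The single command used in the talk looks roughly like the sketch below; everything except the cluster name is left to doctl's defaults, which is why a three-node pool shows up later. The commented flags are an assumption about options you might set explicitly rather than something shown in the demo.

```bash
# Create a DigitalOcean Kubernetes cluster with default settings, named as in the talk
doctl kubernetes cluster create tech-talk

# Optional flags you could pass instead of relying on defaults (illustrative), e.g.:
#   --region nyc1 --count 3 --size s-2vcpu-4gb

# doctl normally saves the new cluster's kubeconfig and switches your kubectl context for you
```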
Can you go back to the cloud console, and let's see if we get any information there that helps us know when it'll be ready? We'll definitely get a message in the terminal here. Going to the Kubernetes tab — yep, there it goes, it's provisioning. Nice. Okay, so our Kubernetes cluster is spinning up, so let's take a look at the chat. Mason, I think you're a little choppy. Someone asked if we can make the terminal screen a little larger — I can make it a little bigger, yeah, absolutely. Excellent. Paul asked: does Kubernetes take advantage of multiple CPUs by default, or do I need extra configuration? I guess the question is whether this is about the application or about the virtual machines that are running your Kubernetes cluster — what do you think, Mason? I think they're just asking: if I deploy something to Kubernetes, does it automatically take advantage of multiple CPUs — does it detect them and try to use them? And I don't know the answer to that. I think the answer is yes, and one thing you want to start doing as you become more fluent in Kubernetes is setting resource requests and limits on your workload deployments, where you say: when you are scheduling this container on a node, I want it to use no more than X vCPUs and no more than whatever your memory limit is. Then Kubernetes can schedule and shuffle things around accordingly. So the best practice, Paul, is to add that extra configuration — resource requests and limits. That is a good question. All right, what is our cluster doing? Yesterday we released a new — it's an early availability program, but we've changed the underlying infrastructure of our control plane so that Kubernetes clusters spin up more quickly on DigitalOcean Kubernetes and are more secure and more self-healing, so I'm hoping this one spins up a little faster than ones in the past. It looks like we're still waiting, so we've got another question, from Stefan: are there advantages to using Kubernetes over OpenShift or other tools? I know this answer can be infuriating, but it's absolutely true: it totally depends on what you're trying to do. For you and your team, I would recommend evaluating the different tools and deciding what's best for you and the people you work with. Anything to add, Mason? Isn't OpenShift just Kubernetes under the hood now anyway? What's up, Lawrence. Yeah — I remember trying to use OpenShift back in 2012 or 2013; I was still in college at the time, and it was interesting, but I never did quite figure it out. I think most of OpenShift is Kubernetes-based now anyway, so at that point it's almost a managed Kubernetes service — but I definitely don't know, because I haven't looked at it in a very long time. So I guess a more useful question here, from Kevin, is: what are the advantages and disadvantages of running your Kubernetes workloads on a managed versus self-hosted Kubernetes cluster? That's a great question. A self-hosted cluster would mean that you have some virtual machines and you bootstrap Kubernetes on your own, and in that case you're responsible for absolutely everything — all the upgrades, all the security — and you have access
to your control plane nodes, which are like the brains of Kubernetes. Versus a managed Kubernetes service — something like DigitalOcean Kubernetes, or I guess OpenShift might fall into that category — where you're still running a Kubernetes cluster, but the company you're working with sets up the cluster, does the bootstrapping, and handles all of the upgrades for the version of Kubernetes and all the underlying software. So, again, it depends. If you're a smaller business, or you want to be able to deploy things onto Kubernetes but you're less interested in all of the internal machinations of what's going on, I would recommend a managed service — somebody's going to take care of all of the hard parts for you. But if you're curious about Kubernetes and you have a lot of technical expertise, maybe do something like Kelsey Hightower's Kubernetes the Hard Way, where you build Kubernetes from the ground up. So it depends on how much control you want over your Kubernetes cluster, and how much time and how many people you have at your company — bootstrapped versus managed. All right, it's still going — almost there. Great, well then we'll answer another question, from Alan: how can I get started with learning Kubernetes? That's a great question. I would search the internet, maybe for "Kubernetes for beginners" or "Kubernetes 101," try out a couple of resources, and then pick one you stick with. We have lots of information on DigitalOcean's community page — you can check out our tutorials — but I think the best way to learn is to actually start doing the thing. If spinning up a cluster on DigitalOcean isn't feasible for you financially right now, look into this project called kind — it's Kubernetes in Docker — and you can run a cluster locally on your computer, for free. So I would spin up a cluster and then start using kubectl commands as quickly as you can. Find a resource, stick with it, and move right into the practical; don't worry about the theoretical. Get some commands under your fingertips, and then start learning about the underlying parts. I think we also have a whole series — Kubernetes for Node developers, I think? Yes — our backstage person actually shared all of those in the chat, so thank you so much: Kubernetes for Full-Stack Developers. I love how the backstage crew and I have the same brain. But yes, there's a whole ebook we have for teaching yourself Kubernetes.
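If you want to try this without a cloud bill, the kind project mentioned above boots a throwaway cluster inside Docker with one command — a quick sketch, assuming you already have Docker and kind installed:

```bash
# Create a local Kubernetes cluster inside Docker, then point kubectl at it
kind create cluster
kubectl get nodes

# Tear it down when you're done
kind delete cluster
```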
And I think our cluster is ready to go. Excellent — let's poke around the cloud console for just a second to see what's there. Yep, you can go to the Kubernetes dashboard if you want. The Kubernetes dashboard is an open source project, I think under the Kubernetes project itself, where you can get some visualizations of what's running in your cluster, and if you enable the right permissions you can have developers deploy applications from there. It looks like we've got four different kinds of workloads already running in this cluster: DaemonSets, Deployments, Pods, and ReplicaSets. Our goal for today is to use a Kubernetes resource type called a Deployment, which will then create a ReplicaSet and then pods of Mason's application. Maybe we'll come back to this, maybe not — we might just get the information from the command line — but it looks like there's some good information there. So this is just a user interface, a visualization of what's running in the Kubernetes cluster. Let's go back to the cloud console, Mason. Okay, excellent — if you go to Nodes, what do you see? We ran the simplest possible doctl command, so I'm curious how many nodes we get by default. If you click the plus sign, I think it'll show us the different nodes — this one right here? Yeah, perfect. So we have three nodes running our Kubernetes workloads, and they all have the same name at the beginning and a different little hash at the end, so we've got three different DigitalOcean Droplets running our Kubernetes nodes. We have three because we want something that's highly available: if one node goes down, there are still two running, and if we have a minimum of three nodes set, Kubernetes will add another node if one of them dies. That's one of the benefits of Kubernetes, this self-healing aspect, where you say "this is what I want to be true about my cluster: I want three nodes and five replicas of my application running," and there are control loops in Kubernetes that are always checking those numbers and making sure the actual state of the cluster is in line with what you requested. If you go to Insights, what do we see? Okay, excellent — just some information about what's going on on those Droplets. Cool. All right, let's go back to the command line. So Mason has doctl installed, and we used that to spin up a DigitalOcean Kubernetes cluster; now we're going to switch to using kubectl, the Kubernetes command line tool. Mason, if you just type kubectl, let's look at some of the options for that executable. There's a lot. There's a lot — cool. So you can clear your screen out. Whenever I spin up a cluster, the first thing I do is check whether I can connect to it, and a great way of doing that is kubectl get nodes — it should show us a list of nodes like we saw in the cloud console. It helps if you type it right. All right, so we're just making sure we can access our cluster from the command line. It's usually — not usually — hmm. Oh, there it goes. Excellent — might be my internet today. That's okay. So we see the same information we saw in the cloud console: three virtual machines running our Kubernetes worker nodes, the nodes are ready, and the version of Kubernetes we're running is 1.21.3. Mason, will you do kubectl get pods? Excellent. If you remember the dashboard, we saw there were DaemonSets and Deployments and ReplicaSets and Pods. In Kubernetes there are namespaces, which are ways to organize pods and keep them separate from one another, and if you don't specify a namespace, kubectl just checks the default namespace. We don't have anything running in there, because nothing gets installed in that namespace by default with DigitalOcean Kubernetes. So run the same command, Mason, with -A — that's for all namespaces — and we'll probably see some pods. There's a lot.
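The connectivity checks described here, roughly as run in the talk:

```bash
# Confirm kubectl can reach the new cluster and list its worker nodes
kubectl get nodes

# The default namespace is empty on a fresh DigitalOcean Kubernetes cluster...
kubectl get pods

# ...but the system workloads show up when you list every namespace
kubectl get pods -A
```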
Let's see — I'm getting a little bit of lag, so I can't see your list yet. That's weird, I see it. The other screen is frozen; I'm going to pop mine back up, and then maybe we go back to yours — huh, that's weird. All right, we're having some technical difficulties; let's add that to the stream. Do you see your screen, Mason? I see it, but not in this view — let me try something real quick. Oh, there we go — now I can see your screen and all of those pods. Oh good. I'm going to try to quickly change internet connections; we'll see if it works. Okay — waiting for Mason; I still see a screen, so we'll look at some questions while we're waiting. Jaren says: hi, I have about five to ten static websites, and I usually host them on a virtual private server instance running a Linux distribution — do you think it would be better to use Kubernetes, both performance-wise and financially? That's a good question, and I think the answer depends on how much traffic your static websites are getting. If you have a lot of spiky traffic, where you get a lot of users all at once and then it drops off, Kubernetes is good for auto-scaling those sorts of things, but it may not be the best choice for you — it's something you'd want to think about. With DigitalOcean we tell you the price up front, so if you're using our Droplets to host your websites, you'll have the data about how much that costs, and if you go to the Kubernetes tab and look at spinning up a cluster, you get estimates of how much that will cost per month. I would say sometimes Kubernetes is too complex for a use case, so if it's just static websites with a fairly even stream of users, your current solution might be right — just take a look. Okay — Mason, you're back, and you seem a little clearer to me. Yes, I moved over to the phone Wi-Fi because the other connection is being difficult, but anyway, we shall continue. That's okay. All right, I'm going to show our goals. We're about 35 minutes into our live stream, and we did the first two goals — Mason took care of those: we containerized a Python application and we pushed the image of that container to a registry, the DigitalOcean Container Registry. Our next goal was to set up a Kubernetes cluster, and we've done that — the way I always check whether I'm connected to a cluster is to run kubectl get nodes — and then we looked at the pods. So now we're going on to our next step, which is to deploy at least three replicas of the Python app into the cluster. Mason, since I'm sharing my screen, I'll just do this: I'm going to go to the Kubernetes documentation real fast. Let me open a new tab and search for "kubernetes deployment yaml." All right, we have some documentation from Mirantis, but I'm looking for the official Kubernetes documentation. If you're doing an introduction-to-Kubernetes tutorial, often the person teaching will say something like "pods are the building blocks of Kubernetes," and we saw all of those pods running in all of the namespaces — those pods are the workloads that come with Kubernetes and that DigitalOcean installs to make the managed Kubernetes service work. Those pods are already running, and now we want pods of Mason's application running. Instead of deploying a bare pod, we're going to use a Kubernetes resource called a Deployment, and the reason is that a Deployment gives us the option to set how many replicas we want deployed — it's a much nicer way, and it's the way people actually deploy applications
and workloads in Kubernetes clusters in production. This is the Kubernetes documentation — it's giving us a ton of information, and what I'm looking for is exactly this: a YAML manifest. The way I prefer to deploy things to Kubernetes is by taking YAML, filling in the parts that need to change, and then deploying it through kubectl — and I'll show you how to do that. So this is a YAML manifest for a Deployment. It tells us the API version of the Kubernetes resource — Deployments are part of the apps/v1 API. We give the kind of resource, which is Deployment (if it were a pod, it would say Pod here). We've got some metadata: the name of this thing is nginx-deployment, and then we add a label to it called nginx. Then in the spec we say: we want three replicas, we want the nginx label on them, and going down to the container we give the container name, the image — this example is just a straightforward nginx server — and the container port. So I'm going to send this to Mason and have him put it in a text editor. Let me remove my screen, we'll add Mason, and I'll send it through the private chat. Sounds good. Okay, so we just need to copy-pasta — oh good, that actually worked. Excellent. Do we want to change the name in the metadata here to something like python-deployment? Let's do that, yep — python-deployment. And then the label, the matchLabels — should this be the same? Yeah, let's keep it all consistent: anything that was nginx, except for the image name, let's call python-deployment. Okay, python-deployment. And I don't need a version tag here, do I? No — this is where we go back to the container registry and get the address for the image; we named it python-k8s in the container registry. Does it have to be the same name here as in the container registry? It does, because that's actually the URL where Kubernetes is going to grab the image from and then run it in the cluster, so we want to be super specific about that. Actually, can you go back to the Container Registry view in the cloud console? Yes, here we go. Beautiful — at the top of this page it shows our container registry, sammy, and then we add the image name to the end, like that. Yes, let's do that. And there's one more thing at the top of the file, in the metadata — the app name — let's also change that to python. This one right here? All right, beautiful. Going back to one of the earlier questions — I can't remember who asked — about whether you can specify how many CPU cores you're using: we're not doing that now, but this YAML manifest is where you could set the resource requests and limits, so as you get more advanced in Kubernetes, that's something you'd want to do here. Beautiful — so we now have this Deployment called python-deployment, and we're asking Kubernetes to spin up three replicas, so three pods containing this application. And if we scroll down, I think there was a port number — I just want to make sure it matches. I changed that to 80.
Let's change that to 8080. Beautiful — and sometimes I don't get this right the first time, but that's okay. Let's exit, save it, and then try to deploy it to Kubernetes. All right, what's the name of that file? Excellent. The way you deploy a YAML manifest that lives on your local machine is kubectl create -f and then the name of the file. If we wanted this in a namespace other than default, we would tell Kubernetes to put it in that namespace — we could also do that in the YAML manifest — but we're just going to deploy it to the default namespace. So hit enter and let's see what happens. Okay, it's been created. Excellent — let's do kubectl get pods and see if those pods are running. Ooh, ErrImagePull. So what I want you to do is grab the name of one of those pods and keep it on your clipboard. Those pods aren't ready — there's an ErrImagePull error — so we're going to look into the pod and at the events that occurred to see if we can debug what's going on. Do kubectl describe pod and pass it the name, and let's see what information we get. Excellent: it's not able to get the image. It says unexpected status, 401 Unauthorized — so your container image doesn't have permission to be pulled by that Kubernetes cluster. I believe if you go to Settings — I can't remember if this is under Kubernetes or under Container Registry — we can enable that, so we'll just have to poke around a little. Cluster info... Container Registry — yeah, let's look there; I don't remember either. There we are: Kubernetes integration. Excellent — click on that, click Save, and we're updated. So go back to your command line; we're going to delete that deployment and then deploy it again. But I can't remember the name of the deployment — if you do kubectl get deploy, it'll show us all of the deployments in the default namespace: python-deployment. Okay, so then do kubectl delete deploy and give it the name of the deployment. Sometimes I just don't even listen to myself — spelling is hard, especially when you have 120 people watching you type. Okay, and then do we just do a create again? Same way. Let's see what we get — and then we do get pods. I'm learning. Okay, it looks like they're getting there; we're already further along than we were last time. Excellent. So our old pods, which had the ErrImagePull error, are being deleted, and our new pods are being created. If you run that command again, or add -w, you can watch it. We can do a -w — still waiting on the other two. Excellent. [Laughter] This is what I like to see — Caesar says "me too, I am learning." Kyle asks: how do you reference an image that's outside of DigitalOcean and is not public? If you're not using the DigitalOcean Container Registry and the image is private, you'll have to read the documentation for that registry specifically to give your Kubernetes cluster permission to pull it — it might require more advanced configuration, like setting up secrets or some other way for Kubernetes to authenticate (sorry, I just saw my cat jump up behind me). So if you're not using the DigitalOcean Container Registry, search for "private image kubernetes" and look at the documentation there.
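Putting the whole sequence together, the Deployment manifest and commands described above look roughly like the sketch below. The resource names, the registry path registry.digitalocean.com/sammy/python-k8s, and port 8080 come from the talk; the exact layout is a reconstruction, and the commented resources block is only an illustration of where requests and limits would go.

```bash
cat <<'EOF' > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-deployment
  labels:
    app: python-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: python-deployment
  template:
    metadata:
      labels:
        app: python-deployment
    spec:
      containers:
      - name: python-k8s
        image: registry.digitalocean.com/sammy/python-k8s
        ports:
        - containerPort: 8080
        # resources:            # where requests/limits would go, per the discussion above
        #   requests:
        #     cpu: 100m
        #     memory: 128Mi
        #   limits:
        #     cpu: 500m
        #     memory: 256Mi
EOF

kubectl create -f deployment.yaml
kubectl get pods                      # showed ErrImagePull until the registry was linked to the cluster
kubectl describe pod <pod-name>       # the Events section surfaced the 401 Unauthorized

# After enabling the DOCR/Kubernetes integration in the control panel:
kubectl delete deploy python-deployment
kubectl create -f deployment.yaml
kubectl get pods -w                   # watch the three replicas come up
```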
All right, so the python-deployment pods are running, and I would love to make sure it's actually working. How did we access your application last time, when we ran it locally? We ran it and then looked at localhost. So — this can be a challenging part of Kubernetes. All of these pods are running inside our Kubernetes cluster, on IP addresses internal to the cluster, and they aren't available to us out on the internet. We've got about ten minutes left, so I'm curious, Mason, what's your take: should we spin up a BusyBox pod and do a curl request to see the output, or should we try to expose these to the internet? I would say let's expose them. Okay, cool — but I have no idea what that requires. Well, let me show our goals again, because we've actually gotten to our stretch goal, which is exciting: we've deployed at least three replicas of the Python app into the cluster, and our stretch goal is to expose the application to the internet so we can actually see the hello world message. The way you do that in Kubernetes is through something called a Service. Oops, I didn't mean to remove that — I'm going back to the Kubernetes documentation, I'm already there, and I'm going to search for Service. There's lots of information here, and you get one of these very technical definitions: a Service is an abstract way to expose an application running on a set of pods as a network service. Kubernetes has different Service types — let's see if they list them here. It looks like this might not be the place — oh no, here they are. Okay, so there are the Service types. There's type NodePort, where on the virtual machine you say, "hey, I want to open port number whatever-you-decide," and you keep that port open. Most people don't use NodePort, because part of the benefit of Kubernetes is that your virtual machines get spun up and pulled down really frequently, so if you hard-code something it's not going to last forever and you're probably going to get errors — so we don't normally use NodePort services. Where is that section... there's type ExternalName, which is where you connect a domain name to a Service, so that someone can go to, say, kim-and-mason-techtalk.com and get the response from the actual container. What we're going to use for this is a LoadBalancer service: if you're on a cloud provider, this service actually spins up a load balancer from your cloud provider and gives you an external IP. Then someone on the internet can request that IP address, the cloud provider's load balancer accepts the request, and it load-balances it across your pods. So we're going to do something like this. Mason, I'm going to send you this and have you do the same thing — copy-pasta and then fill in the blanks or make changes — sharing it in the private chat with Mason. Okay — and do I add this to the deployment file, or do I create a new file? Great question: let's create a new file. You can set up YAML files with multiple manifests in a single file, but for keeping-things-straight-in-my-head purposes, I would call this one service.yaml or something like that. Okay, we have it here. Excellent. Are we changing everything in here? Let's call it py-service, just so we
know that it's the Service and that it's something we created. And then the app label is python-deployment, right? Yep — and that's really important: the label you put on your app has to match here in the Service, because that's what the Service looks for when it load-balances traffic. Then we've got the selector, and as we get to the spec, we're on to ports. I would just get rid of the protocol line for now — I think it'll set that automatically. Okay, so delete that, but leave the little dash there? Yes. Let's see — ah yes, YAML, let me look. Ports, yep, then an indentation and a dash — that looks good. Port 8080; targetPort can also be 8080. clusterIP — let's get rid of that. And then the type is — oh, I might have done this wrong. I totally did this wrong; I told you the wrong service type. That's okay. I said LoadBalancer, and we're actually going to use ClusterIP, which does what I described — so under type, change it to ClusterIP; the I and the P are capitalized. And then the status — you can delete everything from status on. Cool. All right, then we'll go through the same process of using kubectl create with -f and the file, then we'll check on the Service, and if we didn't do it perfectly the first time, we'll debug and figure out what's going on. I thought everyone was making fun of me for yawning — yes, I'm a little tired this morning, I don't know why; I had a couple of yawns, so thanks, everyone, for noticing. Apparently it's kubectl create -f service.yaml. Yep — okay, let's see what happens. I'm also still mildly frustrated at my internet. That's okay. So we got some information back: it says service py-service created. Let's double-check — kubectl get service should show us all the services in our namespace. Beautiful — okay, so we have kubernetes, and we have the cluster IP. Okay, I definitely told you this wrong. What I'm looking for — and the reason I know I'm wrong — is that we want an external IP created by a DigitalOcean load balancer, and there's no external IP being created, so I definitely messed this up. Let's go back to the YAML and change the type of the service from ClusterIP to LoadBalancer. I got in my head and got insecure. Okay — and then do we just do a kubectl delete service py-service? You could actually do a kubectl apply -f, and it'll patch — it checks for any changes — so apply -f service.yaml. Okay, so what do we do now — does it say it's been applied or created? I don't know if that's an error; it's more of a warning, about a missing annotation. It's basically saying, "hey, we would like to see an annotation on your YAML whenever you use the apply command, and we don't see one, so please do that next time." Okay, kubectl, we'll do that next time for sure. This is what I wanted to see: under the services we've got running, we have py-service, and under external IP it says pending. What's happening is that DigitalOcean is spinning up a load balancer, and once it's done we'll get an external IP, and then hopefully we'll be able to see the output from your pods.
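The Service manifest, as corrected to type LoadBalancer, looks roughly like this; the name py-service is approximate, and the selector has to match the Deployment's app label.

```bash
cat <<'EOF' > service.yaml
apiVersion: v1
kind: Service
metadata:
  name: py-service
spec:
  type: LoadBalancer          # provisions a DigitalOcean Load Balancer with a public IP
  selector:
    app: python-deployment    # must match the label on the Deployment's pods
  ports:
  - port: 8080                # port exposed by the load balancer
    targetPort: 8080          # port the Flask container listens on
EOF

kubectl apply -f service.yaml
kubectl get service py-service -w     # EXTERNAL-IP shows <pending> until the load balancer is ready
```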
While we're waiting — this can take a few minutes — let's review our goals and see if we got there, and then we'll take some questions or look at comments in the chat. Going back to our goals: we actually got to our stretch goal, which we just did right now. We're exposing our application to the internet via a Service of type LoadBalancer, and we're just waiting for that to spin up — lots of magic is happening under the hood that we don't have to do ourselves. I'm going to look at some of the comments. Aparis Dev says: Mason says "kube cuddle," which means you really like Kubernetes. Between the yawning and my pronunciations today, I swear... oh, that's a total compliment. The pronunciation of that command line tool is hotly debated — I say "kube control," Mason says "kube cuddle" — and I'm not going to fight about it. Okay, Gudovich's first question was: what is the cost of this load balancer plus three nodes? Mason, go back to the Kubernetes tab in the cloud console and let's see the estimated cost per month. That's a great question; I'm trying to figure out where that's shown — 30 bucks, right here. Okay, so at DigitalOcean we tell you what your estimated monthly spend is, so if you spin something up and have it running, just look in the cloud console and you'll see it. It does say it doesn't include load balancer or block storage costs, which is interesting — a basic load balancer would be 10 bucks, so this would be about 40 dollars a month to start. It looks like — I feel like this -w doesn't actually update, or it's just taking a long time. It can take a long time; looks like it's still pending. Cool. All right, well, Mason, tell me a few things about what you learned. I just learned how to do Kubernetes stuff — I haven't touched Kubernetes probably since around 2017.
At my previous job, when I was a site reliability engineer, we did a lot of Mesos, and then we were moving into the Nomad space, so I'm familiar with container orchestrators, but Kubernetes wasn't the right option for us there — specifically because we had a very complicated network setup that was already implemented, and Kubernetes wants to be everything: it wants to be your network stack, it wants to be your database, it wants to be your container manager. Just dropping it in was going to cause a lot of re-architecture on our part on the network side, so we decided it wasn't worth it for us. I do know they have since migrated to Kubernetes — it only took them two or three years after I left; I've been gone a while now — and they're also still using Nomad, which is interesting. We've got some good questions. Stefan asks: would you connect a DigitalOcean managed MongoDB instance with Kubernetes? What you can do is have your Kubernetes cluster and your managed database running in the same VPC, and then your applications just use the connection string to your MongoDB — so technically it's outside the cluster, but you can connect the two. Diego asks: do we always need a load balancer with Kubernetes? If you want anyone to be able to access the things in your cluster — which you probably do — load balancers are helpful, and then there are Ingresses, which are a more advanced way to accept traffic from the internet and route it to different pods in your Kubernetes cluster. Good questions. Thank you, Z — we're glad to have you on Twitch; we're back on Twitch and we're super excited, so hello to all the Twitch people. Are we still pending on our load balancer external IP? No, we just got it. All right, let's see — are we going to get to see the hello world from Mason's Python application? We did — yay! And it doesn't say the name of the host... it does — "hello from" — yeah, my bad. So it's actually showing the name of the container that we saw when we ran kubectl get pods, so if we refresh this, I assume we'll see the container name change, because there are three pods. It's also interesting to see how it chooses to route: I just refreshed and nothing changed, but it changed now — so it doesn't always change, but those are our three pods. Proof that we have three replicas of that container running and that the load balancer is spreading traffic across them request by request. Awesome. This is a good question — I like this one, and we haven't answered it yet. Michael asks: can you have zero containers running, to reduce costs, if I go all night without a connection? That is a good question. It's impossible to run a Kubernetes cluster without some containers: when Mason ran kubectl get pods across all the namespaces, there were all these containers running that are required for a Kubernetes cluster to function, so there will always be some containers running. But you can set up autoscaling on your pods and scale down when there's no traffic — that's one of the benefits of Kubernetes: you can set up autoscaling on your virtual machines and on your pods and say, "hey, when I'm getting no traffic, scale down to just one replica, or zero replicas," and
then scale up from there. It's also good to note that you're not charged by the number of pods — you're charged by the number of nodes. Remember, a pod is a deployment of your container, whereas a node is, roughly, the underlying Droplet that runs it. Right now we have three nodes running everything, so we're paying for those nodes. Keep that in mind: just because you scale your pods down doesn't mean you won't still be charged — you'd have to scale your nodes down too. I think the minimum requirement is three — is that a Kubernetes thing? No, I think it's actually just one; I've had DigitalOcean Kubernetes clusters running just two nodes. If you're running a production cluster you want three, for safety reasons, but I think you can set the minimum to two or fewer on DigitalOcean. Yeah — and when you create a Kubernetes cluster, which we didn't really see because we did it through the command line and it just used defaults since we only gave it a name, you can go down to one node in a basic node pool. That's very much just a basic development plan: one gig of RAM and one CPU on one node, so you'll be able to run very little on it, but if you're playing around with development and don't want to do it on your own local machine, it works. You should never run prod code on it — bad idea; it may not be available all the time. So you can select the node count through the user interface, and there's also a node count flag in doctl — I have some saved command-line commands for spinning up clusters that get really long, and they're just a representation of those same options. But yeah, three nodes is the default in probably any managed Kubernetes service you use. All right, a couple more questions. Emre asks: can you talk about pods that are in a CrashLoopBackOff? Sometimes you spin up a pod, or you know something's going wrong — maybe you've gotten a page and there's something going on with a pod — and it's in a CrashLoopBackOff. A lot of times that means something is wrong with the application: Kubernetes tries to create the container, something in the application code causes it to stop, and you get stuck in that loop. I would look in the pod logs, if you can, to see what's going on.
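A short sketch of the debugging commands implied here, with a placeholder pod name:

```bash
# Find the failing pod, then read its logs and recent events
kubectl get pods
kubectl logs <pod-name>                # output from the current container
kubectl logs <pod-name> --previous     # output from the last crashed container, if any
kubectl describe pod <pod-name>        # Events section shows restarts and back-off timing
```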
And Turkey asks: can we access this stream later? Yes — this will be on our YouTube channel, and it'll stay there; I don't remember how long Twitch streams stay up, so it'll only be on Twitch temporarily. All right, I think now is a good time to stop. We have some other good questions, and I encourage you to Google them — or DuckDuckGo them, or whatever your search engine of choice is — but let's look back one more time at our goals. In the last hour we watched Mason containerize a Python application — I guess he came with that already done, but he explained it to us. We pushed the image of that application to a container registry; that was step one. Then we set up a Kubernetes cluster using DigitalOcean Managed Kubernetes, then we used a Kubernetes Deployment to create three replicas of that Python app in the cluster, and then we exposed the application to the internet using a LoadBalancer service type in Kubernetes. I think that's the fastest way to get something deployed in Kubernetes. Kubernetes is a hugely complicated system, and that's not a production-ready setup, but if you're curious about Kubernetes, those are good places to start. Final words, Mason? No, I think I'm good — thanks, everyone, for joining today. This was a lot of fun; I haven't gotten to play with Kubernetes in a long time and I've been wanting to, so I'm glad we finally used this Tech Talk as an excuse. Oh — I just saw your cat doing zoomies in the background; he's got a stick and he's going nuts. I definitely feel like the cat's the star of the show today. He's adorable. If you want more information about us, you can find Mason Egger on Twitter — is that your handle for everything? Pretty much. And I'm Kim Schles, same thing on GitHub and Twitter, and we're both on the DigitalOcean Community. Thanks so much, everybody — we'll see you next time.
Info
Channel: DigitalOcean
Views: 23,406
Id: cJKdo-glRD0
Length: 67min 30sec (4050 seconds)
Published: Wed Oct 13 2021