Deploy Django into Production with Kubernetes, Docker, & GitHub Actions: Complete Tutorial Series

Captions
Hey, welcome to Django and Kubernetes. In this one I'm going to show you, step by step, how to deploy a Django project into production onto a Kubernetes cluster. I partnered with DigitalOcean on this one to use their Kubernetes service to radically simplify the process. Part of the goal is that we want our Django project to run on a robust service; the other part is that we want to isolate our Django project into what's called a container, a Docker container, run multiple instances of that container, and have something load-balance traffic across those instances. That is what Kubernetes does, and it does it really, really well, among many other things. Say we want to roll out a new version of our Django project and test it somewhere other than our primary domain: Kubernetes can do that, really simply and quickly, and then we can promote that version into being our primary service, which is fantastic, and we can do all of it within Kubernetes without relying on other services. That also means I can deploy a bunch of microservices as well; say you have an AI application that only your Django project should have access to, you can do that with Kubernetes too. It's really cool. We're not going to go into that much depth, but we do want to see exactly how to build a real project using Django, Docker, and Kubernetes. If you're already familiar with Docker and Docker Compose, you can probably skip to part 14, where we jump right into Kubernetes; otherwise we'll go step by step in getting all of this set up, including on our local environment, getting Docker, the container, Docker Compose, all of that up and running, so we can be as successful as possible in this series. Kubernetes is vast, there's a lot you can do with it, so we're really only scratching the surface here, but we're doing it in a really practical way by deploying a Django project. So if that excites you like it does me, let's jump right in. Thanks for watching.

All right, let's talk about the requirements and recommendations to get things right in this project. First and foremost we're using Django, which is written in the Python programming language. You don't have to download Django at this point; you just need Python installed. Depending on your system, go to python.org downloads and look for your platform. In my case I wouldn't just click "Download Python 3.10"; you could probably use 3.10, but I recommend at least 3.9, potentially even 3.8. I'm on a Mac, so I click on macOS and look for the 3.9 version that has an installer. I'm on an Intel machine, as I've mentioned before, which you can check in "About This Mac" to see whether it's Intel or Apple silicon; if it's Apple silicon, just use the universal installer, which should work for both. The general idea is that there's an installer. If you're on Windows it's very similar: look for the Windows installer for your machine, 64-bit, 32-bit, and so on. One thing we want to do on Windows is choose where your version of Python gets stored during the installation process. Typically speaking,
if you install Python 3.9, you store it in your C drive at the root, at C:\Python39, so you can execute it from there. This will become incredibly important when we create a virtual environment. If you're already comfortable with virtual environments in Python, by all means skip that recommendation. On a Mac or Linux, once you install Python 3.9 you can just run python3.9 and all of that. The Kubernetes command-line tool is something we'll install; I have two separate videos, for Mac and for Windows, on how to install it. The Linux guide on the Kubernetes site is actually really easy to follow, and I assume that if you're using Linux you probably already have some idea of following guides like that. So: if you're on macOS, install Homebrew; if you're on Windows, install Chocolatey. Those are the tools we'll use to install kubectl. Next, I recommend you install Docker Desktop. I'll talk about this more in general later, but install it on your local machine so you can actually run Docker commands, because we will be using Docker throughout. I also recommend that if you're really unfamiliar with Docker and not ready to jump straight into the Docker parts, check out my series called Docker & Docker Compose, which goes into a lot more depth than we will here; I'll remind you about it when we get there as well. Next, eventually we'll install doctl, the DigitalOcean command-line tool, which we'll use for a number of things in the future, so be sure to install that too at some point. The only thing you really need installed right now is Python 3.9 on your local machine, because there's a whole section, for Mac and for Windows, on kubectl itself that also covers Docker Desktop. If you're not familiar with Python or Django at all, I recommend 30 Days of Python if you're new to Python, or Try Django 3.2 if you've never done anything in Django; you could probably do up to the first 30 lessons or so, or do it concurrently with this series. What we're doing here is really fundamental, basic Django: preparing a really simple Django project to get it into production in a meaningful manner. That's really it; there won't be a whole lot of in-depth Django in this series. So that's pretty much it for the recommendations and requirements. Python is required because we're building a Python application; if you weren't building a Python application, I'd probably just recommend our Docker & Docker Compose series and then installing Kubernetes. But we are building a Python application, and I want to show you exactly how to do that, so regardless of your background you'll be able to deploy a robust Django application into production, even if you only know a little bit about Django, which I think is really, really cool. If you have any questions on this, please let me know in the comments. I'm sure I will reiterate a lot of these things throughout the series, because some of them are necessary and I tend to repeat myself on the basics, so forgive me if that causes a "hey, you already said this" in your mind. Anyway, now let's take a look at how we
can actually install kubectl, the Kubernetes command-line tool, on our local machine. We're going to install it using Homebrew. If you have Homebrew installed, the command is simple: either brew install kubernetes-cli or brew install kubectl. kubectl is the actual command we'll be using inside the terminal, VS Code, wherever, to manage our Kubernetes cluster. Now, I will mention that what we're installing here is not a Kubernetes cluster. We're not creating a cluster on our local machine; instead we're installing a tool to manage the cluster itself. That means we don't have to have Kubernetes running locally, and in fact I recommend that as you're learning you do not run it locally: it can muddy up your machine quite a bit, and it adds a layer of installation complexity we just don't want to go through. We're going to be using DigitalOcean's services for all of that. If you have kubectl already installed and you already have Docker installed, feel free to skip the rest of this video, but I want to mention that you should probably also install Docker Desktop on your local machine. Now, I have an Intel chip currently, though at some point I'll probably have an Apple chip; to tell the difference between the two, click the Apple menu, go to "About This Mac", and right next to "Processor" it will say Apple or Intel, depending on what you have. A really simple way to check. Installing Docker means you can run the docker ps command; assuming Docker is installed, you'll see a listing like this, or just the top header line, since you won't necessarily see things running. The reason I recommend installing Docker is that managing our Docker containers is what Kubernetes does: Kubernetes is a container orchestration tool, so having Docker installed and understanding at least some of the fundamentals of Docker is actually important. I'm going to go through all of the fundamentals needed for this project to work with Kubernetes; it's pretty straightforward, not really that complicated, because in our case what we're using Docker for is really just saying "hey, this is what needs to be installed on the machine that's going to run our web application": the installation requirements beyond the web application itself. If you need Python installed, for example, you need to tell Docker that; that's a good example. So let's look at the install command. I run brew install kubectl; it might take a moment, and if you don't have Homebrew installed, just go to brew.sh and copy the command there, which installs Homebrew for you. Notice that I get kubernetes-cli 1.23.3; 1.23 is the version we need. In my case it says it's not linked, and the only reason for that is that I have multiple users on this machine. So I'll clear this out and run kubectl version --client, hit enter, and there we go: the version is reported and there's no error. If I accidentally misspell it, say kubectll version client, it gives me a "command not found" error, which is obviously not what I want to see; I want to see the version output once it's installed.
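Here's the whole macOS flow in one place, as a minimal sketch of the commands just described:

    # install kubectl with Homebrew (either formula name works)
    brew install kubectl          # or: brew install kubernetes-cli
    # verify the client is on your PATH; prints something like 1.23.x
    kubectl version --client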
Now, a really cool pro tip you might want is k as a shortcut, an alias, for kubectl. To do this you really just add it to ~/.zshrc; notice I have Z shell running here, which is the default on Macs now. So it's sudo nano ~/.zshrc, enter your superuser password, and notice I have an alias in there mapping k to kubectl. You'll see that often, especially on Macs: k as the alias for kubectl. By all means set that alias if you like; obviously you don't have to, and I'm probably not going to mention it again or even use k, so as not to confuse anybody, but it's a nice little shortcut I like to use on my Mac and Linux machines. Next up, we're eventually going to configure VS Code: once we deploy our Kubernetes cluster, we'll configure VS Code so we can run things like kubectl get pods against it, and instead of it not connecting, I'll be able to do it with different projects. I'll be able to have multiple Kubernetes clusters running and use VS Code to manage switching between those projects. It's actually pretty straightforward; I just mention it now as a bit of a teaser for what we'll do a little later. But before we do any of that, we need our fundamental project running: we need the Django project actually running, and then we need to start mapping it to Docker-based services, so we can see how to develop locally before we bring it into production on Kubernetes.

All right, now let's install kubectl on Windows: the Kubernetes command-line tool that allows us to control our Kubernetes clusters. In our case the cluster is not going to be on our local machine; it's going to be on a remote machine, and we'll talk about that a little more in just a moment. We're going to install it using Chocolatey, a package management (or software management) tool that really just lets you run commands like choco install kubernetes-cli. If you already have Chocolatey installed, just run that and you'll have kubectl installed, and you can skip the remainder of this video. I also recommend you have Docker Desktop installed. In the past you had to have Windows 10 Pro; it looks like Windows 10 Home now works with Docker Desktop, though that has not been my experience. So if you have Windows 10 Home and you are able to install Docker Desktop, then open a PowerShell and run something like docker ps and see output like this, that's great: that means Docker is actually running and installed. Now, we don't have to have Docker installed on our machine to use Kubernetes, as I just mentioned, but in this series I'm going to be using Docker to show you how to actually build containers, because Kubernetes manages a bunch of containers, so we still need Docker Desktop and Docker when we use Kubernetes. That might not always be clear when you hear "Kubernetes", but it's definitely the case; we'll see a lot more of it as we go through the series, so just take my word for it for now. So first we want to install the Chocolatey software: go to chocolatey.org, or just search for "chocolatey", and install it.
Notice there are a few different ways to install Chocolatey, a generic way and an individual way; most likely you want the individual way, because you're most likely working on your own machine, and then you run through the installation process. I think it's fairly straightforward: it's really just running the one command they give you inside PowerShell. Once you have it installed, open a PowerShell and run it as an admin; when you run it as an admin it will ask you to verify that, and then you should be able to run choco and see that Chocolatey is actually installed. Then, like I mentioned in the beginning, run choco install kubernetes-cli. In my case I already have it installed; notice that I have kubectl 1.23. You want at least 1.23, and if you don't have 1.23, you're going to want to upgrade. To verify that you have kubectl installed, you just run kubectl version --client, which shows the version you have, assuming you spelled kubectl correctly. If you misspell it, the first command nicely shows that it's not recognized; but if you type it correctly and you still see that message rather than the version output, that of course means you didn't install it correctly. It's certainly possible that when you run choco you might have to restart your system; that probably won't happen, but I can't rule it out every time. Anyway, now that we've got this, we can do pretty much anything we need to do with kubectl, such as kubectl get pods. In my case that's not actually going to work, because I don't actually have a Kubernetes cluster set up, and your case will probably be the same. Going forward, we will configure a Kubernetes cluster at some point in this series and then use kubectl to make that happen. Now, I'm going to be running all of this on a Mac. If you're on Linux the commands will be almost identical, and if you're on Windows they should be basically the same things, but unfortunately not exactly: PowerShell (on Windows) and the terminal (on Mac and Linux) have minor differences in commands. They are minor, just trust me. We're also going to be using VS Code, which helps make the command differences pretty much go away. The nice thing about kubectl is that it is cross-platform, so most kubectl commands are identical. So if there are minor changes I miss, and you either don't know the answer or do know the answer, just ask a question or help somebody else figure it out, because we're not going to record two versions of this for Windows and Mac; kubectl is almost identical with the exception of the installation process, which is why this video exists. If you have any questions on this, let me know in the comments. And of course, if you're having any issues with Docker, you might have to skip some of those sections, or you could always provision an actual virtual machine, a Linux virtual machine, install Kubernetes and Docker on that, and then run all of these commands through that Linux virtual machine.
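For reference, the whole Windows flow looks roughly like this; a sketch, run from a PowerShell opened as administrator:

    choco install kubernetes-cli
    kubectl version --client     # should report at least 1.23
    kubectl get pods             # fails for now; no cluster is configured yet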
So let me know if you want to see how to create a remote Linux virtual machine so you can run those commands from Windows that way. There is also the Linux subsystem on Windows, which is something else you could use, but again, I didn't want to make everything too complicated; I just wanted to mention all the different ways you could use kubectl on Windows, and if I missed anything, let me know in the comments. Now we can keep going and work toward provisioning our cluster.

All right, now we're going to jump into creating a virtual environment. If you're on Windows, hopefully your Python 3.9 is in the location I mentioned in the recommended installations portion; if it's not, navigate to where it is so you can actually execute Python 3.9, and then run it with -V like I have. If you're on a Mac or Linux and you have 3.9 installed, you should be able to just run python3.9 -V and see that Python 3.9 is there, whereas if I ask for some version I don't have, it gives an error (and on an actual Mac, the Windows-style call will error too; on Windows you'd call the full path and add -V just like that). Next, I also want to make sure that I have Docker installed, which in my case I do, and then I run kubectl version --client. These just verify that I've got the fundamental things installed that I need to go; if you don't have them, go back and watch what I did before. Okay, so now I'm creating a virtual environment for my project. I want to store the entire project inside a folder called django-k8s, inside my dev folder (a folder I created a while ago). So I run mkdir -p dev/django-k8s; if -p doesn't work on Windows, just do mkdir dev and then mkdir dev/django-k8s. In other words, I'm making a dev folder and then another folder called django-k8s inside it; on Mac or Linux, mkdir -p dev/django-k8s does it in one go, because -p creates all parent directories if they do not exist. So now I cd in here and list things out, and I have nothing in here currently; not that big of a surprise. I'm going to create one virtual environment for this entire project. I don't actually have to do it this way: if I add new services to my Kubernetes cluster later, well, I'm only creating a Django service in this case. That might not make a whole lot of sense right now, but the general idea is that I'm creating a single virtual environment for this whole project. So I run python3.9 (or whichever path executes this version of Python) -m venv venv. All this does is initialize a virtual environment in a folder called venv. Me personally, I don't always do it this way, but I wanted to make it as simple and straightforward as possible for everyone. So now, inside this folder, all I have is the venv folder. If you do it the way I've done in the past, with a period (creating the environment right in the current directory), that creates a different kind of setup, with those other folders sitting at the top level as well; having everything in the venv folder makes it a little simpler, because all of my virtual environment stuff is inside there. So I'm going to remove those extra ones real quick.
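Here's that setup end to end, a sketch assuming a Mac or Linux shell, with the Windows variant in a comment:

    mkdir -p dev/django-k8s    # -p creates missing parent directories
    cd dev/django-k8s
    python3.9 -m venv venv     # on Windows: C:\Python39\python.exe -m venv venv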
That's really for those of you who have followed me in the past or have seen a number of my projects before. So there we go: now I just have that venv folder. To activate it on Mac and Linux it's source venv/bin/activate, which gets our virtual environment going just like that; if you're on Windows, of course, it's .\venv\Scripts\activate, and that will activate it for you as well. You'll know it's activated by seeing the (venv) at the front of the prompt. Also, if I now do pip freeze, I should see nothing installed; and if I just type python -V, regardless of what system you're on, assuming the virtual environment is activated, it should give you the correct version of Python. In other words, on Windows you don't have to type C:\Python39\python.exe anymore, and on Mac and Linux you don't have to run python3.9; you still can, you just don't need to. Okay, so that's pretty much it: that is now our virtual environment, really straightforward. The one thing I like to do with virtual environments is upgrade pip as soon as I initialize one. You don't always have to do this; I still hesitate a little about updating pip, because about eight years ago I updated it one time and it just broke everything. None of that has happened since, and it often is a good idea to upgrade pip on a regular basis. Pip can upgrade itself; how cool is that. Okay, so now we're going to create our Django project: we'll install some requirements and create it. Now, we could absolutely add this entire folder to a text editor, something like VS Code; I'm just not going to do that yet. If you're already really comfortable with VS Code, by all means go ahead. I'm going to stay with the command line just a little bit longer so we can see how to do a number of things here. For those of you who are already well familiar with Django: these are the requirements, go ahead and install them, then run django-admin startproject django_k8s . inside that web folder, so it looks like this, and also add the project to VS Code; that's for those of you who are a little more advanced. If you're not, let's take a look at how to do this step by step, nice and slowly.

All right, now we're going to install our necessary requirements and start our Django project, but unfortunately I actually closed down my terminal and I don't have the virtual environment activated; I'm doing this on purpose for those of you who are not incredibly familiar with this. What we want to do is navigate to where our Django project is, in my case ~/dev/django-k8s, and there's my actual venv folder. The tilde gives me my user's home folder; it's a shortcut for it, that's it, and that's true on Linux, Mac, and Windows; the slash might just lean in a different direction, especially if you're using PowerShell, though it should work both ways. In any case, we navigate into this folder and I need to activate this virtual environment. As a refresher: to activate on an actual Mac or Linux it's source venv/bin/activate, and to activate on an actual Windows it's .\venv\Scripts\activate, just like that, and then you'll have the virtual environment again.
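As a quick reference, the activation and pip upgrade commands, a sketch:

    # mac / linux
    source venv/bin/activate
    # windows (PowerShell)
    .\venv\Scripts\activate

    # once activated, on any system:
    pip freeze                             # empty: nothing installed yet
    python -m pip install --upgrade pip    # pip upgrading itself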
I'm not going to show that again, other than to say that if you type deactivate, it deactivates, and you need to reactivate it again in order to work inside this Python-level isolation. That's really the key with a virtual environment: it's really just to isolate Python projects from each other, which is why I mentioned I'm only doing a Django project in the last one, so I don't have multiple Python projects in here; although I definitely could, Kubernetes excels at that, and so does Docker. In any case, we now need to create our requirements file. First, though, I'm going to make a directory called web and cd into it. This is where I'm going to have everything related to Django specifically: the requirements, the Dockerfile, everything related to this particular project. You don't always do this; it's just something I'm going to do so I can run these things cleanly. So now I run pip install "django>=3.2,<4.0", inside quotes. Why am I using Django 3.2? Take a look at djangoproject.com, go into downloads, and you'll see that 4.0 is the latest version out; however, if you scroll down a bit, you'll see that 3.2 is an LTS, a long-term support release. I'm in the first part of 2022 here, and in 2024, 3.2 will still be well supported, because the 4.2 LTS will be out at that point. Now, you don't have to use this version of Django; you can use older versions, even 2.2, heck you could probably even get away with Django 1.2, although I wouldn't recommend it, because we're going to be doing stuff that's specific to 3.2, and different versions might have slight changes here. So if you have an older version of Django you still want to test out, by all means go ahead, but this is a nice and simple way to install the latest 3.2 release (or really whatever the package constraint names) inside our virtual environment. So hit enter and that installs. While that's running, I'm also going to open up VS Code. At code.visualstudio.com you can download this free text editor; it's free, it's robust, it's powerful, it's really, really good, so I recommend that you use it. You don't have to; you can use the terminal all you want, but I'm going to use it. I already have it installed, so if I run open . it opens up this folder in a window right here (on Windows, ii . in PowerShell does the same). Another thing we can do is go back a little, to the root of the entire project, where venv and web are, and run code ., which opens this project inside VS Code. Now, if that didn't work for you, just open up VS Code, open a new window, navigate into the Explorer, and click "Open Folder"; this works on all systems exactly the same. Then navigate to your dev folder, to where django-k8s is, and open it. In my case this folder is inside Users, inside cfe, inside dev, and you can see it right there; then you just open it, which I already have. Next, of course: File, "Save Workspace As", and it's going to be just django-k8s.
There we go, cool. I will actually do some stuff with this code workspace's settings a little later, but for now I'll leave it as is. Then inside web, what I want to do is add requirements.txt, and we add in that django>=3.2,<4.0 constraint, so we have the correct version of Django. Then we add gunicorn; this is a WSGI (web server gateway interface) HTTP server that's required for production versions of Django, which we'll talk about. Then we'll do requests, we'll also do django-dotenv, and we'll add psycopg2-binary; you could use the plain psycopg2 package, but the -binary variant means I don't have to worry about as much on the Docker side of things, which we'll talk about when we get there too. Next is django-storages and then boto3. These last two I might not use; in fact, the last three I might not actually end up using, but this is like the bare minimum I put in pretty much every production-ready Django project I work on. Okay, so now that I've got this, I'll open my terminal with Ctrl+` (the backtick/tilde key; it should be the same on Windows), which toggles the terminal open. As of now, the terminal does not activate my virtual environment by default. That's not fun; I would rather it did, which is a setting I'll set up on my code workspace later. For now, I'll do source venv/bin/activate, navigate into my web folder, and run pip install -r requirements.txt. Now, if you're already familiar with this process you're probably like, "dude, why are you taking so long just to explain requirements?" This is for those of you who are not familiar with this method of requirements, not for those of you who are. That's really it, and when you go into production you want to make sure you get even these basic things right; otherwise you're going to run into a lot of issues and it's going to be incredibly frustrating, so I'm just trying to make it not incredibly frustrating for most of us. Anyway, now we're going to start our Django project. It's really simple: django-admin startproject and then the name of the project. In my case I'm keeping django_k8s as the name of my Django project, matching the folder name for the entire Kubernetes project, and I hit period at the end so it's created inside this web folder. And there we go: I now have my Django project going. Of course, this only works if the virtual environment is activated and all of the installations ran. Next up, we need to declare the environment variables that we'll run locally; let's do that.
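Putting that together, here's the requirements file and the commands just described, as a sketch:

    # web/requirements.txt
    django>=3.2,<4.0
    gunicorn
    requests
    django-dotenv
    psycopg2-binary
    django-storages
    boto3

    # from the project root, with the venv activated:
    cd web
    pip install -r requirements.txt
    django-admin startproject django_k8s .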
Now we're going to implement some environment variables. Inside our web folder, add a new file: .env. Since we just created this .env, and I want to use git as far as version control is concerned, I'm also adding a .gitignore file and making sure web/.env is ignored by it. This .env file is a very common way to declare environment variables for any given project. A good example would be something like a Django superuser password: we don't want that password anywhere public, so it will exist in .env on our local machine, but we definitely do not want it in git, which is why I added .env to the .gitignore. I still need to use these sensitive keys locally, and that's what we're doing here. I'm going to copy and paste a number of variables, and you can pause and copy them too (there's a sample below), but the idea is to have all of these environment variables in our project. Now, you might be wondering about some of them: several of these settings are only for later, when we do Docker Compose and Docker-related things, so you don't really have to worry about those just yet; they're not available yet, and we will fix that soon. Of course, my actual Django project now needs to be able to read this .env file. The nice thing, especially when working locally, is something like the DEBUG key here: I can toggle it between 0 and 1, and when I make those changes my settings stay up to date with it. That's how I'll verify the environment variables are working correctly, and something I'll change once I can actually load them in. Right now I have no way to load them, which of course is why we installed django-dotenv; we now need to configure it so Django actually reads these environment variables. I will say these don't have to just be secrets: in the case of the username being admin, perhaps you want that secret, perhaps not; or you might have something like REGION in here, where you can quote it or just put something like texas (that's where I am). It's something you might toggle in your production environment: you change the environment variable at any given time instead of going into your code at all. That said, environment variables are typically where people put sensitive information as well, because that changes from your local environment to your production environment, as we'll see going forward. Before I go any further: I initialized git for version control, using the Python .gitignore template from GitHub itself. That's why things are green in the editor now, and you'll see the .gitignore file in there. All I'll be doing after lessons is git add . and then committing the changes, say an "initial commit", and putting it on GitHub; I'll probably move around with branches as well. I just wanted to mention that before we jump in, because the code will have all the references; I just won't show you every time I save and push, and it's also why things turn green. So let's see how to implement django-dotenv in our project itself. It's actually pretty straightforward, especially if you follow the documentation.
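Here's a sample of what the .env might contain; the variable names beyond DEBUG, REGION, POSTGRES_READY, and the superuser password are my assumptions about the video's file, and all values are placeholders:

    DEBUG=1
    REGION=texas
    DJANGO_SECRET_KEY=local-insecure-change-me
    DJANGO_SUPERUSER_USERNAME=admin
    DJANGO_SUPERUSER_PASSWORD=not-a-real-password
    # the postgres values below are only used later, with docker compose
    POSTGRES_READY=0
    POSTGRES_DB=postgres
    POSTGRES_USER=postgres
    POSTGRES_PASSWORD=postgres
    POSTGRES_HOST=localhost
    POSTGRES_PORT=5432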
The first place is the wsgi.py file; this is something I use constantly. We're going to import pathlib here, and what I want is the path to where our .env file is, which in my case is the root of my Django project, right next to manage.py. The path I want is actually the same base path the settings use, so you can totally copy that, or be a little more explicit: CURRENT_DIR = pathlib.Path(__file__).parent. I do encourage you, if you've never seen this before, to go into the Python shell and play around with it so you can see what happens, but I'm not going to do that here. Then BASE_DIR = CURRENT_DIR.parent, and as you see, it's identical to the settings' parent.parent form. We can also use .resolve(); in fact it's probably a good idea to add .resolve() here, just to make sure we're definitely getting that path correct. Okay, so why did I build this base directory? Because I want the .env file path: ENV_FILE_PATH = BASE_DIR / ".env". This is why I really like pathlib; it makes it really simple to get paths like this. The slash is typically for dividing in Python, but pathlib overloads it to join paths, and this should work on Mac, Linux, or Windows, which is really great. So now that we have this file path, I can use dotenv: I'll import dotenv underneath the other imports, and then call dotenv.read_dotenv(str(ENV_FILE_PATH)). If that file doesn't actually exist, the nice thing about django-dotenv is that it's not going to raise some major error. The wsgi.py file reading this .env is the most important one, not manage.py, but I'm going to do manage.py as well; that's really just so the .env is read when we're running locally. So I import dotenv there too, and in the main section call dotenv.read_dotenv() prior to the main function call. You could also put it above that; the point is you definitely want it executed before the settings module is set as a default, so that the settings module has access to all of those environment variables. So now that we've got this, we can print out any given environment variable, like the REGION value I mentioned before inside my .env file. Let's activate the virtual environment (source venv/bin/activate), change into the web folder, run python manage.py shell, then import os and print os.environ.get("REGION"), and hit enter: notice that it says texas. So that is how we grab environment variables anywhere in our project, anywhere in our code. Also, just so you know, environment variables are always in caps like that; that's the standard way to name them. Okay, cool: that's installing django-dotenv, really simple, really useful. Now, some of you might ask: are we going to use the .env file in production? We can, and we also don't have to; there are actually two different ways to think about it. Again, the nice thing about django-dotenv is that if the .env file does not exist, we won't get any errors, which we can try out by changing the file name.
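Here's a minimal sketch of those two files as described, using django-dotenv (which exposes read_dotenv); the rest mirrors Django's generated boilerplate:

    # web/django_k8s/wsgi.py
    import os
    import pathlib
    import dotenv
    from django.core.wsgi import get_wsgi_application

    CURRENT_DIR = pathlib.Path(__file__).parent    # folder containing settings.py
    BASE_DIR = CURRENT_DIR.parent.resolve()        # project root, next to manage.py
    ENV_FILE_PATH = BASE_DIR / ".env"

    dotenv.read_dotenv(str(ENV_FILE_PATH))         # no hard error if the file is missing

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_k8s.settings")
    application = get_wsgi_application()

And in manage.py, read the .env before main() sets the default settings module:

    # web/manage.py (excerpt)
    import dotenv

    if __name__ == "__main__":
        dotenv.read_dotenv()
        main()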
I'll rename that file real quick, to .abc or something weird, just to show you; exit the shell, go back in, and now you actually see the warning: it can't find the file, so it doesn't read it. So if I try the same os lookup, import os and os.environ.get("REGION"), I get nothing, because it's just not loaded. An important thing to note with these .env files: it gives us that warning, not a big deal at all, but let's make sure we've got the right file names. Great, let's keep going. Now we're going to update our Django settings to handle our postgres-related database settings. These configuration items are going to be used later, as I mentioned, but I still want to update my settings file itself now, and I'll set POSTGRES_READY to 0 in the .env for the time being, in case you want to test this; 0 as opposed to 1, or to it not even being there (we'll probably remove it later). So, jumping into settings.py, the first thing, at the very top, is import os. Then navigate to the bottom, or really just after the existing DATABASES configuration, and paste these in. What I want to do is declare roughly the same names: DB_USERNAME = os.environ.get("POSTGRES_USER"), which is my postgres username, just like that. Then we repeat the process a few more times, the next one being the database password, DB_PASSWORD. The only reason the Python variable names differ from the environment variable names is so you know they really are different values, can be changed over time, and won't be identical to the environment variable values, to confuse them as little as possible. Next is probably our database name, but I'll just call it DB_DATABASE, from POSTGRES_DB, and then we grab the host and port as well: DB_HOST and DB_PORT. So now that I've got all of these, and making sure host and port are correct, I'm going to add DB_IS_AVAIL (or just "avail"), and I want to make sure all of them are set; to do that I use Python's all(): all of the username, password, the database itself, and the host and port. The reason I want these all set, of course, is so I can actually configure the database as needed. The next one, POSTGRES_READY, grabs its environment variable, and what I want is the env value equaling the string "1". This is in fact the same thing I want to do with DEBUG up top: DEBUG = os.environ.get("DEBUG") == "1" (and as of now I still have DEBUG set to 1). Simple enough. So now, with everything available, I do the check: if DB_IS_AVAIL and POSTGRES_READY, then we update our database settings. The postgres DATABASES settings are really simple to paste in: we still use the default alias, we just change the engine from sqlite3 to postgresql, and then we use the environment
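In settings.py, that block comes out roughly like this; a sketch, where the environment variable names are my reading of the video's .env:

    # web/django_k8s/settings.py (excerpt)
    import os

    DEBUG = os.environ.get("DEBUG") == "1"

    DB_USERNAME = os.environ.get("POSTGRES_USER")
    DB_PASSWORD = os.environ.get("POSTGRES_PASSWORD")
    DB_DATABASE = os.environ.get("POSTGRES_DB")
    DB_HOST = os.environ.get("POSTGRES_HOST")
    DB_PORT = os.environ.get("POSTGRES_PORT")

    # every value must be present before we even consider postgres
    DB_IS_AVAIL = all([DB_USERNAME, DB_PASSWORD, DB_DATABASE, DB_HOST, DB_PORT])
    POSTGRES_READY = os.environ.get("POSTGRES_READY") == "1"

    if DB_IS_AVAIL and POSTGRES_READY:
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.postgresql",
                "NAME": DB_DATABASE,
                "USER": DB_USERNAME,
                "PASSWORD": DB_PASSWORD,
                "HOST": DB_HOST,
                "PORT": DB_PORT,
            }
        }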
variables that we just set. Again, we just want to make sure these are all exactly the same, otherwise we'll have some issues. When we go into production, the main thing is that if the environment variables are not set, the production database is simply not used, which I think is really important. We could also use this flag in our views or somewhere else to see whether or not the database is actually available. As of now we don't have that postgres database, so I'm leaving POSTGRES_READY as false; inside our environment variables it's currently 0, so we'll just save that as is. If I actually want to review this, I can print out my DATABASES configuration when I run Django. So let's just run python manage.py runserver (I realize there are still a number of things I haven't done in Django yet), and there we go: we now see what our Django database is. It is the SQLite database, not our postgres database, which we'll solve when we implement the Docker Compose related things. Close that out, and that's it. Naturally, now anywhere in the project where I need an environment variable value, like our Django secret here, I can do the same thing. This default SECRET_KEY is not great; we want to make sure it changes, so: os.environ.get and our DJANGO_SECRET_KEY. Really simple. I'll be sure to link something in the description on how you can generate these secret keys, and I'll also add a proper one later when we actually go into production, because right now this is not a great one; Django might warn us about that, or it might not, so we'll leave it as is for now. So that's simple enough for implementing these things, but now we're ready to start on our Docker-related services, for a number of reasons we'll see going forward.

All right, now we're going to prepare our Django project for Docker by creating a Dockerfile. Inside our web folder, first create a new file called .dockerignore. What .dockerignore does is ensure our Docker build doesn't add files we just don't need, or doesn't copy things, as we'll see. You can literally copy our .gitignore contents and bring the entire thing in. One thing we probably don't want to copy is the .env file; at least we don't want to copy it here. We also don't want any virtual-environment-related stuff, which I think is already in that .gitignore somewhere as well; a quick search shows venv is indeed in the .gitignore, so it's in the .dockerignore now too, and we don't have to add it separately. In any case, now that we've got the .dockerignore, still in web, we're going to create a file called Dockerfile. This is going to be an absolute minimum Dockerfile, really just meant to get us going; Docker can be a lot more complex than what we do here, and we could also add additional installations that our project might need. I mentioned a little bit ago that we're using psycopg2-binary to make sure this is a simple way to get Docker working; that's a good example of where there would be additional installations to do inside the Dockerfile if you wanted to use plain psycopg2 rather than the binary package.
In any case, let's get this Dockerfile going; it's really simple. First and foremost we say FROM: what is it we're trying to build on at its core? Are we trying to use Ubuntu? Well, you could be, and that's okay; Docker has a lot of Ubuntu images you can use. Are we trying to use nginx? No, we don't want nginx here. What we want is Python, right; we are using the Python programming language, but the question is which version of Python. To answer that, go to hub.docker.com (you probably already know what version you need, but bear with me), search for python, and inside the Tags section you can see all the versions of Python available. If I type 3.9, I see all the images tagged 3.9-something; here's 3.9.10-slim, for instance. What about our local Python version? Let's check before I select anything: in my case I have 3.9.7. I could absolutely use 3.9.10, but let's stick with the version we have, so 3.9.7, and notice 3.9.7-slim. You don't have to use -slim, but it makes the image smaller, which is important for our local machine; notice the compressed size here, versus, say, the buster variant, which has an enormous compressed size in comparison. So we'll go with that. Again, there are a lot of ways to optimize Docker and to learn about Docker, and I highly recommend you do, but for now we're going with python:3.9.7-slim; by all means change this to the version you are using and want to use. You could even use the latest version if you like, but we want to be explicit about this and at least match our local environment; if it differs slightly from your local environment, just make sure the major version is the same, so 3.9 matches 3.9. Next, we want to copy all of this code into our Docker image: COPY . /app. This copies this entire folder, everything in here including subfolders, into /app. The reason we had that .dockerignore is to make sure we don't bring in stuff like __pycache__, like this right here, or other pre-compiled things; COPY actually respects .dockerignore, which is nice. Next we change our working directory to that folder: WORKDIR /app. Our entire project is now in /app, and as far as Docker is concerned it just did a cd /app, if you will, for the remaining commands. The next thing is to create a virtual environment: RUN python3 -m venv /opt/venv. We could put it at ./venv just like locally, but typically speaking, when you create a virtual environment inside a Docker container, you put it in the /opt folder; that's just a standard place for additional installations you might need, and venv is one of those. That also means the app itself doesn't have the virtual environment folder within /app, which is true for our web folder as well: the web folder and the /app folder are the same, and neither has the virtual environment in it. /opt/venv is of course something we'll have to remember, because now what we need to do is run that environment's version of
pip. I need to run pip install -r requirements.txt, but the question is which pip. If you're not already familiar, look at what we can do locally: if I want to call the virtual environment's pip, I run venv/bin/pip, and that gives me that virtual environment's pip. It's the same idea with venv/bin/python. This should also work on your Windows machine, the path is just going to be a little different; go into that folder to see it. I can go into the folder and see all the executables that are available: notice django-admin is in there, and hey, activate.ps1 is in there as well, for you Windows PowerShell users. The only reason I show you this is that in the image we use /opt/venv/bin/pip, and everyone uses that path here, because this version of Python is running on a Linux machine inside the container; it's not running on Windows, it's not running on Mac. So that is how we run the installation for our virtual environment: RUN /opt/venv/bin/pip install -r requirements.txt, and of course we need that because these are all the requirements our current project needs to run. Another thing you'll probably want to do in addition is a pip upgrade, something like /opt/venv/bin/pip install pip --upgrade, like we've already seen; this is an optional step, you don't always have to do it, but I do recommend it. The final things to run are RUN chmod +x entrypoint.sh, which I'll explain in a moment, and then we actually execute it: CMD with /app/entrypoint.sh. So what is entrypoint.sh? Think about running things locally; let me cd back into web, sort of emulating what's happening in this Dockerfile. If I run pwd, I can grab the absolute path to my virtual environment; then, calling that absolute path's bin/python, I have that version of Python and can run my manage.py runserver. That's the long way of doing it without ever activating the virtual environment, and that is what entrypoint.sh needs to do: in other words, I don't ever need to activate my virtual environment in the container. Hopefully that clarifies it. entrypoint.sh will run things very similar to this, but of course not manage.py runserver, because that is the development server and we do not want to run that in production. We want to run gunicorn in production, because that's what it's made for. So of course we need to create that entrypoint file: inside web again, create entrypoint.sh, and make sure we spell it correctly (I had to rename mine to entrypoint.sh). This one I'm going to copy and paste, and then we'll explain everything. First and foremost, this is a bash script, so we declare that with the shebang; it's not a Python script or something like that, it's really just a text file that happens to end in .sh, and the shebang is just another way to make sure our system knows it's a bash script. Next we have an APP_PORT, which I'll explain in a second, and we also see the cd into /app; that's so we can reference our
Django project. Okay, so there's our Django project: inside the image, /app has the main configuration folder, which is what the script points at, django_k8s.wsgi; that's the wsgi.py file, and it's looking for the application declaration inside it. That's it. Then we bind it to a specific port, which defaults to port 8000, just like your development server. When you go to production you might actually change that, and what the script does is change it based off of the PORT environment variable; sorry if that's confusing, but that's what you do. Oftentimes you'll also set a worker temporary directory; that's a good optional thing to add when you need it. The final piece is /opt/venv/bin/gunicorn, which goes back to what we just talked about when we ran the venv's version of Python: we're doing roughly that equivalent, but using gunicorn for production. If we want to see that equivalent locally, we can run gunicorn here: grab the local absolute path to gunicorn and run it against django_k8s.wsgi:application, and this should actually run it. Notice the default port here is 8000; I didn't change it. Notice also that it's bound to the local host; in the container I want to bind it to 0.0.0.0 so it serves on any host without errors. Okay, cool; cancel out with Ctrl+C. So that's our Dockerfile, very, very minimal and really simple. Now, one of the things you'll often see: notice those are three RUN steps, and they don't need to be three steps. I also want to make sure I'm doing the pip upgrade correctly, so I'm going to change it into one step, chaining the commands with &&. chmod +x just means the script is executable, so Docker can actually run it accordingly. So I take the commands, put && between them with a backslash, hit enter, tab them in, and do the same for the next one; that way it's a little cleaner as to what's going on. In other words, this whole thing can run as one single step; it doesn't have to be multiple. In fact you can also fold the virtual environment creation into that step; I just like having it separate, so I can see that this part is about creating a virtual environment, and this part is about installing things and making sure things can run: slightly different tasks. So that is a huge, fast primer on using Docker. Granted, I haven't actually run anything with Docker yet, and I'm also leaving out a big problem we need to solve: if we run our Django project, every time I run python manage.py runserver it gives me the "unapplied migrations" warning, so that's definitely something I still need to address inside Docker, which I'll talk about. But this is that minimum Dockerfile, and of course, if you're having any issues with it, check out the series I mentioned before, Docker & Docker Compose, on our website cfe.sh; it will help clarify a lot of things with Docker, and it takes a little more time to get there.
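Assembled, the two files described above look roughly like this; a sketch matching what's dictated in the video (pin the Python tag to whatever version you run locally):

    # web/Dockerfile
    FROM python:3.9.7-slim

    COPY . /app
    WORKDIR /app

    # the virtual environment lives outside /app, in the standard /opt location
    RUN python3 -m venv /opt/venv

    RUN /opt/venv/bin/pip install pip --upgrade && \
        /opt/venv/bin/pip install -r requirements.txt && \
        chmod +x entrypoint.sh

    CMD ["/app/entrypoint.sh"]

And the entrypoint script it executes:

    #!/bin/bash
    # web/entrypoint.sh: run gunicorn against the project's WSGI application
    APP_PORT=${PORT:-8000}    # default 8000, overridable via the PORT env var
    cd /app/
    /opt/venv/bin/gunicorn --worker-tmp-dir /dev/shm django_k8s.wsgi:application --bind "0.0.0.0:${APP_PORT}"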
Hopefully this isn't really that complicated given what we've already done; we've pretty much performed all of these steps by hand already. This is why Docker is so powerful: I'm basically saying, hey Docker, these are the steps I need to happen for this environment so my application can actually run, and if Docker can run it, I can run this application anywhere. I don't have to provision a virtual machine or a server anywhere. That's the power of Docker, and when you add Kubernetes into the mix, Kubernetes decides how to distribute a lot of that stuff, which is also pretty cool. Now, one of the most common commands we need to run in a production Django environment is python manage.py migrate: we need our Django code, all of the models, to be in sync with what the database has, and if they aren't in sync, we have a problem. Before we create the script to do this, I want to address why it's so important, and it has to do with the Docker container itself. A Docker container, or any container, is ephemeral; what that really means is it's just running the code, it's not going to stay around forever, and more than likely it will go away. A database, on the other hand, we want to stay forever, with all of its data persistent, definitely not ephemeral. But our Django code, our web application code, should evolve, and therefore the things running that code should be removable. This matters because if the code is changing over time, our models are probably changing over time too, and the database needs to reflect those changes, which is exactly what python manage.py migrate does for us. So where do we run this command? On one hand you might think, okay, I'll put it into entrypoint.sh: grab /opt/venv/bin/python manage.py migrate, much like we did with gunicorn. Generally speaking that's okay; it's certainly possible you'll want it in there, and I'd call it a nice safeguard, but it's not what I want to do, so I'll cut it out, saving that path because I'll reuse it. Another command I'll want to run at least once is createsuperuser, so I have a way to log in to my admin. Now you might be wondering, what about collectstatic, should that go in there as well? Yes and no. You could make an entire script just for collectstatic if necessary, or run it elsewhere, because there are a lot of places you can run collectstatic that have little to nothing to do with the database. createsuperuser and migrate are both directly correlated to the database; collectstatic is not. So with that in mind, we want to run those two commands, and to do it, inside of web I'll create migrate.sh, very similar to what we just saw with the entrypoint; I'll actually explain this one. First, #!/bin/bash (hash, exclamation mark, /bin/bash), so it's a bash script. Next we set a DJANGO_SUPERUSER_EMAIL variable equal to the environment variable of that name, or a default, whatever default you want to put in; in my case I'm putting hello@teamcfe.com, which is a real email for my business. The idea is we have a superuser email so we can run the createsuperuser command: /opt/venv/bin/python manage.py createsuperuser with --email set to that variable, using dollar-sign substitution, which is how you do it in a script when you want to read an environment variable (or any other variable) you've set. Maybe just to keep things clear, we can call the shell variable SUPERUSER_EMAIL so we don't confuse the two. The other part is that we cd into the /app directory, and the only reason is to make sure manage.py is there, because that's the root of where our Django project lives, thanks to those two lines in the Dockerfile. So this script goes into that directory and runs createsuperuser. One thing that will likely happen with this script: after you run it once, it's going to fail, because the user already exists. We want to catch that, so we append || true, two pipes (that straight up-and-down character right next to the backslash on a US keyboard) followed by true; now every time this runs, it won't fail us out, whereas without it, it will fail and throw errors. And of course, before I can even create a superuser, my database has to be migrated correctly; we've talked about that a lot, and it's python manage.py migrate with --noinput. It's worth distinguishing this from makemigrations: running makemigrations says, hey, there are changes that need to happen in the database based on what Django sees in the code, so it prepares Django to run migrate, which actually makes those changes. We don't want makemigrations happening in a script at all; that should happen elsewhere, and in fact when you test your code with python manage.py test, there's a way to ensure your migrations are actually done. So there we go, that's our migrate script. You're probably wondering when we'll actually use it; we'll use it in a number of places, as we'll see, and it's basically identical to the entrypoint itself, with the path here directly correlated to the path there. There's a lot of overlap in these script files, and the only reason I spent so much time on this one is that making sure migrations happen in production is critical; otherwise we'll hit errors that are purely Django's, but it will feel like Docker isn't working or Kubernetes isn't working, which is obviously a big problem and could stop you right in your tracks.
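Here is the migrate.sh we just walked through, as a sketch. One note: I'm including --noinput on createsuperuser, which makes Django read DJANGO_SUPERUSER_USERNAME and DJANGO_SUPERUSER_PASSWORD from the environment; that's an assumption about how the .env credentials come into play, and the fallback email here is a placeholder:

```bash
#!/bin/bash

# fall back to a default if the env var isn't set; placeholder address
SUPERUSER_EMAIL=${DJANGO_SUPERUSER_EMAIL:-"hello@example.com"}

# run from the project root so manage.py is available
cd /app/

# sync the database first; --noinput keeps it non-interactive
/opt/venv/bin/python manage.py migrate --noinput

# createsuperuser fails once the user exists, so || true swallows that
/opt/venv/bin/python manage.py createsuperuser --email "$SUPERUSER_EMAIL" --noinput || true
```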
So now we actually need to run this script, and your intuition might be to pop it into the Dockerfile, just like the entrypoint: something like chmod +x migrate.sh and then running migrate.sh, really via sh or bash (shell or bash, either works in this case). Unfortunately, that's not how it's going to work, because remember, migrations have to have access to the actual database, and when we build the Docker container image, we will not have access to the database. You might have access to certain environment variables at build time, but even that isn't guaranteed. This brings me back to collectstatic and createsuperuser: the Dockerfile itself most likely will not run those commands either, even though there's this inertia to do so. If you don't feel that inertia, great, but if you're anything like me and you start using Docker a lot, you'll think, oh wow, can I just automate everything in this Dockerfile? The answer, and what I'm trying to reiterate here, is no. What we should see now is how to put this Dockerfile into action locally with a database, so we can essentially test locally what we're trying to do in production. It's not the same test, since we're not running Kubernetes locally, but it's a step in that direction, thanks to a tool called Docker Compose. In a moment we'll jump into Docker Compose and do some configuration so our local Django project has access to a production-like database, really simply, all through Docker and Docker Compose. If you've already taken my Docker and Docker Compose series, you can probably just jump to our repo, go to the 12-end branch, open docker-compose.yaml, and copy that YAML file. If it already looks confusing, you should probably watch this whole video; if it doesn't, you can skip past this part and look at part two, which is a lot more condensed. One other thing to mention: we do make some changes to settings.py to ensure our Django database connection actually works, so you might want to copy those too, assuming you've been following along up to this point. For those of you ready to jump into Docker Compose: this part is a little longer, but only because we need to configure a number of things to orchestrate correctly on our local machine, getting that much closer to the Kubernetes environment we'll set up. In the root of the entire project, right next to web and venv, we make a new file called docker-compose.yaml. It could end in .yaml or .yml (I like using the a for some reason), and docker-compose.yaml is the default name you should use; you can use different names, but then you have to remember a flag, so I recommend sticking with docker-compose.yaml, just like we stuck with simply Dockerfile. The first thing we declare is the version, in this case 3.9; the version doesn't matter that much, because Docker Compose has been basically the same for a long time. Next we declare services and give our first service a name. I could call it my-django-web-app if I wanted to, but to simplify things I'm just calling it web, matching the folder of the same name. Next we declare the build, which tells Compose which container to build and where. In this case the context is web, as in that directory; if the docker-compose file lived inside the web directory, the context would be a period, because it would be in that directory, but it's not, it's next to it, so using web makes for a cleaner context path. We also declare a dockerfile, which in my case is just simply Dockerfile, and that's it; this builds the container image for us. It's very similar to writing docker build, tagging it something like django-k8s, maybe even with a v1 in there, pointing at the Dockerfile, and passing web as the build path. Next we declare the image name we want: I'm calling it django-k8s, as in the image name for this container. You can definitely make it something different, like cfe-k8s, or any unique name you like; I'm just naming it roughly after my Django project. Then I can also tag it with v1, v2, 1.0.0, or whatever additional tag, much like the docker build example, and if you push this to Docker Hub, you can do very similar things there too, which is pretty cool if you weren't already aware of it. So there's our web image. At this point I can leave it as is. Let me make sure I'm in the correct place, not inside web but where docker-compose.yaml is, and run docker compose up. If you're using an older or Linux-packaged version of Docker Compose, you might have to type docker-compose with a dash; with a newer Docker Desktop you can just write docker compose up. All this does is bring up your services: it builds the container if it doesn't already exist, and then runs it. Notice it's running and saying "listening at" a port; we'll talk about that in a second. I'll stop it with Ctrl+C, which stops it from running; another way to stop it is to run docker compose down in the same directory, which I'll do next. Now let's force the build: docker compose up --build builds the container image every time. It won't necessarily re-download everything from scratch, but it actually builds it, much like docker build would, installing everything and running everything we need. The first time you run it, it most likely would have done the same thing; I've already run it through my own tests, which is why mine went right off the bat.
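For reference, the build section of the compose file is doing roughly what these commands would do by hand; the tag is arbitrary:

```bash
# roughly what the build: and image: keys replace
docker build -t django-k8s:v1 -f ./web/Dockerfile ./web

# bring the services up (building the image if needed),
# force a rebuild, and take everything back down
docker compose up
docker compose up --build
docker compose down
```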
And what I notice here is that I've immediately got an ImproperlyConfigured error: the SECRET_KEY setting must not be empty. Why is that happening? We go into our environment variables: we've got a SECRET_KEY in .env that says "fix this later", but if we look at our Django code itself, it never actually reads a secret key, so we never set it. This is one of my favorite things about Docker Compose: I can go back into docker-compose.yaml (let me close this terminal) and declare something new, env_file, pointing at web/.env, which works locally. Another cool thing is the environment key, where I can give key-value pairs directly, so I can set a PORT value, say 8020, or whatever number, it doesn't really matter. This gives me two different ways of declaring environment variables, which I think is really nice. So now I build it again and see what happens. First off, it doesn't see the .env file, which is pretty interesting; however, the application is actually running, and we can see it listening on that port. But if I try to open that port in the browser, it's like, hey, this isn't working, I can't access it. What's going on? By default, Docker containers are not accessible to the outside world, which means that even though it's running on my local machine, I can't reach it unless I do a port mapping: the ports key, declaring some local port, say 8001, mapped to the internal running port, 8020. Now if I do docker compose down and hit enter, it takes the whole thing down; it might take a moment, it might take more than a moment, because this container is actually running, so it takes a little while to come down. Once it does, I bring it back up with a build and see what happens: notice it still says 8020 internally, but I mapped port 8001, so I open localhost:8001 and, what do you know, there's my Django application. If you're thinking, hey, maybe that's a different Django project running, just bring docker compose down and look; eventually it comes down, and now it's no longer accessible. So let's build this out a little more. The next key is command, to run an additional command, or really to change the default command from the Dockerfile. It's going to be sh -c, a shell command: first chmod +x /app/migrate.sh, making migrate.sh executable (hmm, where did that come from? oh yeah, migrate.sh; oh yeah, the Dockerfile, that's where /app comes from), then && sh /app/migrate.sh, and then && /app/entrypoint.sh, so the default entrypoint still runs at the end. Now when I run this, it's actually going to work, but of course there's another step to take care of before we bring these services up again, and that's declaring our Postgres database service, matching much more closely what our Django environment variables expect.
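Here's the web service as configured up to this point, sketched out; the ports and image tag are just the values I happen to be using:

```yaml
version: "3.9"

services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile
    image: django-k8s:v1
    env_file: ./web/.env        # secret key, database settings, etc.
    environment:
      - PORT=8020               # the port gunicorn binds inside the container
    ports:
      - "8001:8020"             # host port 8001 -> container port 8020
    command: sh -c "chmod +x /app/migrate.sh && sh /app/migrate.sh && /app/entrypoint.sh"
```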
Currently our environment isn't quite right for this. We've got a POSTGRES_DB value, which is fine, and we've got a POSTGRES_HOST of postgres_db, the same name as the service we're about to declare; isn't that interesting? That's exactly the point: the service name is the host. In other words, if I called the service db, I'd have to come back into my Django environment variables and change the host to db so they match up. I'll leave it as postgres_db. Next we declare the image to use, which in this case is just simply postgres. Then I want to expose a port, the same port as in our settings, though I'll leave the default of expose 5432 for a moment. One more thing I need is volumes: I need to attach what's called a volume, which keeps my Postgres data persistent, so that if these services go down, the database still has its data; that's the whole point. We declare a new top-level volumes section down below and call the volume postgres_data (the coolest thing is that Docker Compose knows what's going on here; even if you don't, Docker Compose definitely does), and inside the postgres_db service we map that named volume, whatever we called it, to /var/lib/postgresql/data. If you look at the Postgres image itself, the actual documentation, you'll see roughly these same things. There are also a few other things to set: the Postgres environment variables. You can add an environment key and hard-code POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB right into the compose file, but you probably see where I'm going with this: instead of hard-coding them in two places, you can just use that same env_file, reusing the same environment variables twice, which is pretty cool. Both the database server itself and Docker Compose have defaults for all of these variables, so we have a lot of really good stuff going on. So now we have a proper database; how cool is this? Let's try this again, now that we have the database and the migrate script in the command. In this case it actually ran, and it seems to have worked: if I scroll up a bit I can see it applying migrations. There is one more argument I need to add to the web service, though, and that's depends_on, tabbed in, with the service name postgres_db. depends_on means web won't be brought up until the database service is available; in my case it just happened to work without it, but sometimes it won't. Now, to test that everything worked correctly, including the migrate stuff, we go into .env and grab our superuser username and password, both of which I'll work with, then refresh on localhost; I think we should have access now, and it should actually run. Let's see: we've got our database running, yeah, it doesn't seem like that went down.
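And here's the database service as described, with the named volume keeping the data persistent; the env file supplies POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB:

```yaml
services:
  # ...the web service from above, plus depends_on: [postgres_db]
  postgres_db:
    image: postgres
    env_file: ./web/.env
    expose:
      - 5432                    # reachable by other services, not the host
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
```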
Okay, so go to localhost:8001 (let's verify it is 8001) and make sure I can log in. There it is; now go to the admin. What's actually happening here is it's taking a while, which we'll address in the next part; for now it takes a little time, because these services communicating with one another does take a moment, that's just how it is, unfortunately. I'll add the trailing slash and go into /admin; don't worry that the styles are missing, we're just testing things out. Log in, and there we go: I'm able to log in with the exact same credentials I have in my environment variables for my Django admin user. Pretty cool, right? It's pretty straightforward, but there's one more thing you might be wondering about, and I think it's the last really important part (then I'll just copy and paste some Redis config in as well): the actual port you want to use. If you already have Postgres on your machine, or you have a number of different projects running, this port might already be taken, and that's true of our web port up top too, which is why I flip-flopped that around a bit. I really want to focus on the database, though, because it's the thing we want as persistent as possible for this project. Say that in my environment variables I want 5433, like I pasted earlier; to make that work, expose: 5433 is one thing, but I also need to add a command to the service of just -p 5433, or whatever port I changed it to. Then we bring down everything that's running and bring it right back up for all of that to apply. While those things are loading, I'll copy and paste a Redis database service here as well; this is just for reference, I'm not actually going to implement it, and it's very similar to our Postgres service (we'll come back to the Postgres ports a little later). I do need to add a redis_data named volume for it too, so it's persistent, and then the ports themselves, mapping the external port to the internal one; I want those two to be the same, and again, I'll discuss that in a moment. Okay, so we brought it down, and we bring it right back up with a rebuild; it shouldn't take very long, and there it goes, attempting to rebuild, and we've got a failure: "port already allocated". In my case that port is already in use, probably from my own tests, so I'll just change it to 5434 all across the board to make sure it's working correctly; that's exactly why we're doing this, to address these problems before moving on. Now that I've got it running, everything looks okay, and it still applied a lot of these migrations, it still actually created the database, which gave me a little concern at first but isn't actually surprising anymore. That has to do with the fact that we had a line down here saying "postgres ready" when it was, well, you know, never actually ready.
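The port change boils down to keeping three places in agreement: the -p flag handed to postgres, the exposed and mapped ports, and the POSTGRES_PORT in the env file. Something like this (the official postgres image passes a leading-dash command straight through to the server):

```yaml
services:
  postgres_db:
    image: postgres
    command: -p 5434        # run postgres on a non-default port
    env_file: ./web/.env
    expose:
      - 5434
    ports:
      - "5434:5434"         # host port matching the internal one
    volumes:
      - postgres_data:/var/lib/postgresql/data
```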
So I'm going to get rid of that line altogether. We'll save that, and one last time we'll test this: bring it down, then bring it back up. Something interesting here is that Redis is already working; it's in there, and we can now use Redis on the various ports that are available. Everything else was roughly working except that database connection, so let's try it again; that only had to do with my settings inside of Django. What I'm trying to show here is what it looks like when Django cannot connect to that database, which you should simulate: you should know what that looks like, just by going into .env and changing the port settings or something like that, and confirming that it really isn't connecting; that's the goal here. So here we go: now it's saying authentication failed, so it's possible it really doesn't have access. Refresh the page, and there we go, a failed authentication for that user; this user is actually not working. Going back into our settings, let's make sure everything is correct: POSTGRES_DB... ah, there we go, our settings were not correct, so let's grab the right password and bring that in. I'm really glad that showed up, because it shows you the error you'd see both on the command line and when you try to use Django, this OperationalError. Now you might be wondering why we're spending so much time on Docker Compose and all of this: it's because these are very real errors that you can see when you bring this into Kubernetes, and it's better to solve them locally with something simple like Docker Compose than with something more complicated like what we need to do for Kubernetes. So bear with me; we have one more Docker Compose part that will improve our workflow quite a bit. I'll run this again: bring it down, that went fast, and bring it back up; we really just want a successful build here for everything. One of the things we'll address in the next part is the question that might be on your mind: am I going to have to rebuild every time I make a change to my code, like I literally just did? That's what we'll address next. So now, after running this, after building it, we see it's applying all the migrations; that's okay, we actually wanted that to happen at least once against our database, and it looks like everything succeeded. If I go to the admin, I can try to log in again with our credentials, our really, really bad local credentials, and it appears to be working. So our last real test is to run this all over again: I shouldn't have to build this time, I can just do up, and I want to see whether it creates or runs those migrations again. It shouldn't have to; if it does apply migrations again, that means we failed and it's actually not connected to our persistent database. It did run the migrate command, but as we see here: "No migrations to apply." It also gave me a CommandError that the username is already taken, and that of course is because of our migrate script.
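For reference, here is roughly how a settings.py can wire the database up to those environment variables; this is a sketch that matches the variable names I'm using in .env, not necessarily your exact settings file:

```python
import os

# values come from .env via docker compose (or your local shell)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB"),
        "USER": os.environ.get("POSTGRES_USER"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD"),
        "HOST": os.environ.get("POSTGRES_HOST"),  # the compose service name
        "PORT": os.environ.get("POSTGRES_PORT"),
    }
}
```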
We're still running createsuperuser every time, and if that || true wasn't there, the whole thing would have failed and nothing would ever have come up, because it would have stopped on the failed username. So, boom, there we go. Again, this ran a little long in terms of Docker Compose, and if you already know Docker Compose, a lot of this is repetitive for you, but the general idea is that we now have a way to run our web service and everything around it. There's also a way to connect your static files: right underneath the command key, you can add volumes for your static files and mount an external static files folder, meaning you can use a staticfiles folder inside of web to serve those assets. That's completely optional, and not something you really need to worry about because of what we're doing in part two of Docker Compose. In the last section we created this docker-compose file with our web application in it; in other words, our web application was living right next to our database. Something else you may have noticed is that if you made changes to your web application, you had to rebuild it for Docker Compose to pick them up; I had to run docker compose up --build every time I changed my Python code, which is of course not ideal. Instead, what we really want is just our Postgres database running all the time, in the background, and that's what we're going to do now. I'll comment out my web service; I don't want it running at all, I'll handle those things a little differently, as you'll see, and I really just want my Postgres database running in the background with the same configuration I have here, which is kind of important. Next, I always want this to restart, so I set restart: always, and I can do the same on my Redis database too; looks like I already have it, but restart: always. What that does is ensure that if my entire machine restarts, the next time Docker Desktop runs, this Postgres database and this Redis database come back up too; pretty cool. I might also want to change the ports for all of these to something specific to this project, like 6388 for Redis. I already changed my ports, but if you didn't, be sure to change them in all of the places they appear in this docker-compose file, and in the .env file if you end up using Redis itself; again, I'm not actually configuring Redis, I just want you to know about it, because it's nice to have when you need it. So now that I've got a new docker-compose file, what happens? First off, I run docker compose up, and this time I can add -d, which puts it in detached mode, meaning it runs in the background. docker ps shows me everything running in the background; I happen to have two nearly identical sets of background services, because I tested both. What we see are the images we wanted, with container names like django-k8s-postgres_db-1 and django-k8s-redis_db-1; these correspond to the folder the project is in, then the service name, then a count. So for the service postgres_db, there it is. We also see a local port mapped: that's the localhost port, which means anything on my computer, any service, any Django project, any Python project, anything that speaks Postgres, can access this port, because it's on our localhost; roughly speaking, that's how it works in production as well. How is that happening? Because of the ports mapping. If I cut that mapping out, save, run docker compose down (which brings the service down), then bring it back up in detached mode (look how fast all of that is, pretty cool), docker ps shows it's no longer mapped anywhere externally, just like the other service below it; through my own testing, that doesn't matter to us. Redis is still mapped in this case, but our Postgres database is not. That's the difference: expose is cool in the sense that the compose services can talk to each other, but if we want our local system to have access, we need ports and that mapping, however we see fit, in this case 5434 all across the board. So now, for our local environment variables, what's the new host? It's no longer postgres_db; that service name means nothing to our local Django project. We can use localhost, most likely, and we should also be able to use 0.0.0.0, as it's mapped. I'll give it a shot with localhost, and by "give it a shot" I mean going into web, with my virtual environment activated, and running python manage.py migrate. When I hit enter, I get "localhost failed" at that port, so maybe we have a little issue; looking back at docker ps, oh hey, what do you know, I don't actually have it running with the mapping there still.
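Here's the slimmed-down compose file this part lands on, as a sketch: the web service commented out, and just the backing databases running in the background. Keeping Redis on its default internal port and mapping 6388 on the host is one reasonable reading of the port change; adjust to taste:

```yaml
version: "3.9"

services:
  # web: ...commented out; run django directly on your machine instead

  postgres_db:
    image: postgres
    restart: always               # come back up whenever docker restarts
    env_file: ./web/.env
    command: -p 5434
    ports:
      - "5434:5434"
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis_db:
    image: redis
    restart: always
    ports:
      - "6388:6379"               # host 6388 -> redis's default port
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
```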
So let's do docker compose down again, then docker compose up. Now that it's up, oops, I made another mistake; close it with Ctrl+C and run docker compose up -d for detached mode. We don't need to watch it run; you totally can leave it attached, but for development projects I'm spending a lot of time on, I like having it detached. Now run that migrate again and, what do you know, it works. If I do python manage.py runserver, I should see that server, but what's more, I should be able to check a value in my Django settings. First we activate the virtual environment (I'm using a quick source command to do so), then jump into python manage.py shell, and in there run from django.conf import settings and print out settings.DB_IS_AVAIL, and what do you know, it says True. It's available to me now because of those environment variables, and I'm not getting any errors while using it, so it's definitely using that database. Of course, there are a number of ways to test that, including taking the database down: exit the shell, run docker compose down (remember, whether it's in the background or just running, that takes it down), then go into the local project, which I think I left at localhost:8000, and go into the admin; now it cannot connect, it actually cannot reach that database. Bring docker compose up -d again, refresh, and what do you know, it's connected. I even have my Django admin static files working, for a number of reasons, one of which is that they might be cached; if you don't see your static files here, it's nothing to worry about, it could still look as plain as it did a little while ago when Django was inside Docker Compose. So set up like this, I can make changes on the fly while staying connected to that Postgres database, and it also highlights what our Django project will be like when we put it into Kubernetes: the Postgres database itself is going to be a managed service outside of our control. It can be done inside Kubernetes, it's just not something we're doing at this point. This is really the approach I work with a lot, especially when I need different kinds of databases running and I don't want to provision them, or deal with multiple different ports or multiple different users by hand; I can make these things on the fly with environment variables, Docker Compose, and all that. So yeah, I think it's really important to test these things out locally using Docker Compose. In this case I no longer need those web volumes either; actually, I'll keep them commented out. So that's our new docker-compose file. Granted, you could also have another docker-compose file in here and run based off of that; I'm not going to, because it's not something I have any interest in doing.
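The day-to-day loop this gives you looks something like the following; the venv path is my layout, so adjust for yours:

```bash
# start the databases in the background and confirm the port mappings
docker compose up -d
docker ps

# point the local project at them (POSTGRES_HOST=localhost in .env now)
cd web
source ../venv/bin/activate     # path assumed; adjust for your layout
python manage.py migrate
python manage.py runserver
```

And inside python manage.py shell, printing settings.DB_IS_AVAIL (a flag defined in my own settings.py, not a built-in Django setting) confirms the connection is live.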
The last section was really meant for those of you who have never done anything like this before, at least to some degree. Anyway, now that we've got our final docker-compose file, it's time to start moving toward Kubernetes. In a moment we'll provision our Kubernetes cluster on DigitalOcean, but before we do, I want to mention that some of you are probably jumping straight to this section, for whatever reason. If so, be sure to clone the GitHub repo and follow the "cloning this repo" steps (I'll update those over time); the general idea is that you want to clone the repo correctly and set up everything GitHub doesn't necessarily store for you, and you'll probably also want to check out the branch 14-start, because that's where we are. With that out of the way, make sure you're signed up for DigitalOcean; if you're not, use the link here and you'll get a $100 credit, which is really nice for learning Kubernetes. After that, log in to DigitalOcean, go to the Kubernetes section, and create your first Kubernetes cluster. The version of Kubernetes doesn't actually matter that much; go with the recommended version, which is what I'll pretty much always recommend, mainly because what we're configuring here doesn't change much from version to version, at least it hasn't for the last few years. Next, choose a datacenter region: the one closest to you while you're learning, or closest to your users when you deploy, purely for latency, so things go faster. I'm closest to New York, so that's what I pick. Next is a VPC network. You can absolutely create your own under DigitalOcean's Networking tab; I'll just briefly mention that VPC stands for virtual private cloud, and VPC networks are great for communicating across services on DigitalOcean. It means our Kubernetes cluster, for example, can communicate with the database we'll end up provisioning entirely within the datacenter, over a private IP address rather than a public one, without traffic going out over the internet, which is really nice; I really like using VPC networks. Next is our cluster capacity. I'll select a pool name; you can have multiple node pools with multiple names, and a node pool is really just a collection of nodes that you can target directly in Kubernetes. We won't go into much depth on that, because we're using just one pool; I'll call mine django-k8s-pool, or just leave whatever suffix it adds. Then there are the nodes. You can make nodes more advanced, as in vertically scaled, and they can get really expensive: $155 a month, or even almost $2,400 a month each, and with three of those you're talking a good amount of money. That's vertical scaling, and we don't need it; we can use basic nodes and scale horizontally. I typically start with $20 a month per node; $20-a-month nodes at, say, seven nodes is already a lot of capacity, so we can just start with three. You have to have a minimum of two; realistically you could run one, but it says a minimum of two is recommended, and I think a minimum of three is actually preferred, even if it's three at $10 a month. I'll get to the reasoning in just a moment, but we'll go with $20 a month and leave it at that. Again, if you signed up for the free credit, $100 will last you about a month and a half, and you can delete your Kubernetes cluster at any time; you totally can, and you totally should. One of the things Kubernetes does really well is that all of our services are just configuration: bringing everything back is a matter of typing out a few commands. The only thing that really needs to remain persistent, if you want it to, is the database and what's actually in it; keep that running consistently, and spin the cluster itself up and down every time you sit down to learn, which is really nice; you're just charged per hour, in this case about eight cents an hour, not too bad. Next, give the cluster a name; again, I'll call it django-k8s (they generate defaults because the name has to be unique across your systems). The project doesn't matter as much; you can create a new one if you like, and you can also add tags, such as django and k8s, or whatever you want. Now I'll create the cluster. The cool thing is that once it's creating, if you go into the Droplets section, it's actually going to be creating droplets in there; right now it's not quite there yet, but it will be in a moment; it does take a moment to spin up and get everything working. You don't have to worry about the high-availability control plane just yet; that's something more advanced, for real production use cases. Provisioning this cluster really just means making all of the droplets on our behalf, and we can refresh the Droplets page in a little bit. I want to mention that a cluster is multiple droplets, which is much like having multiple virtual machines working together (in fact, it's exactly that), and it's also much like having three different computers at your house or business all running together, like three Raspberry Pis working in tandem under Kubernetes; you can think of it in those terms if you like. I tended to think of it that way early on, to better understand how these quote-unquote nodes end up working. While this continues to provision, I'll go into Actions on my cluster and download the configuration file. We'll actually put this config file to use in the next section, but for now we can just look at it: there it is, and it's really not that complicated as data. I actually had another copy sitting there too, from earlier tests. The general idea is that this configuration file is what allows kubectl to use those particular nodes, or really that entire Kubernetes cluster; that's what it does. So we want to make sure it's stored in a safe place. In my case I have two copies in Downloads; I'll delete both so I don't get confused, and download it again. It's important to know that if somebody has access to this configuration file, they have access to your entire Kubernetes cluster. And when in doubt, if you're thinking, I don't know about this cluster anymore, I don't know if I use it, just destroy it; get used to being okay with destroying clusters and virtual machines. It's nothing like a physical device, nothing like your Raspberry Pi or your computer at home; you're not taking a hammer to anything, you're just deleting a resource that can be brought back online, no problem. I've had that question asked a number of times, which is why I mention it, for those of you wondering why I'm talking about this. In any case, with the cluster being created, we should now see actual droplets; if they're not here yet, I'll pause and come back once they are. Okay, now we have progress on the droplets themselves, and the thing I want to highlight is that it's three different droplets. This should also illustrate that you don't have to use the managed Kubernetes offering: you can actually manage Kubernetes yourself across droplets. I just think it's a lot easier to go with DigitalOcean's managed service, and it's not really expensive to do so; it's essentially free, and you're just charged for the droplets themselves. We can verify that by going to create a new droplet: under basic droplets, the regular 4 GB / 80 GB machine is $20 a month per node, there it is, and it's literally the same thing. Of course you can use any of the other droplet sizes as a node; all this shows is that you could provision droplets, and Kubernetes, on your own, but it's a lot easier through DigitalOcean, because you can also manage the cluster there. In other words, you can go into one of your Kubernetes clusters and update the resources you have available, like adding additional nodes: resize or autoscale. Notice I can bring the node count way up, or add autoscaling with a minimum and maximum number of nodes. The cool thing about this is that it reacts to everything going on inside your cluster: if you have a lot of traffic and your resources are maxed out, a maximum of seven nodes means it could hit $140 a month, but more than likely, if you're hitting that much resource demand, you have a robust service that's more than capable of paying for itself. That's actually one of the coolest things about Kubernetes; doing this on your own, managing these services without Kubernetes, is possible, but a lot harder, and it takes a lot more work and effort on people's parts; it's not all automatic, which is another huge advantage of Kubernetes itself, and I think that's pretty cool. We can also see Insights, everything going on with our different pools, individual resources, or the entire cluster, which is also pretty interesting, and there are a number of other resources you might end up looking up. The main thing now, though, is actually using this: we need to connect to Kubernetes. They have a guide on connecting, which is fine, but we don't have the DigitalOcean command-line tool installed just yet (we will later); instead, we're going to use that downloaded configuration file, which we'll do in the next part. Of course, if you have questions on this, let me know, but the point here was just to provision the cluster and talk through it. Kubernetes is vast; there's a lot to it, so I recommend not trying to learn everything, but rather learning enough to get your project into production (that's the point of this series), and then piling on what you need to know afterward. The key things to remember about Kubernetes, in my opinion: number one, it's a cluster of virtual machines, a cluster of droplets in this case, that can expand or contract as you see fit; you can add a policy to do that automatically, which is fantastic, and you can provision additional resources for both vertical and horizontal scaling to make your cluster that much more robust. Two, it manages all of your services, typically as Docker containers, and keeps them all running as they need to. And three, with respect to Kubernetes on DigitalOcean, we have all of it in a VPC, a virtual private cloud, so our cluster can communicate with the database we have yet to provision, or any other services in our VPC, securely, without being exposed to the public, which is really nice. So we have the cluster; now we have to learn how to connect to it, and how to start provisioning resources on it so it can actually run things. Those are the key things in the next few sections; let's get to it. Now we're going to connect to Kubernetes using our local kubectl and our kubeconfig file. That kubeconfig file is in your cluster under Actions and Download Config, and once you download it, it lands in your Downloads folder, something like this. I want to show you a couple of ways to use this kubeconfig file, and the first is right where it's located. Of course, I'm assuming you already have kubectl installed; if you don't, use Homebrew on macOS or Chocolatey on Windows; I covered the whole installation process earlier in this series. To use the downloaded config, I run export KUBECONFIG=~/Downloads/django-k8s-kubeconfig.yaml, literally the file's name, and hit enter. That creates an environment variable on my local machine; on Windows it's slightly different, and I'll place the reference on screen for you, roughly the same but a little different. Once it's set, echoing $KUBECONFIG shows that path (again, Windows users, I believe you'd echo $env:KUBECONFIG in PowerShell). The key thing is that I now have a KUBECONFIG environment variable pointing at that file, so when I use kubectl in this terminal and say get nodes, what do I see? The actual nodes I just provisioned on DigitalOcean, literally those ones, each named with the pool name and a node identifier at the end, which is great; that's the simple and easy way. But of course we don't want to keep our config file sitting in the Downloads folder; that's not ideal. Instead, I'll make a folder inside this project, next to venv (it's for the whole project, not just Django), called .kube; .kube is a common folder name for kubectl, the Kubernetes command-line tool. Into this folder I'll bring a copy of my kubeconfig file, and I'll rename it to simply kubeconfig.yaml; there we go, a new location for it. Now, I could do the exact same export, this time relative to my project, or, using VS Code, I can go into the code-workspace file and bring this into my settings as an environment variable option specifically for this workspace. So whenever I open this project, any terminal in this workspace references that kubeconfig file: if I close this terminal and open a new one, I can echo the variable and it shows the location. The reason I set it up this way is that it has nothing to do with the file living in django-k8s specifically; it's just local to this particular project. Of course, if I go this route, I want to add .kube to my .gitignore so I never check this file in. That's yet another way to have Kubernetes working on your machine, specific to this project and inside VS Code, and it works cross-platform.
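To recap the first approach in one place; the filename is whatever DigitalOcean generated for you:

```bash
# point kubectl at the downloaded config for this terminal session
export KUBECONFIG=~/Downloads/django-k8s-kubeconfig.yaml
echo $KUBECONFIG

# on Windows PowerShell the equivalent is roughly:
#   $env:KUBECONFIG="$HOME\Downloads\django-k8s-kubeconfig.yaml"

kubectl get nodes
```

And the workspace-scoped variant is a settings entry in the .code-workspace file, something like the sketch below; the exact settings key depends on your OS (osx, linux, or windows), and the file name under .kube is my choice:

```json
{
  "settings": {
    "terminal.integrated.env.osx": {
      "KUBECONFIG": "${workspaceFolder}/.kube/kubeconfig.yaml"
    }
  }
}
```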
As you can see, I have the configuration for each platform in there; if you need the reference, just go to the GitHub repo and you'll have it exactly, or you can type out these paths yourself. The ${workspaceFolder} placeholder references the root of this project, the folder where the .code-workspace file lives; VS Code replaces it for you, so you don't have to do anything, and there are a lot of cool VS Code tricks associated with this. But maybe you forget to set this up, or you think, I don't want to work in VS Code, I want to work over here; then kubectl get nodes in a plain terminal gives me nothing. So what we do now, and this is true across all systems, is make a directory in our user's home folder called .kube. In my case it already exists, so I can navigate into ~/.kube and see there's a cache folder, but really nothing else of note. I'll open the folder up; actually, rather than code ., I'll just run open . (and of course, if you're on Windows, it's ii .). Windows, Mac, Linux: all of you have roughly the same setup at this point; you might not have a cache folder in there, and you might have just created that .kube directory. Now I can bring the config file over. There are a number of ways to declare this file, including various commands to select it, but I'm not going to do that; instead I'll make it the default, which means naming it simply config, no .yaml, just config. In the terminal I make sure it really is called config and not config.yaml, using mv, and when I list the directory, there it is: simply config. This now gives the entire system, globally, a config (if you already had a config file in there, you can ignore this, because you've probably seen it before). So if I scroll back up to kubectl get nodes, I can now access the cluster globally. But I don't actually want global access to this, so I'll rename it back from config to django-k8s-kubeconfig.yaml, and now get nodes is back to not finding anything (there is another setting for pointing at a specific file, but I'll leave it like this). I just really wanted my django-k8s configuration scoped to my local terminal, and the same holds if I shared this project with you: add your own kubeconfig to the project, and boom, it works with your cluster, based off of this setup. I personally think this is a really robust way to do it locally. When you go to production, like a remote server, or GitHub Actions, which is something we'll probably look at, you'll most likely use the root config file, the ~/.kube folder itself. If you have any questions on this, let me know, but now we have the ability to do all sorts of fun things: kubectl get nodes works, which is nice, and if I try kubectl get pods, I see "No resources found" in this namespace. That's exactly what we want to change.
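For reference, the global approach boils down to:

```bash
# kubectl falls back to ~/.kube/config when KUBECONFIG isn't set
mkdir -p ~/.kube
mv ~/Downloads/django-k8s-kubeconfig.yaml ~/.kube/config
kubectl get nodes   # now works in any terminal, no env var needed
```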
Going forward, we're going to deploy your first container on Kubernetes: we'll deploy nginx to the cluster, and then we'll destroy that same container. That's all we're going to do. It takes a little configuration to get there, which probably isn't surprising, but it's important to reach the point where we know how to make a deployment of any kind of container; that's all this is, a single deployment. Okay, so let's jump into the project. I'm going to create a folder for all of my Kubernetes-related items called k8s; inside it I'll make an nginx folder, and inside that, deployment.yaml. The command I'll eventually use is kubectl apply -f k8s/nginx/deployment.yaml. Where this file is located actually does not matter at all, and it also doesn't matter if we accidentally delete the YAML file, assuming the deployment is already in production; I'll show you why a little later. So the very first step is declaring a deployment. I'll declare apiVersion, which is going to be apps/v1; we'll take a look in a bit more detail at where to find these values, but when we need our container to run on the Kubernetes cluster, this is what we use. Next the kind, which in this case is Deployment. Next we add some metadata and give it a name: nginx-deployment. It's my arbitrary name, but it's a name I want to remember, as we'll see in the next few lessons. Next we declare the specification, spec. First we need a selector of some kind: I'll say matchLabels with app: nginx-deployment, the exact same name as the deployment metadata. Next we do template, and we add metadata for it as well (these are required): labels, and again app: nginx-deployment. This is just stuff you'll write on a regular basis for a deployment, and I'll show you the documentation for it in a moment so you can see it. Finally, under the template we declare a spec, and then containers. I can have multiple containers in here; the one I'm using I'll name simply nginx, the image is nginx:latest, and then we declare ports with a containerPort of 80.
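Assembled from that walkthrough, k8s/nginx/deployment.yaml comes out looking like this:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      selector:
        matchLabels:
          app: nginx-deployment
      template:
        metadata:
          labels:
            app: nginx-deployment
        spec:
          containers:
            - name: nginx
              image: nginx:latest
              ports:
                - containerPort: 80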
Okay, so all of this configuration is referenced in the Kubernetes documentation, or it's something you'll use so often that you'll just know it, or you'll reference it from something else. The image line is pretty critical here; if you don't understand that part, I'll explain in the next one how it works, but it's coming directly from Docker Hub. And when I say how it works, I mean literally how we would run this deployment. So, the file is ready to go; I hit enter and, well, I actually had the command written incorrectly: it should be kubectl apply -f k8s/nginx/deployment.yaml. Hit enter, and it deploys based on that file. We can check the deployment with kubectl get deployments; a Deployment is literally a kind, so I can get those deployments just like I can get nodes from the DigitalOcean cluster, same sort of thing. Notice it shows me a deployment, which is pretty cool; that deployment is in effect. I can also run get deployment with that specific name to see it in more depth, because realistically, as you grow with Kubernetes, you'll end up with a lot of deployments, not just this single one. But the big question is, how do I actually access this deployment from the outside world? By access I mean either going directly into the container itself, or having end users able to reach the container. I'm going to expose it to the outside world in the next part, but accessing the container itself takes a slightly different step, and this introduces the concept of pods. If we do kubectl get pods, it shows the running instances of this deployment. In other words, I can grab one of these pods and delete it, so kubectl delete pod and the pod name (after a small typo on my part); that literally deletes the running instance, and Kubernetes will provision a new one for me, as we'll see in just a moment once it finishes; it's doing all of that at the same time. Now, don't get bogged down by all these commands; you're not going to remember them all right now, just bear with me as I explain them. But notice that our pod has changed: its name is different now. As for the deployment itself, notice there is only one pod. If I wanted more instances of the nginx deployment, I'd come back into the spec and add replicas with however many I want; the default is one, and I'm going to set three. Now that I've made a change to the deployment, I run kubectl apply -f k8s/nginx/deployment.yaml again, and this changes the number of pods. These pods are distributed across my nodes, across those virtual machines, all running as efficiently as they possibly can. What if I put 30 replicas here? Again, I just apply the change, then look at get pods: it is now literally trying to create 30 instances of this container.
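The command loop from this section, with the pod name as a placeholder you'd copy out of the get pods output:

    kubectl apply -f k8s/nginx/deployment.yaml
    kubectl get deployments
    kubectl get deployment nginx-deployment
    kubectl get pods
    kubectl delete pod <pod-name>    # a replacement pod is provisioned automatically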
Which is incredible; that is so cool. I can also set it back to 3, apply again, and again it reconfigures that way. Of course, it doesn't happen instantaneously: it has to terminate the pods it created, but it keeps the ones already running (or at least three of them), and over time the count comes down to the number we want. There's a lot more we could say here, but I did mention getting into one of these pods. We grab one and run kubectl exec -it, then the pod name, then two dashes, then the command we want to execute, which is /bin/bash. So: kubectl exec -it <pod-name> -- /bin/bash, and this brings me into the running container itself. Those of you who know Docker well will say, hey, that looks a lot like the docker command for getting into a container's bash shell, which is exactly what it is. There's a lot we can do in here, and we will do a number of these things when we deploy our own container, that is, our Django web application, but the key thing is seeing that we can run a deployment really simply. These containers are running, but we can't access them from the outside world; I don't have an IP address anywhere to visit. We'll cover that in the next portion for sure, but for now I want to take this deployment down, and it's incredibly easy: kubectl delete -f and then the file. The file gives the full description of the deployment, and kubectl is smart enough to know exactly which deployment it is and where to delete it. So kubectl delete -f k8s/nginx/deployment.yaml, hit enter, and it's gone. Now kubectl get pods shows the pods being destroyed, and get deployments shows no deployments any longer; the pods still take a little time before they fully go away. There is also this thing called the default namespace, which we'll touch on briefly later. Again, Kubernetes is huge, but deployments are, I think, the fundamental aspect of it: you have deployments, and then you build things around those deployments. So we need to see one of those things next, called a Service. Let's take a look.
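Recapping the shell-access and teardown commands from this section:

    kubectl exec -it <pod-name> -- /bin/bash     # open a bash shell inside a running pod
    kubectl delete -f k8s/nginx/deployment.yaml  # tear the whole deployment down
    kubectl get pods                             # pods show Terminating, then disappear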
All right, so now that we have this deployment configuration, it's time to expose it to the outside world. The first thing I want to do is make sure the deployment is up, since we actually took it down: kubectl apply -f k8s/nginx/deployment.yaml, then kubectl get pods to confirm, and what do you know, it's up really fast. Cool. Now let's create a new file in the nginx folder called service.yaml. Inside it we declare our apiVersion, and this time it's simply v1, not apps/v1. We declare that the kind is Service, and metadata with the name nginx-service. Then we declare the specification, spec, just like we did with the deployment. Next we give the type of this service, which is going to be LoadBalancer. Now, some of you might know nginx really well, the actual nginx service and tool, and know that nginx can be configured to act as a load balancer. That's not what this is. This is going to be a DigitalOcean load balancer that distributes our traffic across the various replicas inside our deployment, directly across all the running containers. It's not exactly how nginx works, but it's close, in that distributing traffic is what a load balancer does; I just wanted to mention the distinction for those of you who know what nginx is and what it does well. Next we declare the ports we're exposing to the outside world. We give the port a name of some kind; I'll say http, because that's typically what you start with. We declare a protocol, and we just want TCP, the typical one. Then the port itself: 80 is the standard port you open to the outside world, so visitors don't have to declare a port in the URL, which I'll explain more in a moment. Next is targetPort. What is this exactly? I'll say 8000 for the moment and leave it as is. Next we do the selector, with app: nginx-deployment. Part of the reason deployment.yaml has nginx-deployment written three times is just to keep this selector simple. There are other uses for these labels, and there are sometimes cases where you want them to differ, but typically, especially while you're learning, I just make them all the same regardless of what the container is. Now, to address that port up top: it's the one used with the actual domain you'll eventually have (definitely not this, but let's say that domain), where the connection is http:// and the default port is 80.
So whenever you go to a domain like that, the default port is 80, versus something like 1234, which is what you'd otherwise have to append. We'll leave it at 80 so we don't really have to think about adding a port to the end of the URL; in some cases, for some services, you will definitely want a custom one. Now, this targetPort, what is it? It correlates directly to our deployment, to the app we selected: it's the containerPort the deployment's container is running on. We're telling Kubernetes what port the app runs on; the service itself is not going to verify that, so we have to make sure we know it. So we set targetPort to 80, just like our deployment has. Now that we've got a deployment and a service, it's time to spin the service up. Let's verify the pods are running, and also do get deployment to verify the deployment is up and ready. Then run kubectl apply -f k8s/nginx/service.yaml, hit enter, and it has been created. Much like get deployments, I can do kubectl get services (or just get service) to show all the services. Notice the external IP says pending, and it shows me those ports: look, port 80 is there, pretty nice, and it's 11 seconds old. So where is this load balancer being created? Back in the DigitalOcean console, we could go into Networking, but let's go into our Kubernetes cluster instead (that's probably a good idea) and click Resources, so we can verify this service is part of our cluster. And what we see is a load balancer showing up, directly correlated to this service.yaml file. If I click on it, it takes me to where the DigitalOcean load balancers live; a load balancer is something I could provision outside of Kubernetes too, and that's exactly what's happening here: Kubernetes is provisioning a DigitalOcean load balancer for us, which is pretty cool. Right now it's still being created, but if we want to look at this load balancer again, or any of our load balancers, we can go into Networking, click Load Balancers, and see the ones being generated along with their IP addresses. The nice thing, in terms of the DigitalOcean console, is that I can map a domain to that load balancer, to whatever the IP address ends up being. It does take a little time to provision, so give it a moment; if it feels like it's going slow, you could always run apply again, which almost certainly won't do anything, but it's something I do when I'm getting a little impatient: hey, apply this, apply this, apply this.
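Put together, k8s/nginx/service.yaml looks like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      type: LoadBalancer
      ports:
        - name: http
          protocol: TCP
          port: 80
          targetPort: 80   # must match the deployment's containerPort
      selector:
        app: nginx-deployment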
While it's still loading, I wanted to mention how we actually recover if I accidentally deleted the deployment and service YAML files on my local machine. On one hand, hopefully you're using version control, as I am, and that will bring them back. On the other hand, there's a way to export, or at least view, what's actually happening in any given deployment, pod, or service. We can run kubectl get service, which lists all the different services; then get service with the specific service name, which hones in on it; and then add -o, as in output, with the format we want it to output in, json or yaml. If I do that with yaml, hey, what do you know: it's a lot more data than the bare-minimum file needs, but it does include everything I created for this particular service. What you could do, if you're interested in making changes from it, is put it into a new YAML file. So this is me sort of emulating recovery: you can start deleting the fields you don't recognize, or just delete things willy-nilly and see what happens. Say I delete this entire section here, and let's delete all of this too, and save the result as backup.yaml (we probably won't even keep this file in a moment). Now I run kubectl apply -f k8s/nginx/backup.yaml, hit enter, and it reports the potential errors that occur, which is kind of exactly what you want to see when you're coming back: you go one by one getting the thing back to where it was. It's a trial-and-error situation, and the chances are really good that you'll hopefully never need to recover. Notice that metadata is empty in my file; I want to make sure I'm at least using that, and I noticed because the error said the resource name was empty. I fill it in, run it again, and there it goes: it configured something. It seemed to make a change, though I don't think it actually changed anything, because if I run it again it says unchanged (really meaning this specific file is unchanged). Now kubectl get services shows an external IP. So recovering a service that fell down is probably not something you'll do very often; what you might do fairly often is take a quick glance at what's going on inside any given service, and that's the same command. Pretty cool.
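So the recovery round trip looks roughly like this:

    kubectl get service nginx-service -o yaml > k8s/nginx/backup.yaml
    # edit backup.yaml: keep metadata with a name, strip the generated fields you don't recognize
    kubectl apply -f k8s/nginx/backup.yaml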
Now that this is exposed to the outside world, we can verify by going to that external IP address. Hit enter, and what do you know: Welcome to nginx! That shows us the nginx container is running, and it also shows that the service is running. Okay, cool. Next I'm going to delete that backup file. Something you'll see fairly often is these configurations being combined, so let's make combo.yaml: you'll often see the deployment in there, then three dashes, which signify another configuration, and then the actual service as well. Run apply against combo.yaml, and it handles both of those things. The nginx service looks like it changed a little, but only because I had gone back to service.yaml beforehand; if I run it again, it's not going to change at all. You'll see this combo pattern a lot, especially in documentation, where it's bringing two things in together. The way to spot it is those dashes separating them; more than likely, the apiVersion and the kind will differ from item to item, and you can actually have a lot of different deployments all written out in one file this way. The only reason I keep them separated is to make it really nice and simple to see what's going on in each one, so you can use them as reference in the future. More than likely, when we come to Django, we'll bring the combo back, so it's all mapped into one single thing. Now I'm going to leave you with the challenge of bringing this service down and bringing the deployment down; both should be incredibly easy at this point. Next, we're going to deploy a minimal FastAPI application that I have on GitHub as iac-python; I also have a public container for it at this location. The fact that it's a public container is important, and the fact that the container is already built is important; other than that, it's pretty much the same configuration, as we'll see in just a moment. So inside k8s I'll make a new folder called apps, and in there, iac-python.yaml. I want to declare basically the same thing, so I'll bring in the deployment first, then the three dashes, then grab a service and bring that in too. The deployment itself becomes my iac-python deployment, so I just replace the nginx bits with simply iac-python. The image itself is different: this is the Coding for Entrepreneurs image of iac-python, and it's coming directly from the location up in the URL, or more specifically from the docker pull command; whatever that says is what you want here, plus whatever tags may exist (this one has only one tag at this point). That's pretty much the entire difference between the nginx deployment and my custom iac-python container; granted, you didn't see me put the image together, but it's there and it exists. Now, one attribute of iac-python that's very similar to what we have in our own Docker setup is the entrypoint.sh along with the app PORT configuration. I do this a lot in my Docker images, as you've seen, so the PORT environment variable is something I can configure, and I can set it right here in the deployment: env, then name, the actual environment variable name, in this case PORT, then the value I want to set. The value must be a string, so "8181", for example. Then the container port we expose to our other services (that is, to the iac-python load balancer service) should be mapped to that same value, and you might recall that down in the service, the targetPort needs to match it as well. And of course I change the selector to iac-python for the deployment. Now we have everything configured as we need for both the deployment and the actual production service. What's cool here is having these environment variables: for this particular variable, it's okay if it gets exposed.
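Here's the rough shape of k8s/apps/iac-python.yaml based on that description; the image path and the service name are my best guesses from the video, so double-check the image against the repo's actual docker pull command:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: iac-python-deployment
    spec:
      selector:
        matchLabels:
          app: iac-python-deployment
      template:
        metadata:
          labels:
            app: iac-python-deployment
        spec:
          containers:
            - name: iac-python
              image: codingforentrepreneurs/iac-python  # guessed; use the repo's docker pull name
              env:
                - name: PORT
                  value: "8181"    # must be a string
              ports:
                - containerPort: 8181
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: iac-python-service
    spec:
      type: LoadBalancer
      ports:
        - name: http
          protocol: TCP
          port: 80
          targetPort: 8181   # matches the PORT env var above
      selector:
        app: iac-python-deployment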
Which port the app uses is not that big of a deal to reveal. It is definitely something we need to figure out, though, when we bring our Django project into production, because we don't want all of our environment variables in this YAML file at all; we want a more secure way than that. But every once in a while you'll see environment variables that make sense in the manifest, and this one absolutely does for this particular project. So let's create it: kubectl apply -f k8s/apps/iac-python.yaml, hit enter, and of course it configures that for us, as you see. We can do kubectl get pods, which shows the pods being created, if they're not already. I did some testing, so in my case they created really fast: this container image was already on my Kubernetes cluster, so it went really, really quickly. In your case it might take a little longer, though probably not by much. We can also see the deployments, which is what's going on there, and finally the services; the service itself is going to take a bit of time. While that's running, I'll take a look at one of these pods: again, kubectl exec -it, the pod name, then two dashes and the command we want to run, which has just been /bin/bash. Now, this application itself is actually not a whole lot different, as far as the configuration is concerned, from the Django project. We can verify that by looking at its Dockerfile: it has a couple of extras you might end up using, but what it does is create a virtual environment in the same location, and then install the requirements into that same location as well, at least as it currently looks (it's using a different version of Python, and that could change over time). The general idea is that there's something in here we can reference without doing all of the Django configuration. So I can activate that virtual environment with source /opt/venv/bin/activate (after a couple of typos on my part, there we go). Now if I do pip freeze, I see all the requirements for this project: fastapi at whatever version, a few other things, and gunicorn. The gunicorn part is the important one, because of how similar the Dockerfile and the entrypoint are between Django and FastAPI: essentially identical, except FastAPI has one extra piece that has to do with the uvicorn worker, which Django doesn't need. Otherwise it's almost identical, which is actually really cool, and it also shows us a light at the end of the tunnel: we're actually not that far away from getting Django to work in Kubernetes.
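Poking around inside one of those pods looks like this; the /opt/venv path is how this series' images lay out their virtual environment:

    kubectl exec -it <iac-python-pod-name> -- /bin/bash
    source /opt/venv/bin/activate
    pip freeze    # fastapi, gunicorn, and friends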
It's true, we're not that far away, but we need to solve a couple of things first. The first is making a private image, actually having a private container repository rather than the public one on Docker Hub; we're going to use DigitalOcean for that. Then we need to get our environment variables right, and the final thing is wrapping in our database (we might need the database before some of the other pieces, but that's the general idea of where we're going from here). Hopefully at this point the load balancer is almost done; I'm going to pause, and then we'll come back and see this thing actually, finally working. All right, my load balancer finished, as we see here, so I can check out this IP address, and yes, it's absolutely working. This is now a Kubernetes, production-grade FastAPI application. Of course it's missing a database, which is a whole other story, but the general idea is it's totally working, directly from all of our configuration. If I exit out of that running pod and do kubectl get services, I should be able to see the external IP; it's not there yet, it's not showing up in this service listing, which is interesting. It probably will very soon; it definitely did finish provisioning as far as DigitalOcean is concerned, it just isn't updated on my kubectl side yet, which is curious. Nevertheless, it does work, and it absolutely is a Python-based Docker container image that has now been deployed into production on Kubernetes, so I'm really excited to keep going and get us to the next few parts we need for Django to work. In a moment we're going to provision our private container registry, but before we do, I just want to mention why we need a container registry at all, and why it needs to be, or should be, private. If you run docker compose up --build on your local machine and hit enter, it builds your container images if they need to be built. Now, in our case we're using the public container images for Postgres and Redis, so I don't have to build them locally: they already exist somewhere my local machine can reach. It goes to Docker Hub and downloads them, much like Kubernetes just did; it was able to access that image because anyone with internet access can pull a public container. Kubernetes does that really well, and it does it with all our other images too: we've got our Postgres image, the public one anyone can use, and the Redis image, same thing. Our own application, on the other hand, we do not want to be public, but we still need to build it, and the built image still needs to be put somewhere. That somewhere has to exist: if I do docker images on my local machine, I see all the container images that already exist there, including my django-k8s one, which I created with Docker Compose a while ago. Whether you have it already or not isn't really the point; the point is that this image needs to be stored somewhere private. The second part is that Kubernetes needs access to pull a private container image, and that access is much easier if it's all managed by DigitalOcean, which is why I recommend using DigitalOcean's private container registry: your Kubernetes cluster will have access to those repos a lot more easily. Notice that the starter tier gives you one repo with up to 500 MB, and our Django project at this point is only 278 megabytes, so you should be able to do this for free. Me personally, I use the professional tier, because I have a lot of different repos that I use constantly; $20 a month, in my opinion, is not bad given what it does.
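As a quick reference, the local commands mentioned here:

    docker compose up --build    # builds your own images; postgres and redis are pulled, not built
    docker images                # lists local images, including the django-k8s one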
It saves you a ton of time in storing your private repositories, even over something like Docker Hub, because there are additional pieces you'd have to implement for Docker Hub to work with DigitalOcean; if you want to see that, let me know in the comments, but we're going to stick with this one. I'll select Professional; feel free to do Starter just to see if it works for you (I think it will), and worst case you provision Basic, which gives you quite a bit more storage for only five dollars a month. I'll call this registry cfe-k8s and create it; of course you can't use this exact registry, you'll create your own custom one. It takes a little time to provision, and then we need to actually connect to this registry somehow, so we can push our container images into it. One of the best ways is the DigitalOcean command-line tool, doctl, but it's a little tricky to get up and running on Windows (at least I've had some issues with it), so we'll go with what is, in my opinion, the easier method: just using an API token and logging into the registry directly. When we put this into a GitHub Action, we'll go back to using doctl and its registry login, because that login can be set to expire, which I think is cool too; feel free to use either way. So, in a moment we'll use an API token to log in, then build and push a container image. Let's log in to our recently created DigitalOcean registry through Docker; you need to make sure your local machine has Docker on it. If you don't have Docker locally, we'll show how to implement this later in a GitHub Action, so you don't actually have to touch Docker at all locally, but I think it's a good idea to practice this anyway. We run docker login registry.digitalocean.com, and what we put in for both the username and the password is the API token; this comes directly from their guide: paste API token, paste API token. Let's create one: go down to API, Personal Access Tokens, Generate New Token, and give it a name; in this case I'll do django-k8s-local-token. Generate it and copy it (keep in mind you might want to delete it, regenerate it, or otherwise rotate it from time to time). I paste it as my username, then as my password; you can't see either one, it just takes the same value twice. Login succeeded. Great.
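The login itself, per DigitalOcean's guide, uses the API token for both fields:

    docker login registry.digitalocean.com
    # Username: <paste your DigitalOcean API token>
    # Password: <paste the same API token>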
Okay, so now I need to build my container image for the web application. I'll cd into web and build it: docker build, the normal command, but we need to tag it with something corresponding to our recently created registry. The registry URL is the root of the tag; the next part is the actual image name, which in my case I've been calling django-k8s (we need that entire string); and then we can tag the version as whatever we want, like v1 or latest. I'll leave it as latest. So I build with that tag, declare the Dockerfile I want to use with -f (the standard one, which is the default anyway), and use the current folder as the build context, then hit enter. In my case all of this was built somewhat recently, so it won't take long; if yours wasn't, it might take a good amount of time, since it also runs all of the Python-related installations. After it builds, we take that same tag string, without the :latest part, and run docker push with it plus --all-tags, which pushes all the tags I might have for this built image. At this point the multiple-tags part doesn't matter much, because I only have this one tag, but I can add additional tags in a build script, and I can also tag already-built images later. This push becomes a lot more important once we put it into GitHub Actions, which we'll do later; it becomes a critical piece of all of this much, much later. In other words, I could have just pushed the single tag and that would have been fine, but as we see, it pushed a number of layers, and that is actual storage size being used. Now, inside our container registry (it's not only for Kubernetes, but it is our container registry), I can refresh and see some data: there's my django-k8s image from 21 seconds ago with only one tag, and if I open it up I can see the other tags, if I had them, as well as the actual contents. Look at that, it's actually not that big, so you should definitely be able to use the free tier. The next part, now that I can build and push, is integrating it with Kubernetes, and look at this: I can just integrate it with all clusters, or select a single one if I have multiple, and this allows Kubernetes to pull images from here. It's read-only access, but it's allowing the pull, and it updates everything you need for this, which I think is so cool. I'll hit continue, and now I don't have to think about this stuff. Yes, there is a way to do all of that manually, and there are ways to do it with other container registries, but if it is just that simple, why would you need to? That, to me, is the point of using the DigitalOcean container registry among all the others. Of course, if you already have a bunch of images on a different registry and just want to use that one, there are ways to do it with Kubernetes; check the Kubernetes documentation, or the documentation for the private registry you might be using, because it does take a bit of work to get to the point where you can use that other registry,
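So the build-and-push pair looks like this, with cfe-k8s standing in for whatever you named your registry:

    cd web
    docker build -f Dockerfile -t registry.digitalocean.com/cfe-k8s/django-k8s:latest .
    docker push registry.digitalocean.com/cfe-k8s/django-k8s --all-tags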
if you're so interested. But I'm going to imagine that for most of you this is a new look at Kubernetes, so things are all good there. Now, before going any further with the Kubernetes side, one thing I wanted to mention is that you can actually use this image inside App Platform: go to Create App, navigate down to your DigitalOcean container registry, and you can pick that Django project. This is another cool aspect of the container registry: I can deploy really quickly into App Platform just to see if my Django project is working correctly, or to test a different version (you can see there are different tags), all the sorts of things you might consider doing prior to promoting it to your Kubernetes cluster. Definitely another advantage; just something worth mentioning. Okay, cool. Now that the container registry is set up, I actually want to get my managed database set up next. We're going to provision a managed database on DigitalOcean. The first thing I'll mention is that you can absolutely provision and manage your own database through Kubernetes, but a managed database saves us a ton of time and a lot of potential headache in the long run. If you want to learn how to manage a database, it's definitely well worth learning; it's just not something we're going to do at this point. So navigate to the Databases section in your DigitalOcean console and hit Create a Database Cluster. We're using Postgres; notice that Redis is also in here, so if you need Redis, by all means add that as well. The machine type is going to be the basic one at $15 a month; essentially, if you need a bunch of different databases, the bare minimum is $15 a month for each of these production-grade databases. These aren't really hobby databases; they're made to run in production, so that's what we're going to stick with. Next, we need to choose the same data center our Kubernetes cluster is in. You might have already forgotten which one you selected, especially if it was New York or San Francisco, where there are a couple of options; if we open the Kubernetes tab, we can verify the cluster. In my case it's NYC1, which of course is New York 1, so I want that exact same data center (it also shows the different droplets for both, but we're sticking with the same region). For the VPC: as we mentioned when we provisioned the Kubernetes cluster, we want to use the same VPC that's there. One way to check is to go into the Networking tab, open VPC, and verify the VPC and the resources currently in it; right now our Kubernetes cluster is there, and this we absolutely need to have the same (maybe at some point they'll add a quick glance for the VPC networks here). Next, I'll create a unique name for this; I'm going to call it django-k8s-db-postgres, and I don't actually need the remaining auto-generated suffix at this point. The project, I don't think, really matters much; I'm going to leave it in the same project as my other resources.
So now I'll create this database cluster. While that's provisioning, we're going to jump back into our local project and create a new file called .env.prod; this, of course, is going to hold our production environment variables. I'll copy our original .env and paste it in here, and right away I want to add .env.prod to my .gitignore file; even if you don't have a .gitignore, it might be a good idea to create one just to make sure this is in there. We also want to put it in our .dockerignore file. The reason I keep these production environment variables locally at all is really just for my own reference, and for when I need to provision the proper Kubernetes secrets from these values, which is what we'll see in the next few parts. Okay, so while the cluster is being created, we can go through their Getting Started steps. Add trusted sources, for example: you can add your computer's IP address as a trusted source, which is pretty cool, and you can also add various node pools or the entire Kubernetes cluster. I'm literally going to allow only my Kubernetes cluster access to this database; that's it, nothing else. For connection details, we can use the public network version, which is not something we need; we want the VPC network version, which is fantastic: we have a way to connect over our VPC. So now we add all of our parameters to .env.prod: the username is going to be doadmin; next is the password as written in the console; then we grab the host name, no surprise there; the port; and then the actual database itself, which is just simply defaultdb. And there we go, we now have all the parameters necessary for this to work. We also have this thing of SSL mode being required; that's something I'll come back to in my Django settings. It's not something I'm going to do just yet, but it will be really easy to come back to, as we'll see. Go ahead and hit continue. You'll also notice a quick way to copy data in from a pre-existing database; there are a bunch of ways to do this, so if you're migrating this project from another database and need that data, it's actually pretty straightforward, although it's outside the context of what we're doing here. So that's it as far as provisioning our database, but there are a couple of other things I want to add to my .env.prod, one of them being my superuser password. I'll activate my virtual environment with source venv/bin/activate (oops, we need to navigate back a little first, then do that again), and then run Python with -c: we import secrets and print out secrets.token_urlsafe(32). It looks like I had a syntax error on the import; fix that real quick, and there we go: we have a new secret we can use for the superuser password, just for when it's generated for the first time. We can always change it later.
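The corrected one-liner for generating that password:

    python -c "import secrets; print(secrets.token_urlsafe(32))"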
For the Django secret key, we can absolutely use this exact same approach, or we can use the one-off random secret key generator that comes directly from Django itself, and that's what I'll use. That command will be in the comments below, and it's also on our blog post called Create a One-Off Django Secret Key, which I've mentioned before. Anyway, we've now got this secret key, so we can add it in here too. The two generators actually aren't a whole lot different; the Django secret key might have a few more special characters, because it's not URL-safe like the other one, which can be used in a URL if need be. Okay, so now we have all of our environment variables pretty much prepped. We want to bring these into Kubernetes prior to creating our Django deployment, and that's a critical piece: our Django project needs access to these values, and of course we don't want to build our Docker container with them baked in, because then they stay in the image, and that's not particularly secure. Instead, we're going to use Kubernetes secrets, and that's what we'll do in this next part. When we deployed the iac-python project, we saw that I had environment variables right inside the deployment declaration; of course we don't want to do that with sensitive keys. Instead, we use Kubernetes secrets, and you can actually create one from an env file. If I run kubectl get secrets, I can see pre-existing secrets that are already in there; one of them is for our private container registry on Kubernetes, which is really cool. But the other part is that we want to add a new one for our production environment variables, and it's really straightforward: kubectl create secret, then the secret type, which in this case is just generic. Then the name: django-k8s-web-prod-env. I think that name makes the most sense: it's the django-k8s project on that cluster, it's the web deployment we'll eventually have, and it's the production environment variables, which in this case live in .env.prod. So that's the name of our secret, very much like the name of the secret for the private registry, but just for environment variables. Of course we need to declare what goes into it, and it's pretty simple: --from-env-file, then where the file is, in our case web/.env.prod. I think I'm running this from the project root; since I'm not sure, I can copy the command, hit Ctrl+C, list everything out, and yes, I am in the root of the project, so I paste it back in and run it. It created the secret for me, and if I look at all the secrets again, I see this new one. To check it out, I can do kubectl get secret with that secret name and output it as YAML, to review what's actually being stored in there, and it's all of this. Notice it's somewhat opaque, in the sense that it's not regurgitating the raw values back to me: everything is encoded in base64, which gives me a little more confidence that even if I accidentally gave this output away, the passwords and keys aren't sitting there in plain text (though keep in mind base64 is encoding, not encryption).
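The secret lifecycle from this section, run from the project root:

    kubectl create secret generic django-k8s-web-prod-env --from-env-file=web/.env.prod
    kubectl get secrets
    kubectl get secret django-k8s-web-prod-env -o yaml    # values come back base64-encoded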
Okay, so now the secret actually exists, but one thing you might be wondering (certainly one of the things I wondered right at the beginning) is: what if any of these values change? How do we update the secret? What I end up doing is deleting it and running the create again. It's really as simple as kubectl delete secret with that secret name, then running the create command once more (let me just go up in my history), and boom, it's back. This will make a lot more sense once we put it into a deployment, and then once it's all inside a GitHub Action, so all of this is automated while still giving us a reference for doing it manually; both of those are a win-win, with the GitHub Actions piece still coming. Anyway, we now have those secrets, so we're ready to design our Django deployment. There might be a couple of things left to finish off: the actual deployment file, the service, and everything that goes with it. But generally speaking, we now have our Kubernetes cluster; we have the Docker image for our Django project built and in the private container registry, which Kubernetes has access to; and we have all of the environment variables necessary for that Django project, including everything we need to connect to our production-grade Postgres database. So yes, I think that means we're ready for some level of deployment. In a moment we're going to create our Django deployment and service, but before we do, we need to address the SSL mode on our database: notice it says SSL is required, so you actually have to connect securely to this database. That isn't always true of production databases with Django, so to set it up, we'll jump into our local .env and add a new variable, DB_IGNORE_SSL=true. The reason it lives locally is that my local environment is where I should relax things to some degree. Then, in my Django settings, underneath DATABASES (I can put it inside the if-database-is-available clause or just underneath it; I like putting it together), I check that this flag is not set. The reason I compare against the literal string true is to make it very, very clear: I want the connection to be secure all the time unless I specifically say otherwise in my environment. So: if not os.environ.get("DB_IGNORE_SSL") equaling "true", then I update the database: DATABASES, then default, then OPTIONS, set to sslmode require (keeping the double quotes going). It's really just that simple to enforce SSL mode, and in our local development we do not want it, because we don't have SSL set up that way locally. Okay, so now we have that out of the way.
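In settings.py, beneath the DATABASES configuration, that toggle looks roughly like this; note the correct value is require, not required, a typo that bites later in this series:

    # settings.py, after DATABASES is defined
    DB_IGNORE_SSL = os.environ.get("DB_IGNORE_SSL") == "true"

    if not DB_IGNORE_SSL:
        # managed production databases require SSL connections
        DATABASES["default"]["OPTIONS"] = {"sslmode": "require"}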
There's one more thing I actually want to update as well, and that's inside my container registry. When I created the first container image, I just called it django-k8s, which adds a little layer of complexity, or at least confusion, down the line, because our Kubernetes cluster is also called django-k8s. So I want to update that a little, and I have the build-and-push notes right inside the project itself if you need to go in and check them. All I'm doing is navigating into my web folder and building the image again, now with -web on the name, just to make it very clear it's the django-k8s Kubernetes cluster and the web service we're running in it; that's how I'll address it going forward, to keep things more straightforward and non-conflicting in our configuration. The main reason for this is that when I come in here next, inside k8s/apps, I'll create django-k8s-web.yaml. I could give it its own folder like I did with nginx, but the nginx folder was mainly to separate things so we could visualize them better, not how I actually end up organizing these; in practice I do it much like we did with iac-python. So now that I've built this image, I do need to send it to my container registry with that tag, with the push command; this is just review of what we've already done, and when we add it to GitHub Actions, all of these steps will be written out a little more clearly as well. In a moment the actual repository will be available for us. Okay, with that housekeeping out of the way, let's jump into the actual deployment. A while ago I mentioned that there's documentation on the Kubernetes website, kubernetes.io, for all of these features, like a deployment. If you search for deployment there, you'll come to Deployments, and it gives you some examples; hey, that looks familiar: it's essentially the nginx deployment we did, maybe different by a little bit, but overall basically the same thing. I think that's pretty cool, because the documentation shows this stuff too. And what about Service? We search for service, click in, and we just did that same thing; it's also very, very similar. Obviously I walked through all of that context for you, but the Kubernetes documentation is, I think, really well thought out; there's a lot to it if you sort of know what you're looking for, and if you don't, there's a lot you can experiment with, which is great. Okay, so at this point our container image has been pushed, so we can go into the container registry again, and there it is; I see that container, and I can delete the old one, which is exactly what I'm going to do, since I will not be using it at all. Okay, cool. Now that we just have the single image, it's time to create our deployment. Like I said, you could search the documentation for deployment (and obviously I have the code on my GitHub repo that you can use): the apiVersion is in there as apps/v1 and the kind is Deployment, so you can do that exact same thing, copy the example, paste it all in, and just change it however
you see fit. In our case, I'm calling this django-k8s-web-deployment, and we can use that same label across the board; like I said, I like making it really easy on myself and using the same label everywhere. The exception, of course, is the container image, which is the new thing for us, because of where it actually exists. Well, where does it exist? If we go into our container registry, it's right there, so we need to include the registry link, then our actual image name, and then which tag we want; I'll default to latest. Then I name the container accordingly. Now, by default, the deployment will not just be able to pull this image; there's a secret involved in connecting to it. So what is that secret? I'll run kubectl get secret and look for the one matching my container registry name (and again, we can verify that name inside our DigitalOcean account). That secret was automatically generated on our Kubernetes cluster when we connected the registry to it; if you remember, we integrated it with all clusters (there's presumably a way to connect it again, or to other clusters), and that connection added these secrets for us, so I don't need to do it again. I can verify the secret by getting it, and add -o yaml to look at it in a little more detail. Unfortunately, what this secret is not showing me is how to actually use it inside this container. What actually ends up happening is that connecting the registry not only creates this secret, it also adds the secret to what's called a service account. So we can do kubectl get serviceaccount for the default service account that's automatically generated for us, output it as YAML, and take a look: it has this thing called imagePullSecrets. Our deployment will use this default service account by default, and imagePullSecrets is the configuration we need to add into our own deployment; it goes one level back up in the pod spec, alongside containers. One note on the naming, since I'm sure this is a little confusing: the secret is named after the private container registry, but our imagePullSecrets entry is referencing the secret itself. The general rule of thumb is this: when we connected our private registry, it automatically created a secret for us, and it also updated our default service account; that service account is how we're able to access a variety of things, in this case specifically this image. But of course we're not done, because there's another secret we want to use.
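If you want to poke at what that registry integration created:

    kubectl get secret                            # shows the registry secret DigitalOcean generated
    kubectl get secret <registry-secret-name> -o yaml
    kubectl get serviceaccount default -o yaml    # note the imagePullSecrets entry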
Of course we're not done, because we have another secret to use. Back in our secrets there's django-k8s-web-prod-env. What's cool is that inside the container declaration we can add envFrom with a secretRef whose name is django-k8s-web-prod-env, and every key in that secret becomes an environment variable. I can also declare individual variables: env with a name of PORT and a value of whatever I want, in this case "8001". So that's our Django deployment: pretty straightforward, hopefully, or at least mostly. The final thing I need is the service, which I'll just copy and paste since we've done it a couple of times now: django-k8s-web-service, a LoadBalancer targeting the same port as the container port and selecting our django-k8s-web deployment. Perfect.

Now back in the root of the project, run kubectl apply -f k8s/apps/django-k8s-web.yaml, then kubectl get pods -w to watch the pods being created (the -w flag watches; Ctrl+C cancels the watch). It absolutely creates them for us, which is super nice; it takes a little time, but now they're running. Like we've seen before, kubectl exec -it <any-pod-name> -- /bin/bash (spelling kubectl correctly this time) brings me into that Django container, in the working directory we declared in its Dockerfile. From there, source /opt/venv/bin/activate to activate the virtual environment, and then python manage.py migrate. This, in theory, should work, but instead I get an error: an SSL mode of "required" is invalid. It looks like I made a mistake in our container image: it should be "require". Little things make all the difference sometimes. We'll update everything in the next one because of this error, and once we do, I should be able to run that migration. It's still manual, of course; we still need to automate it, along with a lot of other things, but first let's see how to solve this error in our code.

So in that last one we got this error on our Django project: an invalid sslmode of "required", which of course should simply be "require". The reason to show this error is twofold. Number one, it makes a really severe case for continuous integration and continuous deployment, for example with GitHub Actions, so I'm not doing a bunch of things manually. Number two, it shows how to recover from something like this.
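For reference, a hedged sketch of the env wiring we just added to the container, plus the companion LoadBalancer service. The secret, label, and container names follow the walkthrough; the external port 80 mapping is an assumption.

```yaml
# (fragment: the containers section of the deployment's pod spec)
containers:
  - name: django-k8s-web
    image: registry.digitalocean.com/<your-registry>/django-k8s-web:latest
    envFrom:
      - secretRef:
          name: django-k8s-web-prod-env   # every key becomes an env var
    env:
      - name: PORT
        value: "8001"
    ports:
      - containerPort: 8001
---
apiVersion: v1
kind: Service
metadata:
  name: django-k8s-web-service
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8001          # same port the container listens on
  selector:
    app: django-k8s-web-deployment
```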
Let's take a look at another part of our environment. kubectl get services shows no external IP for this service yet, so there's no real way to access my Django project online just yet. In the meantime, kubectl get pods and then kubectl logs <pod-name> shows me the server running on our port, which is great: at least one environment variable is working, and it looks like the others are as well. To confirm, I'll jump into python manage.py shell inside the pod. The .env file doesn't exist in the container, so are the environment variables really there? Let's grab, say, the superuser username and print os.environ.get for it: and what do you know, it's there. I'd challenge you to verify this yourself by changing your environment variables (as I mentioned, you'd change the secret and redo all of that). We can also confirm the Kubernetes secrets are being used by checking something specific to the production environment, our Postgres host, and sure enough, there it is. So those pieces are working correctly, which I think is really cool, but the actual migrate still isn't, and there'd be no real way to catch that without a proper production test, which we'll absolutely get to.

Now I want to see the service itself. My external IP address has provisioned (yours might have been faster, I don't know), and if I visit it, I get a different error: this host is not in ALLOWED_HOSTS. That's because of a couple of things: in production, Django needs an allowed host of some kind, so I'm definitely adding that right now, and it really should come from the environment variables, so let's change it slightly and set an allowed-host value there. I also notice DEBUG is on, which is a big no-no, so let's get rid of that. Back in settings, I'll read the allowed host with os.environ.get, and if it's set, use it as my allowed hosts. Let's actually name the variable ENV_ALLOWED_HOST so it doesn't conflict with anything, for those of you making sure we're not renaming things incorrectly, and move this down a little.
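A minimal sketch of that settings.py change, assuming the env var name ENV_ALLOWED_HOST from the walkthrough (the DEBUG removal happens separately):

```python
import os

# read the production host from the environment; fall back to an empty list
ENV_ALLOWED_HOST = os.environ.get("ENV_ALLOWED_HOST")
ALLOWED_HOSTS = []
if ENV_ALLOWED_HOST:
    ALLOWED_HOSTS = [ENV_ALLOWED_HOST]
```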
So we have new env values; how do we get them into the cluster? You might recall that when we created these secrets, we used kubectl, so I run kubectl delete secret django-k8s-web-prod-env (you don't have to remember the secret name at all: kubectl get secret lists it, and it's also in the deployment itself under secretRef). I've deleted the secret, but my shell connection into one of those pods is still up, which raises a question: did the secret go away, did the environment variable change? Import os again and print os.environ.get for one of those values: of course it didn't change. It's still there. A future pod likely won't have those values anymore, because the secrets simply don't exist, but the currently running pods haven't lost them. kubectl get pods confirms they're the same pods, about ten minutes old; until I actually delete one of them, we won't see anything different (and I do encourage you to test that). Now I'll bring that secret back, which is really simple: kubectl create secret generic with the same name our deployment references, declaring where the env file is. The new secrets are in, and my settings changes are there too: "require" for the sslmode, plus the allowed host.

So I have a new question: what do we do now that I've changed something in my Django code? Eventually this will all be automated, which is definitely what we want, but think it through: I changed the web code, so I need to build this project again, the entire image, and deploy it into the cluster. So I cd into web, build it again, and push it. This is where tags will make all the difference: right now we're using one single tag, but a good idea is multiple kinds of tags, not just latest but also previous tags, so we can revert to images that were already built. We'll get into that with GitHub Actions. Once the push finishes, what exactly do we do with these deployment changes so they actually take effect? There are a lot of options, and I'll show you a fairly simple one (initially I changed the ports across the board on the service and environment variables, just to see whether that alone did anything for us). With the image now on the container registry, from the root of my Kubernetes project I run kubectl apply -f k8s/apps/django-k8s-web.yaml; before hitting enter, I also run kubectl get pods -w in another pane so we can watch (you don't have to do this part, I just want to show what's happening on my end). I apply the change, and it starts creating new containers: at least one, but actually all three, which takes some time. The big question yet to be answered is whether this image is actually going to be the proper latest one: whether by default it pulls the new image and creates brand-new containers, and how to enforce that even when there isn't a new tag.
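A hedged sketch of this whole manual update cycle, with names and paths following the walkthrough:

```bash
# refresh the prod env secret from the local env file
kubectl delete secret django-k8s-web-prod-env
kubectl create secret generic django-k8s-web-prod-env --from-env-file=web/.env.prod

# rebuild and push the image with the code fix
cd web
docker build -f Dockerfile -t registry.digitalocean.com/<your-registry>/django-k8s-web:latest .
docker push registry.digitalocean.com/<your-registry>/django-k8s-web --all-tags
cd ..

# re-apply the deployment and watch the pods roll over
kubectl apply -f k8s/apps/django-k8s-web.yaml
kubectl get pods -w
```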
This is going to be pretty interesting. I'll let it finish, close out the watch, and check what's left: the old pods are terminating and the new ones are running. The easiest way to verify is to widen the output and look at the AGE column: all of these are less than a minute old. So let's go back to our project in the browser: I refresh, and this time that error goes away. If I go into the admin, I actually shouldn't be able to log in yet, because I haven't run a very specific thing: Django migrations.

At this point you might be thinking: there are so many layers going on, how am I possibly going to remember all of these steps? You're not; there's just too much going on. So we'll end up doing a couple of things. Number one, create your own little notes file that lays out each step. Number two, which is what we're actually going to do, create a GitHub workflow that does all of this for us, so we can reference it at any time. The key thing is that I already have notes for this (this particular project won't be public, but I want to show the general idea): a workflow with the build, the registry login, the Kubernetes cluster steps, apply, and this thing called rollout, which is pretty cool, plus an update file that swaps in the newest image. It's not exactly what we've done, but it's what we're working toward: build the container image, log in, push the built image with a tag, then use that tag inside our Kubernetes cluster. We'll see all of that, which I think is pretty exciting.

The final thing before moving on is running the migrations. I'll close out this window, run kubectl get pods to make sure everything's working, and pick one of the pods: we're doing this the manual way so we can see how it's done. kubectl exec -it <pod-name> -- /bin/bash (I could run the migrate command directly in the exec, but I'll go right into the container itself), then source /opt/venv/bin/activate, and from the root of the project, python manage.py migrate runs those migrations. Even better would be running the bash migrate script, because that does the migration and also creates our superuser based on .env.prod. So back at our external IP address, I enter the admin username, grab the superuser password, and log in.
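A hedged sketch of that manual migration flow; the pod name is a placeholder, and the /opt/venv and /app/migrate.sh paths assume the Dockerfile layout from earlier in the series:

```bash
kubectl get pods
kubectl exec -it django-k8s-web-deployment-<hash> -- /bin/bash

# inside the container:
source /opt/venv/bin/activate
python manage.py migrate

# or run the script that migrates AND creates the superuser from .env.prod:
bash /app/migrate.sh
```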
Right now it's loading; maybe it's connecting, maybe we're having issues, we'll see. I'll just reload the page, since it could be a stale previous session, and there we go: I have access to my production Django Kubernetes project, backed by my production managed database. Really, really cool. We still have a number of steps to take care of: one is starting on our GitHub Actions, another is getting our static files to show up here as well. Let's take a look.

What I want to do now is develop a deployment guide: our own little guide, a markdown file, that's really just the step-by-step of what we need to do from here on out, and then we can discuss each step as it happens. In the root of our web folder, create deployment-guide.md (call it whatever you like, use a .txt file, or skip it altogether; I'll have it in the actual repo as you already know).

The very first thing is to test our Django project. This is incredibly common when deploying any Django or Python code: you test it before it moves through the deployment pipeline at all. For example: source venv/bin/activate, then cd web, then python manage.py test. That test should pass; if it doesn't, nothing else should happen. And of course, the reason we're even writing this guide is that we're going to turn it into actual functional workflows on GitHub Actions.

The next step is to build the container. After the tests pass, run docker build -f with the Dockerfile, tagged registry.digitalocean.com/<your-registry>/django-k8s-web (whatever your actual registry is called; go into DigitalOcean and check your container registry; hopefully this is all review), plus the image name. We tag it with latest, but I also want to bring in another concept here with a second tag: v1. Realistically this will be our commit ID, and once I bring this into GitHub Actions I'll change it to the commit ID, but for now it's v1, then the trailing period. This builds the container and tags it twice, once with v1 and once with latest. Go ahead and run it (if you don't have Docker locally it won't work, and make sure you run the command from within the web folder, otherwise it also won't work).

While that finishes, the next step: push the container to the DO container registry. That's docker push with the root image name, without a tag, and --all-tags: the easiest way to push both tags at once.
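A sketch of those two deployment-guide steps; "v1" stands in for what will become the commit SHA in GitHub Actions, and the registry name is a placeholder:

```bash
cd web
docker build -f Dockerfile \
  -t registry.digitalocean.com/<your-registry>/django-k8s-web:latest \
  -t registry.digitalocean.com/<your-registry>/django-k8s-web:v1 \
  .
# push every tag for this image in one go
docker push registry.digitalocean.com/<your-registry>/django-k8s-web --all-tags
```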
The next step is: do we need to update our secrets? This is sort of optional, and we've definitely already done it, so I'll copy and paste it: first delete the secret, then create a new one. The reason I'm doing it now is that the following step updates the deployment, so if I'm making changes to my secrets, this is the time.

Step four: update the deployment. I want to keep this YAML file the same as far as GitHub is concerned; I'm actually never going to change it locally (I'll change it for a moment now just to demonstrate). What we want to run is kubectl apply -f with the relative path to the deployment file. Remember how I said the tag might be the commit ID later? The deployment file itself doesn't really need to change: latest is okay: but if we change the tag to the commit ID, the deployment definitely changes: you actually get a rollout of the new deployment. That's the key here. To see it, I navigate to the root of my project and run the apply: kubectl reports "deployment unchanged". As soon as I open the file, change the tag to v1, save, and run it again: "deployment configured". When that happens, kubectl get pods shows new containers being created in what's called a rollout. That also means I can wait for the rollout: kubectl rollout status deployment/<deployment-name> (deployment slash the name we gave it) blocks until it finishes: a nice way to ensure our deployment waits before moving on. So the guide gets a line: wait for rollout to finish. The step after that is migrating the database, which we'll do in just a moment.

Before that, there are other deployment changes I want to make. Coming back into the deployment, I'm switching the image back to latest, and underneath the image I'm adding imagePullPolicy: Always (capital A for Always). This is something I left out before, but at this point it's a good idea to always pull. The point of Always is that it gives us another way of doing a rollout: kubectl get pods shows all the available pods (notice my iac-python one is in there too), and if I terminate one with kubectl delete pod <pod-name> (careful: delete pod, not just delete), Kubernetes rolls out a replacement pod, and with Always set, it pulls the image while that new pod is being created. We could verify that a number of ways, but you're going to have to trust me on this one: it pulls the image.
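For reference, a hedged sketch of this update-and-wait flow; file and deployment names are assumptions from the walkthrough:

```bash
kubectl apply -f k8s/apps/django-k8s-web.yaml
# block until the new pods have rolled out
kubectl rollout status deployment/django-k8s-web-deployment

# with imagePullPolicy: Always on the container, deleting a pod also
# forces the replacement pod to re-pull the image:
kubectl delete pod <pod-name>
```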
It pulls the latest image, whatever that ends up being. Contrast that with before: if I just keep everything as latest, it will pull the latest image when a pod is created, but it won't necessarily roll the pods when you need it to. I'll reiterate this later, but the rule of thumb is: if we push a new Docker build and every place we reference it is still the latest tag, Kubernetes might not know it needs to make an actual change to the deployment. Just because there's a new container image with the same tag doesn't mean Kubernetes knows that, and the deployment file hasn't changed at all. That's why we substitute latest for whatever the actual commit ID ends up being. This makes a lot more sense once we actually do it, but I wanted to really drive it home now.

Okay, now: migrating the database. I have two different ways to do it, and they both come down to how you get hold of one of the individual running pods. Run kubectl get pods again (I'll delete the extra terminal and clear things out so there's just one). To run the migration on any pod: kubectl exec -it <pod-name> (a single pod name, an actually running container, not a deployment name), then two dashes, then the command to run. In this case I already have one: kubectl exec -it <pod-name> -- bash /app/migrate.sh runs our migrations on that running Kubernetes container, which is really nice. You'll also notice a CommandError that the username is already taken: that's because our migrate script also creates the superuser, and that user already exists based on our environment variables. Naturally, if I changed one of those variables, say the superuser name, I'd have to do pretty much all of this over again: update the secrets, update the deployment, potentially even rebuild the container so we have a new commit ID (the changes have to be significant enough to matter; changing the port might not necessarily force a change for your running containers), wait for the rollout to finish, and then run that exec migrate command again.

When I said there are two ways, it's because hand-picking a pod name like this is really a temporary measure for testing locally. In production we need a more sustainable method: a command that sets a variable to a single pod name automatically, by asking kubectl for all the pods belonging to the deployment (which narrows things down quite a bit) and then narrowing that down to exactly one.
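A hedged sketch of that single-pod approach; the label selector is an assumption based on the deployment labels used above:

```bash
# grab the name of the first running pod for this deployment
export SINGLE_POD_NAME=$(kubectl get pods \
  -l app=django-k8s-web-deployment \
  --field-selector=status.phase=Running \
  -o jsonpath='{.items[0].metadata.name}')

# run the migrate script against it
kubectl exec -it $SINGLE_POD_NAME -- bash /app/migrate.sh
```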
That whole kubectl command gives us a single name, and that single name is what gets assigned to the variable. The second variant in my notes is nearly identical, just different in how it's executed: the first is pure kubectl via jsonpath, while the second leans on some standard Linux command-line tooling. Either way, it looks through the listed pod items, grabs the first one, and pulls out its name: notice it reads metadata.name, which corresponds directly to the metadata.name on the pod. Once that works, I just swap the hard-coded pod name for $SINGLE_POD_NAME and run the migrations.

So that's, roughly speaking, the deployment guide. I'll make some updates to it before it lands in the repository, but the general idea: test the Django code (ideally against a Postgres database, which we'll set up in the next one for sure); after the tests pass, build the container and push it; update the secrets if needed; update the deployment if there are meaningful changes (and if we're actually pushing a new image, we'll assume there are, to start); then wait for the rollout to finish. The only reason we wait is so that I can exec on one of the pods once they're available and ready, and just run the migrate, so the database picks up whatever changed in that build or before it. That's the key. If you ever need to refresh on this, it's something you'll want to revisit from time to time. Of course, in my case the way I "refresh" is by using GitHub Actions, and that's what we'll build now: something so that every time I push this code to GitHub, it performs every step in this deployment guide. I wanted to lay the steps out manually first so that doing GitHub Actions feels like a no-brainer. Let me know if you have any questions; otherwise, let's keep going.

In the last one we created a deployment guide: a reference for running various commands locally and verifying the project works correctly. Now we'll take that guide step by step and break it apart into a continuous integration / continuous delivery (CI/CD) pipeline. We'll use GitHub Actions for this, breaking it into the chunks that make the most sense, starting with testing our Django code. Inside GitHub you'll often see a suggested Docker or Django workflow configuration, which is great because it gives us a starting point. But before you even go there, I recommend importing this repository rather than forking it. Forking is absolutely fine if you want my latest changes (which I won't change a whole lot over time), but importing it and renaming it means it's your repository.
And inside your copy's Settings you can add secrets, so right now is a good time to import this repository as your own. Then go into Actions and configure something like the suggested Django workflow. You can absolutely write it inline right there, but I'm going to bring it back into my local project: copy it, create the .github folder, then workflows inside it, then test-django.yaml, and paste it in. It already gives us some of the core pieces for tests, which is really nice, but it's not quite there. (I know the directory layout because I've used GitHub workflows a lot, but for reference, when you start from one of GitHub's templates, you can see the path right there.)

Breaking down what's happening in this workflow: it's triggered on a push to your main branch (right now I'm working on a feature branch, so it won't run automatically for me yet) and also on pull requests against main. What I also like having is workflow_call and workflow_dispatch; I'll show you both of these as we go forward, starting with workflow_dispatch. The question is whether you want this to run every time you push: in this case, yes, for now this is the one I want to run on push.

Next, which virtual machine the job runs on: we'll use ubuntu-latest, a very common choice for this kind of project. Then the Python version: notice it can be a list, and if it is, the job runs once per version in the list, which is actually pretty cool: you can test your project against multiple versions of Python, which is great if your project needs that. Ours only needs 3.9, so that's the only one I need to test on, but by all means test on several; personally I'd leave the list syntax in just so you know the ability is there. The next thing is changing the default working directory for run steps: under the job, add defaults, then run, then working-directory: ./web, pointing at our web folder. That's all I'm changing here. So: git status, check out the main branch, git add --all, git commit -m "add django test workflow github action" (quite a mouthful), and git push --all. That pushes the workflow, and it runs the tests for us.
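A hedged sketch of .github/workflows/test-django.yaml at this stage, based on GitHub's Django starter template plus the changes described above:

```yaml
name: Django Test Workflow
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  workflow_call:       # lets other workflows call this one
  workflow_dispatch:   # lets you trigger it manually
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.9]   # add more versions to test each one
    defaults:
      run:
        working-directory: ./web
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run tests
        run: python manage.py test
```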
There's one glaring issue with what's happening here: I don't actually have a Postgres database for my tests. In the Actions tab I can see the workflow, and hopefully see it running. Notice it has the potential to do multiple things: if I had listed other Python versions, each of those would show up too. And here we go: a failing test. Under "run tests" it says "The SECRET_KEY setting must not be empty." This is actually really useful. The test failed, which means, thinking back to our deployment guide, step one failed, so step two must not execute at all. That's a critical piece of all of this. The reason it failed, which you may or may not already know: in our Django project, the secret key comes from an environment variable, and the failure tells us that variable isn't set in the runner. So we set it inside the test pipeline: add an env block under the job and set the environment variable to a test-only key ("test-key-not-good": maybe make a better one; it's not a good key, but it's still just a test key). Save, git status, git add, git commit -m "updated test workflow for django", git push --all. It pushes, runs the new workflow, and runs it about as quickly as it can: GitHub Actions is pretty fast across the board. If you're used to building things this way: you can absolutely run python manage.py test on your local machine before anything is pushed to GitHub, but personally I don't even do that locally very often anymore; I think it's better and more efficient to run the entire pipeline on GitHub, which is why we're talking about it right now.

So the test goes through, but we have another glaring problem: I'm not using a Postgres database. Before we go any deeper: I have an entire blog post on Django with GitHub Actions, linked in the description; I'm skipping a number of things that are in that post and honing in on what we need (one big difference, the working-directory change, isn't in the post at this time). This next piece, though, is adding a Postgres database, and to do that we use what's called a service. In the workflow, declare services and add one I'll call postgres_main. The name is just something I'm giving it: call it db, call it whatever you want, very similar to naming services in Docker Compose; in fact, the way I view it, it's almost identical to how Docker Compose works. It takes an image, and in my case I want to declare the image version I'm using. I believe I'm running Postgres 13 in production; let's verify that on DigitalOcean under databases, because when you're running tests, you want to match your production database as closely as possible. Yep, Postgres 13.
So I'll try the image postgres:13. Just like with our Docker Compose setup, I can use environment variables to set all of the Postgres configuration: everything we set there (not the postgres-ready script per se, but all the rest) can be reused in these tests. I can set the values directly on the service, or as environment variables for the entire workflow; I prefer the latter, just like I did with Docker Compose: declared in one place and available to both the Postgres service and the Django project. (localhost is the default host when running this way, and I'm leaving the Postgres port at its default: you can change it if you want, but I don't need to.) After declaring the workflow-level env, I re-declare the values down on the service, because every once in a while your Django project names them differently. The db becomes ${{ env.POSTGRES_DB }}: dollar sign, two curly brackets: which is GitHub Actions syntax saying "re-grab whatever was declared in the env up top." It's really cool. Same for the user with ${{ env.POSTGRES_USER }}, and so on. (A side note on naming: in my own Django projects I rarely use POSTGRES_ prefixes; I usually use DB_, partly because it's less typing, and partly because I don't always use a Postgres database, so DB_ makes it easier to switch database projects. I only used POSTGRES_ here to keep it simple for all of us.) The only reason to re-declare them down on the service itself is the Postgres image: it ensures the container gets exactly the right values. Another thing you might consider, which we'll look at in later sections on GitHub Actions, is keeping some of these values hidden as secrets across the workflow. For a test database I don't think it matters that much, but we absolutely could make these values secret and update them that way.

Next, declare the ports: a port mapping, making sure it's exposed just like we did in Docker Compose (identical, and I don't think it needs to be a string here; it should work as-is). Then add some options we never needed in Docker Compose and probably don't strictly need here either, but which I've seen treated as best practice for GitHub Actions services: the health-check flags: feel free to copy those from the sketch below. That should be everything needed to test our Django project against Postgres.
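A hedged sketch of that Postgres service wiring. The env var names follow the walkthrough; the values are illustrative test-only credentials, and referencing `${{ env.* }}` inside the service env is how I understand this pattern to work.

```yaml
env:
  DJANGO_SECRET_KEY: test-key-not-good   # test-only value
  POSTGRES_DB: mydb
  POSTGRES_USER: myuser
  POSTGRES_PASSWORD: mypassword
  POSTGRES_HOST: localhost
  POSTGRES_PORT: 5432

jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres_main:
        image: postgres:13      # match the production database version
        env:
          POSTGRES_DB: ${{ env.POSTGRES_DB }}
          POSTGRES_USER: ${{ env.POSTGRES_USER }}
          POSTGRES_PASSWORD: ${{ env.POSTGRES_PASSWORD }}
        ports:
          - 5432:5432
        # commonly recommended health-check options for Actions services
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
```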
Any other environment variables your project needs should be in here too: notice DEBUG is missing, and I did that on purpose. There is one more thing to check, though: scrolling down to my database settings, I have a flag for ignoring SSL on the database connection. The big question is whether this will work as-is; I'm not sure. In production I need SSL, but maybe during testing I don't, so let's leave it: leaving it means the test assumes SSL is required, and we'll find out. git status, git add, git commit -m "added postgres to django workflow test", then git push (origin main or --all, either one). Back in the workflows: the last run was successful, no surprise, since it had nothing complicated. I'll let this one run... and the build was successful. Looking at "run tests", I have no tests to run at this point. It would be a really good idea to have real tests here, but that's something I'm leaving outside the scope of this series. Still, the workflow ran, and it ran with Postgres.

It's worth noticing that this workflow is very similar to how our Django pods work inside Kubernetes, in that Postgres is an external service and each workflow run is fairly isolated from other environments: the whole thing is a Docker container, running much like it would on Kubernetes itself. The actual machines it runs on are different, of course, and it's also not testing on our actual production environment, which could still have issues: I won't say it can't; it definitely could. Long-term, you could imagine a whole separate Kubernetes cluster in the same region doing all the tests; I think that's a bit overkill, something you'd consider with millions of users a day, not a thousand a year. In any case, we now have a way to fully test Django on GitHub Actions with a Postgres database: something you should reuse in all of your Django projects right off the bat.

One more thing I want to show you is workflow_dispatch, and what I want to do with this test: I'm going to move away from the automatic setup. I'll rename the workflow "Django CI and Postgres Test", and rename the build job to django_test. You don't have to call the job build; you can call it whatever you like: only certain keys are fixed, like jobs, runs-on, and defaults; there are just a few places where you can diverge from the required names. git status, git add --all, git commit -m "changed django test execution", git push origin main. Now when I push, it's no longer going to just automatically run that test the way it did. To run it, I can either go into the workflow directly and trigger it manually, or design another workflow to call this one: that's what workflow_call allows, nesting this workflow inside another.
There are a lot of good reasons for nesting workflows like this, and testing code is one of them. Notice that this test workflow only uses environment variables: everything it needs to run is self-contained, with no repository secrets whatsoever, which makes it that much easier to call. Now let's start the next pieces of our deployment guide: building the container and pushing it, with GitHub Actions.

In the workflows folder I'll create build.yaml, copy a lot of the test-django one, and change it ever so slightly. Name: "Build Docker Container & Push to DO Registry". For the triggers I'll leave in workflow_call and workflow_dispatch, and bring back push on branch main and pull_request on main. It runs on ubuntu-latest, and I'll call the job docker_build. One thing about this job: I want it to run only after the tests run. So before docker_build happens, I add a test_django job that runs the test based on the repository I'm currently working with: the uses path is my username (codingforentrepreneurs, in my case), then the repository name, then /.github/workflows/test-django.yaml@main, meaning at the main branch. That runs the test first, which is fantastic: exactly what I want. Beyond that, I strip the job down rather than carry everything over: remove the strategy/matrix, since we don't need it anymore; drop the Postgres service, since I shouldn't need it to build my container at all (I might need environment variables, but I doubt it's these); drop the default working-directory, since I'll set it inline this time; and keep only one step, checking out the code. I just want to see whether this will run our Django tests, so let's try it: clear the terminal, git status, git add, git commit -m "added docker build workflow", git push origin main. Jumping back into Actions, it's running right away: the run is named "Build Docker Container & Push to DO Registry", distinct from "Django CI and Postgres Test". Inside, there's docker_build, and notice it's actually testing our Django code: how cool is that?

What happens, though, is docker_build finished really, really fast, partly because I didn't specify that it requires test_django to finish. To fix that, it's not uses but needs: add needs: test_django to docker_build. And I'll rename the job test_django_job, just to make it clear it's that specific job and not the other test-django workflow.
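A hedged sketch of build.yaml at this point. The repository path in `uses` is an assumption; point it at your own imported repository.

```yaml
name: Build Docker Container & Push to DO Registry
on:
  workflow_call:
  workflow_dispatch:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  test_django_job:
    # run the reusable test workflow from this repository first
    uses: codingforentrepreneurs/Django-Kubernetes/.github/workflows/test-django.yaml@main
  docker_build:
    needs: [test_django_job]   # don't build unless the tests pass
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
```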
This time: git commit -m "updated the docker build workflow", git push --all. The tests go through just fine, and now we've got a workflow where one job depends on another: they're piped in sequence rather than running side by side. They can run concurrently, which is fine in general, but in my case the docker build has to happen after the tests. I could absolutely do the same thing for the Kubernetes part and make it depend on docker_build; I'm not going to. Instead, everything related to Kubernetes will happen inside this build.yaml. I just wanted to show you that separating jobs is available if you need it; my very specific reason for not separating them will become clear shortly.

First step: give it a name, "Checkout code". I like naming steps so they're labeled, although you don't have to. The next step installs the DigitalOcean command-line tool, doctl: an open-source CLI that DigitalOcean provides for interacting with its services. We never installed it during this project, primarily because we didn't have to, and strictly we still don't (there's a way around it), but using it makes these GitHub workflows a lot easier, as we'll see when we get to the Kubernetes part. So: install doctl using digitalocean/action-doctl@v2. If you ever have issues with it, a quick search turns up the action-doctl action on DigitalOcean's GitHub (there may well be newer versions later), and it shows exactly how to use it, which is what we're doing now. It takes a token: with: token: ${{ secrets.DO_API_TOKEN_KEY }}, or whatever along those lines you name it. Notice it says secrets: this is why I recommended importing the repository instead of forking it, so that in your copy you can go to Settings, then Secrets, then Actions, and create a new repository secret. So now I go back into my DigitalOcean console, navigate down to API, then tokens and keys, and generate a new key. This time I'll name it something like "django-k8s-github-action-prod-token": kind of a mouthful, but that's what we'll leave it as. Copy it, go back to the GitHub repository secrets, paste it in, and add the secret. Now that secret is available to me in my GitHub Actions just like this. If I ever need to roll this key, I totally can, and if the key stops working, this workflow will simply fail: which is also nice to see.
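A sketch of that step; the secret name DO_API_TOKEN_KEY is an assumption, so use whatever you named yours:

```yaml
- name: Install doctl
  uses: digitalocean/action-doctl@v2
  with:
    token: ${{ secrets.DO_API_TOKEN_KEY }}   # repository secret you created
```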
With that key, we essentially have access to everything we need. Next step: "Login to DO Container Registry with short-lived credentials": run: doctl registry login --expiry-seconds 1200. We probably don't need that many seconds, but it's a comfortable window before the login expires. That expiry is exactly why we're using doctl at this point (we don't necessarily have to use it always): credentials aren't left hanging around afterward; the login simply expires.

Next, build the container image. We've now logged into all the services we need, much like we did locally. The only reason login isn't spelled out in the deployment guide is that it's assumed as a step zero: "log in to the registry via Docker and your API key," like we did. Locally I don't need to repeat it once it's done, but in a GitHub Action it has to happen every single run. I declare the working directory right on the step: ./web. I could cd into web inside the run block instead, but I like using the built-in features GitHub Actions has, so I use working-directory when needed. Then run the docker build just like we've seen: declare the file with -f Dockerfile, copy over the two -t tag lines from the guide, making sure to indent and use the backslash for multiple lines, and keep the trailing period. Now, here's the part I want to change: that v1 tag. When I commit through GitHub, I get a commit SHA, a value I can use inside my builds, so the tag becomes ${GITHUB_SHA::7}: no double dollar signs, just one dollar sign, curly brackets, and two colons followed by 7, which gives me the first seven characters of the commit SHA, as we'll see shortly. So the build tags the image twice: once with latest and once with this commit ID. Thinking back to the deployment guide, in the "update the deployment" step, this is what lets us swap the latest image for whatever the commit ID is after the build: we'll see it in action a little later, probably in the next section. Then one more step: "Push image" (to the container registry, or whatever you want to call it): run: docker push with the base image name and --all-tags. git status, git add, git commit -m "updated build workflow", git push --all. Now it pushes up to GitHub Actions and the workflow should be running; I'll let it finish.
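A hedged sketch of those steps together; the registry name is a placeholder for your own:

```yaml
- name: Login to DO Container Registry with short-lived credentials
  run: doctl registry login --expiry-seconds 1200

- name: Build container image
  working-directory: ./web
  run: |
    docker build -f Dockerfile \
      -t registry.digitalocean.com/<your-registry>/django-k8s-web:latest \
      -t registry.digitalocean.com/<your-registry>/django-k8s-web:${GITHUB_SHA::7} \
      .

- name: Push image to DO Container Registry
  working-directory: ./web
  run: docker push registry.digitalocean.com/<your-registry>/django-k8s-web --all-tags
```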
While it's still running, I wanted to mention that GitHub Actions comes with a bunch of built-in environment variables that you don't have to manage yourself. There's also a run ID, unique to each workflow run, which you could use instead of the SHA; the only reason I use the SHA is that it lets me trace back to the actual commit and the build if I need to, as we'll see. A quick search for "github actions environment variables" shows everything available. And the way I used it is pretty common: it's a Linux-style string substitution (not a key substitution, and not a special Actions feature) that cuts the value down to seven characters so the tag isn't super long. It's just nice that those built-in variables exist.

This is still building, since all of it takes a little time to run, which is actually a good thing: we want the tests to take some time, and it makes sense for the build to as well: but it's really not that much. "Build container image" took 24 seconds, and it appears to have worked. So I go back into my DigitalOcean account, into the container registry, and take a look: it says four images now. We've got the latest image as well as this commit SHA. The commit SHA is great, because now I can update my deployment with it. Let's do it manually just to see what it looks like: in k8s/apps I change latest to the new commit SHA, run kubectl apply -f k8s/apps/django-k8s-web.yaml, hit enter, and then kubectl get pods -w to watch them changing. What happens, of course, is that it recreates each of the containers: and going back to the deployment guide, if I run the rollout status command, it just hangs there and waits until they're all finished. In GitHub Actions terms, that means the next step won't run until that part is done: which would be migrating the database, as we've already discussed. It's really cool to see locally exactly what will end up happening on our GitHub project.

The other cool thing: locally, I keep the deployment file pinned to latest: in fact, that's what I'll always deploy from my GitHub project itself. So if I apply it again locally, it changes the tag reference, but it's the same container image underneath, just a different tag. That means I still have the ability to work on this locally without making any changes to the actual deployment file, and I want to keep it exactly like this because of what we'll do in the next one. At this point we have a pretty robust system even just here: we can test it, build it, and push that image into our own custom container registry. So now we just need to bring it all together and have it running through Kubernetes.
Now, of course, we're going to update our GitHub Actions to actually use our newly created image. I'm going to skip the update-secrets step for a moment (we'll come back to it) and focus on the deployment portion: taking our newly built container image and deploying it into production from a GitHub Action. In our deployment guide, I've updated it to list four ways to trigger this deployment rollout. One is the rollout command itself, which generally works, especially if you have imagePullPolicy: Always on your deployment like I do: in other words, if everything is tagged latest, the latest image it uses will be updated over time. But I might not have that tag handy, and more to the point, I may want to know specifically which GitHub commit caused a particular issue and which deployment that maps to. In my opinion, having some sort of identifier on the image itself is better than simply latest, even though latest does work. The second way is the image update: directly calling out which image we want the deployment to change to: this is the one we're going to use. A third way is updating an environment variable; it's very similar to relying on latest, though that variable could hold the commit version if we wanted, with a few more steps, so I'm not going to do that. The fourth is changing the deployment file itself: not the environment variables, but, say, editing the image in line. Doing that in-line change inside a GitHub Action is certainly possible, but it's a lot more headache than is necessary. So I'll do the simplest one I can think of: changing the image from the GitHub Action, very much like the command in the guide.

Before we can do that, we have to log in to our Kubernetes cluster, which is another reason to keep all of this in one job: doctl is already installed and authenticated here (it's what logged us into the container registry, and it works the same locally as in this action). We grab our cluster's kubeconfig file and save it with short-lived credentials, so that kubectl works locally inside the runner. The run command is pretty simple: doctl kubernetes cluster kubeconfig save --expiry-seconds 600, then the cluster name. I'm making this expiry a bit shorter, 600 seconds, since we're not building an image at this point: it could be longer, it just doesn't need to be. The cluster name, in our case, is django-k8s. When you see hard-coded values like this, it might be a good idea to put the cluster name into the workflow env and reference that instead: as written, this workflow is actually pretty non-reusable because of the hard-coded names; I could totally make it better, and that would be one way to do so.
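A sketch of that step, with the cluster name django-k8s from the walkthrough:

```yaml
- name: K8s cluster kubeconfig with short-lived credentials
  run: doctl kubernetes cluster kubeconfig save --expiry-seconds 600 django-k8s
```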
I just wanted to mention that. Okay — now that we've logged in to the Kubernetes cluster, we can add a step to update the deployment image. This runs kubectl set image: we declare the deployment — back in the build, it's deployment/ followed by that name — and then the image we want for the container. Notice that a deployment can hold multiple containers; we only have one, but we still have to grab it by name, so whatever the container is named — mine is django-k8s-web — is what we reference, and we set it equal to quite literally the same image value we just built. This updates the deployment image and also creates a rollout.

Next, instead of just "verify deployment," let's add a step named "wait for rollout to finish" that simply runs kubectl rollout status on that deployment. This waits for the whole rollout to happen. By default Kubernetes rolls pods out one by one — it doesn't destroy all our current containers at once — which works a lot better with multiple replicas; with a single replica it just replaces that one, but with several you get a rolling update by default. That's also a setting you can tune inside the deployment itself, so check the docs for that.

Once the rollout finishes, we can run whatever final commands we might need — in our case, the migrate database command. This is kubectl again, but first we export a single running pod's name; I have a note for this in the deployment guide, where we grab one pod name — making sure it's in the Running state — and exec against it. That gives us a way to migrate the live application, and it's something I definitely want right after the rollout status step. Something else to consider is doing this earlier in the deployment — roll out just one pod, run the migration, then roll out the rest — but that gets more complicated, and I think this works pretty well overall. The other nice thing about having the GitHub commit sha on the image is that I can roll back to a deployment I know was working in the past if I need to; that's not something I'll cover at this time.

Now we'll add this in, commit it as "k8s deployment in GitHub Actions," and git push origin main — in my case with force, because I've made a bunch of changes. Through all that testing I've found what's probably the easiest method for all of us to use, which is this right here.
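Putting those pieces together, the deploy steps might look roughly like this — the deployment name, container name, registry path, and migrate script path are assumptions based on what's been built so far in the series:

```yaml
- name: Update deployment image
  run: |
    kubectl set image deployment/django-k8s-web-deployment \
      django-k8s-web=registry.digitalocean.com/<your-registry>/django-k8s-web:${GITHUB_SHA_SHORT}

- name: Wait for rollout to finish
  run: kubectl rollout status deployment/django-k8s-web-deployment

- name: Migrate database
  run: |
    # grab the name of one running pod and run the migration script inside it
    export SINGLE_POD_NAME=$(kubectl get pods --field-selector=status.phase=Running -o jsonpath='{.items[0].metadata.name}')
    kubectl exec $SINGLE_POD_NAME -- bash /app/migrate.sh
```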
One thing I still need to do, according to my deployment guide, is add in our secrets. I definitely will, but before I get there I want to make sure all of this works correctly in the Actions themselves, so I'll let this run finish and come right back.

Great — the deployment worked. The rollout wait took about 30 seconds, not long at all, and then the migrate database command ran; in our case it reports that the superuser username is already taken, which comes from app/migrate.sh (I could probably get rid of the create-superuser bit at this point). At the end of the day we have all of that, and we can also describe the deployment: looking at it, I can see the new image. It's different from the local one, and if I applied the local file, only the name would change — it might create new pods because the image reference changed, but the actual running container wouldn't be any different. We can verify that in the container registry by looking at the repository: after a refresh (I have quite a few latest images from testing), the 1c8a065 tag and latest are exactly the same image right now.

The other cool thing is that since I have these tags, if I want to switch to an older one I can run the same set image command we just used — I could do it locally and point it at whatever image version I want, which is really nice for changing things quickly or on the fly. That might even be worth turning into a workflow item of its own, but I won't touch that now.

The real final step, back in our deployments, is adding those secrets. Before the set image step, I'll add a step called "update deployment secrets." It does basically the same thing I did locally: first it creates a new file at web/.env.prod containing all of our environment variables — the entire file is written in the step — and then, in that same step, we kubectl delete whatever our secret is and kubectl create secret generic again from that same path. The file could live somewhere else — that's completely up to you — I just like keeping it in the same place so the commands match what I run locally. So that's pretty much it — except we still need to fill in all the proper passwords and environment variables.
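A sketch of that secrets step, assuming the Kubernetes secret is named django-k8s-web-prod-env and showing only a couple of the variables:

```yaml
- name: Update deployment secrets
  run: |
    cat << EOF > web/.env.prod
    DEBUG=0
    DJANGO_SECRET_KEY=${{ secrets.DJANGO_SECRET_KEY }}
    EOF
    kubectl delete secret django-k8s-web-prod-env
    kubectl create secret generic django-k8s-web-prod-env --from-env-file=web/.env.prod
```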
So what I'll do is copy all of my environment variables, bring them into build.yaml, paste them in, and indent them so they land inside the env file block — make sure each variable sits on its own line with the right indentation. These are the only ones I really need; I got rid of the Redis settings since I don't actually have Redis in here. In the future, when you need to update your production environment variables, this is where you'll do it.

Now I'll take the values I want to keep secure — which is pretty much all of them — and replace each one with a reference like ${{ secrets.DJANGO_SUPERUSER_USERNAME }}. Then, in the repository, go into Settings, down into Secrets — action secrets, not the other kind — bring the value in, and give it the matching name. You'll want to repeat that process for the whole list; I'm not going to do each one step by step, so finish yours off and make sure the variable names are correct.

The nice thing about this setup is something like ALLOWED_HOSTS: if your allowed hosts change for some reason and you need to run this again, you totally can — the build is available any time thanks to workflow_dispatch — and you'd run it only when you actually have deployment secret changes to push out. And if the GitHub sha isn't changing between runs, you can add other built-in variables, like the run number, which increments every time any of these workflows runs.
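For example — and this tag format is just one option — you could mix the run number into the image tag so every run produces a unique tag even without a new commit:

```yaml
env:
  # github.run_number increments on every run of this workflow
  IMAGE_TAG: ${{ github.sha }}-${{ github.run_number }}
```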
Go ahead and update your secrets now, and we'll be right back. Okay — I have all of my secrets added. Like I said, if you need new ones, just add them right here, and if for some reason you need to change things on the fly, you can log into your GitHub repo, open build.yaml, and absolutely change what's going to happen — it's all inline, and it should effectively reboot everything for you. Worst case, you could create a whole other workflow just for force-rebooting your entire deployment: it would log in with doctl, log in to your Kubernetes environment, and then run whatever kubectl commands you see fit.

So let's add this one in: git status, git add, git commit -m "added environment variables", and git push origin main. It starts building like we've seen before and — looks like I have an error: a syntax error, probably invalid spacing, around line 42. Line 42 turns out to be a stray colon; we don't need a colon there (I'm just used to writing them). Push again, and this time there's no immediate failure, so it looks like it's going to work.

In a moment we're going to actually implement changes in this chain so our static files work through DigitalOcean Spaces. The big thing there is that it will update build.yaml to some degree, and maybe some execution commands we run — I'm trying to give you a practical example of what you might do in the future when build.yaml needs changes. One challenge I'll leave you with now is to make your build.yaml a little more reusable for other kinds of containers and images — other deployments, not just this Django one. A hint: it has something to do with the env block, making the whole build more env-driven; see the sketch just below. The other thing is you could split the build and push into separate workflows, if you really want to get your hands dirtier. And one thing I probably don't want in the long run is the Django secret key sitting in the workflow's environment at all, because I don't want it accidentally substituted somewhere it shouldn't be — especially with the env file generation we just set up.

If you have any questions on this, let me know; I think using the code on GitHub as a reference will solve a lot of the issues I perhaps didn't show or didn't solve, though a lot of it comes down to testing and getting certain configurations right. There's also a reason I went through the deployment guide first: if that was still hard or challenging to get through, this GitHub Actions material would have been close to impossible. One correction before we go: the command should be kubectl delete secret — I think I just had it as delete — and the code on GitHub reflects the correct version.
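As a starting point for that reusability challenge, you could hoist the hard-coded names into a workflow-level env block (the names here are illustrative) and then reference them as ${{ env.CLUSTER_NAME }} in the workflow, or as $CLUSTER_NAME inside run commands:

```yaml
env:
  CLUSTER_NAME: django-k8s
  REGISTRY_NAME: <your-registry>
  IMAGE_NAME: django-k8s-web
```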
All right, let's take a moment to congratulate ourselves: we have our Django project running in production, in a Docker container, utilized by Kubernetes on DigitalOcean — which is fantastic — and on top of that, a sustainable way to deploy future versions using GitHub Actions, which is also fantastic. This part is about adding one more practical aspect to our Django project: static files. It will also show how we continue to update the project, and why GitHub Actions is a critical step in making sure that what we're doing with Kubernetes is well supported without constant manual work.

So log into your DigitalOcean account, go into Spaces, and create a new Space for this project: hit the green Create button and choose Spaces. In my case I'm using NYC3 (New York 3); you just have to pick one that's close to your users. If you can enable the CDN, I highly recommend it — you just need a custom domain to do so. Next, I'm restricting my file listing; I'll handle those things later, and we don't need anyone listing the contents of this Space (I believe you can change this later if you need to). Then give it a unique name — a Space is also known as a bucket. I thought about django-k8s-static, but I'll just call mine django-k8s since that's been the name of my project. Create it.

With that done, into my Django project: I'll make a new folder called staticfiles, and inside it a file blank.txt that just says "blank on purpose." Next, in settings, I want that folder as my static root — this is all covered in a blog post that will be linked in the description — so STATIC_ROOT is our base directory (BASE_DIR, already defined up top) joined with staticfiles. Then, in INSTALLED_APPS, add storages: this is django-storages, which I should already have in requirements.txt, and make sure boto3 is in there too, because we're using the AWS flavor of this — AWS is the pioneer of this kind of static file storage, which is just object storage. The plan is to make the Space the primary home for all of our Django static files — the admin's CSS and JavaScript, plus any other static files we might need.
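In settings.py, the pieces so far look roughly like this (a sketch — BASE_DIR is the path object Django's generated settings already define):

```python
INSTALLED_APPS = [
    # ...
    "storages",  # django-storages
]

STATIC_ROOT = BASE_DIR / "staticfiles"
```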
With that in mind, let's add all of our configuration. Inside the django_k8s package I'll make a new directory called cdn, with an __init__.py, and next to it conf.py. In there we first import os, then grab AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment — these setting names are all in the django-storages documentation. Then AWS_STORAGE_BUCKET_NAME: you might think this should be an environment variable too, but it's pretty discoverable once your project is in production, so I don't think it needs to be — though you could totally make it one. The two keys definitely should be environment variables, and we'll need to remember everywhere we have to set them in a moment.

Next is AWS_S3_ENDPOINT_URL, which is where the Space is actually located — it's that endpoint link, but we may need to strip the bucket name out of it, so I'll get rid of the bucket for this one. Then AWS_S3_OBJECT_PARAMETERS, which is probably pretty critical: cache control with a max-age of 86400 seconds, and the ACL set to public-read, so files can be read publicly by default. I'll also set AWS_LOCATION in here, and that's going to be my actual endpoint — I'll copy that entire endpoint over. Then my DEFAULT_FILE_STORAGE and also my STATICFILES_STORAGE: the static files storage is for things added to the project by default — usually third-party packages, in our case the admin package — and the default file storage is where uploads go, from users or my own team members.

Inside cdn I also want one more file, backends.py, where I'll copy and paste a couple of classes — you can do the same — that really just define locations for where things are stored in the bucket: everything uploaded goes under media/ by default, and everything for static files goes under static/. Then I reference those classes by their dotted paths — django_k8s.cdn.backends, and careful that it actually says "cdn" — in the storage settings.

Now I need my access keys. Back in Spaces, go to Manage Keys and scroll to the bottom, to Spaces access keys, and generate new ones. I'm actually going to make two sets: django-k8s-local for local use — those go into our .env file (not the prod file), under the exact variable names I gave them in conf.py a second ago — and django-k8s-prod. You might be wondering where those go: onto GitHub. In the repo, Settings, Secrets, Actions, new repository secret — AWS_ACCESS_KEY_ID gets the top value, and AWS_SECRET_ACCESS_KEY gets the bottom one.

So now I've got the environment variables and everything configured as I think I need it. One last thing: that earlier STATIC_ROOT isn't actually what I want, so instead I'll do from .cdn.conf import * with a # noqa comment — a big star import isn't always best practice, and this lets my VS Code or any code checker ignore it. I do want STATICFILES_DIRS pointing at BASE_DIR / staticfiles, while the static root becomes a -cdn variant. Inside web, I'll duplicate that staticfiles directory, and we'll rename the copy in a second.
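Pulling the cdn module together, a sketch — the endpoint region and bucket follow my choices above, the backend class names are my own, and note I'm expressing the public-read default via AWS_DEFAULT_ACL; check the django-storages docs for the exact setting names in your version:

```python
# django_k8s/cdn/conf.py (sketch)
import os

AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY")
AWS_STORAGE_BUCKET_NAME = "django-k8s"
AWS_S3_ENDPOINT_URL = "https://nyc3.digitaloceanspaces.com"  # bucket stripped out
AWS_S3_OBJECT_PARAMETERS = {"CacheControl": "max-age=86400"}
AWS_DEFAULT_ACL = "public-read"
AWS_LOCATION = "https://django-k8s.nyc3.digitaloceanspaces.com"
DEFAULT_FILE_STORAGE = "django_k8s.cdn.backends.MediaRootS3Boto3Storage"
STATICFILES_STORAGE = "django_k8s.cdn.backends.StaticRootS3Boto3Storage"

# django_k8s/cdn/backends.py (sketch)
from storages.backends.s3boto3 import S3Boto3Storage

class MediaRootS3Boto3Storage(S3Boto3Storage):
    location = "media"   # user/team uploads land here

class StaticRootS3Boto3Storage(S3Boto3Storage):
    location = "static"  # collectstatic output lands here
```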
The copy gets renamed to match that cdn static root. Okay — a quick git status shows everything looking good so far, so let's run a collectstatic. First source venv/bin/activate, cd into web, then python manage.py collectstatic, and answer yes. This might take a moment, so while it's running I'll go back into my DigitalOcean Space and look — and notice there's now a folder called static in there, which is great: it's uploading the static files as it collects them. You can see blank.txt, which comes directly from our staticfiles folder; I want to keep that folder around in case I need to add my own CSS or JavaScript. The cdn directory is meant to be the STATIC_ROOT we override — with this configuration, the effective static root becomes the DigitalOcean Space itself. So that's a really good sign: it's working locally. Now to make it work in production.

First and foremost, those keys from my .env need to go into the GitHub build.yaml. I'll paste the key names in — of course I don't want to expose the actual values, so each one gets a ${{ secrets.… }} reference instead, and updating my repo secrets is the critical step here. Save that. Next, I'll copy the migrate step and make a new script, collectstatic.sh: paste migrate.sh's contents in, swap the migrate command for collectstatic with --noinput, and remove the superuser email bit since I don't need it. The Dockerfile doesn't need to change at all, migrate.sh shouldn't either, and I don't believe collectstatic.sh needs changes beyond possibly being made executable. With that in mind I can come down here in the workflow and add one more step that calls collectstatic.sh — yes, we named it collectstatic.sh. And now I think I have everything necessary for all of these changes.
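Here's roughly what that collectstatic.sh could look like, assuming the same layout migrate.sh uses — a virtualenv at /opt/venv and the project at /app are assumptions based on the container setup in this series:

```bash
#!/bin/bash
# collectstatic.sh (sketch): push static files to the configured storage
source /opt/venv/bin/activate
cd /app/
python manage.py collectstatic --noinput
```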
So: git add --all, git commit -m "added static files for django storages and do spaces", and git push origin main. I did already collect the static files locally, but they should get collected again — in fact, I'll go into my Space and delete those files, because the workflow should collect them and bring them all back in on its own. It'll take some time: in Actions, everything gets tested first, and only after the tests does it build and run everything else. So that's a pretty complete look at what's needed for this kind of change — a really practical example, because an actual third-party application required us to update environment variables both locally and in production.

On the local side, something else to consider: you could test with your .env.prod and the actual prod variables, but I now think it's a bad idea to keep production environment variables on your machine at all — even though I said earlier that I like testing with them every once in a while, at this point there's really no good reason to have them locally and exposed, where anyone who gets on my computer could access all of it. Whereas on GitHub, in the current form, there's actually no way to view the secrets — you can only change them — which is generally fine, especially while the actions are running; to actually see a value you'd have to go to the source, to DigitalOcean, and find the keys there. That's one of those best practices: don't have your keys exposed in too many places. The local setup still gives me access to my local keys, so you can have both in both places, but in production especially, minimize where those keys live — that's really, really critical.

Okay, now we just need to verify the DigitalOcean project runs. While the build finishes (it looks okay so far), the one thing bugging me is the "migrate database command" step name — it should probably be something more like "post-build django commands," since it's not Docker but Django: migrate and collectstatic. Another thing that's really nice about the current setup: if you need a new admin user for some reason, you can just change your superuser secrets in GitHub Actions and create a new admin user that way — it has to be a new one; it won't reset an existing admin's password, it will be brand new.

And… I get an error, and I think it's probably because that file isn't executable. Let's see — actually I get a 403, a Forbidden error, so it's not allowing me to run this, which is interesting. I'll call that command manually: kubectl get pods (I have a k shortcut, but kubectl get pods), take this pod at 110 seconds — that should be the latest version — then kubectl exec -it against it with bash /app/collectstatic.sh. Still forbidden. Okay, let's jump into the shell: exec into /bin/bash and do the manual version — source /opt/venv/bin/activate, then python manage.py collectstatic, say yes — and still the same problem. So into python: import os and check the environment variables — print os.environ.get for AWS_ACCESS_KEY_ID, that works, and for AWS_SECRET_ACCESS_KEY, that also works. So it's certainly possible I have the wrong keys. Let's verify by going back into the API keys page, scrolling to the prod key, and regenerating it. Now we've got that key again — the ID should be the same…
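Those in-pod checks, condensed (the pod name is a placeholder, and the venv path is the assumption from earlier):

```bash
kubectl exec -it <pod-name> -- /bin/bash
# then, inside the pod:
source /opt/venv/bin/activate
python -c "import os; print(os.environ.get('AWS_ACCESS_KEY_ID'))"
python -c "import os; print(os.environ.get('AWS_SECRET_ACCESS_KEY'))"
```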
…ah, it's not even the same ID. I made a mistake, clearly. So back into GitHub, into the action secrets: the secret key — that's the one right there — and the ID too. Hopefully that changes the equation. The lucky part is what I didn't see: there was no error saying collectstatic wasn't executable — it did try to execute — so it really does look like the keys were simply wrong, even though I thought I'd put in the right ones.

Now that I've changed the keys, let's run it again: open the workflow and hit Run workflow, let it run, and come back. And the build failed again — and I know why: the deployments are identical, so nothing rolled out. I'll just roll them out manually and watch the pods; that creates brand-new ones. A big part of the reason is that I didn't actually push anything to GitHub, so the run reused the previous git sha and that part never changed — which is exactly why I mentioned you could include the run identifier in the environment. In the GitHub Actions environment variables list (I always forget which one it is) it's down under the run ID. That value can be long, so let's trim it down to around five characters and then use it again down where the tag is built.

Before committing, I want to make sure this works locally against the new pods. Here's one: kubectl exec -it that pod, two dashes, then bash /app/collectstatic.sh. We just want it to work one time — if it works once locally, it should work in the build process. There it goes. So clearly I hadn't saved or entered things correctly before, but now I've updated my build so it can run at any time. git status, git add --all, git commit -m "updated build for collect static", git push origin main, and it should run again. With this change the image tag will be longer — there's probably some limit to tag name length, but we're only adding a handful of characters, so it's probably not too long; still maybe worth taking a look at.

The other realization: that rollout command has turned out to be a pretty good workflow by itself. So let's add rollout.yaml: I'll grab my build.yaml, copy the whole thing, and paste it in. I don't need it to run every time — I really just need it on demand, and maybe on workflow_call, so I'll have both triggers. What it does need is the doctl login; I shouldn't have to log in to the container registry here, so I'll get rid of that, keeping just the Kubernetes login and the kubeconfig step.
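Here's a sketch of where rollout.yaml might land once trimmed (the secret and deployment names follow the earlier assumptions, and I'm writing the rollout step as a kubectl rollout restart — check the series' deployment guide and repo for the exact command it uses):

```yaml
name: Rollout K8s Deployment

on:
  workflow_dispatch:
  workflow_call:

jobs:
  rollout:
    runs-on: ubuntu-latest
    steps:
      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DO_API_TOKEN_KEY }}
      - name: Save kubeconfig with short-lived credentials
        run: doctl kubernetes cluster kubeconfig save --expiry-seconds 600 django-k8s
      - name: Run rollout on deployment
        run: kubectl rollout restart deployment/django-k8s-web-deployment
```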
Bring the secret back up — we're still using the same one — and I don't need to update anything else; in fact I can get rid of everything below it. Then a step named "run rollout on deployment," which, back in the deployment guide, is the rollout command — did I use the cluster name? I can't remember… I did, there it is. I don't need to run the tests here — you'll probably never need them in this workflow — and we'll just call the workflow "rollout." git status, git add --all, git commit -m "added rollout workflow"; I'll push it in a moment, but I really just want to see this working again — hopefully at this point it's all solidified and ready to go.

For as much as GitHub Actions is a really great tool, things like what we just did can get a little frustrating: there's a lot of back-and-forth trying to figure out what exactly happened. That's why testing locally is often a good idea — I should have tested this deployment change locally to some degree, since I changed enough of the rollout (the post-build Django command) that running it locally first would have made sense. But here it goes: it actually did collect those static files, so now we've got everything we needed, plus this cool new rollout workflow — if I ever need to refresh my images, it'll do it on demand. Pretty cool.

The next test we should absolutely do is create actual Django tests. Let's create a Django app with a test that succeeds, a test that fails, and a model. Make sure the virtual environment is activated, cd into web, and run python manage.py startapp posts — as in blog posts, or any other kind of post. Inside, a really simple model: class Post(models.Model), with a title that's models.CharField(max_length=120) — incredibly simple, because the main point is just to confirm that future features we add to Django work correctly. I'll also add it to the admin: from .models import Post, then admin.site.register(Post) — we can verify that once we're in production. Next, a test: a PostTestCase that takes in TestCase, with a test_failure method that just does self.assertTrue(False) — naturally, asserting that False is true is, well, false. Add posts to INSTALLED_APPS — I group it with the internal apps; that's typically how I like organizing internal versus external applications. Now, running python manage.py test locally should give a failure — though first I have to run python manage.py makemigrations and python manage.py migrate; it looks like my database wasn't in a good state, so I'll say yes to deleting the old test database and run the tests again.
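In code form, the pieces just described — a sketch of the posts app:

```python
# posts/models.py
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=120)

# posts/admin.py
from django.contrib import admin
from .models import Post

admin.site.register(Post)

# posts/tests.py -- the intentionally failing first version
from django.test import TestCase

class PostTestCase(TestCase):
    def test_failure(self):
        self.assertTrue(False)
```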
And it fails: False is not true. Cool — we have a failing test. So: git status, git add --all, git commit -m "added posts model and test", and git push --all. The main thing we want to see is that the workflow fails on this and doesn't continue any further. That's fairly obvious given the other failures we've had, but this failure is more realistic on the Django side — all those earlier failures weren't really Django-related, just configuration issues we were running up against. With this, we can see a real failure from a real Django test, which is pretty cool. I'll let the workflow finish and we'll take a look.

It finished, and the run tests step did fail — but it's not saying the same thing it said locally. This time: the server does not support SSL, but SSL was required. So we have another error, related to our database, our testing, and the workflow. In the test-django job there's an environment variable I mentioned a long time ago that we'd probably need to add — now we definitely do: DB_IGNORE_SSL. Add it, set to true — and I believe we read it as the string "true", which, checking, we do. So that one wasn't a Django test failure, just an environment variable problem. git add, git commit -m "updated env for testing django in github workflows", and push it all again. This time the failing test should be the real one, not the database.

And the tests finished with the same assertion error we had locally — False is not true — which is a good sign: it shows my tests (assuming they're good tests, which this one is not) are actually running. So let's make it a little better: from .models import Post, define setUp on the test case, and in it do Post.objects.create(title='hello world'). Then I just assert that the queryset exists: qs = Post.objects.all() and assertTrue(qs.exists()). That's it. Running python manage.py test locally again, this time we get an OK. git status, git add --all, git commit -m "updated tests for valid tests", and git push --all.
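The improved test, sketched (the test method name here is my own choice):

```python
# posts/tests.py
from django.test import TestCase
from .models import Post

class PostTestCase(TestCase):
    def setUp(self):
        Post.objects.create(title="hello world")

    def test_qs_exists(self):
        qs = Post.objects.all()
        self.assertTrue(qs.exists())
```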
What we want to see now is, number one, that the workflow actually succeeds — it should — and number two, that in our production Django project the post model isn't just present but actually usable: we can change it. Looking at it locally: run the server, go into the admin, log in as the environment-variables user — in this case just admin and that password — and there are our posts. I can add one like "hello world" and save it, and if I close out the server and run it again, I shouldn't have to re-run migrations or anything; it should just be there — persistent.

In production we're most likely going to see the same thing, given what we've done, but there's certainly the possibility we won't. The first step already succeeded — the job completed — but if we got anything wrong, a future deployment would make the post disappear instead of persist, which is exactly what we're trying to avoid. There's another way to ensure we're never silently falling back to the default SQLite database: get rid of that fallback setting entirely, so settings.py has only the real database config. Then, if the database isn't available, our tests fail — and the same holds in production, since we run migrations: the failure wouldn't really be our tests, it would be the database not being connected correctly.

I'll let this finish building, then we'll look at the proper image and go into the cluster to make sure it's working — we'll look at the service IP address in a moment. It's still deploying, almost done, and I want to race it into the admin: grab the production superuser password, log in — and I already see Posts; it was too fast for me. It ended up working, which is great. We can verify the tests passed too — it finished in a little over a minute, maybe three minutes total, to push this into production. Now I can add a post, "hello world," hit save, and sure enough it saves.

I should also be able to pick any of the Django pods: kubectl get pods, kubectl exec -it that pod -- /bin/bash, activate the virtual environment at /opt/venv/bin/activate, then run python manage.py shell, do from posts.models import Post, qs = Post.objects.all(), and loop over qs printing each title — and there it is. Any of my containers has access to this, which is just one more verification that it's all working as intended: migrate runs, collectstatic runs, all of it without me doing anything.

The one thing I had to do locally was run makemigrations — the one key manual step after creating models. That could be automated with something called pre-commit, but realistically, if you're developing Django, you just want the habit: after you change models, run makemigrations. If you don't, and your model file differs from your migrations, you'll get a warning and potentially an error — and potentially the app coming down — which is also pretty interesting to see.
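That verification session, roughly (pod name is a placeholder; the venv path is the assumption from earlier):

```bash
kubectl get pods
kubectl exec -it <pod-name> -- /bin/bash
# inside the pod:
source /opt/venv/bin/activate
python manage.py shell
```

```python
>>> from posts.models import Post
>>> for post in Post.objects.all():
...     print(post.title)
hello world
```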
Now, when errors do happen — because they will — what I tend to do is exec into a pod just like that and try to diagnose the problem from inside. The other tool is kubectl itself: kubectl get pods to find a specific pod, then kubectl logs with that pod name, which should show at least some of the errors that could be happening. There are other ways of getting in, but actual errors will often show up in those logs.

The last and final thing — which I won't set up, since we're basically done and I have blog posts covering it — is transactional email in Django, so that a 500 server error emails your admins with the details. Set up correctly, that's one of those production features you definitely want, and it's actually fairly straightforward.

So now we have a really nice workflow for deploying our Django project. We verified it by adding a third-party package in the last part, which required a number of significant changes, and then in this part, which required far fewer — locally and in production we just had to check and verify one workflow change. And that kind of change won't always be common: DB_IGNORE_SSL only matters on certain databases and won't necessarily appear in your production files.

That single environment variable might push you toward having different settings modules for your different environments — a development stage and a production stage — so you can duplicate or reuse settings modules per stage. I have an entire blog post on this called Staging Django for Production. I'll show it with python manage.py (it's not a whole lot different with gunicorn): my default lives in settings.py inside django_k8s, so with runserver I can declare a different settings module using dotted notation, because it's Python — django_k8s.settings, no .py needed — and of course change the port too. All that does is swap which settings module is used. It's not the only way: if you look at wsgi.py, notice that it sets the default DJANGO_SETTINGS_MODULE in — what do you know — our environment variables, right inside the wsgi file; manage.py does the same thing, as you've probably noticed, and it's pretty important to know about. When I pass that settings flag, it really does change the module in use. Let's look at a quick example — I won't keep it, I'll delete it after — by duplicating the settings file…
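For instance (the port number is arbitrary; the dev module gets created just below):

```bash
# default settings module (django_k8s/settings.py)
python manage.py runserver 8001

# explicitly declaring a settings module -- dotted path, no .py
python manage.py runserver 8001 --settings=django_k8s.settings
```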
…and renaming the copy to something like dev. To use it, it's simply django_k8s.dev — run that, and now it's serving from this different settings module. It's the exact same settings in this case, but it highlights how you can use different environments to your advantage. I've actually solved the multiple-settings problem myself by combining Docker Compose with environment variables, just as we've done here. I like that approach because my local development environment matches production almost identically, and when I'm testing new features I don't have to expose them to all users — I can expose them only to certain users, and for dramatically new features I can gate them behind environment variables and do a custom deployment (in Kubernetes or not) directly for those features, changing the environment variables right there, without exposing it to the outside world or making it my main project. Just a few things to think about as you build your Django projects — but the key to all of this is: get going, build it, get it out there, and use as much of Django and Kubernetes as possible.

I really, really like this combination. It has a lot going for it, and it becomes even more valuable when you start using other, smaller services throughout your project — services that complement your Django project but don't run inside it. Those are called microservices, and that's what Kubernetes really excels at: once you start adding a lot more services, it handles the management of all of them really well.

Hey, thanks so much for watching — hopefully you got a lot out of this series. I want to leave you with two challenges. One: actually deploy a brand-new web application onto your Kubernetes cluster, whether it's a Python application or not — take everything we learned in this series and apply it to something new, and then do it again. Two: figure out how to communicate across deployments through Kubernetes — not with an external IP address, and not with a domain name, but through Kubernetes itself. I'll give you a hint: it has something to do with Services, and with how we communicate within Docker Compose. That's what I'll leave you with. Thanks again for watching, thank you DigitalOcean for helping us out with this one, and I look forward to seeing you next time. See ya!
Info
Channel: CodingEntrepreneurs
Views: 99,137
Keywords: django, kubernetes, digitalocean, python, kubectl, docker, trydjango3.2, python3.9, tutorial, django tutorial, kubernetes tutorial, docker tutorial, docker compose, deployment, production, github actions, ci/cd, continuous integration, continuous delivery, ci/cd pipelines, devops, k8s, cluster, microservices, fastapi, flask
Id: NAOsLaB6Lfc
Length: 311min 53sec (18713 seconds)
Published: Tue Feb 08 2022