AI as an API - Part 2 - Deploy an ML Model REST API using Encryption, Docker, Keras, FastAPI & NoSQL

Captions
Hey, welcome to Part 2 of this series. I recommend finishing Part 1 before jumping into this one, because what we need to do here is deploy our AI as an API into production using a virtual machine. It's a fairly straightforward process, but there's a lot to consider, so let's take a look right now.

First we need to pick a cloud provider for our production application. I have three criteria, but primarily two. Number one is access to a virtual machine: we need to control everything about the environment our application runs on, because we need TensorFlow to run, and we need to install the exact version of TensorFlow we want so we can load our Keras model. The next is ease of use: I want something that's easy to spin up and spin down, so we can really experiment a lot. The final, more future-looking one is access to a GPU; if you don't have access to a GPU, you'll never be able to train on that service. That's why I'm picking Linode as our cloud provider. As far as virtual machines go, we could absolutely use DigitalOcean Droplets, Amazon EC2, Google Cloud Compute Engine, or Microsoft Azure Virtual Machines, and we could even use a Raspberry Pi. It's not a virtual machine itself, but it's a machine a lot like what you'd use in production: for about $40 you can own the device and run everything we're doing here, as long as you get one with at least about four gigabytes of RAM. What we're not going to run on is something like DigitalOcean App Platform or Heroku. Those services have plenty of benefits in themselves, but we're not using them because we don't control the entire environment; we don't control the infrastructure at all. That's a downside of deploying there, and it's why we want a virtual machine and to control the environment from scratch. If that scares you at all, don't worry: it's not actually that challenging. There are some steps to get there, and seeing how to control this infrastructure while getting our application running in production is really the point of this part.

Now we're going to generate our SSH keys, so open up Terminal or PowerShell, depending on your system, and type ssh. SSH stands for Secure Shell; it means we can connect to a virtual machine, whether on our local network or at a cloud provider, and run commands on it from our local machine. Hopefully you've seen this before; even if you haven't, it's pretty straightforward. Before I can use this with a cloud provider, I need to give that provider my SSH key, so we'll generate one now with ssh-keygen, which produces a public and private key pair. I want you to think of it like a username and password: the private key we never share; the public key we can. It will ask where to save the file; the default path depends on your system and username, so just go ahead and hit Enter. In my case the key already exists, and I'm not going to overwrite it; I'll explain why in a second, but I'll leave it alone.
If yours doesn't already exist, it will ask you to set a passphrase; write one in if you want. Let me demonstrate: I'll make another key in the same location and just call it delete_rsa. Hit Enter and you'll see the passphrase prompt; by all means write one in, and it will ask you to confirm it. In this case I definitely didn't set one, but what you see next is where everything is stored: your identification, the actual private key, is stored in one file, and your public key in another. Now I'll navigate into my ~/.ssh folder. Notice the new keys in here: those I'm going to delete, since I only made them to show you what generating a pair looks like, which leaves my two original keys. The .pub file is the public key (think of it as our username) and id_rsa is like our password. These keys work in tandem to connect to a host that knows about the public key. Once you do work with hosts, they get added to a file called known_hosts; you may or may not have one of these yet, but it's essentially an IP address with a number of keys related to it. That's pretty much it as far as generating the key. Now we can copy the public key and put it anywhere: run cat on it and you'll see its contents (by all means open it with a text editor too), and those contents are what we'll paste into whichever cloud provider we end up using. Using an SSH key makes it really easy to SSH into any IP address, in some cases without additional authentication. You can do all of this without an SSH key on your cloud provider or virtual machine, but we generate one because it makes working with virtual machines nice and easy. Some of you might know the shortcut of piping cat's output to pbcopy, which also copies it; however, that doesn't necessarily work on a Windows machine.
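A condensed version of the key steps above (the paths assume ssh-keygen's default location):

```bash
# Generate an RSA key pair; accept the default path (~/.ssh/id_rsa)
ssh-keygen

# Print the public key so it can be pasted into the cloud provider
cat ~/.ssh/id_rsa.pub

# On macOS, pipe straight to the clipboard instead
cat ~/.ssh/id_rsa.pub | pbcopy
```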
I'm going to be using Linode for my production application, so be sure to sign up, jump into the console, navigate to your user, and then over to SSH Keys. You may or may not have any keys on here; we're going to add one now. Notice that it says "SSH Public Key", which tells us which half of the pair it wants: the public key. Get its value with cat on ~/.ssh/id_rsa.pub, copy it, and paste it in. I'll give it the label "ai as an api ssh key", mainly so I can find it if I want to delete it later. On most of these services, including Linode, a user can add a bunch of SSH keys for various machines. One more note: perhaps you use a virtual machine to manage your virtual machines; that's completely okay, and actually often encouraged, so that your local machine isn't a potential point of compromise. Anyway, we're going to use our own local SSH key, and now it's in Linode, so the next part is actually provisioning our first virtual machine.

Jump into the console if you're not already there, navigate to Linodes, and hit Create Linode. I'm going to use Debian 10 as my base Linux image, but you can use any Linux operating system you find interesting; I'll just say that with Debian 10 you'll at least be configuring it exactly the same way as me, though I think a lot of this configuration will work on many Linux distributions. Next, the region: while we're learning, the one closest to you right now is great. When you go into real production, make it as close as possible to your end users, which often will be as close to you as possible anyway. I'm in Texas, so I'm going to pick Dallas. Next, select a Shared CPU plan. The cheapest is about five dollars a month, roughly a penny an hour, so while you're learning, a penny-an-hour machine is great. In my case I'm actually going to use the eight-gigabyte one, which is six cents an hour; I'll show you how to shut it down so you're not charged constantly, and we'll do that in a moment. For the Linode label I'll use "ai-as-an-api", and I might add "v1" so I can distinguish it from future ones; you can always add tags as well. For the root password, I'll let Google suggest one, copy it, and store it somewhere, in my case in my environment variables as ROOT_VM_PW, just to make sure I have it and to show you another way to get in. Finally, add the SSH keys we just uploaded, and hit Create Linode. Do note that it says $40 a month, but that's what it would be if you ran it 24/7, 365: $40 a month for the entire year. At literally any time you can come in and delete it, and it only bills you per hour it was running, not per month; if I deleted it right now I might get billed five or six cents, not very much (and hit up the support team if there's some issue with the billing). Do be aware there are dedicated Linodes on here you can run for around four grand a month; the dedicated-CPU and high-memory tiers get quite expensive. Those are quite beefy machines, but I don't imagine you'll run them often, which is why we start with the lowest tiers at about six cents an hour. The reason I'm not using the one-gigabyte plan is TensorFlow: it's a pretty beefy package itself and can't just run on any machine. That's also why I mentioned that a Raspberry Pi would need at least four gigs, if not eight. Keep that in mind as far as memory is concerned.
There we go: our Linode is now provisioned. Now I want to show you something you might do down the line (I'm not going to do it again here). It's as simple as creating an image: we go to Images, create an image, select the Linode, add its disk, and give it a label. I'll call this one "my ai as an api v1" with the description "standard config for my image". It's estimated at about $16 a month to keep, but I'm going to create it anyway. What this does is take a snapshot of my current project: once the image is done, I can hit Create Linode, go to Images, and work off of that image (I have a bunch of old ones as well) instead of the plain Debian image we used earlier. That's a really key and easy way to freeze your progress in its tracks; sure, it's $16 a month to keep on Linode, but you can pick up whenever you need. That's actually what I recommend, because powering a Linode off does not stop your billing; I believe it doesn't reduce it at all, so to stop billing you'd have to actually delete it. And realistically, do not be afraid to delete these virtual machines: real production services that use a lot of virtual machines turn them on and off, and delete them, constantly, and I think Linode makes that pretty easy too. Part of what we're trying to do in this part of the series is set up a system where we can turn these things on and off whenever we want, run a few commands, and be back to where we should be, meaning downloading our models, downloading our code, and turning it into a proper application.

Now let's SSH into this virtual machine once it finishes capturing that image. In my case it's running and ready, so I'll copy the access command right there, open up VS Code, paste the ssh command, and hit Enter. Every time you SSH into a new IP address for the first time, it asks a question: are you sure you want to connect (yes/no/fingerprint)? I'll say yes, and notice it says the host was added to my list of known hosts. Before going further, let's look at that list; I just want you to know about these things because you will run into issues with them every once in a while. Listing ~/.ssh shows the known_hosts file, and running cat on it shows the IP address with its keys. By all means, whenever you feel like it, delete that known-hosts entry; it's okay, it will just ask you that question all over again. To prove it, let's delete it and SSH into the same machine again (press up a few times), say yes, and what do you know, I'm back in my virtual machine. Notice that I just logged in without doing anything else; that's of course because we provisioned the machine with our SSH keys. But I also made a key called delete_me specifically to show what you need to do when you don't add your SSH key, or when you're on another machine: I exit, SSH again, say yes, and now it asks for the root user's password, the one we set when we provisioned it.
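To recap the connection workflow (ssh-keygen -R removes a single host entry from known_hosts, a slightly gentler alternative to deleting the entry by hand as shown above):

```bash
# Connect as root to the newly provisioned machine
ssh root@<your-linode-ip>

# Inspect the hosts you've previously trusted
cat ~/.ssh/known_hosts

# Forget one host so the fingerprint prompt appears again next time
ssh-keygen -R <your-linode-ip>
```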
And there we go: now I'm in two places at once. I can shut this session down with sudo shutdown; it takes a moment (I think sudo shutdown now is the exact command), and it kicks me out of that session. I also want to delete this Linode, since I purposely made it just to illustrate the process, and boom, there it goes. That's the habit you want to get into. I'll tell you, when I was first learning virtual machines I was so hesitant to delete them: I didn't want to lose my work, I didn't want to lose all the things I did. But if you set things up correctly, your work will not be lost when a virtual machine goes away; in fact, you should never really lose your work when a virtual machine, or any machine, dies. A big part of that is Git and version control, which matters a lot for us; the other part is object storage, which we set up in Part 1.

So now we need to get version control working on our virtual machine. We're going to install Git on our local machine as well as our production machine. For the local machine, I'm assuming you already have Git installed (type git to check; if you don't, definitely download and install it). On the production machine it's really simple; I'll break it down. We ssh root@ our IP address, then run sudo apt update and then sudo apt install git. That does two things at once: an update, then an installation. A lot of the time it will ask yes or no, whether you want to continue given the disk space that will be used; we say yes, of course, because we want Git installed. And if it fails, just run it again; every once in a while it fails on dependencies and all sorts of things. I typically run it with the -y flag so I don't have to answer yes every single time, because if I'm hard-coding the command then yes, I do want to install it. To me, apt update and apt install are very reminiscent of pip with Python, but notice that pip is not here: my production machine doesn't necessarily have pip, and doesn't necessarily have Python either. In this case it does have python3, but it still reports that no pip is found, so this is where installations come in: we install python3-pip, and this is where knowing a little more about how a Linux distribution works is good for us. Notice that pip takes up quite a bit of space; it's actually not just pip but all sorts of other packages coming in with it, which you may or may not want to have.
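The package installs from this section, in copy-paste form:

```bash
ssh root@<your-linode-ip>

# Refresh the package index, then install git non-interactively
sudo apt update && sudo apt install git -y

# Debian 10 ships python3 but not pip, so install it separately
sudo apt install python3-pip -y
```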
Now that we have Git installed and pip installing (I'll let this finish), it's time to actually use our code. This whole time I've kept the AI-as-an-API reference code on GitHub, and you can absolutely use it, or use the code you've been working with. In the next part I'm going to clone this code and move it into my own personal project, under my own personal username, not the Coding for Entrepreneurs one; I'm kind of pretending to be you for a moment, and then we'll work off of that code. The reason is that I want to make sure the code I'm working off of is code I can actually change, and code that isn't already being used in production. This AI-as-an-API project has a bunch of code in it that we haven't even talked about yet and maybe never will, because I'm personally going to keep upgrading it for the Coding for Entrepreneurs service; it's an actual production application, so it will diverge a little from what the course code ends up looking like. That's what we'll set up in the next one.

All right, so now we start the process of actually deploying our code into production, which means cloning a project and bringing it onto our machine. I'm going to clone this public project and use it going forward, and you absolutely can too; in the next section I'll show you how to clone a private repository as well, so you have either option, since a private repository may well be what you want. Don't fork this code, just clone it: grab the link itself (it will be in the description, or go to the Coding for Entrepreneurs profile page and find the repository there). Let's SSH into our remote host, and run git clone with that URL, the entire thing, followed by the folder I want to store it in; in this case I'm calling it proj. If I list things out, I see proj. If you don't put a folder name there, it just uses the name of the repo as the folder, so I'll delete that one; I don't need it. Looking inside proj: hey, what do you know, there's my project. Now, if you were following along with me this whole time, you'll remember I ran python3 -m venv . to create the virtual environment, and right out of the gate we get an installation error, because Linux requires you to install the venv package first. That's fine, I totally understand it needs that, but are you always going to remember it? My intuition is no, because I don't always remember it, and I do this all the time. I don't expect you to remember every single command needed to build your environment, which is why we're going to use Docker; Docker is what we'll use to run our environment. But we can still go through the process of installing everything, because it gives us the steps we'll need for that installation. So we keep going and create the virtual environment, and notice it's Python 3.7; I was using Python 3.9, so it's already the wrong version of Python. If I source bin/activate and then pip install -r requirements.txt, hopefully all of these things work (depending on when you're watching this, there's a really good chance you'll see a Dockerfile in the repo already; that's something I'll definitely build with you in the video). Installing everything takes a while, and this is where it would almost certainly fail if you didn't have a big enough machine: the pip installations, maybe with pandas, maybe with Jupyter, but almost certainly with TensorFlow, which would just time out or run out of memory altogether.
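Here's the sequence that walkthrough amounts to; I'm assuming the public course repository's clone URL, so substitute your own if it differs:

```bash
# Clone the reference code into a folder named proj
git clone https://github.com/codingforentrepreneurs/AI-as-an-API proj
cd proj

# Debian needs the venv module before python3 -m venv will work
sudo apt install python3-venv -y

# Create and activate a virtual environment in the project root
python3 -m venv .
source bin/activate

pip install -r requirements.txt
```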
So again, this is not the method I'll necessarily use long term, but it is a method I can use; Docker just makes it a lot easier, as we'll see, to remember how everything needs to be installed and how everything needs to run. Let's look at private cloning first and then get back into the Docker-related stuff; I'm not even going to let this pip install finish, because we're not going to use it. So let's exit out of here and get that private repo going.

Now, cloning a private repository. The first question is: how do I even make a public repository private? It's actually pretty simple on GitHub. I'm going to copy this link again and log in as a different user; in my case I'm logged in as teamcfe, not codingforentrepreneurs (I'll check my profile just to make sure). Now click the plus sign, go to Import repository, paste that repository URL in, import it as "private-ai-api" or something like that, and select Private. This is certainly not an option when you fork, and this of course is not a fork: when changes happen to the original repository, you won't know about them; that's how forks differ, since forks give you the opportunity to track them. I'll begin the import, which brings all this code over, and that's great, but of course it is now a private repository. If I go into my virtual machine, the git installed there isn't actually associated with any GitHub account. I certainly could associate it with one, but do I want to? Well, yes and no; in my case I'm going to say no, because of how I want to manage any given virtual machine. I want to think of virtual machines as things as disposable as possible. I don't think of them like my local computer; I think of them like a browser session. I have Chrome open right now, a session of Chrome: if I close that session, it's gone; when I start it back up, it's a new session. I think of VMs in those terms, not in terms of actual hardware I'm quote-unquote destroying; again, let's not be afraid of deleting anything. So now I've got this private repo. I'll copy its URL and open it in an incognito window, where I should get a 404 page, which shows me it really is private. And if I try to clone it now, we can see what happens: inside our virtual machine I run git clone with that new repository URL and hit Enter, and it asks for my username, which is another way of saying "we sort of failed here". You can absolutely authenticate interactively; that's fine, but I want a somewhat more sustainable method. So let's hop back into the GitHub account: go to Settings, then Developer settings, click Personal access tokens, and generate a new token. I'll leave the expiration at 30 days and call it "ai api access token", just to reference what it is. I can limit its scope, which is definitely a good idea; I'm going to grant just "repo".
What I will not allow it to do is delete packages or delete a repo; that's probably a good idea, just don't let anyone delete anything. We generate this token, copy it, and jump back into the virtual machine, where I run export TOKEN= followed by the token. The reason I do that is so I can reference the token later by name: it's exactly the same token. Which means I can now build the git clone command. Going back to my repositories and my private one: the plain URL isn't going to work, so we use just part of it. First it's https://, then the token itself, which we put in with the variable substitution Linux does: a dollar sign and curly brackets around the TOKEN variable name (the one we exported), then a colon, then x-oauth-basic, then the @ sign, then the rest of the URL; and I'll clone it into a folder called private_proj. And there it goes: cloned into private_proj. If I cd in and run git pull, it's already up to date. So let's make an arbitrary change on GitHub: a file called "testing" containing "my new file!", committed as "new file", and now git pull brings that file down. So this is a private repository we were able to clone. Now, every once in a while you might think you have to reuse that token or re-clone; you probably shouldn't have to, because it keeps using the token going forward: if we look at git remote -v, we see the token is included in the remote URL. So if the token does expire, you just change the remote URLs to include the new token; you don't have to re-clone, just update the token in those URLs. That's not something we're going to cover right now, since the private repo isn't the focus; I just want to make sure our public one works, and that you can use a private one whenever you want to.
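A condensed sketch of the token-based clone, with the repository path as a placeholder; git remote set-url is the standard way to swap in a rotated token, mentioned but not demonstrated above:

```bash
# Keep the personal access token in a shell variable for this session
export TOKEN=<your-personal-access-token>

# Clone over HTTPS, authenticating with the token via x-oauth-basic
git clone https://${TOKEN}:x-oauth-basic@github.com/<your-user>/<private-repo>.git private_proj

# The token is embedded in the remote URL...
git remote -v
# ...so if it rotates, update the URL instead of recloning
git remote set-url origin https://<new-token>:x-oauth-basic@github.com/<your-user>/<private-repo>.git
```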
Okay, cool. Now I'm going to install Docker on our virtual machine. You absolutely can install it on your local machine too, and I recommend you do if you can (not everybody can), but at the very least we can definitely do it on a Linux virtual machine. The idea here is pretty simple and boils down to this: on my virtual machine, with a Python virtual environment activated, python -V reports Python 3.7. Hey, it's Python 3; so far so good. On my local machine, with that project's virtual environment activated, python -V reports Python 3.9. Uh oh. There are so many applications that would probably run just fine on both, but when we're dealing with machine learning, and specifically TensorFlow or PyTorch, your environments might not work correctly purely because of that Python version, and that's only one simple thing. The question, of course: on a virtual machine, why not just install a new version of Python? That would work, but what if you want to test multiple versions of Python? What if you want to test several other kinds of applications? Maybe you don't want to use Ubuntu anymore and want Debian, or vice versa. There are a lot of reasons to use Docker, and I could talk about them all day, but the idea is this simple: we want our environments to be identical. I mentioned a long time ago that virtual environments separate our Python packages from other Python applications, but they don't solve this problem: if you didn't actually check the Python version, you'd assume "oh, we're good, we've got virtual environments in both places", and that's not enough. So let's close out the local one and be in our main project on the virtual machine, in its root. First and foremost, we're installing Docker as a new Linux package. Run sudo apt-get update -y, and then we use the script at https://get.docker.com: the command is curl -fsSL https://get.docker.com, with the output written to a file called get-docker.sh. We can actually look at get-docker.sh: it's just a shell script that does all kinds of things to get Docker installed; it verifies what machine you're on, what distro you're on, and does everything needed to get Docker working on your Linux machine. Then I run the script with sudo sh get-docker.sh (sh or bash, either one), and this will take some time to install Docker for us. In my case I also have Docker on my local machine, so typing docker there shows it running; on Linux and Mac you can get Docker no problem, while Windows users might need a professional edition of Windows to get the virtualization Docker needs. One more thing worth noting: you can have a virtual machine controlling another virtual machine. In other words, you could be using a virtual machine right now that controls your production application, again over SSH, just like we got in here; that's something to consider if you want a Linux local environment. In any case, I now have Docker here: docker ps shows it, or simply docker. The final command I usually run after installing things is sudo apt autoremove -y. I'll highlight one more reason we use Docker in the next section, but overall Docker is highly, highly recommended for isolating our applications, no matter what application or programming language it is, Python, Ruby, Node.js, pretty much anything open source; it isolates it very nicely, and then we can move it around a lot more too.
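The install sequence from above, collected in one place (get.docker.com is Docker's official convenience script):

```bash
sudo apt-get update -y

# Download and run Docker's convenience installer
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Verify the daemon responds
docker ps

# Clean up packages that are no longer needed
sudo apt autoremove -y
```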
So let's see an example of it being moved around. Hopefully at this point you're already convinced that Docker is probably a good choice for production, but I want to show you an actual working example with nginx. If I try systemctl status nginx, I get "nginx.service not found": I have not installed nginx at all on this virtual machine. So now I'll just say docker run, exposing port 80 mapped to port 80 (I'll explain that in a second), and run nginx. Right off the bat it says it can't find nginx, so it starts downloading it; it literally does everything it needs to do to run nginx, and it happened pretty fast for me. I go back to my IP address, open it in a new window, and: "Welcome to nginx!" It's now running, in seconds. If I hit Ctrl+C, that stops it from running; run it again and it's back. What I can also do with Docker is add a restart policy: --restart always. That means the application will always restart, assuming it's in detached mode, and -d puts it into detached mode. So that's the whole command; I hit Enter, and now I can't just Ctrl+C to cancel it, but if I look at the IP address, it's still running. What's happening is that Docker is running this in the background; that's what detached mode is. With docker ps I can find it, and I can stop it with docker stop (we will absolutely use these commands again, so don't worry about copying or memorizing them); that stops nginx from running in the background. But what I want is for it to always be running. When I say always, I mean: going back to our Linode console and hitting Reboot, our system should go down, nginx should go down, and I should get kicked out when it actually reboots. That is of course the common problem for applications: if the system goes down, the app is no longer accessible. What's important is that when the system comes back up, it's still running everything it should be running. There are many ways to make sure of that, but what I just showed you with Docker is one that should work, or at least let's test it so we can be sure. Okay, it finished booting; I'll refresh (of course, when it reboots it won't necessarily come online right away), and while that loads I'll SSH back in and run docker ps: hey, what do you know, the container is up, up about seven seconds, and if we refresh, the page is back on. We could also test a full shutdown and see if it comes back when powered on, so: power off. And notice, right away, that if I turned this machine into an image, machines made from it would probably already have Docker running; in other words, Docker would boot up already. I'll let you finish shutting it down and bringing it back up to verify, but that's a really simple, very practical example of nginx and Docker, and it shows you the power of using Docker on its own. Now, we totally could combine nginx with our web application; that's not something I'm going to do in this one. It's not really that hard, but it adds complexity to a maybe already complex system.
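The nginx demo commands from this section, for reference:

```bash
# Run nginx in the foreground, publishing container port 80 on host port 80
docker run -p 80:80 nginx

# Run it detached, with a restart policy so it comes back after reboots
docker run --restart always -d -p 80:80 nginx

# List running containers, then stop one by its ID
docker ps
docker stop <container-id>
```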
What we want is to get our own application running here, and to do that we need to create what's called a Dockerfile. All right, so back in our local project, we'll create our Dockerfile. The Dockerfile is just a single file with a bunch of steps in it that Docker needs to take to configure the Docker environment correctly; that's it. It's not a whole lot different from what we've been doing manually with our virtual machine; let's look at what I mean. First off, at the very top we use some sort of base image (I'll explain that in a moment); then we'll probably copy some files; we might change our working directory; then we might run some commands; and then we might have one final command that actually runs our web app. At a baseline level, that's our Dockerfile, and if we think about it, we've been doing this on our virtual machine too. The base image: what does that mean? Going back to Linode, when we hit Create we picked a Linux image. This isn't much different, except I can declare something like Python 3.9 or 3.8 and get exactly that image: Docker, the company behind the open-source container system, made one for Python, and there's one for Debian, one for Ubuntu, one for all sorts of different languages. I'll use the Python one for now, just for a second. Then we copy whatever files we have available, and I would say you should be very explicit about the things you want. For example, I want to bring in app: COPY ./app /app takes the app folder, this one right here, and copies it to /app. Again, we're treating this Dockerfile very similarly to how we treated our virtual machine: there, I cloned code from GitHub and saved it somewhere, in my case /root/proj and the other one /root/private_proj; I'm doing something very similar here, just with the local code rather than the GitHub code itself. Now, the working directory I want is really just /app, which means the RUN command can do something like python -m pip install -r requirements.txt. Or can it? The app folder is being copied, but is anything else? requirements.txt is way down here, outside it, so that's another thing to copy: COPY requirements.txt /app/requirements.txt, and now the command can work. We could keep going down this path of everything I need to install, but I've actually already created a base image that's easy to use. First and foremost, if you go to the GitHub repo, look for docker-python and the Python 3.9 web app images; in there I have a bunch of base images that are easy to just implement. In our case I'm going into the Cassandra one, and in its Dockerfile, here's that base image: notice it's running apt-get update and apt-get install, installing all sorts of things, including the Cassandra driver. So we could do all of those things ourselves.
Or I can go to Docker Hub itself and use one of the base images there, like 3.9-webapp-cassandra; that's the one I'm going to use. To use it, I copy the docker pull command, come back into my Dockerfile, and write FROM with that image pasted in; docker pull is a docker CLI command, and here I want just the image name itself. There are all kinds of images on Docker Hub; this is just one I made specifically for this project, so it's really easy to run and we don't have to remember all of the commands. And if you want to know what all of those commands are, you just go back to the Dockerfile for that image, the 3.9-webapp-cassandra one (there's a reason for that naming). If I wanted to use it directly, I could copy that Dockerfile's contents and paste them in here; FROM just shortcuts it, which makes things a little easier for my applications in general. So now that I've got the base image and my app copied in, the next thing is running the installation, but I'm going to do it slightly differently: I'll have python3 create a venv, a virtual environment, at /opt/venv. That's actually a very common place to create a virtual environment in production; it's where the root of the virtual environment lives, not where my project is (the project is going to be in /app; we'll take a look at this in a moment). Then I'll run /opt/venv/bin/python -m pip install -r requirements.txt. The next RUN command I'm going to leave out for a moment, and the final thing, the command that actually runs the project, I'm not going to write yet; I'll come back to this Dockerfile. I just want to get some of the baseline stuff working before I go any further.
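Putting that together, the draft Dockerfile looks roughly like this; the exact base-image name and tag are my reading of the video, so double-check them on Docker Hub:

```dockerfile
# Base image with Python 3.9 plus the Cassandra driver dependencies preinstalled
FROM codingforentrepreneurs/python:3.9-webapp-cassandra

# Be explicit about what gets copied into the image
COPY ./app /app
COPY requirements.txt /app/requirements.txt

WORKDIR /app

# Create a virtual environment at /opt/venv and install dependencies into it
RUN python3 -m venv /opt/venv && \
    /opt/venv/bin/python -m pip install -r requirements.txt
```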
For now, I'll do git status, git add, and git commit ("33 add dockerfile"), then merge it to main: git checkout main, git merge 33-end, then git push origin 33-end and git push origin main; that's just how I use branches so our lessons stay consistent throughout. Now, looking back at our original code on GitHub, scrolling down a little, I have my Dockerfile from 33 seconds ago, and there it is: an incomplete Dockerfile for running our Python application, but it should at least do something for us. In other words, going back into my virtual machine: grab the IP address from Linode, ssh root@ it, cd into proj, and git pull origin main; this should pull that Dockerfile, and sure enough, there it is. I want to show you how to actually use this Dockerfile. We'll build it in just a moment, but the idea is: the Dockerfile is there, ready to go overall, and all of these commands are ones I could run right now, or at least this RUN line, so you can see it works on my virtual machine. So let's do that: I'll copy that command; I'm in the root of my project, which is important, and it has the actual requirements.txt, and notice that it runs; it's absolutely installing. So whenever you're in doubt, or you have issues with your Dockerfile, hop into a Linux virtual machine and run each command in it. In my case I already had this code in here, which is why I was able to test at least this command, but that's one way to test directly from the Dockerfile itself; and the Dockerfile alone could be your reference file for getting the application running, even if you never end up using Docker. It can be an ongoing reference, and yet another reason to dive a little deeper into Docker.

Next we build the image itself. It goes: docker build, then we tag it ai-api (you definitely need a tag; that's what -t is for), then -f Dockerfile, and then a period. Before we run it, here's what's happening. First, we're calling the build command and tagging the result. Second, -f references this specific Dockerfile; if you had a different name for the Dockerfile, you'd reference whatever that is. And the period means "build from everything in this location", which of course matters because the Dockerfile has some COPY commands in it. So let's run it: docker build, the tag, the Dockerfile, and the period. First and foremost, it extracts the base image, all of the layers that make it up, all available from Docker Hub, which is where it's stored as username/repository:tag. That's true of any image on Docker Hub: even for python it would be the official Python repository; in this case it's the codingforentrepreneurs repository with the tag I gave it. It will take some time to build this thing out; the first build takes longer than future builds, depending on the commands we may or may not add (it's certainly possible that in the future it will have more commands and take longer). What we see is it working off our requirements: copying them, creating the work directory, creating our virtual environment, and then running all of the necessary installations in the container image itself. That's what the build command does: it builds a container image we can run at any time, just like we saw before with nginx, where all it did was download a bunch of stuff like this did, and then simply ran it. If I built this image and stored it somewhere else, not on this virtual machine, I could absolutely access it from there; in other words, if I put this entire project on Docker Hub, that would be the image I'd pull and run. But I want custom code: I want to change things and have it be specific to my needs right here, not just a general package.
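The build-and-tag commands from this section in one place (the v1/v2 scheme is just the versioning convention suggested above):

```bash
# Build the image from the Dockerfile in the current directory, tagged ai-api
docker build -t ai-api -f Dockerfile .

# Omitting a tag suffix implies :latest; explicit versions let builds coexist
docker build -t ai-api:v1 -f Dockerfile .
docker build -t ai-api:v2 -f Dockerfile .
```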
This will take some time to download and install, which is another reason we have the Docker version going, but also the reason we provisioned the bigger machine: TensorFlow, right there, is a pretty big package, so it takes a while to download, and that would have been just as true if you tried to download it straight onto your virtual machine. While it finishes: every time we make a change, I can run docker build again (notice I'm on my local machine here, just explaining), and I can also tag versions: v1, then v2, v3, and so on, continuously, so I keep a bunch of runnable versions of this project. That's something you might do at some point in the future. We are certainly not done with our Dockerfile, and I have things to update before going much further, but once it finishes building I want to see the actual file system in there and the different commands I can run, so I'll let it finish and then we'll take a look.

All right, it finished building, and notice it says it built and tagged "latest": if you don't add a tag, it's just considered the latest tag. So now I can run it, going off of whichever tag I want to use; I just want to see the file system, so it's docker run -it ai-api /bin/bash. In the future we won't necessarily run all of this, though we'll definitely still need the tag; /bin/bash is essentially like SSH-ing into the Docker container (it's not the same, but it gets us inside the app). Notice the prompt changed a little: listing things out, I've got app and requirements.txt, quite literally the things we told it to bring in, which of course is different from what's on the virtual machine itself. Now, if I want to run any of my virtual environment commands: the virtual environment, if you remember, is at /opt/venv, so I can activate it and run python -V, and now I've got the correct version of Python, at least 3.9 (the last part of the version matters too, but 3.9 is the key thing here). Going into /app, where I actually copied all of my stuff, I can see the app itself, cd into it, and I should be able to run uvicorn: uvicorn app.main:app --reload, and this should, in theory, run our app. I have environment variable problems, so that's something I need to address, of course, but it is able to essentially run the app; the environment variable piece will come in through our Dockerfile once we get that set up correctly. Overall, we now have the Dockerfile potentially running on our virtual machine; when I say potentially, I mean the actual app itself isn't running yet, but the Docker container is running on that virtual machine, and we can exit out of it just like that.
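The in-container checks from this section, collected:

```bash
# Drop into a bash shell inside the image we just built
docker run -it ai-api /bin/bash

# Inside the container: activate the baked-in venv and confirm the version
source /opt/venv/bin/activate
python -V   # should report 3.9.x

# Try the app directly (environment variables are still missing at this point)
cd /app
uvicorn app.main:app --reload
```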
Okay, so now we need to add in our environment variables. This is actually a fairly straightforward process, but it's something we should really think through. There are two pieces to it: one, our actual virtual machine needs them, and two, our Dockerfile also needs them. In the Dockerfile, COPY .env /app/.env is exactly what we want (in fact, we don't need the ./ in front of these paths at all), and we certainly need it in order to run our application. Before I add the environment variables, I also want to add another script file, which I'll call entrypoint.sh; the reason is so I can actually run my application itself. Inside the entry point, I declare #!/bin/bash, so this is a bash script, and it runs /opt/venv/bin/gunicorn; that /opt/venv/bin is much like our local virtual environment's bin folder, except it's based on the Dockerfile, which is what creates the virtual environment at /opt/venv. For gunicorn we add a worker temp dir of /dev/shm, which is a very common thing to do, and then the worker class, which is a uvicorn class: uvicorn.workers.UvicornWorker. Then we bind it to 0.0.0.0 and the port we want: I can take a standard incoming port or have a default, so I'll set RUN_PORT to the PORT environment variable, or set it equal to 8000, and use that. So that's another command, and the Dockerfile needs two things: the environment variables and this entry point, which of course I also need to copy: COPY ./entrypoint.sh /app/entrypoint.sh. A couple of things we definitely need in production for this to run. One more thought on copying: you might be tempted to just do COPY . /app, which would copy everything in your current working directory, literally everything in here. I think it's better to go one by one through the things you for sure need; I copied the entire app folder because I know I'll probably need it in production. Just a little thought on that one. So now we have a new Dockerfile: I'll do git add ., git commit with "add environment variables" (basically), then git checkout main and git merge (looks like I was using the wrong branch, but that's okay: 34-end), and git push origin main. Of course, looking in our GitHub repo we should now see it: our Dockerfile is there, the .env is still not showing up (it's ignored), and our entry point is in there as well; great, both of those are working. I did realize we also need to update our requirements, because the entry point uses gunicorn and I didn't think I ever added it... oh, looks like I did. Okay, good; I just wanted to verify it was there, because it definitely needs it.
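Assembled from the dictation above, the entry point script looks roughly like this; treat it as a sketch of the described script rather than the course's verbatim file:

```bash
#!/bin/bash
# entrypoint.sh -- production web process for the container

# Use the PORT environment variable if set; otherwise default to 8000
RUN_PORT=${PORT:-8000}

# gunicorn from the image's venv, with uvicorn workers for the ASGI app
/opt/venv/bin/gunicorn \
    --worker-tmp-dir /dev/shm \
    -k uvicorn.workers.UvicornWorker \
    --bind "0.0.0.0:${RUN_PORT}" \
    app.main:app
```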
So now the thought process is: we need to get our .env file into our production application. Let's navigate there: grab the IP address, ssh root@ it, cd into proj, and make the environment variable file with sudo nano .env. I'm literally copying my local .env contents in; I don't need the root password line, but I'll keep everything else. This would actually be a really good time to rotate all of these environment variable values, and also a good time to mention that in the long run I would not hand-write the .env file here; instead I'd use something called a CI/CD pipeline to configure all of my virtual machines. There's a lot of complexity that goes into that, so I want to keep it really simple by copying and pasting the environment variables for now. At this point I should be able to run the Docker build again: docker build -t ai-api:v2 -f Dockerfile . It built really fast this time, but I hadn't actually changed anything: I need to git pull origin first. If I do git status in here, a few things have been modified locally; that's okay, I'm not going to worry about that in my production environment, and I don't see .env listed even though cat .env shows it's definitely in there. Okay, cool. So again I'll build this new version, and now it takes a little more time; not nearly as much as before, but some. I did catch something I should have had in my Dockerfile, and that's making the entry point executable: RUN chmod +x entrypoint.sh. It certainly might still work without it, but there's a good chance it won't, so whenever we have scripts we need to run, we make them executable (and I'll actually move that line up a bit). The installation part of the Docker build will probably take the longest, but now that I've added the chmod I'll do git status, git add, git commit "updated entrypoint in dockerfile", and git push origin main, so of course our production app can now update too. What I just showed you is the process you'll go through from here on out with your project: you work on some code locally, you push it to your GitHub repo, and then on whichever virtual machine you're running, you run a few commands. You'll still have to do some manual work; there are tools that automate all of this completely, like the CI/CD tools I mentioned, and GitHub Actions itself can run this. The production version of what we have does have Actions that spin up a new infrastructure item, destroy it, update it, all of those things, and that's all done through something called Terraform. Terraform is a really, really cool tool; let me know if you want to learn more about it. I'm not going to cover it in this series, but it's something you might consider for automating all of this configuration and the deployment process, because, as you can see, it's not fun to sit here waiting for a build to finish just so I can run a few more commands.
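Collecting this section's Dockerfile changes in one place; the CMD line is my assumption about how the entry point gets wired up, since the video hasn't shown the final run command yet:

```dockerfile
# Ship the environment variables and the entry point script with the image
COPY .env /app/.env
COPY ./entrypoint.sh /app/entrypoint.sh

# Scripts must be executable before the container can run them
RUN chmod +x entrypoint.sh

# Assumed wiring: run the entry point when the container starts
CMD ["./entrypoint.sh"]
```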
And honestly, it's not fun to sit here waiting for a build to finish just to run a few more commands. Once things are mostly configured it won't take this long, but it is something that has to happen on a regular basis: every time you push new code, you go to your production machine and apply the changes. I'll let the build finish, and then we'll go back into the Docker image's bash shell to test a few more commands. Okay, it looks like it worked fine. First: docker run -it on our most recent version, with /bin/bash. I'm in, and the first thing I want to do is activate the virtual environment: source /opt/venv/bin/activate. If I do pip freeze I see everything installed. Next I'd like to run one of my pipelines, but I don't think I copied them in, so that won't work. Let's try uvicorn app.main:app --reload instead, and sure enough there's a problem: I don't have my AstraDB connection in here, and I don't have my models either. If I look inside the app directory, nothing in there would contain my models, and actually in my local project the models aren't even inside the app folder; I forgot about that. So I definitely need to bring in the pipelines and get our models downloaded and working correctly. I'm going to bring in my whole pipelines folder for the pypyr commands; I intend to add more pipelines, which is why I'm copying the entire folder and not just one file. Up in my Dockerfile's copy section: COPY ./pipelines /app/pipelines, simple enough. Then we run it off of /opt/venv/bin, and I'll use python -m pypyr just to make sure I'm using that virtual environment's pypyr and not somehow a different version, pointing it at the pipelines directory and the specific pipeline I want: ai-model-download. Notice that I'm downloading the model directly into the Docker container while it's being built. This is not necessarily the most effective or efficient place to do it, but I'm keeping it simple for this deployment rather than going deeper into Docker; I think we've already gone deep enough into Docker, and there are definitely ways to improve this, which is kind of the point. It should work for us and get the necessary model download going. Let's update the repo: git status, git add ., git commit -m "36 added pypyr pipelines", git checkout main (oops), git merge 36-end, git push origin main. So we made a change, committed it, pushed it. Now into our virtual machine: copy the IP address, ssh root@<that address>, cd into proj, git pull. The Dockerfile was updated, there it is, and the pipelines are on the remote server as well.
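The two Dockerfile lines just described would look roughly like this; the pipeline name ai-model-download comes from part one of the series, and the /opt/venv path is the virtualenv the Dockerfile creates:

    COPY ./pipelines /app/pipelines
    RUN /opt/venv/bin/python -m pypyr pipelines/ai-model-download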
Cool, so now I'll give this a shot. One thing I could have done locally is test that the pipeline's relative paths are correct, but I'll leave it this time. docker build -t ai-api -f Dockerfile ., keeping the tag nice and simple, and... now I've got no entry point? That's odd. Looking at the Dockerfile itself: ah yes, the working directory was set in the wrong location. A simple fix, so let's update it: git status, git add ., git commit -m "updated workdir location", git push origin main, then in production git pull origin main (or just git pull) and run the build again. Now it should work; we'll give it a minute. I really want to see the pipelines run, but the good news is they're definitely being copied in, and all the requirements have to install before the pipeline runs. That ordering makes a lot of sense to me, and hopefully to you as well: any given pipeline runs after the requirements are done. Okay, great, it finished, so we should check that the models are in there: docker run -it ai-api /bin/bash, and I can see the models folder now, with spam-sms inside, so we've got everything related to this model. It's definitely working, and that's another reason to use these pipelines: it's really, really simple to just run one. So the last and final piece is the connection: app/ignored holds the AstraDB connection bundle, and we need to get that in here as well. One method would be to use a pipeline to upload the bundle to object storage of some kind and then download it in production. The thing I don't love about that is that it's not encrypted. What I actually want to do is encrypt the bundle, check it into git (or wherever), and then decrypt it in our production application. That's the remaining step as far as building the application itself is concerned; then we just need to run it and expose it to the world. Fairly simple, but it's something we'll do in just a little bit. So now we're going to secure our AstraDB connection bundle. If you unzip the bundle on your local machine, you'll see there are certificates, keys, a truststore, all sorts of sensitive material in there that we want to secure (delete the unzipped folder when you're done looking). Now, I will say this bundle is probably useless without the two keys in your environment variables, but why not encrypt it if we can? Doing so is actually not that hard thanks to the cryptography package, an open-source package for modern cryptography. I am not a cryptography expert by any means; that's why I use this package, and I'm going off of the assumption that it adds a real layer of security here. At the very least, I'd offer that some encryption is better than no encryption, and that's what we're going to do.
Before we implement it in the project, though, I want to use a Jupyter notebook to see how this is going to work. Open up Jupyter if you don't already have it running, go into your notebooks directory, and create a Python 3 notebook called encryption. If you need to, run pip install cryptography; you probably will. We should also add cryptography to requirements.txt while we're at it. (As an aside, in a notebook you can run pip with either the exclamation mark or the percent sign; it doesn't really matter, some people just prefer one over the other.) What we want here is Fernet, so we generate a key: from cryptography.fernet import Fernet. Creating a key is simple, just Fernet.generate_key(), and notice that the result is a byte string. That's not great, because I want to put this key in my environment variables, so to turn it into a plain string we decode the byte key as utf-8; I'll call the result my_key. So how do I actually encrypt things? First, let's set up the paths: import pathlib, then BASE_DIR should be a couple of levels back, pathlib.Path().resolve().parent, which should be my project root (getting pathlib right inside Jupyter is not always the easiest thing, so verify with .exists()). Then APP_DIR is BASE_DIR / "app"; make sure that exists too. What I want next is my IGNORED_DIR, which is APP_DIR / "ignored". "Ignored" really just means ignored by git, not something I want to ignore in the long run. Then I'll create an encrypted directory, or let's call it a secure directory: SECURE_DIR at APP_DIR / "encrypted", and a DECRYPTED_DIR at APP_DIR / "decrypted". So we've got our inputs: the ignored directory is the input, the secure directory will itself be an input at some point, and decrypted is the final output. How do we actually encrypt something? That's the first question to ask. We loop through every path in the ignored directory, using glob to get all the files in there, and print out each path. I'm going to encrypt every file in here, yes, even .DS_Store. With a .gitignore file this probably wouldn't show up in the future anyway, and I could be very explicit about which file to encrypt, but the idea is that I want an entire directory where whatever is inside gets encrypted; I don't want to maintain a bunch of conditions for files that shouldn't even be in there. (.DS_Store is something that pretty much only happens for Mac users; on Windows or Linux you probably won't see it.) So what I want to do with each of these local paths is encrypt the file and store the result in my secure directory.
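In notebook form, the key generation and path setup described above might look like this; the directory names are the ones chosen in the walkthrough, and the .parent hop assumes the notebook lives one level below the project root:

    import pathlib
    from cryptography.fernet import Fernet

    # generate a Fernet key and decode the bytes to a utf-8 string,
    # which is easier to store in an environment variable
    my_key = Fernet.generate_key().decode("utf-8")

    BASE_DIR = pathlib.Path().resolve().parent    # project root
    APP_DIR = BASE_DIR / "app"
    IGNORED_DIR = APP_DIR / "ignored"      # git-ignored originals
    SECURE_DIR = APP_DIR / "encrypted"     # encrypted copies, safe to commit
    DECRYPTED_DIR = APP_DIR / "decrypted"  # decrypted output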
To do that, I get a relative path for each file: path.relative_to(IGNORED_DIR), meaning the file's path relative to the source directory. Print that out and drop the original path. The reason I'm using a relative path is so I can have other files in here as well (the glob would have to change for other kinds of files), and whatever the relative path is, that's what I'll mirror in my encrypted output. The destination path, then, is the secure directory joined with that relative path; I'll call it the destination path and print it out too. So the input path is the original, and the destination path is its mirror. Now, why am I doing this? Because we need the key: this key will encrypt all of these files (encrypt, not sign). We use it through a Fernet instance, so fer = Fernet(my_key). You'll very often see this variable named f, but I don't like that simply because of f-string substitution; I think a lot of people get confused by it. With fer, I can read the bytes of any given file: path_bytes = path.read_bytes(), which is the pathlib way of opening a file and reading its bytes. Since I have those bytes, I can encrypt them: data = fer.encrypt(path_bytes). That's the new data I want to store at the destination, so it's simply dest_path.write_bytes(data). Pretty simple. Let's run it. Oops, I didn't initialize fer; try again. Now we get "no such file" for the encrypted directory, because the secure directory doesn't exist yet, so create it: SECURE_DIR.mkdir(exist_ok=True, parents=True), and we'll do the same for our decrypted dir once we get there. Run it again and it processes all of the files. Now repeat the same idea, this time using the secure directory as the input, printing out each path: both files are there (in your case it might be just one; .DS_Store might not exist for you). If I go into the project and open the encrypted AstraDB connect bundle, it contains different data now; if I open Finder, go into that encrypted folder, and try to unzip it, I get an error. It cannot be unzipped, which of course is because we just encrypted it. To decrypt, we basically do the exact same thing, except the input is the secure directory and the output goes to the decrypted directory. We could write back to the original directory, but I think that might cause a problem with your data, so let's not. The paths are relative to the secure directory now, and instead of encrypt we use decrypt; the destination path is built from those two pieces, the destination directory should already be there, and there we go, run it.
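Put together, the two notebook loops look roughly like this sketch, building on the paths and my_key defined above:

    fer = Fernet(my_key)

    # encrypt: app/ignored -> app/encrypted
    SECURE_DIR.mkdir(exist_ok=True, parents=True)
    for path in IGNORED_DIR.glob("*"):
        rel_path = path.relative_to(IGNORED_DIR)
        dest_path = SECURE_DIR / rel_path
        dest_path.write_bytes(fer.encrypt(path.read_bytes()))

    # decrypt: app/encrypted -> app/decrypted
    DECRYPTED_DIR.mkdir(exist_ok=True, parents=True)
    for path in SECURE_DIR.glob("*"):
        rel_path = path.relative_to(SECURE_DIR)
        dest_path = DECRYPTED_DIR / rel_path
        dest_path.write_bytes(fer.decrypt(path.read_bytes()))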
Now it should have decrypted that data: here's that bundle again, and if I open it up from the decrypted folder, it unzips just fine. All of this, of course, is thanks to that one key. And yes, we can absolutely use different folders, and we can also add a bunch of files to any given folder; let me copy in a few extra connect bundles (I'll delete them and keep just the one in a little bit). Run the encrypt loop again and, inside encrypted, there they all are; run decrypt and there they all are again. It's a pretty cool and simple way to do directory encryption. Of course, one of the biggest downsides is that everything hinges on this one encryption key. If you were to change the key and then try to decrypt: let's delete the outputs, change the key, re-initialize the Fernet, and run decrypt again. We get a signature-did-not-match error. This is where you might get into some issues, and it's why I have three different folders: ignored is the original, encrypted is the only thing I will check in and ship to production, and decrypted is the only thing that will actually be used by our app. In other words, in the app itself, where we load the cluster bundle, there will be one more step that looks for the decrypted version, because we certainly want that in place. Before we go on, let's clean up: delete those output folders and the extra ignored connection bundles, because now we want to take what we did in the Jupyter notebook and turn it into real functionality, an actual Python module whose methods we can call easily. So I'll go to the encryption notebook, make a copy, and call it encrypt-module (no .py at the end yet, just encrypt-module). I already have cryptography installed, so I'll skip that cell and just rearrange things a little so it reads like the module itself. What I probably won't need inside any given method are the hard-coded paths. Let's start our first function: def encrypt_dir(input_dir, output_dir), no big surprise there, and it takes in everything we wrote: the Fernet setup and the loop, tabbed in. The input directory is what gets looped through, and I make sure to convert both input_dir and output_dir into pathlib.Path objects. And since we saw the loop fail when directories weren't created, the function creates the output directory itself: output_dir.mkdir(exist_ok=True, parents=True). So there's our encrypt_dir. The key itself, though, is not great hard-coded in the notebook.
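That signature-mismatch failure is worth seeing in code. With the cryptography package, decrypting with the wrong key raises InvalidToken; here's a small self-contained sketch of the failure mode just demonstrated:

    from cryptography.fernet import Fernet, InvalidToken

    right_key = Fernet.generate_key()
    encrypted_data = Fernet(right_key).encrypt(b"cluster bundle bytes")

    wrong_key = Fernet.generate_key()        # a different key entirely
    try:
        Fernet(wrong_key).decrypt(encrypted_data)
    except InvalidToken:
        print("signature did not match: wrong encryption key")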
What we want is for the key to come from the OS environment, so we'll put it into our environment variables. I'll define a method called generate_key that just returns that Fernet key string; I can run it basically one time (insert a cell below, since I won't have to run this very often, and make sure every cell has actually been executed). There's my encryption key, so let's remove it from the code, jump into .env, and set ENCRYPTION_KEY equal to it, no spaces. Now the key the function uses is os.environ.get("ENCRYPTION_KEY"), read near the top: key = that value, and if not key, raise an exception: "encryption key is not found". Let's give it a shot: call encrypt_dir with our directories, passed in as strings; they don't need to be pathlib paths, since the function converts them. So the ignored dir string in, the secure dir string out. Run it, and we get a NameError: ignored_dir is not defined. Of course it isn't, so I update the function body to use input_dir for relative_to and the rest (this is one of the gotchas of copying out of a Jupyter notebook: you need to make sure every cell has been re-run). There we go: now "encryption key is not found", no surprise, because we never actually loaded the environment. So: from dotenv import load_dotenv, then run load_dotenv(). Run again and... ah, it actually loaded. load_dotenv is a really cool thing; I thought I'd have to restart the kernel, but I did not. Now it runs using the env-based key. Back in our app folder: hey, what do you know, it created decrypted too, but that has more to do with a leftover cell, so I'll delete encrypted and decrypted and run once more, and sure enough encrypted is there. The next function is essentially a copy of this one turned into decrypt_dir: everything is the same except the decrypt call. You might rename the parameter to secure_dir as the input, but that's so minor it probably doesn't matter; in fact you could have one single method with a mode of encrypt or decrypt, and I'll leave that to you if you want to do it, because the only thing that really changes is the decrypt line. So I'll get rid of the loose loop, call decrypt_dir just one time to make sure it's working (ignored swapped for encrypted as the input, decrypted as the destination), run those two things, and jump back into the project: there's decrypted, and it decrypts correctly, because we really didn't change much, we just changed how it's wired up.
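Collected into a module, the result of this refactor might look like the following sketch of app/encrypt.py; the _get_fernet helper is my own factoring, the single-mode variant suggested above is left as the exercise it was, and the error message wording is approximate:

    # app/encrypt.py - a sketch of the refactored notebook
    import os
    import pathlib
    from cryptography.fernet import Fernet

    def generate_key():
        # run once; store the result as ENCRYPTION_KEY in .env
        return Fernet.generate_key().decode("utf-8")

    def _get_fernet():
        key = os.environ.get("ENCRYPTION_KEY")
        if not key:
            raise Exception("encryption key is not found")
        return Fernet(key)

    def encrypt_dir(input_dir, output_dir):
        fer = _get_fernet()
        input_dir = pathlib.Path(input_dir)
        output_dir = pathlib.Path(output_dir)
        output_dir.mkdir(exist_ok=True, parents=True)
        for path in input_dir.glob("*"):
            dest = output_dir / path.relative_to(input_dir)
            dest.write_bytes(fer.encrypt(path.read_bytes()))

    def decrypt_dir(input_dir, output_dir):
        fer = _get_fernet()
        input_dir = pathlib.Path(input_dir)
        output_dir = pathlib.Path(output_dir)
        output_dir.mkdir(exist_ok=True, parents=True)
        for path in input_dir.glob("*"):
            dest = output_dir / path.relative_to(input_dir)
            dest.write_bytes(fer.decrypt(path.read_bytes()))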
Back in my Jupyter notebook, I'll get rid of everything that was actually executed, including those generated folders; we don't need any of them, just the functions themselves and the key handling, and I definitely don't need the stray import at the top. Then Kernel > Restart & Run All to clear out the noise, and move everything up, because this will be our final module. Delete the leftover cells (shift-select them on the side, then Edit > Delete Cells). Cool, there's our encryption module. I'll download it as a Python file, reveal it in Finder (or File Explorer if you're on Windows), drag encrypt-module into the project, rename it to simply encrypt.py, and strip out all the Jupyter-related comment noise that always comes along when a file is exported from a notebook. So now our encryption module is ready, and it's time to turn it into a pipeline we can run at any time. One other thought: the folders this will use, the output directory for the secure folder and the decrypted output, may or may not belong in our .gitignore. I'll hold off creating them until we actually use the module, because the pipeline is where we'll declare the folders; the encrypt module itself doesn't care which folders you end up using. I'll also say we probably should have checks that these are actually directories and not files, because it's possible to pass a file, and even the input could be a file, though you'd want a directory since the code loops through everything. You could always take the parent if it is a file; I'll leave that challenge to you. But this is a pretty straightforward way of encrypting an entire directory, especially now that it's a reusable module. All right, now we're going to make our encryption and decryption pipelines for this ignored folder and the encryption folders that come with it. In the pipelines directory, create encrypt.yaml. This takes a few steps: step one is the import, step two passes in our args, and step three actually calls our function. Step one is named pypyr.steps.pyimport, and under its in block we set pyImport to "from app import encrypt". Step two is named pypyr.steps.set, and under its in block we declare to_encrypt as key-value pairs: the first key is input_dir, which is app/ignored. The pipeline runs from the root of our project, so app/ignored makes the most sense.
Then the output_dir, tabbed in alongside it, is app/encrypted; no dash there, that's it. I totally could have multiple items in this list, but I'm going to keep just one. Next, name each step, and the last one is pypyr.steps.py, the step that runs Python code. In it I use !py to call Python: we imported the encrypt module, so we call encrypt.encrypt_dir(...) with the input directory. So how do we actually get at the arguments we set? That comes from foreach: foreach can iterate over to_encrypt, which ends up being a list of key-value pairs, a list that could hold multiple dictionaries. Adding foreach to the step gives you a variable i for each iteration; each dictionary is assigned to i (that's a pypyr thing), and then we pull each value out of i, the input directory first and the output directory second. That's it. Run it with python -m pypyr pipelines/encrypt, hit enter, and... we get the encrypted folder, but in our case it didn't bring anything in. It did create the folder, but I had "ignored" spelled incorrectly in the input directory. Correct that, run it again, and it goes through and produces our encrypted data. Great, and that's a folder I actually want to keep in my GitHub repo, so I'm not adding it to .gitignore. Now copy this entire pipeline into decrypt.yaml, and in the copy change things over to decrypt: the set key becomes to_decrypt, the input is the old output, app/encrypted, and for the output I could actually leave it as ignored, but the problem with that is if I run this locally it risks messing something up, so I'll use app/decrypted as my output directory. Then change the method to decrypt_dir, and update the keys before each loop to match; the output key stays in the same place. Simple enough: run python -m pypyr pipelines/decrypt, hit enter, and sure enough, there it is. This one I certainly do not want to check in, so let's go back to the .gitignore file and add app/decrypted; save that, and now it won't show up. Next, in the app code where we load the cluster bundle, I want a fallback: set the source directory from the ignored location, and if not source_dir.exists(), point it at BASE_DIR / "decrypted" instead, so the app looks for the decrypted data when the ignored originals aren't there. Now, just to finish off this pipeline and make it fully ready, look at the Dockerfile: we already have the entire app folder copied over, so the module and the encrypted data are definitely going to be in the image.
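Assembled, pipelines/encrypt.yaml might look like this sketch; the step and input-key names follow pypyr's documented built-in steps, so treat the exact spelling as an assumption, and decrypt.yaml is identical apart from the directories and calling encrypt.decrypt_dir:

    steps:
      - name: pypyr.steps.pyimport
        in:
          pyImport: |
            from app import encrypt
      - name: pypyr.steps.set
        in:
          set:
            to_encrypt:
              - input_dir: app/ignored
                output_dir: app/encrypted
      - name: pypyr.steps.py
        foreach: '{to_encrypt}'
        in:
          py: encrypt.encrypt_dir(i['input_dir'], i['output_dir'])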
And we already run a pipeline in the Dockerfile, so all we need now is to run the decrypt pipeline the same way. This assumes, of course, that we have the correct environment variables, which right now we do not, but that's our pipeline; now we just need to deploy it, and it's going to be fairly straightforward. So let's update our production environment with the encryption key. Off camera I made a couple of small fixes: I added the ./ prefix to the entry point in the Dockerfile command (you can use an absolute path or ./ depending on the working directory, just a minor change), and in the entry point itself I had never actually referenced the app configuration. Be sure you have both of those changed before you go into production. Now, on my virtual machine: cd into proj, git pull origin main to update the code, then sudo nano .env and add in my encryption key. Once we do that, we have the basis for the entire project to work. Also, every once in a while when you're updating a project, run docker system prune -a --volumes. I'm not going to do it right now, because it will make the build take longer, so I'll answer no; the reason for pruning is that once you start building regularly on a production machine, it starts to eat up disk space, so pruning from time to time is a good idea (it doesn't have to happen every time). Just know that the next docker build -t ai-api -f Dockerfile . after a prune will take quite a bit longer, since it's doing everything from scratch. In my case I had just built, so it went pretty fast overall, but the idea is that if I had messed up the decryption, or the environment variables weren't set correctly, the build would actually error. Next I want to jump into the container itself: docker run -it ai-api /bin/bash. Remember, this is kind of like SSHing into a Docker image. I'll call ./entrypoint.sh, the exact same command at the end of the Dockerfile, just to make sure it runs. It looks like it does: no errors, and what's cool is it's not giving me any errors about AstraDB either, which means my encryption process worked in production as well. Awesome. The thing is, even if I wanted to, I couldn't reach this server from the outside; I really just ran a local instance of that command, with nothing exposed to the world at all. So let's close this out, exit the container, and go back to the virtual machine itself.
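For reference, that recurring update-and-verify routine on the virtual machine, gathered in one place; the proj directory and ai-api tag are the names used in this walkthrough, and the prune step is optional:

    cd proj
    git pull origin main
    sudo nano .env                      # add ENCRYPTION_KEY=<your key>
    docker system prune -a --volumes    # optional: reclaim space from old builds
    docker build -t ai-api -f Dockerfile .
    docker run -it ai-api /bin/bash     # drop into the image to smoke-test
    # then, inside the container:
    ./entrypoint.sh                     # should boot gunicorn with no AstraDB errors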
Now we run the real thing: docker run --restart always, with the environment variable PORT set to 8001 (just to show you the mapping part of things), exposing the outside world's port 80 to it; port 80 is the default non-secure HTTP port. In other words, if you visit the server in a browser with no port, it goes to 80 automatically, whereas 8001 would have to be typed explicitly; if I wanted visitors to use 8001 I'd expose 8001, but I'll leave the external port as 80 and map it to the internal 8001. Much like the internal port 8000 we had running earlier, this is very similar: it maps the external port anyone can reach onto the container's port 8001. Next we add -d for detached mode, which is how --restart always can keep it alive, then the image name, ai-api, and hit enter. With that, it should run and we should be good to go. Let's go back to the project in a browser: I can't actually access it on 8001, so I'll drop that from the URL, but I can access it right there on port 80. What do you know, we are now in production. One big caveat: you should have changed your encrypted file by now. If you were pulling my code up to this point, my encrypted bundle would not work for you; it would not unzip, and you'd hit that same AstraDB error. That's why you'd want to import this into your own personal project and work off of that, and we already covered a bunch of strategies for doing so. But this is pretty cool: it's now in production, it's now working. You could totally test it, and I definitely recommend that you do; I'll test it just by hitting the dataset endpoint, and what do you know, it's all there. So this is now a production version of our AI as an API, running and working.
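To recap, that exposure step as a single command looks something like this, with the port choices as described (external 80 mapped to the container's 8001):

    docker run --restart always -e PORT=8001 -p 80:8001 -d ai-api

The -d flag detaches it so it keeps running in the background, and --restart always brings the container back up after restarts or crashes.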
So the biggest question is: that was a lot of manual steps, how do I make it a bit more automated? On our production machine we ran a bunch of manual commands, and that's not ideal. Part of the reason I wanted to work with pipelines in the first place was automation; they push us toward making this as automated as possible, especially the encryption step, and the model-download step was automated too. This could certainly be improved quite a bit, but the principle is: use automation whenever possible, yet don't automate something before you know how to do it manually, so that if your automation fails you can work backwards. All right, let's talk about some considerations for updating our code and our project. What we'd do is clone the virtual machine, make changes to the clone, and then rotate things. What do I mean by that? Jump into Linode, click the menu next to the instance, and hit Clone. I could absolutely make an image out of it instead, like we saw before; that might be an effective way too. With the new clone, just power it on and give it some time to boot. We'd also add a private IP address to each of these Linodes (Add Private IP on the instance); in my case I had already allocated one while testing, which you can see down here. The reason for the private IP is so we can put a load balancer in front, which I'll show you in just a moment. The cloning process and boot take maybe five to ten minutes, which is why you didn't see me do it live, but I can copy the new machine's IP address; right now it's not fully running, not even accessible, so give it some time to come up completely. Once it does, it serves the exact same information, the exact same data: go into the dataset endpoint and it works identically to the other one, with no extra work from me. The reason I show you this is that I can now log into either machine, make some changes, and eventually even swap the IP addresses, but that's not efficient. What would be more efficient and better in the long run is an actual load balancer. Under NodeBalancers in Linode is where you create one (you don't have to do this, just follow along with me): I'll call it my-lb, skip the tags for now, select the exact same region I have for my nodes, and scroll down to the backend nodes. Add node-1 and grab its IP address; notice these are the private IP addresses, which is why I needed private IPs on both machines, something I only referenced earlier in the video. Then node-2 the same way, and now we've got two different nodes with two different private addresses (the public IPs don't really matter much here), and the makings of a load balancer. Hit Create. It takes a little time to boot up, but the idea is we now have yet another IP address, and once the balancer is fully up it can receive traffic and redistribute it to either of these nodes. We could say a lot more about the instances themselves and about load balancing, but go to the dataset endpoint on that single balancer IP and, for all we care, it's working. Behind the scenes the balancer distributes traffic across the machines, and that does a couple of things: one, no single machine handles everything, and two, it lets me rotate the machines much more easily. If one of them goes down, say because we're updating it, the balancer routes traffic to the one it can actually reach. The machine that's shutting down can be fixed, improved, all those good things, and our service isn't interrupted. There might be a brief lag while the balancer propagates those changes, but with one node virtually shut down the balancer keeps working, so the service essentially stays up with no major problems. Maybe. Hopefully. Assuming we did everything right. But at least that's a method for how you could do this, and of course afterwards you could delete the old instance entirely and be good.
You could keep working off of one instance from there; the setup does add a little bit of cost, but that cost buys huge savings in uptime. So that's a quick and easy way to do it. Okay, maybe not that easy, maybe a little complicated still, and I do recommend looking into load balancers further if you're interested. But now we have the ability to start automating all of this. That's not something I'm going to do in this series, but let me know in the comments if you want to see it. The idea behind the automation would be to use GitHub Actions to run workflows, very similar to the pipelines we wrote with pypyr, together with a tool like Terraform to provision our infrastructure: turning on instances, maybe even adding a load balancer. It would also let us write scripts for everything we did by hand, pulling the code, installing Docker, docker build, docker run, cloning our repo; all of that can be handled with a combination of GitHub Actions and Terraform. I think those two are an amazing combination with a service like Linode to actually run things. The other really cool part about Actions, which also better secures your applications, is that you can put the environment variable values in GitHub Actions secrets, so you're not handling them manually at all; we don't want to manually do anything, and especially not hard-code things like secret keys. That's really the major next step in making this a more automated deployment pipeline. But even without one, we now have strategies for updating any given instance with little to no downtime. One of the remaining challenges, of course, would be to put this IP address behind its own domain name. The way I'd actually treat this AI model service is as a microservice inside a bigger project, one that controls the domain and all of that, potentially adding other layers like nginx to help route and manage this traffic. But that's really it: we've got a production application, and everything about it is running and working, I think, pretty reliably. From here it's a question of how reliable you need updates to be. If you're just starting out and getting your feet wet putting something like this into production, I don't think you have to worry about downtime; downtime starts to matter when other services rely on this particular service. For your own edification, though, to get better at all of this, learn how to build the automated pipelines, because doing things manually is how you end up causing errors and breaking things. Then again, that's how I learned the most myself: I broke something that was in production, thought "oh my goodness, I need to get this back up and live," worked super hard, and learned a lot in a short amount of time. You might find yourself coming back to this video to do exactly that. Hopefully it illustrates the things you need to consider when going into production, and shows you how that final deployment is done. So if you got this far and were able to do all of it, including the load balancer part, good for you.
That's amazing. This is not something many people know how to do end to end: the machine learning part, the web application part, encryption, and then production. I'm proud of you if you got this far, and let me know if you did; I want to check out the things you're building if you have something cool to share. All right, now that we've done all of these updates, I recommend you shut everything down if you haven't already: delete that NodeBalancer, and power off each of the nodes. You have all the code to get these things back online if you need to, and you definitely do not want a forty-dollar-a-month setup running that you're not using, so be sure to do that too. Hey there, thanks so much for watching; hopefully you got a lot out of this one. My challenge to you from here is to use a new dataset, ideally one you created yourself. To do that, you'll probably want to scrape data on a schedule; I have a whole series dedicated to that, so check the link in the description. The idea is that once you have your own dataset and can make your own AI models, you'll be able to launch your own REST API service and potentially provide a lot of value to the world, because this ground of AI and machine learning is so, so fertile. So I recommend you try it out and really push yourself to the next level of what's possible, or even what you think is possible, because I totally believe in you. Thanks again for watching, and I look forward to seeing you next time.
Info
Channel: CodingEntrepreneurs
Views: 13,039
Keywords: install django with pip, virtualenv, web application development, installing django on mac, pip, django, beginners tutorial, install python, python3.8, python django, web frameworks, windows python, mac python, virtual environments, beginner python, python tutorial, djangocfe2021, python, django3.2, web apps, modern software development, web scraping, cassandra, nosql, astradb, selenium, celery, jupyter, mlops, machine learning ops, aiops, devops, linode, production
Id: nTdMjFcK3SM
Length: 111min 41sec (6701 seconds)
Published: Wed Oct 20 2021