Deploying a MERN Application (with Docker, Atlas, and Digital Ocean!)

Captions
Check, check, one two, mic check, one two... and let me just check that my redirect works. Cool. All right, so the plan is to take a basic MERN stack application, and I'm going to go through three different portions. First I'm going to dockerize the application, including the API server, which is built with Express. Then I'm going to dockerize the React client and set up Docker Compose to handle the coordination between those containers when I run them. I'll run a MongoDB image so that I don't have to run the database locally on my system and can run it in a container as well, and then I'll set up hot reloading so that we have a nice development environment where we can modify the code and see those changes reflected inside the container.

So the very first thing I'm going to do is run the application as is. I ran a version of this in a video a while back on my channel: that was the Docker Compose tutorial, where in six minutes I very rapidly showcased how to set up Docker Compose. This is going to be a walkthrough, end to end, of getting started with this application, dockerizing it, and then deploying it onto a DigitalOcean VM. That'll be the full process, and I'll jump right into it. The first thing I'm going to do is start the database. I installed this database using Homebrew, so I'll just do brew services start and then the version that I'm running. We've got it successfully running. We can do brew services list, and that will show the different services that are running; here we see Homebrew, sorry, MongoDB, running on port 27017, which is the default port. For the back end, I'm located within this server directory, and I'm going to call the yarn start script, which will just run node index.js. So it'll be yarn start, and now we've got our API running on port 5000. We can check that it's running by just hitting localhost
on port 5000, and we see this hello world response. Cool. Then we need to run the client. The client similarly uses that yarn start method, which calls this script here: react-scripts start. This was all set up when you ran create-react-app at the very beginning; you get all these different scripts you can run. react-scripts start will start a development version of the server and then open up a browser for me to view it. When we actually deploy this, we're going to use yarn build, which will instead produce a static set of JavaScript, CSS, and HTML files that we can deploy, but for development purposes we'll use yarn start. That's running on port 3000, and we see this super simple, basic MERN stack application. If we go to list movies, there are no movies in the database, so nothing shows up, but we can go to create movie and create a test movie here. We're going to rate it 10 out of 10, and the time is going to be noon. Click add movie, and it's added successfully, so if I go back to list movies it shows up here. I can then update it and it will be reflected, so we've got full CRUD functionality here.

Oh hey, hey Immigrant Programmers, welcome to the stream. Yeah, I was recording this video and then I just decided to do it live, so welcome to the stream. It's going to be a long one; I've got a lot of things to do here, as you can see from my to-do list, but we are just on number one, so I'm happy to answer questions throughout as well, as long as I don't get too distracted. So that's the basic application. I'm going to go ahead and stop it so that we can start jumping into some of these other steps: I'll kill the server, kill the client, and run brew services stop mongodb. Okay, that is good; we've got everything stopped. The very first thing I'm going to do is dockerize the Express-based server, and that is in this directory. I'm going to go ahead and exit one of these. Cool. In order to dockerize
the server, I'm going to create a Dockerfile in here. Okay, that should be good. I'm going to use the Node version 14 base image. Whenever you're looking for a base image, I usually try to find the official one, so I'll just search for "node.js dockerfile" and hopefully we'll get a Docker Hub result. Here we go: here's the official page for the Node.js Docker image, and we have all these different tags. Some of them are based on Alpine Linux, which is a slimmed-down version of Linux; some are based on Debian, and those have the Debian version in the tag. We're going to use the Node 14 slim image, so 14-slim, and that is this particular image. That should be just fine for our purposes. I'm using the slim image because it has fewer dependencies installed; my application is pretty simple, so I'd much rather have that smaller image size than have a bunch of dependencies installed out of the box. So we'll do FROM and then the name of that image, node:14-slim, and that should be a good starting point.

Now the very first thing I'm going to do is set a working directory within this container image; this is just the convention for where the source code is going to live. The command for that is WORKDIR, and I'm going to use /usr/src/app. Now that I've run that command, we've set the working directory where I'm going to put my source code and where I'm going to run commands from. Then I need to copy in my package.json and yarn.lock files. I'll use the COPY command for that: it'll be package.json, and I'll copy it to ./. Because I set the working directory, that now corresponds to this location within the container. I'll then also copy the yarn.lock file to the same location. The reason that I'm copying these files first, and not just copying in all the source code, is that Docker has a pretty smart caching system where each of these commands builds upon the
previous layer, so any modification that impacts one line within this Dockerfile forces a rerun of everything after it. By copying these files first and then running yarn install, I'll be able to modify the source code, which I'm going to copy in here in a minute, without having to reinstall all my dependencies every time I build the image. So after line 8 I will have installed all my dependencies, and I'll then copy in the rest of my source code with COPY, current directory to current directory.

There is one additional modification that I want to make, though, and that is to add a .dockerignore file. The reason I'm doing this is that I don't actually want to copy in my local node_modules directory, because that has all of the macOS-specific modules installed. What I would much rather do is copy in everything except for those, and then install them as I just did on the previous line. Also, if you're in the stream, let me know how my audio is; I think it's coming in a little loud, so I'm going to turn it down just a touch. Give me a thumbs up if you can hear me and the audio levels are good. Okay, so within this .dockerignore file I'm just going to put node_modules, and that tells Docker not to copy in the contents of this directory when it's building the image. That should be good. I'm going to run my API server on port 5000, so EXPOSE 5000, and then finally I'm going to set the command that gets run when I run this container: that is going to be the yarn start command that I used before to actually start the server. That should be all I need to do for that particular Dockerfile. I'm going to then create a Makefile here; I use Makefiles to store off various commands that I'll need later. Here I'm going to store off a build command: build will be docker build, and I'm going to tag it as, like, api-server or something. Yeah, I'll just call it api-server, and then I'll use
the period here to indicate the current directory. So if I do make build, it will run that command and tag the image with the api-server tag. Awesome, that should be good. Now I'm going to move on to the client, and it's actually going to be quite similar, so I'm going to copy this. The client Dockerfile is going to be identical, with the one modification that it's going to be running on port 3000. I'm trying to think if there are any other modifications to make... I think it should be fine; let's just leave that for now. Again, yarn start is the command that we want. I do need the .dockerignore file, and again it's going to contain node_modules. I also want the same build target, except this time I will call the image react-app. So now I have a client image and a server image built.

The final piece that I need to get into Docker is the database itself, and because I don't need to make any modifications to the database, I'm just going to use the public Docker Hub image. This is the official version of MongoDB from Docker Hub; I'm going to use the 4.4 version, which I think should be fine, so I'll use 4.4-bionic as my tag. Now I'm going to use Docker Compose to coordinate all of these images working together. Given my current state, I could say docker run and pass all the different options required, then docker run over here for the server as well, and I could then run the MongoDB image, but coordinating all the different networking and volumes and everything between these images would require a ton of different commands. So instead I'm going to use Docker Compose. I'll put a file at the top level, and this will be compose.yaml... or rather, I'll call it docker-compose.yaml. Okay. This is kind of the contents of that other video, so if you've already seen that, sorry for the duplication, but we're just going to specify the version of the Docker Compose API that we're going to use, and that's version 3.
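Before going further with the compose file, here is roughly what the Dockerfile assembled above comes out to. This is a sketch based on the narration; the client version differs only in using port 3000, and the accompanying .dockerignore file contains just the line node_modules:

```dockerfile
# Official Node 14 slim base image (fewer preinstalled dependencies)
FROM node:14-slim

# Conventional location for application source inside the container
WORKDIR /usr/src/app

# Copy dependency manifests first so Docker's layer cache can skip
# reinstalling dependencies when only application source changes
COPY package.json ./
COPY yarn.lock ./
RUN yarn install

# Copy in the rest of the source (node_modules is excluded via .dockerignore)
COPY . .

# API server listens on 5000 (the client Dockerfile uses 3000 instead)
EXPOSE 5000

CMD ["yarn", "start"]
```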
Version 3 is the latest major version. We then need to specify the three different services that are included in our application, so under services the first one will be react-app, the second one will be api-server, and the third will be the database; we then will have networks and volumes as well. Let's go through and define what these actually are. For the react-app, I will use the image tag, and that will be react-app, like we tagged it before. We can also specify a build directory, and that will be client; if you give Docker Compose a build directory, you don't have to pre-build the image ahead of time. I can just specify that, and then it will know that the client lives within the client directory and can go build it on the fly. We will do stdin_open: true, and this was something I discovered when I was first setting this project up: by default, when you run the create-react-app development server on the particular version of Linux that my Docker image is using, it would immediately exit, and setting stdin_open to true is a workaround for that particular issue. We want to set up the ports for this: as I said, it's on port 3000, and we want that mapped to port 3000 on my host system, onto my MacBook, 3000.
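For reference, the docker-compose.yaml being assembled here ends up looking roughly like the following. This is a sketch pieced together from the narration; the exact service and volume names (mongo, mongo-data) are assumptions, and the networks, database service, and volume are the parts defined next:

```yaml
version: "3"
services:
  react-app:
    image: react-app
    build: ./client
    stdin_open: true          # workaround: CRA dev server exits immediately without it
    ports:
      - "3000:3000"
    networks:
      - mern-app
  api-server:
    image: api-server
    build: ./server
    ports:
      - "5000:5000"
    networks:
      - mern-app
    depends_on:
      - mongo                 # the DB must be up before the API tries to connect
  mongo:
    image: mongo:4.4-bionic
    ports:
      - "27017:27017"
    networks:
      - mern-app
    volumes:
      - mongo-data:/data/db   # persist DB files outside the container lifecycle
networks:
  mern-app:
    driver: bridge
volumes:
  mongo-data:
    driver: local
```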
Great, that should be fine. Then networks: I'm going to create a network here in a second called mern-app, and I'll create that down here. By adding all three of these services to the same network, they'll be able to communicate with one another. So mern-app, and then I'll just use the default driver, which is the bridge driver. That should be good, and that should be fine for that section for now. The api-server is going to be similar: I'll specify the image, api-server, which is what I called it; again, we can specify the build directory, this time ./server. We want to use the ports directive, and this one is running on port 5000. Great, and I'll attach it to that same network, mern-app. Then, because the MongoDB database has to be running before you start the API server, since it's going to try to connect upon startup, I will specify that this depends_on the database service, which we're going to define here in a second.

Okay, and then for the database service we will specify the image, and the tag that we wanted was 4.4-bionic. The port is going to be 27017, which is the default port for MongoDB, so 27017:27017. Great; attach it to the same network, mern-app, and if I have any typos, you should call them out in the chat; you can be my remote debugging system. The final thing that I want to do for the database is set up a data volume. This is going to be where the database actually stores its data, since by default it would be contained within the local file system of that container, which is isolated from my host file system, and if I removed the container it would disappear. So I will create a data volume called mongo-data, and I'll use the default driver, which is just the local driver. Then we need to mount that in, so we specify the name of the volume here to start, and then, within the container, the path that it will be mounted into is going to be the default path where MongoDB normally stores its data, which is /data/db. So, assuming I didn't
mess anything up here, which, who knows if I did, we should be okay to build and run our application. There is one modification that I know I need to make to my API server, and that is the connection string to the database itself. When I was running it locally on my MacBook, it would connect to localhost, but now that both of these things are inside containers, I need to change this hostname to match the service name where the database is running. Within this docker-compose file I named my service, and that's where I get the value that I will put in here. So I'm going to go ahead and build my API server and my React app again, just for good measure, and now I'll go up a directory. I should just be able to do docker compose up, but I'm going to create a Makefile because I'll be using it later. This will be run-dev, which will be docker compose up. Let's see, fingers crossed, whether we get any errors... it looks like we might have succeeded on the first try. Let's see: if I go back to port 3000 and refresh the page, we get no movies, because it's a new database with no data in it, but if we create a movie, "dockerized test movie" (this one's only rated 5 out of 10, and we're going to say it's one o'clock), hopefully we insert it successfully. Awesome, cool.

So that is the process: we created Docker images for our client and our server, we got a publicly available MongoDB image to use for the database, and then we set up our docker-compose file that contains all of the necessary information to tell Docker how to spin up those containers, how they need to talk to one another on this network, and where to store the data. There's one more additional step that I want to do in this first part before I move on to part two, and that is to mount our source code in. I'm going to kill this here. Currently, if I made a modification to the application code, it would not show up in my website in the browser until I rebuilt the image. We can get around this,
which is very useful from a development perspective, by mounting the directory containing our code into the container itself. So I'm going to add a volume to each of these: this will be volumes, and then I will specify that for the react-app I want to mount in the client directory, and I'm going to mount it to the path within the container where I stored the source, which is /usr/src/app. I do need to make one additional modification: because this client directory also contains the node_modules directory, which we ignored when we built our image, we need to take account of that when we mount this volume in. The way we can get around it is to add an additional mount at that path with no local path specified. So, instead of having anything on the left side of the colon, I'm just going to directly say /usr/src/app/node_modules; this basically says, don't take anything from my local system at this path, and that will enable the container to use the version of the node_modules that was installed when we ran our docker build command. We want to do the same thing on the server side, which we can do here, and I think that should be good to go.

So I'm going to once again make run-dev... not sure why that's taking a long time, but it looks like it is working. So if I refresh the page here, list movies, one two three... it looks like I broke something, because now I'm not getting the movies and I'm not able to create a movie, so let's try to figure out what I broke, and we'll get to see some live debugging. "react-scripts not found"... ah, on my api-server I mounted in the client directory rather than the server directory, because I copied and pasted and did not modify. So let me stop it and run it again; now let's see what we've got. Okay, now we have our test movie from before, so we seem to have fixed it. But we should be able to test whether the application
source code that we mounted in can be modified on the fly now. Let's change something about our header: in our client we have "simple MERN app", and let's change it by adding an exclamation point, because we're all excited. I edit it, save it, and then it hot-reloads, and we see that reflected. That's the end of part one, so let's go add some check marks just to feel good about ourselves. If you're in the chat and have any questions about what I did in part one, I'd be happy to pause for a second and answer things before I dive into part two... There are no questions; I guess that means everything is understood and we can proceed.

So we've got this development-specific Docker Compose setup, and there are a number of things that we're going to want to change before we deploy it. First off, our database has no password on it, and that is very bad. We have our source code mounted into the container; we never want to mount in executable code that is going to get run. Instead, we want to build all of that into a static container image, and whenever we make a change, we would go off and build it within a continuous integration system and deploy that static image. We are also hard-coding some things within our source: for example, when we had to modify the host that the database was connecting on, we went and modified it here. This would be much better pulled out as an environment variable, so we'll do that. So there are a number of things that we're going to do. The first is to create separate development and production Dockerfiles, so I'll start by copying this, renaming one of them to be -dev and one of them to be -production. That should be fine. One thing that we want to make sure to do in our production version... we no longer need these build directories, so we're going to get rid of those, because we will always have the image available for us. We're going to add the restart
unless-stopped policy. Let me make this full screen. What this is going to do is, if the application exits for whatever reason, Docker is going to try to restart it unless we explicitly told Docker to stop. This provides us a little bit of robustness: if some quirky bug comes along and causes our app to crash, Docker will, in the background, try to restart it for us, which is a good thing. I want that on both of these. I'm actually going to remove the database service entirely, I think, because I'm going to move my database to MongoDB Atlas, which is a database-as-a-service provided by MongoDB, so I'm just going to strip this out entirely. Okay. Like I said, I'm going to remove these source code mounts, because we don't want to mount our source code into the image, and that should be about all we need to do, at least for getting started. I'm going to change my image names here to add -production, so that I don't accidentally run my development image in production; I'll tag them as such. Rather than running on 3000, I'm going to run on port 80 and port 443; these are the conventional ports, 80 for HTTP and 443 for HTTPS.

Okay, so I've separated that out. I'm going to add some new make targets in my Makefile: this will be build-production, docker build -t with the production tag, and then I'm going to pass the -f flag to point it at a specific file, so that'll be Dockerfile.production. I want to copy that; we're going to make some more modifications to it in a bit. We're actually going to use a multi-stage Dockerfile, where we build our React app and then copy those files into a Caddy web server, which is a web server that will set up HTTPS automatically for us. I will hold off on that for now, but it should be fine: yarn build will be the build command, and then we're going to add, here in a minute, the copy into the secondary Caddy stage; that's still a to-do. Similarly, on the server side, I'm trying to
think whether there are any modifications... I think my server side should be fine. I do need to extract the environment variables out into config files. For the server side I'll have a config folder, and this will have dev.env, local.env, and production.env, and these are going to contain things such as that database URI. So here, this was here... all right, and that is what it is when I run it in my development environment. For my local environment, I'm actually going to move that database to Atlas, as I said, so I'm going to go and do that now. I actually went ahead and did it ahead of time, but I can show you what I did. I just have a free cluster running on Atlas; they allow you to have a very small cluster set up for free, and I'm actually sharing this cluster between two applications, one of which is the application that I added DevOps practices to for the Traversy Media channel. It is running here on this database, and then I added a new database to the same cluster: if I click here, we can see under collections, side by side, the storybooks database from that other app and the cinema database for this particular app. Off screen, I'm going to copy the password for this into my environment variables file so that I can connect to it, so I'm going to pull this off screen.

Okay, cool. Now I have my environment file set up with the necessary information to connect to the database. I then need to modify my connection string, which, let me close these, lives here in this database index file, and I'm going to read it in from an environment variable now: that'll be process.env.URI. The reason that this works is that I'm already using the dotenv package here on my server, which will load in the appropriate environment file that I pass to it at runtime and populate the URI environment variable from there, and then I
can read it out with that process.env syntax. Jatin Mishra asks whether I have a Udemy course: I do not have any courses right now; all my content is freely available on YouTube. I may do some courses in the future, but for now it's just YouTube-based. Okay, so back here: we've pulled our URI out into our different environment variable files, and we have the database moved over to Atlas, as I was showing you. It's pretty straightforward to set up within the GUI; you can also set it up with an infrastructure-as-code tool like Terraform if you want, but for learning purposes I think it's fine to just go and click in the GUI and create it, although the best practice in terms of a DevOps flow would be to create it using something like Terraform.

What do I have next on my to-do list? We have: separate Docker Compose files, move DB to Atlas, update client to build production version. So I'll do that next. Also, within here, this is going to be the same; I'll do a make build-local, or build-dev. The first thing: let's go into the client directory and then build it. By using this syntax I can recursively call make: I'm going to move from my top-level directory into the client directory, and from there I'll run my make build target. Yeah, I definitely plan to do a deep dive on Terraform, Eugene; that's definitely a topic I'd like to tackle sometime early next year if I have the time. In addition to building the client, we want to cd into server and then make build, and that will produce our two images. Did I change this? No, let's just... make build, let's make this make build-dev. So now in this Makefile, I want... this will be make build, that's fine, my top-level one; this should call make build-dev. And now let's add one for make build-production. Okay. So the versions I have here are going to be: dev, which is going to be entirely running those development-based Dockerfiles; I'm then going to have a
local version, and my local version is going to be running my Caddy web server, so it's not going to be running that development version of the React client; it's going to run the Caddy web server, and it's going to be using the Atlas database. Then, separate from that, I'll have my production version, which will similarly use the Caddy setup. The only real difference is going to be whether or not I mount my source code into the container itself: for production I want everything baked into the image, while for dev I'm going to set it up to hot-reload and mount the source. MsWebDevGirl, hello, welcome. And then this needs to be production, and this is the version that'll actually run on my DigitalOcean virtual machine that I'll set up here shortly; this one will be local.

Okay, so the next step is going to be to take my client, build the React app itself into the static set of HTML, JavaScript, and CSS files, and then copy those into a Caddy web server. This Dockerfile.production is going to start very much the same way it did before, but then we're going to have a secondary stage; this is a really useful technique when you're working with Docker. This will be the first stage, and that's where all my dependencies live and how I build my app, and then here is the second stage, where I'm going to start from a different base image. For this one I'm going to use Caddy; let's check out the Caddy Docker Hub page. Caddy is a really interesting web server that has a ton of great features: you can set it up as a reverse proxy, you can set it up as a file server, it's pretty simple to get started with, and the real killer feature is that it sets up HTTPS for you. As long as you configure everything correctly, it will set up HTTPS automatically. I'm going to use this tag, which should be fine. Then I want to pass in a Caddyfile, and I'm going to do that dynamically so that I can have different
Caddyfiles depending on whether I'm running locally or in production, because I need to set those up slightly differently due to how HTTPS is configured. Locally I'm not going to set up HTTPS; it'll just be running on localhost on port 80, but when I actually deploy it to the VM, it will be set up with a certificate. So I'm going to set an ARG: this is an argument that is only available during build time. If I use the ARG command, it's only available at build time; if I use the ENV command, it'll be available within the container itself. So here I'm going to have this CADDYFILE argument, and then I'm going to copy and reference it; the way we reference it is dollar sign, curly braces, CADDYFILE. Whatever I pass in at build time there is going to be a path to my Caddyfile. The Caddyfile is how you configure Caddy itself, and we want to copy it into /etc/caddy/Caddyfile, which is the default location where Caddy looks for its configuration file. So now I need to actually create those files. The first one will be Caddyfile.local, and in this one we'll do a few things: we'll serve http://localhost on port 80.
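A minimal sketch of the Caddyfile.local being described here, using Caddy v2 syntax; the exact directive layout is assumed from the narration:

```
http://localhost:80 {
    # Static site files live in /srv inside the container
    root * /srv

    route {
        # Anything under /api goes straight to the Express back end,
        # addressed by its docker-compose service name
        reverse_proxy /api* api-server:5000

        # Everything else falls back to the React app's index.html
        try_files {path} /index.html
    }

    file_server
}
```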
So that is the address I'm going to serve on locally. Within here I'm going to set the root to /srv, which is where I'm going to have my website files hosted in the container, and then I'm going to set up a route. This is going to allow me to configure Caddy as a reverse proxy, so that when I hit localhost with no additional URL path, it takes me to my React app, but if I go to /api, it routes the traffic directly to my Express back end. To do that, we can use route, and then the reverse_proxy directive with /api*: this is going to capture any path that starts with /api, and I'm going to route it to my api-server container; api-server is what I named it within the docker-compose file, and it's on port 5000. Then try_files {path} /index.html: if we hit a different path that is not /api, this tells Caddy where to try to look for those files, and it falls back to the index.html in the root /srv directory. That should give us what we want in terms of serving our React app from that base index.html. Finally, we need to specify the file_server directive.

Okay, now we need to set up our Makefile to actually use this, so I'm going to do that now. We're here, and we're going to separate this over a few lines: this will be react-app, and let's actually tag it with the local version here. That's good. It's going to use the same Dockerfile: since we added that argument, we'll be able to use the same Dockerfile for both of these use cases, rather than needing a separate one. We then need to specify two things. We need to specify the build argument CADDYFILE, which is going to get substituted into our Dockerfile here; we're basically specifying which of our Caddyfiles... or, we're going to create a
Caddyfile.production here in a minute, but we're specifying which one we want to use, and that will be Caddyfile.local. Then we want one more build arg, and that is going to be the base URL. I haven't modified our source code here yet, but the base URL is going to specify where the API server lives, so for our local version it's going to be http://localhost:5000/api. That should be fine; save that.

I'm just going to check on the chat here. "Thank you so much for this tutorial, facing a challenge to move from dev into prod." Yeah, I think there are a lot of considerations that need to be taken into account when you're taking a system and getting it ready for production, so hopefully this can give you some good tips along the way. MsWebDevGirl says she's building a React portfolio website; it's kind of simplistic, and everyone wants to see React websites even though the project doesn't really need React. Yeah, I think using React just so you have more familiarity with it, and can talk about which parts you like and which parts you don't, is a valuable thing, so I don't see any problem with that. Thoughts on Pulumi versus Terraform? I haven't used Pulumi personally; I really like Terraform. I have heard Pulumi is nice to work with, and I know that it uses more general-purpose languages like Python, whereas with Terraform you have to learn their specific HashiCorp Configuration Language, HCL. So I don't have any experience with Pulumi, but I do like Terraform a lot.

Cool. So I added this option here, and that is going to live in my API setup: here in my client source, api/index, I had hard-coded this base URL, but now that I set it in my docker build command, I can use process.env.REACT_APP_BASE_URL, which is what I'm going to call it. I then need to set that within my Dockerfile here before I build: I'm going to do ARG for the base URL and set REACT_APP_BASE_URL from it, and that is going to be whatever I set it to here, and then on my
local host so i need to set it as an environment variable here in my production one but on my local docker file i can just hard code that here as localhost 5000 okay so now we are building our image we're copying in that caddy file dynamically we then need to finish off our docker file here and the way that we do that is we're going to copy our files from our first stage so this will be from builder and i'm going to name this top stage as builder so i'm naming this first stage and then i can copy from it so this is how i reference that down here below and i'm going to copy from user source app build so slash usr slash src slash app so that's that work directory and then slash build is where the files get populated when we run the yarn build command there in line 16. and i want to copy that to the slash srv directory the serve directory finally i'm going to expose 80 and capitalize that expose 80 and i'll expose 443 even though we're not using it locally that's fine so this dockerfile should now be usable for local or for production and eugene asks if it's advisable to use docker compose in production so the main reason not to use docker compose in production is that one of the primary benefits of using docker and containers in general is the ability to run many different instances of them and very quickly bring them up and tear them down and scale them out and so using a docker compose setup essentially locks you into having a specific one copy or however you've defined your compose file of each service and so from that perspective you would probably not want to use docker compose i'm using it here as an example but you would probably want to use something like docker swarm or kubernetes if you get to a larger more complex system where you need to have lots of different replicas of your applications and services and scale them independently of each other so hopefully that answers the question here i'm using it just to showcase how
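putting those pieces together, the two-stage dockerfile being assembled might look something like this; the image tags and the /usr/src/app workdir are assumptions:

```
# Dockerfile -- sketch of the two-stage build described above
FROM node:lts AS builder
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
# BASE_URL is passed with --build-arg and must be visible to yarn build
ARG BASE_URL
ENV REACT_APP_BASE_URL=${BASE_URL}
RUN yarn build

FROM caddy:2-alpine
# CADDYFILE selects Caddyfile.local or Caddyfile.production per target
ARG CADDYFILE
COPY ${CADDYFILE} /etc/caddy/Caddyfile
# the static build output becomes the /srv root the caddyfile serves
COPY --from=builder /usr/src/app/build /srv
EXPOSE 80
EXPOSE 443
```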
do we take our app bundle it up as a container use some best practices around configuring things how we're going to have separate development versus production versus local run configurations and then get it running live on the server but if we did want to make this a more robust thing rather than use docker compose i would likely set up my ci pipeline my continuous integration continuous delivery pipeline to deploy the docker containers individually this would also enable you to make a modification to either just the client or just the server and not have them fully coupled like this you want to try to decouple things and so probably not using docker compose in production but here i'm just using it so that i can get everything spun up and showcase it running good question though okay i then need to add a production caddy file but i can do that in a minute i'm gonna look in here on my to-do list okay yeah i'll go ahead and add my production caddy file as well so i'll copy this one rename it production and so this one is going to be similar to before but instead i'm going to deploy it to let's say mern.mysuperawesomesite.com and we'll do that on port 443 i need to specify the email address so i actually own mysuperawesomesite.com so that's why i can do this tls with my email address that i use to purchase that so when it goes off and tries to provision that certificate it needs to confirm my identity in order to be able to do that so we'll do that the other portions of this we still want that root set to slash srv our route now looks okay yeah so i think everything there should be fine we just had to add that one line and now we need to update this accordingly so i'll do that so now we're going to tag it production and now we want these build args so they're going to be slightly different so now instead of the local caddy file here we'll do production we're tagging it correctly our api now our base url is actually going to
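the production variant being copied and edited here would then differ only in the site address and the tls line; the email is a placeholder for whatever address the domain was registered with:

```
# Caddyfile.production -- sketch; the email address is a placeholder
mern.mysuperawesomesite.com:443 {
	root * /srv
	tls you@example.com

	route {
		reverse_proxy /api* api_server:5000
		try_files {path} /index.html
	}

	file_server
}
```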
be https colon slash slash mern.mysuperawesomesite.com slash api okay that's good and so now in our top level make file we'll have build local build production we do need to modify our compose file to be able to use those but i'm actually going to take a quick break to grab some water and i will be back in just a couple of minutes so be back in a few all right i am back i see a little bit of noise coming on my mic so let me check something okay let's jump back into it i had just modified my docker files and my make targets to be able to run these new versions let's get my caddy file based deployment running locally and so in order to do that i'm going to do a couple of things yeah so my docker compose file now i don't have my mongodb database because that's running in atlas that should be fine i do need to pass in my environment files to my api server and so that's going to contain the connection string to the atlas db instance and so here i'll do env file and that will be dot slash server slash config slash and we will pass the local versus production choice as an environment variable env dot env when we actually run this so like here when we're doing run production we'll say env equals local and env equals production and so that will allow the single docker compose file to select between my different environment files so that should be good i also no longer need this because i'm not using that development react server anymore so let's go ahead and see if that will run oh i do need to pass it the particular file that i'm using though so now that i've renamed my docker compose file i need to specify the f this will be docker compose dash production.yaml similarly here and similarly here but this one will be dev okay so i'm gonna go ahead and run my make build local target and so that's going off and building the local version of my react client so that's going to build the static files put them in the caddy
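the env file selection being wired up here might look like the following compose sketch, where ENV is set to local or production before invoking docker-compose; the service names, tags, and ports are assumptions:

```yaml
# docker-compose-production.yaml -- sketch
services:
  api_server:
    image: api-server:latest
    # ${ENV}.env resolves to local.env or production.env at run time
    env_file:
      - ./server/config/${ENV}.env
    ports:
      - "5000:5000"
  client:
    image: react-app:${ENV}
    ports:
      - "80:80"
      - "443:443"
```

a make target could then invoke it as something like `ENV=local docker-compose -f docker-compose-production.yaml up`.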
file using the caddyfile.local variant so here we see it creating the production build we grab that new docker file the caddy image from docker hub we built our api server okay that's good now let's do make run local i fully expect to get errors here but we shall see react app okay let's do localhost 80. so the client appears to be working correctly if i do create movie one two three add movie i do not get a success so let's look at our developer tools here let's see 404 okay so what did we get when we spun up the server server running trace deprecation docker compose production environment local okay ps ah so i think i forgot to change the image tag within my production yeah so here i'm still using my same tag here and here i think that i need to then also include this environment variable so that when i run my docker run command it knows to use the version that i tagged as local and here i'm tagging that as local and production base url localhost 5000 okay so make build local evan dewey wow this is some complex stuff you're a very smart guy thanks evan really appreciate it so we've got the new version successfully built api server latest i also need to run that when i do here okay the front end seemed fine before but it was the back end that didn't seem like it was connecting properly to the database okay so localhost 80.
so it's loading properly but when i click on the path it actually removes my port specification on localhost and so i wonder if that has to do with my caddy file stripping off that port 80 there okay make run local so i'm here so why does that go directly to there let's see if i hit the api directly api slash movies so the api appears to be working correctly i'm getting back the json object from the database but there's something incorrect about my caddy file that is causing it to mess up that's how it was docker file expose 80 expose 443 true i do want to mount in a volume for my caddy data and caddy config but i don't think that is the issue all right because i go here get my application create movie test one two three the insert works successfully if i go to just list i get this error if i go to api movies test one two three okay so it looks like i'm not actually inserting it successfully because nothing yeah the version that i just entered is not showing up the test one two three so it still seems to be an issue with my database connection but i would have expected it to give me an error when i started my server if that was the case so let's just change this to be something nonsensical and then we'll rebuild so we would expect this to then fail when it tries to set up our connection because this is not an actual database it does fail okay how are we using this on the front end react app base url oh i think let's see so there it was localhost 5000 here it was my base url and that i'm specifying as this this is not very exciting but just some real life debugging going on here let's see route so let me just try to go to the api server itself hello world movies slash list no cannot get i should be able to go to that path slash movies cannot get movies that's weird hello heron and mendes you joined at a very boring time where i'm just trying to debug why i just separated out sort of a
development and local and production configuration and now i'm trying to run my local configuration and the front end is working so if i go to localhost 80 i would get my application i can see this page but when i go to list i get cannot read property length of undefined and it's returning nothing i wonder if i have an issue with my atlas config where i'm not actually able to read let's see database access network access includes my current ip okay it's fine why can i not get movies not found well i'm going to continue on and get my virtual machine set up and hopefully as i do that process something will come to me in terms of why this is not working as expected i do want to make one additional change and that is in my docker compose production i'm going to mount in a volume for my client and that is going to be where i store my tls certificate so it's going to spin up and get an https certificate for me to serve on and so that will be at the location caddy data and that's wherever on the host system it's going to store it and then data here inside the container and caddy config and that will be the config directory okay i think that is fine so we did this we did this we did this right that's good and then what else we did this but it's not really working so we're gonna give ourselves an x for now and then i'm going to move on to part three and hopefully we can figure out what's going on along the way so in order to do that i'm going to go to digitalocean i'm going to start by creating a virtual machine and again the better way to do this the devops way would be to use something like terraform or another infrastructure as code tool to provision these i'm just going to do it in the gui for the sake of time docker on ubuntu is the marketplace image this is a really nice starting point if you're using docker and just want to get something up and running very quickly it will have docker and docker compose pre-installed on the
machine i don't need much compute power at all let's just go ahead i'm going to tear this down afterwards though so i might as well pick something that has a little bit of juice behind it that should be fine create droplet i am then going to go to networking and firewalls and i have this http firewall rule so it's gonna allow traffic on port 80 allow traffic on port 443 and allow me to ssh into it and i just need to connect that droplet to it it will show up here not sure why it's not showing up let's see oh it's not provisioned yet docker ubuntu okay so it is now provisioned and we should be able to add it to this firewall firewalls http traffic droplets add there it is and then i also need to add a rule to my atlas database configuration that will allow me to connect from this machine so i'm going to copy that ip address go to here and edit this to match my current virtual machine confirm while that is applying let's set up a make target in order to ssh into this machine the ssh string is going to be root at the ip address so by using root at that ip address i can then hopefully connect to that machine so let's kill that add it to my known hosts and now this is a session on that machine so that's great i also need to add a domain name record a dns record pointing to this machine so i'll go here to cloudflare and we're going to do something like mern.mysuperawesomesite.com point it there save because i'm using caddy it's going to use unencrypted connections to set up the https certificate so i need to set this to flexible and later i'll upgrade it to full once we've provisioned that i don't want to save okay now i want to actually copy my code to this machine and so the proper way to do this would be to actually have a system like github actions or circleci or gitlab have a pipeline that builds these images and pushes them to a container registry like docker hub or google container registry many of the cloud providers have a
container registry the way that i'm going to get this code onto the system is to copy all the files from my local system onto the machine using an scp secure copy and then i'm going to build the images and run them there directly so i'm going to do copy files and we're going to use the scp command recursively for this whole directory and then we're going to pass it that ssh string and then the path on the system that we're going to copy to is just going to be root because that is the home directory for the root user which we are using to log in so let's do make copy files there we go everything's getting copied over one thing that we don't necessarily need to do is copy all of those node modules so i'm actually just going to delete them since i don't need them since i'm using docker anyways so get rid of that get rid of that and that should vastly cut down on the number of files that we actually need to copy and there we're good make ssh and if i do an ls i can see all of my local files were copied into this directory make is not installed out of the box so if i do make it will not be found but i can then install it with apt apt install make and so now i should be able to do make build production and fingers crossed it's going to build our production images with all of the necessary configurations i'm probably going to run into the same bug that i ran into locally unless it has to do with the caddy file config and the production one is correct but the local one was incorrect for whatever reason but we will find out here shortly hello from romania sabul bulescu adrian hello and hello hamza islam welcome to the stream we are just trying to copy our code onto the digitalocean vm that i just set up and we're building the images there and then we're going to try to run it we have a react client that we are using on the front end when we're developing locally we run that as a development server when we run it in production we're actually
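the copy-files and ssh targets being written here could be sketched like this; the ip address is a documentation placeholder, not the stream's real droplet:

```makefile
# makefile -- sketch; 203.0.113.10 stands in for the droplet's real ip
DROPLET_IP := 203.0.113.10
SSH_STRING := root@$(DROPLET_IP)

ssh:
	ssh $(SSH_STRING)

copy-files:
	# recursively copy the project into root's home directory on the vm
	scp -r ./* $(SSH_STRING):/root/
```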
building it into a static set of html javascript and css files and then serving that with a caddy web server we have an express-based api and then our database when we were running locally for development we were running it inside of a container all of this set up with docker compose when we're running it in production we're going to have it in mongodb atlas which is a database as a service product from mongodb and it is building our images oh i did not set my base url in my make file did i that i think is my issue from before oh no i did within this makefile okay i did set it but it looks like our docker images have built so i can do docker image list and we can see we have api server we have react app production latest and so i'm going to do make run production and see what happens caddy data read write but it was not found here ah so i defined this but i didn't define a volume in the volumes section so let's do caddy data driver local caddy config driver local question hi we'd like to know why to use a make file isn't docker compose sufficient a docker compose file would be sufficient i just don't want to have to memorize and get correct all these different options like i'm setting this environment variable local for these two i'm passing it the name of the docker compose file and so i could type this out every time but this is just a way for me to separate the different commands and keep them organized so that later when i come back i'll be able to find them and use them appropriately chris fischer says thanks found you from traversy media great video great content thank you chris we just added that data volume to our docker compose production file we are going to exit from here we'll do make copy files to get the updated file into our system make ssh make build production okay and then make run production we see it setting up the https it looks like it was successful releasing lock that's a good sign so i'm actually going to go to
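the fix being applied for the missing volume is declaring the named volumes at the top level of the compose file, roughly:

```yaml
# sketch -- the client mounts named volumes for caddy's certificate
# storage and config, and those volumes are declared at the top level
services:
  client:
    volumes:
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
    driver: local
  caddy_config:
    driver: local
```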
mern.mysuperawesomesite.com uh first i'm going to upgrade this to full so now that we have our certificate we want the traffic from cloudflare to our origin server that's our caddy server to be encrypted so that's why i set it to full and our configuration is actually enforcing that traffic must come on 443 over the encrypted line and so that's why it didn't work a minute ago but if i refresh still not working ah there we go we have our application list movies and we get movies from our mongodb database live demo movie that is 10 out of 10 and the time is 4 34 add movie success list movies and there we have it so if everyone in the chat wants to go to that site you should now be able to load it and interact with it yeah so i'm not sure what was going on locally it was something to do with how i was capturing the uri path or url and so when i tried to hit the slash list slash movies path it was somehow getting truncated and not properly passed to the api so i'll probably debug that offline but it appears that everything is working in our production version i would want to do a couple of things like for example i would probably run this in the background so i would add a dash d flag here actually how does docker compose handle running things as a daemon yeah so you can add the dash d option to run it in the background what else would i want to there there there cool green check marks across the board except for our local version of the production config was not quite working right but i'll probably figure that out offline like i was saying but anyways what else would i want to do before deploying this to mysuperawesomesite.com i mean one thing that comes to mind is that currently this api doesn't require any authentication to connect to so you can just go in and anyone can create movies and delete movies and so depending on the purpose of the application that may or may not make sense you could add user
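the dash d change mentioned here would just be added to the run target, assuming the compose file name used earlier:

```makefile
# sketch -- -d detaches compose so it keeps running after logout
run-production:
	ENV=production docker-compose -f docker-compose-production.yaml up -d
```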
authentication you could either implement it yourself and have passwords and you could store those passwords or you could store a hash of those passwords in the database or you could use an external provider like google oauth or maybe an auth0 setup as a way to add individual user accounts and then you could set up permissioning on the api itself such that only certain users could modify the values or you could have a separate listing for each user for example so that would be another good step to take as you were working to productionize this thing you probably also wouldn't pass the environment variables as a file because now make ssh oh where am i cd dot now in my code on this vm i have config cd config i have these files that contain my secrets my connection string to that database so i'm going to actually remove those star dot env so those are no longer there the proper way to do that would be when you issue your docker compose command or if you're doing individual docker commands you would pass them each as environment variables on the command line so we could have something like this where instead of passing an env file we could use let's see what it is actually docker compose environment variables yes you would have something like environment and then we could say uri and that could be equals and then we could actually pass that from our make file we could have that and then in our makefile when we actually spin that up we could have uri passed in like so so that would be another improvement over the way we did it with the environment variable file we also then wouldn't need to use dotenv the node package we could do it directly i think that is about it any other questions from the chat before i go ahead and sign off thanks for tuning in and hopefully this was helpful as we migrated this system from sort of a baseline locally running node system dockerized it got it set up with docker compose locally made
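the improvement being described, passing the secret on the command line instead of copying .env files to the server, might look roughly like this; ATLAS_URI is an assumed variable name:

```yaml
# sketch -- the compose file forwards a variable from the caller's shell
services:
  api_server:
    environment:
      - ATLAS_URI=${ATLAS_URI}
```

a make target could then be invoked as something like `ATLAS_URI="mongodb+srv://..." make run-production` so the connection string never lands in a file on the vm.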
the necessary modifications to have a production build of that react client changed some environment variable setup to combine the two so that it was a little easier to work with between development and production so that we can use as similar a setup as possible and just make the modifications in our configuration versus in our application and then set up and ran it on a virtual machine in digitalocean all right i'm going to sign off thanks everyone take care
Info
Channel: DevOps Directive
Views: 14,452
Id: DftsReyhz2Q
Length: 97min 49sec (5869 seconds)
Published: Mon Dec 28 2020