Continuing to Dockerize a PERN App! [DevOps Office Hours Ep. 04 -- Featuring Donny Roufs]

Captions
Hello and welcome to the DevOps Directive office hours. This is the fourth episode, and once again I am joined by Donny Roufs, and we will continue to work on his project, Leaguedex. We've had two episodes so far on this project: in the first episode we just reviewed the status of the project and went through the codebase a little bit; in the second episode, which was the last one, we spent some time Dockerizing the application and getting a development setup with Docker Compose. In this one we'll be continuing with that, adding some more production-ready configurations, and depending on how far we get, we'll start working towards getting that deployed onto DigitalOcean. So Donny, welcome back. Oh, thank you! I'm actually hyped about today, because today is going to be this next step of moving away from development, so that's cool. Yeah, and at the end of the last stream we had gotten stuck a little bit on a couple of things, so I'll go ahead and share out my screen here in a minute. Let me check the settings on my page, because it says we have zero concurrent viewers, and usually we have a few by this point; I wonder if I have it set up as private or something. We could be here chatting all alone forever. Well, I can still watch it back. That is true, that is true. No, it looks fine, so we'll see if people join; in the meantime we'll just be building some stuff. Oh, Ismail says he's here. Cool, good to know we've got at least one; maybe it's just the YouTube Studio acting up. There we go, now we've got a few viewers coming in. Welcome, welcome, glad to have you here. I need to share my screen, and I will try to zoom in more than I did last week, because I remember it being a little difficult to see. Let me know in the audience if you can see this all right, and also let us know where you are tuning in from, because it's always good to know who all is watching and connect
with the community members. Let's see, so I have up... oh, this was just some stuff I was doing with Kubernetes, playing around in the background. We have our docker-compose file; let me close everything else. Just as a reminder: we have a Postgres DB, and these are just our development settings, exposing port 5432 to connect in. All of our services are connected on this "pern-app" bridge network, which is configured here at the bottom; that just allows them all to talk together without necessarily being able to talk on the host. We then also have the API server, which is located in this server directory. It is reading in a couple of environment files that have some secrets, so both our database connection string as well as some other configurations are in the environment file. We're mounting in our source code here, and that enables us to reload on the fly: when we make a code change locally, it will be reflected inside the container, and we're using nodemon currently to watch for those changes and restart the API server when it detects them. On the front end there is a React-based client located in this client directory. This bit is just sort of a hack: the development server that Create React App provides, when you do yarn dev or npm dev, closes immediately if you run it inside of a Debian-based container, so this allows you to keep that server open without it terminating the container. We're exposing port 3000, and again we're mounting in the source code so that we can reflect changes live and not have to rebuild the container every single time. And then I don't think we're actually using this for anything; this is just the database admin tool, right? Yeah, it's something that comes in handy during development sometimes, but besides that you don't use it, at least I don't. Got it, cool. And so at the end of the last stream we had gotten a little bit stuck on getting this all running with the docker-compose setup. Afterwards I went and debugged that, so I can
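Putting the pieces just described together, a sketch of what that development compose file might look like follows; the service names, image tags, and paths here are assumptions reconstructed from the discussion, not the actual file:

```yaml
# docker-compose.yml (development) -- a sketch, not the project's actual file
version: "3.8"
services:
  db:
    image: postgres:13
    ports:
      - "5432:5432"                       # exposed so local tooling can connect in
    networks:
      - pern-app
  api:
    build: ./server
    env_file:
      - ./server/.env                     # DB connection string and other secrets
    volumes:
      - ./server/src:/usr/src/app/src     # mount source for nodemon live-reload
    networks:
      - pern-app
  client:
    build: ./client
    ports:
      - "3000:3000"
    volumes:
      - ./client/src:/usr/src/app/src     # reflect changes without rebuilding
    stdin_open: true                      # the hack: keeps the CRA dev server from exiting
    networks:
      - pern-app
networks:
  pern-app:
    driver: bridge                        # services talk to each other, not the host
```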
show you what I did. Essentially we were not properly initializing the database. Because it's a Postgres database and we need it to match our API's schema, we need to seed that database and then run any migrations that we have on it. So what I did after the stream was take the bootstrap script that you already had and modify it slightly. Now what it's doing is running this yarn prisma migrate up and yarn prisma generate inside of that container, and then it actually starts the index.js server process. This works, but it's not ideal, because generally when we're deploying this to, let's say, DigitalOcean, we're not going to want to run these commands every time; we're only going to want to run them if the database needs a migration. And let's say the app crashes: we want to restart only the server, we don't want to rerun any of this stuff. The way that I'm currently calling this is just at the end of the Dockerfile: where before it was running that nodemon command (or it might have just been running node or yarn start instead), it's now running this bootstrap script, and so that gets executed each time. Let's just make sure that everything is working as expected. Let's see... and before it's broken... oh no. I just noticed that The Digital Life Christian is in the audience, so welcome, glad to have you here; we're talking Docker and Node and Postgres, all sorts of good stuff. I am in the top level: docker-compose up. It's in swarm mode; didn't even know I was in swarm mode. But yeah, so that prints out a few things: green is our front end, purple is our database, yellow is our API server. "The relation already exists," okay. All right, let's go ahead and see what that looks like on port 3000, and we should be able to log in and log out if things are working properly. I think you said it's admin and then "asd", the easiest possible password? That's right, that's right. Cool, okay, so that all seems to be
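The modified bootstrap script being described might look roughly like this; the exact file layout and entry point are assumptions from the discussion:

```shell
#!/bin/sh
# bootstrap.sh -- a sketch of the post-stream fix described above
set -e                       # abort if any step fails
yarn prisma migrate up       # apply any pending migrations to the database
yarn prisma generate         # regenerate the Prisma client into node_modules
node src/index.js            # finally, start the API server process
```

As noted in the stream, the downside is that the migrate and generate steps run on every container start, which is exactly what gets split out later.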
working, so we're back to where we were at the end of the last stream. Like I said, we probably want to extract this database-specific stuff out of our main server container and have a way to seed the database, a way to migrate the database, etc. Let me just add a to-do list here: extract actions from server container. Once we do that, we want to add an nginx server and a build of the client. What else do we need to do? Good question, good question. I mean, eventually we'll need to get our certificates and stuff set up with this configuration. Right now you have those just configured directly on the droplet, right? Yeah, so initially I did it on the droplet, but then I started using Cloudflare, and now Cloudflare handles everything for me. Got it. So basically you have a certificate from Cloudflare that you put on the droplet? Are you using the flexible option, where it terminates your HTTPS at Cloudflare and then sends HTTP to your server? I believe the flexible option, yes; I'm not a hundred percent sure, because the thing that I did before I used Cloudflare was set up my own certificate. I can't remember what it's called, but there was a tool to do it, and I used that for a few months, and then I moved to Cloudflare and everything just worked. So nice; I was like, okay, it works, whatever. Okay, yeah, so maybe we don't need to worry about certificates then, but let's focus on this first one to start. And Joshua says he always uses strict on Cloudflare and Let's Encrypt on the server. Yeah, so by doing that you can make sure that traffic is encrypted from the Cloudflare proxy all the way to your server, which is a bit more secure. If you're not dealing with any super sensitive data, the attack vector between Cloudflare and your server, where someone would have to get in the middle and intercept that request, is pretty unlikely; but if you're dealing with something that is super sensitive, certainly you would want to
have the encryption on your server as well. Yeah, I mean, I think it all depends on what the data you're protecting is, what the threat model is, who you think might attack and how, what their incentives are, and how hard it would be for them to do it. Okay, so why don't we, here within this Dockerfile, separate it out into what's called a multi-stage Dockerfile. We'll first have kind of a build stage that copies in our code, installs all the dependencies, etc., and then from there we can have one image that is dedicated to migrating the database, and another one that's dedicated to seeding, or to running the actual API server itself. And I just saw that my father showed up this time, Donny; I know your father popped in in our first stream, so glad to have you here, Dad. He's now my number one fan for sure. Those are the best. That's right, that's right. And so for a multi-stage Dockerfile, we will name each stage. You don't have to; you can just reference them by index, but it's easier if we name them, so let's call this one "builder," and at this point we will have copied in our code and installed everything. One question comes to mind: do you actually want to separate the migration? Because if you run the migrations and there's no new migration, then Prisma will automatically say, okay, there's nothing to do, so we can just skip this. What I was thinking is we would keep it separate, which just seems cleaner from an organizational point of view to me, and then we can still run it with each deploy, potentially. There are also presumably cases where, if you make a breaking change to your database schema, you wouldn't want to automatically run it, right? There could be a situation where you need to wait until after you've deployed the API server before you run your migration, or vice versa. That's true. Ideally you design your schema
changes such that they are non-breaking, and there are some tricks you can use to enable that: adding a new column, writing to both the new column and the old column, then deprecating the old column in the API server, and finally removing it from the database. It does require much more work than bringing the app down for 30 seconds, running your migration, and then bringing up the new version, so there are certainly trade-offs, and for a project like this it's probably not worth it to really enforce zero-downtime migrations for everything. A really massive app that is getting tons of traffic has a much higher incentive to build these zero-downtime migrations, so that they don't have to take random downtime just to upgrade the app version. That makes sense. Yeah, so what I was thinking here is that we still want all our source code (that pop-up is getting annoying; I should turn that off), but we will have a second stage, basically, here, and we'll also make it from the same image. Or we could just override our command when we actually execute it; that could be another way to handle it. So, let me kill this: rather than doing multi-stage, what if we just have our default command be, let's say, whatever we're calling at the end of this... there. Okay, so now if I run it, we should just run the API server and not the Prisma stuff, but that should be okay, because we still have that same Docker volume where those changes were getting stored. Nicholas says hello from Argentina; hey, how's it going, glad to have you here. Looks like it got mad. Yeah, why did it crash? It didn't like the new Prisma client, okay. So this prisma generate command generates a schema file or something, right? Yeah. Okay, so before, I was running it inside that container with this bootstrap script, how it was before, right. And so now that file, when we kill the container and bring it back up,
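The multi-stage layout being sketched on screen might look something like this; the base image, stage names, and paths are assumptions, not the actual file:

```dockerfile
# Sketch of the multi-stage Dockerfile discussed above
FROM node:14 AS builder
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn install                     # install all dependencies once
COPY . .                             # copy in the source code

# One image dedicated to migrating/seeding the database
FROM builder AS migrate
CMD ["sh", "./bootstrap.sh"]

# And one dedicated to running the API server itself
FROM builder AS api
CMD ["node", "src/index.js"]
```

A specific stage can then be built with `docker build --target api -t leaguedex-api .`, keeping the migration tooling out of the image that actually serves traffic.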
it's not necessarily getting stored. Yeah, because I believe that yarn prisma generate adds something into node_modules, so it loses it, I guess. Okay, and so realistically we want that file to exist in our new container, but we also don't want to necessarily run that every time. We could run it and have it saved outside the container, and then mount that generated file in. Is it this file that gets generated, or a different one? No, that one is an input; it doesn't get generated. If you look into node_modules, I think it should add something like .prisma. Yeah, so that's what it generates; it also actually includes the schema model. Okay, yeah, now that makes sense. And so we're not copying our node_modules from our local system, because the dependencies could be different. Before, we were thinking we could run it within our build, within our Dockerfile; in that case we would need... maybe we can just do that. I had taken that out since last time, but maybe that would work just fine. This one... we want to do that probably right after we've copied in our... do we only need that Prisma schema, actually? I'll just do it after we copy everything. I'm not sure; the last time I used Prisma was in a TypeScript environment. Ah well, let's just copy it in, our RUN... and it was schema.prisma, if I could type "prisma". Nicholas says there's too much COVID where he is; he shared the link with some friends and hopes they can join us. Yeah, there are a number of places right now that are really struggling with COVID, so sorry to hear that; I hope you're managing during these times. My dad says he is in Asheville, North Carolina, which is my hometown growing up, and hello to Cheyenne from India, glad to have you here. "Cannot kill container: something is still running," oh. And let me just paste a link to your original repo in the chat so that people can check it out if they haven't yet. The production version of the site is leaguedex.com, and the repo is here. I
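What's being set up here, running the Prisma client generation as part of the image build so the `.prisma` output lands in the image's own node_modules, might look like this in the Dockerfile; the paths are assumptions:

```dockerfile
# Sketch: generate the Prisma client at build time, after the schema is copied in,
# so the generated files in node_modules are baked into the image
COPY prisma/schema.prisma ./prisma/schema.prisma
RUN yarn prisma generate
```

This avoids regenerating on every container start while still keeping local node_modules out of the image, since local dependencies could differ.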
have a fork of this repo on my GitHub, which is what we're working with right now. So it looks like that worked, which is good: we were able to use our schema and generate the necessary Prisma files in the build step, which allows us to run it directly. Okay, let me make sure we're not fooling ourselves: zoom in, log in... great, so that is good. I also saw that pm2, which is that process manager you're using on the droplet, has a Docker config that adds some niceness around productionizing your environment. You basically just add one line to the Dockerfile, and then it essentially wraps your application in some of the niceness that pm2 provides out of the box, so we could do that as well. "Not found"; is that okay? Yeah, that's fine. Okay, and I should probably turn off swarm mode if we're not actually using it. Let's see. So by default Leaguedex actually tries to see if you're in game, but there's no check for whether you actually have an account or not, so it's going to be complaining about "not found." Got it, got it. And then we had a question from Ismail about swarm mode. Yeah, so I had swarm mode on by default because I was experimenting with it off stream. I don't think we're going to need it here, because we're going to deploy onto a single droplet on DigitalOcean; we know we have a fixed number of CPUs and a fixed amount of memory, so we can just specify within our configuration how many copies of the app we want, and we can manually wire up the networking. Swarm mode, or something like Kubernetes, or any orchestrator, is really nice when you might be adding more copies down the line, because it automatically handles a lot of the configuration for having anywhere from one to N copies. But since we'll probably just have a fixed number, say one staging API and maybe two or three copies of the production API with nginx load balancing across them, we don't really need swarm mode. But I was saying we
should go check out this pm2 Docker thing. Yeah, so I think we can just run this; I will run it there, but I'll use yarn. This will actually be even less frequent than our... and then rather than run our app directly, we will run this command. I think there's a typo in the beginning, by the way: instead of "yarn install"... no, no, that's right: npm install becomes yarn add. Good catch. It's actually funny, because I started using yarn because I didn't want to prefix my scripts with "run," and then I ended up creating scripts with "run" in front of them anyway. And just because I don't want to keep typing the same things over and over, I'm going to add a Makefile, because that's just what I do. This looks really close to YAML. No, this is closer to just bash; make and bash are slightly different. Each of these entries at the root level is a target, or recipe; when I invoke it, it will just run each of the commands that are underneath it in series. You can't have multiple levels of targets like you could in YAML, with multiple levels of indentation. Make is just a super old Unix tool, primarily used for configuring builds of compiled languages, but for one-off little commands I find it super useful. I see. So now I'll just do make build or make run-local, and tab-complete will work, and it'll be good to go. Okay, so we just did that; let's try running it again and see what happens: make run-local. And Ali says hi, and that he's not so late. No, we just got started a little while ago; we're about half an hour in, continuing on from where we left off last time. So what did it not like? "pm2-runtime: cannot find module." But we installed it globally; why is it not found? Oh wait, can you go up? Yep. So to install a global package with yarn, you need to do yarn global add; the npm-style flag doesn't work. Yeah, so it looks like it's yarn global add pm2; yarn seemed to
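The Makefile being written on screen might look roughly like this; the target names are assumptions based on how they're invoked in the stream:

```makefile
# Sketch of the Makefile discussed above: each root-level entry is a target,
# and invoking it runs the commands beneath it in series
build:
	docker-compose build

run-local:
	docker-compose up
```

With this in place, `make build` and `make run-local` (with tab completion) replace retyping the longer docker-compose invocations.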
silently ignore our flag and say everything was all right. Oh, it works... no, it doesn't. JustADev asks what my thoughts are on running Docker Compose in production. People often advise against it, I think primarily because it's usually used as a dev tool; if you configure things properly, it's not too bad. The one downside that really stands out is the fact that you now have all of your components tied together in this one config, so if you want to upgrade just one of them, or scale one of them up and add another copy, that can be a little cumbersome with Docker Compose. So I think for smaller projects, if it's just one copy of each component, it's not too bad, as long as you make sure to configure it right. And Nicholas asks how you manage Compose between environments; I'm assuming you mean between a development and a production environment. For that I generally have multiple copies of my docker-compose file: I'd have a development one, in which I am running the dev server for the client, and then we're going to have a separate one where we are running an nginx server with a built version of the client source files, that is, the resulting static HTML, CSS, and JavaScript. And then on DigitalOcean I'm not sure whether we'll use that docker-compose file or just run the necessary docker commands to instantiate those containers. Okay, so now hopefully we have that pm2 package installed globally, so if we do make run-local... I guess I'm just going to comment out this Adminer part, since we're not using it; no need to have it there. And it seems like it is now happy. Yeah, it looks like it's getting those requests. And so what that pm2 package does, I guess, is add a few things for us, which is nice; I'm not sure exactly what all these things mean, but that's good, seems useful. That's how I always feel about DevOps. Oh, interesting, we might even be able to do the load
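One common way to realize the per-environment split just described is a base compose file plus a production override; this is a sketch under assumed names and paths, not the project's actual setup:

```yaml
# docker-compose.prod.yml -- a sketch of a production override file.
# Run it layered over the base file:
#   docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
  client:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./client/build:/usr/share/nginx/html:ro   # built static HTML/CSS/JS
```

The development file keeps the CRA dev server, while the override swaps the client for nginx serving the built assets.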
balancing piece with pm2, because this is saying that if we wanted to have multiple copies running, this might be the way to do it. Awesome, good to know. For now we'll just focus on getting one copy up and running. I believe also, before we run our final command, we should do USER node. This switches the user inside of the container from the root user, which is used up here because it's necessary to install a bunch of stuff, to a non-root user with a higher UID; I think this is user ID 1000. That just adds another layer of protection: if someone did find an exploit in your application or one of your dependencies and had access to the container, it's best not to have them have access to that root user. So that should be another upgrade that we make. Nice emojis from Ismail; oh, interesting, sometimes the emojis come through as text in StreamYard, but it looks like those actually made it through. And Ali says it's hard to read even on an UltraFine monitor. Yeah, I tried to zoom in quite a bit to make that better; hopefully it's legible. I may also at some point upgrade to 1080p, but I haven't done so yet; I was hoping that just zooming one more click would be sufficient. Oh, I mean, the quality seems fine for me, and I'm on an UltraFine too, right? Yeah, I wonder if it's getting degraded at all in between. Are you looking at it on StreamYard or... you're probably not on the YouTube stream, are you? Both, I've got both. Okay. I actually just realized that I could go to the comments to see all the messages. Oh yeah, for sure. Oh, there it is. Yeah, it looks okay; I just pulled up the stream version. All right, so: make build, make run, we wrapped it in pm2. We now wanted to add a way for us to run those migration scripts and the seeding scripts without having them in the Compose for the final application, right? So what if we did, like, migrate-db and seed-db, and I'm just putting echo pass as a
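The two production upgrades just mentioned, wrapping the app in pm2's runtime and dropping root privileges, would sit at the end of the Dockerfile roughly like this (entry-point path assumed):

```dockerfile
# Sketch: the pm2 one-liner plus the non-root user switch described above
RUN yarn global add pm2    # makes pm2-runtime available inside the image
USER node                  # switch from root to the unprivileged node user (uid 1000)
CMD ["pm2-runtime", "src/index.js"]
```

`pm2-runtime` keeps the process in the foreground the way containers expect, while the USER line limits what an attacker could do with a compromised process.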
placeholder until we actually do something, so if I were to run make seed-db, it would just print that to the terminal. But we want to do docker run... what am I actually naming these; what does it get tagged as when I build it? docker image ls... okay, I have a ton; let me grep for leaguedex. Okay, so it's just getting that latest tag. So I'm just going to run that container, but I want to override the command by specifying another command here. Like, if I just add -it and bash, then it should run bash instead of our normal pm2-runtime command and just drop me into a shell on that container. Yeah, so here this is now a shell running inside of that container, so that is cool. And what we wanted to do is run this guy. So normally we would... does the order of these matter? Like, we need to first run this migration? Need to migrate first and then generate and seed? Migrate first, then generate, and then seed, yeah. Okay, cool. So let's get rid of this in this bootstrap, and essentially we're just going to run this bootstrap.sh instead. So this is pwd, so if we just do... there's a typo. Typo? Where's the typo? At the very end of the line: you named it "boostrap." Boostrap, there we go; might be a cool name for a new app, Boostrap. So now what happens if we run that? Exit... oh, come on, it failed. Okay, okay, no, that's because the database is not running; this can't access the database. So really we almost need a docker-compose for just the DB, or I could just run it. I think ideally we start the DB in a container, we run this bootstrap, and then we store all the data from the DB and the configuration stuff on our host system, which we can then mount into the actual one, right? Because we want to be able to persist the data from this bootstrapping phase into our actual application-running phase, and so I think getting it stored into a volume on the host is going to be the way to go. Excuse me. Ali says
you must have a better pair of glasses than he does, because he's unable to see well. Maybe I should get new ones, though; I feel like I'm still blind even with these glasses. Okay, so I'm going to do it in just a separate compose file, because it'll be easier to configure. So why don't we just do this... rename... a very long file name... then we're going to use the --file flag. I always forget... why will that not die? Come on... docker-compose file... Killed that terminal; not sure what was going on there. Okay, so we're all cleaned up from before. We now have this separate docker-compose file, and in here we do want the database, and we'll want it on that network; we do want the API, but we're going to do it a little differently; we don't actually need the client; we don't need Adminer. The database is fine; we have this depending on the database, and then I think we'll just have a command section, something like that. That's probably what we want, yeah. Okay: /usr/src/app/bootstrap.sh. Let's see: make bootstrap-db... no, it didn't like it. Oh, I needed an up in my make... "orphan containers," okay. So it looks like it successfully found the database container and ran that script, except why is it listening? We wanted just to... oh, I haven't rebuilt the container, so potentially the bootstrap inside of that container still contains those changes that we had before. And why does it not want to die? Done, done. All right, so now if we do make bootstrap-db, what do we think is going to happen? Probably remove those containers... "All migrations applied." "Seeding database." "Exited with code 0." Perfect. Exit code zero when the program finishes means it was successful, so that is good. Now we would want to kill that. The test of whether it's working properly or not would be to delete our volume, so that we don't have the database migration and seeding applied, run that one first, then run our normal docker-compose and see if we're able to boot up successfully
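The DB-only bootstrap compose file being assembled here might look like this; the file name, service names, and script path are assumptions from the discussion:

```yaml
# docker-compose.bootstrap.yml -- a sketch of the separate bootstrap compose file.
# Only the database and the API image: no client, no Adminer.
version: "3.8"
services:
  db:
    image: postgres:13
  api:
    build: ./server
    depends_on:
      - db
    # Override the normal server command: migrate, generate, seed, then exit 0
    command: ["sh", "/usr/src/app/bootstrap.sh"]
```

Wired into the Makefile, a `bootstrap-db` target would then run `docker-compose --file docker-compose.bootstrap.yml up`, and a clean exit code 0 from the api service signals the database is seeded and migrated.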
without having to have the migration within the main API server container. "It's in use." Used by whom? We'll just force it. That sounds familiar; if force doesn't work, then I'm lost. Okay, so first let's do make build, make run-local. This should fail, we think, if the database volume was properly wiped out. No. So apparently those volumes were not where that database was stored. So maybe we should create an explicit volume for the database, to persist anything we want to keep, the data as well as any configuration stuff, onto our host system. That would be good. Or we could just put it in a specific named volume. Let's name it. And then what are all my options for this? I could do a bind mount to our local directory... this looks promising, yes. I'll just define it within our database portion here, and we want... let's see, I'm just going to make a db_data folder out here, and then let's not have that one for now. Right. And do you need to add this to the gitignore, or is that not necessary? That will likely be necessary, right, yeah. Let's just run it for now and see; ideally we'll get some files in there after we run it, unless it doesn't like our... It keeps warning us about this, so let's delete it... Makefile... there we go. Pushkar says it looks like he's pretty late. Yeah, we've been going for a little while, but we're still making progress, so hopefully you'll still see some useful things. And Ali wants to know what I'm drinking: it is coffee. It is 9 a.m. here, so I'm having my morning coffee. It's a little later for Donny, but he's also having coffee. For whatever reason I only drink coffee, man. What is your preferred method of making coffee? Well, if I'm not lazy, then I tend to use my French press, and I grind my own beans (beans, not bones). Otherwise I would just use espresso or something like that. What about you? Yeah, I have what's called an AeroPress, so it's kind of
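The two persistence options being weighed, a named volume versus a bind mount into a local db_data folder, would be declared in the compose file roughly like this (paths assumed; `/var/lib/postgresql/data` is where the official Postgres image keeps its data):

```yaml
# Sketch: persisting the Postgres data directory
services:
  db:
    image: postgres:13
    volumes:
      - db_data:/var/lib/postgresql/data      # option 1: named volume managed by Docker
      # - ./db_data:/var/lib/postgresql/data  # option 2: bind mount to a host folder
volumes:
  db_data:
```

With the bind-mount option, the `db_data/` folder would also need a gitignore entry, as discussed, so database files never land in the repo.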
in between a French press and an espresso maker; it's like a plastic thing with a plunger, and it makes a pretty good single cup. At my previous job we had this really fancy coffee machine which did everything for you, so hopefully if I get rich one day I will get myself one. Nice, nice. So, our database... let me see... docker... yes, so let's just jump into that database container and see what's going on: docker exec -it ... bash. So do we have data in /var? Okay, so we have all this stuff. We want to get those files onto the host system. And Ali was asking you a question, Donny; he says, are you in college, self-taught, or in between? In between. No, I am self-taught, but I did join college. I've actually been teaching; I've done some workshops, but I didn't really learn anything there. The positive side of college is that if the thing you are studying is your passion, then it's really great, because you have this big amount of free time to do your own thing. There are some people in my class for whom programming is just not for them, and you can actually tell that they're really struggling with it, while I'm like, okay, you give me my assignment, I will do it in like two hours, and then I've got all the free time for myself. But yeah, basically I'm self-taught. I got mentored by JustADev, who is also watching us right now. Oh yeah, we saw a comment from him earlier. Cool. And I studied mechanical engineering, graduated a while ago, spent some time as a research scientist and research engineer, and then eventually switched over to software a few years back; so somewhat self-taught, somewhat academically taught, though I think most of my software knowledge came from learning on the job. Yeah, so what we wanted to happen was: we want to get all this data, which is in this directory in our container, and we want it stored locally in this folder, so that we can persist it outside of
that container. For example, once we're on our droplet, we'll want that data stored on the disk, so when we take a snapshot, a daily backup, we'll have it in the backup. "No declaration was found," okay. Yeah, so if I do it that way... let's do docker inspect on that container and just see what mounts it actually has. docker ps... postgres... docker inspect. So what do we actually have in terms of mounts? We've got this volume. Yeah, so it looks like it just has this one volume, and it's not actually using our local one. Like, if I do... b62... b62... maybe it just needs to be initialized. All right, so let's kill this, delete that volume, then start our app. Mm-hmm. So yeah, Ali asked a really good question; he says, what are the design constraints in Leaguedex making DevOps harder? It feels like a lot of moving parts, and it's getting complicated for a single machine. Yeah, so part of this is that Donny has a planned upgrade for the project: he's going to be making a bunch of changes to the application, adding TypeScript, and so we wanted to make the process of updating and deploying rock solid, so that it'll be very easy for him to do that. The other thing that was missing a little bit before was that he had just manually spun up the DigitalOcean droplet and gone and installed and configured a bunch of things on there. By moving it to a Docker-based deployment, the configuration of the virtual machine pretty much doesn't matter at all, because everything dependency-wise will be contained inside these Docker containers, and he could deploy it very easily to a new system once we get things locked down here. You are right that it is borderline overly complex for a single-server deployment, but it's also a good learning process to add all this stuff. I don't know; any additional thoughts, Donny? No, you pretty much said everything. Cool. And now we're just kind of stumbling along, trying to figure out why we can't
get our postgres data to show up on our host system we is this one necessary for some reason what is this one postgres socket what is this actually doing i guess the only thing i could add is that that we both have a staging and production environment yep yeah that's a good point as well so you want to have a staging environment and right now you you just essentially run your github action on the uh server itself and so that was becoming a bit of a problem when you would trigger a new build it would spin up a lot of resource usage on this droplet that also contained your database and your application and so part of what we're going to do is move that onto the github actions github-hosted runners and so that will enable the application system to be less impacted by development process and new deploys and that sort of thing and moving to docker helps to clean up that interface versus trying to do something like build it on the github runner match the the system os version and compatibility on the two systems and copy over the dependencies docker just helps to provide a nice clean interface for ensuring that what we build on the github runner will actually run as expected on the the droplet yeah so i was just looking at this and it seems like we're doing things correctly that's kind of like what our mount looked like we had this volume mount we do have data appended maybe the old container is still there and so it's using the configuration it already had api so this one might be our offending um hmm i thought that was going to be the trick but it was not postgres database already appears to contain yeah this is the line we want to go away we want to not have a database already and joshua is asking if docker's good on arm 64.
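the volume issue being debugged here comes down to a bind mount in the compose file -- a minimal sketch, where the service name, image tag, and host path are assumptions rather than the project's actual docker-compose.yml:

```shell
# hypothetical compose fragment for persisting postgres data onto the
# host; service name, image tag, and ./db_data path are assumptions
cat > docker-compose.sketch.yml <<'EOF'
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example   # placeholder only
    volumes:
      # host directory : postgres data directory inside the container
      - ./db_data:/var/lib/postgresql/data
EOF

# note: if a container was created before this mount existed, postgres
# keeps reusing its old anonymous volume instead of initializing onto
# the host path -- removing the container and its volumes forces a
# fresh init:
#   docker compose down --volumes
#   docker compose up -d
```

the key detail is that postgres only writes its initial data files on first startup, which is why an old anonymous volume masks a newly added bind mount.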
so they have had capability to do cross multi-platform builds for a while with build x i actually have a video about using build x on my channel that goes into that when i experimented with it it seemed to work pretty well for most of the simple docker file configurations that i threw at it when i tried it on a very complex docker file from one of my clients it didn't succeed so i think in general yes it works pretty well but you'll probably run into issues when you get to more complex situations with dependencies so it's not the it's not an issue with docker it's just the fact that maybe some of the dependencies that you care about aren't built for multi-platform use yet um and so that situation continues to improve over time um so yeah i would just test it out with with build x and you can do that regardless of what the architecture of your host system is so yeah i'll just go go give that a shot uh and i know they did just release the general availability version of docker support for apple silicon so running natively on the new macs is now possible but you will need to use build x if you want to build containers for x86 architectures let's just do a full docker system prune we're okay losing it all yeah i think it seems like the for whatever reason the database container had that had a volume that was persisting and so we weren't re-initializing um and so i think that could have been why we weren't getting what we wanted but i'm not sure not sure yeah this is the the beauty of doing it live right definitely always happens when you're live too right yeah i'm sure like after the stream i'll shut it down and then 10 minutes later i'll be like oh that's what's going on yeah like i i tend to when i uh do my programming sessions i tend to use the pomodoro technique so i'm always forced to just take a break and every time after a break i'm like ah that is why it helps a little yeah yeah i think i probably have 50 gigabytes of containers from uh from other projects it's working its way
through yeah i guess while that's working we can continue to think about so if we can get this data persisted then we can have our seed slash migration step that we were working on with this we'll also want to do the same thing within that and persist those data locally we need to add something that after essentially after we run this we and it succeeds with the migration we then need to kill the the postgres container right because the api server exits but then the postgres container is just sitting there running um so yeah we'll want to add something here we're working on that we haven't made it to that yet how are you doing on time donnie are you okay oh i'm fine he's fine all right good all the time a little bit sleepy but i've been programming too much ali says it's the real process the only reason i see you yeah i mean i think it's like it's it's sometimes useful to show the full process rather than getting everything working and then here's my 10 minute spiel on how to make it all magic yeah we got 18 gigabytes back nice uh he says just imagine someone saying yeah yeah docker's pretty easy hit start and run hand a real world project like donny's code base and not just a to-do app yeah exactly so you you start to run into the challenges when you have actual actual systems actual users it's also really annoying when i first got into programming i also watched a lot of youtube tutorials and you can just tell everyone who does a tutorial they have two screens and then you will see them looking at the second screen and then they will start like oh yeah today we will be doing this and then they add all the code it's just i mean there's nothing wrong with it right but it's just kind of i don't know it doesn't connect with me so that's why i also started a youtube channel where i just okay this is the goal and you'll see how it works out and then you'll get a video of an hour of me struggling trying to get it to work yeah i think i think there's a
balance too of like having the real process and then also having the high bandwidth uh video that you can just like go through all the all the steps yeah so i think having it having a balance can be useful i i like i think if you prepare well enough then it should be pretty easy to do without actual uh example code next next to you yeah yeah and there's also like some well it's pretty common to say that if you can't uh explain something wait how does it go again this is this quote yeah if you can't explain it you don't really know it i think is the the gist of it yeah awesome there was this quote but i can't remember whatever the point was all right so we're rebuilding those containers i guess in the meantime we could also quickly add a if i could type it correctly um uh yeah so what i was thinking here is that i would to the client one i'm debating between having a multi-stage or two separate docker files uh but we have this stage where we essentially install all our dependencies in our development one we're gonna want to have the command be this yarn start command um in our production one we'll do a run yarn build and then copy our resulting files into an nginx container actually um but yeah we'll do that in a bit still chugging along here because right this one we've installed our dependencies this will be actually base and then from this and we want to copy essentially the entire working directory into this so what's the difference between base and just a period uh so base is saying copy from this this guy uh i see so yeah with a multi-stage build you can have named stages and then copy from one to the next versus the period is just on my host system my current directory yeah uh and so really can i just do from i forget if i can do from base just as like because that would be fine for my development one but then we need to build it here what can i just do from base run yarn build from base as and then do something like nginx uh version that's
the latest nginx version good question good question docker yeah ali says one time he responded to a tweet basically saying he had noticed a typo in a tutorial but then the error just kind of went away and the the the typo went away and the error never showed up because the the creator had edited it off screen and then through the magic of video editing uh it no longer existed i wish that worked in real life oh there's a book i was just sleeping tomorrow's fixed just pause and then yeah so why don't we just use 1.19 is fine most likely and it would be copy from just builder uh then we need where does it actually put the put the files when you build them with yarn build uh it's public build so client build so this will be user source i think app build oh client oh we're already in client so it'll just be build uh and then nginx expects the files to be located here oh it was a lot of fun setting up nginx because i had like because of two environments i had to create separate folders and move them all i was so confused yeah eventually we'll probably need to do something like that where we'll this nginx container will actually have the configuration for production staging front ends as well as proxying to the back ends right but first just to get it running we'll do that and it looks like our build stage is done or our build process so let's go back we're still in the top level make run local gotta re-pull our images and what we're hoping is that when it initializes that db we get some we get some files in this guy huh look at that we got what we wanted yeah so it must have been just there was a pre-existing container that had already been initialized with a default docker volume that was not mounted onto the host and so it just kept reusing that even though we had our docker compose configured correctly uh we were not getting uh not getting these data i think it i actually noticed when when we booted up the first time with that new volume mount it gave us some
warning in the terminal like scrolled by super fast it's like oh like volume already created ignoring or like not taking effect so shouldn't have ignored that maybe but that's good that means that we have these data you mentioned a good point we want to add that to both our git ignore we don't want to check in our check in our data that would not be useful yeah that once happened to me on a project i was trying to set up docker and i did it all on my droplet and i also used git on my droplet so i pushed it to git and then long later i realized i actually pushed my db data into my github repo and it was a public one oh that'd be a nightmare so i had to reset all accounts and all that stuff because well it's in my version control so i'm kind of screwed good lesson uh yeah yeah okay so actually here it's okay because our db data is one level up uh in our directory tree from where this docker context is being built it should not be included anyways um does that make sense like because you're building the the container image uh the image in the server directory db data is living at our top level and so we should be okay on that front but i did add it to the git ignore so that's good uh great uh so now i guess the original thing when we went down that rabbit hole was we wanted to run our migration on the docker compose bootstrap version and then run our api server because presumably we did we die here can't reach db what running bootstrap script why did it run the bootstrap script it shouldn't have right nope nope nope oh that's that's inside the the database container that's their bootstrap script not ours okay but our api is the app running successfully that'd be localhost 3000 it is admin oh oh no aha could not proxy api so is a server running or not i oh maybe maybe it did crash but that would make sense we would kind of expect it to right because yeah yeah so did it die here please make sure can't reach it looks like it didn't wait for the database to finish coming
online before it started which is weird because we told it to but maybe maybe the database said it was ready before it before it actually was or maybe this is necessary to allow that connection let's see so you yeah so maybe maybe it is that that those data aren't getting or the the migrations aren't getting applied properly so let's try our bootstrap compose uh we want to copy this volume mount in i also should look up what what this is actually doing after environment son thank you for the kind words glad to have you here uh excited to to see everyone in the audience um and yeah donnie it's it's unclear to me if the proxy link's going to the wrong url or if the api server just isn't running um i think it's the latter but i'm not sure we could test it by going to yeah 5000 uh api champions or champion i can't remember permission denied is that normal um no i've seen it before though okay on the stream today oh interesting but then it just ran properly yeah so we can just test localhost 5000 api champion it's coming online okay api server is running properly uh now now we've logged in okay so it was the fact that the it looks like it just hadn't been migrated and so the api server was crashing upon startup and so we weren't able to log in so we are making progress that is good sorry for all the window switching there hopefully it's not too dizzying don't really need this so we'll do some research offline on what that actually does and whether or not we need it it was suggested on stack overflow but it doesn't seem like we actually need it uh the yeah so if we run docker compose in the background then wait some amount of time and then shut it down is kind of a hacky way to do it but would work like right now we run this and then we have to manually kill the docker compose process whereas it would probably be nicer like if if some new developers coming on the project they would want to bootstrap their db and then run local but for now
leaving it that way it's not not too bad we can we can clean that up later yeah so now the process would be you clone the repo you build the images you run this bootstrap db you kill it you then run local and you should be up and running with your app let's jump back to the client side and keep working on this we got client uh build base docker i think there's like a dash dash target maybe build yeah so if we say docker build target equals base cd client then that should look within our docker file and close everything else uh find the the stage name base and build just that so now we should have this uh as our base can i just do folder this is the syntax i'm not sure about if we can just use that directly target base will understand the same what's that i think you just got uh line two and five are the same though or am i going insane uh you are not insane you are correct looks like that worked uh build from all my different sites are zoomed at different levels okay yeah it looks like that was was the way that we want to do it using previous stages new stage we built here and then we used it there and so here we're running yarn build as we can just add some comments and then we can do this cool are you familiar with vim uh i am familiar with it i am not proficient in it right once you get used to it man it's life-changing yeah yeah like i i mapped my vs code in a way that i only need to use my keyboard now like i can i can do literally everything it's like after a while it just gets so much easier although i do not recommend doing it during work like when you're not efficient at it because that doesn't go well together i've tried it hmm uh so just adding some comments our base stage essentially uh installs all our dependencies we then have a builder stage and this is the one that will run locally our builder stage and i'll do this our builder stage starts from there builds our static files and then our production stage copies those files into an nginx container
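the three stages just described (base installs dependencies, builder builds the static files, production serves them from nginx) might look roughly like this -- the node and nginx versions, paths, and build output directory are assumptions, not the actual league decks dockerfile:

```shell
# hypothetical sketch of the client Dockerfile's base -> builder ->
# nginx multi-stage flow; image tags and paths are guesses
cat > Dockerfile.sketch <<'EOF'
# base: install dependencies only (used directly for development)
FROM node:14 AS base
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# builder: start from base and build the static files
FROM base AS builder
COPY . .
RUN yarn build

# production: copy only the built files into an nginx image
FROM nginx:1.19
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
EOF

# a single named stage can be built with --target, e.g.
#   docker build --target base -t client-base .
```

the `--target` flag is what lets development and production share one file: development builds stop at the base stage, while a full build produces the nginx image.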
so let's just see if we can build that production container and run it uh make build production all right we've got some comments let's see hey bobby bb uh welcome uh what is make like a build tool yeah so make is a unix tool that's been around forever uh and basically most people use it to like set up the compilation steps required to build like a c program or some compiled program but it's also useful if you're on a unix-based system to be able to just add a bunch of little commands that you can run one off so that's kind of what i use it for web dev junkie is a fan of using the mouse me too but that's just because i'm a noob when it comes to uh vim eventually maybe i'll i'll invest the time maybe i should make a series of spending 30 days learning vim and i make a video about it show how speedy i get i mean i should be honest i i'm not using the the editor like i'm just using the extension because i tried setting up neovim and it just every time i added the plugin it just didn't work although there is a uh a new editor called uh onivim 2 which is like really similar to vs code and uses its ecosystem for all the extensions but its main focus is on vim and right now i think it goes for like forty dollars like one time purchase but you can also uh like the code is open source so you can just build it and use it for free cool yeah so it is now building our site uh building is always fun it seems like it shouldn't take that long to build the site how complicated can it be i wonder how it would be in like 10 years like would building be even worse or would it actually get better yeah i don't know i mean it seems like the software industry does a pretty good job of whenever there's performance gains on the hardware side we just write slower software we make it easier to write software but those sometimes take advantage yeah and utilize those extra compute cycles but it looks like that succeeded so now if we do actually let's just
tag this uh t client production uh league decks client i'll do that again it's already cached so it should just take a second uh except that no our make file got copied into our into our source because it was not in the docker ignore so if i kill that and do what hmm i thought that would allow us to use the cache but instead we can just use this image directly in a new window while it's building again docker run uh what port is nginx run on by default uh i wish i knew i think it's like 8080 uh oh 80 inside the container so i'll map 8080 to that so do dash p 8080 on my local system to 80 inside the container and then that i was actually surprised i was spinning up a uh mongodb uh docker instance and i was stuck for like literally 30 minutes and the only reason i was stuck was because i did the ports mapping the wrong way and it just didn't tell me yeah yeah okay so that actually looks like it worked so we we ran our docker our nginx based production client image as a container we port forwarded from 8080 on my localhost my host system to port 80 inside the container which is what nginx is serving on by default and then i was able to load it up here and it looks good there is still then work to do on configuring the oh like the the proxy stuff so to get that all working uh it's definitely still a to do uh this we actually did this but there's still uh make termination of bootstrap process nicer cool i think that's probably a good place to wrap it up for today but yeah we were able to pull out the the database bootstrapping process from the main api server container um we persisted those data onto our local system with a volume mount in this db data um we still have a little bit of a to do to make that nicer probably do that off off stream just because it it's not that interesting we're probably just going to kill the docker compose process once it completes we added an nginx server and a production build on the client with a multi-stage build so here
we had we now have sort of our base image a builder image that builds the static site and then our actual nginx image we still have some work to do on this to get things configured properly in terms of uh proxying and that sort of thing actually it might be useful donnie if after the stream if you can grab the nginx config from the droplet that you're currently using that would be a good base i think to start from that'll be good and then we had this thing that we found in a stack overflow post and we weren't sure what it did so we we should look it up and then i think the the other big thing is going to be uh creating a process for dumping slash restoring from production well any db uh and so that'll probably be a focus for for next time cool any anything else to add on no not really i mean eventually we still need to add the build step for typescript but yeah yeah that'll just be that's pretty simple yeah it'll be in the docker files um yeah see if we have any additional comments it looks like ali is learning vim right now potentially that's cool nice nice getting uh wrapped around yeah with with tmux so then you have multiple multiple windows all happening at the same time and vim it's a lot but i just haven't had the time to invest in it yet i haven't seen you use shell files a lot yeah so you could do something similar to what i do with makefiles in a shell script you could have individual functions and then configure it such that you could call those specifically or you could source that shell script so that you could call them they would be in your your shell namespace they're slightly less convenient i find for this style where you have a bunch of small commands that you want to run independently of another but yeah sometimes i use them but just it's a little more convenient for me to use a make file and then this project is not a surely it will take some time to build i'm not sure what that is in reference to but um anyways cool well we made some
progress today we got stuck a little bit we we did some debugging uh we didn't quite make it to digitalocean so sorry if people got tricked by the the clickbait thumbnail that had digitalocean in it uh we'll probably just use the same thumbnail next time i suppose um yeah i think i think that should do it for today and thanks donnie for joining and we will see you next time cheers bye
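the shell-script alternative to make discussed near the end of the stream could look like this -- a minimal sketch with hypothetical target names (the real project's commands would call docker compose instead of echoing):

```shell
#!/bin/sh
# run.sh -- hypothetical alternative to the project's Makefile:
# one small function per command, dispatched by the first argument
set -e

build_images() {
  # placeholder for e.g. `docker compose build`
  echo "building images"
}

bootstrap_db() {
  # placeholder for the db bootstrap/migration compose file
  echo "bootstrapping db"
}

run_local() {
  # placeholder for `docker compose up`
  echo "starting local stack"
}

# dispatch: ./run.sh build-images | bootstrap-db | run-local
# (defaults to run-local when no argument is given)
cmd="${1:-run-local}"
case "$cmd" in
  build-images) build_images ;;
  bootstrap-db) bootstrap_db ;;
  run-local)    run_local ;;
  *) echo "usage: $0 {build-images|bootstrap-db|run-local}" >&2; exit 1 ;;
esac
```

sourcing the file instead of executing it would put the functions directly into the shell namespace, which is the trade-off mentioned in the stream: convenient for interactive use, but a makefile keeps the small one-off commands more discoverable.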
Info
Channel: DevOps Directive
Views: 352
Id: G_5Da0WtULI
Length: 108min 15sec (6495 seconds)
Published: Fri May 14 2021