Docker and Nginx Reverse Proxy

Captions
Hey guys, Wes here! I've been getting a lot of questions lately about how to set up a reverse proxy using nginx, so in this video we're going to dive in and use a combination of Docker Compose with nginx to create a reverse proxy for a few back-end services. We'll use Python to write our back-end services, and we'll also serve up a front-end application in Vue. So with that, let's go ahead and get started!

Okay, so we're going to be working within a project directory called "weather_report", and the strategy here is to demonstrate how we can use Docker Compose to orchestrate a number of different services together, including an nginx reverse proxy that will sit in front of a Vue.js application and two other Python services. To do this we're going to work in a file called docker-compose.yml. I've got the project pre-built here, just for ease of explanation and following along; if you'd like to check out the code, I'll post a link to the GitHub repository in the description of this video.

To give you an idea of how this project is laid out: we've got a docker-compose file at the root of our project directory, and then four separate subdirectories, a precip service directory, a reverse proxy directory, a temperature service directory, and a weather report directory. Each of these subdirectories will house the logic for one of the four services that we need to orchestrate.

So let's begin by taking a look at our docker-compose.yml file. We're going to be using syntax version 3 of the docker-compose file format, and then we're going to define our services. We'll have four services: a reverse proxy, a weather report, a temperature service, and a precip service.
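Assembled from the walkthrough, a minimal docker-compose.yml along these lines would wire the four services together. Treat the exact image tags, service names, and port numbers as a sketch consistent with the narration, not the repository's exact file:

```yaml
version: "3"
services:
  reverse_proxy:
    image: nginx:1.7.10
    container_name: reverse_proxy_demo
    depends_on:
      - weather_report
      - temperature_service
      - precip_service
    volumes:
      # Local nginx.conf mapped over the container's config
      - ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
  weather_report:
    image: weather_report
    container_name: weather_report
    build: ./weather_report
    depends_on:
      - temperature_service
      - precip_service
    ports:
      - "8080:80"
    restart: on-failure
  temperature_service:
    image: temperature_service
    build: ./temperature_service
    ports:
      - "5001:5001"
    restart: on-failure
  precip_service:
    image: precip_service
    build: ./precip_service
    ports:
      - "5002:5002"
    restart: on-failure
```

Note that the service names double as hostnames on the default Compose network, which is what the reverse proxy configuration relies on.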
So let's take a look at these one by one. Our reverse proxy will be using an nginx image, specifically version 1.7.10, and the container name here is reverse-proxy-demo. It depends on the other services orchestrated by Compose, so it depends on weather report, temperature service, and precip service. Then we're using a simple volume mapping between nginx.conf in our reverse proxy subdirectory and /etc/nginx/nginx.conf in the container. Volumes in Docker are essentially a way to let data persist across the lifetime of a container. Containers are ephemeral, so if I spin up a container, have that container read some configuration values, and maybe create some files, I'd like those files to persist after the container itself is destroyed, whether it crashes or is brought down during some scaling event. Here we just map this local file, a simple nginx.conf, to the location where it will be read from inside the container itself.

Then we're going to map port 80 locally to port 80 on the container, because our nginx server will be listening for requests on port 80, and in order for us to hit port 80 in that container we map it to port 80 on our own localhost.

Now we'll take a look at our two Python services, or actually, first we'll look at our Vue app. This will be the service we use to generate weather reports.
So it's called weather report: it's an image named weather-report with a container named weather-report, and we're going to give this one a build context. We're actually going to create our own Dockerfile for our Vue application, and we're telling Compose to find it in the weather report subdirectory. This service depends on our two Python services, and we're going to map port 8080 locally to port 80 on the container. The Vue app itself will be built, and then its static files will get served using another nginx instance directly in its own container; we'll take a look at that in a moment.

Then we have our two simple Python services. These will be running Flask APIs with just some bare-bones demonstration endpoints: one service will be responsible for retrieving the temperature and the other the precipitation. We name these images appropriately and give each its own build context, with its own Dockerfile, in the subdirectory we have for it. The temperature service will be running on port 5001 and accessible on port 5001. The container port is just an arbitrary number; we could of course run both of these APIs on, say, port 5000 inside their containers, and it wouldn't matter, because they're actually mapped to 5001 and 5002, which are different ports locally. But just for consistency, we're going to run the temperature API on port 5001 in its own container and the precipitation service API on 5002, and then we'll set restart on failure for each of the three services that comprise our application.

Okay, so that was kind of a lot at once if you're unfamiliar with Docker Compose, but this is actually a pretty simple docker-compose file. If this were for a production application, there would probably be a bit more that would go into it: we would want to do things like set environment variables, which we can do here, or we can read from
environment files, rather, so that wherever we deploy this we could just read from some environment variable file to specify the things each of the apps needs in order to run.

Okay, so now let's head into our reverse proxy subdirectory, where we just need a single file. This is the file used in the volume mapping we set up for our reverse proxy service, which is just running that pre-built, or predefined I should say, nginx image for us. So let's take a look at the nginx.conf for our reverse proxy.

We have a bare-bones nginx.conf here; this is definitely not production ready. For a production-level application there would typically be much more going on, especially in terms of things like SSL termination, performance considerations, user permissions, and anything else that's necessary to configure a production-grade server, which nginx is definitely capable of being. This demonstration is really just to show what's needed to set up a bare-bones reverse proxy, to get a sense of how this works.

So we start with a user directive, setting the user that the nginx worker processes run as. We're setting worker_processes to auto, which is a decent starting point; this would most likely also be tuned, depending on things like the number of cores available on the machine and other system resources. We're using the pid directive to define the file that stores the process ID of the main process, and we're including /etc/nginx/modules-enabled/*.conf, so in a sense we're including anything that matches this wildcard .conf path. In the events context, we're setting worker_connections to 1024. Something to keep in mind is that this number refers not only to the number of client connections to our server, but really to all of the connections our server is managing. And now we get to our http context.
This is where we actually set up our reverse proxy. We have a single server block, and we're going to listen on port 80, with server_name localhost 127.0.0.1. So when we visit localhost on port 80 in our browser, Docker Compose maps that to port 80 in the container, nginx is listening on that port, and it's going to serve us one of three things. Again, this is not configured completely, but basically we're just defining three separate routes here that nginx will proxy our requests to.

If we hit location /, it's going to pass our request to weather_report:80. So what is happening here? This looks kind of interesting: we have this sort of hostname, weather_report. How is this working? Well, this works because of the Docker Compose network. We've defined a set of services in our docker-compose file, and by default, Docker Compose networks each of these services together. In doing so, it allows any of the services within that network to be reached by service name, so our reverse proxy service can reach weather report, temperature service, and precip service using their service names.

If we take a look, we have weather report on port 80, the temperature service, which we access at port 5001, and the precip service at port 5002. So in our browser, when we hit localhost, we don't even need to specify port 80, because it's implicitly the HTTP port. If we hit localhost/, nginx is going to proxy our request to our weather report service on port 80.
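Putting those directives together, the reverse proxy's nginx.conf might look roughly like this sketch. The user value is an assumption (the audio is unclear at that point), and this is deliberately bare-bones rather than production config:

```nginx
user  www-data;  # assumed; the narration's user value is garbled
worker_processes  auto;
pid  /run/nginx.pid;
include  /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections  1024;
}

http {
    server {
        listen       80;
        server_name  localhost 127.0.0.1;

        location / {
            # Compose's default network lets us address services by name
            proxy_pass http://weather_report:80;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /temperature {
            proxy_pass http://temperature_service:5001;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /precipitation {
            proxy_pass http://precip_service:5002;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
```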
So come take a look here: we're going to expose port 80 on the weather report service, and it's going to serve up our Vue application. We'll have another nginx.conf for serving up our static web app. We can set a proxy_set_header here, X-Forwarded-For, from the remote address, and we'll do that for each of these various locations. But note that the pattern is the same: if we visit localhost at port 80 and then hit /temperature as an endpoint, that request gets routed to our temperature service on port 5001, which will be our Flask application returning results for temperature. Likewise, if we hit /precipitation, we route that to our precipitation service.

Okay, so that is really as simple as it gets in terms of a bare-bones reverse proxy. Again, not production-ready, but it will suffice for the purposes of this demo.

So now let's go into our temperature service, where we need to do a number of different things. It's worth keeping in mind how we set up our docker-compose here: for our temperature service, we're expecting a build context of this temperature service directory to exist, and for this we need to create a Dockerfile. So let's take a look at our temperature service Dockerfile.

Here we have essentially a set of instructions for Docker to run when it builds our container. We're going to use a Python 3.7 Alpine base image; if you're unfamiliar with Alpine, it's just a really lightweight Linux distribution that's typically used for small containerized applications and services. We're going to set a working directory of /app, so all of the subsequent commands containing relative paths will be relative to our app working directory. We're going to copy a requirements.txt file into our working directory and then run pip install -r requirements.txt.
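A sketch of that Dockerfile, including the EXPOSE and CMD steps still to come in the walkthrough; the /app path and the temperature_server.py filename are assumptions based on the narration:

```dockerfile
# Lightweight Python base image on Alpine Linux
FROM python:3.7-alpine

# All subsequent relative paths resolve against /app
WORKDIR /app

# Copy requirements first so the pip install layer is cached
# independently of application-code changes
COPY requirements.txt .
RUN pip install -r requirements.txt

# Now copy the remaining source files
COPY . .

EXPOSE 5001

CMD ["python", "temperature_server.py"]
```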
And so all this is doing is installing Flask; our requirements.txt file has flask==1.1.2. Then we have a subsequent COPY command to copy all the remaining files into the container. The reason we have two COPY commands is that, as Docker reads through the Dockerfile, it builds the image in a sort of layered approach, with cached image layers. This way, if we've already done something that takes some additional time, we don't need to do it over again in the future: if nothing changed, Docker can recognize that it already has an image cached at that point and can just reuse it. So we typically break this up line by line according to how we want to cache the various actions performed when building the image. Then we're going to expose port 5001, and finally run python temperature_server.py, which runs our temperature server app.

And here we have a bare-bones Flask application. We set the variable app to a new instance of Flask, and then at the route / we simply have a sort of mock temperature reading: we set temperature_c to some random integer between minus 10 and 33, and then we return, as a Python dictionary, temperature_c as a key with the value we just generated. Then, finally, when we run this particular module as an app, we run it on port 5001. Note that this port 5001 of course has to correspond with the port we're telling Docker to expose, and, to bring it full circle, in our docker-compose file we're mapping port 5001 locally to port 5001 on that container.
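A minimal sketch of that temperature server, assuming the module is named temperature_server.py. The narration mentions running on the loopback address, but binding 0.0.0.0 is assumed here, since that is what makes the service reachable from the other containers:

```python
import random

from flask import Flask, jsonify  # flask==1.1.2 per requirements.txt

app = Flask(__name__)


@app.route("/")
def temperature():
    # Mock reading: a random integer between -10 and 33 degrees Celsius
    return jsonify({"temperature_c": random.randint(-10, 33)})


if __name__ == "__main__":
    # Port must match the Dockerfile's EXPOSE and the Compose port mapping.
    # 0.0.0.0 (rather than loopback) is assumed so other containers can reach it.
    app.run(host="0.0.0.0", port=5001)
```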
Now, depending on how we choose to architect our application, it's not really even necessary to map a local port here: if we don't need to hit this API from our local machine, it only needs to communicate with the other services it's networked with, so we wouldn't need the mapping at all. I'm just doing it here for convenience and things like debugging.

Okay, so that is our temperature service. Let's look at our precip service, which is going to look extremely similar. We have an almost identical Dockerfile, with the exception that we're exposing 5002 and running the precip server file; the requirements are the same. The precip server itself is actually a bit different, though. We're still creating a simple Flask application with a single endpoint, and it's going to invoke this get_precipitation method. We've got the temperature_c set to the integer conversion of the temp query string parameter. So the way this works is that when we invoke the / endpoint in the browser, we can pass something like temp=20, or temp equal to some other number, as a query parameter, and we convert that to an integer and let our function use that temperature in its subsequent business logic.

This is pretty contrived, obviously, but I thought it would be more interesting than just putting out some static data. There's no query string validation here, or request validation whatsoever, so of course it's fragile; again, it's just for demonstration purposes. We say that if the temperature is not provided, then we just set the temperature value to 23. The precipitation service is only going to use the temperature to decide whether the precipitation type is snow, storms, or rain.
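The endpoint's decision logic can be sketched as a pure function, separate from the Flask wiring. The video doesn't state the exact temperature cutoffs for snow versus storms, so the thresholds below are illustrative assumptions:

```python
import random


def get_precipitation(temperature_c=23):
    """Mock precipitation forecast for a given temperature in Celsius.

    The default of 23 matches the fallback used when no ?temp= query
    parameter is supplied; the cutoffs below are assumed for illustration.
    """
    if temperature_c <= 0:       # assumed snow cutoff
        precip_type = "snow"
    elif temperature_c >= 28:    # assumed storms cutoff
        precip_type = "storms"
    else:
        precip_type = "rain"
    # Percent chance: a random value between 0 and 1, times 100
    precip_chance = round(random.random() * 100)
    return {"precip_type": precip_type, "precip_chance": precip_chance}
```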
So we create some boolean values to check on those, and then we create a percent chance, just using a random value between 0 and 1 times 100, and we return that data as a Python dictionary, which gets converted to JSON in the response. And that is essentially the complete business logic of our demo precipitation service. Again, we run this one on port 5002.

Okay, so we have our two Python Flask applications; now let's take a look at our weather report application. This is a Vue app, and I'll show you how I created it very quickly using the Vue CLI: I just ran the command vue create weather-report. Again, this uses the Vue CLI, which you can find and download if you don't have it. It makes creating a Vue app much, much easier than building one from scratch, by generating and scaffolding out the typical structure of a Vue application for you.

Okay, so I've modified that slightly. If you create the default template Vue CLI application, you'll find a HelloWorld component in the source directory, and I've modified it. The template, the top third if you will of this HelloWorld.vue file, is our template, and I'm just putting in a simple heading, Weather Report, and then we do some interpolation with string templates, saying the temperature will be {{ temperature }} and the chance of {{ precipType }} is {{ precipChance }}.

Then, if we come down to our script area, I'm using axios, probably overkill for the purposes of this demo, but axios is a popular library used for making HTTP requests. If it isn't installed for your app already, here we can just run npm install axios and make sure it gets installed for our Vue app. So with this imported, the only other changes that I've made are to the data, for which
I'm just setting some initial values for temperature, precip type, and precip chance. Then I'm using the mounted lifecycle hook, marked async only so that I can use await without promise chaining. Essentially, what we're going to do is hit localhost/temperature, and from that result we pull off the temperature_c value. If we head back to our temperature service, just to refresh on how that's working: it returns an object with temperature_c as a key and the calculated temperature as its value. We'll otherwise just console.error on error, but what we're doing is setting this temperature data property to the value we get back from our service.

Then, likewise, we have another await axios.get, and we're now going to invoke localhost/precipitation with a temp query string parameter, passing it this.temperature, which we just got from our temperature API. This is kind of weird and a little bit sloppy, but I'll talk about that in a moment. Anyway, when we get the result, we set the precipitation chance and precip type from the data that comes back from our precipitation service. And for a refresher there, we can see that we're returning that Python dictionary, which gets converted to JSON, with just those two keys in our object, precip chance and type.

Okay, so let's just save all of this. Now, why did I say that this is kind of a weird setup?
Well, it is a silly example, but one thing I wanted to demonstrate is having a single front-end app interact with two endpoints on the same host, each of which points to a completely separate container running a completely different application. Typically, in order to reduce surface area and only make one HTTP request, we would probably have a service, maybe just baked into the temperature service, or another service entirely, that assembles this data. We might do something like a single get-report endpoint, and have that one endpoint communicate with any other back-end services it needs, assembling the data required to provide the front-end client with what it needs. But again, we're making two HTTP requests here just for demonstration purposes.

Alright, I didn't really change anything else here, and in fact you could run the Vue app right now just by using the Vue CLI, or just yarn run serve. But now let's take a look at how we can serve this Vue app more properly, if you will: we want to dockerize it, and we want to serve the Vue app's static files using nginx itself.
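Before dockerizing, here is a rough sketch of the modified HelloWorld.vue as described; the data property names and the response keys are assumptions consistent with the narration:

```vue
<template>
  <div>
    <h1>Weather Report</h1>
    <p>The temperature will be {{ temperature }} and the chance of {{ precipType }} is {{ precipChance }}</p>
  </div>
</template>

<script>
import axios from "axios"; // npm install axios

export default {
  name: "HelloWorld",
  data() {
    return { temperature: null, precipType: "", precipChance: null };
  },
  // async so we can await without promise chaining
  async mounted() {
    try {
      const tempRes = await axios.get("http://localhost/temperature");
      this.temperature = tempRes.data.temperature_c;

      const precipRes = await axios.get(
        `http://localhost/precipitation?temp=${this.temperature}`
      );
      this.precipType = precipRes.data.precip_type;
      this.precipChance = precipRes.data.precip_chance;
    } catch (err) {
      console.error(err);
    }
  },
};
</script>
```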
And so for this, first of all, we have a Dockerfile. What I've done here is really just take the standard suggestion from the Vue documentation about how you might use a multi-stage build for containerizing your Vue application. We can see that we're using a multi-stage build with node as our base image. We're calling this the build stage: setting our working directory, copying package.json and package-lock.json into the working directory, running npm install, then copying all of the files directly into our container and running npm run build. Here we're essentially just building out that dist directory that we get when we build our Vue app, the dist directory containing the static files we need to serve to the client.

And then we have our production stage. Here we're building on top of the nginx base image; this is actually the same base image, except possibly for the version, that we're using for our reverse proxy. But this nginx image is going to be used to serve up our app directory. So we're making an app directory for the container, and then, from the build stage that was just completed, we copy the static files from app/dist into our app directory, and then we copy an nginx.conf from here into the /etc/nginx/nginx.conf location on the nginx base image. This means we have another local nginx.conf here, for serving our Vue application. So let's take a look at that one. There's a little bit more going on here, but again, you can find good examples of how to set something like this up.
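A rough sketch of that static-file nginx.conf, assembled from the description that follows the pattern in the Vue deployment docs; the user value is an assumption, since the audio is unclear there:

```nginx
user  nginx;  # assumed user value
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    # Unknown MIME types are offered as downloads rather than rendered
    default_type  application/octet-stream;

    sendfile           on;
    keepalive_timeout  65;

    server {
        listen 80;
        # The static files copied in from the build stage
        root   /app;

        location / {
            index index.html;
            # Fall back to index.html; needed for history-mode routing
            try_files $uri $uri/ /index.html;
        }
    }
}
```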
I'm just looking through the Vue documentation; it's excellent. Here we're setting the nginx user, with one worker process. We have the error log set to the warn level, and the process ID file, which is the same as before. Then in the events context we're setting 1024 worker connections again, and in our http context we're including the nginx mime.types and setting the default type to application/octet-stream. What this does is, when nginx is serving static files and gets something for which it doesn't know the MIME type, it asks the browser to download the file, as opposed to trying to render or display it in the browser window. We've got some log formatting, an access log, sendfile on, and a keepalive timeout of 65.

And then the really interesting part of this nginx.conf serving our Vue app is the server block, which listens on port 80, the root of which is our /app directory, where we copied all the static files to serve. At location / we serve our index.html file. This try_files with $uri, $uri/, /index.html would be necessary if we were doing history-mode routing; we don't actually have any routes in our application except for the base route, so it's not really needed here. Then we have a custom error page and that sort of thing, but really we're just serving up this built Vue app on port 80 from this container.

Okay, so with that, I think I've worked through all of the basic parts of the application. Now we can demo it using some simple Docker Compose commands. If I run docker-compose build, this is going to build our application; it may take some time on your computer, especially if you don't have the base images locally. And now I'm going to run docker-compose up.
Okay, so we can see some logs already from our precip service and our temperature service. The precip service is running on 5002 and temperature on 5001, and Flask is giving us lots of warnings that we're using a development server, which is true. Then we have our weather report service showing us some logs as well, access logs actually, because I have a window open. So let's go take a look at that window.

In fact, if we just visit localhost, we should see our application, because again this is implicitly localhost on port 80. If we refresh the page a few times, we can see that we're getting a new random value out for our temperature. But it looks like we might have a bug in our precipitation service that we can fix here, because I'm only seeing the value 80 come out. Let's go ahead and inspect, and yeah, we're getting a 500 error at /precipitation. It's kind of cool to be able to demonstrate this, because we can do a real bug fix. Let's see what happens if I just visit /precipitation: we get our internal server error. If I visit /temperature, we get a valid temperature out. So let's see what's happening with our precip service. If we take a look at the logs from Docker Compose, I'm seeing that the name request is not defined. So let's fix that bug really quickly: if we go into the precip server, it's upset that we're using request without importing it from Flask, so we're just missing an import.

And so with that, let's spin our containers down and then spin them back up again. We can Ctrl-C here; we could also run these as a background, or daemon, process if we needed to, but it's nice to have the logs when we're debugging. Now we will simply run docker-compose build, and I'll run both commands here, followed by docker-compose up.
Okay, so we are running; let's go give our application another shot. Now we're back at our application, we're not seeing any errors in the console, and if we refresh a few times, note that when a certain temperature threshold is hit, we finally get storms as the precipitation type, in a kind of funny format: a bare number, like 3 instead of 3%, so "the chance of storms is 80". That's kind of funny; we would clean this up, putting a % sign there for instance. Let's see if we can get some snow. Yeah, there we go: the temperature will be minus 1 and the chance of snow is 12. Don't you love it when the chance of snow is 12?

Okay, so that's a basic reverse proxy for routing traffic to a couple of different back-end services, utilizing Docker Compose for some simple orchestration. In production we would probably have a much more complex nginx.conf setup and a little more work put into things like orchestration and security. In this video I just wanted to demonstrate how the pieces are wired up together and to present a somewhat realistic example, at least in terms of how we can route requests between servers sitting behind nginx as a reverse proxy. If you enjoyed this video, I would really appreciate it if you liked and subscribed, and as always, thank you for watching!
Info
Channel: Wes Doyle
Views: 66,463
Keywords: nginx, reverse proxy, docker, flask, vue.js, nginx configuration, what is nginx, nginx tutorial
Id: hxngRDmHTM0
Length: 31min 32sec (1892 seconds)
Published: Mon May 11 2020