Microservice Architecture and System Design with Python & Kubernetes – Full Course

Captions
in this course you will learn about microservice architecture and distributed systems using a Hands-On approach a microservices architecture is a type of application architecture where the application is developed as a collection of services Georgio from Kantan Coding teaches this course he does a great job teaching how to combine a bunch of different Technologies into a single application hey what's up everybody and welcome to this video on microservice architectures where we will be applying this architecture to an application that will convert video files to MP3 files in this Hands-On tutorial we'll be making use of python rabbitmq mongodb Docker kubernetes and MySQL to build this microservice architecture which is admittedly a lot but don't worry I'll walk you through every step so let's go over what this application is going to look like from a top-down perspective so when a user uploads a video to be converted to MP3 that request will first hit our Gateway our Gateway will then store the video in mongodb and then put a message on this queue here which is our RabbitMQ queue letting Downstream Services know that there is a video to be processed in mongodb the video to MP3 converter service will consume messages from the queue it will then get the ID of the video from the message pull that video from mongodb convert the video to MP3 then store the MP3 on mongodb then put a new message on the Queue to be consumed by the notification service that says that the conversion job is done the notification service consumes those messages from the queue and sends an email notification to the client informing the client that the MP3 for the video that he or she uploaded is ready for download the client will then use a unique ID acquired from the notification plus his or her JWT to make a request to the API Gateway to download the MP3 and the API Gateway will pull the MP3 from mongodb and serve it to the client and that is the overall conversion flow and how RabbitMQ is integrated with the overall system okay so the first thing that we're going to need to do before we get started with writing any code or setting up any of our services is we're going to need to install a few things now the Links for all of these pages are going to be in the description and it's critical that we correctly install all of these prerequisites prior to moving forward with the course so please take the time to make sure that you've correctly installed all of these things so to start we're going to install Docker and I'm using Mac so everything that I do like everything that I install is going to be for Mac so if you have Windows or a Linux system you're going to need to do some additional research to figure out how to install on those platforms because I'm not going to spend the time diving too deep into that but if you're able to successfully install everything on this list then that's probably one of the most difficult parts of this course and the rest of it should be pretty smooth sailing so to start we're going to install Docker so I would hit this install Docker desktop and I'm using an M1 MacBook Pro so I would install with apple chip if you're using an Intel MacBook Pro or MacBook then you would use the Intel chip installation and of course you're just going to click the link and install it that way and following the successful installation of Docker you should be able to do Docker version and get a Docker version here now once you've gotten Docker installed we'll move on to installing the kubernetes
manline tool and as you can see the kubernetes command line tool allows you to run commands against kubernetes clusters and we're going to be deploying our services within a kubernetes cluster so that's why we need this command line tool so again I'm going to choose to install for Mac and there are a couple of options to install for Mac you can use Homebrew or you can install the binary using curl I believe I installed the binary using curl and I selected the Apple chip and I just ran this command and I didn't validate the binary but you do have to make the binary executable by running this command and of course you're going to need to move the binary to a location that's within your path now I'm not going to get into details on how to do this this documentation is pretty detailed and installing binaries and adding them to your path isn't within the scope of this video so if that's a little bit too advanced for you you can just use the Homebrew installation which pretty much automates all of this for you and once you've finished with the installation you should be able to type Cube CTL or cube cuddle or whatever you like to call it and if you type that you should get the output that provides information about this command line utility now following the installation of cubecto you can move on to installing minicube and minicube is a local kubernetes focusing on making it easy to learn and develop for kubernetes so basically this is going to allow us to have a kubernetes cluster on our local machine so this way we can make a microservice architecture on our local machine without having to actually have a kubernetes cluster deployed to like a production environment or something like that so this goes on to explain the requirements to use minicube and the installation instructions are here and you're just going to select for your operating system so I would select Mac OS and I would select arm 64 as the architecture and of course we'll select stable and I used the binary download for this one as well but again if you're not familiar with how to install binaries and configure your path just go with the Homebrew installation and then once it's installed we should be able to start a cluster from our local machine and your Cube CTL will automatically be configured to work with mini cubes so you don't need to do anything there so once mini cube is installed you should be able to do mini Cube start and everything should start up for you and there's actually another thing that I forgot to add to the list of what we need to install so we're going to install this K9s and we're just going to use this to sort of help manage our kubernetes cluster so the installation instructions for this are just here in the documentation you can just do Brew install canines of course if you're on Mac but of course they have the installation instructions for other operating systems as well so just figure out which one of these is best for you and once you've finished installing canines you should be able to type canines in the command line interface and it should pull up our cluster which is just our mini Cube cluster which you can see here and you can quit by just using Ctrl C and let's go ahead and clear this now the next thing that we're going to need to install is python3 more specifically because we're going to use Python 3 to create our first service a very simple authentication Service and all of these services are going to be relatively simple because the focus of this tutorial isn't necessarily the services but 
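For reference, a condensed sketch of the Homebrew route on macOS for these three tools (the curl binary installs described above work too; the formula names below are the standard ones and worth double-checking against the linked docs):

```sh
# kubectl — the Kubernetes command line tool
brew install kubectl
kubectl version --client

# minikube — a local Kubernetes cluster
brew install minikube
minikube start

# k9s — a terminal UI for managing the cluster
brew install k9s
k9s   # quit with Ctrl+C
```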
how all of the services intercommunicate and how a distributed system is integrated as a whole so we're not going to spend too much time on creating large services for this microservice architecture anyways to download python if you don't already have it you're just going to select for Mac of course you're just going to select this button here and just follow the installation instructions from there and lastly we're going to need to install MySQL because this is going to be the database that we use for our auth service so I installed MySQL using Homebrew so you can just do Brew install MySQL so let's just go ahead and copy this and we can just paste it now as you can see here it says we've installed your mySQL database without a root password and to secure it run mysql's secure installation but we're not going to do any of this because again that's not the focus of this tutorial so yeah in a production environment you're going to want to make sure you're securing your MySQL installation and you're going to want to follow all of the best security practices when working with and installing a database server but in our case we're just going to leave the installation the way it is which again isn't something that you're going to want to do in a production environment this is just so that we can get to the actual meat and potatoes of the tutorial so we can access our mySQL database by just running MySQL with the user root and we don't even need a password so we should just be able to do MySQL U root and this will give us access to the database and I think that's it for the things that we need to install for now we're probably going to need to install some more things later but for our first service I think that that's all that we need so as mentioned before we're going to start with our auth service that's going to be this first service that we create and deploy to our cluster on our local environment and just quick note everything's going to be deployed on our local environment in our mini Cube cluster we're not going to deploy anything to a server I might create another tutorial on how to actually deploy this to a server or to a production like environment but for now we're only focused on the actual architecture so everything's going to be done on our local system within this mini Cube cluster so to start we're going to want to create a directory so we'll make dur and we'll call this dir maybe something like system design and we'll just go ahead and change directory into this system design directory and we're going to be writing code for multiple services and actually all of the services are going to be written in Python and if you aren't familiar with python don't worry I explained enough when writing the code where it shouldn't really matter which language you're most comfortable with and since we're focused on the architecture as a whole we're not necessarily going to be writing any complicated code anyway so anyways we're going to make a directory for Python and this is going to contain our python services and we'll put a source directory but we don't need a bin directory so we can just do let's just make a python directory change directory to python make directory source So within this system design python Source directory we're going to create the directory for our auth service so we'll make their auth and this author is going to contain our auth service code so let's CD auth and from here to start we're going to want to create a virtual environment and we're going to write all of the 
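A rough sketch of this setup step, assuming Homebrew on macOS and the built-in venv module for the virtual environment (the course doesn't name a specific virtualenv tool, and you may or may not need to start the MySQL service yourself):

```sh
brew install mysql
# start the server if `mysql -uroot` can't connect
brew services start mysql
mysql -uroot

# project layout for the auth service
mkdir system_design && cd system_design
mkdir -p python/src/auth && cd python/src/auth

# create and activate a virtual environment (venv is assumed here)
python3 -m venv venv
source ./venv/bin/activate
```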
code for the service in one file that we're going to call server.pi and the reason we're doing this in one file is because it's going to be less than 700 lines of code and like I said before the service is going to be relatively small it's just going to be a very simple auth service and actually I forgot to activate our virtual environment so let's go ahead and do that and we should see that our virtual environment is running if we run this command here and I'm going to need to install a couple of things for my Vim configuration I'll go ahead and install pylent and pip install Jedi let's just go ahead and run this command that they're suggesting to upgrade pip and let's open server.pi and the start of our file is just going to be to import JWT which is Json web token and we're going to get into why we're importing that soon we're going to import date time and Os and we're also going to import from flask import flask and request and we're also going to import from flask mysqldb Imports my SQL so basically this is going to allow us to query our mySQL database this is going to be our actual server we're going to be using flask to create our server and this is going to be what we're going to use for our actual auth we're going to use Json web tokens and date time is so that we can set an expiration date on our token and Os is just going to be used so that we can use environment variables to configure our MySQL connection and you will see what I mean by that soon so let's just go ahead and save this and we're going to need to install a couple of things so let's just cat server.pi so we can see what we need to install and then we can just do pip install I believe JWT is like Pi JWT and we're going to need to pip install flask and pip install flask MySQL DB and we can open this file back up again and to start we're just going to create a server which is just going to be a flask object so we're going to instantiate this flask object and we're going to create a MySQL object which is going to be an instance of this MySQL which we passed the server now for the purposes of this video we don't necessarily need to understand the magic Behind These two lines of code here if we go ahead and save this and we go to the definition here we can get a general understanding of what this flask object is doing but the main thing that we need to be concerned with is once it is created it will act as a central registry for the view functions the URL rules template configuration and much more so basically this is just going to configure our server so that requests to specific routes can interface with our code and this MySQL object is basically just going to make it so that our application can connect to our mySQL database and basically query the database so following this we're going to want to set up our config so our server object has a config attribute which is essentially a dictionary which we can use to store configuration variables so for example we can set the configuration for our MySQL host and we can set it equal to OS dot environ.get which is just going to get the MySQL host from our environment so what do I mean by that so if we save this and we do export MySQL host equals localhost and we go into server.pi this code here this OS dot environ.get MySQL host is going to resolve to localhost that we set in our environment within our shell so if we were to go and print this server.config MySQL host and if we were to just Python 3 server.pi you'd see that it prints out the Local Host that we set in our 
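Putting the pieces described so far together, the start of server.py looks roughly like this sketch (the dependencies are installed with pip install pyjwt flask flask-mysqldb, and the exact environment variable name MYSQL_HOST is an assumption based on the narration):

```python
import jwt, datetime, os

from flask import Flask, request
from flask_mysqldb import MySQL

server = Flask(__name__)
mysql = MySQL(server)

# config — values come from environment variables,
# e.g. `export MYSQL_HOST=localhost` in your shell
server.config["MYSQL_HOST"] = os.environ.get("MYSQL_HOST")
```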
environment variable So This Server is our application and server.config is just the configuration for our server or our application so we're going to create a couple of these all with different variables of course so this one's going to be MySQL user and the same here for the environment and this one will be MySQL password and this one would be MySQL DB and MySQL port and we don't need that one so this is going to be the configuration for our application and these are going to be the variables that we use to connect to our mySQL database and the next thing that we want to do is create our first route which is going to have the path login and the methods for login are just going to be post and this route is going to route to this function login and we can just write the code for the login function we're going to set off a variable called auth equal to request dot authorization and this request is the request that we're importing here and with this authorization attribute provides is the credentials from a basic authorization header so when we send a request to this login route we're going to need to provide a basic authorization header which will contain essentially a username and a password and this request object has an attribute that gives us access to that so once we instantiate this object we'd be able to do auth.username to get the username from the basic authorization header and auth.password to get the password from the basic authorization header and you'll see what I mean when we send the actual request but if we don't provide that header within the request then this auth is going to be none so that means that the request is going to be invalid so we're going to do if not off so if the header doesn't exist within the request we're going to return missing credentials and we're going to turn a status 401 which is just the standard status code that we would return in this case and the next thing that we're going to do is we're just going to check DB for username and password so the way that this login route is going to work is basically we're going to check a database so this off service is going to have its own mySQL database and we're going to check a user table within that database which we're going to create and the user table should contain a username and password for the users that are trying to log in or trying to access the API so actually before we do this part we want to go ahead and create a database and a user table and a user that we can use to access the API so we'll go ahead and save this and let's just clear and we just want to go ahead and create a file called init.sql So within this file we're basically going to create a user for our auth service and we're going to give that user a username and a password and then we're going to create a database a mySQL database called auth which is going to be the database for our auth service and we're going to Grant the user that we create privileges to the database and we're going to create a table within that database called user not to be mistaken with the user that we're creating within the mySQL database and that user table is going to be what we use to store users that we want to give access to our API and I know I'm using the word user a lot and the users that I'm mentioning here aren't interchangeable so let's just go ahead and get into it so I can show you what I mean so first we want to create user now this user is the user for our actual database so this isn't the user that we're trying to give access to our API and 
we're just going to call it auth user because it's the user for the auth service to access our database and we're going to say identified by and we're going to give it a simple password auth123 so this here is creating the user to access the mySQL database so this isn't the user to access the API this is a SQL script so we're basically just writing out some SQL queries and statements in this script to build our database essentially so then we want to create database and we're going to call the database off and we want to give this user up here access to the auth database and all of its tables so we're going to do Grant all privileges on off and all tables to auth user which is the user that we just created at localhost and we want to use the auth database when we create the table and then we'll create a table called user and the primary key is just going to be ID int not null Auto increment primary key and actually there's a typo here Auto increment and the next column is going to just be email and it's going to be varchar and we'll just put the maximum 255 not null and lastly we're going to have a password which is again a parchar and we'll just allow it to be long and after we create the table we just want to insert the user that we're going to use to test our application so we're going to say I'm going to give the user an email and a password and the values are going to be I'll just use my name you can use yours if you want or whatever admin123 for the password and can't forget this in my colon so so this is going to create our initial user or the user that will represent our auth service so basically we're going to use this credential to access the database via our auth service and then we're going to create the database and then we're going to Grant permissions for our auth service to make changes to this auth service database and we're going to go ahead and create the table here as well as our initial user and this is going to be the user that goes into the database which will have access to our off service API so we can just go ahead and save that and now we can just go ahead and run this script to create our initial database so just before we do that we'll just go ahead and go into our database and we can do show databases here and you'll see that we don't have the auth database yet so we can go ahead and exit that and clear and now we're going to do the same thing that we would do to log in but this time we're just going to pass in our init.sql file and it appears we have an error in our syntax near Auto increment primary key so let's go ahead and go back in here and I spelled increment wrong should be increment and let's just try that again and actually it's failing now because just now when we try to run the script the first time it already created the user but then the script failed so we already have this user so it's trying to create the same user again so let's just go ahead and delete their user drop database auth as well and drop user so now we dropped the database and we dropped the user that we created in that script because we want the whole script to run not just part of it so we can go ahead and clear this and let's just run it one more time so on SQL you root innate.sql and the script ran successfully so let's go in here and show databases and you can see now that we have this auth database and if we use auth and we show tables you can see that we have this user table and we can also describe user and it shows all of the fields for the user table which include the 
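Assembled from the narration, init.sql looks roughly like this (auth_user with an underscore is assumed since MySQL user names can't contain spaces, the email is a placeholder for your own, and a UNIQUE constraint on email gets added a little later in the course):

```sql
-- create the MySQL user that the auth service itself will connect as
CREATE USER 'auth_user'@'localhost' IDENTIFIED BY 'auth123';

-- create the auth database and grant the service user access to it
CREATE DATABASE auth;
GRANT ALL PRIVILEGES ON auth.* TO 'auth_user'@'localhost';

USE auth;

-- the table of API users that will be allowed to log in
CREATE TABLE user (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    email VARCHAR(255) NOT NULL,
    password VARCHAR(255) NOT NULL
);

-- the initial API user (placeholder credentials)
INSERT INTO user (email, password) VALUES ('your_email@example.com', 'admin123');
```

It is run the same way as logging in, just with the script passed in: mysql -uroot < init.sql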
primary key which is the ID the email and the password and we can even select all from user oops select all from user and you see that we have the one user that we created in the script I named it my name for the email and admin123 for the password you can do whatever you want for this part as long as you make sure to use those credentials when you actually try to make requests to our API so we can go ahead and exit that and now we can go back into writing our code for our server.pi file so here we're going to check the DB for the username and the password that we pass in our basic authorization header in our request to this login endpoint and we're going to do that by making use of this flask MySQL DB here so this is basically just going to allow us to interface with our mySQL database and that's why we added this application config here which has all of the configuration variables to connect to the database that we just created and we're going to need to set environment variables for our host and our user and our password in our database which are all things that we just created and our host is going to be localhost and the port is going to be the default port for MySQL so we'll go here and we're going to create a cursor by doing mysql.connection.cursor and we're going to use that cursor to execute queries so we're going to say the result of the query is going to be equal to cursor.execute and within this execute method we're going to put in our query so we can just do it this way so we'll do select email we want to select the email and the password from our user table where email equals whatever email is getting passed in the request so we're going to do off we're going to take it from that alt dictionary or object we're going to take the username and we need to pass it in as a tuple so remember this auth object here gives us access to the username and the password from the basic authorization header so what we're doing here is we're selecting from our user table in our auth database the email that's equal to the username that's passed into this basic authorization header so we're going to be using email for our username and if the user exists within our database then we should have a result so if result is greater than zero because results going to be an array of rows I believe so if result is greater than zero then that means that we have at least one row with that username and in this situation we should only have one row with that username because the username should be unique but actually I forgot to set the column for the username to be unique so maybe we should go ahead and do that so we should be able to just go over here and add unique for the email and then save it and once again we need to drop our database we're going to drop the user and we're also going to drop the database and now once again we can run MySQL U root init.sql okay so back into our server.pi file so anyways at this point if resulted is greater than zero then that means the user exists within our database and if that's the case we're going to set the row that contains our user data to cursor.fetch one and this is basically going to resolve to a tuple which is going to contain our email so we'll do user row 0 that's going to be the email and our password which is going to be user Row one and next we just want to check to see if the username and the password returned in the row is equal to the credentials passed in the request and if that's not the case we'll say that the credential is invalid and if it is the 
case then we're going to return a Json web token so we'll say if auth.username not equal to email or auth.password because we need both of them to be equal to what we get from the database not equal to password then if that's the case we're going to return invalid credentials and we're going to return a 401 status code else will return and we're going to create this function we haven't created it yet but we'll return the results of the function called create JWT and in this function we'll pass the auth username and we'll need to pass in a secret for the JWT so we'll just have that in our environment and we'll just call it JWT Secret and we're going to pass in true and I'll get to what this means in a second but we're not creating this create JWT function yet so just bear with me for a bit and lastly we get to if result is not greater than zero so if result's not greater than zero then that means the user doesn't exist in our database and if the user doesn't exist in our database then that means the user doesn't have access so we'll just return invalid credentials as well for this one and a 401. and that is going to be it for our login route and the login function so now we want to go ahead and create this create JWT method or create JWT function actually so I'm going to go ahead and explain a little bit about what a JWT actually is first so we're going to go over the overall flow using basic authentication and jwts and I will try to clear up any compiled confusion that you might have up to this point in the tutorial so let's visualize the flow from a top-down perspective to get an overall understanding of what our code is doing so as mentioned before our micro services are going to be running in a kubernetes cluster and that clusters internal network is not going to be accessible to or from the outside world or the open internet our client is going to be making requests from outside of the cluster with the intention of making use of our distributed system deployed within our private kubernetes cluster via our systems Gateway so our Gateway service is going to be the entry point to the overall application and the Gateway service is going to be the service that receives requests from the client and it is also going to be the service that communicates with the necessary internal services to fulfill the requests received from the client our Gateway is also going to be where we Define the functionality of our overall application for example if we want to add functionality to upload a file we need to Define an upload endpoint in our Gateway service source that initiates all of the necessary internal services to make that happen so if our internal Services live within a private Network how do we determine when we should allow requests in from the open internet this is where our auth service comes in we can give clients access to our application by creating credentials for them with in our office database any user password combination that exists within our MySQL DBS user table is a user password combination that will be granted access to our application's endpoints this is where the authentication scheme called basic authentication or basic access authentication comes in this authentication scheme requires the client to provide a username and password in their request which should be contained within a header field of the form authorization basic credentials where credentials is the base64 encoding of the username and password joined by a single colon in the context of our off flow we are going 
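Taken together, the login route described over the last few paragraphs looks roughly like this sketch (createJWT is the helper defined shortly, and JWT_SECRET is the assumed name of the environment variable holding the signing secret):

```python
@server.route("/login", methods=["POST"])
def login():
    auth = request.authorization
    if not auth:
        return "missing credentials", 401

    # check db for username and password
    cur = mysql.connection.cursor()
    res = cur.execute(
        "SELECT email, password FROM user WHERE email=%s", (auth.username,)
    )

    if res > 0:
        user_row = cur.fetchone()
        email = user_row[0]
        password = user_row[1]

        if auth.username != email or auth.password != password:
            return "invalid credentials", 401
        else:
            return createJWT(auth.username, os.environ.get("JWT_SECRET"), True)
    else:
        return "invalid credentials", 401
```

Once the server is running you can exercise it with something like curl -X POST -u your_email@example.com:admin123 http://localhost:5000/login which is just basic auth using the credentials inserted into the user table.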
to make use of this authentication scheme by taking the username and password from the authorization header when a client sends a request to our login endpoint and comparing them to what we have in our mysqldb if we find a match for the credentials we know that the user has access so we will return a Json web token to the client which the client will use for subsequent requests to our gateways upload and download endpoints which brings us to the other critical part of our off flow Json web tokens so what are Json web tokens a Json web token is basically just two Json formatted strings and a signature which comprise three parts each part being base64 encoded all three parts are merged together separated by a single dot which is how we end up with something that looks like this but let's break this down so what are these three parts well the first part is the header the header contains a key value pair for both the signing algorithm and the type of token which is of course JWT the signing algorithm is the algorithm that was used to sign the token which will allow us to later verify that the sender of the token is who it says it is and to ensure that the message wasn't changed along the way now there are both asymmetric signing algorithms with two keys a public and private key and there are symmetric signing algorithms which use just one private key we aren't going to go into detail about signing algorithms because it is not within the scope of this course but what you do need to know is that our auth service is going to be using the symmetric signing algorithm hs-256 for example our auth service is going to be the only entity that knows our single private key and when a user logs in using basic auth or auth service will create a JWT or Json web token and sign it using that private key that JWT will then be returned to the user or the client that way when the user makes following requests to our API it will send its JWT in the requests and our auth service can validate the token using the single private key if the token has been tampered with in any way or was signed using another key then our auth service will know that the token is invalid it's important that we know that the Json formatted data in the token hasn't been tampered with because that data is going to contain the access permissions for the user so without this signing algorithm if the client were to alter the Json data and upgrade its permissions to increase its permissions allowing itself access to resources that shouldn't be available to that particular user then at that point our entire system would be compromised this brings me to the next part of our JWT the payload the payload contains the claims for the user or the bearer of the token what are claims you ask or maybe you didn't ask but I'll explain it anyway simply put claims are just pieces of information about the user for the most part these claims are defined by us although there are predefined claims as well for things like the issuer of the token the expiration of the token Etc the claims that we're going to Define on our own are who the user is for example the username and whether or not the user has admin privileges which in our case is just going to be true or false now the last part of the token is the signature the signature is created by taking the base64 encoded header the encoded payload and our private key and signing them using the signing algorithm which would in our case be hs256 at the end of all this we are left with a token that looks like this and 
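A quick way to see the three dot-separated parts for yourself, using throwaway values (the claims and secret here are purely illustrative):

```python
import jwt

token = jwt.encode(
    {"username": "someone", "admin": True}, "not-a-real-secret", algorithm="HS256"
)
header, payload, signature = token.split(".")
print(header)     # base64url-encoded {"alg": "HS256", "typ": "JWT"}
print(payload)    # base64url-encoded claims
print(signature)  # HMAC-SHA256 of header.payload using the secret
```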
now whenever the client makes requests to our API and provides this token within the request we can determine if the client's token was indeed signed with our private key and our signing algorithm and if so we can determine the client's access level by checking the claims in the payload portion of the token so in our case we're simply going to allow the client access to all of our endpoints if a claim that we're going to call admin has a value of true when we decode the payload portion of the token and that's going to be our off flow so we're going to Define and create JWT and it's going to take in a username a secret and auth's and auth is just going to tell us whether or not the user is an administrator so we're going to keep the permissions simple we're going to either have true or false either the user is an administrator or the user isn't an administrator and this is just going to return JWT dot encode now this JWT comes from up here we're importing this JWT here and it's from this Pi JWT module So within this in code we're going to need to pass a dictionary containing our claims a secret and an algorithm so we'll start with the dictionary it's going to contain username and the username is going to be the username that we pass into the function and it's going to have expiration and expiration is going to be date time dot date time dot now and time zone is going to be equal to date time Dot timezone.utc and we'll just continue this on the next line and we're going to add that to date time dot time Delta days equals one this is just going to set the expiration of this token to one day so this token is going to expire in 24 hours and this IAT is just issued at so like this is when the token is issued so we're just going to do date time date time UTC now and actually we could have done the same thing above for it now but we'll just leave it and then we're going to have whether or not this user is an administrator and that's just going to be the bull that we pass here for alts and next we need to pass in the secret and we also need to pass in the algorithm and this is basically just the signing algorithm for our JWT and we'll just use hs256 which I believe is the default but a little verbosity never hurt anybody so actually so now we have our login route and our login function and we have the function to create the Json web token so essentially the flow is going to be a user is going to make a request to our login route using his or her credentials a username and a password and then we're going to check to see if that user data exists within our database if it does then we can consider the user to be authenticated and we'll return a Json web token which is going to be used by that user to make requests to the API and the endpoints that that user will have access to will be determined by this permissions here so whether or not the user is an admin we're going to keep it simple if the user is an admin will make the user have access to all the endpoints and yeah that's going to be the flow for logging in so we also need to create an endpoint for this auth service to actually validate the jwts so the way that we're going to do it is we're going to basically we're using this secret when we create the JWT and that same secret is going to be used to actually decode the token as well so that's how we know that this is a valid token for our API okay so while we're down here let's just go ahead and configure our entry point so we'll just do if name equals Main and all this means is basically when we 
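As a sketch, the createJWT helper just described would look like this:

```python
def createJWT(username, secret, authz):
    # authz is the admin flag: True gives access to all endpoints
    return jwt.encode(
        {
            "username": username,
            "exp": datetime.datetime.now(tz=datetime.timezone.utc)
            + datetime.timedelta(days=1),
            "iat": datetime.datetime.utcnow(),
            "admin": authz,
        },
        secret,
        algorithm="HS256",
    )
```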
run this file using the python command then this name variable will resolve to Main so let's just leave here and clear this so if we do Python 3 server.pi whenever we run this file this way the name variable resolves to Main so if we go back in here we can actually just print name and if we run this we'll see that the result is Main so that's all this is for so whenever we run our file using the python command we want our server to start so we'll do server run and we want our server to run on Port 5000 and we want to configure the host parameter like so which essentially is going to allow our application to listen to any IP address on our host so essentially if we don't set this host parameter like so the default is going to be localhost which means that our API wouldn't be available externally and as you can see here in this flask documentation configuring our host parameter this way tells our operating system to listen on all public IPS otherwise the server is only accessible from our own computer or from localhost so let me try to explain what I mean by that so basically any server is going to need an IP address to allow access from outside of the server so in our case our server would be a Docker container and our application will be running within that container when we spin up our Docker container it will be given its own IP address we can then use that IP address to send requests to our Docker container which in this case is our server and keep in mind when I'm referring to an IP address assigned to a Docker container I'm referring to the IP address assigned to that container within a Docker Network so when we spin up our Docker container it will be given its own IP address and we can use that IP address to send requests to our Docker container which in this case is our server but that alone isn't enough to enable our flask application to receive those requests we need to tell our flask application to listen on our container's IP address so that when request gets into our containers IP our application can receive those requests so this is where the host config comes in the host is the server that is hosting our application in our case the server that is hosting our flask application is the docker container that it is running in so we need to tell our flask app to listen on our Docker containers IP address but a Docker container's IP address is subject to change so instead of setting it to the static IP address of our Docker container we set it to this 0.0.0.0 IP address which is kind of like a wild card that tells our our flask app to listen on any and all of our Docker containers IP addresses that it can find if we don't configure this it will default to just localhost which is the loopback address and localhost is only accessible from within the host therefore outside requests sent to our Docker container would never actually make it to our flask app because the loopback address isn't publicly accessible so when we set host to 0.0.0.0 we are telling our flask app to listen on all of our Docker containers IPS including the loopback address or localhost and any other IP address available on the docker container for example if we connect our Docker container to two separate Docker networks Docker will assign a different IP address to our container for each Docker Network that means that with the 0.0.0.0 host configuration our flask app will listen to requests coming to both of the IP addresses assigned to the container here it's also possible to set the host config to a specific IP 
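The entry point being described is just:

```python
if __name__ == "__main__":
    # 0.0.0.0 makes Flask listen on every IP address available on the host
    # (which will be our Docker container), not just the loopback address
    server.run(host="0.0.0.0", port=5000)
```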
address that way our flask app will only listen to requests going to that IP address and would no longer listen to requests going to localhost or the other IP address from the other Docker Network so now that we've finished creating our login route we want to actually create another route to validate jwts and this route is going to be used by our API Gateway to validate jwt's synth within request from the client to both upload and receive or download MP3s or to upload videos and download the MP3 version of those videos and you'll see what I mean by that later on in the tutorial when we actually start to implement that so we're going to do another route and this one's going to be validate and methods are going to be post and we're going to define a function called validate and we're going to want to pull the encoded JWT from our request and we're going to require the JWT to be in a headers or a header called authorization and if the JWT is not present in the authorization header we want to return an error so we'll say encoded JWT if not encoded JWT we'll just return missing credentials and a 401 so if you remember from the explanation of the basic authentication scheme for that scheme we would need to have the word basic in our authorization header that contained our base64 encoded username and password separated by a colon well for our JWT we instead need to have the word Bearer in the authorization header that includes the token so let me quickly go over the format for the authorization header so that you understand what's happening so if we look at the documentation for authorization headers on mozilla.org we see that the format for the header is first type and then credentials here type represents the authentication scheme and credentials represents the credentials necessary specific to that type so from the perspective of the server handling the authorization header the type tells us what type of credential is contained within the header so if the type is basic from the server perspective we know that we are dealing with a credential which is a base64 encoded username and password separated by a colon if the type is Bearer we know that we are dealing with a bearer token which essentially means that we can assume the party in possession of the token or the bearer of the token has access to the tokens Associated resources now in the code for our validation endpoint to save some time we're just going to assume the authorization header contains a bearer token therefore we aren't going to check or validate the word that represents the type that comes before the credential in the header but in an actual production environment you are definitely going to want to spend the extra time to check the type or the authentication scheme present within the authorization header and within this authorization header we're going to require the token to be formatted as bear authentication so basically we're going to want the header that's synt with the JWT to look like this so it's going to have this Bearer as part of the string and then the token and as a result of that if the encoded JWT is present we're going to need to split the string so we're going to set encoded JWT equal to encoded JWT dot split and we're going to need to split it based on a space because there's going to be the word bear and then a space and then there's going to be the token so the array that results from this split is going to have the item with the word bear and it's going to have an element with the token so we're going to 
need the first index or the item at the first index of the array not the zero and then we're going to try and we're going to do decoded equals JWT dot decode and to this decode method we're going to need to pass the encoded JWT and we also need to pass our JWT secret the one that was used when we actually encoded the JWT which is going to be in an environment variable and we're also going to need the algorithm and the algorithm that we used was hs256 and if that fails we're just going to return not authorized and a 403. but if it doesn't fail we will return the decoded token and a 200. and that's pretty much going to be it for our auth service so we can just go ahead and save this so now we have our actual service and we have our init script for our database and now we're going to need to start writing all of our infrastructure code for the actual deployment so we're basically going to deploy all of our services within a kubernetes cluster as you already know so we need to create Docker images that we're going to push to a repository and our kubernetes configuration is going to pull from the repositories for our Docker images and create our deployments within our cluster and I know that this sounds kind of confusing but we're going to walk through everything step by step so don't worry so we're going to start by making a Docker file and as our base image we want to use a python image so we'll use this python 310 slim bullseye and after we write out this Docker file I'll go over what all the lines mean in a little bit more detail but for now I'm just going to vaguely go over what we're doing so first we're going to run our apt-get update and we also need to apt-kit install a couple of dependencies so we're going to need build Essentials and default live MySQL client Dev and then we want to pip install upgrade pip and following that we want to set our working directory to app and we want to copy first just our requirements dot txt which we haven't created yet and the reason we're copying this separately from the rest of the application is because we want to make sure our requirements are in a separate layer so that if our application code changes we can still use the cached requirements layer we don't need to reinstall or recreate the layer and I'll probably explain that in a little bit more detail later on so then we want to run pip install our requirements and then we can copy over the rest of our application and our app is going to be running on Port 5000 so we'll expose that port and finally we need to create the command which is going to be python3 server.pi so it's the same as when we're actually running this python3 server.pi from the command line and that is it for that so we'll save that now let's get into explaining the contents of our Docker file so basically when we build a Docker image we're building it on top of a base image which in our case is this base image python 310 slim Bullseye so we can think of an image as a file system snapshot for example the base image that we are building our image on top of is essentially a snapshot of a file system that contains all of the necessary dependencies to run python applications for instance we wouldn't be able to run a DOT Pi file on an OS that doesn't have python installed right so a base python image will have things pre-installed so that we don't need to worry about that so based on that understanding let's go a little deeper it's important to keep in mind when writing Docker files that each instruction in a Docker file results in 
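The validate route from this part of the walkthrough, sketched out (again JWT_SECRET is the assumed environment variable name, and as noted above the course skips checking the Bearer prefix itself):

```python
@server.route("/validate", methods=["POST"])
def validate():
    encoded_jwt = request.headers.get("Authorization")
    if not encoded_jwt:
        return "missing credentials", 401

    # expected header format: "Bearer <token>", so keep the element at index 1
    encoded_jwt = encoded_jwt.split(" ")[1]

    try:
        decoded = jwt.decode(
            encoded_jwt, os.environ.get("JWT_SECRET"), algorithms=["HS256"]
        )
    except Exception:
        return "not authorized", 403

    return decoded, 200
```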
a single New Image layer being created that means that the next instructions image layer will be built on top of the previous instructions image layer so that means that this from instruction creates a layer and then this run instruction creates a new layer on top of the previous layer that was built from the from instruction and this continues on until we reach the end of our Docker file this is important to understand because if we need to build our Docker image again Docker is smart enough to use cached image layers if nothing within the layer has changed and none of its preceding layers have changed so what do I mean by that so let's say that the dependencies for our application change resulting in our requirements.txt file changing when we rebuild the image we won't need to rebuild every layer again we only need to rebuild the layer that changes and every layer after it because of course every layer after it is based on its preceding layer therefore if the preceding layer changes so does it and the reason it's important to understand this is because optimizing your Docker file to use cached layers efficiently will significantly decrease the build time of your image and that might not seem so beneficial in this context but when we are talking about deploying production applications using CI CD pipelines the build speed is something that we want to consider now if you don't know what a CI CD pipeline is don't worry it's not necessary to understand that for this tutorial anyways with the understanding that each instruction results in its own layer and that if one layer changes every layer after that layer will also need to be rebuilt we can see why it's beneficial to separate the copy instructions for our requirements.txt from the copy instructions of the rest of our application because with this configuration as you can see if the dependencies for our application change resulting in our requirements.txt file changing we need to create a new layer to build onto with the new requirements being installed in this run pip install layer and of course every layer after that layer will need to be rebuilt as well but if we only make a code change and our requirements don't change we don't want to have to build the layer that installs our dependencies again because this is probably one of the most time consuming layers to build so as you can see we are copying the rest of our application to the app directory here and by the way this dot just means the current directory that we ran the docker builds command in so if we copy dot to our app directory which is our working directory or copying everything in the directory where we ran the docker build command in on our local machine in other words the directory that contains our Docker file on our local machine and if any of the code has changed since our source files are contained within that directory Docker will detect that there was a change and rebuild this layer and every layer following this layer will need to be rebuilt as well but as you can see the layers following this layer don't include the time consuming pip install command so that's just a quick example of why optimizing your Docker file to be more layer efficient is beneficial so now back to the rest of our instructions so after we build our base layer using the python 310 slim Bullseye image we then move on to installing our OS dependencies and all of these flags in purple here are to avoid installing unnecessary additional packages as well as avoiding taking up additional space with 
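Putting the instructions together, the Dockerfile looks roughly like this (the specific apt and pip flags shown are typical "keep the image small" flags and are an assumption; the narration only says flags are used to skip extra packages and caches):

```dockerfile
FROM python:3.10-slim-bullseye

# OS dependencies, kept in a single RUN so they form one image layer
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential default-libmysqlclient-dev \
    && pip install --no-cache-dir --upgrade pip

WORKDIR /app

# copy requirements.txt on its own so the pip install layer below
# stays cached when only application code changes
COPY ./requirements.txt /app
RUN pip install --no-cache-dir --requirement /app/requirements.txt

# now copy the rest of the application source
COPY . /app

# documentation only: the app listens on port 5000
EXPOSE 5000

CMD ["python3", "server.py"]
```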
caching and stuff like that because we want our container to be as light as possible and we also don't want to introduce potential vulnerabilities present on packages that we don't even need and the reason we are combining all of these commands into one command is so that we can keep them all in the same run instruction therefore keeping them contained to one image layer because remember every Docker instruction creates a new image layer and then here we're just creating a directory to work in and this dirt is where our application source is going to live and these instructions we've already gone over and this expose instruction doesn't actually do much of anything other than serve as documentation to anybody that builds this image and it essentially lets them know what port is intended to be published so our app listens on Port 5000 so that is the port that we have here in the expose instruction and lastly we have our Command instruction and this instruction is the instruction that is going to be used to run our container this instruction sets the command to be executed when running the image so for example when we run our image the Python 3 command will be run on our server.pi file which is going to run our auth server in this case okay now let's go ahead and build this Docker file oh and I forgot to create our requirements.txt so we're going to do uh pip 3 freeze and we're going to freeze our requirements the current requirements for our application into a file called requirements.txt and if we go into this file you see it has all of the requirements that we needed to install for our application like it has this MySQL DB that we're using and of course flask and any of their dependencies as well so doing pip 3 freeze it basically freezes our dependencies into a file so that we know what dependencies we need to install to run this application so let's go ahead and try this again and now it says unable to locate package build Essentials so let's go back into our Docker file and that is a typo it's actually just build essential and let's try again foreign and now our image is finished being built so once we've finished building our image we actually want to create a Docker registry or a repository so you can just go to hub.docker.com and create an account here I already have an account so I will just sign in okay so once you've successfully created an account and logged into your Docker Hub account you should end up at a page that looks like this and from here you just want to click this repositories Tab and what we're going to do is we're going to create the repositories that we're going to push our container images to and then our kubernetes configuration is going to pull from this repository the repositories that we create for each individual service now your repositories are going to have the same suffix as mine because we're going to create the repository name using the same suffix but the prefix to your repository is going to be different mine's going to have this prefix and yours is going to be whatever the name of your account is so for example if we create a repository here and we name the repository auth because it's going to be the repository for our auth services images when we actually push to this repository from our command line I'm going to push to sweezytech auth and you're going to push to whatever your username is for your account and you'll see what I mean by that in a second so we're going to go ahead and create this repository for our auth services images and we're 
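The build-and-push workflow around this, condensed into a sketch (replace <your_dockerhub_username> with your own account, and <image_id> with the ID printed by docker build):

```sh
# freeze the service's Python dependencies
pip3 freeze > requirements.txt

# build the image from the Dockerfile in the current directory
docker build .

# log in to Docker Hub if you haven't, then tag and push the image
docker login
docker tag <image_id> <your_dockerhub_username>/auth:latest
docker push <your_dockerhub_username>/auth:latest
```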
just going to make it public because we only can make one private one and then if we do a private one we're going to have to configure credentials within minicube which is a little bit more complicated so we're just going to do public and throughout this tutorial it will be possible for you to push and pull from my repository which can cause lots of issues for you as you follow along with this tutorial because my images may or may not be in the same state that they are at the part of the tutorial that you're on so just make sure you take the time to create this account and create your own repositories so we can just go ahead and create this and now as you can see here it tells you how to push to this repository and as you can see this is going to look different for you than it does for me specifically this part is going to look different for you so anyways now what we want to do is we want to tag the image that we just created here we just built an image using this Docker builds command it built the image based on our Docker file that we just created now we want to tag this image so we'll do Docker tag and we're going to just use this Shaw here you don't actually need to put in the whole thing but I'm just going to put in the whole thing and you're just going to tag it using your username from your Docker Hub account slash auth and then we're going to tag it as latest because it's going to be the most recent version of our image now if we do Docker image LS you can see that we have our tag here and you can compare this part of the image ID to the first part of this shot and just ignore all of these other images that I have you'll likely have whatever images you have on your system as well but none of that matters as long as you have this image tag here then you're fine so let's just go ahead and clear that and now that we've tagged it we can push it to our repository so we'll just do Docker push and again you're just going to use your username for your Docker Hub account and then auth and then latest and then just push that and once that's finished you can go to the repository and you can just refresh the page and you should see your image tag here so that means we've successfully pushed this image to our repository and now whenever we want to pull this image we could just do Docker pool and we could just do the name of our image and the tag or the URL for our image in the tag and we'd be able to pull it but that's not actually how we're going to be pulling these images our kubernetes configuration is actually going to be pulling the images so let's go ahead and clear and now we're going to make a directory called Manifest this directory is going to contain all of our kubernetes configurations so let's change directory to manifest and for all of these configuration files I'm going to go over them in detail after we write them out so if you're confused about the infrastructure code that we're writing just hang in there so all of our configuration files are going to be yaml files and we'll start with a file called off deploy.yaml and within this file we'll do API version apps V1 foreign is deployment now again this is the configuration for our kubernetes cluster and our service and if you're not familiar with kubernetes a lot of these configurations you probably won't understand I will try to go into more detail for these in a little bit so we'll do metadata name of our service or our deployment is going to be off and labels app is going to be auth as well and then we're going to do our spec 
So let's clear the terminal and make a directory called manifests — it's going to contain all of our Kubernetes configuration — and cd into it. I'll go over all of these configuration files in detail after we write them, so if the infrastructure code is confusing at first, just hang in there. They're all YAML files, and we'll start with one called auth-deploy.yaml. In it we set apiVersion to apps/v1 and kind to Deployment — again, this is the configuration for our deployment on the Kubernetes cluster, and if you're not familiar with Kubernetes a lot of it won't make sense yet; I'll go into more detail in a bit. Under metadata, the name of our deployment is auth, and under labels, app is auth as well. Then we write the spec: under replicas we want two replicas, i.e. two instances of our service; under selector we matchLabels on app: auth; the strategy type is RollingUpdate, and for the rolling-update configuration we set maxSurge to 3. Then we add the template: its metadata labels are app: auth, and its spec has containers — name auth — and this is where we configure it to pull our image: image should be your Docker Hub username slash auth. Under ports, the containerPort is 5000, because our application runs on port 5000, so we use the same value here. Then we want to pull our environment variables in from a ConfigMap file, which we'll create right after this: under envFrom we add a configMapRef whose name is auth-configmap (again, not created yet, coming soon), and a secretRef for our secrets, which we'll store in a Secret named auth-secret — we'll create that file as well. Check the formatting, it looks fine, so save it. Now let's create the ConfigMap: vim configmap.yaml. This ConfigMap sets environment variables inside our container. apiVersion is v1, kind is ConfigMap, the metadata name is auth-configmap, and data holds our environment variables. First MYSQL_HOST: since we're using our local MySQL server, we need to reach that server from inside the Kubernetes cluster, and luckily minikube gives us a way to reach the cluster's host via host.minikube.internal. Inside the cluster we're basically in our own isolated network, and our MySQL server is deployed on our localhost, so from within the cluster we can't just use localhost — we need to address the machine hosting the cluster, and that's what host.minikube.internal is for. MYSQL_USER is the auth_user we created, MYSQL_DB is auth, which we also created, and MYSQL_PORT is the MySQL default, 3306 — and actually, we should write the port as a string. These are the environment variables that will automatically be exported in the shell of the deployed container: in other words, if we ran the env command inside that container, all of these variables and their values would be present. That's what the ConfigMap file is for, and a ConfigMap is for environment variables that aren't sensitive data like passwords. For sensitive data — like the password to our database — we need a similar file for our secrets. Of course, in a production environment you would never push your secrets configuration to something like a git repository, because your passwords would be plainly visible there, so keep that in mind as we create this file. We'll just call it secret.yaml: apiVersion is once again v1, kind this time is Secret, the metadata name is auth-secret, and under stringData we set MYSQL_PASSWORD to auth123 and JWT_SECRET — another secret this application needs — to a random value, sarcasm. We also need to set type to Opaque. These will be the environment variables for our secrets.
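Since YAML indentation is easy to get wrong when following along by ear, here is roughly what the three files dictated above end up looking like. The Docker Hub username in the image field is a placeholder, and the exact environment-variable names (MYSQL_HOST, MYSQL_USER, and so on) are my reading of the audio — match them to whatever your auth service reads with os.environ.get:

```yaml
# manifests/auth-deploy.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
  labels:
    app: auth
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: <your-dockerhub-username>/auth
          ports:
            - containerPort: 5000
          envFrom:
            - configMapRef:
                name: auth-configmap
            - secretRef:
                name: auth-secret
---
# manifests/configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: auth-configmap
data:
  MYSQL_HOST: host.minikube.internal
  MYSQL_USER: auth_user
  MYSQL_DB: auth
  MYSQL_PORT: "3306"
---
# manifests/secret.yaml (sketch)
apiVersion: v1
kind: Secret
metadata:
  name: auth-secret
stringData:
  MYSQL_PASSWORD: auth123
  JWT_SECRET: sarcasm
type: Opaque
```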
We can check that the formatting is okay and save that. Lastly we need to create service.yaml: apiVersion is v1, kind is Service, the metadata name is auth — that's the name of the overall service — and in the spec we set the selector to app: auth and the type to ClusterIP. ClusterIP basically means the IP address assigned to this service is only reachable from within our cluster, but again, I'll go into a little more detail soon. The port is 5000, the targetPort is 5000 as well, the protocol is TCP, and then we can save that. Once we have all of our infrastructure code for the Kubernetes deployment, we can actually start deploying the auth service to our cluster. Let's take a look at K9s: currently there's nothing there — no cluster and no context — because minikube isn't running right now, so run minikube start. Once minikube is up we can go back into K9s, and if we change the namespace to all by hitting 0 we can see the minikube pods running in the kube-system namespace, and the cluster shown is minikube. Ctrl-C out of there and clear the terminal. Okay, let me briefly go over what we're doing here. Within this manifests directory we wrote the infrastructure code for our auth deployment; back in our main directory we wrote the code for the auth service and created a Dockerfile to build that source into a Docker image, and we then pushed that image to a repository on the internet. Our manifest infrastructure code pulls that image from the internet and deploys it to Kubernetes, and that image contains our code. When applied, all of the files in this manifests directory interface with the Kubernetes API — the API for our Kubernetes cluster — to create our service and its corresponding resources, like its ConfigMap and its Secret. To do that, all we need is kubectl apply with the -f flag for file, applying all the files in the current manifests directory. As you can see, our ConfigMap resource was created, our Secret was created, and our Service was created, but we actually got an error for our Deployment: unknown field "template". That points to a problem in our auth-deploy.yaml — template shouldn't be unknown, and spacing is really important in YAML files. The fix is to move strategy back one level of indentation: strategy is actually part of spec, but the way we had it, strategy was nested under selector, so we need to make sure the spacing is correct in these files. Put that back and apply again — it will only apply the files that have changed — and now we get unknown field "name", so I'm assuming it's a spacing issue again. The problem is that the configMapRef's name shouldn't be at the same indentation as configMapRef; it should be nested one level under it, and the same goes for the secretRef's name. Save, try the apply once more, and now the Deployment was created as well.
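For reference, the Service manifest dictated at the start of this section looks roughly like this:

```yaml
# manifests/service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - port: 5000
      targetPort: 5000
      protocol: TCP
```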
Now that we've created these resources, we can go into K9s and see two instances of our auth service being created, and then both instances running. If we go into the logs — just press enter on the pod and then on the container — we can see the logs inside the container, and the server is running for both replicas. Also, within K9s we can open a shell inside the container by entering on the pod and then pressing s on the container. From that shell we can run env and see our environment variables: the secret MySQL password is there, and so is the MySQL user — in fact, env | grep MYSQL shows all of our MySQL environment variables from both our ConfigMap and our Secret. Then we can exit the shell and leave K9s. Okay, so to explain our Kubernetes configuration I need to explain a bit more about Kubernetes in general. Throughout this course there's been a lot of mention of deploying our microservices to a Kubernetes cluster, but what does that actually mean? Let's first briefly go over what Kubernetes is. In simple terms, Kubernetes eliminates many of the manual processes involved in deploying and scaling containerized applications. For example, if we configure a service to have four pods, Kubernetes keeps track of how many pods are up and running, and if any of them goes down for any reason, Kubernetes automatically scales the deployment back so that the number of pods matches the configured amount — there's no need to manually deploy individual pods when a pod crashes. Kubernetes also makes manually scaling pods more streamlined. Say I have a service that load balances requests to individual pods using round robin, and that service is experiencing more traffic than the available pods can handle, so I decide to scale it up from two to five pods. Without Kubernetes, I'd likely need to manually deploy each additional pod and then reconfigure the load balancer to include the new pods in the round-robin rotation. Kubernetes handles all of that for you: it's as simple as running a single command (there's a sketch of what that command looks like just below), and Kubernetes will scale up your service — which includes maintaining the newly scaled number of pods if one happens to crash — and automatically configure the load balancing to include the new pods. Basically, with Kubernetes we can cluster together a bunch of containerized services and easily orchestrate the deployment and management of those services within the cluster, using what we call Kubernetes objects, which are persistent entities in the Kubernetes system. I know that sounds a bit complicated, so let me explain; for this part of the explanation let's go to the Kubernetes documentation. It's explained there that a Kubernetes object is a record of intent: once you create the object, the Kubernetes system will constantly work to ensure that the object exists. By creating an object you're effectively telling the Kubernetes system what you want your cluster's workload to look like — this is your cluster's desired state. That sounds abstract, but we've actually already done it multiple times: we created a Deployment object in our YAML file, and that file is the above-mentioned record of intent. We are telling Kubernetes that we want this Deployment object to exist in our cluster in the state specified in our spec — for example, we want two replicas deployed — once this configuration is applied.
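As an aside, the "single command" referred to above isn't spelled out in the audio, but scaling a deployment is done with kubectl scale — something like this, using our auth deployment and a hypothetical pod count:

```sh
kubectl scale deployment auth --replicas=5
```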
As explained in the docs, the Kubernetes control plane continually and actively manages every object's actual state to match the desired state you supplied. That means Kubernetes keeps track of the actual status, or state, of your deployment and makes sure it matches your record of intent — in other words, your YAML specification. Bringing it all together: our Kubernetes cluster is comprised of a bunch of objects we've configured that describe the cluster's intended state, and Kubernetes continually compares the current status of those objects to the specification, the desired state, from our original configuration; whenever that comparison differs, Kubernetes automatically makes adjustments to bring the current status back in line with our original record of intent — our original specification. So how do we communicate with Kubernetes to configure and create these objects? Once again, the documentation explains that to work with Kubernetes objects — whether to create, modify, or delete them — you need to use the Kubernetes API, and when you use the kubectl command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you. So the kubectl CLI we installed is interfacing with the Kubernetes API to essentially run CRUD operations on our cluster's objects. In our case we're running our cluster locally using minikube, so the endpoint for the Kubernetes API is on our local machine, but in the real world your cluster will usually be deployed on some server, and on your local machine you'll have a Kubernetes configuration for that cluster which lets your local kubectl CLI interface with the remote server — we don't need to go into those details in this video. Now that we have a general understanding of what Kubernetes is and how it works, we can get into explaining our actual YAML configuration files. If we look at the documentation, we see there are some required fields when creating Kubernetes objects from .yaml files: apiVersion, kind, metadata, and spec, with a description for each. apiVersion is which version of the Kubernetes API we're using to create the object; kind is what kind of object we want to create — for example Deployment, ConfigMap, Secret, etc.; metadata is just data that helps uniquely identify the object; and lastly spec is the desired state, the record of intent, for the object, which we explained before. As mentioned there, the spec format is different for every Kubernetes object type — the spec for an object of kind Deployment is different from the spec for an object of kind Service — and to see how to configure the spec for a specific type of object we use the Kubernetes API reference. So let's go over the spec format for our Deployment object configuration. First, as you can see, we have all of the required fields — apiVersion, kind, and metadata — and, as the docs mention, the precise format of the object spec is different for every Kubernetes object, so everything inside this spec block is our Deployment spec.
To see the actual spec format for a Deployment, we go to the Kubernetes API reference: a Deployment is a workload resource, so we click workload resources and then Deployment. This page basically gives us the overall configuration for a Deployment: it shows the apiVersion, the kind, and the metadata as well, and it links to the object metadata, because as you can see our metadata has its own nested fields. If we follow that ObjectMeta link we get a description of each field we have within our metadata: name, which must be unique within the namespace and is required when creating a resource; scrolling down, there are additional fields we're not using currently but could use within our metadata block; and labels, which we are using — the docs describe the format as a map of string keys and string values, so in our case app is the key and auth is the value. Now let's get back to the spec configuration. Next to spec there's a DeploymentSpec link that gives us the details of the actual spec format for a Deployment, so any field nested within our spec block can be found there with a detailed explanation of each individual field. For example, the selector there is the selector we have in our file, and if we click LabelSelector we find the matchLabels we're using. To understand what matchLabels is doing — the docs say a label selector is a label query over a set of resources — let's go back to the description of selector: it says it's the label selector for pods, that existing ReplicaSets whose pods are selected by it will be the ones affected by this deployment, and that it must match the pod template's labels. Within our template, which we're going to get to, we set a label with key app and value auth for our replicas, and the template is basically the configuration for each individual pod. So our deployment knows which pods are part of the overall deployment because this selector matches the labels assigned to each individual pod by our template. Simply put: based on the template, each pod is deployed with a label whose key is app and whose value is auth, and our deployment selects pods using that same app: auth label as the key and value. Then there's replicas, which is just the number of desired pods, and you already saw this working — when we applied our configuration, two auth pods were deployed, and if we increased it to, say, four, then four auth pods would be deployed when we apply the configuration. Next is strategy, the deployment strategy used to replace existing pods with new ones, and the choice is basically between Recreate — killing all of the existing pods before creating new ones, which essentially means our service is unavailable while the new pods are created — and RollingUpdate, which replaces the old ReplicaSet with a new one gradually, scaling down the old pods as it scales up the new ones.
In our case we also configure maxSurge to 3, which is the maximum number of pods that can be scheduled above the desired number of pods. For example, if our desired number of pods is two and we need to do a rolling update, it might be necessary to temporarily exceed the number of replicas while some pods are shutting down and newer pods are spinning up, so maxSurge just gives us some extra headroom when we actually need to update our deployment. Lastly we get to template, which, simply put, describes the pods that will be created. We've already gone over this a little, but let's actually look at the PodTemplateSpec, since everything nested under the template field follows it. Its metadata is the same for all types — object metadata is the same regardless of the kind — so clicking through we see name and all of the other fields possible within metadata again. The spec under template, however, is not the same spec as the Deployment's, because remember, each type has its own spec format: this one is a PodSpec, since the template is a template for pods, and selecting PodSpec gives us a different set of fields and descriptions. There we have containers, which is where we define our container: we name the container and set an image for it — remember, we're pulling the image for our container from our Docker Hub repository, and that's the image used for the container within each pod. Then we have ports, which is actually similar to the EXPOSE instruction we wrote in our Dockerfile, in that it doesn't really serve as anything other than documentation. As the docs say: it's a list of ports to expose from the container; exposing a port here gives the system additional information about the network connections a container uses, but it is primarily informational; not specifying a port here does not prevent that port from being exposed, and any port listening on the default 0.0.0.0 address inside a container will be accessible from the network. We already went over all of this when configuring our Dockerfile, so you should be familiar with it — this containerPort: 5000 essentially serves as documentation. Then if we search for envFrom, we find the configuration we're using here: it's a list of sources to populate environment variables in the container. As I explained and showed you, our ConfigMap is where we define the environment variables for our container — which you saw when we went into the container's shell — and it's referenced with configMapRef, the ConfigMap to select environment variables from. Below that we also have our secretRef: the contents of the target Secret's data field will be presented as key-value pairs, as environment variables, so the secrets end up stored as environment variables too, and they come from our secrets configuration.
Both the ConfigMap and the Secret are their own individual Kubernetes objects as well — basically, whenever you see a kind and a name, that's an individual Kubernetes object. As you can see, our ConfigMap configuration file has its own kind, ConfigMap, so it creates another object in our Kubernetes cluster, and each object configuration has essentially the same overall fields — they all need an apiVersion, a kind, and metadata — while some fields differ, like the data field here, which our Deployment object configuration doesn't use. Here's another example: when we created our kind Service we still had the metadata field with its nested name, and again we have a spec for the kind Service, but of course that spec format is specific to the kind Service, so it's different from our Deployment spec format. This API reference documentation is very important, and I'll have a link to it in the description of this video. So now we can start to write the code for our gateway service. Change directory back to our Python source directory — right now we only have our auth service — and make a gateway directory, then cd into gateway. To start, we create a virtual environment as usual and activate it, and you can see the prompt now shows the gateway virtual environment. The next thing we want to do is create a file called server.py. Our gateway service is going to have a few dependencies, so let's open the file and write out the imports: we'll import os, gridfs, pika, and json; from flask we'll import Flask and request; and we'll import PyMongo from flask_pymongo. We're also going to create an auth package and a validate module within that package, an auth_svc package from which we'll import an access module, and a storage package from which we'll import util. We haven't created any of these yet, but we will soon. We're going to use MongoDB to store our files, and gridfs is what will basically allow us to store larger files in MongoDB — I'll explain more about that when we get to it — and pika is what we'll use to interface with our queue: we'll be using a RabbitMQ service to hold our messages, and I'll go over that once we get to it as well. To start we just create our server: server = Flask(__name__). Then in our server config we set the Mongo URI to mongodb://host.minikube.internal:27017/videos — 27017 is the default MongoDB port, and videos is the database we're going to use. If you remember, host.minikube.internal just gives us access to our localhost from within the Kubernetes cluster, and this is the MongoDB URI that will be the endpoint for interfacing with our MongoDB, which lives on the localhost (we haven't installed it yet, but we will). Then we create a variable, mongo, set to PyMongo(server) — I'll explain what this is doing in a second, but first let's install all of our dependencies, so save the file.
We'll just cat server.py so we can see what we need to install, then pip3 install pika, pip3 install flask, and pip3 install Flask-PyMongo. Once those dependencies are installed, go back into the file — and we have a typo here, that letter should be a lowercase l. Now, what this PyMongo line is doing: what we need to know is that PyMongo wraps our Flask server, which is what allows us to interface with our MongoDB. If we go to its definition, it says it manages MongoDB connections for our Flask app, so it's essentially abstracting the handling of the MongoDB connections away from us; beyond that we don't really need to understand what's actually happening for the purposes of our use case. Once we've created that PyMongo instance, we create fs, an instance of the gridfs.GridFS class, and we pass in the db from our database — the videos DB. GridFS wraps our MongoDB and enables us to use MongoDB's GridFS, so let me quickly explain what GridFS is. We're using MongoDB to store our files — both our MP3 files and our video files — and if we go to MongoDB's limits and thresholds documentation, we see that a BSON document has a maximum size of 16 megabytes. It's explained there that the maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, excessive amounts of bandwidth. They're basically saying that handling files over 16 megabytes in memory will result in performance degradation, and the alternative they provide is GridFS, which essentially allows us to work with files larger than 16 megabytes by sharding them. The GridFS documentation describes it as a specification for storing and retrieving files that exceed the BSON document size limit of 16 megabytes: instead of storing a file in a single document, GridFS divides the file into parts, or chunks, and stores each chunk as a separate document. So in this case we're no longer dealing with files larger than 16 megabytes in memory, because a file larger than that is separated into chunks and we only deal with the individual chunks, which avoids the performance degradation issue. GridFS uses two collections to store files — you can think of collections in MongoDB as roughly equivalent to tables: one collection stores the file chunks, and the other stores the file metadata, which basically contains the information necessary to reassemble the chunks into the original file. If you're interested in more details, GridFS — and MongoDB in general — has very good documentation you can read, and you'll see later in the tutorial that these are the two collections we'll be working with. The reason we're actually using GridFS is that, working with video files, there's a high probability we'll eventually be dealing with files larger than 16 megabytes, so this essentially future-proofs our application.
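To keep things straight, here's roughly what the top of the gateway's server.py looks like at this point. The variable names mongo and fs follow standard Flask-PyMongo and GridFS usage, since the audio doesn't spell them out exactly, so treat this as a sketch rather than the exact file:

```python
import os, gridfs, pika, json
from flask import Flask, request
from flask_pymongo import PyMongo

# these packages get created over the next steps:
# from auth import validate
# from auth_svc import access
# from storage import util

server = Flask(__name__)
# host.minikube.internal lets pods reach the machine hosting the cluster
server.config["MONGO_URI"] = "mongodb://host.minikube.internal:27017/videos"

mongo = PyMongo(server)       # manages MongoDB connections for the Flask app
fs = gridfs.GridFS(mongo.db)  # lets us store files larger than 16 MB as chunks
```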
But for the purposes of this tutorial you don't really need to know much beyond what I just explained — although if you're the type of person who likes to dive deeper into these kinds of details, like me, then I recommend reading that page as well, which is actually pretty interesting. The next thing we want to do is configure our RabbitMQ connection. We create a variable called connection using pika.BlockingConnection, which essentially makes our communication with RabbitMQ synchronous — again, the details of how that works are abstracted away from us, so we don't need to worry too much about it. Into it we pass pika.ConnectionParameters, and to the connection parameters we pass the host for our RabbitMQ. We're going to deploy our queue as a StatefulSet in our Kubernetes cluster, and it will be accessible simply via the hostname rabbitmq — we haven't configured this yet, we'll configure it later, but just know that this "rabbitmq" string references our RabbitMQ host. Once we've created the blocking connection instance, we create a channel from it with connection.channel().
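So the two lines added here look roughly like this — the rabbitmq hostname assumes the StatefulSet we'll configure later in the course:

```python
# synchronous (blocking) connection to the RabbitMQ instance we'll deploy later
connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
```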
Let's briefly go over how RabbitMQ integrates with our overall architecture. We already know how the auth flow works, so we don't need to go over that again, and the conversion flow is the same one described at the very start of the video: the gateway stores the uploaded video and puts a message on the queue, the converter service consumes that message, converts the video, stores the MP3, and puts a new message on the queue, and the notification service consumes that and emails the client an ID to use — along with their JWT — to download the MP3 through the gateway. Now that we have a clear understanding of the overall flow, we can use it to familiarize ourselves with some key terms when considering microservice architectures: asynchronous and synchronous inter-service communication, and strong versus eventual consistency. Let's start with synchronous inter-service communication, because understanding that makes everything else easier. Synchronous inter-service communication, put simply, means the client service sending the request awaits the response from the service it's sending the request to; the client service can't do anything while it waits for that response, so it's essentially blocked — the request is considered a blocking request. For example, our gateway service communicates with our auth service synchronously: when the gateway sends an HTTP POST request to the auth service to log a user in and retrieve a JWT for that user, the gateway is blocked until the auth service either returns the JWT or an error. So the communication between our API gateway and our auth service is synchronous, which makes those two services tightly coupled. On the other end of the spectrum we have asynchronous inter-service communication, where the client service does not need to await the response of the downstream service, so the request is considered non-blocking — and in our case this is achieved by using a queue. Our gateway service needs to communicate with our converter service, but if it did so synchronously, the performance of the gateway would take a hit: if the gateway got many requests to convert large videos, the processes making requests to the converter would be blocked until the converter finished processing the videos. Say, hypothetically, our gateway processes one request per thread concurrently, with two processes of four threads each — that's eight concurrent requests. If the gateway got more than eight requests to process large videos, the entirety of its threads would be blocked awaiting the completion of each request, so synchronous communication between the gateway and the converter would not be scalable. This is where the queue comes into the picture. As explained before, our gateway doesn't communicate directly with the converter service, so it doesn't depend on the converter's response — in our current architecture the gateway and the converter are loosely coupled, and that decoupling is done by using the queue. The gateway just stores the video in MongoDB and throws a message on the queue for a downstream service to process the video at its convenience, so the only thing holding up a gateway thread is uploading the video to MongoDB and putting the message on the queue. That means the gateway's threads free up much more quickly, allowing the gateway to handle more incoming requests. With the current architecture, the gateway communicates with the converter asynchronously: it sends requests to the converter in the form of messages on the queue, but it doesn't need to wait for — nor does it care about — a response from the converter; it essentially just sends the message and forgets it, and the same thing happens with the communication between the converter service and the notification service. Now let's get into strong consistency versus eventual consistency, starting with an example of what our application flow would look like if it were strongly consistent. Say, hypothetically, that whenever a user uploads a video to our gateway to be converted to an MP3, we make a synchronous request to our converter service, wait for the conversion to complete, and then return an ID for the MP3 to the user once the conversion is done. At the point the user receives that ID, it's certain the video has been processed and converted into an MP3, and the data is consistent with that update, so if the user were to request a download based on that ID, they are guaranteed to get the most recent update of the data.
That is strong consistency. Eventual consistency, on the other hand, is a bit different, and for this one let's use our actual architecture as an example — with one tweak for the sake of illustration: say that, hypothetically, when our gateway uploads the video to MongoDB and puts the message on the queue for processing, we immediately return a download ID to the user. I know we don't return a download ID at that moment in our actual application; this is just to help you understand. If the user uploads a video that takes one minute to process but, immediately after receiving the ID, tries to download the MP3, the MP3 won't be available yet, because it's still processing — but it eventually will be. If the user waits a minute and then requests the download with that same ID, the MP3 is available; the data is eventually consistent, and that is eventual consistency. Okay, so the first route we'll make for our gateway is the login route. We define a function called login, and what this route does is communicate with our auth service to log the user in and assign a token to that user: we set token, error equal to access.login(request). The request here comes from flask — we're importing it at the top — and we're going to create a module called access that contains this login function. Save the file and clear the terminal. In our gateway directory we create another directory called auth_svc, cd into it, and create a file called __init__.py, which essentially marks the directory as a package, plus a file called access.py, which will be the module containing our login function. In access.py we need to import os, and we also need to import requests — and this requests is different from the request we import from flask: requests is the module we use to make HTTP calls to our auth service. Now we can define login, which takes in request (not to be confused with requests), and set auth = request.authorization; as we write this code, pay close attention to request versus requests with an s at the end, because they're different. The request object has this authorization attribute, and if the variable resolves to None — if not auth — there are no authorization parameters in the request, so we return None for the token and, for the error, ("missing credentials", 401). If we save and quit and go back into server.py, you can see we're setting token, error = access.login(...), so access.login needs to return a tuple: the first item in the tuple goes to token and the second to error. Going back to the definition, in the missing-credentials case we return None for the token and an error; upon a successful login we'll return an actual token and None for the error. Next we set basicAuth = (auth.username, auth.password) — we've already gone over basic auth, so you should be familiar with this — and then we set response = requests.post(...), which is the call that will make the HTTP request to our auth service.
Before filling that in, let's save the file — we need to pip3 install requests — and then go back into access.py. This requests.post is going to make a POST request to our auth service, and the arguments we need to pass are the URL endpoint string and our auth. For the URL we create a formatted string and use os.environ.get to read the environment variable holding our auth service's address — we'll create that AUTH_SVC_ADDRESS environment variable later — and we access its /login endpoint; to create the basic auth header we just pass auth=basicAuth. Once the request completes, response contains the result, and we check response.status_code: if it equals 200 we're good, so we return response.text, which is our token, and None for the error; otherwise, not getting a 200 means we didn't get our token, so we return None and (response.text, response.status_code). This should be a double equals, let's fix the formatting and a misspelling, and save the file. Clear the terminal, change directory back to our main gateway directory, and go back into server.py. Once we've called access.login we check if not error: if there is an error, this variable contains it, and if there's no error it's None, so if there's no error we just return the token; otherwise, if there is an error, we return the error — and that's it for our login function. The next route we want to create is our upload route, which is the route we use to upload the video we want converted into an MP3. We'll call it /upload, with methods just ["POST"], and define a function called upload. For this route we need to make sure the user has a token from our login route, so we need to validate the user: we set access, err equal to validate.token(request), and we'll create this validate module as well, passing in the request. So let's go ahead and create that validate module with its token function: save, clear, and mkdir auth — this directory we'll just call auth; the other one we called auth_svc because it communicates with the auth service on behalf of the user or client, whereas auth is used internally: our gateway uses this auth package to validate the tokens it receives from the client. cd into auth, and once again create the __init__.py file to mark the directory as a package, plus a file called validate.py, where we import os and requests once again and define a function called token that takes in a request — because we're validating a token. Remember, the flow is that the client accesses our internal services, our endpoints, by first logging in and getting a JWT, and then every subsequent request carries an Authorization header containing that JWT, which tells our API gateway that the client has access to the endpoints of our overall application.
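Before fleshing out validate.py, here's roughly what the finished auth_svc/access.py module from a moment ago looks like. The AUTH_SVC_ADDRESS variable name is my stand-in for the environment variable being described, so match it to whatever you configure for the gateway later:

```python
import os
import requests


def login(request):
    auth = request.authorization
    if not auth:
        return None, ("missing credentials", 401)

    basicAuth = (auth.username, auth.password)

    # forward the credentials to the auth service's /login endpoint
    response = requests.post(
        f"http://{os.environ.get('AUTH_SVC_ADDRESS')}/login", auth=basicAuth
    )

    if response.status_code == 200:
        return response.text, None  # response body is the JWT
    else:
        return None, (response.text, response.status_code)
```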
So this validate token function we're creating is the function that validates the JWT sent by the client. First we check whether the client has the Authorization header in his or her request at all: if "Authorization" is not in request.headers, we return None for the access and an error of ("missing credentials", 401). Otherwise we set token = request.headers["Authorization"], and if that token doesn't exist we return None as well, with the same ("missing credentials", 401) error. But if the token and the Authorization header do exist, we set response = requests.post(...) — that should be requests, not request, so don't make the same mistake I did — and once again we send an HTTP POST to our auth service: a formatted string that gets our host from the AUTH_SVC_ADDRESS environment variable, this time hitting the /validate endpoint, with headers equal to {"Authorization": token} — we're basically just passing the client's token along to the validate endpoint of our auth service. Then we check the response: if response.status_code equals 200 we're good, so we return response.text and None — response.text will contain the body, which is the access the bearer of this token has, and you'll see what I mean by that when we parse it — else, if the response from our auth service isn't 200, we return None and an error, (response.text, response.status_code). Save that, go back to our root directory, and back into server.py. Okay, let's take a second to understand what's happening with this validation before moving forward. Let's quickly go back into our auth service and its server.py file and look at the login route: if you remember, it takes a username — an email — and a password, and returns a token, a JSON Web Token, created in the create_jwt function. In that function we're encoding a payload, and the payload contains our claims, which are basically the data points within the payload: the username is a claim, the expiration date is a claim, and there's also this claim for admin, which is just a bool that's true or false. As I said earlier, we're just going to allow anybody with admin equal to true access to all of the endpoints of our services. The token returned to the logged-in client contains this payload, encoded and signed, and when that client sends the token in a request and we validate it using the /validate endpoint of our auth service, we first check that the token exists in the request and then decode it. When we decode the token we use the same key we signed it with — the JWT secret — which is how we know this is a valid token: our auth service is the service that signed the token using that key, and we decode it with the same key, so if somebody sent a token signed with a different secret key, it simply wouldn't work. And when we decode the token, the decoded result includes the payload.
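Here's a sketch of auth/validate.py as dictated — again, AUTH_SVC_ADDRESS is my assumed name for the auth service address variable:

```python
import os
import requests


def token(request):
    if "Authorization" not in request.headers:
        return None, ("missing credentials", 401)

    token = request.headers["Authorization"]
    if not token:
        return None, ("missing credentials", 401)

    # pass the client's JWT along to the auth service for validation
    response = requests.post(
        f"http://{os.environ.get('AUTH_SVC_ADDRESS')}/validate",
        headers={"Authorization": token},
    )

    if response.status_code == 200:
        return response.text, None  # decoded claims, as a JSON string
    else:
        return None, (response.text, response.status_code)
```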
That payload tells us who the user is, via their username (which is their email), and their privileges, via the admin claim, which is true or false. So now let's go back to our gateway code. When a user tries to upload, they need to have a token in the Authorization header, which we validate with our token function: the gateway just forwards that token to the auth service's /validate endpoint, and the response we expect back from the auth service is that decoded body. So when we get a successful 200 response, what we return in response.text is the body of the token containing the claims — essentially the decoded token, with its body visible — and it's a string, formatted as JSON. That means the access variable here resolves to that JSON string containing our payload with our claims, so we set access = json.loads(access). If we go to the definition of json.loads, it says it deserializes an instance containing a JSON document to a Python object, so we're essentially just converting this JSON string to a Python object we can work with in code — and what's being converted into a Python object is that payload, so our Python object ends up with those same fields once the JSON is decoded. This access object therefore contains the admin claim, which is a bool, true or false, and if it's true we'll give the user access to all of the endpoints. So we check for that claim: if access["admin"] — which is essentially saying, if the admin claim resolves to true — then we give the user access. If the user does have access, we next make sure there's actually a file to be uploaded in the first place. If a file is being uploaded, the request should contain a dictionary in request.files, so we check: if len(request.files) is greater than one — because we're only going to allow uploading one file at a time for now — or len(request.files) is less than one — because we want there to be exactly one file, not more and not less — then, if either of those is true, we return "exactly 1 file required" and a 400.
This request.files dictionary has a key for the file, which is defined when we send the request, and the actual file as the value, so we should iterate through the key-value pairs in the dictionary: for key, file in request.files.items() — we don't actually need to use the key. For every file (and there should only be one, so this happens only once) we call util.upload, which we haven't created yet but will: it takes as parameters the file, our GridFS instance, our RabbitMQ channel, and the access payload that was just explained above, and it returns an error if something goes wrong; if nothing goes wrong it returns nothing, i.e. None. To check whether something went wrong we just do if error: return error, and after the for loop completes, if we've never returned an error, it was successful, so we just return "success!" with a 200. That's what happens if the user is authorized; if the user isn't authorized, we go down to the else block — the block that gets executed if the user isn't authorized — and simply return "not authorized" and a 401, and that's our upload route. Remember, we still need to go and create this upload function, and I'm going to get to that in a second, because that function is going to be a little involved, but before we do, let's finish up the template for our gateway server. We'll do the final endpoint, server.route("/download"), which is the endpoint that will be used to download the MP3 created from the video, with methods set to just ["GET"]; we define download and, for now, just pass — it's only a template for this endpoint. Lastly we add the if __name__ == "__main__": block to run our server, and again we set our host to 0.0.0.0; our port in this case is going to be 8080.
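Pulling the dictation together, the route handlers in the gateway's server.py look roughly like this. It continues the top-of-file sketch from earlier (server, fs, channel, and the imports are already defined there); the access_payload name is my rename to avoid shadowing the imported access module, and the early return when token validation fails is my addition rather than something spelled out in the audio:

```python
@server.route("/login", methods=["POST"])
def login():
    token, err = access.login(request)

    if not err:
        return token
    else:
        return err


@server.route("/upload", methods=["POST"])
def upload():
    access_payload, err = validate.token(request)
    if err:
        return err  # assumption: bail out early if validation failed

    access_payload = json.loads(access_payload)

    if access_payload["admin"]:
        # exactly one file per request
        if len(request.files) > 1 or len(request.files) < 1:
            return "exactly 1 file required", 400

        for _, f in request.files.items():
            err = util.upload(f, fs, channel, access_payload)
            if err:
                return err

        return "success!", 200
    else:
        return "not authorized", 401


@server.route("/download", methods=["GET"])
def download():
    pass  # filled in later


if __name__ == "__main__":
    server.run(host="0.0.0.0", port=8080)
```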
And that's our gateway service template, including both the login endpoint and the upload endpoint, with the download endpoint to be continued. Save it, and actually, let's look back at the top of the file so I don't confuse you: what we need to create now is this upload function, and util, as you can see, comes from `from storage import util`, so we need to create a storage package and, within that package, a util module. mkdir storage, cd into storage, create our __init__.py file, and then util.py. We start by importing pika and json, and we define a function called upload — remember, the parameters are the file, our GridFS instance, our RabbitMQ channel, and access, the user's access payload. This function is going to be a little complicated, so try to keep up and pay attention; I'll explain things as best I can. Basically, what this upload function needs to do is first upload the file to our MongoDB database using GridFS, and once the file has been successfully uploaded, put a message on our RabbitMQ queue so that a downstream service, when it pulls that message from the queue, can process the upload by pulling the file from MongoDB. This queue is what gives us an asynchronous communication flow between our gateway service and the service that actually processes our videos, and that asynchronicity lets the gateway avoid having to wait for an internal service to process the video before it can return a response to the client. So the first thing we want to do is try to put the file into MongoDB: we use the fs.put function, passing the file we want to store, and if the put is successful a file ID is returned — a file ID object, to be more specific. If it's not successful, we want to catch the error and, for now, just return "internal server error"; if that return happens, our file wasn't uploaded successfully and the function just returned, so there's nothing else to do after that. But if the file was uploaded successfully, we need to create a message to put onto our queue. We set message equal to a dictionary containing the video file ID (the file ID object converted into a string), an empty MP3 file ID within the same dictionary — for now it's None, but downstream it will end up being set to the MP3's file ID in the database; you don't need to think too much about that right now, we'll get to it later — and the username, to identify who owns the file. The username comes from our access payload from the auth service: remember, there was a username claim in there containing the user's email, and in our auth DB the email must be unique, so this is a way to uniquely identify our user. That's our message, and now we need to put it on the queue. We try to publish it using the channel passed to the function, with basic_publish, and we set exchange equal to an empty string, which just means we're using the default exchange — let me explain a bit more about how RabbitMQ works.
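Here's roughly where storage/util.py ends up once the publish arguments and failure handling discussed over the next few minutes are filled in. The dictionary key names (video_fid, mp3_fid, username) are my best guess at the exact spellings being dictated, so treat this as a sketch:

```python
import json

import pika


def upload(f, fs, channel, access):
    # 1. store the raw file in MongoDB via GridFS
    try:
        fid = fs.put(f)
    except Exception as err:
        print(err)
        return "internal server error", 500

    # 2. tell downstream services there's a video to process
    message = {
        "video_fid": str(fid),
        "mp3_fid": None,           # filled in later by the converter service
        "username": access["username"],
    }

    try:
        channel.basic_publish(
            exchange="",            # default exchange
            routing_key="video",    # routed straight to the queue named "video"
            body=json.dumps(message),
            properties=pika.BasicProperties(
                delivery_mode=pika.spec.PERSISTENT_DELIVERY_MODE
            ),
        )
    except Exception as err:
        # no message made it onto the queue, so remove the orphaned file
        print(err)
        fs.delete(fid)
        return "internal server error", 500
```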
For our purposes we're going to use a very basic RabbitMQ configuration and setup, but we need to go over a couple of things so we have a clear understanding of what's happening here. Let's start with the top-level overview of how RabbitMQ integrates with our system. The first thing that's important to understand is that our producer — the service putting the message on the queue — isn't publishing the message directly to the queue; it actually sends messages through an exchange. The exchange is basically a middleman that allocates messages to their correct queue. Throughout the video I've been referring to RabbitMQ as if it were a single queue, but under the hood we actually can — and do — configure multiple queues within one RabbitMQ instance: in our case we'll make use of both a queue we'll call video and a queue we'll call mp3. So when our producer publishes a message to the exchange, the exchange routes the message to the correct queue based on some criteria. How does our exchange route messages to the correct queue in our case? Since we're going with a simple RabbitMQ configuration for the sake of brevity, you'll remember we're using the default exchange, by setting the exchange to an empty string. The RabbitMQ documentation describes the default exchange as a direct exchange with no name (the empty string), pre-declared by the broker — the broker just being our RabbitMQ instance — and this default exchange has one special property that makes it very useful for simple applications: every queue that is created is automatically bound to the default exchange with a routing key which is the same as the queue name. So what does that mean exactly? Simply put, we can set our routing key to the name of the queue we want our message directed to, set the exchange to the default exchange, and the message will end up in the queue specified by the routing key. With that overview, picture our video queue, the exchange, our producer, and our consumer, where the producer is our gateway service, the queue is our video queue, and the consumer is our video-to-MP3 converter. When the user uploads a video, our gateway stores the video and then publishes a message to the exchange that is designated for the video queue; the exchange routes that message to the video queue, and the downstream service consuming the video queue processes it. The consumer in this case is our video-to-MP3 converter service, so it processes the message by pulling the video from MongoDB, converting it to MP3, storing the MP3 in MongoDB, and then publishing a message to the exchange intended for the mp3 queue — but let's not focus on the mp3 queue just yet; we'll get to that later and stick with the video queue for now. Now say our producer is piling on more messages than our one consumer can process in a timely manner. This is where the ability to scale up our video-to-MP3 consumer comes into the picture — but if we're going to scale up, our queue actually needs to be able to accommodate multiple instances of our consumer without bottlenecking the entire flow. We manage that by making use of a pattern called the competing consumers pattern. This pattern simply enables multiple concurrent consumers to process messages received on the same messaging channel; that way, if our queue is packed full of messages, we can scale up our consumers to process the messages concurrently, resulting in more throughput.
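Just to make the pattern concrete, here's a minimal sketch of what a consumer of the video queue could look like with pika — this isn't the converter service we'll write later, only an illustration of several identical workers competing for messages on the same queue:

```python
import pika


def callback(ch, method, properties, body):
    # process the message (e.g. pull the video from MongoDB and convert it),
    # then acknowledge it so RabbitMQ can hand this worker the next message
    ch.basic_ack(delivery_tag=method.delivery_tag)


connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="video", durable=True)

# run several replicas of this script and RabbitMQ round-robins messages
# between them -- that's the competing consumers pattern
channel.basic_consume(queue="video", on_message_callback=callback)
channel.start_consuming()
```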
This results in more throughput. Luckily, by default RabbitMQ dispatches messages to our consuming services using a round-robin algorithm, which satisfies our needs; it basically just means the messages are distributed more or less evenly among our consumers. For example, if we have two instances of the consuming service, the first message goes to one instance, and if another message comes in it doesn't go to the same instance, it goes to the next one; the same applies if there's already a backlog on the queue, the distribution essentially goes in sequence from one consumer instance to the next and then back to the first, so the messages are handed out evenly in round-robin fashion and can be processed concurrently. Now that that's out of the way, let's get back to writing our code. We're going to set our routing key equal to video; the routing key is actually going to be the name of our queue, and we're going to have a queue called video where we put the video messages. The body of the message is going to be json.dumps(message). Similar to json.loads, dumps does the opposite: it serializes a Python object into a JSON-formatted string, and we need that because the message body has to contain a string, not a Python object. The Python object I'm talking about is the one we built here; it's converted into a JSON string that becomes the body of our message and gives our downstream service all of the information it needs to process the video conversion. We also need to set properties equal to pika.BasicProperties, and within those properties we set the delivery mode to pika.spec.PERSISTENT_DELIVERY_MODE. This part is very important to make sure that our messages are persisted in the queue in the event of a pod crash or restart. Since our RabbitMQ pod is a stateful pod within the Kubernetes cluster, we need to make sure that when messages are added to the queue they're actually persisted, so that if the pod fails or is reset, the messages are still there when it spins back up; if we don't set this, the messages would all be gone once the pod is restored to its original state. So essentially we need to make our queue durable, which means the queue itself is retained even after a pod restart, and we also need to make sure the messages within the queue are durable, meaning they're retained in the event of a pod restart or crash as well. When we create the queue we can configure it to be durable, but that alone doesn't mean the messages are persisted, it only means the queue will be; for each individual message that we send to the queue we need to set this delivery-mode configuration to tell RabbitMQ that the message should be persisted until it has been removed from the queue. Anyways, we're going to try to put our message onto the queue, and if that doesn't work out we need to first delete the file from MongoDB.
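Pulling these pieces together, here is a rough sketch of how that publish could look inside the gateway's upload helper. The function signature and the message fields are assumptions in line with what we've described so far, and the reason for deleting the file when the publish fails is explained just below.

```python
import json

import pika


def upload(f, fs, channel, access):
    """Store the uploaded video in GridFS and announce it on the video queue.

    A sketch only: `fs` is a gridfs.GridFS instance, `channel` is an open pika
    channel, and `access` is the decoded JWT payload.
    """
    try:
        fid = fs.put(f)
    except Exception:
        return "internal server error", 500

    message = {
        "video_fid": str(fid),  # GridFS returns an ObjectId; the body must be JSON-serializable
        "mp3_fid": None,
        "username": access["username"],
    }

    try:
        channel.basic_publish(
            exchange="",           # default exchange
            routing_key="video",   # routed to the queue named "video"
            body=json.dumps(message),
            properties=pika.BasicProperties(
                delivery_mode=pika.spec.PERSISTENT_DELIVERY_MODE
            ),
        )
    except Exception:
        # No queue message means the stored file would never be processed.
        fs.delete(fid)
        return "internal server error", 500
```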
If there's no message on the queue for a file but the file still exists in the database, that file is never going to get processed, because the downstream service only ever learns about a file from a message; we'd just end up with a bunch of stale files in the database if we don't delete them whenever a message can't be put onto the queue. So if the message is unsuccessfully added to the queue, we'll do fs.delete, and we need to delete using the file ID object, not the string version, which is defined above. Once we've deleted the file we can return an internal server error. In this failure case we have neither a message on the queue telling the downstream service to process the file nor a file in the database, so it's a complete failure; we just return internal server error, and at that point the user can upload the file again if they want to try again. That's going to be it for our upload function; we're not using this error variable yet, but we might later on depending on time constraints, so we'll just leave it for now. Let's save and quit, and clear the terminal. At this point we can go back to our root directory and start creating the deployment for this gateway service. To start, we'll freeze our dependencies into a requirements.txt file, and then we can create our Dockerfile. The Dockerfile is going to be quite similar to the one we used for our auth service, so we can just go to the auth service, copy that Dockerfile over, and keep pretty much everything the same except that we change the exposed port to 8080 and drop the line we don't need; other than that the file is identical, so save it. Now let's build the image with docker build, and once it's built, tag it like before with docker tag, using your Docker Hub username, but this time the repository is gateway instead of auth, with the latest tag. At this point your Docker Hub account only has one repository, the one for our auth service; if we wanted to we could create the gateway repository manually, but it will actually create itself once we push, so if we just run docker push with <username>/gateway:latest and refresh the page, you'll see that the gateway repository was created for us automatically, and inside it the latest tag was pushed a few seconds ago. So now we have our gateway repository in our Docker Hub account, and since we can pull the image containing our gateway source from the internet, we can create our Kubernetes manifests directory and change into it. Similar again to our auth service, we create a gateway deployment YAML file; this configuration is also going to be very similar to the auth service's, except this time we pull the image from the gateway repository (remember, your username, then gateway). Like the auth service we're going to reference a ConfigMap, called gateway-configmap, and let's not make the same mistake as last time, this should be indented; and for the secretRef we're going to create a secret as well.
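For reference, here's a sketch of what that gateway deployment manifest might look like; the replica count of two matches what we run later, but the exact file layout and the gateway-secret name are assumptions modelled on the auth service's deployment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
  labels:
    app: gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
        - name: gateway
          image: <your-dockerhub-username>/gateway   # the image pushed above
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: gateway-configmap
            - secretRef:
                name: gateway-secret
```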
That's going to be it for our deployment. Now let's make our ConfigMap. For the environment variables we need our auth service address; in Kubernetes a service name resolves to that service's cluster IP, so we can just put auth followed by the port the service listens on, 5000, and that will be the address of our auth service within the Kubernetes cluster. Save that, and now let's create our secret; as of right now I don't think we have any secrets for the gateway service, so we'll just put a placeholder for now and add secret variables later if we need them. Next we create service.yaml. The name of the service will just be gateway, and this service will have an internal cluster IP that is only reachable from within our cluster; but our gateway API needs to be accessible from outside the cluster, so we're actually going to need another configuration called an Ingress to route traffic to the gateway service, which I'll get into in a second. We set the port to 8080, the target port to 8080 as well, and the protocol is of course TCP. Before we write the Ingress, let's take a second to understand what an Ingress is in the context of a Kubernetes cluster, and to do that you first need to understand what a service is. With this configuration file we're creating a service, and you can really just think of a service as a group of pods. In our case we want the gateway service to be able to scale up to multiple instances, or multiple pods, and the service groups all of those instances together using a selector; the label selector tells the service which pods are part of its group, or under its umbrella, so any pod with this label is recognized by the service as belonging to it. That means we don't have to think about individual IPs for each pod, we don't have to keep track of the IPs of pods that go down or are recreated, and we don't have to think about how requests to our service are load balanced across individual pods; the service abstracts all of that away from us. We can just send requests to the service's cluster IP, which remember is its internal IP, and assume those requests will be distributed sensibly among our pods, based on something like round robin, for example, which we already went over when explaining RabbitMQ.
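Before we get to the Ingress, here's a sketch of the ConfigMap and service.yaml just described; the environment variable name AUTH_SVC_ADDRESS is an assumption, so use whatever key your gateway code actually reads.

```yaml
# gateway/manifests/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-configmap
data:
  AUTH_SVC_ADDRESS: "auth:5000"   # the service name resolves to the auth service inside the cluster
---
# gateway/manifests/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  type: ClusterIP          # internal IP, only reachable inside the cluster
  selector:
    app: gateway           # groups every pod labelled app: gateway
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
```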
Now that we have a clear picture of a service, we can get into what an Ingress is. Our service and its pods sit inside the cluster, which is our private network, but we need to allow requests from outside the cluster to hit our gateway service's endpoints, and we do that by making use of an Ingress. Simply put, an Ingress consists of a load balancer that is essentially the entry point to our cluster, plus a set of rules; those rules basically say which requests go where. For example, we'll have a rule that says any request hitting our load balancer via the domain name mp3converter.com should be routed to our gateway service. Since the load balancer is the entry point to our cluster, it can route traffic to cluster IPs within the cluster, so in this case it routes requests for the configured mp3converter.com domain to our gateway service's internal cluster IP; and if we wanted to, we could even add a rule to our Ingress that routes requests for apples.com to a different service in our cluster, for example. That's pretty much everything you need to know about Ingress for the purposes of this video, so let's get into writing our Ingress configuration file. For the apiVersion we set networking.k8s.io/v1, the kind is going to be Ingress this time, and we'll name it gateway-ingress. We're going to use the default Ingress, which is basically an nginx Ingress, and we can set some configuration for it using this annotations key. We want to make sure our Ingress allows the upload of some relatively large files, so we set the proxy body size to zero, which essentially allows any body size; of course you'd want to fine-tune configurations like these in a production application, but again our focus is the overall architecture, so we're configuring this in the easiest way possible to get things done. That was a typo, and we'll add two more configurations: a proxy read timeout set to 600 and a proxy send timeout also set to 600. Now for the spec: in the rules we route requests for the host mp3converter.com to our gateway service; remember our service name is gateway and our service is available at port 8080. Let's save this file.
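Here's a sketch of that ingress.yaml; the nginx annotation keys shown are the usual ingress-nginx ones, but double-check them against your controller's documentation, since the exact keys aren't spelled out above.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"       # allow arbitrarily large uploads
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
spec:
  rules:
    - host: mp3converter.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway
                port:
                  number: 8080
```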
You're probably wondering how our Kubernetes cluster knows when we're making a request to this host. Basically, we're going to make requests to this host on our local machine resolve to localhost, and then tunnel requests from localhost to minikube, which sounds a bit complicated, but just bear with me and we'll get to it. First we need to make sure mp3converter.com gets routed to localhost, so open the file /etc/hosts, for which you'll need sudo permissions, and map the address 127.0.0.1, the loopback address that localhost also resolves to, to mp3converter.com. Once we do this, whenever we enter mp3converter.com into the browser or send a request to that host, it resolves to localhost; save that. Now we need to configure a minikube add-on to allow Ingress, so run minikube addons list to see the add-ons that are available; there's an ingress add-on that's currently disabled, so we run minikube addons enable ingress. Once that's done, you can see it says that after the add-on is enabled you should run minikube tunnel, and your Ingress resources will be available at the loopback address that we mapped to mp3converter.com. So basically, whenever we want to use or test this overall architecture we'll run the minikube tunnel command, and while it's running, requests sent to our loopback address go to our minikube cluster via the Ingress; since we mapped mp3converter.com to our loopback address, requests to mp3converter.com go through this minikube tunnel. Just keep in mind that if we Ctrl+C out of this and cancel the process, we're no longer tunneling the Ingress; as it says here, please do not close this terminal, as the process must stay alive for the tunnel to be accessible. So whenever we test, we're going to have to run minikube tunnel. We'll get to end-to-end testing a little later; we're not going to test until we've configured our queue and our consumer service as well as our gateway service, which will be the producer. And that is how we're going to route requests into our cluster and on to our API gateway. If we go and check K9s you may or may not see these two new items, but remember we've only deployed our auth service so far; we still need to deploy this gateway service, and similar to the auth service we're going to have two replicas of the gateway deployed. So we can quit K9s and attempt to apply the configuration we have here, and it looks like we have an issue; going into our secrets file, it looks like we forgot to capitalize Secret, so let's fix that and apply again, and now all of the resources, or objects, were created successfully. Going into K9s, we see an error on the gateway pods; it looks at first like an error pulling the image, but it's actually not: the gateway is failing because we basically don't have the RabbitMQ queue deployed yet, and it's trying to connect to a host we haven't created. That's fine for now; it just won't be able to start, so to stop it from continuously restarting, let's scale it down until we create the RabbitMQ deployment. We can do kubectl scale deployment, setting the replicas to zero for the gateway, and it says the gateway deployment was scaled; now the gateway pods are gone and nothing is trying to spin them up, because first we need to create RabbitMQ. Let's quit K9s, and from here we can start to get into the RabbitMQ work. We need to create and deploy a RabbitMQ container to our Kubernetes cluster, so change directory back to the source directory; alongside our auth and gateway directories we'll make a directory for RabbitMQ, which we'll just call rabbit, and cd into it. For RabbitMQ, instead of making a Deployment like we did for the other two services, we're going to make a StatefulSet, because we want our queue to remain intact even if the pod crashes or restarts; we want the messages within the queue to stay persistent until they've been pulled off the queue. So let me go ahead and explain what a StatefulSet is.
A StatefulSet is similar to a Deployment in that it manages the deployment and scaling of a set of pods based on an identical container spec, but unlike a Deployment, each pod in a StatefulSet has a persistent identifier that it maintains across any rescheduling. This means that if a pod fails, the persistent pod identifiers make it easier to match existing volumes to the new pods that replace the failed ones. This is important because if we were to have multiple instances of, say, a MySQL server, each individual instance would reference its own physical storage; there would be a master pod that actually persists data to its physical storage, and the rest of the pods would be read-only replicas whose physical storage continuously syncs with the master's, since that's where all of the data persistence happens. Most of the details surrounding this aren't related to our architecture, so I'll spare you; to be honest, there's probably a better way to configure our RabbitMQ broker within our cluster, but this configuration will work just fine for our purposes, and we'll only be making use of one replica, which is enough for the competing consumers pattern mentioned earlier. Anyways, the most important configuration you need to understand for this particular service is how we're going to persist the data in our queues. Remember, if our instance fails we don't want to lose all of the messages that haven't been processed, because then the users who uploaded the videos that produced those messages would simply never hear back from us. So basically what we want to do is mount physical storage on our local machine to the container instance; if the container instance happens to die for whatever reason, the storage volume that was mounted remains intact, and when a new pod is redeployed it will once again have that same physical storage mounted to it. To show you what I mean, I'll show you what the configuration file for our StatefulSet is going to look like; we haven't written it yet, but just follow along so you can understand where we're going with this. As you can see, this configuration file follows a similar pattern to all of the other configuration files, so I'm not going to go into detail about every single line; if you need to, please refer to the Kubernetes API documentation that I introduced earlier. Let's have a look at containers: similar to our Deployments, this determines the container that gets spun up, and the image we're using here is a RabbitMQ image, but the part we need to pay attention to starts at this volumeMounts mountPath. We want to mount a storage volume to our container, and this mountPath configures where in our container the physical storage volume is mounted; basically, anything saved to this /var/lib/rabbitmq directory within the container is actually being saved to the physical storage volume, which persists even if the container fails. This particular directory is configured as the mount path for a reason: it's where RabbitMQ stores the queues when we create a durable queue, and the messages when we configure them to be persistent. I'll go into detail about how we make the queue durable and the messages persistent a little later; for now you just need to understand that we're mounting physical storage to this path, and this is the path where RabbitMQ saves queues and messages. The rest of the configuration under volumes is basically just the configuration for the physical volume to be mounted to the container; for example, we need to create an additional resource called a PersistentVolumeClaim, and this config links the StatefulSet to the PersistentVolumeClaim we're going to create, which we'll call rabbitmq-pvc.
So what is a PersistentVolumeClaim? In simple terms, the PersistentVolumeClaim, or PVC, is bound to a PersistentVolume; within the PVC configuration we set how much storage we want to claim from the PersistentVolume, and the PersistentVolume is what actually interacts with the physical storage. I know there are many layers of abstraction here, but again, for our purposes we really don't need to go into too much detail; all you really need to understand is that this configuration makes the directory where RabbitMQ stores queues and messages live on persistent storage, so whenever the pod dies, those queues and messages are retained and the new pod just reconnects to that persistent volume. So let's write up this configuration. Now that we understand what a StatefulSet is, we'll create a file called statefulset.yaml, and actually we want to make it in a manifests directory, so mkdir manifests, cd into it, and create statefulset.yaml there. We'll set apiVersion to apps/v1, and this time kind is going to be StatefulSet; under metadata the name is rabbitmq. Now for the spec: we're not going to use the serviceName, so we'll just set it to not-applicable, and the selector matchLabels is app: rabbitmq as usual. Then the template, which, similar to a Deployment, is the template for our pods: metadata labels app: rabbitmq, and in the template spec, or the pod spec to be more specific, the container name is rabbitmq and the image is rabbitmq:3-management. This is the official RabbitMQ image, and we're using the one that includes the management plugin because we want the graphical user interface for managing our queues. Then we set ports, and this container needs to include two: the port for accessing the graphical user interface, and the port that handles the actual messages; for example, the messages that we send to the queue are handled on a separate port from the one that handles our connection to the management interface, and you'll see what I mean by that. We'll name the first port http, because we use HTTP to access the graphical user interface, with protocol TCP and containerPort 15672; we also need a port for AMQP, which stands for Advanced Message Queuing Protocol and is just the protocol we use to send messages to the queue, with containerPort 5672. After ports we'll do envFrom; we're still going to use a ConfigMap, which we'll call rabbitmq-configmap, and a secretRef with the name rabbitmq-secret. We also need a volumeMounts key where we specify the mountPath, the path within the container that we want mounted, which is /var/lib/rabbitmq; the RabbitMQ server stores the persisted data, like the messages and the queues, in this directory, so we want to mount this path to our volume, and our volume is essentially storage that we connect to our Kubernetes cluster, which is where persisted data is going to be stored. The name will be rabbitmq-volume, and within the spec, at the same level as containers, we define volumes: name rabbitmq-volume, using a persistentVolumeClaim with claimName rabbitmq-pvc, which we still need to create. That's going to be it for this StatefulSet configuration, so we can save it.
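Putting that together, here's roughly what statefulset.yaml looks like; treat it as a sketch and adjust the ConfigMap and Secret names to whatever you actually create.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: "not-applicable"
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3-management       # includes the management UI
          ports:
            - name: http                     # management console
              protocol: TCP
              containerPort: 15672
            - name: amqp                     # message traffic
              protocol: TCP
              containerPort: 5672
          envFrom:
            - configMapRef:
                name: rabbitmq-configmap
            - secretRef:
                name: rabbitmq-secret
          volumeMounts:
            - mountPath: /var/lib/rabbitmq   # where RabbitMQ persists queues and messages
              name: rabbitmq-volume
      volumes:
        - name: rabbitmq-volume
          persistentVolumeClaim:
            claimName: rabbitmq-pvc
```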
Now we need to create our PersistentVolumeClaim, so we create pvc.yaml with apiVersion v1 and kind PersistentVolumeClaim; under metadata the name is rabbitmq-pvc, which we just referenced in our StatefulSet file. In the spec, accessModes is ReadWriteOnce, resources requests storage is one gigabyte, and storageClassName is standard; save that. As usual we also need to create service.yaml: apiVersion v1, kind Service, the name of the service is rabbitmq, and in the spec the type is again ClusterIP, so our service only has an internal IP address accessible within the cluster, with selector app: rabbitmq. For the ports, remember we need the port for our graphical user interface, basically the RabbitMQ management console, and then the port for actual message transmission. So under ports: name http, protocol TCP, port 15672 with the targetPort the same; then the port for message transmission, named amqp again, protocol TCP, port 5672 with the same targetPort, and we can save that. Actually, one second, we need to go back in here, because we're going to need to allow access to the management port from a web browser, meaning access from outside the cluster directly to that port, so that we can reach RabbitMQ's management console. To do that we create an Ingress for this port as well: vim ingress.yaml, apiVersion networking.k8s.io/v1, kind Ingress, metadata name rabbitmq-ingress, and in the spec rules the host is rabbitmqmanager.com, which we also need to configure in our /etc/hosts file; under http, paths, pathType Prefix, and the backend service is rabbitmq, more specifically rabbitmq's port number 15672, which is the port number for the management console. Save that, open /etc/hosts again, and add the same kind of mapping as before, this time for rabbitmqmanager.com (I think that's what we called it, yes, rabbitmqmanager.com), then save and close it. Finally, let's make a ConfigMap; I don't think we currently need any environment variables, so it's just a template in case we need to add some, and we'll do the same thing for a Secret. That should be everything for the manifests.
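For reference, here are sketches of the pvc.yaml and service.yaml we just walked through.

```yaml
# rabbit/manifests/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
---
# rabbit/manifests/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  type: ClusterIP
  selector:
    app: rabbitmq
  ports:
    - name: http          # management console
      protocol: TCP
      port: 15672
      targetPort: 15672
    - name: amqp          # message traffic
      protocol: TCP
      port: 5672
      targetPort: 5672
```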
So let's try to apply all of this. We hit a couple of errors: the first one is just a spelling error in the ConfigMap, I misspelled the key metadata, so let's go in there, fix it, and apply again. Next, in service.yaml it says the apiVersion isn't set, and that's because I wrote "ape version" instead of apiVersion, so fix that and apply again. Then it says statefulset.spec.template has an unknown field "volumes"; I said volumes should be at the same level as spec, but actually the issue is the opposite of what I said, volumes should be at the same level as containers, under the template spec, so in statefulset.yaml we need to move that block in one level, save, and apply again. Now it seems all of the objects were created successfully. Going into K9s, we see that our RabbitMQ pod is pending, so something's not working as expected; let's do kubectl describe pod rabbitmq, and we have a warning event: failed scheduling, one pod has unbound immediate PersistentVolumeClaims. So it seems there's an issue with our PVC; kubectl describe pvc shows another warning event, provisioning failed, saying it can't find the storage class, because the storage class should be standard and I have a typo ("stranded"). So let's vim into our PVC file, fix it to standard, and try to apply again; actually, for PersistentVolumeClaims, as it says here, the spec is immutable after creation, so we're going to have to delete the resources created from these files. Really we only need to delete the PersistentVolumeClaim, but it doesn't matter, we'll just delete all of the resources created with these files using kubectl delete with the -f flag for files, and then apply them again. This time they're created successfully; in K9s the container is creating and everything is going as expected, so now we have our RabbitMQ instance running within our Kubernetes cluster. Let's quit K9s. Since we configured an Ingress for this and added the route to our /etc/hosts file, we should be able to access rabbitmqmanager.com and be taken to the RabbitMQ manager, so let's try that. It's not working, because we forgot to run minikube tunnel; clear the terminal and run minikube tunnel, and as you can see it starts a tunnel for both our gateway Ingress and our rabbitmq Ingress. Refresh the page, accept the certificate risk, and there we go, we have access to our management console; let me zoom in. The username and password for this should just be admin... and login failed, so maybe those aren't the correct credentials; a quick Google search for the RabbitMQ management console default credentials says the username and password are both guest, so let's try guest and guest, and there we go, we're logged in (zoomed in a little too much). This is what the management console looks like; you don't need to be overwhelmed by all of it, we're going to limit our focus to the Queues section, where we'll create our queues using "Add a new queue". We're going to make use of two queues; one of them is going to be called video, which is where we put our video messages. To give you a bit of a refresher, remember that while the tunnel is running we can't exit that terminal, so open a new tab, zoom in, and change directory into the project source; actually, what I want to show you is in the gateway directory.
In our server.py, when we upload we use this util.upload, and in there we're putting the message on the queue called video; the routing key is just the queue name. So in the console we actually need to create that queue: we'll create a queue called video, and the durability needs to be set to durable, because if it's transient, then if the container is restarted or shut down the queue will no longer exist and you'd have to create it again; durable means the queue is essentially persisted, so if the container restarts the queue will still exist afterwards. Add the queue, and now we have our video queue. Now let's see if we can start up our gateway server. Quit here and check K9s quickly; we want to spin our gateway service back up, so change directory into gateway/manifests, kubectl apply all of the files in this directory, and open K9s again, and as you can see our gateway service is now able to start up with no issues. At this point we have our gateway service, our auth service, and our RabbitMQ service up and running within our cluster, which means that right now we can upload files and messages for those uploads will be added to the queue; but we have no consumer service yet to consume the messages from the queue and actually convert the files. So we need to create an additional service, and this service is going to pull the messages the gateway adds to the queue, convert the videos to MP3, store the MP3 in MongoDB, and put a message onto another queue called mp3, which yet another downstream service will pull from; but I don't want to confuse you all too much, so let's just do this step by step. Leave K9s and go back to our source directory, because we need to create another service: mkdir converter. This converter service converts videos to MP3s, and it's the consumer service that pulls messages off the queue, so it knows which videos it needs to convert, where they're stored, etc. Make the directory, cd into it, clear the terminal, and create a file called consumer.py. In this file we import pika, of course, because we need to pull messages off the queue, plus sys, os, and time; from pymongo we import MongoClient, and we also import gridfs, because we need to take the video files from MongoDB and upload the MP3 files back to MongoDB; and from convert, which is a package we're going to create ourselves, we import to_mp3, a module within that package. We then define a function called main, and our client is going to be a MongoClient pointed at our MongoDB host, which is on our local machine, not deployed in our cluster, remember, so we use the host host.minikube.internal, which basically gives us access to our host system's local environment, and the MongoDB port, 27017. Then db_videos = client.videos; this client instance gives us access to the databases we have, so we can also do db_mp3s = client.mp3s, and these databases will exist within our MongoDB. Then we need our GridFS setup.
We'll do fs_videos, an instance of GridFS to which we pass db_videos, and fs_mp3s the same way with db_mp3s. Now we configure our RabbitMQ connection: connection = pika.BlockingConnection with pika.ConnectionParameters(host="rabbitmq"), just like before; this is possible because our service name is rabbitmq, and the service name resolves to the host IP for our RabbitMQ service. Then channel = connection.channel(). What we need to do now is create the configuration to consume messages from our video queue, and to do that we use channel.basic_consume. The first argument we pass is the queue, and we're going to get the queue name from the environment: the queue we want to consume from in this case is the video queue, but just in case we want to change it in the future, we'll keep the queue configuration in an environment variable, so we'll do os.environ.get and name the environment variable for the video queue. We also need a callback function that gets executed whenever a message is pulled off the queue, so we set on_message_callback=callback, and then define that callback function above. Whenever a message is taken off the queue by this consumer service, the callback is executed with the parameters channel, method, properties, and body, and what we want to do when we get the message is convert the video to MP3, so we call to_mp3.start; there's going to be a function in our to_mp3 module called start, and we pass it the body of our message, fs_videos, fs_mp3s, and the channel. When we create that function you'll see why we're doing all of this, but for now we're just creating the callback that calls it. We set the result to an error variable, and if there is an error, we send a negative acknowledgment to the channel with basic_nack, which basically means we're not acknowledging that we've received and processed the message, so the message won't be removed from the queue; we want to keep messages on the queue when there's a failure to process them, so they can be processed later by another process. Here we pass delivery_tag=method.delivery_tag; the delivery tag uniquely identifies the delivery on a channel, so when we send the negative acknowledgment with that tag, RabbitMQ knows which message hasn't been acknowledged and knows not to remove it from the queue. On the other hand, if the error is None, there wasn't an issue with the conversion, so we just acknowledge the message with basic_ack, again with delivery_tag=method.delivery_tag; method is one of the parameters passed to the callback, and that's how we're keeping track of the delivery tag. That's it for our callback function, so let's format the file, print a message that says we're waiting for messages and that you can press Ctrl+C to exit, and then call channel.start_consuming(), which essentially runs our consumer so that it's listening on the channel where our video messages are being put. Then we add the if __name__ == "__main__" block: we try to run our main function and we except KeyboardInterrupt, so main runs until we press Ctrl+C and interrupt the process; when that happens, the event is caught in this try/except, we print "Interrupted", try sys.exit(0), and in the except SystemExit we call os._exit(0). This is basically us gracefully shutting down the service in the event of a keyboard interrupt, and that's going to be it for our consumer.
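Here's a sketch of the whole consumer.py as just described; the environment variable name VIDEO_QUEUE and the "video" fallback are assumptions, as is the exact shape of the shutdown handling.

```python
import os
import sys

import gridfs
import pika
from pymongo import MongoClient

from convert import to_mp3


def main():
    # MongoDB runs on the host machine, not inside the cluster
    client = MongoClient("host.minikube.internal", 27017)
    db_videos = client.videos
    db_mp3s = client.mp3s

    # GridFS wrappers for reading uploaded videos and storing converted MP3s
    fs_videos = gridfs.GridFS(db_videos)
    fs_mp3s = gridfs.GridFS(db_mp3s)

    # "rabbitmq" resolves to the RabbitMQ service inside the cluster
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()

    def callback(ch, method, properties, body):
        err = to_mp3.start(body, fs_videos, fs_mp3s, ch)
        if err:
            # conversion failed: leave the message on the queue for a retry
            ch.basic_nack(delivery_tag=method.delivery_tag)
        else:
            ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(
        queue=os.environ.get("VIDEO_QUEUE", "video"),  # assumed variable name
        on_message_callback=callback,
    )

    print("Waiting for messages. To exit press CTRL+C")
    channel.start_consuming()


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("Interrupted")
        try:
            sys.exit(0)
        except SystemExit:
            os._exit(0)
```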
Now we need to go and create this convert package and the to_mp3 module, and we also need to install some things; we also forgot to create a virtual environment, so let's save this, run python3 -m venv venv, and activate the virtual environment with source venv/bin/activate, so we're now using our converter virtual environment. Now let's cat consumer.py, or maybe it's better to head it, so we can see which dependencies to install: pip3 install pika and pymongo (and I don't actually need the comma there), and it looks like we're good to go, so let's clear the terminal. Now we create our convert package: mkdir convert, cd into it, create the __init__.py file, and create a module called to_mp3. In it we need to import a couple of things: pika as usual, json, tempfile, which I'll show you the use of in a second, and os as well; from bson.objectid we import ObjectId, and I'll show you what that's used for soon too; and lastly we import moviepy.editor, which we need to install. Then we define a function called start that takes a message, a GridFS videos instance, a GridFS mp3s instance, and a channel, and for now we just pass. Just to recap: if we go back into our consumer.py file, we're importing this to_mp3 module that we just created, and it contains the start function, which is the one we're creating now. We're going to use moviepy to convert our videos to MP3, so we need pip install moviepy, and from there we can start writing the code for our start function. The first thing we need to do is load our message, which essentially turns it into a Python object; let me just install something really quick. We're deserializing a string containing a JSON document into a Python object, so at this point our message variable contains the Python-object version of our message. The first thing we want to do before converting the file is create an empty temporary file and write our video contents to that temporary file.
So we'll do tf = tempfile.NamedTemporaryFile(); as you can see in the docs, this creates and returns a temporary file, and it's a named temporary file as opposed to a temporary file that does not have a name, so if we go to the definition you can see it has a name attribute where you can access the file's name. This creates a temp file in a temp directory, and we can use that temp file to write our video data to. For the video contents we set out equal to fs_videos.get; now we're getting our video file from GridFS, and this out object has a read method, so we'll be able to write the data stored in out to the file. We need to pass ObjectId of the message's video fid: if you remember from our gateway service's storage util function, we set video_fid to the file ID that was given to us after we put the video file into MongoDB, but we also had to convert it into a string, because the fid that comes back from fs.put is actually an ObjectId object and we needed a string to put it into our message. So here we take our string version of the fid and convert it back into an ObjectId, because we can't get the file from MongoDB using the string version of the ID. Just really quick, let's save; actually, it says "no name ObjectId", and I guess that's just a linting error. Once we have the video file data, we add the video contents to the empty temp file: we take the empty file, the tf variable, and write to it the data returned from the read method on the out object, which lets us read the bytes stored in out; so the bytes returned from read get written to our temporary file. Then we want to convert our video file into audio. Our temp file currently has the video file, so to create audio from the temp video file we do audio = moviepy.editor.VideoFileClip(tf.name).audio; tf.name actually resolves to the path of the temporary file, and we extract the audio from that clip, so all of this resolves to our audio being stored in this audio variable. The last thing we need to do is close our temp file; with this tempfile module, the temp file is automatically deleted after we're done with it, so once we close the file it's deleted and we don't need to worry about cleaning it up. Now that we've extracted the audio into the audio variable, we need to write the audio to its own file, and we do that by first creating a path for our audio file: tf_path = tempfile.gettempdir() plus our desired file name; gettempdir gives us the directory on our OS where the temp files are stored by this tempfile module, so we take that directory and append our desired file name, which is going to be the video's file ID.
We take that from the message's video fid and append .mp3, so what we're doing here is first taking the path to our temp directory and appending our desired MP3 file name, which leaves us with the full path to the file. We want to name the MP3 file after just the video's file ID because it's a unique ID, so we don't have to worry about collisions between file names; every video file has a unique ID, and that becomes the name of the MP3 file as well. Then we do audio.write_audiofile and write the file to the path we just created; the audio object created with moviepy has a write_audiofile method, and we basically just need to tell it where we want to write the file and what to name it, which is why we pass the path we created. Once the temporary audio file is created, we can save it: first we open the file at the path we just created for reading, set data equal to that opened file's read(), and then set fid equal to fs_mp3s.put(data), putting the data we extracted from the file into GridFS; so we're storing our MP3 file in GridFS. At this point we don't need that temp file anymore, so after f.close() we also need to do os.remove(tf_path), because remember, in this case the write_audiofile method created the temporary file, not the tempfile module, so we have to delete this temp file manually ourselves. Lastly, we need to update our message: remember we have this mp3_fid field in our message, and we want to set it equal to the string version of the fid ObjectId that we got from uploading the MP3 to MongoDB. Then we need to put this message onto a new, different queue that we need to create, called the mp3 queue. So we try channel.basic_publish, using the default exchange again by passing an empty string; our routing key is going to be mp3, for the mp3 queue, but remember we're getting the names of our queues from the environment, so we'll just call this environment variable for the mp3 queue; the body of course is json.dumps(message), because we need to convert the Python object into JSON; and we need to make sure the message is persisted until it's processed, so we pass pika.BasicProperties and, like before, set the delivery mode equal to pika.spec.PERSISTENT_DELIVERY_MODE. If we can't put the message onto our queue, we catch the exception as an error, just in case we need that variable, and then do fs_mp3s.delete(fid): basically, if we can't successfully put the message on the queue saying there's an MP3 available, we want to delete the actual MP3 from MongoDB as well, because without the message, the MP3 in MongoDB would never get processed anyway, so we need to make sure we remove it; and in that case we just return "failed to publish message".
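Putting the whole module together, here's a sketch of convert/to_mp3.py along the lines just described; the MP3_QUEUE variable name and the message field names follow the conventions used earlier but are assumptions as far as exact spelling goes.

```python
import json
import os
import tempfile

import moviepy.editor
import pika
from bson.objectid import ObjectId


def start(message, fs_videos, fs_mp3s, channel):
    message = json.loads(message)

    # write the video from GridFS into a named temporary file
    tf = tempfile.NamedTemporaryFile()
    out = fs_videos.get(ObjectId(message["video_fid"]))
    tf.write(out.read())

    # extract the audio track; tf.name is the path of the temp file
    audio = moviepy.editor.VideoFileClip(tf.name).audio
    tf.close()  # a NamedTemporaryFile is deleted automatically on close

    # write the audio to <tempdir>/<video_fid>.mp3 and store it in GridFS
    tf_path = tempfile.gettempdir() + f"/{message['video_fid']}.mp3"
    audio.write_audiofile(tf_path)

    with open(tf_path, "rb") as f:
        data = f.read()
    fid = fs_mp3s.put(data)
    os.remove(tf_path)  # write_audiofile created this file, so clean it up manually

    message["mp3_fid"] = str(fid)

    try:
        channel.basic_publish(
            exchange="",
            routing_key=os.environ.get("MP3_QUEUE", "mp3"),  # assumed variable name
            body=json.dumps(message),
            properties=pika.BasicProperties(
                delivery_mode=pika.spec.PERSISTENT_DELIVERY_MODE
            ),
        )
    except Exception as err:  # kept in case we want to log it later
        # without a queue message the stored MP3 would never be processed
        fs_mp3s.delete(fid)
        return "failed to publish message"
```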
The reason this works: let's save this and go back into our consumer.py file. If you remember, when this start function fails we return an error, that error gets stored in the error variable, and if there is an error we send a negative acknowledgment for the message on our video queue, which means the message will not be removed from the queue and can be processed again later. That's why we need the start method to fail completely, because we're going to attempt this whole function again if something goes wrong, and that's why we need to delete the file from MongoDB if we can't put the message onto the queue. Let's format and save, and that's going to be it for our consumer code. What we need to do now is create our Dockerfile and our Kubernetes configuration to create this service within our cluster. So pip freeze our requirements into a requirements.txt file as usual, then create a Dockerfile; this Dockerfile is once again going to be mostly the same as the previous one, so copy it over, but we need to add an additional dependency called ffmpeg, which the moviepy module needs, and change the entrypoint to consumer.py. I initially changed the exposed port to 3000, but actually that was a mistake: since this is a consumer, we're not going to expose any port at all, because it's not a service we make requests to, it just consumes messages from a queue and acts on its own. Save that, run docker build, and once that's done docker tag it, again with your Docker Hub username, and this time we'll call the repository converter with the latest tag, then docker push <username>/converter:latest. If you go to your Docker Hub account you should see that you now have a converter repository as well, with the latest tag. Now we make the manifests directory; for this one we just need a YAML file for the deployment, the secret, and the ConfigMap, so we're not going to create a service configuration. So: apiVersion apps/v1, kind Deployment, metadata name converter with labels app: converter; in the spec we'll do four replicas for this one, selector matchLabels app: converter, and the strategy is type RollingUpdate, with rollingUpdate maxSurge set to double the number of replicas. Now for the template: labels app: converter, and in the spec (actually, spec should go back one level) the containers: name converter, image <your username>/converter, and envFrom with a configMapRef named converter-configmap, which we'll create, plus a secretRef named converter-secret. That's going to be it for the deployment, so save it, and now create the ConfigMap: apiVersion v1, kind ConfigMap, metadata name converter-configmap, and in data we need our mp3 queue name, because remember we're using the environment variable to select the queue in our code, and our video queue name as well. And actually, while we're doing this, we need to go create the mp3 queue itself: back in the management console we have our video queue, but we need to add a new queue named mp3, also durable, and add it, so now we have both the mp3 queue and the video queue. Save the ConfigMap.
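Here's a sketch of the converter deployment and ConfigMap just described; the maxSurge value and the exact environment variable names are assumptions based on the description above.

```yaml
# converter/manifests/converter-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: converter
  labels:
    app: converter
spec:
  replicas: 4
  selector:
    matchLabels:
      app: converter
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 8              # "double the number of replicas", as mentioned above
  template:
    metadata:
      labels:
        app: converter
    spec:
      containers:
        - name: converter
          image: <your-dockerhub-username>/converter
          envFrom:
            - configMapRef:
                name: converter-configmap
            - secretRef:
                name: converter-secret
---
# converter/manifests/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: converter-configmap
data:
  MP3_QUEUE: "mp3"
  VIDEO_QUEUE: "video"
```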
Now we just need to create secret.yaml, and we don't have any secrets for this service, so this is just going to be a template file for now; save that and make some space. Let's go into K9s and check what we have running so far: our auth service, our gateway, and our queue. Now we deploy our consumer, the converter, by applying all the files in the current directory, and it seems everything was created, so let's see if we run into any issues. To get a better view of the logs we can do kubectl logs --follow on one of the converter pods, and we're getting "no module named bson.objectid". I must be pretty sleepy, because I don't know why I didn't see this before, but the import here is clearly missing a letter, so let's go into our convert/to_mp3.py file, fix the import of ObjectId, and save; just to double check, make sure there's no linting error and that go-to-definition works, so we're good to go. We need to rebuild the Docker image, so docker build again, tag it again first of all, then docker push <username>/converter:latest; now that that's pushed, let's once again try to apply the configuration in our manifests directory and check K9s. Actually, let's first delete the existing converter resources, clear this out really quick, and check that they're shutting down; OK, we're good, so now kubectl apply, check K9s again, and we've got one, two, three, four replicas running, so that's good news. At this point we have our auth service deployed, multiple instances of the converter service deployed to pull from our RabbitMQ, which we have deployed as well, and we have our gateway deployed. So now we can check whether uploading files results in messages getting put on the queue, and whether those messages are being pulled off the queue by our converter service; that'll probably be the most difficult part to get configured, because once our uploads result in the proper messages being put on the queue and the converter service is consuming those messages, converting the videos, and storing them in MongoDB, that's pretty much the entire functionality of this microservice application. At that point we'll just be creating a service to send notifications when videos are finished being converted to MP3s. So let's test the end-to-end functionality of uploading and having those uploads be converted. Quit K9s and clear; we still want our tunnel to be running, so make sure your minikube tunnel command is still running and that you have another tab open to work in. We want to test by uploading a video file, and when we upload it we want to see the message get added to our video queue and then removed from it, with another message added to our mp3 queue; we don't have a consuming service for the mp3 queue yet, so all the messages should just be piling up on the mp3 queue if our end-to-end functionality is working as expected. Also, if it's working as expected, we should be able to download a converted video file, which would just be an MP3 file, from our MongoDB. So let's go ahead and try to test that.
I actually don't have Postman or anything like that installed on this laptop at the moment, so I'm just going to use curl to test this; if you're familiar with Postman you can use that instead, and if not, just follow along with the commands that I use and make sure you have curl installed, of course. At this point we can just go on YouTube and download a video; we don't want it to take too long to download, so something short, like this 37-second "Mark Zuckerberg says he's not a lizard" clip, so copy the link to the video. I'm going to use the youtube-dl tool to actually download the video from YouTube, so if you want you can brew install youtube-dl (it's already installed for me), or you can use whatever video you already have on your machine. I'll run youtube-dl and paste in the URL, and for some reason that video doesn't work, so sorry Mark, let's just try a different, relatively short one; take its URL, and once it's finished downloading you should have the file in your current directory. Just really quick, we're going to go into our MySQL database: we need to use the auth database (I spelled "database" wrong, and actually it should just be "use auth"), and now we can show tables and select * from user. When you do this you should have credentials here from when we created the database with our SQL script at the beginning of this video: an email and a password, and these are the credentials we're going to use for basic auth when we send a request to our login endpoint to get a JSON Web Token to upload the video. Exit MySQL, and send a curl POST request to http://mp3converter.com/login; remember our gateway Ingress resolves this host name, and we configured it to resolve to our localhost loopback address, so this is what we use when sending requests to our gateway, and we hit the login endpoint. With curl you can use the -u flag to pass basic authentication credentials, so I'll pass my email and password, which is admin123 for me. We're getting an internal server error, so let's go into K9s and check our gateway logs; that's too small, so I'll just do kubectl logs -f on the gateway pod, and it's saying that in the auth service, access.py line 16, the object has no attribute "txt". So we can just go into that file, or actually open it from here; it's line 16.
So I'm going to change directory, activate the virtual environment, and fix it: the attribute is response.text, not response.txt. Let's check whether there are any more of these by searching the directory for "txt", excluding the virtual environment; there's one in validate.py too, so we need to change all of those (sorry about that, and of course this requirements.txt is supposed to stay .txt). Let's also check the other service directories to be safe. Now we go back into the gateway and do docker build again, docker tag for the gateway repository, and docker push, then delete all of our gateway resources and recreate them. It looks like those are up and running, so let's try to get our token again, and this time we successfully get our JWT, which I'll copy.
Let's go back to the directory where our video file is, in converter. Here we need another curl POST, but this time we attach the file, and to make that easier let's rename the file to test.mkv. So we do curl -X POST with the file attached as test.mkv, and we can't forget the header, which is Authorization; remember it has to be a Bearer token, so we paste in the token, and the request goes to mp3converter.com/upload. We're getting 403 Forbidden, so let's check k9s; maybe it's coming from our auth service, but the gateway shows nothing. Actually, it looks like we're using the wrong hostname: our hosts file has "mp3convert" but it should be mp3converter.com, so let's sudo vim the hosts file and fix that, make sure the tunnel is still running, and try again.
Okay, so now we're getting an actual error from the actual services, I believe. Let's go into k9s; we can assume that error came from the gateway. We're hitting the server now with our upload, and maybe our gateway is getting an error from auth, but auth is returning a 200 for /validate. If we're getting a 200 from the validate endpoint on the auth service, that means our gateway is returning an internal server error even after a successful validate. Let's cd into the gateway, open server.py, and look at the upload endpoint: we validate, and at some point after that we're returning a 500, which is a bit strange. Let's get another token really quick and try with the new token; that failed because we weren't in the directory of our file, so cd back to converter where the test file is and try again: still an internal server error.
So let's debug this. I'm going to scale all of our services down to a single replica so we don't have to check multiple replicas for the logs: kubectl scale deployment gateway --replicas=1, and the same for the converter and the auth service. When we go into k9s now, you can see all the extra replicas terminating, leaving one auth service, one converter, and one gateway running.
Once that's done terminating, I'll open some extra tabs and follow the logs for our gateway, and the same for our auth service and our converter. That's strange, nothing is printing for the converter, so let me send the request again and check the logs. The gateway is what's returning the 500, validate returns a 200, and we get nothing on the converter. So if validation is completing but the gateway still returns a 500, something is going wrong between validation and actually publishing the message and uploading the file. If we go into the gateway's server.py and look at the upload path, the validation happens first and succeeds, so let's assume we're making it into the upload itself, and that's where we return the 500. In both places we're catching the error and returning a 500 without doing anything with the error, so we can't see what's happening. Let's print the error in both except blocks, so we can tell whether we're getting an error when we try to publish the message or when we try to upload the file; hopefully that gives us more information.
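For reference, the upload path with those print statements added looks roughly like this; the function signature, the message fields, and the "video" routing key are assumptions based on the walkthrough rather than the exact course code.

```python
# Sketch of the gateway's upload helper after adding print(err) in both
# error handlers; names and message fields are assumptions for illustration.
import json

import pika


def upload(f, fs, channel, access):
    try:
        fid = fs.put(f)                       # store the raw video in GridFS
    except Exception as err:
        print(err)                            # surface the real error in the pod logs
        return "internal server error", 500

    message = {
        "video_fid": str(fid),
        "mp3_fid": None,
        "username": access["username"],
    }

    try:
        channel.basic_publish(
            exchange="",
            routing_key="video",              # the video queue
            body=json.dumps(message),
            properties=pika.BasicProperties(
                delivery_mode=pika.spec.PERSISTENT_DELIVERY_MODE
            ),
        )
    except Exception as err:
        print(err)
        fs.delete(fid)                        # roll back the upload if publishing fails
        return "internal server error", 500
```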
So we save that, change directory to the gateway, and build, tag, and push again, because we need the code changes with the print statements in the image: docker build, docker tag for the gateway repository (username/gateway:latest), and docker push. Once that's pushed we can delete all of our gateway resources using the manifest files and recreate them; actually, since we're debugging we don't want it scaling above one replica, so kubectl scale deployment gateway --replicas=1 again. Then cd back to where our file is, run kubectl logs again for the new container (since we shut down the one we were following), check the tunnel (it's asking for the password again), and try the upload again. It was a success that time.
I think what was happening is that the old gateway replicas weren't actually connected, because the host they had resolved wasn't the actual rabbitmq host in the container anymore, and that's something that happens sometimes. Let me try to clarify, and then see if I can prove what I suspect. If we cd into the gateway and look at server.py (not storage/util.py), our connection to rabbitmq uses the name "rabbitmq", which is the name of the service; if you check the rabbit manifests' service.yaml you'll see the name is rabbitmq. In Kubernetes the service name resolves to that service's host, so in server.py we're depending on that name resolving to the rabbitmq host. But it seems that if a container is connected to a host via that name and the host changes or restarts, the container keeps referencing the older host.
For example, let's cd to converter and send that curl request again. Hmm, we're getting an internal server error again, but this time it's not our "internal server error" message: it says a local variable is referenced before assignment. So let's go into the gateway code where we added those print statements: up top we're doing except Exception as err, but in the second handler we're just doing a bare except, so err never gets assigned; it needs to be except Exception as err there too. Then docker build, docker tag, docker push. Oh, actually, that's pretty bad: I ran docker build in the converter directory, so I built the converter image and pushed it to our gateway repository, which pretty much overwrote the gateway image with the converter image. Let's cancel that and cd to gateway; double-checking what I did, yes, I built the Dockerfile in the converter directory and pushed it to the gateway repository, but my changes were to the gateway. So: docker build in the gateway directory to build the gateway's Dockerfile, docker tag, docker push, and now the latest image in our gateway repository is this most recent one, which fixes the accidentally pushed converter image. Then kubectl delete the gateway resources again, apply, and once again I forgot to scale, so scale it down to one. It looks like that time it worked; we're still terminating one of the replicas, and minikube is asking for a password again because we had to redeploy the Ingress as well.
So let's set up the gateway logs again and send the upload request again, from the directory where the file is: we get a success, auth succeeds, and the converter is writing the audio, so that's successful as well. Back to my explanation of what I think the issue was: the gateway uses the service name of the rabbitmq service to connect to that host, but if we restart rabbitmq, I think the gateway keeps resolving to the old host via that service name, so it won't be able to connect unless we restart the gateway, which refreshes its reference to the host. Let's test that theory, because I don't really like not knowing what happened. If we send again, we get a success. Now in k9s let's delete the rabbitmq pod so it restarts; the converter breaks too, because it also needs to connect to the rabbitmq host, so once the rabbitmq pod has restarted, let's restart the converter as well. At this point, if my theory is correct, the gateway is still referencing the older rabbitmq host via the rabbitmq service name, so an upload should fail right now, and indeed we get an internal server error. If we reset the gateway, it refreshes what the rabbitmq name resolves to, which would be the new host, so restarting it should make this work again, and sure enough, after restarting the gateway the curl upload succeeds. So my theory is pretty much correct, and it's kind of annoying, but keep it in mind. To recap: in server.py we connect to rabbitmq using the service name, and in Kubernetes the service name resolves to a host, the cluster IP of the rabbitmq service. The theory is that what that name resolves to changes when we restart the rabbitmq pod, but the gateway pod keeps referencing the old address through the service name, which is why restarting the gateway pod, and refreshing its reference for the rabbitmq service name, makes it work again.
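As an aside, and not something the course does: one common way to avoid being stuck with a stale connection after RabbitMQ restarts is to retry the publish on a fresh connection instead of reusing a single long-lived one. A minimal sketch, assuming the service name "rabbitmq":

```python
# Not part of the course: retry the publish with a new connection so a
# restarted RabbitMQ pod doesn't leave the gateway holding a dead connection.
import json

import pika
import pika.exceptions


def publish(message, queue="video", host="rabbitmq", attempts=2):
    last_err = None
    for _ in range(attempts):
        try:
            # open a fresh connection for this publish attempt
            connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
            channel = connection.channel()
            channel.basic_publish(
                exchange="",
                routing_key=queue,
                body=json.dumps(message),
                properties=pika.BasicProperties(
                    delivery_mode=pika.spec.PERSISTENT_DELIVERY_MODE
                ),
            )
            connection.close()
            return None
        except pika.exceptions.AMQPError as err:
            last_err = err          # stale or broken connection; try again
    return last_err
```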
Anyways, the good news is that everything seems to be working: when we upload a file, if we clear this and follow the converter's logs, we can see it successfully writing the audio. To confirm the end-to-end functionality we can check for the existence of an audio file from our video conversion in the database; that's how we'll verify everything is working from end to end. Also, if we go to our queues and refresh, the mp3 queue has four messages ready, and remember we don't have a consumer service pulling from the MP3 queue yet, so all of those messages just pile up there. So at this point we should have four MP3s in our database, and since we uploaded the same video every time, all four should be the same audio file, and we should be able to download one of them from mongodb and check that it plays correctly. As you can see, our video queue is empty, because every message we put on the video queue was processed by our consumer service, which converts the video to MP3 and then puts a new message onto the MP3 queue.
Let's upload one more time; we get an internal server error, and I don't remember whether we restarted the gateway after that rabbitmq restart, so let's restart it, wait for the old pod to terminate, follow the gateway logs again, and upload again: success. So now there should be five messages in the MP3 queue, and as you can see there are five, which means we should have five MP3s in our mongodb database.
To check that, if you installed mongodb earlier in this course you should have the mongo shell, which connects directly to the mongodb running on our local machine. Showing the databases, we have this mp3s database and this videos database, and the mp3s database should have five audio files stored. So use mp3s, then show collections, and as explained before, with GridFS the actual file data is stored in chunks, and the files collection is essentially the metadata for a collection of chunks. If we do db.fs.files.find() (not fs.chunks), it shows all of the objects we have stored. Actually, I forgot that when I was testing this before making the tutorial I uploaded a bunch of videos that were converted to MP3s, so my database is going to have more than five; yours should only have as many MP3s as upload requests you sent that were successful.
So let's actually do it this way: go to the MP3 queue in the management UI and get a message from the queue, and we can see there's an MP3 file ID in it, so let's copy that. Back in the shell, show collections again, and do db.fs.files.find(), remembering that we want to use the actual object version of the ID to find it: the key is _id, and the value is ObjectId with our copied ID inside, and as you can see, that object ID is stored successfully inside our mp3s database. Now we want to download it and check that it's an actual audio file, so exit the shell and clear. If you installed mongodb using the instructions earlier in this video, you should also have the mongofiles tool: set the db to mp3s, set the local file name to test.mp3, and get the file by ID, passing the string version of the ID inside the same ObjectId-style syntax we used in the database. It connects to our mongodb on localhost and finishes writing test.mp3, so this test.mp3 file should now contain the sound for our test.mkv video file. If I go into the converter directory in the user interface, there's the video file ("here's a burger again, the double double..." — I'll cut that a bit short, I don't know about copyright), and if I play the MP3, it plays the sound from that video. So our end-to-end functionality is working up to the point where we put the message on the MP3 queue.
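If you'd rather verify this from Python instead of the mongo shell and the mongofiles tool, a minimal pymongo/gridfs sketch like the following does the same check; the file id is a placeholder you'd paste from the queue message.

```python
# Equivalent check with pymongo/gridfs: confirm the mp3 exists, then write it
# out locally so it can be played. The fid value is a placeholder.
import gridfs
from bson.objectid import ObjectId
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
fs_mp3s = gridfs.GridFS(client.mp3s)

fid = "<mp3 file id from the queue message>"   # placeholder, not a real id

if fs_mp3s.exists(ObjectId(fid)):
    with open("test.mp3", "wb") as f:
        f.write(fs_mp3s.get(ObjectId(fid)).read())
```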
To recap: when we upload a video it gets stored in mongodb, we create a message and add it to the video queue, our converter service pulls from that queue, converts the video to an MP3, and then puts a new message on the MP3 queue saying that an MP3 for a specific file ID now exists in mongodb. The last thing we need to create is a service that consumes this MP3 queue: a notification service that tells the user that the video-to-MP3 conversion is done. It will pull the message off the queue, which has the file ID and the email of the user, and send an email saying that this ID is available for download as an MP3. Then there's going to be a download endpoint on our gateway where the user can use his or her token to request a specific MP3 using the file ID sent in the notification email. So if you've gotten to the point where the messages are being put on the MP3 queue and the MP3s are stored in mongodb, you've essentially completed the most difficult part of this entire tutorial, because from here we're just going to pull messages off a queue and send an email; if you've gotten this far, this is a major checkpoint. From here onward we'll create that additional service and add the code for the download endpoint on our gateway.
So at this point what we have left is our notification service, and we need to update our gateway to have a download endpoint. We can start by updating the gateway and move on to the notification service after that. Change directory into the gateway directory, clear, close the other tabs that had our logs (the tunnel can keep running), and open server.py. We need a couple of additional imports: send_file, which is the method we'll use to send files back to the user who downloads them, and, since we'll be pulling that file from mongodb, the bson ObjectId, just like before (and I'll make sure I spell it right this time). We also need to change our mongodb configuration. Right now we set the URI config to include the videos database, which limits us to just that database, but we need both the videos and the mp3s databases. So instead, following the Flask-PyMongo documentation, we can create multiple PyMongo instances to connect to multiple databases or database servers: one for our videos database and one for our mp3s database, passing the URI directly when creating each instance (and not forgetting the comma), and we remove the old config line since the URI is now configured per instance. For GridFS we likewise need two instances: fs_videos, created from the video instance, and fs_mp3s, created from the mp3 instance. Since the old GridFS instance was referenced by the variable fs, we also need to update the upload endpoint, which uploads the video, to use fs_videos specifically, and that's all we need to do for that part.
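Roughly, the top of server.py ends up like this after the change. This is a sketch: the exact MongoDB host is an assumption here (shown as host.minikube.internal, i.e. the host machine's MongoDB reached from inside the cluster).

```python
# Sketch of the top of gateway/server.py with two PyMongo and two GridFS
# instances; the mongodb host is an assumption for illustration.
import gridfs
from bson.objectid import ObjectId
from flask import Flask, request, send_file
from flask_pymongo import PyMongo

server = Flask(__name__)

# one PyMongo instance per database instead of a single MONGO_URI config entry
mongo_video = PyMongo(server, uri="mongodb://host.minikube.internal:27017/videos")
mongo_mp3 = PyMongo(server, uri="mongodb://host.minikube.internal:27017/mp3s")

# one GridFS instance per database as well
fs_videos = gridfs.GridFS(mongo_video.db)
fs_mp3s = gridfs.GridFS(mongo_mp3.db)
```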
Now we can go to the download template endpoint we created earlier and actually write the download function. Remove the pass, and do the same validation we do in the upload endpoint: copy the validate.token(request) call and the line that loads the access object from what validate.token returns, and paste that into download. And actually, I just realized we're not checking the error in upload either, so go back to upload and add: if there's an error when we try to validate, return the error. We don't need an else after the admin check, because if there's no access we just return "not authorized" anyway.
The download function itself is pretty straightforward. The notification service is going to email the user a file ID when the conversion job is done, and that file ID is required when they call the download endpoint. So we do fid_string = request.args.get("fid"); if that parameter doesn't exist in the request, get returns its default value, and the default default is None, which is the behavior we want, so we can just do if not fid_string and return "fid is required". (Oh, and I updated the upload endpoint to handle the validate error but not this one, so add if err: return err here as well.) If the fid string does exist, we use it to get our file from mongodb: in a try block, out = fs_mp3s.get(ObjectId(fid_string)), where fs_mp3s is our GridFS instance for the mp3s database and ObjectId (which we're importing up top) converts the fid string into the object ID that's needed to get the object from mongodb. Then we return send_file(out, download_name=fid_string + ".mp3"); the download name is also what lets Flask determine the MIME type for the file. In the except block we print the error and return an internal server error. And that's pretty much it for the download endpoint: we get the access object via the auth service's validate endpoint, check that admin is true in that user's access, and if so return the file for the given file ID. We're not checking the user's email or anything in this case; we just assume that if they have the file ID, they have access to the file, but feel free to expand on this however you like.
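Putting that together, the download route looks roughly like this; it's a sketch that assumes the imports and the fs_mp3s instance from the snippet above, plus json and the same validate helper the upload endpoint uses.

```python
# Sketch of the finished /download endpoint, continuing the server.py above.
@server.route("/download", methods=["GET"])
def download():
    access, err = validate.token(request)
    if err:
        return err

    access = json.loads(access)

    if access["admin"]:
        fid_string = request.args.get("fid")
        if not fid_string:
            return "fid is required", 400

        try:
            out = fs_mp3s.get(ObjectId(fid_string))
            # the download name also lets Flask infer the mimetype
            return send_file(out, download_name=f"{fid_string}.mp3")
        except Exception as err:
            print(err)
            return "internal server error", 500

    return "not authorized", 401
```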
Let's save that, and since we changed the code we need to build the gateway's Docker image again and push it to our repository: docker build, docker tag, docker push. Then we can redeploy: I'll just delete all of the gateway resources in the manifests directory, just in case, and apply them again; we'll check that later.
For now, let's move on to creating our notification service. Change directory to our source directory, mkdir notification, and cd into it. This notification service, like our converter service, is a consumer service, so it's going to share some code with the converter. Create a file called consumer.py and copy everything over from the converter's consumer.py; we won't actually need all of it. We're going to create a package called send with a module called email that we import, and that's where we'll write the code to send the email. The callback function basically stays the same, except it calls email.notification, a function we'll create that only takes in the body of the message. And of course we change the queue to the mp3 queue, because this consumer listens to and consumes from the mp3 queue; everything else stays the same.
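Here is a sketch of what notification/consumer.py ends up looking like, assuming the same connection and acknowledgment pattern as the converter's consumer and an MP3_QUEUE environment variable holding the queue name.

```python
# Sketch of notification/consumer.py; mirrors the converter's consumer but
# hands each message to send.email.notification and reads from the mp3 queue.
import os
import sys

import pika

from send import email


def main():
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()

    def callback(ch, method, properties, body):
        err = email.notification(body)
        if err:
            # leave the message on the queue so another worker can retry it
            ch.basic_nack(delivery_tag=method.delivery_tag)
        else:
            ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(
        queue=os.environ.get("MP3_QUEUE"),
        on_message_callback=callback,
    )

    print("Waiting for messages. To exit press CTRL+C")
    channel.start_consuming()


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("Interrupted")
        sys.exit(0)
```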
So now we need to create that email.notification function. Save consumer.py, mkdir send, cd into send, touch __init__.py, and create a file called email.py. This is where we'll write the code to send an email notification to the client, and we'll follow the Python documentation's example for sending simple email messages, which is all we really need, except that instead of sending through our own SMTP server we're going to use Google's, the same SMTP server that Gmail uses, and I'll show you how.
First we import smtplib and os. For this tutorial you don't really need to know what an SMTP server is; in simple terms it's just a server that sends, receives, and relays outgoing mail between email senders and receivers, and we're going to use Gmail's within our application to send emails. We also import EmailMessage from email.message, which lets us create an instance of an email message; you'll see what I mean in a second.
Define notification(message): in a try block, set message = json.loads(message) to turn the message from our queue into a Python object, then mp3_fid = message's fid. We need a sender address, the address we're using to send the email, and I'm going to recommend that you create a dummy Gmail account for this, because for it to work you'll have to authorize non-Google applications to log into that Gmail account. The default when you create a Gmail account is that only Google applications can log in (for example the Gmail app on your phone), and allowing this application to log in means any non-Google application with your credentials can log into the account, so it's not recommended to enable that on your primary Gmail account; if you want to be safe, create a dummy account. So the sender address is the email address of the Gmail account you want to send from, and I'm going to store it in an environment variable called GMAIL_ADDRESS, with the sender password in an environment variable called GMAIL_PASSWORD; I too am going to create a dummy account for these credentials, and I'll get to that in a bit. The receiver address, who we're sending the email to, is the user associated with the JWT, because that's the user who uploaded the original file.
Then we create the message that's going to be in the email as an instance of EmailMessage: set_content with a simple body saying that the MP3 file ID (the mp3_fid from above) is now ready, so the receiver can take that file ID and send a request to our download endpoint to download the file. We set the subject, just like the subject line you'd write in Gmail, to "MP3 Download", set From to the sender address, and To to the receiver address. Lastly, we need an SMTP session for sending the mail: we connect to Google's SMTP server, log into our Gmail account, and send. So session = smtplib.SMTP with the Gmail SMTP server, smtp.gmail.com, then session.starttls(), which puts the connection to the SMTP server into TLS mode; TLS is transport layer security, so it makes sure our communication with the SMTP server is encrypted, securing our packets in transit so they can't be intercepted and read. You don't really need to know the details of how that works either, just that it's necessary. After that, session.login with our sender address and sender password, then session.send_message with the message we built, the sender address, and the receiver address, and once the message is sent we close the session with session.quit(). Then we print "Mail Sent" just so we can see that it's done. If any of this fails we want to print the error, so since we're in a try block we do except Exception as error, print it, and return it. The reason we catch the error and return it is that the caller of this function expects an error: it should either be None or contain an error.
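Here is a sketch of send/email.py as described so far (note that, as in the walkthrough at this point, no port is passed to smtplib.SMTP yet; that comes up again later). The assumption that the message's username field carries the uploader's email address follows from it being the user tied to the JWT.

```python
# Sketch of notification/send/email.py as described in the walkthrough.
import json
import os
import smtplib
from email.message import EmailMessage


def notification(message):
    try:
        message = json.loads(message)
        mp3_fid = message["mp3_fid"]
        sender_address = os.environ.get("GMAIL_ADDRESS")
        sender_password = os.environ.get("GMAIL_PASSWORD")
        receiver_address = message["username"]   # assumed to be the uploader's email

        msg = EmailMessage()
        msg.set_content(f"mp3 file_id: {mp3_fid} is now ready!")
        msg["Subject"] = "MP3 Download"
        msg["From"] = sender_address
        msg["To"] = receiver_address

        # connect to Gmail's SMTP server, upgrade to TLS, log in, and send
        session = smtplib.SMTP("smtp.gmail.com")
        session.starttls()
        session.login(sender_address, sender_password)
        session.send_message(msg, sender_address, receiver_address)
        session.quit()
        print("Mail Sent")
    except Exception as error:
        print(error)
        return error
```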
If we go back up one directory into our consumer file, we can see that the callback expects that error from notification: if there is an error, we send a negative acknowledgment so the message stays on the queue and can be processed by another process, but if there's no error we send a basic acknowledgment, which means the message can be removed from the queue. There's also a typo in email.py (it should just be session) and there's no import for json, so let's fix those.
We haven't installed the dependencies yet either, so let's do that. We haven't started a virtual environment here yet, so change directory, do python3 -m venv venv, and activate it. Now pip3 install; actually, one second, smtplib is part of Python's standard library, so we don't need to install that. Checking consumer.py, we need pika and that's it, so pip3 install pika, and that's it for dependencies.
Now we can just copy the Dockerfile from our gateway into this directory; we don't need to EXPOSE anything, and the command needs to be changed to run consumer.py, but everything else stays exactly the same, so save that and make some space. We can also copy the manifests directory from our converter, rename converter-deploy.yaml to notification-deploy.yaml, and change all occurrences of converter to notification (I did it with a Vim :%s command, but do it however you want, just make sure you change every occurrence): the deployment name should be notification, the app label notification, replicas stay the same, the match label is notification, maxSurge stays the same, the template's app label is notification, the container name is notification, and we're also going to create a Docker repository called notification, and the same goes for the configmap and secret names. Save that, then go into the configmap and change it to notification as well (the mp3 queue value of "mp3" is already correct), and then the secret, which also changes to notification; we don't have a secret yet, so just save that.
Clear, change directory back to the root of the service, and now docker build; oh, and we forgot to do pip freeze for requirements.txt, so do that first, then docker build, docker tag with notification latest, and docker push notification latest, which will create the notification repository in your Docker Hub account. If you check Docker Hub, you should have a repository that was pushed a few seconds ago, and if you go into it, a tag called latest.
The next thing we need to do is configure our Gmail account to allow non-Google applications to log in. I've already created a dummy Google account for this.
Once you've done the same, click your account avatar, go to Manage your Google Account, then go to Security, and scroll down to the box that says "Less secure app access". I've already turned this on, but you're going to want to click it and change it from off to on, which means allowing apps and devices that use less secure sign-in technology to access the account, so make sure this is set to on before continuing with the tutorial.
Another thing we need to do, well, me specifically: if I go into mysql -u root, use auth, show tables, and select * from user, you can see that I'm actually using a fake email here, and the notification service will try to send an email to it. So if you used a fake email for your credentials, you should update it to an actual, existing email. I'm just going to send it to my dummy account, so I'll essentially be sending the notification to myself, but you can use whatever email you want, because that address is only receiving a message, so it's not insecure to use a legitimate email for that. I'll update mine, and if I select * from user again you can see my email has now changed, and that's the address I'm using. I'll exit, and that means I'm going to have to get another token when we test this later on, and you will have to do the same if you changed the email. Let's clear.
At this point we can deploy the notification service: kubectl apply everything in its manifests directory, and it looks like it's working as expected. Change directory back to our source directory, and let's also scale our gateway and auth service back up by applying all their manifests, which scales the auth service to the two replicas configured in its manifests, and the same for the gateway, which gets two instances, and the same for the converter, which scales up to four instances. So at this point we have two instances of our auth service, four instances of our converter, two instances of our gateway, four instances of our notification service, and one instance of our rabbitmq service.
Now let's check that the tunnel is running; it's asking for the password again because I redeployed the gateway Ingress, so I'll put the password in, and the tunnel is running. I'll log in again, but this time with the changed email, and we get a token, which I'll copy. Now I'm going to send an upload request: cd to converter, delete the test.mp3 from last time, and upload the test.mkv file, but with the new token. What we're testing now is the end-to-end functionality including the notification service: we first upload a video with this curl request, with the Authorization Bearer header set to the token I just got from the login endpoint, sent to our upload URL. That's failing, and I'm assuming it's because we need to restart our gateway pods, so I'll just delete both of them so they restart.
Let's try again, and we have a success. If everything is working as expected, the message should also be consumed from our MP3 queue, but if we refresh, it says we have six unacknowledged messages now, and when I check the email there's nothing there. Ah, I thought I'd forgotten a very important part, but actually I didn't, so what is the issue? Let's check the logs for our notification service and send the request again. Okay, I'm going to scale everything down, gateway, converter, auth, notification, all of it to zero except the rabbitmq queue, and then scale everything back up to one. Let's send the request again, and the messages still aren't being acknowledged.
Actually, there is one more thing I forgot: I need to go into the notification manifests' secret file and set up the GMAIL_ADDRESS and GMAIL_PASSWORD environment variables, replacing the placeholders. That's because, if you remember, in the notification service's send/email.py we read the sender address and the sender password from environment variables, so we need to provide those credentials here; I'll put in my credentials, and you need to put in the ones for the account you're using. Just to be safe I'll scale everything except the queue down to zero again and back up, and for the notification service I'll actually remove the resources and re-apply them so the secret takes effect, and I can check that the variables are set as expected, which they are. Let's see if that resolved the issue and send again (from the converter directory).
It seems the messages are still not getting acknowledged: they are being consumed from the queue, but never acknowledged, so the issue has to be in our notification service, probably in the code that consumer.py calls. We're consuming the message but never acknowledging that we've processed it, and in email.notification we basically have everything in the try block, printing and returning the error if something goes wrong. Okay, so we need to make a small change to the notification service's send/email.py: we need to pass in the port 587, which is the port for TLS and starttls, and I'm thinking that should resolve the issue we're having. I'll save this, and then (making sure I'm in the right directory this time) docker build, docker tag, and docker push.
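The change itself is just the session setup in send/email.py, connecting on port 587 (the standard submission port used with STARTTLS) before upgrading the connection:

```python
# send/email.py (after the fix): open the SMTP session on port 587, then
# upgrade it with STARTTLS before logging in and sending.
session = smtplib.SMTP("smtp.gmail.com", 587)
session.starttls()
session.login(sender_address, sender_password)
session.send_message(msg, sender_address, receiver_address)
session.quit()
```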
While that's pushing, let's check the queue. Since our notification consumer wasn't consuming while it was scaled down, we had messages sitting there ready, but now they're getting picked up by the service and acknowledged, so the count should drop to zero. They were all processed, which means we should have emails for all of those messages, and if I go to the inbox you can see the emails are there (let me also check spam). I think some of those messages still had the old fake email, which is why we're not getting eleven messages, only these ones. Anyway, each email contains the MP3 file ID string, so we should be able to take it and use it to download the actual file.
Let's clear, and to download the file we do a curl with the output going to a file we'll call mp3_download.mp3: a GET request, with the Authorization header containing our token, and the URL is mp3converter.com (no port needed) with the /download endpoint and a URL query parameter called fid set to the ID from the email; mine is the ID from the message I received. Hit enter, and it downloads the file, so now you should have a file in the directory called mp3_download.mp3, and it should be the audio of the video we uploaded. If I go to it in the user interface and play it, it plays the sound from the video, which means our end-to-end application is working.
Let's clear, and we don't need this anymore, so let's quickly go into k9s. We have everything scaled down, so let's just reapply the initial configuration: I'll delete the configuration for all of our services except rabbitmq (no need to scale that up or down, there's only one pod) and then apply the auth, converter, gateway, and notification manifests, and now we have all of the instances we configured for our services. Let's move that video file out of converter, check that our queue is currently empty, and upload the file again; actually, let me make sure I have a valid token first, so I'll hit the login endpoint again, and there's the new token, and then upload and just spam it a couple of times. It seems our queues are processing both the video and the MP3 messages, and at this point they're all done, and if we go into our email we have a few more notifications. Let's copy the file ID from one of them and attempt to download it with a different output name, just something random like something.mp3, and change the fid to the file ID we just copied; whoops, that was the JWT, let me go copy the fid again, and download. Now we have this something.mp3 file in our source folder, so let me go to the user interface and play it ("here's a burger..."), and as you can see, it's working as expected.
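For reference, the same end-to-end flow can be reproduced from Python with the requests library instead of curl; this is a sketch with placeholder credentials and file ID, assuming the mp3converter.com hostname from the hosts file and the "file" form field used by the curl upload.

```python
# Sketch of the full client-side flow: login, upload, then download the mp3.
import requests

BASE = "http://mp3converter.com"

# 1. log in with basic auth to get a JWT
token = requests.post(
    f"{BASE}/login",
    auth=("<email from the auth user table>", "<password>"),   # placeholders
).text

# 2. upload a video; the gateway stores it in GridFS and queues it for conversion
with open("test.mkv", "rb") as f:
    requests.post(
        f"{BASE}/upload",
        files={"file": f},
        headers={"Authorization": f"Bearer {token}"},
    )

# 3. later, download the converted mp3 using the fid from the notification email
fid = "<file id from the notification email>"   # placeholder
resp = requests.get(
    f"{BASE}/download",
    params={"fid": fid},
    headers={"Authorization": f"Bearer {token}"},
)
with open("mp3_download.mp3", "wb") as out:
    out.write(resp.content)
```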
And that is going to be it for this tutorial. If you were able to make it to the end and get everything working, you should definitely be proud of yourself, because this was a difficult one. If you have any questions, feel free to post them in the comment section and I'll try to help as best I can if you're having trouble getting everything working. I hope this has been helpful to you, and if you've made it this far in the video and haven't already, please don't forget to like the video and subscribe to the channel for more content like this, and I'll see you in the next one.
Info
Channel: freeCodeCamp.org
Views: 302,525
Id: hmkF77F9TLw
Length: 304min 11sec (18251 seconds)
Published: Tue Nov 08 2022