Nest.js File Upload to AWS S3 + Rate Limiting

Captions
In this video I'm going to show you how we can use NestJS to upload files and send those files to AWS S3, so we'll be able to send requests to a server, upload any kind of file, and see that file land in S3 for storage and later retrieval. I'll show you how we can add validation to the file we're uploading to our server, and we're also going to apply rate limiting to our API to make sure we don't allow too many requests within a certain period. Let's jump right in and see how we can accomplish all of this easily with NestJS.

Before we jump in, I just want to let you know that my NestJS microservices course is now out. Feel free to check the description for a discounted link to the course, which has about 300 students and growing, so thank you to everyone who has bought it and is enjoying it so far. In the course we dive deep into implementing complex NestJS microservices, so if that's something you'd like, feel free to check it out.

All right, let's jump into the video. We're going to use the Nest CLI to generate our project by running nest new. I'll call my project uploader and use pnpm as my package manager. Then I'll cd into the uploader directory and run pnpm run start:dev to start the server in development mode. I've opened the project in VS Code, and you can see we have the default boilerplate Nest project out of the box, with the HTTP server exposed on port 3000 and an app.controller with a single GET route that returns the text "Hello World!". With the server started we can launch a test request: in Postman I've created a new GET request to localhost:3000, and sending it off returns that Hello World response.

Next, let's use the Nest CLI again to generate a new module called upload, then a controller called upload, and lastly an upload service. This is all so that we can accept requests to upload a new file, with a dedicated module to handle it. We can now see the new upload folder with our upload module, upload controller, and upload service. I'll go ahead and delete the app.service and app.controller files, since we're not going to use them, and remove their references from the app.module so that it only imports the upload module. That leaves our controller and service, which is where we'll start.

Before we implement file uploads, we need to install the types for the file upload package that NestJS uses, called Multer, so run pnpm install -D @types/multer. Multer is the module that handles the form data sent to the server: when we send a POST request over HTTP with multipart/form-data, Multer parses the incoming request and gives us a buffer we can use to do things like upload the file to another backend such as AWS S3, which is exactly what we're going to do.

Now, in our upload controller, let's implement a new POST route to accept the upload. I'll leave the route path empty because we're already at /upload. To parse the incoming request's file we need an interceptor, so we'll add the UseInterceptors decorator from @nestjs/common and pass it the FileInterceptor. The FileInterceptor's first argument is the name of the field where the file resides in the form being sent to this route, so we'll specify the name and just call it file. We'll name the handler uploadFile, and we can get access to the file by using the UploadedFile decorator. Now we have access to the file variable, which is of type Express.Multer.File, so we have the raw file, including the underlying buffer (the actual file data held in memory) and additional metadata such as the original name, MIME type, and size. We can use this information to upload the file, including passing the buffer to our service layer, which is what we'll do shortly. First, let's just log out the file to make sure we're receiving it correctly from the client, as in the sketch below.
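At this point the controller might look something like this minimal sketch (class and method names follow the walkthrough; the handler just logs the parsed file for now):

```ts
// upload.controller.ts (sketch): FileInterceptor parses the multipart body,
// and @UploadedFile() hands us the parsed file with its in-memory buffer.
import {
  Controller,
  Post,
  UploadedFile,
  UseInterceptors,
} from '@nestjs/common';
import { FileInterceptor } from '@nestjs/platform-express';

@Controller('upload')
export class UploadController {
  @Post()
  @UseInterceptors(FileInterceptor('file')) // 'file' = form-data field name
  uploadFile(@UploadedFile() file: Express.Multer.File) {
    // originalname, mimetype, size and the raw buffer are all available here.
    console.log(file);
  }
}
```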
With our server running, let's open up Postman and launch a POST request at our upload route: send a request to localhost:3000/upload and, importantly, click on Body and select form-data. In the form data we specify the key, and we want to change its type to File because we're uploading a file. This key has to match the name we gave the FileInterceptor on our route in the upload controller; remember, we specified the name of the form field where the file lives, so it's file here and file there. Now attach whatever kind of file you want and send off the request. You can see we get a 201 Created back, and if we look at the logs we can see our sample data being logged out, including the file name, the type, the actual buffer, and the size.

Additionally, I want to show you how we can apply simple validation to our file upload, for example to restrict the file type or the size of the file. We can do this very easily by supplying a new ParseFilePipe from @nestjs/common to the UploadedFile decorator, as sketched below. Inside it we supply an options object with an array of validators: a new MaxFileSizeValidator, where we set the maximum file size to 1000 bytes, and a new FileTypeValidator, which takes an options object where we specify the file type we want to allow; let's say we only want to allow JPEG images in our system. With this ParseFilePipe in place, if we try to send off a new POST request you can see validation fails (the error says the expected size is less than our limit of one thousand) and a 400 Bad Request is returned. For now I'm going to comment out these validators so we can continue with uploading our file to Amazon S3, which is our next step.
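As a sketch, the validated handler could look like the following, using the 1000-byte limit and JPEG-only rule mentioned above (in the walkthrough the validators are commented out again right afterwards so other file types can still be uploaded):

```ts
// The same route with validation added via ParseFilePipe (sketch).
import {
  Controller,
  FileTypeValidator,
  MaxFileSizeValidator,
  ParseFilePipe,
  Post,
  UploadedFile,
  UseInterceptors,
} from '@nestjs/common';
import { FileInterceptor } from '@nestjs/platform-express';

@Controller('upload')
export class UploadController {
  @Post()
  @UseInterceptors(FileInterceptor('file'))
  uploadFile(
    @UploadedFile(
      new ParseFilePipe({
        validators: [
          new MaxFileSizeValidator({ maxSize: 1000 }), // bytes
          new FileTypeValidator({ fileType: 'jpeg' }),
        ],
      }),
    )
    file: Express.Multer.File,
  ) {
    console.log(file);
  }
}
```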
We're now ready to set up the AWS SDK in our application so that we can configure it and send our uploads to S3, which is where we're going to store them. In the terminal, let's stop the server and pnpm install a new package called @aws-sdk/client-s3, and we'll also install the @nestjs/config package, which lets us read environment variables into our application so we can securely store our AWS credentials and provide them to our AWS client.

First things first, let's go into our app.module and set up the config module by calling ConfigModule.forRoot, initialized with isGlobal set to true so the config module is globally available and we don't have to keep re-importing it. Next we'll go into our upload service and set up the S3 client. In the upload service, let's add a constructor where we inject the ConfigService from @nestjs/config. Now that we have the ConfigService, we can set our S3 client up: declare a new read-only property called s3Client and set it equal to a new S3Client, imported from @aws-sdk/client-s3. We need to provide an options object specifying the region we're connecting to, and we can take this directly from the ConfigService by reading the AWS_S3_REGION environment variable; I'm going to use getOrThrow so that an error is thrown if this environment variable isn't available. By default, the AWS S3 client also looks for credentials in our environment variables, specifically AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and uses them to authenticate the client so we can connect to AWS.

Let's set that up now. In the AWS console, create an account or log in, click on the account menu in the top right, go to Security credentials, scroll down to the Access keys section, and click Create access key. Feel free to create a separate IAM user with permissions limited to uploading to S3 if you want to be a bit more secure; for this example I'm just going to create an access key for my root user. Take note of the access key and the secret access key: in our project we'll create a .env file, set AWS_ACCESS_KEY_ID equal to the access key, and set AWS_SECRET_ACCESS_KEY equal to the secret access key. Of course, we never want to commit the .env file to git because it can contain sensitive information, so we add .env to our .gitignore and it won't be committed to the repository. Don't forget we also need to provide the AWS_S3_REGION; I'm going to use us-east-1, but feel free to use whichever region you like. Lastly, make sure the access key variable is named exactly AWS_ACCESS_KEY_ID, because the AWS client only picks the environment variable up if it is named exactly as shown. Now that we've defined these credentials, the S3 client will be able to authenticate us and we can upload to S3.

Before we do that, let's go back into the AWS console, open the S3 page, and create a new bucket for our uploads. I'm going to call this bucket nestjs-uploader, I'll be using us-east-1, and I'll keep all of the default settings, which block all public access, and create the bucket. Now that we have the name of the bucket, we can finally define a new method in our upload service called upload. In this method, all we have to do is call await this.s3Client.send and pass it a new PutObjectCommand, which takes a PutObjectCommandInput describing the object. We first define the Bucket we're uploading to, which is the bucket we just created (whatever your bucket name is; in my case it's nestjs-uploader), then the Key, which is the name of the file we're uploading, and the Body, which is the buffer contents. To define these, I'm going to pass them in as parameters to the method: we accept a fileName of type string and the actual file, which we define as a Buffer, then set Key equal to fileName and Body equal to the file, as in the sketch below.
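Putting that together, the upload service might look roughly like this sketch. The S3 client is created inside the constructor rather than as a field initializer so the injected ConfigService is definitely available, and the bucket name is the one created in the video, so substitute your own:

```ts
// upload.service.ts (sketch). AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are
// picked up from the environment by the SDK's default credential resolution.
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3';

@Injectable()
export class UploadService {
  private readonly s3Client: S3Client;

  constructor(private readonly configService: ConfigService) {
    this.s3Client = new S3Client({
      region: this.configService.getOrThrow('AWS_S3_REGION'),
    });
  }

  async upload(fileName: string, file: Buffer) {
    await this.s3Client.send(
      new PutObjectCommand({
        Bucket: 'nestjs-uploader', // bucket name from the video; use your own
        Key: fileName,
        Body: file,
      }),
    );
  }
}
```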
Now that we've defined our upload method, let's go back to the upload controller and inject the upload service: we define a new private readonly uploadService of type UploadService in the constructor, and in the handler we call await this.uploadService.upload, supplying the name of the file and the file itself, that is, file.originalname and file.buffer. Back in Postman we can send off a request to upload our sample file, we get a 201 Created, and importantly, if we go back to S3 and open the nestjs-uploader bucket, we can see the file has been uploaded to Amazon S3. That's how quickly and easily we can implement file upload with NestJS.

Next, let's see how we can add some rate limiting to our file upload API to help prevent brute-force attacks. To implement rate limiting we install a new dependency called @nestjs/throttler and restart the development server. Back in our application, in the upload module, let's set up the rate limiter: add a new imports array and import ThrottlerModule.forRoot. The ThrottlerModule accepts an options object where we specify the time to live and the limit, the maximum number of requests we want to allow within that period; let's say within 60 seconds we only want to allow three requests into our system. Then we need to register a guard to protect our routes: in the providers array we provide APP_GUARD with useClass set to ThrottlerGuard. Now if we go back to Postman and send off one, two, three, four requests, on the fourth request we get a 429 Too Many Requests, with an exception from the throttler saying there are too many requests, which is exactly the behavior we want.

Additionally, we can make this configuration a bit more dynamic by calling forRootAsync instead and providing a useFactory where we inject the ConfigService from @nestjs/config. From the config service we pull out the TTL and the limit defined in our environment variables: set ttl to configService.getOrThrow for UPLOAD_RATE_TTL and limit to configService.getOrThrow for UPLOAD_RATE_LIMIT, and don't forget to provide an inject property specifying the ConfigService. Now in our .env let's define these two environment variables: we'll keep UPLOAD_RATE_TTL at 60 seconds and set UPLOAD_RATE_LIMIT to three requests. Since we're now reading these in from the .env, make sure to restart the server so the variables are picked up. The module ends up looking something like the sketch below.
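Here is a rough sketch of how that module could end up, assuming @nestjs/throttler v4 (current at the time of the video), where the factory returns a single { ttl, limit } object with ttl in seconds; newer major versions expect an array of throttler configs and ttl in milliseconds instead. The environment variable names follow the ones described in the walkthrough:

```ts
// upload.module.ts (sketch): async throttler config driven by .env values.
import { Module } from '@nestjs/common';
import { APP_GUARD } from '@nestjs/core';
import { ConfigService } from '@nestjs/config';
import { ThrottlerGuard, ThrottlerModule } from '@nestjs/throttler';
import { UploadController } from './upload.controller';
import { UploadService } from './upload.service';

@Module({
  imports: [
    ThrottlerModule.forRootAsync({
      useFactory: (configService: ConfigService) => ({
        // Env values arrive as strings, so coerce them to numbers.
        ttl: Number(configService.getOrThrow('UPLOAD_RATE_TTL')),
        limit: Number(configService.getOrThrow('UPLOAD_RATE_LIMIT')),
      }),
      inject: [ConfigService],
    }),
  ],
  controllers: [UploadController],
  providers: [
    UploadService,
    // Registering ThrottlerGuard under APP_GUARD applies it as a global guard.
    { provide: APP_GUARD, useClass: ThrottlerGuard },
  ],
})
export class UploadModule {}
```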
We should see the same behavior: the first three requests succeed and the fourth one gets that 429. Hopefully this video has been helpful for you. Thanks so much for watching, and I'll see you in the next one.
Info
Channel: Michael Guay
Views: 14,571
Id: tEZERHLge-U
Length: 17min 17sec (1037 seconds)
Published: Sat Apr 22 2023