Build & Deploy a Serverless CRUD REST API for S3 w/ API Gateway, AWS Lambda and Serverless Framework

Captions
Hey everyone, I hope you're all doing well, and thanks for tuning in to another tutorial. Today's video is an extension of the uploading-files-to-S3 video I made previously, and as the title suggests, we'll be building a complete CRUD API that connects to an S3 bucket using API Gateway and AWS Lambda. We'll also be using the Serverless Framework, along with GitHub Actions for a deployment pipeline. Before we begin, I just wanted to thank the person in the comments for suggesting this idea. More generally, if you have any ideas or suggestions for something you'd like to see, always feel free to drop a comment below. For now, let's get started.

The first thing we'll do is set up the deployment configuration. Some of this repeats my previous videos, so feel free to skip ahead if you've already seen it. To set up a deployment pipeline, we first need programmatic access keys from AWS. Head over to the IAM Management Console, click Users, and add a new user. I'll name this one s3-crud-user, but name it however you wish. Check off Programmatic access, click Next: Permissions, and choose to attach existing policies directly. The policies we need are as follows: IAM full access, which gives our deployment pipeline the IAM permissions it needs; S3 full access, for this project; API Gateway Administrator, along with API Gateway push-to-CloudWatch-Logs access; CloudFormation full access, since CloudFormation is what deploys our serverless application; and last but not least, Lambda full access. Once those are all checked off, click Next: Tags (we don't need any tags), then Review, and once everything looks good, create the user.

We're then presented with the access keys, and this is the only time we'll have access to the secret access key, so make note of it. Copy it over, head to the repository, click Settings, and add these as repository secrets: click New repository secret and add the secret access key as AWS_SECRET_ACCESS_KEY, then copy over the access key ID (making sure not to grab an extra space at the end, as I almost did) and add it as AWS_ACCESS_KEY_ID.

With that done, we can write the GitHub Actions deployment file. Head over to your editor (I'll be using VS Code) and create a new folder in the root directory called .github, and within it another folder called workflows; this is just the layout GitHub Actions expects. Within that directory, create a file called main.yaml. I'm naming it that because this job will only be triggered on the main branch. I've written everything out beforehand to save some time, and I've gone over this in more detail in one of my previous videos, so feel free to check that out. To highlight what's going on: the first part specifies that we only deploy our serverless application on pushes to the main branch, and the rest specifies the job and its corresponding steps. Essentially, we're using Node 12.x, installing the dependencies, and then, in the main step, using the Serverless GitHub Action, which deploys our serverless application to the AWS cloud. Save that and leave it as is.
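The full workflow file isn't read out in the video, so here is a minimal sketch of what it might look like, assuming the secret names chosen above and treating the action version pins as illustrative rather than exact:

```yaml
# .github/workflows/main.yaml
name: deploy-serverless-app

on:
  push:
    branches:
      - main   # only deploy on pushes to the main branch

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '12.x'
      - name: Install dependencies
        run: npm install
      - name: Deploy with the Serverless Framework
        uses: serverless/github-action@master
        with:
          args: deploy
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```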
The next step is to write out the serverless.yaml file. In the root directory, create a file called serverless.yaml, and let's name the project. The first thing to specify is the service, which we'll call s3-file-api, because that's pretty much what it is. We also specify the provider, meaning the cloud provider; in this case we're using AWS, so that's the name, and all our Lambda functions will use the nodejs12.x runtime, as that's the latest version of Node currently supported on AWS. We also write a stage, which tells you which environment your service is in; we're in the development stage, so we'll use dev. And we specify the region, which is where your service will be deployed. Generally you want the region or data center closest to the users you're trying to serve; I'll pick us-west-1 here, though it doesn't really matter for this tutorial.

The next field to specify is the API name. Since we've already written the service name, we can reference it dynamically with ${self:service}. And because none of the Lambda functions should be highly intensive (unless you're uploading a really large file), we can give them all the same memory size and timeout as global fields: 128 megabytes of memory and a timeout of 10, which is in seconds. Feel free to play around with those depending on the types of files you'll be working with.

Another field we want is an environment variable, which I'll call FILE_UPLOAD_BUCKET_NAME. This isn't a secret or anything; it just lets our Lambda functions easily access the name of the bucket we'll be working with. We'll reference it as a custom variable first, ${self:custom.fileBucketName}, and initialize it immediately below: create a custom block with a variable called fileBucketName. Because I haven't created the bucket yet (we'll do that in a second), I'm just pre-naming it s3-file-bucket, and I'm appending the stage, so you can have multiple buckets depending on your development stage. In this case it will be named something like s3-file-bucket-dev. You get the idea, so we'll leave it at that.
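Put together, the top of serverless.yaml described so far looks roughly like this (values as chosen above; adjust to taste):

```yaml
service: s3-file-api

provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-west-1
  apiName: ${self:service}
  memorySize: 128   # MB, shared by all functions
  timeout: 10       # seconds
  environment:
    FILE_UPLOAD_BUCKET_NAME: ${self:custom.fileBucketName}

custom:
  fileBucketName: s3-file-bucket-${self:provider.stage}   # e.g. s3-file-bucket-dev
```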
Now we can write the configuration for the actual bucket. It goes under a field called resources, which is lowercase, and immediately inside it we specify another field which is uppercase but also called Resources. It's a bit funny, but that's just the way it is. We'll call this resource FileBucket, and its type is, of course, AWS::S3::Bucket. As for properties, depending on your use case you should definitely feel free to experiment or look at the documentation for the full set you can specify, but the one mandatory property we need is the bucket name. Because we've already created the custom variable fileBucketName, which stores our S3 bucket name, we can reference it directly, which is really convenient. Do note that S3 bucket names must be globally unique: if your bucket name matches any other bucket in the world, your deployment will fail, so make sure you choose a globally unique name (hopefully nobody's taken this one). One last thing, as general good practice: set the AccessControl property to Private, which makes sure everything in our bucket is kept private. And that pretty much wraps up the serverless file as far as deploying the S3 bucket goes.
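In serverless.yaml, that bucket configuration looks roughly like this:

```yaml
resources:
  Resources:
    FileBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.fileBucketName}  # must be globally unique
        AccessControl: Private
```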
Now we can write the configuration for deploying each individual Lambda function along with its API Gateway trigger. Our Lambda functions go under the functions field, and we'll call the first one s3FileUploader. This will essentially be our POST endpoint, since it's what uploads files. We also write where its handler will be stored; because I haven't written it yet, I'm specifying the path ahead of time. I'll name the file upload.js, and the function will always be called handler, since it's our Lambda handler. Feel free to name it however you wish; this is just the convention I currently prefer. We also name the function itself, s3-file-uploader. You can include a description too; for the sake of time I'll leave it out, but I'll make sure it's in the source code linked in the description below.

The main thing we want to specify, for this function and in fact for all the Lambda functions in this video, is the event trigger: the API Gateway trigger that actually allows the Lambda function to execute. The event type we want to listen to is http, as these functions will be handling HTTP requests. The path of the endpoint you can specify however you wish (this will be your REST resource); I'm just going to name it file. We also need the HTTP verb, or method, and because we're uploading files here, this will be a POST endpoint. That's pretty much it for setting up the API Gateway trigger.

We also want to specify iamRoleStatements. This field lets us set permissions for the Lambda function, that is, what the function can access in terms of other AWS resources. The default behavior of the Serverless Framework is to put this field under the global provider block, where it applies to all Lambdas, but that's generally not considered good practice: we want to assign granular permissions to each individual function, which is more secure and offers you more control. To do that, we need to install a plugin and reference it under a new plugins field: serverless-iam-roles-per-function. Before we move any further, install it as a production dependency (npm install --save serverless-iam-roles-per-function). Once that's done, we can specify iamRoleStatements per function, which will come in really handy, as each function has a different set of statements.

Now let's write our first IAM role statement. It starts with an Effect: we're going to Allow our Lambda function to perform the action of putting things onto the S3 bucket. There's a whole family of operations here, and the main one we want is PutObject, but to keep things simple rather than fully granular, we'll use s3:Put*, where the asterisk is a wildcard matching any operation with that prefix. We also need to specify the Resource to which this permission applies, as an ARN (Amazon Resource Name). We start with arn:aws:s3:::, the standard prefix for S3 buckets, and because we've already created the custom variable for our bucket name, we can reference it, making sure to append a slash with an asterisk at the end so the permission also matches any nested directories inside the bucket. That's pretty much it for the S3 file uploader, our POST endpoint.

One note: we won't be writing a Lambda handler for a PUT endpoint, which would handle the update portion of CRUD. The reason is that, by default, without versioning enabled on the S3 bucket, any upload of a file with the same name essentially updates, or overwrites, the existing file in the bucket. So to handle updates in your application, you can just call the POST endpoint a second time and re-upload the file under the same name, and it will replace the existing object.
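Here's a sketch of how the uploader's entry and the plugin registration might look in serverless.yaml, assuming the handler lives at src/upload.js as chosen above:

```yaml
plugins:
  - serverless-iam-roles-per-function

functions:
  s3FileUploader:
    handler: src/upload.handler
    name: s3-file-uploader
    events:
      - http:
          path: file
          method: POST
    iamRoleStatements:
      - Effect: Allow
        Action:
          - s3:Put*   # wildcard over the Put* family; s3:PutObject is the main one
        Resource: arn:aws:s3:::${self:custom.fileBucketName}/*
```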
Now we can move on to the GET endpoint as well as the DELETE operation. Their configuration will be really similar, so we can just copy the POST endpoint's configuration twice, make sure the indentation is aligned, and rename things. Instead of the uploader, we'll call the first one s3FileGet, change the file name to get.js, change the HTTP method, and change the IAM role permissions to s3:Get*. For the second copy, we rename everything to delete: s3FileDelete, delete.js, and of course the DELETE method. (The method can also be lowercase, but for legibility purposes let's keep it uppercase.) One thing to be careful of here: the action we specify should not be s3:Delete*. There's actually an operation called DeleteBucket, and we obviously don't want to give our Lambda permission to delete the bucket itself. Maybe you do, but that would be the use case for another Lambda; in this case we only want to delete things in the bucket, so we'll be more specific and granular and use s3:DeleteObject.

One small remaining thing: we need to let our GET and DELETE endpoints specify which file to fetch or delete at a certain path in the S3 bucket. A good way to do this is path parameters: in the endpoint path, add a slash followed by your path parameter key in curly braces. I'll call it {fileKey}, and it's essentially the key into the S3 bucket of the object we want. We do the same for the DELETE endpoint, as sketched below.
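Continuing the functions block, the GET and DELETE entries might look like this (handler paths follow the same src/ convention assumed above):

```yaml
functions:
  # ...s3FileUploader from above...
  s3FileGet:
    handler: src/get.handler
    name: s3-file-get
    events:
      - http:
          path: file/{fileKey}
          method: GET
    iamRoleStatements:
      - Effect: Allow
        Action:
          - s3:Get*
        Resource: arn:aws:s3:::${self:custom.fileBucketName}/*

  s3FileDelete:
    handler: src/delete.handler
    name: s3-file-delete
    events:
      - http:
          path: file/{fileKey}
          method: DELETE
    iamRoleStatements:
      - Effect: Allow
        Action:
          - s3:DeleteObject   # deliberately not s3:Delete*, which would include DeleteBucket
        Resource: arn:aws:s3:::${self:custom.fileBucketName}/*
```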
Before we wrap up this section, let's also create our .gitignore file so we don't commit the entire node_modules folder; it's huge, and we definitely don't want it in the repository. And that pretty much wraps up the serverless configuration file: it deploys the actual S3 bucket itself and sets up all our Lambda functions along with their API Gateway triggers.

Now we can begin writing the Lambda functions. Since we've already specified that they all live under the source folder, create a folder called src, and we'll implement the upload handler first. I've actually already implemented this in my previous video; I'll drop a link in the description if you want a more detailed explanation of what's going on. To highlight the key points: we're using Lambda proxy integration, which is what the comment in the code is telling you, so we stringify the response body to match that specification. As currently written, the code only handles, or is meant to handle, images, uploading them as base64. If you want to generalize it, it should only take some minor tweaks, for example changing the content type to handle PDFs or whatever other types of files you wish to upload.
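The upload handler itself is reused from the previous video rather than written on screen, so the following is only a rough sketch; in particular, the request-body field names (fileKey, file) are assumptions, not confirmed by this video:

```javascript
// src/upload.js -- hypothetical sketch of the base64 image uploader
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const bucketName = process.env.FILE_UPLOAD_BUCKET_NAME;

module.exports.handler = async (event) => {
  console.log(event); // log the incoming event for debugging

  // Lambda proxy integration: response body must be a JSON string
  const response = { isBase64Encoded: false, statusCode: 200 };

  try {
    // Assumed request shape: { "fileKey": "images/file1.png", "file": "<base64>" }
    const parsedBody = JSON.parse(event.body);
    const buffer = Buffer.from(parsedBody.file, 'base64');

    await s3.putObject({
      Bucket: bucketName,
      Key: parsedBody.fileKey,
      Body: buffer,
      ContentType: 'image/png', // tweak this to generalize beyond images
    }).promise();

    response.body = JSON.stringify({ message: 'Successfully uploaded file to S3.' });
  } catch (err) {
    console.error(err);
    response.statusCode = 500;
    response.body = JSON.stringify({
      message: 'Failed to upload file.',
      errorMessage: err.message,
    });
  }
  return response;
};
```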
Now let's move on to the GET endpoint. Create get.js, since that's the file name we specified, and do a really similar thing: begin by importing the aws-sdk (I know I haven't actually installed it yet; we'll do that in a second, but we'll require it for now) and initialize the S3 client so we have access to its functions. With that done, we can retrieve the environment variable that stores our bucket name: I'll call it bucketName, and it comes from process.env.FILE_UPLOAD_BUCKET_NAME. Next we set up the skeleton of the Lambda handler. The exported function is called handler, as we specified, and it's an asynchronous function that takes one parameter, the event, which is just a giant JSON object.

One good practice when writing Lambda handlers, for debugging purposes, is to log the event; it helps you see what's going on and what's actually coming into your function. After that, we initialize a response variable, which is the actual response object the endpoint returns (you'll see what comes out in a second when we test this). Because we're using Lambda proxy integration, one field we need to specify is whether the response is base64-encoded; this one isn't, so we write false. We also need a status code, which we'll keep as 200 for the happy case and modify in the catch block.

In the try block, before we actually try to retrieve the S3 file under a certain key, we specify the parameters: an object containing the bucket name, which we can reference directly, and the key. Because the key is passed in as a path parameter, as we specified, we retrieve it from event.pathParameters under fileKey; we don't need to JSON-parse it or anything, it's already parsed for us. Now, because of the way directories inside an S3 bucket work, you might have a path like images/image1.png, which essentially says that your image1.png file lives under an images "directory" inside your bucket. Since this arrives as a path parameter, the slash needs to be URI-encoded, so it actually comes in as %2F, which I believe is the URI encoding for a forward slash. We therefore have to decode it in our Lambda function: call decodeURIComponent on the file key, and that gives us the correct key when we try to retrieve the file.

With the parameters in place, the retrieval itself is quite simple. We call s3.getObject (not deleteObject; I was getting ahead of myself there, apologies), passing in the parameters, and since we want to await a promise, we chain .promise() and store the result. Then we set the response body, making sure it's stringified JSON as Lambda proxy integration requires, with a quick message like "Successfully retrieved file from S3," and we include the result as well for debugging purposes (feel free to remove it if you wish). In the catch block, we log the error for CloudWatch Logs and also include it in the response body, again stringified, with a message like "Failed to get file" along with the error message. This is where setting the status code becomes useful: when the Lambda fails, we obviously want to change it. I'll use 500 here, but feel free to make it more specific to your use case. And last but not least, we return the response. That's pretty much the implementation for retrieving a file from the S3 bucket.

The delete handler is really similar. Create a new file called delete.js and copy most of get.js over; only a couple of things need to change. We still decode the URI component holding the file key we want to delete, but we use the deleteObject function instead of getObject, and we adjust the messages accordingly: "Successfully deleted file from S3" and "Failed to delete file." That's pretty much it for the DELETE endpoint's Lambda handler. As you can see, it's mostly duplicate code from the GET endpoint, so feel free to refactor it to keep things nice and clean. One small nitpicky detail I edited on screen: I renamed the result variable to data, because getResult doesn't read well, and after all it is the data we're retrieving. And that's all of our Lambda implementations for the CRUD operations.
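Assembled from the walkthrough above, the GET handler looks roughly like this; delete.js is identical apart from calling s3.deleteObject(params).promise() and changing the messages:

```javascript
// src/get.js -- sketch of the handler walked through above
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const bucketName = process.env.FILE_UPLOAD_BUCKET_NAME;

module.exports.handler = async (event) => {
  console.log(event); // log the incoming event for debugging in CloudWatch

  // Lambda proxy integration: body must be a JSON string
  const response = { isBase64Encoded: false, statusCode: 200 };

  try {
    const params = {
      Bucket: bucketName,
      // The key arrives URI-encoded, e.g. images%2Ffile1.png
      Key: decodeURIComponent(event.pathParameters.fileKey),
    };

    const data = await s3.getObject(params).promise();

    response.body = JSON.stringify({
      message: 'Successfully retrieved file from S3.',
      data, // metadata plus the file contents in data.Body
    });
  } catch (err) {
    console.error(err);
    response.statusCode = 500;
    response.body = JSON.stringify({
      message: 'Failed to get file.',
      errorMessage: err.message,
    });
  }
  return response;
};
```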
Once again, the POST and the PUT share the same Lambda, since for S3, at least, they're essentially the same operation, so we only have one Lambda function for that: upload.js. We also have get.js, the handler for our GET endpoint, and delete.js for our DELETE endpoint. Now we can finally try to deploy this, and hopefully everything works. We'll be right back once everything is deployed.

So we're back, and I realized I forgot to mention earlier that we need to explicitly add the AWS SDK as a production dependency (npm install --save aws-sdk). I've already done that, so let's head right over to our deployment. Taking a look, we can see the deployment succeeded, and if we look at S3, there's our bucket, currently empty as expected. To test the endpoints, we can head over to API Gateway and get the base URL to call. Once that's loaded, click into the API, head over to Stages, and click dev. Let's start by adding some images. I'll copy the invoke URL; this will be the endpoint we call. I'll be using Postman, a handy tool I'm familiar with (there are other options as well), which lets us call this endpoint. I've attached a file, a preloaded base64 image I generated, and I'm specifying the file key. That's just how the Lambda is currently written, but feel free to tweak how you specify the key you want to upload the file to. This key means I'm uploading into a folder called images (if it doesn't exist, it will automatically be created for you), and under that folder the file will be labeled file1.png. Click Send and hopefully it all works out. You can also monitor CloudWatch Logs to see any logging you've included, but for this tutorial we'll keep it simple, since that's covered in my other videos. As you can see, the upload succeeded; it took a while because the image is fairly big. Heading over to S3 to check: the images folder was created for us, and there's our file1.png.

Now we can test the GET endpoint. The endpoint URL is the same (I'll copy it over), and we use GET. We don't need a request body, but we do need the file key, and it's important to note that this key needs to be the full path of the file. If you know where it is, that's convenient; generally you'd store file paths in a database or something similar that references them, and then call this API to actually get the data. Since we already know where our file is, we type it out: images, then, remembering that we need to URI-encode the component, the slash becomes %2F (the F can also be lowercase, I'll just keep it uppercase), and then file1.png.
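For reference, the same requests can be reproduced with curl; the base URL below is a placeholder for the invoke URL shown on the dev stage, and the POST body shape depends on how your upload.js parses it:

```bash
BASE_URL="https://<api-id>.execute-api.us-west-1.amazonaws.com/dev"

# Upload (POST /file)
curl -X POST "$BASE_URL/file" \
  -H "Content-Type: application/json" \
  -d '{"fileKey": "images/file1.png", "file": "<base64-encoded data>"}'

# Retrieve (GET /file/{fileKey}) -- note the URI-encoded slash (%2F)
curl "$BASE_URL/file/images%2Ffile1.png"

# Delete (DELETE /file/{fileKey})
curl -X DELETE "$BASE_URL/file/images%2Ffile1.png"
```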
If we click Send and try to retrieve the image, it comes back as binary data. It takes a while to parse because it's really big, but we've successfully retrieved it. If you recall, I put everything under the data field; this is the entire data object, and it gives you some metadata, like the content type and so on. The key area where the file contents are stored is the Body field, which is essentially a giant JavaScript typed array of binary data; I believe you can treat it as a buffer. The point of this tutorial is to keep things flexible, so depending on your application you can feel free to modify this, work with it directly, or stream it however you wish; this is a basic implementation of how you can retrieve data from your S3 bucket. So that's the GET endpoint.

For completeness, let's demonstrate how the DELETE endpoint works. Select DELETE; the base URL of the API is the same, we have the file key as the path parameter, and we don't need any request body or anything like that. First, let's confirm the object is still there (as you can see, it is), then hit Send, and momentarily the object should be deleted. We get a success message, and as you can see, the object is gone.

And there you have it: that pretty much wraps up this entire tutorial on how to build a CRUD API using API Gateway and AWS Lambda, connected to Amazon S3. If you learned something or found this video helpful, please do consider hitting the like and subscribe buttons below, as it really helps out the channel and the series. If you have any other content or tutorials you'd like to see, feel free to leave a comment as well. As always, I hope you're all staying happy and well out there, and I'll see you in the next video.
Info
Channel: Jackson Yuan
Views: 1,381
Rating: 5 out of 5
Keywords: aws, aws lambda, api gateway, amazon s3, s3, crud, crud api, serverless, serverless api, rest, rest api, serverless framework, github actions, ci/cd
Id: Too-U4bcJEs
Length: 29min 1sec (1741 seconds)
Published: Fri Apr 30 2021