Do you want to get into Serverless Computing? One of the top-paying cloud skills today? Then this course is for YOU! Do you want to get AWS Certified? Yes? Then know that Serverless Computing is a key
focus area in AWS Certification exams, at both the Associate and Professional levels… and this course is a must-have for YOU! Hi, my name is Riyaz and I'll be your instructor
in this course. I’m a Solutions Architect and a Co-founder
of Rizmax Software. I've been in the IT industry for the last 14+ years. The number one goal of this course is to provide
you with the most complete, comprehensive …yet cost-effective online resource,
to help you become the AWS Professional that you've always dreamed of becoming. This course is packed with over 20 hours of
hands-on tutorials and together we’ll build at least five serverless projects, end-to-end. We’ll explore every bit of syntax, step
by step, so you understand exactly what we’re doing and WHY we’re doing it. By the end of this course, you’ll have mastered
the AWS serverless architecture and you'll be confident enough to take on real-world serverless projects. We'll learn serverless computing from scratch… and then we'll dive into all its core features and many advanced features as well… we'll
learn how it works, why we use it and how to use it… We'll begin with AWS Lambda and API Gateway…
and cover everything from DynamoDB, Step Functions, to… AWS SAM, the Serverless Framework, CI/CD Tools,
Serverless Best Practices, Serverless Architecture patterns and everything in between. We'll not only discuss this stuff, we'll implement
it, together, step by step. We'll build serverless workflows, process streaming data, set up authentication and authorization, and build Serverless APIs, a Serverless Web App, Android and iOS Mobile Apps, an Alexa skill, an IoT app, and so much more… step by step,
all in this one course. Yes! You heard it right! Before beginning this course, you should have access to an AWS account and be familiar with the basics of AWS. There are no other prerequisites. I'll be using Node.js for all the demos. And, I've included a section on JavaScript and Node.js fundamentals. So even if you're new to Node.js, you should still be able to follow along. I'm super excited about this course, because
I believe, this is one of the most comprehensive guides you can get on AWS Lambda and Serverless
Computing. I designed this course, as the course I wish
I had access to when I was learning. And it was a lot of fun creating this course. And I'm sure, we'll have a lot of fun together,
as we go through this course. So, let's start. Hey there… my friends, my name is Riyaz,
and welcome to this AWS Lambda and Serverless Computing course. Let's get started. This course is divided into several sections. We'll begin by building a simple hello world
example with API Gateway and AWS Lambda… and this should get you started with Serverless
Computing… We'll first cover the basics of AWS Serverless Architecture – AWS Lambda, API Gateway, and DynamoDB… and then we'll dive deeper
into these topics and cover the advanced concepts as well. We'll also learn to build serverless workflow
patterns using Step Functions… and then I'll teach you the best approach to building
serverless applications… which is using frameworks like AWS SAM and the Serverless
Framework… We'll take this further, and I'll show you how to automate and streamline your serverless delivery and deployment with CI/CD tools like
AWS CodeCommit, CodeBuild and CodePipeline… We'll also learn some of the best practices
of Serverless Computing… and study different serverless architecture patterns. We'll not only study these serverless architecture
patterns, we'll implement them… as we build several end-to-end Full-stack projects. And I'm sure this course will give you enough
practical experience to take on and tackle real-world serverless projects. I'll be using Node.js for all the demos in
this course. And, I have included a section on JavaScript
and Node.js fundamentals at the end of this course. So, if you're new to Node.js, do go through
that section before you begin with the next section. We'll also use JSON and YAML extensively in
this course. And I have included an optional section on
JSON and YAML fundamentals at the end of this course. So, if you're new to JSON or YAML, do go through that section before you begin with the next section. Alright… This entire course is available in the form
of easy-to-follow videos. You can pause, rewind, or speed up each video
to suit your needs, so you can learn at a pace that's comfortable for you. And I'll BE there, if you ever have a question
or need help. All the source code is also available for
download inside the resources section. So, before we go sprinting down the aisle
and take it all in… I just wanted to take a moment and say… Thank YOU… for being here… I am so honoured to be able to share my knowledge
with you here today… I hope this journey is both fruitful and exciting
for you… I enjoyed creating this course for you and
I hope you enjoy it too as you follow along. So, let's get started in the next lecture. I'll see you there in just a few seconds 😊 In recent years, computing technology
has been evolving at a phenomenal rate. Data storage has become very affordable and
computing power has vastly increased. And this has allowed us to build applications
that leverage these advancements in the computing space. Cloud computing for example has made it possible
for us to build such cutting-edge applications at a fraction of the cost, not just for big companies but also for individuals. Public cloud providers like Amazon AWS, Microsoft Azure, and Google Cloud Platform have removed the entry barriers and made computing easy and affordable for everyone. Having said that, modern-day applications demand a high level of compute power as well as speed. For example, if your Facebook app doesn't show you what you need as soon as you open it, most of you might simply close the app and move on. So, the application has to be fast and responsive. And at the same time, it must be scalable
with almost no downtime. Serverless Computing is this new trend in
cloud computing which attempts to solve many of these challenges. Serverless computing will help you build the next generation of systems that can handle demanding workloads and scale indefinitely
without having to provision or manage any servers. Now, if we look at the traditional architecture,
there are several activities and tasks that you must perform in order to run any particular
application. First, you got to create and setup servers,
install operating systems, install and setup databases, manage software patches and hardware
updates, manage capacity and scaling, Manage High-availability through load-balancing and
so on. And of course, this server infrastructure
and computing power has its own costs, which are often substantial. On the other hand, with Serverless Architecture,
all these mundane tasks of managing the underlying infrastructure are abstracted away from you. So in a way when you use serverless architecture,
you won't really feel like you are using any high-end servers, because you're not the one
who is actually taking care of it. That's the reason this approach is called Serverless. That means, Serverless Computing still uses
servers, but you no longer have to worry about managing them or worry about the uptime, availability
or anything that has to do with the infrastructure part. With Serverless, you kind of get to be more
productive. You can completely focus on writing your code
without having to worry about the underlying support system. So you simply upload your code to the cloud
provider and it can run on-demand as and when needed. With Serverless, you write your code in the
form of functions, just like you'd write a function in any programming language. And these functions then run in the cloud. So, you segregate your application logic into small independent functions or microservices and upload them to the cloud provider. These functions are stateless and can be invoked
in response to different events. These events could be file uploads, database
updates, in-app activity, API calls, website clicks, or sensor outputs just like those from
IoT devices, and so on. And all this can be accomplished without having
to manage any infrastructure or having to size, provision and scale a bunch of servers,
and without having to worry about performance and availability. And this seriously allows us to laser focus on
our application code and not get caught up in a multitude of infrastructure issues. Now, to put it in slightly more technical terms, these serverless functions often run in Docker-like containers, and hence several instances of these functions can run concurrently, like in a Docker swarm for example, thus making them highly scalable. If you're new to terms like Docker or containers, there's nothing to worry about. You really don't need to know these terms to
be able to build serverless applications. All this containerization happens behind the
scenes and is completely taken care of by the cloud provider, in our case AWS. Just for your information though, containers
are a lightweight alternative to virtualization. They allow you to run your code in isolated
preconfigured environments. All this however is abstracted away from us
when we build serverless applications in the AWS cloud. So no need to break your head if this is new
to you. In this course, we'll go over everything that
you need to know to master serverless computing. So, please hang in there. You'll have more clarity as we build our first
serverless application in the next couple of minutes. So, in a nutshell, Serverless means “Event-driven
Computing” using “small independent stateless functions” running inside “containers in the cloud”. That's the short definition of what Serverless
is all about. So, you have your code running on the cloud
platform. AWS in our case. And whenever the triggering event occurs, the cloud platform spins up or initializes a container, loads the function into it, and executes it. And this happens almost instantaneously, thereby
allowing us to build applications that respond quickly to new information and thus enhance
user experience. Once the function completes execution, it
optionally returns a response back to the caller, and then finally exits or shuts down. AWS provides a compute service called AWS
Lambda which essentially allows you to create and run serverless functions in the cloud. And in the very next lecture, I'm going to
take you through a quick hands-on lab where we will create a Hello World API using AWS
Lambda and API Gateway. AWS Lambda and API Gateway are two of the core services of the AWS serverless platform. So, if you're ready for a quick hands-on, let's jump in, get our hands dirty, and start learning by doing. In the next few lectures, we're going to create
our first serverless API. In this hands-on lab, I'm going to introduce
you to two key pieces of the AWS serverless platform – AWS Lambda and API Gateway. So, if you're ready, let's get started: Sign in to your AWS Console at aws.amazon.com. This is the AWS Dashboard. The first service we'll look at is the Amazon
API Gateway. Simply start typing API in the search box and you should be able to get to API Gateway. If you haven't created any APIs before in this account, this is the screen you are likely to see. AWS does change its GUI often, so you might see a different UI, but more or less, it should provide you similar functionality. So, let's get started. Here you'll find an option to create an API. We're going to create a new API, manually. So, let's choose the New API option and give our
API a name. Let's say, we call it, Hello World and hit
the Create API button. Awesome! You'll now be presented with this Resources
screen where we create Resources and methods for our API. Go to Actions and click on Create Resource. Let's call it “message”. And AWS will suggest a resource path or an
API endpoint. This will be appended to our API root URL. Hit the Create Resource button. Instead of the manual approach, there are
quicker ways of creating APIs with API Gateway. And I will take you through each of those approaches later when we dive deeper into this. For now, let's get familiar with the API Gateway
console. Alright, once we have our resource created,
we'll add a method to it. A method is an HTTP verb like GET, POST, PATCH,
DELETE and so on. So, from Actions menu, choose Create Method. And here you can choose any of the HTTP verbs. I'm going to choose the GET method here, so
we can test our API directly from the browser. Select the GET method and click the checkmark
button. We have a couple of Integration types here. We have Lambda function, HTTP, Mock, AWS Service,
and VPC Link. What I'm going to do right now is select a
Mock response. So, we simply hard-code and return a dummy
response from within the API Gateway. Click Save. This will take us to the Method Execution
screen. It can look a bit intimidating when you see it for the first time. But don't worry. It's easy once you understand the idea. When we make an API call, the incoming request
is passed from the Method Request block to the integration request block where we map
or transform the incoming request to the format that our backend understands. In our case there is no backend since we are
simply mocking a response using the Mock integration type. The response received from the backend is
then mapped or transformed in the Integration Response block to match the format that the
calling application expects and then the Method response relays the response back to the client. We'll get to the details of each of these
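In other words, the flow on the Method Execution screen looks roughly like this:

```
Client request
  → Method Request       (receives the incoming API call)
  → Integration Request  (maps the request to the backend's format)
  → Backend              (in our case, a Mock — no real backend)
  → Integration Response (maps the backend response for the client)
  → Method Response      (relays the final response to the client)
```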
later. For now, let's mock a response by returning
a hardcoded JSON string. Click on Integration Response here. And expand this response for HTTP 200 status. Then, under Mapping Templates, you'll find
a template for application/json content type. Click on that. And this is where we map the response returned
by the backend to the response expected by the client application. So, let's simply add a simple JSON string
here. Let's say a key message with value Hello World. Save that and we're done. We can now deploy this API so we can test
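The mapping template we just typed in is nothing more than a hardcoded JSON body:

```json
{
  "message": "Hello World"
}
```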
it from the browser. So from the Actions menu, choose Deploy API. We have to create a deployment stage here. So we can have different stages for development,
testing, and production pipelines, for example. So choose New Stage and give it a name. Let's say Test and click Deploy. So now we're on the Stages screen and our
API has an invoke URL to call the API. And the URL also includes the name of our
deployment stage. Let's click on this URL to open it in a new
tab. It says Missing Authentication Token. This is because this is the root of our API and there is no GET method at the API root. Our GET method is at the /message endpoint. So, if we go to the Resources section and look at the methods, you'll see that our endpoint is at /message. So, let's copy the endpoint path. And we'll paste it right after the API root
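For reference, an API Gateway invoke URL follows this general pattern — the API ID below is made up, not a real one:

```
https://{api-id}.execute-api.{region}.amazonaws.com/{stage}/{resource}
e.g. https://abc123xyz0.execute-api.us-west-2.amazonaws.com/Test/message
```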
here. Hit Enter. And there we go. We see Hello World printed at the output. Amazing! We can also test this method from the Method
execution screen right here by clicking this Test button. We don't have any Path or Query parameters
for this method. Simply click the Test button and we should
see Hello World here as well. Awesome! That was fairly easy, right? We created an API endpoint with a few clicks
on the API Gateway console. And since it was a GET method, we could also
see the response directly in the browser. This was a very simple example. What we basically did here is we mocked a
response within the API Gateway. And, more importantly, this was accomplished
without setting up any servers. Meaning it's a serverless API. And the intention of this lab was to give
you a feel of the API Gateway console. In the next lecture, we'll make this API a
little more dynamic by replacing this mock response with a serverless function using
AWS Lambda. So, let's continue to the next lecture. In this lecture, we're going to create a serverless
function or a Lambda function and hook it up with the API we created in the last lecture. This will allow us to return a dynamic message
instead of simply hardcoding a Hello World string. So let's get started. On the AWS Dashboard, from the services menu,
I'm going to select Lambda from under Compute. And open it in a new tab. Click on Create Function to create a new serverless
function. We can author it from scratch or use a blueprint
as a starting point, or we can also choose a function from the Serverless Application Repository. This is the repository of serverless apps created
by the AWS developer community. We're going to author the function from scratch. So we keep the Author from scratch option
selected. Let's name our function. Say, Get random message. And for runtime, we can use any of these runtimes. We are going to use Node.js for all the demos in this course. Node 6.10 has been selected by default here. We can also use Node 8.10. Let's use the default 6.10 for now, and in the next video I'll show you how to use the Node 8.10 runtime as well. Then for Role, we can choose an existing role
or we can create a new role either from templates, or we can create a custom role ourselves. We are going to create a custom role. So, choose the Create a custom role option from the dropdown. And this will take us to IAM, the Identity and Access Management console. We can either accept the default role name
or change it. I'm going to change it slightly, just in case
I have an existing role with the same name. So, I'll add underscore role at the end, for example. If we click on the View policy document option,
we should be able to see the role policy here. This policy simply grants access to CloudWatch
logs, so our Lambda function can write logs to the AWS CloudWatch service. So click on Allow. And that's it. The role has been added to our Lambda function. Click on the Create function button to continue. Our function is ready now and you can see
that it has access to Amazon CloudWatch logs. If you scroll down, you can see the function
code. It's using the Node 6.10 runtime, and the handler is index.handler. index.handler simply means a function named handler defined within the index.js file. So it's fairly straightforward. The event argument receives the incoming data
from the triggering event. The context object provides information about the function and its execution environment. And callback is the function that returns the response back to the caller. We'll look into the specifics of these arguments a little later. For now, we don't need any of these. So we'll create an array of a few messages and
return a random message as the function response. So let's declare an array variable called
messages. And inside it, I'll add a few message strings. Let's say Hello World, Hello Serverless. And I am going to paste in some more strings
that I already have with me. Just to save some time. And then inside the handler function, we'll
pick a random string from this messages array. So let's say, let message equal the messages array. And inside it, we can use Math.random. Math.random is going to give us a random number from 0 to 1, excluding 1 of course. So we multiply that by 10 to get a random number between 0 and 10. And as you can see, we have about 10 messages here, so we're getting a random index between 0 and 10, excluding 10 of course. And then, we'll use Math.floor to convert this number into an integer. So now that we have a random message, we simply
return it back via the callback. So we can remove this part, the HTTP response part
here and simply return the message variable via the callback. Just like that. That's about it. Save the function. It's a pretty simple function, good enough
for us to get a hang of AWS Lambda. To test this function, click on the test button. We know that serverless functions are event
driven, So, we'll need to configure a test event. Our function does not expect any input data,
so we simply create an empty event to test it. Name the event. Let's say empty Event data. Alright. Click Create. And now we should be able to test this function. Click the test button once again. And we have a success response. Lets expand it. Doing everything I love! Awesome! If we test it again, its going to return some
other string. Yay! I'm learning something new today! And yes, indeed we learnt to build serverless
stuff today. Amazing! You'll see the log output here. And you'll be able to see the same log output
if you click on this logs link. It will take us to CloudWatch console where
these logs are stored. Let's click the latest one, and it's going to show us when the function started, when it ended, and some other information like how long the function ran, what the billed duration was, and so on. Now we can hook this Lambda function to our
API in the API Gateway to make our function return this dynamic message. Why don't we do that in the very next lecture. So, let's continue to the next lecture and I'll see you there in just a bit. Now that we have our Lambda function ready,
let's go to the API Gateway console and hook this function to our API. Let's open the GET method. We'll change the integration type from Mock
to Lambda function. So click on Integration Request. Change the Integration type from Mock to Lambda
function. My Lambda function is in the region us-west-2. So if you look at the Lambda console, you will
find the region code in the URL here. Alright, then we simply start typing the name
of our Lambda function, i.e. Get Random Message, and it shows up here. And click on Save. Because we are changing the integration type from Mock to Lambda, it will ask us to confirm that we want to switch to Lambda integration. Click OK to confirm. And it's going to ask us to give permission
to API Gateway to invoke our Lambda function. Click OK and AWS will automatically assign
the necessary permissions to the API Gateway service to invoke our Lambda function. Let's go back to the Method Execution screen
so we can test this new API. Click Test. There are no parameters for this method. So simply click the Test button. And we see the Response Body – Over the
moon! But now, the response we see is slightly different
from what we had earlier with Mock integration. It's a plain text string. And if you see the response from our earlier
test, we see a JSON string or a JSON object with key message and a corresponding value. So, we have to actually convert this Lambda
response to JSON format. So, let's go back in here and click on Integration
response. Expand the HTTP 200 status response and under
Mapping templates, select application/json content type. Here we have to specify the transformation
rules to map the incoming data to the expected JSON format. We create a simple JSON string here, just
like we did in the last lecture. We add a key called message and in the value,
instead of hardcoding, we pass the body of the Lambda response, the response we receive from the Lambda function We can access the Lambda response body using
Dollar Input DOT body. Just like that. Dollar Input is a predefined variable. And we'll discuss more on this later in the
course. For now, just save this. And back to the method execution screen, let's
test it locally before we deploy the API. Click on test. And now we have the JSON response. Wow! But we see the double quotes appearing twice here. So to correct that, let's go back once again
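Here's what's going on with the doubled quotes. Our Lambda function returns a plain string, and API Gateway hands it to the mapping template as a JSON-encoded string — quotes included. So with the template's own quotes wrapped around $input.body we get two pairs of quotes, and dropping them gives valid output:

```
{"message": "$input.body"}   ->  {"message": ""Hello Serverless""}   (doubled quotes)
{"message": $input.body}     ->  {"message": "Hello Serverless"}
```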
to Integration Response and we can just remove these quotes here. Save. And let's try it again. Click on Test. And there we go. Hello Serverless. Click it again and we should see some other
message. Wow! It's a great day today! And indeed it is. Right? Amazing! Now, before we can test this in the browser,
we have to redeploy our API. So once you've completed testing locally,
we can choose Deploy API from the Actions menu. Select a stage and Deploy. And now our API has been deployed. This is API root URL and we have to append
it with our resource endpoint i.e. SLASH message. So, if we refresh this page, we should see
a new response every time. And we can see a different message as we refresh
the browser. World at my feet! Over the moon! Amazing! I hope you're over the moon as well! So I hope this quick hands-on on AWS Lambda
and API Gateway has you pumped up and excited and made you interested to learn even more
about this amazing world of serverless computing. And if it has, let's continue further and we'll
dig into this even deeper. So, thank you for joining me here and I'll
see you in the next lecture in just a few seconds. We used Node.js 6.10 runtime in the last lecture. So, I wanna quickly show you the same example
using Node.js 8.10 runtime. So, inside the Lambda function, we simply
switch the Runtime from 6.10 to 8.10. Save the function, and it's still going to work, because this JavaScript code is valid in Node 8.10 as well. So, let's test it out. And we can see that it is working well. And if we wanted to use the async/await feature
that's supported by this new Node 8.10 runtime, we can do that as well. So what I'm going to do is simply add async
here before the function definition. And in that case, we won't have a callback. So we remove the callback. And inside the function, instead of the callback,
we simply return the message. Just like that. We don't need to await anything here, so I'm
not going to use the await operator. So, save the function. And let's test it out. And it's working fine. So it's still a great day today! And if we go back to our API in the browser
and test it, it's going to work. We don't have to re-deploy the API in this
case, because we did not change anything in the API Gateway. So refresh this page, and we can see that it's
still working. Just to demonstrate that the API is actually
showing the response from our new Lambda function running on Node 8.10, lets go back to the Lambda
console and change our message strings slightly. So what I'm going to do is simply add an extra
exclamation mark to each of these message strings. Save the function. And now if we refresh the browser, we should
see messages with two exclamation marks. And it's working as expected. So it's still a great day today! Awesome! So thank you for your time, and in the next lecture, we'll look at some key features along with some pros and cons of Serverless Architecture. Now let's talk a little more about the pros and
cons of serverless computing. First, as we discussed, there are no servers
or operating systems, or hardware or software to maintain. And that makes your life as a developer or DevOps engineer so much easier. So, kind of more me-time for you, and you can be more productive and laser focus on creating stunning applications. So serverless is all about faster innovation,
high productivity, and faster time to market. And in most cases serverless applications
require little to no administration. Alright, the next benefit is easy and efficient
scaling. Serverless applications can be scaled automatically
or at the most with a few clicks to choose your desired capacity. And there is no need to create any specialized
scalable architecture or designs. You can get a large number of serverless functions
running within seconds, and each function runs for a few hundred milliseconds to a few
minutes. And you can allocate resources for each of these
functions individually. And this is going to allow us to scale our application
easily and at the same time efficiently. Thirdly, serverless approach provides built-in
high availability and fault tolerance. You don't need to have any specialized infrastructure
to make your applications highly available or fault tolerant. All your applications get this benefit of availability
and fault tolerance by default, irrespective of whether you're just building a hello world app for your
testing, or you are building the next Facebook or YouTube. Service Integration is another benefit. AWS provides a host of services that readily
integrate with each other, and this is going to allow you to do a lot of things very easily – for example, you could be sending text notifications and emails, running analytics, hosting APIs, storing files, running automated workflows, deploying machine learning
models and so on. And all this can integrate seamlessly with
your serverless application. Then the real benefit of serverless is that there is no idle capacity. You pay only for what you use and no more. For example, with traditional architecture, say you created a server with 100 gigs of memory and you're only using about 10 gigs of it. You'll still have to pay for the 90 gigs you
never used. But with the serverless architecture, you pay
only for what you use. So if you used 10 Gigs, you only pay for those 10 Gigs. Also, with AWS Lambda which is the core component
of Amazon's serverless platform, you pay only for the time your code runs. So, there is no charge if your code is not
running. If your code runs for say 100 milliseconds
you are charged only for that 100 milliseconds and no more. And that's really a very fine-grained control
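To make the pay-per-use billing concrete, here's a back-of-the-envelope sketch. The rate below is a made-up placeholder, not an actual AWS price — it's only meant to show the shape of the calculation:

```javascript
// Hypothetical per-GB-second rate – NOT a real AWS price, just for illustration
const RATE_PER_GB_SECOND = 0.00002;

// You're billed for memory allocated multiplied by billed execution time;
// if the code never runs, the compute cost is simply zero
function lambdaComputeCost(invocations, billedMs, memoryGb) {
  const gbSeconds = invocations * (billedMs / 1000) * memoryGb;
  return gbSeconds * RATE_PER_GB_SECOND;
}

// One million 100 ms invocations at 1 GB of memory: 100,000 GB-seconds
console.log(lambdaComputeCost(1000000, 100, 1)); // ~2 currency units
// No invocations, no charge
console.log(lambdaComputeCost(0, 100, 1)); // 0
```

Compare that with paying around the clock for an idle server, and you can see where the savings come from.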
and certainly this results in substantial cost savings for your business! Even with all these benefits, serverless
may not be the solution to all your problems. It's not a silver bullet and does come with some challenges, though not many. The first one is vendor lock-in. So, there are a handful of cloud providers like Amazon AWS, Microsoft Azure, Google Cloud Platform, IBM Cloud, and some more. And they really want you to use more and more
of their services, for obvious reasons. And they want to build an ecosystem of related
services that work together. This may not be a deal breaker though and
we'll discuss what is called as multi-provider serverless later in the course as a way to
alleviate this challenge. Another concern with Serverless is the public
cloud. The serverless architectures run on the public
cloud. Some use cases or industry-specific regulatory
requirements may be a deterrent to using this public cloud. So these cases might be very few, but then again
if your use case lies in this category, then serverless may not be a good fit. Having said that, Serverless can be run on
private clouds as well, so you could still leverage serverless in such cases. However, we're not going to get into that
discussion here. Let's just focus on AWS serverless platform
for our purpose here. Another point to keep in mind is the level
of control. So essentially, you are giving up some degree
of control by letting someone else manage the infrastructure for you. And in some situations, wherein you need more
control of the hardware resources or of the OS level resources, this might prove be a
limitation. The
core component of the AWS Serverless Architecture is the compute service called AWS Lambda. Lambda lets you run your serverless functions
in the AWS cloud. AWS Lambda was launched back in 2014 and that's
when the idea of serverless was born. The other players quickly followed suit. And with growing use of social media, streaming
data and Internet of Things devices, the relevance of serverless computing is rising very very rapidly. AWS Lambda is still the market leader in the
serverless space. If you look at this Google trends graph comparing
the major players in the serverless space, you'll see the that Lambda has been growing
consistently and at a much faster pace than the other players. Although Lambda itself is fairly new having
launched just a few years back, the other players like Microsoft Azure functions, Google
Cloud Functions, IBM Cloud Functions are even newer. Similarly, AWS as a cloud provider is still
preferred highly as compared to other players like Microsoft Azure, Google Cloud Platform
and IBM Cloud. Besides being the first to enter the serverless
space, AWS Lambda draws strength from being part of the huge AWS ecosystem, allowing
seamless integration with a host of other AWS services. In this course, we're going to mainly focus
on the serverless offering provided by the AWS platform. So, in the following lecture, let us look
at the core services in the AWS ecosystem that form the foundation of the AWS serverless
platform. And we'll also talk about some additional
AWS services that are popularly used along with the core services in the AWS serverless
space. As part of its serverless platform, AWS provides
a set of web services that you can use to build and run your serverless applications. So, in this lecture, let me give you a quick
overview of different AWS services that form the AWS Serverless platform. All these services are fully managed, meaning,
you can focus on your core application logic and AWS does the rest for you. There are three core services that you can
find in almost every serverless application, and these are AWS Lambda, Amazon API Gateway
and Amazon DynamoDB. These services are the foundation or the backbone
of the AWS Serverless platform, so to speak. And then there are additional web services
that you can use in serverless applications depending on your use case. And these could be Amazon S3, SNS, SQS, AWS
Step Functions, Kinesis, Athena and so on. So, let's go over these services briefly and
then we'll discuss the common use cases of the serverless architecture as well. When it comes to serverless applications,
AWS Lambda is the most important service that you will need in your applications. AWS Lambda lets you run your code without
having to create or manage any servers. You simply upload your code to Lambda and
Lambda does the rest for you and ensures that your code runs on highly scalable and highly
available infrastructure. Each piece of code that you upload to Lambda
is called a Lambda function, and it runs in an independent, isolated environment also
called a container. As discussed earlier, with Lambda you pay
only for the time your code runs and there is no charge when your code is not running. Lambda provides fine-grained access control
letting you decide who can invoke your Lambda functions and which services your Lambda functions can access. Lambda also comes with version management
capabilities allowing you to manage your deployments efficiently. Second service is the Amazon API Gateway. API Gateway, as the name indicates, helps
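To make this concrete, here is a minimal sketch of what a Node.js Lambda function looks like — the event field and the message text are hypothetical placeholders; we'll write real functions together in the hands-on sections:

```javascript
// A minimal Lambda function sketch (Node.js). Lambda invokes the exported
// "handler" with an event object describing what triggered it.
// The event field "name" below is hypothetical, just for illustration.

// Pure helper, so the response logic is easy to test locally.
function buildResponse(event) {
  const name = (event && event.name) || 'world';
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}

// The Lambda entry point: async handlers simply return the response.
exports.handler = async (event) => buildResponse(event);
```

Uploaded to Lambda, a function like this runs on demand — no servers to provision, and no charge while it sits idle.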
you create and publish APIs and it tightly integrates with AWS Lambda to let you create
completely serverless APIs. Again, this is a fully managed service and
you can use it to build RESTful APIs. RESTful APIs follow a client-server
model, where the client holds the state and the server is completely stateless. These APIs can then be consumed by your web
or mobile applications, allowing you to interact with different AWS services through your code
running on Lambda. API Gateway can handle thousands of concurrent
API calls. It also gives you full control to create your
APIs with fine-grained access control and version management capabilities. The third core service of the serverless platform
is Amazon DynamoDB. DynamoDB is a highly scalable, high-performance
serverless NoSQL database. DynamoDB can scale on demand to support virtually
unlimited concurrent read/write operations with response times in single-digit milliseconds. And then, you have DynamoDB DAX or DynamoDB
Accelerator, which can further bring down the response times from milliseconds to microseconds. DAX is a caching service provided by
Amazon on top of Amazon DynamoDB. AWS Lambda, API Gateway and DynamoDB are the
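For a first taste of DynamoDB's key-value model, here is a hedged sketch of the request parameters you'd hand to the AWS SDK's DocumentClient (e.g. `docClient.put(params)` / `docClient.get(params)`) — the table and attribute names are hypothetical, and no AWS call is made in this sketch:

```javascript
// Hypothetical "notes" table with a composite primary key:
// user_id (partition key) + timestamp (sort key).

const putParams = {
  TableName: 'notes', // hypothetical table name
  Item: { user_id: 'u1', timestamp: 1, title: 'Hello DynamoDB' },
};

const getParams = {
  TableName: 'notes',
  Key: { user_id: 'u1', timestamp: 1 }, // must supply the full primary key
};
```

We'll create real tables and wire them to Lambda functions in the DynamoDB section.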
three most commonly used services in serverless applications. And we'll dive deeper in to all three of these
services in the upcoming sections of this course. Now, depending on your use cases, you might
need other web services as well. Let's look at these additional services right
in the next lecture. Amazon Simple Storage Service or S3 is a very
simple and intuitive web service that you can use to store and access your data from
anywhere on the web and with fine-grained access control. S3 also lets you build static websites that
can interact with your Lambda code. So, you can really use S3 as a front end of
your serverless applications. With serverless applications, since there
are no servers, you need a reliable way for different web services or different software
components to talk to each other or communicate with each other. And AWS provides two services that you can use
here for inter-process messaging – SNS and SQS. SNS or Simple Notification Service is a fully-managed
notification service that allows you to publish notifications and any services or software
components that subscribe to these notifications will receive these messages. The second service is SQS, or Simple Queue Service.
This is a very simple and intuitive messaging service that you can use to send and receive messages
at virtually any volume. SQS allows multiple producers and consumers
to write and read messages to and from the same queue, and it retains messages up to a certain
pre-defined retention period or until you explicitly delete them. And then, in any serverless application there
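The difference between the two messaging models can be sketched in plain JavaScript — a toy in-memory illustration of the semantics, not the AWS API:

```javascript
// SNS-style topic: each published notification fans out to every subscriber.
function createTopic() {
  const subscribers = [];
  return {
    subscribe: (fn) => subscribers.push(fn),
    publish: (msg) => subscribers.forEach((fn) => fn(msg)), // fan-out
  };
}

// SQS-style queue: a message waits in the queue until a consumer
// reads it and then explicitly deletes it.
function createQueue() {
  const messages = [];
  return {
    send: (msg) => messages.push(msg),
    receive: () => messages[0],  // message stays visible until deleted
    remove: () => messages.shift(), // explicit delete, like SQS DeleteMessage
    size: () => messages.length,
  };
}
```

In short: SNS pushes a copy to everyone who subscribed; SQS holds each message for whichever consumer picks it up.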
could be several Lambda functions working together, and your application might need
a way to orchestrate these functions, i.e., execute them in a specific order or depending
on certain logic or certain conditions that might be known only at the runtime. So AWS Step Functions is a service that helps
you with this orchestration. AWS Step Functions lets you build visual workflows
to coordinate different Lambda functions to work together and form a larger serverless
application. Then coming to Analytics, Amazon provides two web
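A hedged sketch of what such a workflow definition looks like in Amazon States Language — the state names and function ARNs here are hypothetical:

```javascript
// Step Functions workflows are described in Amazon States Language (JSON).
// This sketch chains two hypothetical Lambda tasks in order.
const stateMachine = {
  Comment: 'Run two Lambda functions in sequence (hypothetical ARNs)',
  StartAt: 'ValidateInput',
  States: {
    ValidateInput: {
      Type: 'Task',
      Resource: 'arn:aws:lambda:us-west-2:123456789012:function:validate',
      Next: 'SaveRecord', // hand the output to the next state
    },
    SaveRecord: {
      Type: 'Task',
      Resource: 'arn:aws:lambda:us-west-2:123456789012:function:save',
      End: true, // terminal state
    },
  },
};
```

We'll build and run real state machines in the Step Functions section.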
services: Amazon Kinesis and Amazon Athena. Kinesis is a platform for streaming
data applications. And if your application requires you to work with
or analyze streaming data in real time, you may want to use Amazon Kinesis. Amazon Athena is an interactive query service
that you can use to query your data stored in Amazon S3 using standard SQL syntax. You also have tracing and debugging tools like AWS X-Ray,
and monitoring & logging tools like Amazon CloudWatch. Then we have Amazon Cognito, which is a serverless
user authentication and management service. It supports authentication of users for your
application via usernames and passwords, and also via federated identity or OpenID providers
like Facebook, Google, Twitter, Amazon and so on... and you can also have your own custom OpenID providers. So we discussed some of the many AWS services
that we can use with our serverless applications and we'll explore these services throughout this course. AWS also provides you with fully functional
SDKs that you can use to build your serverless applications. AWS Lambda supports several programming languages
like Node.js (JavaScript), Python, Java, C# (.NET Core), and also Go. Throughout this course, we'll build several
demos and learn how to make use of these services within our serverless applications. In the next lecture, let's go over some of
the prominent use cases for serverless architecture. Let's look at some of the use cases for serverless
applications. The serverless architecture allows you to
build just about any type of application or back-end service that you can think of. Let's go over some of the most common use
cases where you can use serverless architecture. The first and the most prominent one is application
backends. Serverless allows you to build backends for
Web, Mobile or even IoT applications. And later in this course, we'll build all
these three types of backends together, with hands on. Serverless web applications and their backends
typically make use of AWS Lambda, API Gateway, DynamoDB, S3 or even Step Functions as needed. End-users might interact with your applications
through web browsers, mobile smartphones, or even IoT devices. And these frontends act as event sources that trigger
our Lambda code. Each incoming request from the end user is
typically received by one of the endpoints exposed by API Gateway. API Gateway then triggers a call to Lambda
function, and the Lambda function then coordinates with different web services like S3 or DynamoDB
to generate a response and then returns that response back to the end user, possibly through
the same channel, API Gateway. Amazon S3 can be used to create the app front-ends
in this case. These frontends are wired to the serverless
backends by means of APIs that we create using the API Gateway. Such applications could have very unpredictable
usage patterns with periods of extremely high user traffic followed by minimal to no traffic. And your application must be geared to handle
these spikes in traffic without the need for any expensive infrastructure. With AWS Lambda, your applications can scale
on demand and you pay only for what you use. And in most cases, such serverless applications
require almost zero administration. Another use case for serverless architecture
is real-time data processing systems or streaming data processing systems. You can use Amazon Kinesis or Kinesis Firehose
along with Lambda, DynamoDB and S3 to create your real-time data processing systems. These systems typically have variable workloads
and the serverless approach helps here by scaling automatically during high workloads
and then scaling down and saving costs during the idle time. Amazon Kinesis is a service that lets you
collect, process and analyze real-time streaming data from multiple sources, at absolutely any scale. And you can literally process terabytes of data
every hour, coming in from thousands and thousands of sources simultaneously. For example, social media streaming data from
several sources can be loaded into Kinesis simultaneously and then processed in real-time
using AWS Lambda and DynamoDB. Kinesis also provides Data Analytics services
that can be used to build Real-time Analytics applications. Later in the course, we go through hands on
demos where we'll process streaming data from Kinesis as well as from DynamoDB streams using
AWS Lambda. Automation and continuous delivery are even
more crucial when building serverless applications. These tools make your life as a developer
or as a DevOps engineer so much easier. A typical serverless application would have
several Lambda functions and a number of API endpoints. And configuring each of them manually, maintaining
their versions and following a uniform development or deployment process manually is not a great idea. It's time-consuming and prone to errors. Automating these tasks fully or at least partly
will make your application life cycle easier and more manageable in the long run. Ideally, you should be able to maintain a
codebase in a source control system or a version control system, run tests, and deploy your
application automatically. The only time you should deploy Lambda functions
manually or configure API Gateway using the web console is while you learn. Once you begin working on real serverless
applications, you should have a well-defined deployment process and ideally it should be
automated. There are several tools that play an important
role here and we'll quickly go over these in this lecture and later in the course we'll
learn how to use these tools as well. There are two frameworks that help you work
with your serverless applications efficiently – The AWS SAM and the Serverless Framework. AWS SAM or the Serverless Application Model
is a tool provided by AWS, while the Serverless Framework is a popular third-party tool
provided by a company named Serverless Inc. First, let's talk about the AWS SAM or the
Serverless Application Model. SAM is essentially a simplified extension of AWS CloudFormation. CloudFormation is a service that allows
you to automate creating and deploying various AWS services quickly using a text-based template
file. AWS SAM uses similar template file but with
a simplified syntax, more suited for serverless applications. CloudFormation internally converts this SAM
template into the standard CloudFormation syntax to create and deploy our serverless resources. We'll dive deeper into SAM and learn how to
use it later in this course. Now, lets discuss the alternative third party
framework called the Serverless Framework. This is an open-source tool and works along similar
lines to AWS SAM. The template syntax it uses, however, is slightly
different but equally easy. The serverless framework is quite a popular
tool and many organizations are using it for their application deployments. The Serverless Framework also has an open
source plugin system which allows anyone to extend its functionality. And hence it provides rich features beyond
from being able to deploy serverless resources. We'll learn how to use the Serverless Framework
later in this course. Irrespective of which framework you use, continuous
integration or continuous delivery is something you should consider using in order to automate
your application deployment process. AWS provides a host of tools for this purpose. AWS CodeCommit is a version control or source
control system that provides Git-based repositories that you can use as your codebase. This allows you to maintain private repositories
of your application code. As of recording this video, AWS Free tier
provides you access to unlimited private repositories with about 50 GB of storage and 10,000 Git requests
per month for up to 5 active users, at no cost to you. Then we have AWS CodeBuild, which allows you
to build your serverless code and create or update AWS resources automatically via CloudFormation. This allows you to deploy your serverless
applications as well. AWS CodePipeline then is a service that allows
you to define the delivery or deployment cycle from source repository through deployment
and automate the whole application deployment pipeline. Later in the course, we'll learn how to use
these tools together to automate the deployment process for our serverless applications. That's all for this lecture. In the next lecture, let's get our development
environment setup so we're ready for hands-on practice as we dive deeper in to serverless
computing with AWS. So thank you for your time and I'll see you
in the next lecture in just a few seconds. In this video, we will create and set up a
new AWS user to access AWS. This will allow us to connect to AWS resources
from our local computer. We'll also install the AWS CLI tool so we
can connect to AWS using the command line or the terminal. If you're going to practice the examples presented
in this course with me, then please do follow along with this video and have your local
environment setup before we dive into the hands-on sessions. To get started, you'll need access to an AWS
account. I am assuming that you did the hello world
example with me and so you already have an AWS account. However, if you still do not have one, this
is the time to sign up for a free AWS account on the AWS website at aws.amazon.com. You
can also log in to AWS using your amazon.com retail account. So, pause this video here and grab your AWS
account details. Once you have them with you, sign
in to the AWS Console at aws.amazon.com and we'll continue from there. Alright, I hope you're logged in to your AWS
account now. So let's setup a new AWS user. It's fairly simple to do that. From the AWS Console dashboard, search for
IAM and open it. Go to the Users page and click on Add User. Let's name the user serverless-admin. You can name it whatever you like, but since
we're going to use this user later on with the Serverless Framework to deploy our applications,
naming it serverless-admin might make it easy for us to remember that this is a framework
user. And, since we'll use this user to connect
to AWS from our applications, we only need to provide this user with programmatic access. AWS Console access is not required since we
will not be logging in to AWS console with this user. But if you want to login to the AWS console
with this user, then you may want to enable the console access as well. I am going to leave it unchecked though. When you choose only the programmatic access,
AWS will not create any password for this user, and will only create a set of API credentials
i.e. an access key and a secret key. Click Next. Now we need to tell AWS, what permissions
this user would require. Click on Attach existing policies directly. Search for and select Administrator Access. This is going to provide this user full access
to your AWS account. Ideally, we may not want to do this, but the
Serverless Framework is kind of permission hungry and requires a lot of permissions for
the framework user. So, while we're learning, it's best to use
administrator access, so we can focus on learning. Just be sure to keep the credentials for this
user in a safe place. Click next and review if everything looks
fine. And then click the Create User button to create
this user. AWS will show us the API credentials. Copy or download them to your local computer
and keep them in a safe place. There are several ways to setup these credentials
on our computer. We are going to simply add them to our environment
variables on our computer. I am using Windows 10 for this course. The steps to setup the environment variables
on Mac or Linux are slightly different and I'll come to it in a bit. If you're on Windows, preferably login with
administrator access and then from the start menu, search for ‘Environment Variables'
and choose to edit environment variables for your account. Then, under User variables, click New. Add the API credentials we downloaded
as key-value pairs here. So, let's add AWS_ACCESS_KEY_ID as the key and
our actual access key for the user we created as its value. Similarly, we'll also add AWS_SECRET_ACCESS_KEY
as key and our actual secret key for this user as its value. It's also a good idea to set up a default region. So, let's add one more environment variable,
AWS_DEFAULT_REGION, and set this to the default AWS region that you're using. I'll be using us-west-2, and you could use
the same or any region that's closest to you. You can choose a suitable region from the
top-right corner of your AWS Management Console. You could also do this same thing via the command
prompt or PowerShell by using the set command as shown here. If you're on Mac or Unix, you can set up your
environment variables via the terminal. So simply open up your terminal and use the
export command as shown here. In the next lecture, let's install the AWS
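Once set, these variables are picked up automatically by the AWS CLI and the AWS SDKs. Your own Node.js code can read them too — a small sketch for checking your setup (the us-west-2 fallback is just for illustration):

```javascript
// The AWS SDK reads these environment variables automatically;
// here we just inspect them ourselves as a sanity check.

const region = process.env.AWS_DEFAULT_REGION || 'us-west-2'; // illustrative fallback
const hasCredentials = Boolean(
  process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY
);

console.log('region:', region, '| credentials present:', hasCredentials);
```

If `credentials present` prints false after you've followed the steps above, re-open a fresh terminal so the new variables are loaded.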
CLI so we can interact with AWS using the command line or the terminal. In this lecture, we'll install the AWS CLI
or the command line tool to work with AWS from the command line or the terminal. I am assuming that you've created an IAM user,
downloaded the credentials and have set up the environment variables. If not, please review the previous video. Specifically, you should have added the AWS
Access key ID, AWS Secret access key and the default region to your environment variables. Alright, lets continue. We can simply Google for AWS CLI install and
we should be able to find the page. I am also putting a direct link in the resources
section of this video, so you can use it to go to the download page. Use the left sidebar to locate the installable
for your operating system. I am using Windows, so I will locate it here. If you are on Mac or Linux, you can use the
appropriate link. The easiest way for Mac or Linux OS is to
use the bundled installer using the link here. I'm on Windows, so I am going to find the installation
package on the Windows link. If you scroll down, you'll find upgraded MSI
installers and these use the latest version of Python. You can download the appropriate version. I am on a 64-bit machine, so I'll use the first
link. If you're not sure whether you have a 64 or
32-bit machine, simply download the installer from the third link and it would automatically
detect the type of machine you have. So, let's download and run this installer and
we should be good to go. If you're on windows, you can use command
prompt or PowerShell and on Mac and Linux, you'll use the terminal to run the CLI commands. An AWS CLI command always starts with aws
followed by the name of the service and then the name of the action that we intend to perform
and then we specify additional options relevant to the requested action. For example, if we say, aws help and hit enter,
it would give us the syntax documentation for using this CLI. You can scroll by using the enter key. And use Ctrl+C to exit. Similarly, if we say, aws Lambda help, and
hit enter, we would get the documentation for using CLI with the AWS Lambda service. You can find the same documentation over the
web as well. Simply Google for AWS Lambda CLI commands,
and the first result should take you to the appropriate page. I have also added the link to this documentation
in the resources section of this video, so you can open it directly as well. That's about it. In the next lecture, we'll install Node.js,
VS Code and Postman. I'll be using node.js for all the demos in
this course. So, I'll show you how to install node.js on
your computer. And optionally, you can also install an IDE
or Integrated development environment of your choice to write your code. I'll be using the IDE that I like which is
VS Code and if you'd like to use it as well, I'll show you how to install it. Many of our demos will include HTTP or REST
APIs, so you'll need some kind of API testing tool. You can use any tool of your choice. I like using Postman tool. So if you'd like
to use it, I'll show you how to install it. If you're familiar with these steps or have
these tools already installed, you can skip over to the next lecture. Alright, let's install Node.js. If you are on Mac or Linux, open the terminal
or on Windows, open the command prompt or PowerShell. First let's find out if you've node on your
machine or not. So, run node space dash dash version. You can see that on this machine, I am running
node version 8.11.1. On your machine, you may or may not have node
installed or you might have a different version. Either way, I want you to install the latest
version of node. So, open up your browser and head over to
nodejs.org. On this page, you can see, we have two versions
of node. One is the latest stable version, LTS version
8.11.4, which is recommended for most users at the time of recording this
video and there's always a newer version which is experimental and it might not be stable. So, I want you to install the latest stable
version that you see in your browser. The latest version might change in the future;
however, the installation steps would mostly remain the same. So, let's go ahead... download the LTS version for your operating system and run the installer. Simply follow the prompts and your installation
will be done. Once done, go back to the command prompt or terminal and let's run node --version one more time. And, you can see I upgraded my node
to version 8.11.4. Now let's install an IDE if you don't have
one already. You can use any IDE of your choice or you
can as well use any simple text editor like Notepad, Notepad++, Sublime, or the like. I prefer using VS Code, and it is available
across platforms. So, if you'd like to use VS Code as well,
go to code.visualstudio.com in your browser and here download the installer for your operating
system. Run the installer and follow the prompts. The installation should be straightforward. Optionally you can check the options to enable
Open with Code context menu and associate related files with VS Code. I like to have them all checked. Continue with the setup and you should be
all set. We also need a tool to test our APIs. So we're going to install Postman app. You can also install the postman Chrome extension
if you like. So, head over to getpostman.com/apps
and download the installer and run it. And the installation
should be straightforward. That's about it. Let's test the setup in the very next video. Now let's test the setup to make sure we've
done all the steps correctly. Open VS Code or launch command prompt or the
terminal if you are using other IDE. To open the terminal from within VS Code,
simply press Ctrl+` (Ctrl backtick), then run node --version. You should see the node version we just installed. Now let's test if we can connect to AWS from
our machine. From the terminal, type “aws sts get-caller-identity”
just like that and hit enter. And, this should display information about
the AWS serverless-admin user that we created and added to the environment variables. If it does, that means we have configured
the environment variables correctly. If you see an error, that means the environment
variables were not set up or were set up incorrectly and you'll have to set them up again. So if you're seeing any errors, you might
want to review the earlier video where we added environment variables. That's about it, we have our environment ready
for building the demos in this course. So, let's keep going. Now we have our development environment ready
and we also have a fair idea of what we're going to learn in this course. So, before we continue further, in this lecture,
I'll share my recommended approach to taking this course, so you can get the best value
from it. First off, you should watch the videos. And if something is not clear or apparent
at first, feel free to watch that video again. And, if you find that I'm going too fast or
too slow, this video player has a playback speed control. Make use of that and adjust the speed to your
liking. So, if you find my pace too slow, you can
speed me up by up to 2 times and if you find that I am going too fast, you can slow me down
as well using the same playback speed control. I also highly encourage you to code along
or build along with me. Do write the code, try out the demos, test
out the examples that I show you. And this will only help solidify your understanding
of the ideas and concepts more effectively than simply watching the videos. Also, be mindful of the AWS pricing if you
are experimenting a lot. However, overall, most of the demos that I
show you in this course will be well within the free tier, but a subset of them may
not be free. So, do check the current pricing on the AWS
website. And, occasionally, there could be times when
you need help or need more information. And to help you out in these situations, I provide
some resources like source code files, solutions as well as additional reference links. You'll find these attached to various lectures. And you can certainly download these and compare
my solution with yours. And if you still are stuck, or have some other
queries or questions related to the course, please do not hesitate to ask them. Ask in the Q&A section. And I am always here to help you and to guide
you. Also help other students if you can. Participate in the Q&A section, open new discussions
and get the ball rolling. That's about it. There are some background topics – specifically
JSON, YAML, Node.js, JavaScript. And you may want to review them before we
dive in to the main serverless topics. I have created two optional sections for those
of you who are new to these topics. These optional sections are included at the
end of this course. If you are already familiar with these topics,
feel free to skip them. If you are new to these topics though,
then I highly recommend going through those optional sections before you start with the
next section. These optional sections as I said are available
at the end of this course. That's all for right now and let's finally
dive in and get the ball rolling. We already covered what serverless computing
is and I also introduced you to AWS Lambda in the very first section of the course where
we created our first serverless function using Lambda. Just to summarize, serverless computing is
a cloud computing execution model in which the cloud provider dynamically manages the
allocation of infrastructure resources, so we don't have to worry about managing servers
or any of the infrastructure. And AWS Lambda is an event-driven serverless
computing platform, or a compute service, provided by AWS. The code we run on AWS Lambda is called
a Lambda function. And as we saw in the hands-on lab we did in
the first section of this course, we can simply write these Lambda functions right inside
the Lambda console, or as we'll see in this section, we can also write them locally on
our computer and upload them to Lambda. Later in this course, we will also learn how
to streamline this development process using frameworks like SAM, the AWS Serverless Application
Model, or by using the Serverless Framework. Once the code is deployed to Lambda, it is
ready to run. And it runs whenever it is triggered by a
preconfigured event source. Lambda functions can be triggered by numerous
event sources like API Gateway calls, S3 file uploads, changes to DynamoDB table data, CloudWatch
events, SNS Notifications, third party APIs, IoT devices and so on. And we're going to look at many of these examples
in detail in the demo sessions towards the end of this course. The Lambda functions run in containerized
environments which spring into action only when triggered by an event-source. So no resources are consumed, and therefore
there is no billing for the idle time. We are charged only for the time our Lambda
functions run, and billing is done in increments of 100 milliseconds of the compute time. So, if a Lambda function runs for say 90 milliseconds,
we'd be billed for 100 milliseconds, while if it runs for, say, 200 milliseconds,
then we'll be billed for 200 milliseconds. And about 1 million requests per month are
free. So, in short, AWS Lambda is very low cost
and does not require any upfront investment. We'll look at Lambda pricing in detail later
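The rounding rule just described can be sketched as a tiny helper — this is based on the 100 ms billing increments mentioned here; do check current AWS pricing, since these numbers can change:

```javascript
// Round a function's duration up to Lambda's billing increment
// (100 ms at the time this course was recorded).
function billedMs(durationMs) {
  const INCREMENT = 100; // billing increment in milliseconds
  return Math.ceil(durationMs / INCREMENT) * INCREMENT;
}
```

So a 90 ms run is billed as 100 ms, a 200 ms run as exactly 200 ms, and a 250 ms run as 300 ms.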
in this section. That said, let's continue to the next lecture
where we'll do a quick walkthrough of the AWS Lambda console. Welcome back. In this lecture, let's do a quick walkthrough
of the AWS Lambda console. I have the Lambda console open here and you
can see the Get Random Message function that we created back in the last section. If you click on this box with the function
name, you'll see the function code here. This is the function code. And you can either edit it inline or upload a
ZIP file, or you can upload the same ZIP file to S3 and then load it from S3 using this
option. We're going to look at this a little later. Then we have runtime. There are various runtimes that are available
– C# (.NET Core), Go, Java, Node.js and Python. We're going to use the Node.js 8.10 runtime throughout this course as it is the latest Node.js runtime available as
of now. The handler is index.handler, which means
the function named handler located inside the index.js file. As you scroll down, you'll see environment
variables. Here you can add key-value pairs that the
function code needs at runtime. So you can add environment specific constants
or API keys here and then you can access them inside your code. If the values of these environment variables
are sensitive, then you can also explicitly encrypt them using the KMS service. And if you do so, you'll also need to decrypt
them within your function code. And we are going to dive in to this later
in this course. Then we have tags. Tags can be used to group and filter
your functions. These are case-sensitive key value pairs. Execution role is the Lambda execution role
that we added previously. You can make changes to the role here if needed. You can choose another role or create a new
role from templates or create a custom role using IAM. Then we have basic settings. You can add a function description if you
like. You can also adjust the max memory that your
function requires. And it's worth mentioning here that the CPU
resources allocated to the function are always in proportion to the memory size we choose
here. You can choose memory from 128 MB all the
way up to 3008 MB. And then you can set timeout in seconds. You can set this value anywhere from 1 second
to 15 minutes. So the maximum amount of time any Lambda function can run is 15 minutes. This was earlier limited to 5 minutes; AWS now allows Lambda functions to run for up to 15 minutes. Timeout is one of the essential considerations
when building serverless apps. Ideally, each Lambda function should be designed to perform a single, specific task rather than several tasks. You can segregate your application logic into multiple Lambda functions, each performing one specific task. This approach is often used in microservices
architectures and such microservices based architectures are best suited for serverless
applications. Then we have VPC settings under the Network
section. Here you can attach your Lambda function to
a VPC. You simply choose your VPC, add the desired subnets (at least two of them) and select the VPC security group here. When you attach a Lambda function to a VPC, the function runs within that VPC and has access only to the resources inside the VPC. That means it loses access to resources outside the VPC, including the internet. If it does require internet access, you need to open appropriate outbound connections in the selected security group, and the VPC will also need a NAT gateway in that case. We're going to look at running Lambda functions
within VPCs later in this course. Then here, you can define DLQs or Dead Letter
Queues. These are useful when a Lambda function keeps failing even after multiple retries. If a Lambda function errors out, Lambda can send the event payload either to an SQS queue or as an SNS notification. We'll be learning how to use DLQs later in
this course. Then we have concurrency limits. Here we can define the maximum number of concurrent
executions possible for your Lambda function. All AWS accounts receive a concurrency limit
of 1000 and this is applied across all the Lambda functions in your AWS account. If you reserve concurrency limit for a particular
function, then remaining concurrency limit will be shared by other Lambda functions present
in your account. So if you reserve 500 concurrent executions
say for this function, then the other 500 will be available to remaining Lambda functions
and if you reserve 900, then only 100 will be available to the remaining Lambda functions. And finally, auditing and compliance can be set up using the AWS CloudTrail service, which lets you log the function's invocations if your organization requires that for auditing and compliance purposes. Then, we have test options in the top right
corner. You can use the dropdown to configure test
events. Simply select configure test events option
and you can add your event here, save it and then use it for testing just like we did earlier. Under Actions menu, we have options to create
versions and aliases. We will deep dive in to this later in the
course. You can also delete or export your functions
using these options here. If you click on export, you'll be able to download the function code as an AWS SAM file or as a ZIP file using the Download deployment package option. We'll learn about AWS SAM later in this course. Under Qualifiers, you can choose the versions
and aliases. The Throttle option can be used to throttle the function, which can be handy in an emergency. Then we also have the Monitoring tab that allows
us to monitor the function at runtime. You can click through to CloudWatch or X-Ray
services to view the logs and traces in detail from here. That's more or less all about the Lambda console. Let's talk about the permissions model in the very
next lecture. AWS Lambda uses a decoupled permissions model. The service or event that triggers the Lambda
function only requires the necessary permissions to invoke the Lambda function. Here, you will see different triggers on the
left. We have API Gateway event trigger added to
this function. If you click on this box, you'll be able to
see and edit the trigger configuration below. And you can add more triggers from the left
side. There are several event triggers that can
trigger a Lambda function, and you can see them listed here. When you add an event trigger to your Lambda function, an appropriate IAM policy to invoke the Lambda function is attached automatically. This policy is called the Lambda invocation policy or function policy. On the right, you'll see the list of services
our Lambda function has access to. Clicking on the box will show you the specific
resources and actions that this function's code has access to. These permissions are called the Lambda execution role. And if you click on the Key icon here, you
will be able to view the invocation or function policy as well as the execution role assigned
to this function. So, in effect there are two sets of IAM permissions
applied here. The invocation permissions via the function
policy and the execution permissions via the execution role. The function policy is used by the triggering
event or the service to invoke the Lambda function while the execution role is used
by the Lambda function to access different AWS services that it depends on. The function policy and execution role are
independent of each other. Different event sources that trigger the Lambda
function are not required to have the permissions that Lambda function code requires to complete
its job. Thus, the two are effectively decoupled thereby
improving our application's security. That said, let's continue to the next lecture
where we'll take a closer look at the Lambda handler function. When we wrote our first Lambda function, we
made use of the Cloud9 Editor or the web-based editor integrated into the Lambda console. Now, let me show you how we can write these
functions locally on our computer and then upload them to Lambda. We'll also take a closer look at Lambda handler
function as we write our function. So, I'm going to create a new folder here, let's
say greet-me. And inside this, we'll create a simple Lambda
function that takes in our name and language and greets us in the chosen language. Before we do that, let's look at the Lambda handler syntax in Node.js 6.10 and Node.js 8.10. Let's open this folder with VS Code, or you can open it in any other code editor or IDE of your choice. We'll create a new file, say index.js. And inside it, let's write the Lambda handler: exports.handler, equal to, event, context
and a callback. Inside the handler function, we write our code and finally make a call to the callback function, like so. This is the Node.js 6.10 style, or callback-style, handler function. We can easily write the same handler using the async/await syntax supported by the latest Node.js 8.10 runtime. So let's look at the syntax for an async/await-based Lambda handler. I'll copy this code, comment it out and paste it below, so we can change it. We simply add async before the function, and we won't have a callback argument in this case, so we remove it. Instead of calling the callback, we simply return the result using a return statement, just like so. And now, since this is an async function, we can use the await operator to wait for any promises to resolve.
So let's remove this and define
another function. Say we want to resize images. So we write a function that returns a promise: const resizeImage, equal to some data, fat arrow function that returns a new Promise that can either resolve or reject. Inside it we have some processing; if there is an error, we reject the promise with the error, otherwise we resolve with the result. This is a typical function that returns a promise, and we can call it inside our Lambda handler. So, const data, equal to event dot data. We could then call the resizeImage function followed by a .then block, but since our Lambda handler is declared as an async function, we can make use of the await operator. So we could simply say, let newImage, equal to await resizeImage, and finally we can return newImage. This is how we write a Lambda handler using the async/await syntax.
This syntax is supported only in the Node.js runtime
8.10 and above. And we'll be using this syntax throughout
this course for all the demos. Irrespective of the type of handler you use,
whether callback style or async/await style, there are two important arguments to the handler
function – event and context. The event object is dependent on the event
source. So the structure of the event object varies
from event to event and it acts as a source of input data for our Lambda handler. And context is what sets the general environment
for our Lambda function. So, you can use the context object to retrieve
information about the context in which the function executes – like the function name,
remaining time, memory limit and other additional information. So, I'll clean this up and let's continue
to the next lecture. In the next few lectures, we'll look at Event
and context objects in more detail. In the last lecture, we saw that a Lambda
function can have two arguments – event and context. In this lecture, let's understand the event
object in a little more detail. The event object holds the input data or input
parameters that we want the Lambda function to act on. The structure of this event object depends
on the event source. We have these different event triggers that
can be used to invoke our Lambda functions and each of these triggers will send the event
data in a particular structure as we'll see in this lecture. Lambda supports two invocation types - synchronous
and asynchronous. The invocation type – whether synchronous
or asynchronous, depends on the event source. For example, an S3 event is always asynchronous,
while an API Gateway or Cognito event will be synchronous; we cannot control this. However, when we invoke the Lambda function from our own application using the invoke method of the AWS SDK, for example, we can choose the invocation type, either synchronous or asynchronous.
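As a sketch, the parameters for the SDK's invoke call could be assembled like this; buildInvokeParams is a helper of our own, not part of the AWS SDK:

```javascript
// 'RequestResponse' invokes synchronously; 'Event' invokes asynchronously.
const buildInvokeParams = (functionName, payload, synchronous) => ({
  FunctionName: functionName,
  InvocationType: synchronous ? 'RequestResponse' : 'Event',
  Payload: JSON.stringify(payload),
});

// With the AWS SDK this would be used roughly as:
// const lambda = new AWS.Lambda();
// lambda.invoke(buildInvokeParams('greet-me', { name: 'Riyaz' }, true), callback);
```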
Then we have two types of event sources: push-based events and pull-based (or poll-based) events. Push-based events push the event data to Lambda
in order to invoke the function. On the other hand, in case of pull events
or poll-based events, Lambda polls the event stream looking for event data. An example of a push event is an S3 event or an API Gateway event. Examples of pull or poll-based events are DynamoDB stream events, Kinesis stream events and Amazon SQS queue events. For these, Lambda pulls event data from DynamoDB, Kinesis or SQS and invokes the Lambda function. Now, let's look at sample event data for a few
events. Let's select the option to configure event data from this dropdown. Choose Create new test event. And here we have several event templates. Let's select one, say API Gateway AWS Proxy. It has a typical structure: we have the HTTP
body of the message. This is HTTP POST method, it has its body,
and all other different parameters. It has some query string parameters and then
it also has the path parameters just like a typical HTTP request. Then, if we switch to another event type,
lets say a DynamoDB Update event, we can see the structure presented by a DynamoDB stream
event. Whenever we add, modify or delete items in
the DynamoDB table, DynamoDB can push event data to DynamoDB streams which then can be
polled by Lambda. So this event has the DynamoDB table item
keys. The New Image that holds the item data that
was inserted or modified. This is an INSERT event, which means this
item was added to the DynamoDB table. Further on, we have another record that holds an old image and a new image. This is a MODIFY event, which indicates that
an item was modified. The old image is the item data before the modification, and the new image is the same item data after the modification. And finally, we can have a REMOVE event, which indicates that an item was deleted from the table; in this case, we will have only the old image of the item data.
So we can see that each event source has its own predefined event data structure, and we write our Lambda function according
to the event or events that it needs to process. If we are invoking the Lambda function from
our code, using the AWS SDK for example, we can use our own custom event structure. In the next lecture, let's look at the context
object which is the second parameter to the Lambda handler. In the last lecture, we looked at the event
object which is the first parameter to any Lambda function handler. Lambda handler also has an optional second
argument, called the context object. Let's understand what we can do with this
context object. The context object provides us with some useful
runtime information while the Lambda function is executing. For example, we can use the context object
to find out how much time is remaining before the Lambda function times out, what CloudWatch
log group and log stream is associated with the Lambda function, what is the AWS request ID
of the current invocation of this Lambda function, and so on. This context object provides different methods
and properties which we can use to get this runtime information. So, if we write, context DOT get remaining
time in Millis, this is going to give the time remaining in milliseconds before the
Lambda function times out. This is a method. We also have different properties. So, for example, context DOT function Name
will give us the name of the Lambda function. We can use this to invoke the same Lambda
function again, programmatically, using the AWS SDK, for example. Then we have, context DOT function version,
that returns the version of the currently executing Lambda function. context DOT invoked Function ARN will give us the ARN of the currently running Lambda function. context DOT AWS Request ID will give us the
request ID of the current invocation. We can use this request ID for any follow
up inquiry with AWS support if we need to. Context DOT Memory Limit in MB will give us
the memory size allocated to this function. Context DOT identity will give us information
about the AWS Cognito identity provider, if we are using that. This is useful with the AWS Mobile SDK. Context DOT Log Group Name and context DOT
Log Stream Name will give us the names of the log group and log stream associated with our Lambda function. If we're using the AWS Mobile SDK, we can also use context DOT client context. This property gives us additional information about the client application and the client device. For example, the client context can give us the client's installation ID or app title, custom values set by the mobile application via its Custom property, and environment details such as the platform version, make or model.
I have also added a link to the context object documentation; you can find it in the resources section
of this lecture. That's more or less all about the context
object. In the next lecture, lets explore how to perform
logging and error handling from within the Lambda functions. Now, lets quickly look at error handling and
logging in this lecture. First, the error handling. If you are using callback style Lambda handler,
like so… then you can report errors via the first argument to the callback function. So we could have const error, equal to new Error("An Error Occurred"), and pass the error as the first argument to the callback function. If you're using the async/await-style Lambda handler, then you can simply throw an error using a throw statement, like so. And of course, you can wrap your code within a try/catch block for better error handling. If you want to print the error message to the console, you can use console.error, like so. Similarly, we can use console.log to output a log message, console.info for an informative message, and console.warn to print a warning message. This is plain Node.js or JavaScript syntax, nothing specific to Lambda, and we can use it for error handling and logging.
Messages printed to the console get logged automatically to the AWS CloudWatch service. Let me show you what I mean. Let me copy this, and I will paste it into our Lambda function
code on the AWS Lambda console here. And save it. Just for the sake of testing. And then let's test this out using the test
button. And we can see the messages printed to the
console here. And if we scroll up, we can find them in the
log output as well. Note that this console output is available only when we invoke the function with the test button. AWS also writes the Lambda console output to CloudWatch, so in a real-world situation, where we don't have direct access to the console, we can access these logs via the CloudWatch service. If we click on this logs link, it will take
us to CloudWatch. And we should see these same messages logged
there. Let's pick up the latest log stream. And we can already see these messages logged
here. And if we go back to the Lambda console…
and I will close this… and also close this terminal window. If we wanted to output this dynamic message
to the console, for example, we could do that as well. So instead of this string, we can log this
message variable. I'll remove the rest of the code and save that. And if we test this, it's going to print a
random message to the console – Hello Serverless! And it should also show up in CloudWatch. So, we go back to the log streams and click
on the latest stream. It's not showing all the output yet; it takes a few seconds to appear. So let's wait a second or two and then
refresh this. And now, we can see Hello Serverless message
here logged to the CloudWatch service. Now, instead of logging a message, let's throw
an error. We can do that using throw statement, like
so. Let's say, “This is a random error”. I'll save that. And this line of code is of course not reachable
now, but that should be fine for our testing. So let's test this out. And we can see the error output on the console. And it would also show up at the top of the
screen here. Let's go to CloudWatch and look at the latest
log stream. And we can already see the error message showing
up here. That's about it. This is how we can use error handling and
logging with the Lambda function. And since this was just for our demo and testing,
before we continue further, I'll remove the throw statement and save it. And if we test it now, it should work normally. Shooting for the stars! Wow!! Amazing, we gotta shoot for the stars as well! Alright, let's keep going. We'll do a quick hands on lab in the very
next lecture. In this lecture, let's do a quick hands on
lab. We started with “greet me” Lambda function
earlier. So let's complete that now. I have the function open in VS Code. Basically, it's just a blank file named index.js inside a folder named greet-me. What we are going to create here is a multilingual greeting function: we pass in a person's name and language choice, and the function returns a greeting in the chosen language. It's just an extension of the plain Hello World
function. So, let's begin. We'll declare a constant, say, greeting. This would be an object. It'll have different language keys with corresponding
greetings. In English we say Hello, in French they say Bonjour, and in Hindi we say Namaste. Then I'm going to paste in greetings in
some more languages here. You can certainly get more creative here. We'll keep this simple for easy understanding
though. Now we have greetings in a couple of languages
here. So let's go ahead and write the Lambda handler,
shall we? Exports DOT handler, equal to, async, event. We don't need the context object here, so
we can drop that one. And inside the curly brackets, we can write
the handler function code. This function is expecting to receive PATH
Parameters as part of an HTTP GET request coming in from Amazon API Gateway, and we'll use query parameters as well. A path parameter is a mandatory parameter on an HTTP request, while a query string parameter, by convention, is optional. Let me show you what I mean. Let's go to the API Gateway console for a
moment. We'll be creating a Greet Me API Endpoint,
right inside this Hello World API. Let's go to the stages, then to the Test stage,
and open this Invoke URL in a new tab. We could have our endpoint like so. Say greet Me, followed by name. So I will put in my name, Riyaz, for example. And this name here acts as a PATH parameter
because it's part of the URL path. And we could have any number of path parameters
separated by slashes, like so. We'll stick to one to keep it simple. And then we add a question mark. Any key-value pairs that we specify after this question mark are called query parameters, say param equals value. And we can add multiple key-value pairs here
separated by an Ampersand Character like so. So this one… is going to be part of the
path parameters, while these two will be the query parameters. And all these parameters will be available
in the event object that gets passed to our Lambda handler. And before we create our endpoint in API Gateway,
lets build our Lambda function first. So, I'll close this for now. Let's continue writing our Lambda handler
in the very next lecture. Back in to VS Code, let's continue writing
our Lambda handler. Inside our Lambda handler, we have to capture
the PATH and Query string parameters from the event object. The name of our path parameter is NAME. So lets say, LET name EQUAL to event DOT Path
Parameters DOT NAME, for example. And Path Parameters is a predefined attribute
of the API Gateway AWS Proxy event. And if we go back to AWS console for a moment
and open up Lambda console. Let's open any one of the functions here and
open up a test event. Create new test event. And let's select the event template for API
Gateway AWS proxy. We can see that we have an attribute named
Query String Parameters here. And we also have Path Parameters attribute
here. So we should be using exactly the same structure
and attribute names to access these parameters in our Lambda handler. So back in VS Code, we can use query parameters
to capture the language key. And by using query parameter for language
key, we are essentially making this parameter optional. And since the end user can pass in any number of query parameters, I'm going to use destructuring here. We capture the language in a variable lang, and if there are any additional parameters passed in by the user, we capture them in an info variable, which I specify with triple dots, equal to event dot query string parameters. Just like that. So the language goes into the variable lang, and the rest of the query string parameters go into the info variable. Then we can use these parameters to construct
our greeting message. So, let message, equal to. And I'll use template strings here. Inside it, we could say, dollar, curly braces,
and we'll use the greeting for the language key passed in the query string parameter. We first check if we have a greeting for the
chosen language. If we do, we use it, else we'll use a default
greeting in English as a fallback. Then we add a single SPACE followed by the
name passed by the end user. So if we choose English, and pass the name
as Riyaz, it should generate a message – Hello Riyaz. And that's about it. Finally we have to return an HTTP response
back to the API Gateway. So we construct a response object. LET response EQUAL TO object. Message COLON message, and we could also return
INFO colon INFO to display the additional query string parameters that were passed in
by the end user. And let's also add a timestamp just for the
sake of demonstration. I'm going to use the moment library to get
the timestamp. So before we can use it we need to install
the moment npm module. So let's open the terminal. I'll first create a package DOT JSON file
using NPM INIT. I'll accept the defaults by hitting ENTER
key multiple times. And, this should create a package DOT JSON
file for us. And now, we'll install the moment library
using npm install moment with the --save flag. Back in our code, we can now reference the
moment library, like so. And then, we can get the unix timestamp using
moment DOT unix. Finally, we just have to return this response. So we could simply say, return response, just
like that. However, in this case, we are using the API Gateway proxy integration, so API Gateway expects the Lambda function to return a well-formed HTTP response instead of just the data or response body. We have to return an HTTP response object that has, at the bare minimum, a status code and a body. So we set statusCode to 200, which indicates success, and we return our response object in the HTTP response body as a JSON string, using JSON dot stringify response. Save the file, and that's about it.
We'll deploy this function to AWS Lambda in the very next lecture. To deploy the Lambda function we just created,
we simply have to zip these files together and upload it to AWS Lambda. We could upload it using the command line
or using the web based AWS Lambda console. For now, we'll upload it using the web based
console. Later on, I'll also show you how you can use
the command line or terminal to do the same thing. And later in the course, we're also going
to learn a more streamlined approach of using frameworks like AWS SAM and the Serverless
Framework to deploy our code. For now, as we have just started learning
serverless and Lambda, lets use the simple web based interface to upload our code. So we simply go to the folder that contains
our Lambda function code. Open the folder and select all the files and
create a ZIP file. I'll name it Greet Me DOT zip. And I'll copy this path here. And now, we can open the Lambda console. And we'll create a new function. Let's name it Greet Me. We'll use the NodeJS 8.10 runtime. And we'll use the same role that we used the
last time, the basic execution role. Create the function. And our function is ready. There is no trigger here yet. Later on, we'll add API Gateway as a trigger. So, in here, we select the option to upload
a zip file. We can also upload the zip file from S3. So in case our file is larger, say over 10MB
or so, we can upload it to S3 first and then specify the S3 URL path to that file here. Since our ZIP file is pretty small, we'll
upload it directly. So, let's select the option to upload it directly
and upload our file. And hit save. The file has been uploaded here and if we
scroll down, we should be able to see our code We can access all the files from the left. We also have our node modules folder with
the moment library inside it . So let's test this function now. We need to create a test event first. We'll select the API Gateway AWS Proxy event. This is a sample event template. We'll modify the path and query string parameters. Our Lambda function expects Name as a path
parameter, so, I'll add my name Riyaz as Name. And in Query string parameters, we'll pass
the language key as lang. Let's say “H” “I” for Hindi, for example. So the function should return “Namaste Riyaz”
which means “Hello Riyaz” in Hindi language. I'm also going to add a few additional parameters. We could say, City as Mumbai, Country as India,
website, for example, as academy.rizmax.com. We also have to give a name to our event, say Greet Me Event. It doesn't like spaces, so I'll remove them. And hit create. Now we're ready to test, so click on Test. And we see “Namaste Riyaz” as the message
since we chose Hindi as the language. And rest of the parameters like city, country
and website are included in the info attribute. And we also have the timestamp here. Now, if we go back to the test event and change
the language, from Hindi to say XX, something non-existent, it should fall back to the default
English message. And now it says, Hello Riyaz, in English as
expected. And if we choose, let's say, Spanish. It says, Hola Riyaz. Awesome! Later in this course, we'll integrate this
function with API Gateway, so we can trigger the function via an HTTP call. Before we do that, let's play around a little bit more with Lambda. We'll do some more hands-on work with Lambda in the next lecture, so let's continue to it. In this lecture, let's create a Lambda function
that can resize images on the fly, in a serverless fashion, of course. Let's open the Lambda console and we'll create
a new function. Let's name the function resize-images, and we'll use the Node.js 8.10 runtime. And we'll create a new custom role. This takes us to IAM, and in here, I am going to create a new role, say lambda-s3-execution. This policy currently only provides CloudWatch
permissions and we are going to change that in a bit. For now, click allow. And then hit create function. We can now go back to IAM and update the role
permissions. I'll open it in a new tab and we'll edit the
role. Search for Lambda S3 and this is the role
we want to edit. We will attach a policy to it. Search for S3. We'll use S3 full access. You can use more restrictive policy if you
like. But I'm going to use S3 full access, just
to keep it simple. That's about it. We close this tab and go back to Lambda. And if we refresh the page, we should see
the S3 access showing up on the right. Now we have to write the Lambda Handler and
we're going to do that locally on our computer. We'll then use AWS CLI to deploy the code
to Lambda. Before we do that, I'll increase the timeout
of the function a bit, let's say to about 2 minutes, just to be on the safe side. Save the function. Now, to write the handler code, I am going to create a new folder with the same name as our Lambda function, resize-images. And I have a few images here with me. These are fairly large images; we're going to resize them and make them smaller. Alright, let's open this folder in VS Code. And in here, I will create a new file, say,
index.js. We're going to need a few libraries for image
processing as well as for file I/O. We'll use imagemagick for image processing. So constant IM equal to require imagemagick. Note the spelling. And then we have constant FS equal to require
FS. Constant OS equal to require OS. Constant UUID v4 = require UUID SLASH V4. And then, we're going to need one more utility
here. I am going to use object destructuring. Inside curly braces we write, promisify equal
to require UTIL. And finally we also need the AWS SDK. So, Constant AWS equal to require AWS SDK. Followed by AWS dot config dot update. Region as US West 2. You can set this region as per your case. I am using US West 2 as my Lambda function
is in the region US West 2. We'll also define constant S3 equal to new
AWS DOT S3, in uppercase. And note that all these npm modules are actually
part of the AWS Lambda Node.js environment. So, we don't have to package them in our deployment
zip file. So let's write our Lambda Handler. Exports DOT handler, equal to async, event,
followed by a fat Arrow function. Before we write the handler code, let's go
back to the Lambda Console for a moment. Open the test event. And, I am going to look for S3 Put event template. It's here. In this event we can see that we have a Records
array. And within that we have an S3 attribute, which
has an object and inside it we have the key. The key holds the name of the uploaded file. We also have the bucket name here. So we can access the file name using record
dot s3 dot object dot key and bucket name via record dot s3 dot bucket dot name. These are the values we're going to need in
our function. So whenever a new file is uploaded or put
into this S3 bucket, our Lambda function would trigger and the event object will give us
the file name and the bucket name. We can then use this information to access
the image and resize it. So back to our code in VS Code. There can be more than one file added to
S3 simultaneously. So we write event dot records dot map. And inside it we'll write a function that
takes in each record and processes it. We'll make this function async. And once all the files have been processed,
we can return control from the Lambda Handler. So, we could say, let files processed equal
to, this. And then we'll await for all the promises
to resolve, using promise dot all and pass the “files processed” variable to it. Just like that. Finally, we can console log “done” and
also return “done”, for example. The only part that's remaining now is writing
the map handler function. Let's do that in the very next lecture. We wrote a part of our Lambda function in
the last lecture. Let's complete the rest of the function in this
lecture. So, inside the Lambda handler, we have the
map handler function. It's currently empty. Let's complete it. We first capture the names of the S3 bucket
and the image file from the event object. So, let bucket equal to, and if we go back
to Lambda, we can see that it's located in S3, object, bucket and the filename is located
in S3, object, key. So, we should say record dot S3 dot bucket
dot name. And similarly, filename equal to record dot
S3 dot object Dot key. Just like that. Then we have to do a few things. First, we get the file from S3. Then we resize the file. We read the resized file and finally
we upload the new resized file to S3 in the destination bucket. That's all we need to do. So, first let's get the file from S3. We'll declare a params object with Bucket
as bucket and key as the filename. Then we can say S3 dot Get object, Params. And if you like, you can also use a callback
here, like so... But since we're using Async/await syntax,
we can convert this function to a promise, using dot promise, like so. And then, we can either use THEN, like so,
or use await. We'll use await as it makes the code very
simple to read and understand. Let's capture it into, say, input data. Now that we got the image file from S3, we
have to resize it using the imagemagick library. The imagemagick library doesn't have direct
support for promises. So, we're going to use a library called promisify
that we declared earlier. So, let's go back up and here, I am going
to declare a constant. Say resize async equal to promisify IM dot
resize. What this is going to do is it will convert
the callback style resize function into a function that returns us a promise. So we can use it with async/await syntax. Let's declare a variable say, temp file equal
to os dot TMP DIR. This will give us access to the temp directory
on the Lambda container environment. Followed by slash, followed by UUID V4 function. Followed by Dot JPG. This will generate a unique name for our target
image file. Then let's create an object “resize args”
for example. And we'll add a couple of attributes to it,
like SRC DATA for source data and pass the Body from the inputData file we got from S3. Followed by DST PATH or destination path,
which will be the temp file. And we also need to specify the width of the
resized image, so let's say 150. This will create a resized image about 150
pixels wide. And if you like, you can review the imagemagick
documentation to understand different parameters that we can pass here. And then we call the resize function, with
Await Resize Async. And we'll pass Resize Args object to it. This is going to generate a resized image
file and place it in the destination path we specified, i.e. the temp file. We can then access the resized file from the
temp file path. And we'll use the read file function from
the FS library to do that. The FS read file function also uses the callback
style. So we have to convert it to return a promise
using promisify. So I'll quickly do that. Constant read file async equal to promisify
FS dot read file. And we'll also need to unlink the temp file
once we're done using it. So, we'll write constant unlink async equal
to promisify FS dot unlink. Just like that. Now to read the file we can say let resized
data equal to Await read file async, temp file. And now that we have the resized file, we
can upload it to S3. To do that, we can say, let target file name
equal to, and we can use the same filename as the original image file and maybe modify
it a bit. We will first strip the extension from the
filename using substring, like so. Then we can append, let's say, DASH SMALL
DOT JPG, for example. Then to upload this file to S3, we declare
a params object. We have to pass the bucket name. We'll use the original bucket name concatenated
with say, DASH dest, for example. We'll also have to create this bucket in S3. We'll do that shortly. We'll then add a key which will be same as
the target file name. Followed by the body. This would hold the content of the resized
image. So, we can pass a new Buffer with the resized
data, to it. We also provide a content type, which in our
case will be image SLASH JPEG. That's about it. Finally, we can upload the file to S3 using
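Assembled, the target filename and the upload parameters might look like this sketch. The variable values here are stand-ins for the results of the earlier steps, and Buffer.from is used as the non-deprecated form of new Buffer:

```javascript
// Hypothetical values standing in for the results of the earlier steps
const bucket = 'riyaz-images';
const filename = 'lambda-blue.jpg';
const resizedData = Buffer.from('...resized image bytes...');

// Strip the extension, then tag the copy as the small version
const targetFilename =
  filename.substring(0, filename.lastIndexOf('.')) + '-small.jpg';

const params = {
  Bucket: bucket + '-dest',       // destination bucket: source name + '-dest'
  Key: targetFilename,            // e.g. 'lambda-blue-small.jpg'
  Body: Buffer.from(resizedData), // Buffer.from replaces the deprecated new Buffer
  ContentType: 'image/jpeg'
};
console.log(params.Key); // 'lambda-blue-small.jpg'
```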
S3 DOT put object, params. Convert it to a promise. And then we await the upload to complete. Once that's done, we can return after
unlinking the temp file. So, we can write return await unlink async
and pass temp file to it. And we are done. Once all the files are processed, the Lambda
handler would return “done”. This is a single file and there are no external
dependencies. So, we could simply copy and paste this file
in the Lambda code editor. But I want to show you how to deploy Lambda
functions using the AWS CLI. That's just another way of doing things. So, let's continue to the next lecture. Before we can test our image resizing Lambda
function, we must create the source and destination buckets. So, let's open S3. I'll open it in a new tab. I'll create a new bucket. Bucket names must be unique globally. So, I will add my name to it. Say Riyaz dash images, for example. And we can just hit create. Then we'll create a destination bucket. And we use the same name followed by DASH
dest. Hit create. So, we have two buckets created. Let's open these two buckets. This is the source bucket. We have to configure the source bucket such
that it triggers our Lambda function whenever a new file is added to it. To do that, we go to bucket properties. Scroll down and look for events. Click on that. And we'll add a new notification. Let's name it Resize Image Notification, for
example. And we'll be looking for Put event. So whenever a new file is put into this bucket,
this event will be fired. We can add a file path prefix and suffix here
to filter the files if we like. So we can add say DOT JPG as a suffix, so
this event will be fired only when a JPEG image is put into this bucket. For destination, we choose AWS Lambda. And then we can select our resize Images Lambda
function from here. Save the configuration. And we have an active notification. That's about it. In the next lecture, let's deploy our Lambda
function so we can test it. In this lecture, let's explore how to use AWS
CLI to deploy a Lambda function. To complete this lab, you need to have the AWS
CLI installed on your computer. So, if you haven't already done that, please
review the AWS CLI setup video from the very first section of this course. Assuming that you have the AWS CLI installed,
let's continue. I am going to open the resize Images folder
that holds our Lambda handler code. We just have to zip the contents of this folder. So I'll zip this index DOT JS file and name
it resize images, for example. Back in VS Code, we can see the zip file right
here. Open the terminal. First, we have to upload this zip file to
S3. So we say, AWS S3 CP, to copy this file. File name is resize dash images DOT zip. Destination is S3 COLON SLASH SLASH, followed
by the bucket name. I'm going to use the same source bucket that
we created in the last lecture, Riyaz dash Images. This is not going to trigger our Lambda function,
because this is not a JPG file, but a zip file. So we should be good with that. And then we specify the same name for the
target file. Hit enter. This should upload the file to S3, and it did. And now that we have the file uploaded to S3, we
can deploy that file to Lambda. I'll just expand this a bit. We already have the Lambda function created
earlier. So we simply update the Lambda code. To do that, we say, AWS Lambda Update DASH
function DASH code followed by DASH DASH function DASH name. Our function name is Resize Images. Then we have to specify the S3 bucket. Bucket name in my case is Riyaz DASH images. And S3 key will be the name of the file, that
is, resize DASH images DOT zip. And then we say DASH DASH publish. This is all we need. Hit enter. And there you go. We have our function deployed to AWS Lambda. If we refresh the Lambda console now, we should
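For reference, the deployment steps narrated in this lecture could be written out as the following commands. This is a sketch: it assumes the AWS CLI is installed and configured, and it uses the bucket and function names from the lecture, which you would replace with your own (in particular, the exact Lambda function name must match the one you created in the console):

```shell
# Zip the handler (the lecture zips it from the file manager; the zip CLI
# does the same thing), stage it in S3, then point the function at it.
zip resize-images.zip index.js

# Upload the package to the source bucket (a .zip won't fire the .jpg trigger)
aws s3 cp resize-images.zip s3://riyaz-images/resize-images.zip

# Update the function code from the staged package and publish a new version
aws lambda update-function-code \
    --function-name resizeImages \
    --s3-bucket riyaz-images \
    --s3-key resize-images.zip \
    --publish
```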
see our new code here. Awesome! And if for some reason you aren't able to
see your new code, make sure you have selected the latest version of the Lambda function,
from here. We are now ready to test our Lambda function
with S3 trigger. Why don't we do that in the very next lecture. To test the S3 trigger for our Lambda function,
what we have to do is simply upload a JPG image file to the source S3 bucket. So I have the source S3 bucket here. And I'm going to upload two images to this
bucket. Select these two image files and simply drag
them over here. And upload. And hopefully, once these files are uploaded,
it should trigger our Lambda function, which in turn should create two resized images in
the destination S3 bucket. So now that our files have been uploaded to
the source bucket, let's go to the destination bucket, and refresh to see if we have the
resized images there. And we don't see them here. So I must have goofed up something. Let's find out what went wrong. Let's go back to Lambda console, and choose
monitoring. We do have an invocation, so our Lambda function
was indeed triggered by the S3 upload. So there might be some other issue. Let's go to CloudWatch logs to find out what
that could be. Open the log stream. And it says “The specified bucket does not
exist”. Alright. If we look at the source bucket, it's Riyaz
DASH images. And the destination bucket is riyaz DASH Image
DASH dest. Ahh. So this is where I goofed up. The destination bucket should have been riyaz
dash IMAGES DASH dest and not riyaz DASH IMAGE. I missed an S in here. Our Lambda function is actually looking for
a bucket whose name is the same as the source bucket's, concatenated with DASH dest. So let's go back here and we can drop this
bucket and create a new one with the correct name. So, I'm going to select this bucket and delete
it. Type the bucket name and confirm delete. And then let's create a new bucket. With the correct name this time! Riyaz DASH IMAGES DASH dest. That's what I should have done in the first
place. Never mind, that allowed me to show you how to debug Lambda code. So that's a good thing. So now that our bucket is created, let's test
the function by uploading the image files again. I'm going to delete these images first. Now, let's upload them once again. And this time I hope it works just fine. So let's refresh the destination bucket. And we do see two images, Lambda blue SMALL
DOT JPG and Lambda orange SMALL DOT JPG. Awesome! Let's download these two files and open them. And these are the two resized images. If we look at the original images, these are
large ones, as compared to the resized ones. Let's also look at the CloudWatch logs. Refresh. Open the stream. And we do see “done” logged in here two
times. So the Lambda was triggered twice, once for
each file. Awesome! Before we end this lecture, let's quickly look
at the S3 trigger in the Lambda console. Click on Configuration. And if we click on the S3 Trigger here, it
would show us the trigger configuration. And you can enable or disable the trigger
from here. So I'll just disable this trigger as we are
done with this demo example. Save that. That's the end of this hands-on lab. Hope you found it useful. In the next lecture, let's continue exploring
Lambda a bit more. There are certain limits that AWS imposes
on our use of the Lambda service, like the maximum timeout, the max memory limit, or the size of our
deployment package and so on. Let's explore what some of these limits are
in this lecture. For a complete list of limits, I have added
a link to the Lambda Limits documentation page. You can find that link in the resources section
of this lecture. First let's look at the function limits: Each function can be allocated with memory
size between 128 MB and 3008 MB, in 64 MB increments. Other resources are allocated based on the
memory size we choose here. Then every function we write gets a default
ephemeral disk capacity of up to 512 MB. This is the temporary storage space, or the
SLASH TMP directory space allocated to the function. And if you remember, we did make use of this
in the last hands-on where we created a Lambda function to resize images. Each Lambda function can run up to a maximum
of 900 seconds, which is exactly 15 minutes. This limit was 5 minutes previously. Now AWS allows a maximum timeout of
15 minutes. There are also size limits on the request
and response body payload size. For synchronous invocations like in API Gateway,
this limit is up to 6 MB. For asynchronous invocation, the request
body payload size limit is about 128 KB. Then there are limits on the size of deployment
package that we upload to Lambda. The maximum package size is about 50 MB when
compressed into a zip file and about 250 MB when uncompressed. And if your deployment package size is under 3 MB, you can edit it in the online cloud-based editor available in the Lambda console. Total size of all the packages within a given
region is limited to about 75 GB. Finally, there are concurrency limits at the per-account level. Up to 1000 concurrent Lambda executions are
allowed per region across all Lambda functions within that region. You can have this limit increased by contacting
AWS Support. You can reserve a concurrency limit at the Lambda function level, as long as it is within this maximum limit. Also remember that when you reserve a concurrency limit for any function, the other functions can only use the remaining unreserved capacity. All these limit values are current as of recording
this video, that is, October 2018. These might change in the future. And it's a good idea to check the actual limits
on the AWS website. I've included a direct link to it in the resources
section of this lecture. That's about it. In the next lecture, let's look at Lambda
pricing model. In this lecture let's discuss AWS Lambda pricing. AWS Lambda offers sub-second billing, and
we are charged only for the time it takes for our Lambda code to execute. Lambda uses a very simple pricing model. There are two parts to Lambda pricing – the
number of requests and the duration of each request in GB seconds. One request is one invocation of any Lambda
function in our AWS account. Up to 1 million requests per month are free
and up to 400 thousand GB seconds of compute time per month is free as well. AWS will charge us only if our usage goes
beyond this free tier. Beyond the free tier, AWS charges just about
20 cents per 1 million requests. And, beyond the 400,000 GB seconds of free
compute time, it charges something like 0.00001667 dollars per GB second, as of recording this
video of course. And this might change in the future. So you should always check the current pricing
on the AWS website. The total bill is the sum of the request charges
and the compute charges. Let's quickly look at a simple example. Let's say we have two Lambda functions in
our AWS account. One function has 128 MB memory allocated and
it executed 2 million times in a month and ran for, let's say, 200 milliseconds each time. Another function has 512 MB of allocated memory, and it executed 3 million times in a month, and ran for 300 ms each time. So, the total number of billable requests
will be 2 million plus 3 million minus the 1 million free requests, which comes to about
4 million requests. So, request charges will be 4 million into
0.2 US dollars per million, which is 0.8 US Dollars or just about 80 cents. Similarly, to calculate the compute time,
we first calculate the computes seconds. For the first function, the compute seconds
will be 2 Million into 0.2 seconds, which is the equivalent of 200 milliseconds, which comes
to around 0.4 Million seconds. Similarly, compute seconds for the second
function will be, 3 Million into 0.3 seconds, which is the equivalent of 300 milliseconds, and
this comes to about 0.9 Million seconds. Lambda first normalizes the total compute
time to GB Seconds and then sums the total across all the functions. So, the compute time in GB seconds for the
first function will be 0.4 Million seconds into 128 by 1024, which comes to about 50,000
GB Seconds. Similarly, for the second function, this will
be 0.9 Million seconds into 512 by 1024 which comes to about 450,000 GB Seconds. So, the total compute usage is 50,000 plus
450,000, which comes to about 500,000 GB Seconds. Out of these, 400,000 GB Seconds are free,
so chargeable compute time will be 500,000 minus 400,000 which is just about 100,000
GB Seconds. So, the total compute charges will be 100,000
into 0.00001667, which comes to around 1.67 US Dollars. The total charges for the month would be the sum
of the request and the compute charges, i.e. 0.8 plus 1.67 which is just about 2.47 US
Dollars per month. I hope this gives you an idea of how AWS Lambda
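The whole worked example can be checked with a few lines of arithmetic (using the per-unit prices quoted in the lecture, which were current in late 2018):

```javascript
// Worked pricing example from the lecture, expressed as arithmetic
const PRICE_PER_MILLION_REQUESTS = 0.20; // USD
const PRICE_PER_GB_SECOND = 0.00001667;  // USD
const FREE_REQUESTS = 1e6;               // free tier, per month
const FREE_GB_SECONDS = 400000;          // free tier, per month

// Function 1: 128 MB, 2 million invocations, 200 ms each
// Function 2: 512 MB, 3 million invocations, 300 ms each
const totalRequests = 2e6 + 3e6;
const gbSeconds =
  2e6 * 0.2 * (128 / 1024) +  // 50,000 GB-seconds
  3e6 * 0.3 * (512 / 1024);   // 450,000 GB-seconds

const requestCharge =
  Math.max(0, totalRequests - FREE_REQUESTS) / 1e6 * PRICE_PER_MILLION_REQUESTS;
const computeCharge =
  Math.max(0, gbSeconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND;

console.log(requestCharge.toFixed(2));                   // 0.80
console.log(computeCharge.toFixed(2));                   // 1.67
console.log((requestCharge + computeCharge).toFixed(2)); // 2.47
```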
pricing model works. And for learning purposes, this is virtually
free to use as we are unlikely to exceed the free tier while testing, unless you're testing
extensively. So that's about it on Lambda pricing and with
that we also come to the end of this introductory section on AWS Lambda. We're going to dive much deeper in to the
specifics of Lambda in the later sections of this course. In the next section, I'll take you through
the basics of another important service in the AWS serverless stack: Amazon API Gateway. So, let's keep going!