Hey guys, welcome to this session on AWS Lambda. AWS Lambda is considered one of the most used and popular services, but what makes it popular and effective at the same time? Let us see that in this session. Before moving on with the session, let us take a quick glance at the agenda. First, we'll be learning what a distributed application architecture is. After that, we'll move on to what AWS Lambda is and why it is used. After seeing that, we'll move on to the core concepts of AWS Lambda and its use cases. After looking at that, we'll see the benefits and limitations of AWS Lambda. Finally, after learning all the concepts, we'll be doing a hands-on on AWS Lambda to understand it better and practically. So, let us move on with the session. As I already told you, in this session we'll be looking at a hands-on, so before moving on, let me briefly show you what the output of our hands-on is going to be.
So, this is our localhost and this is our Elastic Beanstalk. First, on localhost, what I'm going to do is upload this doc1.pdf, and I've uploaded it; the file has been uploaded successfully. Now let me open my Amazon Web Services account and open my S3 bucket. First I will open the S3 service, and inside that there are buckets which I have already created. Let me go to 'fromthebucket' first, because in this bucket I'm going to store all the files which have been uploaded. You can see there was already a doc1, and right now another file has been uploaded. Also, because it has the same name, the new file is saved under another name made of the current time plus the file name, so the original isn't overwritten. So, we have uploaded one more file.

Let me show you how it works in Elastic Beanstalk: it is the same operation, on the same application, running in Elastic Beanstalk. Let me upload another file; I am uploading this photo. I uploaded it, and the file has been uploaded successfully, and in S3 you can see that the file has been uploaded. So, the final output is going to be this: we go to the image bucket, and you can see I already had two images, and now there is the new image which we just uploaded. In 'fromthebucket' you can see the PDF, and if you go inside you can see doc1; this one was uploaded from localhost. You can see here that it has the same name as this one, so it is exactly the same file which was uploaded here, and it was copied to that bucket. So how did we do that using AWS Lambda? That we'll see in this session. First, let us learn the theory concepts of this session, and then we'll move on to the hands-on part.
OK guys, now let us begin with the session. Typically, an application comprises three components: the front end, the back end, and the database server. Before cloud technologies emerged, when a company wanted to host an application, what they did was host all the software components, that is, the front end, the back end, and the database
service, on a single individual server. So what happens here is, whenever you want to do an operation (consider this website is for storing photos, like Google Photos), you upload a photo and then press the upload button on the website. What happens now is that the website itself does not process it; the request goes to the backend service. Whenever you click something here, the process goes to the backend code and some code is triggered. That code runs, and if you wanted to upload the photo, it gets stored somewhere with the backend's help, and the response is shown on your website. Some information also needs to be stored in the database, because there is a name for the photo, a link to where the photo is stored, and other properties like size, so those will be stored in the database service.

So what happens here is that all the software components are hosted on a single server. Whenever you want to do an operation, you click some button or do something on the website, which triggers a service in the backend; any data that needs to be stored is stored in the database, and the output is shown on the website. But what is the problem with hosting all these software components on a single server? The main problem is that it has limited
resources. Let me explain. Consider that your website is getting a lot of traffic and is using 80% of the CPU's resources, and at the same time the backend service needs 50% of the CPU's resources to run its operations. What happens here is that the front-end service uses 80%, but the backend service only gets 20% of the CPU's resources to work with, and if another service wants to use some resources, there are none left for it. Now the system falls into a deadlock and the whole server is in trouble. The components cannot be scaled, and this is also the reason why such websites crash. Hosting all the software components on a single server is quite easy, but it comes with all these possible drawbacks and demerits. So what is the solution for this? Let me provide you the solution
with an example. The solution to this is using a distributed application architecture. So what is a distributed application architecture? It is basically using dedicated servers for each of the software components: the front-end software component is hosted on its own dedicated server, and so are the backend and the database. The front-end server only hosts the website, the backend server only does the backend operations, and the database server is only used when there is a need for the database service.

But how does this improve the situation? Let me explain. Consider that the backend servers have a lot of traffic or workload. In an individual server, where all the software components were hosted together, the backend server would consume most of the CPU resources and the other services would not have enough CPU resources, which leads to a crash. But here, only the backend server will carry that workload; the other two servers will be left unharmed. So even if the backend server has used all of its resources and crashed, the other two servers will not take any harm: the front-end server will still be hosting the website, and the website will still be visible to the users. Even though the backend services won't be running at that time, so the users will not be able to use the services provided by the website, the website will still be visible in the browser.

And how does this solve the scaling problem? In this particular architecture, you only have to scale the particular servers which you actually want to scale. Here we considered that the backend server is using all of its resources; if you want to scale that particular server, you can just do that. You can scale your backend server instead of scaling all three components, and obviously this is going to reduce a lot of cost and time. For example, if you only want to increase the space for your database service and scale your database service, you can scale just that. To understand this better, let me
give you a real-life example. Let us consider a photo service, like Google Photos, as the example. Here you can see the website: this is served by the front-end server; the images being retrieved are fetched using the backend service; and the data for these images, that is, the links, the name of each image, and the size of each image, is all stored in the database service. Whenever we retrieve a particular image, the database service provides the link, and the backend service uses that link to show the image to us.

So let us take this example and let me explain how all these services work together. Whenever you search anything in the search bar (here I have two images with similar names, so searching the shared part of the name brings up both), you can see two results over here, one and two. Their names are shown, and these are their site links: this is the site link for the first image and this is the site link for the second image. The backend service retrieves this information and produces it over here. This is how the front end, the back end, and the database services work together. So now we have learned what a distributed application architecture is and how the front end, the backend, and the database work together. Now let us move on and see what AWS Lambda is. AWS
Lambda is a serverless compute service, which means you don't need to worry about servers while building and running applications. You just have to write the code as per your needs inside Lambda and relax; it will take care of everything else, like provisioning servers. We will learn this more in depth as we move along.

So now, what distinguishes Lambda from EC2 and Elastic Beanstalk, the other compute services? There is a difference; let me give you an idea about that. First let us compare Lambda and EC2; later we'll move on to comparing it with Elastic Beanstalk. Let me start with the first difference: Lambda is a platform as a service, while EC2 is infrastructure as a service. Lambda provides you a platform to run and execute your backend code, but EC2 provides you virtual computing resources. The second difference is that Lambda is fixed to a few languages, like Python, Java, C#, and a few more, while there are no such restrictions in EC2: you can install any software you want on the machine given to you. In Lambda you just choose the environment, like Node.js or .NET, and push your code into it; not so in EC2, where you have to decide the operating system, then install all the required software, and then upload your code or write it there and execute it. Moving on, Lambda does not give you the luxury of choosing your own infrastructure, but in EC2 you can configure every single aspect, like different instance types and security preferences. So this is what makes them different; I hope you understood the difference between these two services.
Now let us discuss what distinguishes Lambda from Elastic Beanstalk. Let me start off with the first point: Lambda can only run your backend code, while Beanstalk can run your entire application without you worrying about the infrastructure it runs on. Secondly, Lambda provisions the resources based on your workload, but in Beanstalk you have the flexibility to choose the instance type and other configurations; in Lambda you don't need to worry about any configurations or instance types. The last difference is that Lambda is a stateless system and Elastic Beanstalk is a stateful system. So what is the difference between them? A stateless system is one where the output is based only on the current inputs, not using any other inputs which were stored before; a stateful system, on the other hand, may provide different values for the same inputs by comparing them with older inputs. Being stateless gives Lambda the ability to create millions of process threads and execute them on different inputs simultaneously.
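To make that stateless/stateful distinction concrete, here is a tiny Python illustration (not an AWS API, just the idea):

```python
# Stateless: output depends only on the current input,
# so any number of copies can run in parallel safely.
def stateless_double(x):
    return x * 2

# Stateful: output also depends on what was seen before,
# so parallel copies would interfere with each other.
history = []
def stateful_double(x):
    history.append(x)                 # remembers earlier inputs
    return x * 2 + len(history)       # same x can give different outputs

print(stateless_double(5), stateless_double(5))  # always 10 10
print(stateful_double(5), stateful_double(5))    # e.g. 11 12
```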
Now we understand how the EC2 and Elastic Beanstalk services differ from AWS Lambda. Moving on, we'll first take a look at the benefits Lambda provides and the limitations of Lambda. First, let us look at the benefits; later we'll move on to the limitations. The first benefit is that it provides a
serverless architecture, so you don't need to worry about provisioning and managing servers; you just have to concentrate on building and running the application. To put it very simply, you are given a console, you choose a language, write your code, and hit run; Lambda picks compute capacity according to the required processing power. The next point is that you can code freely: there are multiple programming runtimes and editors, so you can just choose one freely and code in it, like how we do it in an offline editor such as Visual Studio or Eclipse. The next point is that no virtual machines need to be created: we don't need to create and configure any EC2 virtual machines, because, as I already told you, they are provided by Lambda according to the processing power needed for your function. The next point is pay-as-you-go. Pay-as-you-go is a feature provided for all services in AWS, and what it means here is that in Lambda you will only be charged for the number of seconds your Lambda function actually runs, and that's it; you won't be charged anything more. The fifth point is monitoring your performance: you have a default monitoring option in Lambda which is connected with CloudWatch, and your Lambda function generates multiple logs and metrics for you to visually and textually understand what is going on inside it. Whatever advantages a service
provides, there will always be some limits to it, so now let us take a look at the limitations of AWS Lambda. The first limitation of AWS Lambda is that the maximum disk space provided for the runtime environment is 512 MB. This means the /tmp directory storage can only store 512 MB at a time. Why do we have this /tmp directory storage in Lambda? Because it is a temporary storage for the current Lambda function's inputs and outputs; for the same Lambda function, there is no guarantee that the same environment will execute it two or more times. The next limitation is that the amount of memory available to Lambda during execution is 128 MB to 3008 MB. This is the amount of RAM you can choose, between 128 and 3008, and within that range it goes in 64 MB increments, like 128 plus 64, and so on. The third limitation is that the function timeout is capped at 900 seconds, so the maximum time a Lambda function can execute is 15 minutes, and the default timeout is 3 seconds. If you want a Lambda function to execute for more than 15 minutes, that is not possible; it can only execute up to 15 minutes. Maybe your Lambda function's execution completes in one second, but with the default setting it is allowed to run for up to 3 seconds before it is stopped. The fourth limitation is that only the languages available in the Lambda editor can be used for writing the code: the languages are Python, C#, Java, Node.js, Go, and Ruby.
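These limits are things you set or hit when configuring a function. Here is a minimal boto3 sketch, assuming a hypothetical function name, of setting memory and timeout within the limits just described:

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-demo-function",  # hypothetical name
    MemorySize=512,   # must be between 128 and 3008 MB, in 64 MB steps
    Timeout=120,      # seconds; the maximum allowed is 900 (15 minutes)
)
```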
All right guys, we have seen the pros and cons of AWS Lambda, so now let me give you a brief idea of how Lambda actually works. First, you write your code in the Lambda editor, or upload it as a zip file in a supported programming language. And it is not that you can create only one Lambda function; you can create any number of Lambda functions your application needs. After that, Lambda executes the function, or the code, on your behalf; you don't need to run it. But to run the code, you need to trigger the Lambda function, right? So how do you trigger it? It doesn't run automatically; you need an external AWS service which can trigger and invoke the Lambda function. For example, it can be an S3 bucket or a database. Say you want your function to be triggered whenever a write operation occurs in the database: you set that as a trigger, and whenever a new record is uploaded or inserted in the database, the Lambda function will be automatically triggered and will retrieve the information you need from it.

After that, we know the Lambda code is running, but where does it run? It needs a server or a computer to run on, right? What Lambda does is provision servers, and it also monitors and manages them. So how does it provision servers? Lambda functions carry various types of code, so if your code requires a lot of processing power, Lambda will choose an instance type with more processing power and RAM; or else, if your Lambda code only executes for two seconds, it will choose the lowest possible instance. This saves your money and time.
OK guys, now we understand how Lambda actually works, so what are the various concepts in Lambda? Let us look into them. The four concepts we are going to see are functions, runtimes, layers, and log streams. So what is a function? A function is a script or a program that runs in AWS Lambda; whenever the Lambda is invoked, this function runs. You can see here, this is the function name, and here you write the code. The function processes the event and then returns a response.

Let us see some function settings. Here you can see the code: the code is the logic you use in the Lambda function, which you write over here. Then the runtime: the Lambda runtime executes your function, so whichever runtime you choose, the code you write here will be executed by that runtime. Then the handler: the handler is where you mention the function's name together with the file's name, so that whenever the Lambda function is invoked, this particular function is executed. Then tags: tags are key-value pairs which you can attach to any AWS service, just to track its cost or its metrics. Then the description: the description describes the function, and you can give it while creating a function in Lambda. And then the timeout: you have to set the timeout between 3 seconds and 900 seconds, because that is the allowed timeout range for a function, which I've already discussed in this session.
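To see how the handler setting maps to code, here is a minimal sketch: assuming the file is named lambda_function.py and the function is named lambda_handler, the handler value would be lambda_function.lambda_handler.

```python
# lambda_function.py  ->  handler is "lambda_function.lambda_handler"
import json

def lambda_handler(event, context):
    # "event" carries the input from the trigger; "context" carries
    # runtime details such as the request ID and remaining time.
    print("Received event:", json.dumps(event))
    return {"statusCode": 200, "body": "Hello from Lambda"}
```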
Let us move on to runtimes. So what is a runtime? A runtime allows functions in different languages to run in the same base execution environment; that means in the same environment you can run a Python file, a Java file, and a Node.js file. The runtime sits between the Lambda service and your function code, so whatever code you send, there are multiple runtimes available and Lambda will use the correct runtime for your file: if it is Python, the Python runtime; if it is Java, the Java runtime. It runs the code and gives you the executed response. You can also take a look at the various runtimes: these are the latest supported runtimes which were already announced. First there is .NET Core 2.0 (C#), then the Go programming language, then Java, Node.js, Python 3.7, and Ruby 2.5; the other supported versions are the older Node.js 8.10 and the older Python 2.7 and 3.6. OK, now let us see what
layers are. Lambda layers are a distribution mechanism for libraries, custom runtimes, and other dependencies. Instead of bundling them with the code which you write in your Lambda function, you can create layers and store in them the libraries or custom runtimes which you want to run your program with. You can store them across multiple layers (you can use up to five layers for a particular Lambda function) and upload them, ordered so that there is no confusion while the code is choosing a particular library or custom runtime. For example, if your code needs a particular library for uploading information to an Excel sheet or a CSV file, what you do is upload that particular library to a layer and keep the layers in order, so that your code picks the appropriate layer and gets the library out of it. Layers also let you manage your in-development function code independently from the unchanged resources it uses: you don't need to change your code; you can just upload the information as a zip file in a layer and take the information out of that. You can create multiple layers; the maximum number is five per function, and you can use layers provided by AWS, layers already published by other AWS customers, or layers you create yourself.
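For illustration, here is a hedged boto3 sketch of publishing a layer; the layer name and zip file are hypothetical. The zip would contain, for example, a python/ directory holding the library your code imports.

```python
import boto3

lambda_client = boto3.client("lambda")

with open("my-csv-library-layer.zip", "rb") as f:   # hypothetical zip
    response = lambda_client.publish_layer_version(
        LayerName="csv-helpers",                    # hypothetical layer name
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

print(response["LayerVersionArn"])  # attach this ARN to your function
```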
Then we come to log streams: this is the part where you monitor the Lambda function. Normally, Lambda automatically monitors your function invocations and reports metrics to CloudWatch. If you don't want to watch just those metrics, you can add logging statements to the code in your Lambda function, so that you get a log line for each and every step your function goes through; then you can look at the execution flow, see how your Lambda function is performing, and check whether it's working properly or not.
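As a sketch of what such logging statements can look like (one common pattern, not the only one): anything printed or logged inside the handler ends up in the function's CloudWatch log stream.

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("Function started")
    logger.info("Request ID: %s", context.aws_request_id)
    # ... your logic here ...
    logger.info("Function finished")
    return "done"
```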
Moving on, now let us see how AWS Lambda works with S3. Here we're going to see how exactly an operation in an S3 bucket can trigger an AWS Lambda function. Consider a user trying to access a website which can be used to upload photos. If you upload a photo here, it will be stored in the AWS S3 bucket the site is connected to. Also, whenever a put operation occurs, that is, a file is uploaded to the S3 bucket, the Lambda function will be triggered. For example, you can use a put operation or a get operation as the trigger; consider it is a put operation for now. So whenever the user uploads a photo here, it gets uploaded to the S3 bucket, and once it's uploaded, the Lambda function is triggered. The Lambda function's code can be anything; you can make any microservice out of it. If you want to store the file's name and location, you can store them in the database using the Lambda code, or you can watch the CloudWatch metrics and look at the logs which you have coded into your program. After that, you can also copy this particular file into another S3 bucket using your Lambda code. Likewise, if it was a get operation, say the user is trying to download a photo, the photo will be downloaded from the S3 bucket where it is stored; you can use that as a trigger too, and again make any microservice out of it. So this is how S3 is used as a trigger with Lambda functions. Now we understand how this works theoretically; the event such a trigger delivers to the function looks roughly like the sketch below.
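This is a trimmed sketch of an S3 put event, reduced to the fields this session's code actually uses:

```python
# Shape of the S3 "put" event a Lambda function receives (trimmed):
event = {
    "Records": [
        {
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "fromthebucket"},
                "object": {"key": "doc1.pdf", "size": 12345},
            },
        }
    ]
}

# The bucket and file name are then read like this:
bucket = event["Records"][0]["s3"]["bucket"]["name"]
key = event["Records"][0]["s3"]["object"]["key"]
```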
So now it is time to move on to the practicals. Let us do a hands-on on creating an S3 bucket and then using a Lambda function to copy uploaded files to various other S3 buckets. Why do files go to various buckets? Because we have a different bucket for each file extension: if it is an image, it goes to one bucket; if it is a PDF, it goes to a different bucket; whatever the file extension is, it goes to its own bucket. Let me explain that part now.
OK guys, now we are going to do a hands-on using multiple AWS services, so let me show you exactly what we are going to do before moving on with the hands-on part. I already told you how Amazon S3 works with AWS Lambda. What we are going to do is create a simple website which can upload a file to an S3 bucket, and whenever a file is uploaded to that S3 bucket, the Lambda function is invoked. We are going to upload three types of files: a .jpg file, a .pdf file, and a .txt file. Whenever an image file is uploaded, it goes to the image bucket; whenever a PDF file is uploaded, it goes to the PDF bucket; and whenever a text file is uploaded, it goes to the text bucket. We are going to do all this with a simple Lambda code, and we are going to use the S3 bucket's upload, that is, a put or post, as the trigger, so whenever an object is uploaded to the S3 bucket, the Lambda function will be triggered. OK guys, now let us see this in the AWS Management Console and do it practically. At the beginning of the video, I told you that I'm running this application both on localhost and on Elastic Beanstalk, so to understand it better, first let me
explain the localhost part. Let me show you the code first. There are two files: one is index.php and the other is filelogic.php. Let me briefly explain: index.php is a simple form which has a file input and a button. With the input you choose a file, and whenever you press the button, it uses filelogic.php to execute the upload to S3. Here I'm mentioning the bucket which I'm going to upload to, and here you can see my credentials: this is my region, the version I've given is 'latest', and these are my secret key and access key, so that the script can access my AWS account and that particular bucket in my account and upload the file there. Here you can see that if an object with the same name already exists, the script adds a time value in front of the key. Normally, if the name does not exist, the key is just the file name; but if a file with the same name exists, say doc1.pdf exists and I'm again uploading doc1.pdf, it attaches the current time in front of the file name. This is why, at the beginning of the video, there was one doc1 and another doc1 file with a timestamp in front of it.
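For readers more comfortable with Python than PHP, here is a hedged sketch of that same rename-and-upload logic using boto3; the bucket name matches the one used in this hands-on, and the renaming rule is the one just described:

```python
import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "fromthebucket"

def upload(local_path, key):
    try:
        s3.head_object(Bucket=bucket, Key=key)   # does the name already exist?
        key = f"{int(time.time())}{key}"         # yes: prefix the current time
    except ClientError:
        pass                                     # no: keep the name as-is
    s3.upload_file(local_path, bucket, key)

upload("doc1.pdf", "doc1.pdf")
```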
Now that we've seen this code, let me show you the Python code which is used for the Lambda function. Using the Lambda function, we are going to send these objects from one bucket to various other buckets based on their extensions. Let me explain that briefly, and then let us move on to how to do this on the AWS infrastructure.

This is the Python code. First, I'm importing some things which I actually need: I am importing json, I am importing os.path (this is for getting the extension), and I am importing boto3. This one is especially important, because boto3 is the AWS SDK for Python: it allows Python developers to build and run applications on AWS, so you can write applications for AWS in the Python programming language when you include this boto3 module. Now let me explain the code briefly. This is the source bucket; I'm getting its name here. This is the file name; I'm getting it here. And this is the copy source, which I'm building from the bucket and the key. This is why I imported json: I'm logging the copy source, the bucket which the file comes from, and the object which was uploaded, and this print is our CloudWatch info. Whatever I print over here will be available in the CloudWatch log streams, so you can go to the log streams and check these details after the function has started: when the function starts, this line will be there, and after that this information will be given.

And then here comes the logic; the logic is pretty simple. You first check the extension, that is, the last four characters: is it .jpg, .pdf, or .txt? Even if it is a file with a different extension, say a .png, the file will just stay in the source bucket; it won't be copied to any other bucket. If it is .jpg, it goes to the image bucket; if it is .pdf, it goes to the PDF bucket; and if it is .txt, it goes to the text bucket. Then s3.copy_object is called with the destination bucket, the file name, and the copy source, that is, the file which was uploaded to 'fromthebucket' ('fromthebucket' being the name of the source bucket). Also, if the extension does not match, a message will be printed in the CloudWatch log stream.
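Since the transcript only describes the function, here is a hedged reconstruction of what that Lambda code plausibly looks like; the destination bucket names are the ones used in this hands-on, but details may differ from the exact code shown in the video:

```python
import json
import boto3
from os import path

s3 = boto3.client("s3")

DESTINATIONS = {
    ".jpg": "tothebucketimage",
    ".pdf": "tothebucketpdf",
    ".txt": "tothebuckettxt",
}

def lambda_handler(event, context):
    print("Function started")
    print(json.dumps(event))  # shows up in the CloudWatch log stream

    source_bucket = event["Records"][0]["s3"]["bucket"]["name"]
    file_name = event["Records"][0]["s3"]["object"]["key"]
    copy_source = {"Bucket": source_bucket, "Key": file_name}

    extension = path.splitext(file_name)[1].lower()  # e.g. ".pdf"
    destination = DESTINATIONS.get(extension)

    if destination:
        s3.copy_object(Bucket=destination, Key=file_name,
                       CopySource=copy_source)
        print(f"Copied {file_name} to {destination}")
    else:
        # e.g. a .png stays only in the source bucket
        print(f"No destination bucket for extension {extension}")
```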
OK guys, now we've seen the code; let us move on to the AWS implementation part. Let us start from the beginning. First, you have to go to the IAM Management Console, because you have to create a role. So the first thing you do is create a role; I've already created a role named 'test', so I'm going into that. You then have to attach a policy, and I have already created a policy, which is 's3_move'. You have to create this policy and use this particular JSON code inside it; let me explain what it means. I am giving the effect 'Allow' here, and also over here. Here I am giving the actions logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents, so I am allowing all these events to happen in CloudWatch whenever this particular policy is used. And in S3 I'm allowing all operations: post, get, put, whatever operation it is, I am allowing it, so I am giving the effect as 'Allow'. The statements also need an Amazon Resource Name, so I'm providing that. Without this particular policy and role, you can't do what I'm going to do right now: we cannot copy S3 bucket objects into multiple S3 buckets. So you have to create this role and policy and use them in the Lambda function.
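For reference, here is a hedged reconstruction of that policy created with boto3 instead of the console; the resource ARNs are assumptions, since only the actions and effects were described:

```python
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "arn:aws:logs:*:*:*",   # assumed resource ARN
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",                   # all S3 operations: get, put, post...
            "Resource": "arn:aws:s3:::*",       # assumed resource ARN
        },
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="s3_move",
    PolicyDocument=json.dumps(policy_document),
)
```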
The first thing we are going to do now is create four different S3 buckets: one is 'fromthebucket', which is the main source bucket, and we want to create three more buckets, which are for image, PDF, and txt. But before moving on with the S3 part, let me tell you how to do this without copying and pasting that JSON code. First, you have to create a role which lets Lambda call other AWS services: after clicking on 'Create role', you choose Lambda, so that it can call other AWS services. Next, you have to give it permissions; you can see here there are a lot of managed policies available. Earlier, we created the policy 's3_move' and pasted in that JSON to allow access to CloudWatch logs and S3. What I'm going to do now instead is give full access for S3 by searching here: I searched 's3 full' and chose full access for S3, and I'm searching 'cloudwatch full', and here you can see I got CloudWatch full access too, so I selected both. Next come the tags; I don't need any tags, so I'm reviewing. I have to provide a role name, and I'm going to name this one 'practice', because I'm not going to attach it; I'm just showing you how to create a role and how to attach policies to it. You can see here there are two policies attached, AmazonS3FullAccess and CloudWatchFullAccess. So now I'm clicking 'Create role', and the role has been created. As I showed you before, here you can see it, and I can also go into the policies and show you the JSON: this one allows all the actions for S3, and the other policy allows all CloudWatch actions; you can see in the JSON that it allows everything. That is what I wanted to show you; now let us
move on with the S3 part. I am opening the S3 Management Console. I already have one bucket; this is for a Beanstalk application I created earlier. Now let's create four more buckets, as I told you. The first bucket is 'fromthebucket'. You don't need to configure anything, you don't need to give any tags; you just click Next, and it already sets the permissions ('Block all public access'). If you want, you can review it once and then create the bucket. That is how you create a bucket; it's actually pretty simple. Then I am creating 'tothebucketimage', and I've created that. Next I'm creating 'tothebucketpdf', going Next, Next, Next, and Create. And finally I'm creating 'tothebuckettxt', giving Next, Next, Next, and it's created. So right now we have created the four buckets we need, the ones referenced in the code.
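If you prefer scripting over the console, a minimal boto3 sketch of creating the same four buckets might look like this (bucket names are globally unique, so in practice yours would differ):

```python
import boto3

s3 = boto3.client("s3")

for name in ["fromthebucket", "tothebucketimage",
             "tothebucketpdf", "tothebuckettxt"]:
    # outside us-east-1 you would also pass
    # CreateBucketConfiguration={"LocationConstraint": "<region>"}
    s3.create_bucket(Bucket=name)
```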
What I'm going to do now is move on and create a function in Lambda. We'll go to the Lambda Management Console and create a new function. You can see here that I already have a function, so let me create a new one. The options are 'Author from scratch', 'Use a blueprint', and 'Browse serverless app repository'. Using a blueprint means starting from one of many ready-made templates; for example, there is an S3 get-object Python blueprint, a config-rule-change-triggered blueprint, and many more. What we're going to do is author it from scratch, because we are using our own code. I am giving the function name as 'awslambdademo', and I'm writing my code in Python, so I'm giving Python 3.7 as my runtime. One more thing: as I told you, we created a role and attached a policy to it, so we have to use an existing role here. The existing role I told you about was 'test', so I am selecting that particular role, and I am creating the function. The function has been created
successfully. Before configuring, let me first explain the dashboard: here is the configuration and here is the monitoring, where you can see CloudWatch metrics and CloudWatch Logs Insights. Right now it is loading; it will be empty for now, and after the function starts executing we can see some data over here. We can also see the logs in CloudWatch; let me show that later.

First, we'll configure it. I am adding a trigger; our trigger is going to be S3. The bucket is 'fromthebucket', because whenever an object is uploaded to 'fromthebucket', the function has to be triggered. So our bucket is 'fromthebucket', and whether it is a put, post, or copy, if an object is uploaded to 'fromthebucket', the function gets triggered and starts executing; so the event type is 'All object create events'. Here I don't need to give anything for the prefix and suffix: if you want to act only on one particular suffix, say only .jpg files, you can mention it over here, but right now I'm implementing that in the code, so there's no need. I also tick 'Enable trigger' over here and add it. So the trigger is added; you can see here the trigger has been successfully added.
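The same trigger can also be wired up programmatically. Here is a hedged boto3 sketch; the function ARN and account ID are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

# hypothetical function ARN / account ID
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:awslambdademo"

# first, allow S3 to invoke the function
lambda_client.add_permission(
    FunctionName="awslambdademo",
    StatementId="s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::fromthebucket",
)

# then attach the notification (note: this call replaces any existing config)
s3.put_bucket_notification_configuration(
    Bucket="fromthebucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],  # "All object create events"
            }
        ]
    },
)
```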
OK, so the next thing is that now we have to add the code here. You can already see there is a simple default code written; let us look at this part of the console. The runtime is Python 3.7, this is the editor to edit the code, and this is the handler which I showed you: 'lambda_function.lambda_handler', that is, the file lambda_function and, inside it, the function lambda_handler. My function's name is also lambda_handler, so I don't actually need to change anything. So let me copy our code and paste it over here, and let us save this particular Lambda function. I go here, copy this code, and I'm pasting it over here. The code has been copied: you can see the imported libraries, here is the print function, here is the CloudWatch info, and here is the logic to copy from one S3 bucket to another. Now let us save this; once we click save, it is
saved. So now, whenever an object is uploaded to the S3 bucket 'fromthebucket', this process will start happening. First, let me show whether it happens or not. You can see here that 'fromthebucket' is empty and 'tothebucketimage' is empty, and the other two will be empty as well because we just created them. So right now I'll manually upload a file to 'fromthebucket': I'm clicking 'Add files', adding this image, clicking Next, Next, Next, and Upload. I am uploading this file to the 'fromthebucket' S3 bucket, and you can see here that the file has been uploaded. Now let us cross-check with 'tothebucketimage': the file has to be copied to 'tothebucketimage', because our Lambda will have been triggered just now, that function will have run, and this process should have happened. Let us see; you can see the .jpg file has been copied here. So it works when uploading manually. Next, remember that I've created a simple web page which can upload files to S3, after which the process happens automatically. What we are going to do now is take the same web page which was running on localhost and make it run on Elastic Beanstalk too, with a public URL, so anyone can upload a file, and it gets segregated according to the extension and copied.
and it gets copied so I told you after execution of the
function you may see you can see the cloud watch matrix so let me go to
monitoring so here you can see cloud watch matrix and here you can see some
data so this is invocations one so the function has run in one time and the
duration was the duration is given here and also the success rate was 100% there
was no errors so you can see that here also let me show you the locks
so a lot has been created I am opening that so you can clear so function start
cloud watch so let me open the code first okay so here function start cloud
watch function start cloud watch and you can see the details wing dividers or log
stream name law group name request ID so log stream name log group name and
request ideas window so after the request ended it is ending the report
and the report has been sent and you can see here it took five hundred and sixty
six point one zero milliseconds and duration was 600 milliseconds so it is
founded so memory size the maximum memory size was 128 MB so there was no
more memory needed for this function so now let us move on with elastic
Now let us move on to Elastic Beanstalk; we'll create and deploy an Elastic Beanstalk application over there. Let us get started; let me click on 'Get started'. The first thing I have to do is create an application. My application is going to be 'awslambdademo', and I have to choose the platform: my code is in PHP, but if your code were in another language, like Java or Go or Python, you could choose that; right now I'm choosing PHP. What I'm going to do is create the application now, and later I'll upload the code, to make it clearer. The application is being created right now; it will take some time, so let us wait for the process.

OK guys, now you can see it has been created successfully. Let me show you what is actually there before we deploy: there is a default PHP page, so this is the default PHP page, and what we are going to do now is upload and deploy our application here. Before that, let me explain: there is a little change in the code. Before uploading and deploying our application, let me show you and explain the simple change which I made. This is the code; here you can see index.php, and there is no change in this, but in filelogic.php you have to change the path, because you are going to upload it to an Amazon Linux environment and the path changes there. The path is /var/app/current/ plus your file, so index.php and filelogic.php will be in this directory, and whatever file you upload to Elastic Beanstalk will be in this directory.
What you have to do is select all the files, choose them over here, and archive them into a zip file. If you ask why we should zip it rather than make it a RAR file or a tar file: the Elastic Beanstalk environment only accepts .zip files. You can do it the normal way, or instead of using WinZip you can just right-click the files and send them to a compressed folder, which will automatically create a zip file. I have already done that.
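If you want to build the zip programmatically instead of with WinZip, here is a minimal Python sketch that keeps both files at the archive root, which is the detail that matters here:

```python
import zipfile

with zipfile.ZipFile("awslambdademo.zip", "w") as z:
    z.write("index.php")        # stored at the archive root
    z.write("filelogic.php")    # not inside a sub-folder
```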
Now let us upload this file to Elastic Beanstalk. I click on this button, and I have to choose a file, so I go back to the folder; this is the file which has to be uploaded. The zip contains the filelogic and index files directly; you should not create a folder on top of them and zip that, you have to create the zip file by selecting just these files. I'm opening it and I'm naming the version 'awslambdademo', so it is 'awslambdademo', and I am going to deploy this. Deploying takes time, so let us wait until then, and then we'll check whether
our application works. OK guys, now the file has been uploaded; you can see the running version is 'awslambdademo', which is the name I gave for this particular version. Now let me open the URL and show you the website which is running. Here you can see our website is running fine. What I'm going to do now is upload a few files and check whether they get uploaded into our S3 bucket and moved to the respective S3 buckets for image, PDF, and text.

But before uploading, let us check whether the S3 buckets are empty or not, to make sure there were no files before and only the newly uploaded files appear. So I'm going to 'fromthebucket': it's empty; and I'm checking all the other buckets just to make sure. OK, all the buckets are empty; now let us upload a few files. I'm going to upload three different types of files and check whether each file goes to its respective bucket. First, I'm going to upload a PDF, and it is successful. Then I'm going to upload an image, and it is successful too. And then I am going to upload a text file. First, let us check whether all these files have been uploaded here: now we can confirm that the doc1.pdf, the .jpg, and the text1.txt have been uploaded to the S3 bucket. Now let us check whether each of the files with its different extension got copied to its dedicated S3 bucket. First, let me go to 'tothebucketimage': let me refresh it, and you can see the .jpg file over here. Next, the PDF bucket, and you can see the PDF file over here. And then, finally, the text bucket: I am refreshing it and you can see the text1.txt file over here.

So right now our Lambda function is working, and everything fits together. Whenever we upload a file through Elastic Beanstalk (Elastic Beanstalk is successfully running our application and gave us an instance to run it on), our files get uploaded to 'fromthebucket', the Lambda function is triggered, and using this particular logic the file goes to 'tothebucketimage' for images, and likewise for PDF and text. This is what is happening in our hands-on.
Before finishing this hands-on, let me recap the complete process which we followed to make this happen. First, we created a role, and we attached a policy to it to allow all the operations which we wanted to do on these buckets. Then we created four different S3 buckets: one for the source and three for the destinations. Then we created a Lambda function, and we uploaded a code which basically copies a file from one particular S3 bucket to the S3 bucket whose name we give as the destination. We also created a trigger for S3: whenever an object is created in the source bucket, 'fromthebucket', this particular trigger fires and the function code runs. Whenever this function code runs, what happens is that the file from the source S3 bucket is copied to the destination S3 bucket. And then we launched our application: we uploaded our local application into Elastic Beanstalk and deployed it, and we now have a URL to run our application from.

Now, let me refresh the logs once more: we uploaded multiple files after that, and you can see there are multiple log entries over here. This was the first invocation, and you can see that each subsequent run begins with 'function start' for that next invocation; the function keeps running like this. You can see the log stream name, the log group name, and the request ID: this is one particular function execution, and this is another, and another, and another; the function has been executed this many times since the first one. This is how you use CloudWatch. You can also see the CloudWatch metrics under the Monitoring tab: here, in the Monitoring tab, in the CloudWatch metrics, you can now see there are five invocations in total, so the function has been run five times. And here you can see the success rate: the success rate has been 100% and there are no errors, because all the files which we were uploading got uploaded, and the function never hit any error; it ran successfully.

So right now, I hope you guys know how to use Lambda to write a code and use other services, like S3, as a trigger to run your particular application. We learned multiple things: we learned IAM too, that we have to create a policy first; we know how to create an S3 bucket; we know how to write code in the Lambda editor; we know how to use CloudWatch log streams and metrics; and we also learned how to upload your local application into Elastic Beanstalk so that you get your own URL
and can access it from anywhere in the world.

Now, the use cases of Lambda: there are various use cases, but right now we will discuss three of them. The first one is the serverless website, the second one is automated backups, and the third one is filtering and transforming data. Let us see the serverless website first. I already told you what a serverless architecture is: it is where you just have to code, and it automatically takes care of provisioning and managing servers and the infrastructure completely. What you can do here is host your static website on S3. A static website is basically HTML, CSS, and JavaScript or TypeScript files; it cannot run server-side scripts, only client-side scripts, so server-side scripts like PHP and ASP.NET cannot be hosted on S3 as a static website. Using S3 for static hosting is very cheap, and you can write Lambda functions and connect them with S3 to complement it: by writing some code with AWS Lambda, you can make the static website do more, for example letting the users keep track of any resources being used on the website.

Next is automated backups; the words say it all. With automated backups, you can create backups automatically: you can create Lambda events and schedule them for a particular time on a particular day, and they create backups automatically in your AWS accounts. To create backups, you can check whether there are idle resources, take that content, back it up, and delete it from its original place. You can also generate reports using Lambda in no time, using code or by connecting it with CloudWatch: you can generate reports, see how much data has been backed up and how much data has been deleted, and manage and schedule it all very easily.

The third use case is filtering and transforming data. You can connect Lambda with other Amazon services like S3, Kinesis, and Redshift, and with database services like RDS, DynamoDB, or Amazon Aurora. You can work with the data before sending it to any Amazon storage or database service: you can filter it using your code, easily transform it, and load the data between Lambda and all these services; a small sketch of this idea follows below.
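As a sketch of the filter-and-transform idea, here is a minimal handler that cleans incoming records before they would be loaded into a storage service; the record shape is an assumption for illustration:

```python
def lambda_handler(event, context):
    records = event.get("Records", [])

    # keep only valid records, and reshape them for the target store
    cleaned = [
        {"id": r["id"], "value": float(r["value"])}
        for r in records
        if r.get("id") and r.get("value") is not None
    ]

    # here you would write "cleaned" to S3, DynamoDB, Redshift, etc.
    return {"loaded": len(cleaned)}
```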
Now let us discuss Lambda pricing. First, let me explain what the free tier is. The free tier is provided by AWS for a 12-month period, and in that period you can use the free allowances of the services provided by AWS: with free-tier-eligible services, you can use services like EC2, S3, and even Lambda for free, but they have their own limitations. For example, in Lambda you can use 1 million requests per month and 400,000 GB-seconds of compute time per month for free, and anything exceeding that will cost you. You might be wondering what GB-seconds and 1 million requests are. One million requests means the Lambda function being triggered 1 million times. A GB-second is a gigabyte-second of compute: the memory allocated to your function, measured in gigabytes, multiplied by the time it runs, measured in seconds. So 400,000 GB-seconds of compute time per month are allowed for free across your Lambda functions. And for requests, as I told you, 1 million requests are free, and after that each additional one million requests cost $0.20. For duration, I already told you what GB-seconds are: 400,000 GB-seconds per month are free, and after that every GB-second you use costs the figure given there, that is, $0.0000166667.
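To make the GB-second arithmetic concrete, here is a small worked example with assumed numbers:

```python
# A 128 MB function that runs for 1 second, invoked 1,000,000 times a month.
memory_gb = 128 / 1024            # 0.125 GB
seconds_per_invocation = 1
invocations = 1_000_000

gb_seconds = memory_gb * seconds_per_invocation * invocations
print(gb_seconds)                 # 125000.0 GB-seconds

# 125,000 GB-seconds is well under the 400,000 free GB-seconds per month,
# and 1,000,000 requests is exactly the free request allowance.
```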
So that is all about Lambda pricing. OK guys, we've come to the end of this session. I hope this session on cloud computing and AWS was informative for you. If you have any doubts, feel free to comment below. Thank you.