Hey guys, welcome to this session on
Amazon Web Services. Amazon Web Services is the leader of the cloud industry, but
how did they become this indisputable cloud leader? They provide scalable and
reliable cloud services, and you just have to pay for what you have used. To
state a fact, Netflix, one of the largest media service providers, has its entire architecture hosted on AWS. In this session, we'll
be learning AWS end to end. So, before moving on with the session, please
subscribe to our channel so that you don't miss our upcoming videos. Right
now, let us take a quick glance at the agenda. To start off with, let us quickly
brief on cloud computing fundamentals, and after that, we'll be looking into
what AWS is and why we need it. After that, we'll look at
various services provided by AWS. After learning all these services, we
will learn how to architect an entire website or an application using AWS.
We know that lambda is one of the most used AWS services. So, we'll have a
dedicated hands-on for that and also dive a lot deeper into that particular
topic. Also, at the end of this video, we have a set of interview questions with detailed answers for any people who want to crack an AWS Solutions Architect
interview. Also, this video covers most of the AWS services, but still to
become a professional AWS Solutions Architect,
you need to put in much more effort, and you also need to take up a professional course. I would suggest you take up the professional AWS Solutions
Architect course provided by Intellipaat. So, check
out those details in the description. Right now, let us move on with the
session. Let us start this with 'Before the rise of Cloud.' So, what was the approach
to run an application on the Internet before the Cloud? How were companies running it? Let me give you a brief on that. So, the first thing a company does is buy stacks of servers and hardware components. The
second thing is it has to maintain and upgrade these servers and databases
and the hardware components according to its needs. Then it has to keep on
monitoring and reporting it. The company has to have reports in order to upgrade or downgrade its software and hardware, and it has to consider the traffic of its applications or websites in order to scale. Then, finally, the company has to recruit top security professionals so that it can handle unauthorized access attempts and other incoming threats.
It sounds tiring, right? Now, let me tell you the disadvantages of
on-premise setup. The first disadvantage is maintenance of
servers. To maintain servers, you always need professionals who can
maintain the servers 24/7. Then, there will be an increase in expenditure every time you upgrade your hardware, because in an on-premise setup, most of the time when your website's or application's traffic goes down, your hardware components will be idle. When they're idle, it is a total waste of money and time to have that many hardware components. It also increases your expenditure because you have to maintain hardware components that are not even in use. Then, data privacy and security are weaker, because the whole data privacy and security system, and the control over it, is in your own hands; the company itself has to provide data privacy and security. It is not in a third party's hands; it stays in the same company's hands. Then, scalability and flexibility: scalability means that when the traffic goes higher, the setup should automatically or manually scale up or down according to the need, but in an on-premise setup, scaling up means buying and provisioning new hardware, and scaling down just leaves that hardware sitting idle, so neither is easy. Now
moving on, let me introduce you to cloud computing. Let me first tell you what cloud computing is; then, I'll give you more explanation of its advantages. So, in
the simplest terms, cloud computing is a technology where a resource is provided
as a service through the Internet to a user. It can be anything, for example,
Google provides Google Docs and Google Sheets through the Internet; these are software products provided through the cloud, and they are hosted on Google's own cloud platform, which is GCP (Google Cloud Platform). Now
moving on to the cloud computing benefits: The first benefit is better
data privacy and security. AWS or any other cloud provider, like Azure or Google Cloud Platform, will always hire top security professionals because they have their own products running on their cloud service. Second, there are no maintenance worries. You just have to pay the subscription fee or the charges for the particular services you use; you don't need to worry about maintaining the hardware. All of the maintenance will be taken care of by the cloud provider. Third is faster data recovery. You can store your data in multiple locations, for example, in the US and the UK. So, when your main database's data is lost, you can take the data from the other location and recover it even faster than you would expect. The fourth point
is dynamic scaling. Dynamic scaling is very simple: it is auto-scaling, which means that whenever your website's or application's traffic increases, the system automatically scales up your hardware components, and when the
traffic goes down, it scales down your hardware components. You just have to
pay for the particular hardware components which are running currently.
Then finally, reduced costs. All businesses need to reduce their costs; they always want to cut expenses. Using a cloud provider or cloud computing technology makes this very easy: they can cut maintenance costs, they can cut the cost of security professionals, and the scaling process is also seamless. Ok guys, a quick info: If you want to
become a professional AWS Solutions Architect, you can take up this
AWS solutions architect course provided by Intellipaat. In this course, you will
be learning all the concepts which are required to crack the AWS
certification exam, and also there are three major projects involved for you to
understand it better. Now let us continue with the session. So, next is, what is AWS?
We all know that AWS is a cloud computing platform; that is, AWS, or Amazon Web Services, is a cloud provider which provides a lot of services over the Internet to the user. Many companies like Netflix, Unilever, and Expedia are using Amazon Web Services for their own needs. Netflix is completely on AWS, which means all of its infrastructure right now depends on AWS. So, that should give you a sense of AWS's capability. Now, let me give you a few examples of
other cloud providers. The first one is Microsoft Azure. Azure
is owned by Microsoft, and they also host their own products on it, like Office 365, and you can also do native integration with IDEs like Xcode, IntelliJ IDEA, and Visual Studio. Google Cloud: We all know Google is the biggest brand
name in the Internet industry. Google Cloud came around 2011, and it has become a hit; it is now the third most popular cloud provider in the world.
Alibaba Cloud: We all know Alibaba. Alibaba is a China-based company.
Alibaba Cloud is also called Aliyun. It basically provides services to
businesses who want to host their services online.
Next is IBM Cloud. IBM Cloud is similar to Azure, Google Cloud, and Amazon Web Services. It also provides compute, networking, and storage services. Then comes VMware. VMware is a software virtualization company, so basically they provide virtual machines via the Internet to users or companies. Finally, Salesforce. Salesforce is also a cloud platform, and their main tool is CRM, that is, customer relationship management, and they are considered to be number one at it.
So, why is Amazon Web Services so successful? They started AWS in the year 2006, and it is still the number one cloud platform. Let us see why. Before most companies had even thought of the cloud, Amazon Web Services had already revolutionized the IT industry by introducing to companies and businesses a new way to use servers. There are three key
points which made AWS successful. The first is its simple, per-hour billing. The billing system of AWS is very simple: you will only be charged for the particular services you use, and the charge is based on the number of hours you use them, not on days or months. The second reason is Amazon's brand name. Amazon.com is a household name all over the globe because it is the biggest e-commerce platform. The third reason is its easy account setup. The first thing you have to do is provide your details, your email address, your username, and your password, just as when you create a social media profile. The second step is to give your credit or debit card details, and that's it: your AWS profile has been set up. But what makes Amazon, or Amazon Web Services, peculiar? Amazon has its own leadership principles, and it makes its
employees follow that. Let me give you a few examples of them: The first is
customer obsession. Customer obsession is that leaders start with the customer and
work backwards. The first thing they follow is meeting the customer's wants: whatever the customer wants, they aim to deliver, and then they work backwards; they reverse engineer. The second is 'Invent and Simplify.' Leaders expect and require innovation and invention from their teams and always find ways to simplify. They need inventions, and they also need to simplify them; the simplest, most user-friendly experience is the most attractive to a customer.
Third principle is ownership. Leaders are owners. All the employees in AWS or
Amazon need to have that feeling of ownership. The fourth point, and this one applies to anybody, is 'Learn and Be Curious.' Leaders are never done with learning and always seek to improve themselves. So, always keep on learning and be curious about what you do. Next is the future of AWS. What is going to
be the future scope and the job trends of AWS? Let us see. In the market share figures for Q4 2018, that is, the last three months of 2018, Amazon Web Services had 32.3 percent of the cloud computing market, Microsoft Azure came second with 16.5 percent, which is almost half of Amazon Web Services' share, then Google Cloud Platform, and then Alibaba Cloud, and after that come the other providers, like IBM Cloud, VMware, and Salesforce. What are the AWS job trends right now? Let me give you
a few. So, the four job trends are: AWS SysOps administrator, cloud developer, AWS Solutions Architect, and cloud software engineer. The salary of an AWS SysOps administrator is $111,000 to $160,000, that of a cloud developer is around $95,000, and an
AWS solutions architect's salary varies from $98,000 to
$150,000, and that of a cloud software engineer varies from
$63,000 to $93,000, and these are all
the numbers for a fresher. The salaries may vary with your experience. If you are
an experienced person in the cloud industry or in AWS, you might be earning
more than this. This is just the start. Moving forward, let's now talk about the thing we are here to learn: the AWS services. Let's see which domains AWS provides its services in. AWS provides services in compute, storage, database, security, management, customer engagement, app integration, etc. We are going to discuss each one of them, one by one, as we move along. So, first up is the compute domain. But I think I have a question from Shubham. Shubham is asking me: among these domains, what is the difference between storage and database?
Alright, Shubham. So, storage is basically used when you have a workload wherein you want to upload binary files. What are binary files? Files like video files, mp3 files, or photo files are called binary files because they're not textual data; they're basically content, and that content is binary in nature. So all your videos, all your music, any kind of file which you execute, your games, all of those are binary files. When you compare that with a database, a database usually deals with data that is textual in nature and has a proper structure; it could be unstructured as well, but basically, textual data that a human can read goes inside a database. On the other hand, files that run on a computer, for example, any program, any video file, any music file, or any other file of that kind, should be stored on a storage kind of platform. They should not be stored in a database; they can be, but they should not be, because that unnecessarily makes the size of the database big, which actually causes problems when you are querying through the data, when you are using the database.
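To make that contrast concrete, here is a minimal sketch using the boto3 Python SDK: a binary file goes to S3 (object storage), while a structured, textual record goes to DynamoDB (a database). The bucket name, file name, table name, and item fields are hypothetical placeholders and assume the bucket and table already exist.

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

# Binary content (a video, an mp3, a photo) goes to object storage.
s3.upload_file("holiday.mp4", "my-demo-bucket", "videos/holiday.mp4")

# Structured, textual data that you will query goes to a database.
orders = dynamodb.Table("Orders")  # hypothetical table with "order_id" as its key
orders.put_item(Item={"order_id": "1001", "customer": "Shubham", "amount": 499})
```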
So guys, this is the difference between storage and database. Shubham, is your doubt about the difference between storage and database clear? Yes. Others, guys, if you have any doubts about these domains,
you can ask me. Let me explain all these domains to you one by one. So, the compute domain basically deals with servers. If you need servers, or if there is a workload which needs processing, the compute domain will have services that you can launch to implement that workload. We will discuss more on this as we move along. Then you have the storage domain, which, like I said, deals with storing binary files on remote servers; for that we have dedicated services, and we are going to discuss those dedicated services under storage. Then we have a domain called database. In the database domain, you have a lot of services. So, if you
have structured data, you have one kind of database service for that; if you have unstructured data, you have another database service. We will discuss more on that as we move along. Then there is a domain called security. All security related to the application that you have uploaded, to the servers that you are using, and to the account that you are using, all those kinds of things would be included in security. There are specific services for each kind of workload that I just mentioned; we're going to discuss them when we reach the security domain. Then we have the management domain, which includes monitoring and deploying a whole architecture at once. All those kinds of services come under management. Don't worry if you don't understand it yet; I'll explain it more when we reach that domain. Then we have customer engagement. Sending emails, sending notifications, all those kinds of services come under customer engagement,
and in the end we have app integration. This covers services like queues: for example, if you have an application to which you have to give a lot of jobs, it's better to have a queue where you store all your jobs, and that queue is separate from the server which will be executing the jobs. These kinds of integrations are called app integrations, and we will be discussing the services in that domain as we move along, all right.
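To make the queue idea concrete, here is a minimal sketch using the boto3 Python SDK and SQS, AWS's queue service from the app integration domain; the queue name and the job payload are hypothetical.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="job-queue")["QueueUrl"]

# The web tier drops a job into the queue and moves on...
sqs.send_message(QueueUrl=queue_url,
                 MessageBody='{"job": "resize", "image": "photo-42.jpg"}')

# ...and a separate worker server pulls jobs off at its own pace.
for msg in sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=1).get("Messages", []):
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```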
So guys, these are the main domains in AWS. There are a lot of other domains as well, but we'll be focusing on these since this is what is actually going to be asked in your Solutions Architect exam, and at the same time, these are what you will generally be using when you become an AWS engineer. So, moving forward, guys, let's start with the compute
domain, and let's see what all services are there in the compute domain. Ok guys,
a quick info: If you want to become a professional AWS solutions architect, you can take
up this AWS solutions architect course provided by Intellipaat. In this course,
you will be learning all the concepts which are required to crack the AWS
certification exam, and also there are three major projects involved for you to
understand it better. Now let us continue with the session. Let's
discuss the AWS services in the compute domain. So, here are the set of services
which are included in the compute domain of AWS. They are: EC2, Elastic Beanstalk, Lambda, Auto Scaling, the AWS load balancer, AWS ECR, and AWS ECS. Now, for the sake of explaining things to you guys, I have taken the liberty of shifting some services from other domains which I think fit better in this domain, but you don't have to worry; the explanation would be the same. It's just that you would find them somewhere else in the AWS management console. For example, Auto Scaling would not be under the compute domain; it would be
under some other domain. I'll show you when we move on to the AWS management
console as to where you can find each and every service. For now, guys, let's
start with the first service, which is AWS EC2, and let's see what it is all
about. So guys, Elastic Compute Cloud is nothing but a server; it's a raw server, just like a fresh computer that AWS gives to you. What you basically do is ask AWS for a server, and that service is called EC2. You specify the kind of processor that you want, you specify the amount of RAM that you want, and then you click on launch, and what happens after that is you get a server of exactly that configuration. Now what do you do? You will have to connect to it remotely. If it's a Linux machine, you will connect through SSH; if it's a Windows machine, you'll connect through RDP, and once you connect to it, it'll give you the UI of the operating system. If you had installed that operating system on your local machine, it would look exactly the same; it's just that now it has been launched on the infrastructure of AWS and can be accessed using various tools, like RDP tools or SSH tools. I'm going to discuss more about EC2, and I'm going to launch an EC2 instance in a moment, but before that, let's discuss all the services which are there in compute, and then we'll do that EC2 demo as well. So guys, Elastic Compute Cloud, like I said, is
just like a raw server that is given to you, and on this raw server you can install anything; you can make it anything. You can make it a web server, you can make it a database server, it can be anything, right? That is what EC2 is all about. Now, in the diagram, as you can see, you can launch either a single EC2 instance or multiple EC2 instances. You also have the option of creating an EC2 instance, installing some software on it, and then launching multiple copies of that particular EC2 instance, so that you don't have to install all the software all over again; you can create multiple copies of that EC2 instance. At some point in time, if you feel you want to increase the configuration of your system, you can also do that. Let's say my RAM was 8 GB over here and I want to make it 16 GB; even that is possible in EC2, and that is why the name is Elastic Compute Cloud. Elastic means that you can increase or decrease the instance configuration as and when required. So, this was about Elastic Compute Cloud, that is, EC2.
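If you prefer code to the console, the same launch can be sketched with the boto3 Python SDK; the AMI ID, region, and key pair name below are hypothetical placeholders, and the console route is shown later in this session.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0abcdef1234567890",  # hypothetical Ubuntu AMI ID
    InstanceType="t2.micro",          # free-tier eligible size
    MinCount=1,
    MaxCount=3,                       # ask for up to 3 identical copies
    KeyName="test-intellipaat",       # key pair used later to SSH in
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])  # e.g. i-0123..., pending
```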
The next service is Elastic Beanstalk. Now, Elastic Beanstalk is an advanced version of EC2. How is it an advanced version of EC2? In EC2, what you could do was just launch a server and then install software on it; you could make it a database server, a web server, anything. With Elastic Beanstalk, you get certain restrictions on top of EC2, and there is a certain amount of automation involved. So what exactly is Elastic Beanstalk? Elastic Beanstalk is basically a web application server. You cannot install any other software on it. It is a web application server onto which you can upload your website, and you don't have to install any software; you don't have to do anything. Like I said, we talked about what infrastructure as a service is and what platform as a service is. Infrastructure as a service is EC2, where you get the whole server and access to the operating system, etc. Elastic Beanstalk is platform as a service. In this, what you get is a dashboard. You don't get access to the operating system; you don't get access to the software installed on that server. Everything is pre-configured. All you do is say that you need, for example, a PHP server. It
will launch a PHP server and give you a dashboard where you'll have an upload
kind of a button. You click on that upload button and you'll have to put or upload
your website over there. So once you have uploaded your website files, they
automatically go into the path where they have to go, and all you have to now
do is just go to that IP address or the domain name of that particular elastic
Beanstalk instance, and you will be able to access your website. Compare that with doing the same thing in EC2: you'd have to first install the software, then upload the files using FTP because there's no dashboard to upload them through. So you'd have to download an FTP client, connect to the instance, upload your files into that particular folder, and only then, if you go to the IP address of the EC2 instance, would you be able to access the website. With Elastic Beanstalk, what they did was this: if you have a use case where you have to deploy a web application, you don't have to do all that manual work of installing the software or putting your files on the server. All you have to do is open Elastic Beanstalk, select the environment that you want to deploy, and upload your website there. That's it. So it's an automated version of EC2 in which you have certain functionality for putting a website up, but there is a limitation: it can only be a web application server. It cannot be a back-end server for you. Elastic Beanstalk is only used to deploy your websites, guys, remember that, because the next service is a little different from Elastic Beanstalk,
and it also has some limitations. The next service is AWS Lambda. Now, AWS Lambda, again guys, is an automated version of EC2; it's an advanced version of EC2, but with some restrictions. What are those restrictions? It cannot deploy an application: you cannot upload your website to it, and it cannot host an application for you. So what is AWS Lambda? AWS Lambda is
basically just used for doing your back-end processing. Now what is back-end
processing? You might wonder, so let me give you an example: let's say you have
an image processing application. You have a UI or a website through which you can upload an image, and what that website does is store the image, reduce its size, and then let you download it again. Now, you might be thinking this is one website, so ideally everything should be happening on one server, but that is not the case here, guys. Your web application is on one server, the processing happens on a different server, and AWS Lambda specializes in processing. Why is AWS Lambda preferred for processing? Because when you launch a server, you have to select the configuration that you want, i.e., you have to select the processor and the RAM. With AWS Lambda, you don't have to specify any configuration; you don't have to choose what the server size should be for your application to cater to the incoming workloads. What AWS Lambda does is look at the kind of workload it is being given, automatically scale up its configuration if it has to, execute your workload, and then give the result back to the web application server, which presents the result on the website. So basically, only processing happens on
AWS Lambda, and only website deployment happens on Elastic Beanstalk. AWS gives us these two wonderful services so that we can create a distributed kind of architecture wherein, you know, if there's a fault in one server, it's not as if my whole application will go down. I have certain redundancies in place; I have distributed my work among multiple compute nodes so that even if one gets faulty, my application will not go down. We'll discuss more on this as we move along and talk about auto scaling. But guys, remember this: AWS Lambda is only used to run your back-end code. AWS Elastic Beanstalk is only used to deploy a web application. EC2 can be used for anything; it's your own private computer. You can install any software, make it a back-end server, make it a web application server, make it a database server for that matter, do anything with it. That is what AWS EC2 is.
Now, if you look at the diagram over here, let's say there's an e-commerce application, and in that e-commerce application there's a trigger for buying something; let's say you order a package. So when you order a package, what happens on Amazon? An entry is made into the database; that entry is stored, let's say, in DynamoDB, which is a kind of database in AWS. Now, once the data is stored in DynamoDB, you want to do some processing on the data and then go ahead and store it somewhere else. So, once the data has been stored in DynamoDB, I need the processing to be done. One way to do it is to do everything on the same server: let's say my package order confirmation happened on the web application. The web application server triggered this particular action, that my package order is confirmed, and that's when it issued a command to store the data in DynamoDB. That is all done by my web application server. This server itself could also do the processing on the DynamoDB data and then store it in a Redshift warehouse, but, you know, the processing takes a lot of time, and that is the reason I separate my processing onto a different server, so that there is no overhead on my website and my website does not become slow, irrespective of what kind of workload is running in the back end, because that workload is being managed by AWS Lambda. So my website can stay up and running; it will always be available to all the users irrespective of how huge the processing I have to do is. That happens in AWS Lambda.
One more cool thing about AWS Lambda is that whatever job you give it, it's not one server which does the job. AWS Lambda takes the job and executes it on one server, and if it gets one more job in the meanwhile, it launches a second server on which that job will be executed. Similarly, if there is a third job coming, it will be executed on a third Lambda worker; that's how it works. Once the processing is done, it can also communicate back to the web application, and that's how you get the message that the operation is done, but you never have to think about how many computers or servers are working in the back end. So now you know. In conclusion, AWS Lambda is used for executing back-end code. AWS Elastic Beanstalk is used to deploy a web application. AWS EC2 is a raw server which you can make into anything: it could be a web application server, it could be a back-end server, it could be a database server, etc.
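To give a feel for what back-end code on Lambda looks like, here is a minimal Python handler sketch for the image-processing example above; it assumes an S3 upload trigger, the bucket layout is hypothetical, and the 'resize' line is only a stand-in for real image-processing logic.

```python
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # The S3 upload trigger puts the bucket and key of the new image in the event.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    # Stand-in for real resize logic (e.g. the Pillow library would go here).
    resized = original[: len(original) // 2]

    s3.put_object(Bucket=bucket, Key="resized/" + key, Body=resized)
    return {"status": "done", "resized_key": "resized/" + key}
```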
Okay guys, a quick info: If you want to become a professional AWS solutions
architect, you can take up this AWS solutions architect course provided by
Intellipaat. In this course, you will be learning all the concepts which are
required to crack the AWS certification exam, and also there are
three major projects involved for you to understand it better. Now let us continue
with the session. Next is load
balancing. What does load balancing basically mean? Why do we need this kind of service? Now guys, I told you that whenever we create a production-grade application, we basically deal with distributed computing. When we talk about distributed computing, we have to talk about redundancies so that my application is highly available. Now, what does that mean? Imagine these three servers are your web application servers. Now imagine you just had one. In that case, what will happen? If there is any kind of fault in this particular server, my application will go down. So what I do is launch three exact copies of that server, and what happens in that case is that if this server goes down, my user can view my website on this server, and if both these servers go down, my website can still be viewed on this particular server. But now you might be wondering, how will the user know which server to go to? That is exactly what the elastic load balancer is all about.
So, what the elastic load balancer gives you is one domain name, one IP address, to hit. You hit that IP address, and the elastic load balancer will automatically analyze where to send the request. The elastic load balancer constantly keeps a check on all the instances which are running in your cloud environment, and it sees which of them are healthy and which of them are unhealthy. If there is a server which becomes unhealthy, your load balancer will stop routing traffic to that particular server and start routing your traffic to the other servers, and this is the main job of the elastic load balancer: to distribute traffic among all the healthy instances which are out there. Also, one more important thing over here, guys: if all three servers are functioning in a healthy state, what will it do in that case? It will distribute the traffic equally among all the servers. Now you might be wondering how that helps. Let's say I just have one server over here with around 16 GB of RAM and, let's say, an i5 or an i7 processor, so it will be able to serve a limited number of users. Let's say the server will be able to serve the 10 people who are on the website. Now, what happens if there are 20 people tomorrow? In that case, you always have to plan ahead, and you have to keep more servers in your group, or in your architecture, so that if there is more traffic, the requests can go to the other servers and the load on the first server is actually decreased. So what the load balancer does is never let one server max out on its performance; it will always distribute the traffic equally among all the servers so that the processing overhead on any one server does not go up and my application does not become slow, right?
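As a rough illustration of that health-check behaviour, here is a boto3 sketch that registers three web servers with an existing target group behind a load balancer and then reads back their health states; the target group ARN and instance IDs are hypothetical.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARN of a target group that sits behind the load balancer.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123"

# Register the three web servers (hypothetical instance IDs) as targets.
elbv2.register_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": "i-0aaa111"}, {"Id": "i-0bbb222"}, {"Id": "i-0ccc333"}],
)

# The load balancer keeps checking which targets are healthy and routes only to those.
health = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
for target in health["TargetHealthDescriptions"]:
    print(target["Target"]["Id"], target["TargetHealth"]["State"])  # healthy / unhealthy
```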
Now, you might be wondering how I do it: do I constantly keep a check on how much traffic is coming to the website and deploy servers accordingly? That answer I will give you with my
next service, which is AWS Auto Scaling. So, what does AWS Auto Scaling do? It automatically scales up the number of servers based on how much traffic is coming onto those servers. Now, how does it do that? You can set a certain threshold. Let's say there are four instances running in my architecture; what I can tell auto scaling is that whenever the collective CPU usage goes beyond 80 percent, it should launch one more instance in the group, and the load balancer should now route traffic to the new instance as well. This is what auto scaling is. Similarly, when the CPU usage collectively goes below, let's say, 40 percent, it should decrease the size of my server fleet; the moment the collective CPU usage goes below 40 percent, it will decrease the number of instances in your auto scaling group, and that's how it works, guys. The auto scaling service cannot exist alone; it always has to work in conjunction with the AWS load balancer. Why? Because if the size of the fleet is increasing or decreasing, there should be an entity on top of it which will distribute the traffic equally among all the instances. So, if you are making use of auto scaling, you will always make use of the AWS load balancer. On the other hand, you can make use of just the AWS load balancer without auto scaling; that is fine. But if you are using auto scaling, you absolutely have to use a load balancer for your traffic routing. Alright, so this is what auto scaling is, guys.
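Here is a small boto3 sketch of attaching a scaling policy to an Auto Scaling group; the group name is hypothetical, and instead of the exact 80 percent and 40 percent step thresholds described above, it uses the simpler target-tracking style, where AWS adds or removes instances to keep average CPU near one target value.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # hypothetical Auto Scaling group
    PolicyName="keep-cpu-in-check",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        # Add instances when average CPU climbs above this value,
        # remove them again when it falls well below it.
        "TargetValue": 60.0,
    },
)
```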
The next service is the Elastic Container Registry. Now, what is the Elastic Container Registry? For this, guys, you have to know what Docker is. If I have to give you a brief on what Docker is: Docker is basically a tool using which you can launch operating systems in the minimum size possible; the minimum container size that I know of is 40 MB. So, let's say we were talking about distributed computing: in distributed computing, each of your servers plays a different part in your application. Similarly, what we can do is launch containers. Now, what containers do is act as virtual servers, but they take the minimum resources possible and the minimum space possible; like I said, 40 MB is the size of a container which holds an operating system. In those containers you can deploy applications, and these applications then run as if they were running on a separate server; they are isolated from the operating system on which the container is running. I'm sorry, guys, that I cannot go into the depths of what Docker is right now, because it's altogether a separate topic, but you can understand it like this: it's basically a mini virtual computer that you can run on your system. And the ECR service, what it basically does is store these containers in a repository: like you have GitHub for storing code, you have ECR for storing your container images. For example, like I said, the 40 MB operating system image has to be stored somewhere, so it will be stored in ECR. Now, if you want to run those images, you have a service called ECS. What ECS does is run any Docker image on the AWS infrastructure, and it orchestrates it in such a way that if there is anything wrong with that container, the container can be launched again. Now, don't get confused, guys, with what we do in EC2. You might have a question: I can launch multiple EC2 instances, so instead of that, why don't I just launch multiple containers? Because in that case, the machine on which the containers are running becomes a single point of failure, which means that if there is anything wrong with that machine, any number of containers running on that particular system will go down. So this is also a redundancy that you can build into your architecture: you can run applications in containers which are running on an auto-scaling infrastructure, that is, on machines on which Docker is running, so if the CPU gets overloaded and the CPU processing increases, you can scale your machines and you can scale your containers as
well. If this is a little complex for you guys, you can just ignore these services, which are ECR and ECS; they are only for those who understand what Docker is. So ECR is nothing but a repository on which you can store container images, and ECS is nothing but a service to run your Docker containers. If you don't know what Docker is, I'll just give you guys a link after this class; it's basically an introduction to Docker video of about half an hour to 45 minutes. Go through that, and then you can go through this recording again, where you can find a description of what ECR and ECS are, and it'll make more sense to you in that case. Alright, so guys, we have discussed ECR and we have discussed ECS.
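As a tiny illustration of ECR being like GitHub for container images, here is a boto3 sketch that creates a repository and lists the images stored in it; the repository name is hypothetical, and pushing images into it is normally done with the Docker CLI rather than from Python.

```python
import boto3

ecr = boto3.client("ecr")

# Create the repository that will hold the container images.
repo = ecr.create_repository(repositoryName="demo-web-app")
print(repo["repository"]["repositoryUri"])

# List whatever image tags have been pushed into it so far.
for image in ecr.list_images(repositoryName="demo-web-app")["imageIds"]:
    print(image)
```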
Moving forward, now let's go ahead and do some hands-on. I think it's enough theory for compute; let's go ahead and launch some compute instances in AWS. Alright, so what I'm going to do is first show you how you can create your account on AWS, so let me just jump onto my browser. Alright guys, the first thing that you would be doing is heading to aws.amazon.com. Once you are on this website, you will see a big orange button over here which says 'Create an AWS account.' Just click on that, and it will take you to the next screen, where you will have to fill out your details, so fill out all your details over here. Once you have done that, let us enter some pseudo values; let's say the email is abc@intellipaat.com, the password can be anything, similarly I'll set anything over here, and for the AWS account name, let's say it's intellipaat. We'll click on continue, and then it will ask you what the account type is: is it a professional account or a personal account? We'll select personal account because we are just trying out AWS; we want to use it for ourselves. Then comes the phone number; let's say I give 123456789. For the country or region, you can select whichever region you are in, then your address, the city that you are in, and the postal code; let's say the city is Mangalore and the state is Karnataka. Once everything seems fine to you, just click on 'Create Account and Continue.' Once you do this, you will reach the screen where you'll have to enter your debit card or credit card number. Enter everything over here and click on 'Secure Submit.' Once you've done that, the next page will ask you what kind of account you want to create: is it a business account or a personal account, and for what purpose will you use this account? It's all logical; you know what you have to answer. Just remember this: you are creating a personal account, and it's in no way related to a business. Select everything, click on finish, and you will have a new AWS account created for you. Now, one awesome thing about AWS, guys, is that it gives you a free tier, that is, you can launch instances for free for one year. Every account that you create when you sign up on AWS gets a one-year free tier where you can launch a certain EC2 instance for free, and that too for 750 hours a month; 750 hours is roughly the whole month, and this applies for one whole year of free instances. I will show you guys how you can launch instances as well, but this is how you can sign up for AWS. Once you have signed up for AWS, guys, the next thing would be to log in, and for that, once you go to aws.amazon.com, just click on 'AWS Management Console,' and it will take you to the sign-in page. On the sign-in page, just enter the email address through which you want to connect (one second, guys), so this would be the email address that I want to enter, followed by the password, and this will sign you in to AWS. This is the step that you should reach once you have completed your sign-up. Okay, now what I want to do is launch my
first server on AWS. Now, how would I do that? We have studied the EC2 instance, right? For launching an EC2 instance, here is the domain: the domain that we went through was compute. In compute, you have these services: you have EC2, ECR, ECS, then you have Lambda and Elastic Beanstalk. There are other services as well; we have not touched those because they would go beyond the scope of what we intend to do in this session. This session is an overview of AWS for an AWS Solutions Architect, so the services that we have picked are all the basic services that you should know about, and once you understand all of those, understanding the rest of AWS would be a cakewalk for you. There are so many services in AWS: you have Ground Station, for instance, and if you are into game development, then you have Amazon GameLift, but all these services are confined to a certain kind of work that you may want to do. I am not going to develop a game, I'm not into game development, so that service is not for me. Similarly, there are IoT services you could get to know, but not every company would use IoT, correct? Some companies would use IoT, but for most of you, learning the IoT services would be a waste of time, and that is why we have picked the services which are essential and which most organizations that are in IT and want their applications up and running will use; we have selected just those services. Also, your AWS Solutions Architect exam would be confined to only these services that we are learning.
Alright, now what we wanted to do was launch our first server, and for that there is a service called EC2. You can either find it under the compute domain, that is, EC2 over here, or you have an awesome option over here to search for any kind of service. Let's say I want to go to EC2: I can just write EC2 in the search bar and I will get the respective result over here. Let's click on that, and this link will open your EC2 dashboard. From here you can launch your first EC2 instance. There are a lot of options on the left; guys, do not worry about each of these options. These are options that we will be studying when we specifically talk about EC2, and that will be in the further sessions. For now, just understand how to launch an EC2 instance, and for that you just have to click on this blue button over here which says 'Launch Instance.' Just click on that. Once you have clicked on 'Launch Instance,' you will get an option to select the operating
system that you want to run on your server. There are a lot of operating systems to choose from: you have the Amazon Linux AMI, which is a custom Linux that AWS has created, then you have the Red Hat OS that you can run, you have SUSE Linux, you have Ubuntu; there is a host of operating systems, and you can find Windows as well. So you choose whichever you want, and always ensure you choose an operating system which says 'free tier eligible,' which would mean that it falls under the free tier and will not be charged to you. Okay, so let's say I select Ubuntu. I'll click on select, and now it will ask me what size of server I want: how many CPUs do I want, how much memory do I want. There are a lot of options over here to choose from, but the only server that is free for you would be t2.micro, which has one CPU and 1 GB of RAM, which is enough for demo purposes when you just want to try out AWS. So you would select t2.micro; if you want to stay under the free tier and not be charged, you just select t2.micro. Next, you will click on next, and then it will ask you for all
the details that are over here. Do not worry about anything; just leave them blank and click on next. The next step will ask you for the hard drive storage: how much hard drive storage do you want? By default it's 8 GB when you're launching a Linux instance, so we leave it at the default, though you can even change this. Then click on next. You can add tags over here. What are tags? Tags are nothing but metadata for your instance. For example, I can add a tag that the name of the instance is something, then the department to which this instance belongs, and so on; all those values I can write over here, and they will serve as metadata for the instance. So when somebody is searching for all the EC2 instances of, let's say, the IT department, they just have to type in department equal to IT and it will list all those instances; for those kinds of searches, you have tags. Click on next, and then it will give you the option of configuring your security group. What is a security group? It's nothing but a firewall; it's a very simple firewall, guys. In this, what you have to specify are these rules that we can add over here; these are all inbound rules. Inbound means what kind of connections are allowed on this server. So there's an SSH connection which is allowed; you can select the protocol that you want to allow on this server. Right now, what has been allowed is the SSH protocol, so that we can log in to this server; if you select SSH, it will fill out all the details for protocol and port range by itself. Now, who do you want the SSH protocol to be usable by? If you want it to be accessible by anyone, you can just give 'Anywhere.' Why is this option helpful? This option is helpful when the IP address of your computer is not fixed: let's say you can access this instance from the office, but you also want to access it from home. If you want to access it from home, then you have to select 'Anywhere.' If your IP address is not changing, you can select 'My IP'; let's say this is my IP address right now, and then only I will be allowed to connect to this instance using the SSH protocol. But the general use case that everyone deals with is that they want to log in from anywhere, so I'll select 'Anywhere,' and it fills in the data by itself.
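The same 'Anywhere' versus 'My IP' choice can also be expressed in code; here is a boto3 sketch that adds an SSH inbound rule to an existing security group, with the group ID and IP addresses as hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",           # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,                        # SSH
        "ToPort": 22,
        # "Anywhere": any IPv4 address may connect.
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        # For the "My IP" option you would instead use something like
        # {"CidrIp": "203.0.113.25/32"} with your own address.
    }],
)
```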
Now, finally, I can just click on 'Review and Launch.' If you want to delete a rule, guys, since in this rule I did not define anything, I can just delete it from here, and now let me click on 'Review and Launch.' Okay, so now I can review all the settings that I've done. Once I feel everything is correct, I will just click on launch, and this is a very important step, guys. Now, to log in to any server which is out there on a remote system, imagine if somebody gets your IP address: could anybody access that server and make any change they want? It's obviously not like that; there's a security layer which AWS has added to the servers that it launches. It gives you a key pair. What is a key pair? A key pair is nothing but a key that you will have to use while connecting to the instance. As you can see, there's no key pair right now, so what I can do is create a new key pair. Let's name this key pair, let's say, test-intellipaat; let's say this is the name. I will click on 'Download Key Pair'; unless I create and download a key pair, the 'Launch Instances' button will not be active. This will download a PEM file for you, a file with the .pem extension, and this will be used to connect to our instance. Finally, when everything is set, let's click on 'Launch Instances.' Now you can see the message that your instances are now being launched, so I can just go to the instances page and see that there is an instance being launched; the instance state is 'pending,' which means it's still in the launch process. I can give this instance a name; let's call it test. And now it's in the running state, great. Now, once you've selected test, you can refer to all the details of test in the panel below: here is the IP address which will be used to connect to it, the instance type, which is t2.micro, the state, which is running, and then you have the security group. If you view the inbound rules, you can see the SSH rule that you added over here, and then there is the key pair name for this instance, which is test-intellipaat, and so on and so forth. Now that you have launched an instance, the next step is to connect to it. When you
want to connect to this instance, there's a piece of software called PuTTY that you can download. Here is the software that I have already downloaded; if you want to download it, just search on Google for 'download putty' and you will get this link. Go to this link. Now guys, there are two things that you have to download: one is PuTTY, which you can get by clicking on this link; it will take you to this page, and if you are on Windows, just select the PuTTY 64-bit installer, click on it, and it will download the PuTTY software for you. The other thing that you will have to download is PuTTYgen; this is also required, and I'll tell you why. So this is basically PuTTYgen, and the other software that you need is PuTTY. Once you have both pieces of software in place and installed, the next step is to connect to our instance. This software of mine is PuTTYgen; let me launch it for you. This is PuTTYgen, guys. The PEM file that you get, guys, if you have to use it with PuTTY, the way you can use it is by first converting this PEM file to PPK, because that is what your PuTTY software will accept. So first I'll have to load this file in PuTTYgen; let us do that. I click on load, and then I'll select the PEM file, which is this one. It says successfully imported, great. Now I want to save the private key. If I save the private key, it will ask, 'Are you sure you want to save this key without a passphrase?' Yes. And let's name this private key something; let's name it, say, test. As you can see, the format is now being changed to PPK. Great, let's save the file; the file has been saved. The next step is to launch your PuTTY software. This is my PuTTY software; the instance that I want to connect to is this, and this is the IP address that I'm going to connect to, so I'll mention it over here. Then I'll go to SSH, because I mentioned the IP address in the Session part; now I also have to provide the key using which I will connect. For providing the key, you have to go into SSH and then click on Auth; in Auth, I have to select the PPK file which I created, so this is the file test. Let's select that and click on open. Now it will give you a message that this server's host key is not cached in the registry; just click on yes on this message, and now you will see the screen which says 'login as.' Since I was launching an Ubuntu instance, what I have to enter over here is 'ubuntu.' I'll hit enter, and now it will verify the key that I provided, and I will be able to connect through PuTTY to the server that I created on AWS, using the key that I supplied. Now I am logged in to my server, so I can do anything over here; I can install any software that I want. So this is how you can connect to an EC2 instance which was launched with a Linux AMI, or Linux OS. For
connecting to Windows, guys, you do not get a PEM file; what you basically get is an RDP file along with a password. So you will be given the password, you will be given the username, and you will be given an RDP file. You select the RDP file, and then it will ask you for the password; just enter the password that was given to you by the AWS management console, click on connect, and then you will be able to launch your Windows instance. So, there are only two types of OS that you can launch on AWS: one is Linux and the second is Windows. I told you how to connect to Linux instances; let me also walk you through how to connect to a Windows machine. Alright guys, so now, in order to launch a Windows instance, again we'll just click on 'Launch Instance.' Let's select a Windows OS which is free tier eligible; let's select this one. t2.micro is the instance type; let's click on next, leave everything at default, and click on next. Here you can see that the default size is 30 GB, because Windows takes a lot of space, so it's 30 GB over here. Let's click on next and configure the security group. Here you can see that instead of SSH you have RDP, because for a Linux instance, since it's command-line based, you connect through SSH, but because Windows is a GUI-based OS, you have to connect through RDP. Alright, so we'll click on 'Review and Launch,' and now we will select the same key pair, which is test, and click on 'Launch Instances.' So our instance is now being launched, guys; we can just go here and see that it's in the pending state. Alright, so let's name this instance Windows.
Alright, and now, in order to connect to it, here is the IP address. Now just click on actions. Select the machine that you have to connect to; once it's launched, you'd be able to click on actions and then click on connect. When you click on connect, you will get all the options for how to connect to this particular server, so let's wait for this instance to be in the running state and then review. Okay, so the instance is now in the running state, guys. Now let's click on actions, click on connect, and this is basically the way to connect to it: you can download the RDP file by clicking over here. As you can see, I got the RDP file; the username is Administrator, and if I click on 'Get Password,' it says the password is not available yet, please wait at least four minutes after launching an instance. So the password will be available once the instance has been launched for four minutes, but this is the way you get the password. Now, if I click on the RDP file, it will directly give me this kind of window where it will say, 'Are you sure you want to connect to it?' Let's click on connect, and now it is asking for the password, so all you have to do now is wait for the password to be available over here. Once the instance is ready, you'll get the password; once you have the password, just put it over here, click on OK, and you should be able to connect to your Windows instance. So let's give it time; let's wait for this instance to reach the point where the password is available, then we'll just enter the password here, click on OK, and see how it goes. Alright guys, let's try now. I just click on actions, I click on 'Get Windows Password,' and I get this page. Now what I have to do is choose the key pair file, so the PEM file will work here; I'll just select test-intellipaat.pem. Remember, the PEM file will work and not the PPK. Let's click on open and now click on 'Decrypt Password.' So guys, this is the password for connecting to my instance; let's copy it, go to our instance, paste it over here, and click on OK. Now it says the identity of the remote computer cannot be verified; that's okay, just click on yes, and now you should be able to connect. So here you go, guys, here is a server launched on AWS for you. It's a fresh server; you can do anything that you want on it. You can install any kind of tool on it, you can make it a database server, a web server, you can make this server anything. So this is how you can connect to a Windows instance, and I've shown you how you can connect to a Linux instance as well; on that too, you can install any software, and it can become anything for you. Now, let's go ahead and come back to our slides. Alright guys, so I've shown you how you can connect to an EC2 instance: we got to know how to connect to a Linux instance and how to connect to a Windows instance. Now let's talk about the other services as well, so let me come back to my dashboard. So,
likely to realize ec2 is a infrastructure as a service where you
get access to the operating system right now there is a service called elastic
beanstalk and there's a service called lambda let's look at elastic beanstalk
how is the dashboard look for elastic Beanstalk right so as you can see it
says welcome to a double elastic beanstalk and it says just select the
platform upload an application and run it that's it you don't have to connect
through ssh to that instance in order to install the software and get your
application ready right so as you can see when I said get started it gives me
create a web app right so you can only create a web application here it not act
as your back-end server right it can only host applications let's give it a
application name let's say it's test ok and then let's choose a platform
platform what do I want that software or that web app to run so I can put my web
app in dotnet I can put my web app and go I can put my web app in PHP it's all
my choice let's click on PHP right and the application code let's have the
sample application first and now let's click on create application so these are
all the settings that you have to do guys nothing much right and now it will
start to create your elastic Beanstalk it will not give you an access to the
operating system remember this guys you will not
to get an access to the operating system all you will get is a dashboard on which
you can upload your website and it'll be hosted for you when I created the easy
to server I could not do anything on it I had to install the software then I had
to put my application on it and only can I be able to access it but in this case
everything is done automatically I can just upload my code in that's it alright
so let's wait for this to be ready and then we'll go forward all right guys so as you can see my
elastic beanstalk is now ready and now if who did this particular URL which
elastic beanstalk has given me you'll be able to see the web app right this is a
sample application which has been deployed I can click on upload and
deploy and just I have to choose a file click on deploy and that website will
get deployed automatically over here I will just go to this link refresh it and
my website will be visible over there so I guess now you know when I said that
you just get an access to a dashboard you are actually not getting an access
to the whole operating system right you do not have a control on that all right
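To make the "just upload your code" idea concrete, here is a minimal sketch of the kind of PHP file you could zip up and hand to Elastic Beanstalk through Upload and Deploy. The file name and message are placeholders for illustration, not part of the demo itself.

```php
<?php
// index.php - a minimal page that Elastic Beanstalk's PHP platform can serve.
// Zip this file and use "Upload and Deploy" on the environment's dashboard.
echo "Hello from Elastic Beanstalk! Served at " . date('Y-m-d H:i:s');
```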
All right, let us look at Lambda as well; let's see what happens if I click on Lambda. So when I click on Lambda, guys, this is the dashboard that you get, and as you can see it just says Run. If I run it, it says "hello world". You just have to enter the code here and it will give you the output — that's all Lambda is. It will not host your application; it just gives you the actual output, in the form of JSON or textual content. Now if I change something over here, let's make it "hello world 1 2 3"; if I run it, it says "hello world 1 2 3". If you want to create a function, just click on Create Function over here. You get Author from scratch, Use a blueprint, and Browse serverless app repository. In the serverless app repository you can check whether there is existing code you want to reuse from what has been built before. Under Use a blueprint you have a lot of blueprints which most companies use, so you don't have to write the whole code from scratch — just pick the blueprint you want. For example, there's a microservice HTTP endpoint blueprint: you just hit the API and it gives you the result; if that is the kind of Lambda function you want, you can use it. Then you have blueprints around Kinesis Firehose and other services — there are a lot of things over here, so don't get confused by all the jargon; you will be able to follow it once you have a hang of all the services in AWS, which we will teach you in the upcoming sessions. For now, just understand what Lambda does: you give it the code and it runs the code, that is it. It will not host an application. Elastic Beanstalk will host the application for you, and on EC2 you can do anything — you can even configure your EC2 server to basically behave like AWS Lambda, but you would have to manually install all the software, and only then would you be able to do that job.
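If you later want to call such a function from your own code rather than from the console's test button, the AWS SDK for PHP can invoke it directly. This is only a hedged sketch: it assumes the SDK is installed via Composer and that a function named helloWorld already exists in your account — both are assumptions, not part of the demo above.

```php
<?php
// Sketch: invoking an existing Lambda function ("helloWorld" is hypothetical).
require 'vendor/autoload.php';

use Aws\Lambda\LambdaClient;

$lambda = new LambdaClient(['version' => 'latest', 'region' => 'us-west-2']);

$result = $lambda->invoke([
    'FunctionName' => 'helloWorld',
    'Payload'      => json_encode(['name' => 'Intellipaat']),
]);

// The function's return value comes back as a JSON payload.
echo (string) $result->get('Payload');
```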
So I hope, guys, that you are now clear with the basic services of the compute domain of AWS, that is EC2, Elastic Beanstalk and Lambda. Auto Scaling and Load Balancing we will cover as we move along in the sessions, because those require you to know a little bit more. So let's move on, come back to our slides, and start with our next domain. All right, so our next domain is the storage domain in AWS; let's see what services we have in it. These are the important services in the storage domain of AWS: our first service is Amazon S3, then we have Amazon Glacier, Amazon EFS, and AWS Storage Gateway. Let's look at these services one by one and understand what they do.
So guys, S3 is an object storage service, which basically means that all the files uploaded to S3 are regarded as objects. For us as end users, objects don't differ much from files — you won't see any difference when you use or download the file just because it was a file before and is now an object. "Object" refers to how the file is stored at the back end; it's on the infrastructure side that it matters that each file in an S3 bucket is stored as an object. Now, why do we use S3 — why do we need a storage service at all? Because, like we discussed previously, we need distributed systems for an application: the more distributed it is, the more fault tolerant it becomes. When I say fault tolerant, I mean the application can tolerate faults in its nodes — whether it's the storage node, the back-end compute node or the database node — if any of these nodes fail, the application can tolerate that failure and still keep working. Okay, so Amazon S3 is a file storage service by AWS which says it will give you availability 99.99% of the time, which means there is only about a 0.01% probability that the service is going to fail — and it is actually not 99.9 but 99.99, four nines. That is the kind of availability AWS guarantees your objects will have, and obviously you can increase this SLA — SLA stands for service level agreement, i.e. what level of service is provided to you. You can further improve on it by adding redundancy yourself, by using techniques such as taking a backup of your bucket every 24 hours, so that if there is data corruption you can always get that data back from the backups and store it in the bucket again. But that is only for the rare case when there is a failure in the S3 service itself, and I've told you how small that probability is. That is the kind of service AWS provides, so you can rest assured that if you host your files on S3, they will be available pretty much all the time — that is what you have S3 for. Now, what are the common use cases for AWS S3? Think of it like this: if there is a website with a logo and certain images that have to be loaded every time the web page loads, all those images can be served from S3. So rather than storing all these files on the server on which the website itself runs, you can store all these images on S3 and just fetch them from there. S3 also gives you the facility of hosting static web files, so you can host a website using an S3 bucket: all you have to do is enable static hosting on that bucket, and you will be able to serve static websites from it. All right.
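As a rough illustration of that last point, enabling static website hosting on a bucket can also be done through the AWS SDK for PHP rather than the console. Treat this as a sketch only; the bucket name and document names are assumptions, and the SDK is assumed to be installed via Composer.

```php
<?php
// Sketch: turning on static website hosting for a bucket (names are placeholders).
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-west-2']);

$s3->putBucketWebsite([
    'Bucket' => 'test-intellipaat',
    'WebsiteConfiguration' => [
        'IndexDocument' => ['Suffix' => 'index.html'],
        'ErrorDocument' => ['Key' => 'error.html'],
    ],
]);

echo "Static hosting enabled\n";
```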
Our next service, guys, is AWS Glacier. AWS Glacier is an extension of the S3 service, with the difference that Glacier does not give you direct, instant access to your data; it is basically used to back up what you have in S3. Let's say you created a folder in S3 — folders in S3 are called buckets; the root folder in S3 is a bucket. If you have a bucket with a lot of files inside it and you want to take a backup of all those files, you can do that using the Glacier service in AWS. Now, the reason we have two separate services here is this: from S3 you can get objects instantaneously — the moment you use the link of the object, you can download it, you can access it. With Glacier, it takes time for an object to be retrieved; it sometimes takes an hour, sometimes two or three hours, to retrieve a file. So that is the kind of service Glacier is: it's a backup service, and it's also cheap. The main difference between S3 and Glacier is that if you keep a backup in Glacier it will be very cheap — roughly one tenth of the cost of the same amount of data stored on Amazon S3. The reason is that Glacier is strictly a backup service, and because it is low priced there are compromises in its performance: retrieving an object takes time, two or three hours, whereas with Amazon S3 it is instantaneous, and that is also why the price is higher on the S3 side. We'll talk more about pricing as we move along in the session, but right now it is important to understand the functionality of these services. So: we use AWS S3 for hosting our files, and those files can then be retrieved by whatever application we want — basically, anywhere on the internet, if you use that link you will be able to download the file. And if you want to take a backup of your AWS S3 data, you can use AWS Glacier, which will help you back up any buckets or files which are there in your S3. All right.
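One common way the two services are used together is an S3 lifecycle rule that moves older objects into the Glacier storage class automatically. The snippet below is only a sketch of that idea with the AWS SDK for PHP; the bucket name and the 30-day threshold are assumptions, not values from this session.

```php
<?php
// Sketch: archive objects older than 30 days to Glacier via a lifecycle rule.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-west-2']);

$s3->putBucketLifecycleConfiguration([
    'Bucket' => 'test-intellipaat',
    'LifecycleConfiguration' => [
        'Rules' => [[
            'ID'          => 'archive-old-objects',
            'Status'      => 'Enabled',
            'Filter'      => ['Prefix' => ''],   // apply the rule to every object
            'Transitions' => [[
                'Days'         => 30,
                'StorageClass' => 'GLACIER',
            ]],
        ]],
    ],
]);
```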
Guys, the other service that we're going to discuss is EFS. So what is EFS? The EFS service is again a storage service, but it's different from S3. How is it different? The EFS service can be mounted on your operating system as a volume — isn't that interesting? You can mount Amazon EFS as a volume on any computer on the AWS network. Let's say you launch a server on AWS and you feel that you need a network drive: if you have worked with network drives, EFS mimics exactly the usage of a network drive. It's also scalable, which means its size increases as and when you need it, and it can be attached to multiple computers, so it can serve as a shared drive — there could be tens or hundreds of computers which all have that same volume mounted, and that same volume would be EFS. How does that help? It helps when you have a scalable architecture where there are, say, seven or eight systems, and whatever changes one system makes have to be visible to the other systems as well. In those kinds of cases you use EFS: the servers share a common drive on which the data changes dynamically, and no matter which server changes it, the changes are available on all the other servers too. That is what EFS is, guys. EFS is an NFS-based file system, so it is typically mounted on Linux machines, and the way you use it, as I just told you, is as a shared drive — you use it wherever you want shared data between multiple servers working in one architecture. All right, so that is EFS for you guys. Then our next service is AWS Storage Gateway. It basically helps you connect an on-premises system to the AWS cloud infrastructure: if there is any storage application on your on-premise systems and you want it connected to the AWS infrastructure, you will be using the AWS Storage Gateway service. All right, so this pretty much covers the storage domain in AWS. Let us quickly jump onto our AWS management console so that I can show you a few of these services in action; let me jump on to my management console.
Alright guys, so here I am on my management console. The first service that we discussed under storage was S3, so let's click on the S3 link, and that will give you a UI which looks something like this. I have some buckets already configured, and you can create a new bucket — like I said, a bucket is nothing but the root folder where you put all your files. Let's say the bucket name is test-intellipaat, then the region I want to put this bucket in — let's say the Oregon region — and that's it. Let's click on Next, keep all versions of an object in the same bucket, leave everything at default, click on Next, leave everything at default again, and now let's just click on Create Bucket. All right, my bucket is now created. I can go inside this bucket and upload files over here. Let's try to upload a file: click on Upload, click on Add Files, go to Pictures — or Documents — and let's say I upload this particular file. Okay, so this is the file that I want to upload; I'll click on Next and it will upload it. Click on Next again, and here you see the storage classes you can choose: for frequently accessed data, keep it in S3 Standard; if you want to archive the data, put it in Glacier — all of that you can do here. Let's click on Next and click on Upload. So right now we have not changed any settings in S3; we are just uploading an object, and as you can see, my file is now uploaded over here. Now if I select this file, I can see its properties via this link, and here is an object URL that you get. If I click on this object URL it says Access Denied. Why Access Denied? Because first I have to make that object public, and for that I go into the properties, then to permissions, and then into public access. Right now it says you can't grant public access because the 'block public access' settings are turned on, so let's go and change those: I'll go to Amazon S3, click on the bucket, then click on Permissions, and there I edit 'Block all public access'. I've done that; let's save it, and to confirm these settings just type 'confirm', hit Confirm, and my settings are now done. Now let's go back to the overview — this is my object — and if I refresh, it still says Access Denied. So now I go to Permissions on the object and grant public access: I give 'read object' to Everyone and click Save. Now if I go back to the link and hit refresh, I can see the image over here. Anybody who has this link will be able to see this image, so you can also embed this link in your website and the image will load on your page, just like that. So this is what the link is all about. If you upload an object, guys, and you want to make it public, just go to the permissions of that particular object, make it public, and you're set. All right, so this is how you can serve files from S3.
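The same "make this object public" step can be scripted with the AWS SDK for PHP if you have many objects to flip. A small, hedged sketch follows; the bucket and key names are placeholders, and it assumes the bucket's block-public-access settings have already been relaxed as shown above.

```php
<?php
// Sketch: granting public read on one object (names are placeholders).
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-west-2']);

$s3->putObjectAcl([
    'Bucket' => 'test-intellipaat',
    'Key'    => 'my-image.jpg',
    'ACL'    => 'public-read',
]);
```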
Alright guys, so now let's start off with EFS. I'll click on the EFS service in my AWS management console and reach this page, where I have the option to create a file system. Let's click on that, and now it asks me for the VPC I want my EFS to be created in. Remember that the VPC you select here should be the same as that of the instances on which you want to mount the EFS volume. For now it's the default VPC shown here; let's click on Next Step, leave everything at default — don't touch anything else, guys — and create the file system. Alright, my file system is now being created. Now remember, guys, the security group attached to this EFS file system has to be attached to your EC2 instances as well. How do you check which security group this EFS file system is using? You can check it over here in this table; right now it is in the 'creating' state, so once it is available you will see the security groups listed here. Okay, since my EFS is going to take some time to set up, let me show you the instances that I've launched. I have two instances, both Ubuntu, and I'm going to mount the same EFS volume that I created onto both of these instances. Now, how do we do that? First, let's connect to our first Ubuntu instance: this is its IP address, and I have already connected to it — this is the private IP address of the Ubuntu instance, and it matches what we see in the terminal, so this session is on the right machine. Let's close the extra terminal and now connect to our second Ubuntu instance as well: this is its IP address, let's copy it, launch a new PuTTY console, paste the IP address, select the PPK file, and click on Open. For clarity, let us change the colors of this terminal — let's make it orange — so that we can differentiate between the first instance and the second. As you can see, the IP address in this terminal matches the second instance, so we now have both servers open in PuTTY, and what we will be doing is connecting this EFS mount point to both of my Ubuntu instances. This is the security group of the EFS file system; now we have to make sure that both my instances have that security group associated with them. So let's check the security group of my Ubuntu instance: you go to Actions, then Networking, and click on Change Security Groups. As you can see, only the launch-wizard-1 group is attached to it; the security group I have to attach is the one used by my EFS file system, so let's select both of them and click on Assign Security Groups. My first Ubuntu instance is now attached to the security group of my EFS; let's do the same for the second instance as well and select that default security group. Great, so now both my instances are attached to the security group of my EFS.
instances are connected to the security group of my EFS right now what I will be
doing is I will be following a set of instructions that you will find on this
console as well right so the first thing that you have to do is Arnab unto
instance you will have to install this package let's do that so this is my
first instance let me copy the command and it is already installed great let's
do the same on our second instance as well so let's first update the machine
sudo apt-get update alright once it's updated the next step would be to run
that command now let's run that command and my NFS common package will install
over you right now what I can do is I can create a directory on my first
instance let's the directory be EFS best ok great and let's create a directory
here as well let's name it as EFS test 2 ok now what I'll be doing is I'll be
mounting my EFS volume so for mounting it just copy this command go back to
your server paste the command and put the directory name so in case of my
first instance the directory name is EFS test let's hit enter great
so EFS test directory is now connected to my EFS volume will verify whether
that is working or not right let's similarly copy the command over here as
well and this would be EFS there's two this is the directory name good so even
this instance is now connected to EFS let's go into EFS test to great now as
you can see if I do i LS over here there is no file right similarly if I do
wireless over here there's no file let's create a one dot txt file
let's put sudo great if I do an LS you can see there is a one dot txt file over
here if I do LS over here you can see there's a one dot txt file here as well
that means this is a shared volume correct if I create let's say 105 over here Purdue analyst over here I can
see the two-door TX is also available similarly I can create a file from here
as well and if I do any less over here I can see that the 3 dot txt file is also
present so the guys this is how EFS works it acts as a shared Drive between
multiple instances in AWS all right so let's come back to our slides guys now
So guys, we have successfully discussed what S3 is, what Glacier is, what EFS is, and what Storage Gateway is. Our next set of services belongs to the database domain, so let's go ahead and understand them. The database domain comprises these database services in AWS: the first service is Amazon RDS, then we have Amazon DynamoDB, Amazon Redshift, and finally ElastiCache. Let's understand these services one by one, starting with Amazon RDS. Guys, Amazon RDS is nothing but the Relational Database Service — it's not a database, it's a database service. What do I mean by that? Under the RDS service of AWS you can launch a number of database engines: Microsoft SQL Server, MySQL, Oracle, PostgreSQL, MariaDB, Amazon Aurora — you can launch all of these databases. So what is RDS for? RDS basically manages these databases. How does it manage them? It makes sure automated snapshots of these databases are taken, corresponding to particular points in time; it ensures that if read replicas are required, or if any replication is needed in your database, that is also taken care of by RDS; and third, it takes care of any security patch that has to be applied to your database, if you enable automatic updates. So this is how RDS works, guys. Again, let me emphasize the point that RDS is not a database; it's a relational database service in which you can launch all these relational databases. All right, so I hope RDS is clear to you. The next service is Amazon DynamoDB. What is Amazon DynamoDB? It's basically a NoSQL database by Amazon. And what is a NoSQL database? Whenever you have to store unstructured data — data which does not follow a fixed format — you use a NoSQL database like DynamoDB. As alternatives you might have seen or heard about MongoDB or other NoSQL databases; Amazon DynamoDB is Amazon's own NoSQL database. So there is no separate engine that it supports — it itself is the database, unlike RDS — and in it you can store unstructured data.
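To give a feel for how schema-less that is in practice, here is a hedged sketch of writing one item to a DynamoDB table with the AWS SDK for PHP; the table name and the attributes are invented for illustration and are not part of this session.

```php
<?php
// Sketch: putting one item into a DynamoDB table ("customers" is hypothetical).
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$dynamo = new DynamoDbClient(['version' => 'latest', 'region' => 'us-west-2']);

$dynamo->putItem([
    'TableName' => 'customers',
    'Item' => [
        'customerId' => ['S' => 'C-1001'],
        'name'       => ['S' => 'Hemant'],
        'orders'     => ['N' => '3'],
        // Any other attributes can be added per item - there is no fixed schema.
    ],
]);
```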
Our third service, guys, is Amazon Redshift. Amazon Redshift is a data warehouse service. What it basically does is this: under the data warehouse you have multiple databases, and those databases can be queried through the warehouse, so it looks as if all the databases combine to form one database where all the data exists — but it is actually not like that; Amazon Redshift connects to multiple database engines and gives you the output as required. The next service is Amazon ElastiCache. Amazon ElastiCache is basically a service which acts as a cache. So what is a cache? A cache is a layer between the client and the web server — or whichever server the information is being requested from. Imagine you want to get the data of all employees whose salary is greater than 10,000, along with the current cities those employees are staying in, and you run this query time and again. What happens is your server is hitting the database again and again with the same query, saying "this is the data that I want". When you run the same query repeatedly, it does not make sense to make the database do the computing work every time and then fetch the results. So what ElastiCache does is this: whenever it sees that there is frequently accessed data, it stores that data in the cache, which means that whenever a similar request comes in, rather than querying the database, the same data is returned to the customer from the cache layer itself. So it decreases the load on the database and, at the same time, increases the performance of your application. That is what ElastiCache is all about, guys.
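The usual pattern that describes — check the cache first, fall back to the database, then populate the cache — is often called cache-aside. Below is only a rough sketch of it in PHP, assuming the phpredis extension, a hypothetical ElastiCache Redis endpoint, and made-up database connection details; none of these come from the session itself.

```php
<?php
// Sketch: cache-aside lookup. Endpoint, query and connection details are assumed.
$redis = new Redis();
$redis->connect('my-cache.abc123.0001.usw2.cache.amazonaws.com', 6379); // placeholder endpoint

$cacheKey = 'cities:salary-gt-10000';
$cities   = $redis->get($cacheKey);

if ($cities === false) {
    // Cache miss: ask the database once, then keep the answer for 5 minutes.
    $mysqli = new mysqli('db-host', 'user', 'password', 'employees'); // placeholders
    $result = $mysqli->query('SELECT DISTINCT city FROM employees WHERE salary > 10000');
    $rows   = $result->fetch_all(MYSQLI_ASSOC);

    $cities = json_encode($rows);
    $redis->setex($cacheKey, 300, $cities);
}

echo $cities;
```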
All right, so our next domain is the security domain. In the security domain, these are the services that we have in AWS: the first service is AWS IAM, and the next is AWS KMS. Let's check out both of these services and understand what they do. So guys, IAM is basically used to authenticate users to your AWS account. The account that you created on AWS is the root account for that AWS account. Now, big companies — companies like Netflix or Airbnb — own only one AWS account, and what they do in that account is create multiple users with restricted permissions. Each user has their own user ID and password, but they are all logging into the same AWS account, and that is possible using AWS IAM. So you can create multiple users for a single AWS account with granular permissions, such as which actions they can perform on the AWS management console. You can also restrict them to particular services: for example, a user can access only S3, or only EC2; or can only read EC2 but cannot stop and start an instance; or can start and stop an instance but cannot create a new one; or you can put a restriction that none of the users you add can terminate an instance. The account that you signed up with is the root account, and the root account always has all the privileges — it can do anything. But if you have to give restricted access to a particular person, you create a user account in IAM. That is one type of account you can create in IAM. The second type is an application account. What is an application account? Let's say I have a website which can upload data to S3 — how do I authenticate my website so it can upload to S3? For that we have the AWS IAM service, using which I can create application credentials as well. What you get in that case is an access key and a secret access key. That access key and secret access key have to be embedded in your program, and only then is your program authenticated to upload data to the S3 service of your AWS account; otherwise it cannot. So these are the reasons you use IAM: it helps you put restricted access on user accounts as well as application accounts.
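For the application-account case, the access key and secret access key typically end up in the SDK client's configuration. Here is a minimal, hedged sketch; the environment-variable names are just one common convention, and in real projects you would keep the keys out of source code entirely.

```php
<?php
// Sketch: authenticating an SDK client with an IAM user's programmatic keys.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-west-2',
    'credentials' => [
        'key'    => getenv('AWS_ACCESS_KEY_ID'),     // placeholder access key
        'secret' => getenv('AWS_SECRET_ACCESS_KEY'), // placeholder secret key
    ],
]);

// Any call made with this client is now authenticated as that IAM user.
var_dump($s3->listBuckets()['Buckets']);
```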
All right, moving forward guys, the next service is KMS. KMS means Key Management Service, and it is used to create and control cryptographic keys in AWS — the keys used to encrypt and decrypt your data across AWS services. (Strictly speaking, the key pairs we downloaded while creating EC2 instances are SSH login key pairs managed from the EC2 console; KMS is about encryption keys for your data.) So whenever you want keys to be created and managed centrally, you head on to the KMS service, create your keys over there, and use them with whichever service you want. All right, so guys, this was the security domain. Now let's move on to our next domain, the management services of AWS. These are the services included in this domain: the first service is AWS CloudFormation, then we have AWS OpsWorks, then AWS CloudTrail, and in the end CloudWatch.
Now, what is AWS CloudFormation, guys? AWS CloudFormation is basically used to templatize an AWS infrastructure. Let's say I have launched two EC2 instances behind a load balancer, in an Auto Scaling group, connected to an RDS instance, which in turn is also connected to my EFS. Normally I would have to launch all of these things one by one: if I know what the architecture is, I launch everything individually and then my architecture is ready. But with CloudFormation, I can specify everything in a JSON file — all the resources I want to launch, everything I want to configure in the network — and just run it through CloudFormation. CloudFormation will then create that whole architecture according to my JSON file, so I don't have to stress about creating my architecture piece by piece through the management console or the CLI; I can do it directly by writing a JSON file and passing it to CloudFormation. This also helps when we want to replicate our architecture across multiple regions: say I have an architecture in one region and I want to replicate it in several others — in that case too, CloudFormation helps a lot. So it's an automation tool which helps us launch AWS resources by specifying them in a JSON file.
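To make the "architecture in a JSON file" idea concrete, here is a hedged sketch that passes a tiny template to CloudFormation from PHP using the AWS SDK. The stack name, AMI ID and template are all invented for illustration and are far simpler than a real multi-tier architecture.

```php
<?php
// Sketch: creating a CloudFormation stack whose template is an inline JSON string.
require 'vendor/autoload.php';

use Aws\CloudFormation\CloudFormationClient;

$cfn = new CloudFormationClient(['version' => 'latest', 'region' => 'us-west-2']);

// A tiny template that launches a single EC2 instance.
$template = json_encode([
    'AWSTemplateFormatVersion' => '2010-09-09',
    'Resources' => [
        'WebServer' => [
            'Type' => 'AWS::EC2::Instance',
            'Properties' => [
                'ImageId'      => 'ami-0123456789abcdef0', // hypothetical AMI ID
                'InstanceType' => 't2.micro',
            ],
        ],
    ],
]);

$cfn->createStack([
    'StackName'    => 'demo-stack',
    'TemplateBody' => $template,
]);
```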
Our next service is AWS OpsWorks. It's a little similar to CloudFormation because this also deals with automation, but it is basically a configuration management tool. If you are aware of DevOps, there's a configuration management tool called Chef; Chef recipes are readily accepted by AWS OpsWorks. In OpsWorks there are multiple layers that you configure, and together all these layers form a stack. For example, in the first layer I specify the EC2 instances that I want to automate, and the second layer could specify all the software that I want configured on those EC2 instances. That is how OpsWorks is helpful, guys. A configuration management tool is simply something that can configure all the software requirements on a particular set of servers at the same time. If I have to install, say, MySQL on a hundred servers, how would I do that? It's a very daunting task — I'd have to go to each server and install MySQL. OpsWorks makes it easy, and it does it reliably: every server ends up with the same configuration that is specified in OpsWorks. Now, don't get confused between CloudFormation and OpsWorks, guys: CloudFormation is used to deploy an architecture; OpsWorks is used to keep that architecture consistent with respect to the software we install on it. And it's not just a one-time deployment either. Say tomorrow your database link and password change, and you have some 200 servers in your fleet — how do you roll that out? That is possible using OpsWorks: for the layer where you have specified the link and the password, just change that value and update or redeploy the OpsWorks stack, and it will update all the servers with that one small change specified in one of the layers. So for all these small changes which are very important and have to be the same across all the servers, I use OpsWorks.
Our next service, guys, is AWS CloudTrail. AWS CloudTrail is basically a logging service which records everything happening in your architecture. That logging is not enabled by default for some services; you can enable it by specifying that AWS CloudTrail should log each and every action, and that is exactly what it does: it logs every action and every event that happens inside a particular AWS resource once you attach AWS CloudTrail to it. That log data you can then use for further monitoring, for example by connecting it to a BI service which can visualize your log data, and so on. All right, so that is what AWS CloudTrail is all about.
Then our next service is AWS CloudWatch. What is CloudWatch? CloudWatch is again a monitoring service, but a slightly different kind. What you can do with CloudWatch is set up alarms. For example, say I want an alarm whenever one of my servers goes into an unhealthy state — how would I do that? One option is to hire employees who constantly check whether my servers are healthy or not; or, with far less effort, I can configure CloudWatch to monitor all my resources, and whenever a resource goes into an unhealthy state it will trigger an alarm. What kind of alarm can it trigger? It can email you, or it can trigger a next set of actions: it can trigger the creation of an EC2 instance, it can trigger an AWS Lambda function, or it can trigger something else entirely. So this is what CloudWatch is all about: it watches all your resources, and on that basis it can kick off whatever follow-up process you define in CloudWatch. Okay.
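To give a concrete flavour of "set up an alarm", here is a hedged sketch that creates a CPU alarm on one instance with the AWS SDK for PHP. The instance ID and the SNS topic that gets notified are placeholders; this is not part of the walkthrough above.

```php
<?php
// Sketch: alarm when an instance's average CPU stays above 80% for 5 minutes.
require 'vendor/autoload.php';

use Aws\CloudWatch\CloudWatchClient;

$cw = new CloudWatchClient(['version' => 'latest', 'region' => 'us-west-2']);

$cw->putMetricAlarm([
    'AlarmName'          => 'high-cpu-web-server',
    'Namespace'          => 'AWS/EC2',
    'MetricName'         => 'CPUUtilization',
    'Dimensions'         => [['Name' => 'InstanceId', 'Value' => 'i-0123456789abcdef0']],
    'Statistic'          => 'Average',
    'Period'             => 300,
    'EvaluationPeriods'  => 1,
    'Threshold'          => 80,
    'ComparisonOperator' => 'GreaterThanThreshold',
    'AlarmActions'       => ['arn:aws:sns:us-west-2:123456789012:ops-alerts'], // placeholder topic
]);
```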
Moving forward, guys, our next domain is the customer engagement domain. In this domain we have the following services: the first is Amazon Connect, and then we have the Simple Email Service. Let's look at what these two services do and why they are so useful. The first service is Amazon Connect, guys. Amazon Connect is nothing but a full-blown customer contact center for your company. For example, you will have seen that whenever you purchase a product there's always a customer helpline you can call: you get IVR options, you choose one, and then you get connected to a human agent to talk your way through or to register your grievances with a customer service agent. Now, if you want to set up something like that for your company, it is very simple to do with Amazon Connect — you can build a customer contact center in less than five minutes. All you have to do is go to the Amazon Connect service, click on Get Started, and it will allot you a toll-free or a normal phone number based on what you choose. After that you just have to add the agents you want on the other side, so that whenever people call that toll-free or normal contact number, they are routed to an agent's screen. And this all happens over the internet, so there is no need to purchase carrier plans or anything like that. So this is what Amazon Connect is all about, guys. The next service is the Simple Email Service, and this also plays a vital role in customer engagement. You will have seen that you get marketing emails from companies — for example the food companies you order from, or a store where you gave your phone number or email in their contact list: pizza delivery or grocery stores will email or SMS you one way or the other. This service is for when you want that kind of email interactivity with your customers: you can send bulk emails, and you can also set up the Simple Email Service to respond to particular reply emails. That is what SES is all about, and it can also be configured to route emails. For example, if there is an email address that you set up for a company — say support@intellipaat.com is our email address — then when you email that particular address, you get routed to our support agents, who will help you in solving your queries. All of that can be set up in Amazon SES, guys. So this is it for the customer engagement services.
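Here is a hedged sketch of sending one email through SES with the AWS SDK for PHP; the sender and recipient addresses are placeholders, and in a real account the sender identity would first have to be verified in SES.

```php
<?php
// Sketch: sending a single email via Amazon SES (addresses are placeholders).
require 'vendor/autoload.php';

use Aws\Ses\SesClient;

$ses = new SesClient(['version' => 'latest', 'region' => 'us-west-2']);

$ses->sendEmail([
    'Source'      => 'support@example.com',
    'Destination' => ['ToAddresses' => ['customer@example.com']],
    'Message'     => [
        'Subject' => ['Data' => 'Your order has shipped'],
        'Body'    => ['Text' => ['Data' => 'Thanks for shopping with us!']],
    ],
]);
```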
Our next domain talks about app integration. In this domain we basically have services which help you integrate two or three AWS services together; let's look at what it has to offer. There are two services, guys: one is called the Simple Notification Service and the other is the Simple Queue Service. Let's see what these services are. The first, Amazon Simple Notification Service, basically helps you send notifications to other AWS services on the occurrence of an event. It waits for a trigger to happen, and based on that trigger it sends a notification to the corresponding AWS service which has to do the next piece of work. For example, you can set up a flow where your website sends emails: let's say whenever a customer purchases something from your website, you want an email to go out to that customer with all the details. In a distributed environment, the moment there is a trigger — say a payment received from a particular customer — your Lambda function needs to get invoked. Now, that can be triggered in several ways. One way is for your service to trigger the Lambda function directly, but that is only possible for some AWS services. The other way is to send a notification to SNS: SNS will detect the type of notification received, and it will have a mapped-out route as to which service it has to notify next. In this case it receives the notification, sees what type it is, and invokes the Lambda function, which then sends out the email to your customer. So this is how the SNS service works. As you can see in the diagram here as well, you have a publisher — the publisher is the one who sends out the notification — and the way you filter different kinds of notifications in SNS is that you define topics. Based on the topic, the messages are filtered; within the topics you define which services to trigger, and the services that get triggered are called subscribers. So guys, this is how the SNS service actually works.
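In code, "publishing to a topic" looks roughly like the sketch below, using the AWS SDK for PHP. The topic ARN is a placeholder; the subscribers attached to that topic (email, SQS, Lambda and so on) are what actually receive the message.

```php
<?php
// Sketch: publishing a message to an SNS topic (the topic ARN is a placeholder).
require 'vendor/autoload.php';

use Aws\Sns\SnsClient;

$sns = new SnsClient(['version' => 'latest', 'region' => 'us-west-2']);

$sns->publish([
    'TopicArn' => 'arn:aws:sns:us-west-2:123456789012:order-events',
    'Subject'  => 'Payment received',
    'Message'  => json_encode(['orderId' => 42, 'amount' => 19.99]),
]);
```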
Moving forward, now let's look at the Simple Queue Service. What is the Simple Queue Service? It's basically a queue — a place where you can store all your jobs — for whenever you have a stateless kind of architecture. What is a stateless architecture? Say you have a system which doesn't have its own memory; the prime example of this would be AWS Lambda. AWS Lambda does not know what is happening in your application; all it knows is the job it has to do. For example, if the job of the Lambda function is just to send an email, it will not know whether it has already sent an email to a particular customer or not. What it does is simply pick up jobs from the queue that you have, and based on that it performs the job. That is exactly why you have the Simple Queue Service: so that it can feed Lambda the next job, without Lambda having to remember what it has to do. So guys, this is what the SQS service is.
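A hedged sketch of that producer/worker idea with the AWS SDK for PHP follows — one side pushes a job onto the queue, the other side pulls whatever is next. The queue URL and message fields are placeholders.

```php
<?php
// Sketch: sending a job to SQS and reading it back (queue URL is a placeholder).
require 'vendor/autoload.php';

use Aws\Sqs\SqsClient;

$sqs      = new SqsClient(['version' => 'latest', 'region' => 'us-west-2']);
$queueUrl = 'https://sqs.us-west-2.amazonaws.com/123456789012/email-jobs';

// Producer: the website drops a job onto the queue after a purchase.
$sqs->sendMessage([
    'QueueUrl'    => $queueUrl,
    'MessageBody' => json_encode(['action' => 'send-email', 'customerId' => 'C-1001']),
]);

// Worker (e.g. Lambda): pick up the next job, process it, then delete it.
$messages = $sqs->receiveMessage(['QueueUrl' => $queueUrl, 'MaxNumberOfMessages' => 1]);
foreach ((array) $messages->get('Messages') as $msg) {
    // ... send the email here ...
    $sqs->deleteMessage(['QueueUrl' => $queueUrl, 'ReceiptHandle' => $msg['ReceiptHandle']]);
}
```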
Now, this was all about the different AWS services that we have and that you need to know in order to get started with AWS. So far, I think, we have covered almost all the use cases you can encounter in an organization, and basically your job would be that, based on the problem, you have to suggest an AWS service — and you will also have to know the implementation details of that AWS service. So based on the knowledge I've just given you about what each service does, you can now decide what an architecture should have in order to get a job done. All right, moving forward guys, let's now talk about a very important topic, which is AWS pricing. We know about all the services that we're going to use, but what we do, and how we use these services, totally depends on the pricing of these services, correct? So let's move forward and understand how the pricing model works in AWS: if I am using a set of services, how and how much will I be charged? So guys, the AWS pricing options are among these three. The first option is the pay-as-you-go model, which means that whatever amount of time you use an instance or a server for, that amount of time is what gets billed to you. Whenever you launch a server, you get a per-hour charge for that particular service; you can see the service, see what the charges are, and accordingly you are billed when you terminate that instance or when your monthly billing cycle ends. So the first model is pay-as-you-go, which is the most widely used.
The second model is 'save when you reserve'. What do we mean by that? Let's say you're launching a website today, that website is for your company, and you foresee that you will be running it for at least the next three years — it's just a startup, you might not see much growth, but you will sustain it for three years. Say that's the scenario. What you can do in that case is opt for dedicated or reserved instances: you tell AWS that you are going to use this instance for three years from now and you are not backing out. Then AWS gives you a counteroffer — a discounted price — the reason being that it is no longer an on-demand service; it's a commitment you've made to AWS to use the instance for three years. You have two options to get this kind of deal. One, you can make a full upfront payment for the three years: you pay the whole discounted price they offer you and use your instance — then you're locked in. Or you can make a partial upfront payment, if that eases the financial stress on you — for example, if you do not have that kind of money to pay for three years upfront, you can split your payments into installments and pay them as you go. With reserved instances you can get discounts of up to around seventy percent compared with the pay-as-you-go pricing, so guys, that's very cheap. So if you have an application where you know the server you're going to use will be around for, say, two or three years, it's better to go for reserved instances, where you commit to a server and get huge discounts for using it. The third kind of pricing is 'pay less by using more'. What this means is that the more you use, the less you pay per unit: for example, instance pricing is on a per-hour basis, and the more you use, the lower the effective hourly rates become. That's also a nice feature of AWS pricing, which says pay less by using more. So these were the pricing options in AWS.
There's one more pricing option that you get in AWS, which is called spot pricing. What is spot pricing, or what are spot instances? Spot instances are basically idle instances that AWS is running, which it offers to you at a cheaper price. For example, it's 2:00 p.m. in the afternoon and the load is low at this particular time, so AWS will offer you some instances at a lowered rate, because they are just sitting idle, and if you want to use them, you can. What happens in that case is that you bid for the instance: if you want that instance, you bid an amount for it. Obviously the bid amount will be lower than the actual on-demand rate, but the higher the bid, the more likely the instance goes to that person. Now, there is a catch: if somebody bids higher than you have, your instance will be stopped immediately and given to the person who made the higher bid. So that's the catch here, but spot instances can be particularly helpful when you are dealing with workloads which are not that critical but which you have to get done anyway. In those kinds of scenarios you can take up spot instances and bid an amount you feel comfortable with; if the price goes up in the future your instance will be stopped, but at least you're getting your work done at a cheaper rate. So that's the ideology behind spot instances, and I guess it's now clear to all of you what AWS pricing options you have.
Now let me tell you about a very exciting part of AWS pricing: the free tier. The free tier is basically a one-time offer that you get when you sign up. Whenever you sign up on AWS, if you are using a t2.micro instance — which has 1 GB of RAM and 1 vCPU of compute — it will be totally free of cost for you. What you get is 750 hours of usage in a month, so you can launch, for instance, five instances, and all of them together can collectively run for 750 hours (five instances running 150 hours each, for example). The moment you cross 750 hours you'll be charged the normal price, but up to 750 hours of server usage you will not be charged a penny. That is what the free tier is all about. It is particularly helpful for people who are trying out AWS, or people like us who are learning AWS for future careers. So I request all of you guys: whenever you're practicing on AWS, always stay within the free tier, because then it is literally not going to cost you anything. Now, the 750 hours, I reiterate, are particular to EC2 and RDS; apart from that you get some other free tier limits as well. For example, in S3, if you store up to 5 GB of data you will not be charged anything. Then in DynamoDB, if the instance you are running is under the free tier and you want to store something in DynamoDB, up to 25 GB it is absolutely free. So these are the kinds of perks you get when you're using AWS for the first time. For more details on the AWS free tier, you can just visit the official aws.amazon.com website, and they'll give you all the details; there are a lot of other services as well for which they offer free usage or free limits. For example Amazon Connect, the service that we discussed, which is basically a one-stop customer support center setup — there you get the first 90 minutes of calling in a month for free, and the way you get charged for that particular service is not on the number of hours you keep the service running, but on the number of minutes a customer is actually speaking to an agent. That, I think, is pretty cool about Amazon Connect.
All right, moving forward guys, I think we have covered enough theory; now let's go ahead and do a hands-on, where I will show you how to set up your AWS services and how to migrate an application from your local computer onto AWS. So let's start off with our hands-on. Guys, what I've basically done is create a website through which we can upload data to S3. This is how the architecture looks: my website can upload data to AWS S3, and that record is also saved in a MySQL database. As of now, this MySQL database is on localhost, the website is also on localhost, and right now my website cannot connect to S3 because it is not able to authenticate itself. So the first step is to authenticate our website to S3 so that it can upload data; once we have done that, we will migrate this website onto the AWS infrastructure. All right, so without any further ado, guys, let me first show you how my website looks; let me jump on to my browser. My website basically lives at localhost/new, and this is how it looks. The first thing that I have to do is check whether it can connect to a database, because whatever I upload should be viewable over here as a list — but right now it cannot connect to the database. So I'll open up MySQL on my localhost — here it is — and now I'll create a database called 'images', because that is what I have configured in my code. Then let me create a table with the name 'names', with one field called 'name' of type VARCHAR, and let's give it a pretty big length so that any length of characters can fit in this column. All right, it says 'no database selected' — oops, sorry for that — so: USE images, and now let's create the table. Now when I do a refresh over here, the page is able to connect, but it shows an empty list because there is nothing inside my table yet; anything will only become visible here once an entry is made in the database.
The second thing is, right now if I try to upload anything — let me go to Pictures, let's say this is the image that I upload, I click on Open and then on Submit — my file will not be uploaded. The reason is that it will say the authorization header is malformed, which basically means authentication has not yet been set up for my account. So how can I give authentication to my website so that it can upload to S3? For that I'll head on to my AWS management console, and as we have learned, there is a service called IAM. I go into the IAM service, and over here what I'll do is create a user. Let's say the user name is web-demo, and what I have to give this user is programmatic access, so that through code this user can access services on AWS. Now, the only service I want my website to access is S3, so let us search for S3 over here, and as you can see there's a permission called AmazonS3FullAccess; let's attach this permission to this particular user, review it, and finally create the user. Once we have created the user, guys, I get the access key ID and the secret access key. This is very important for my application to be authenticated, so I copy the access key ID, go to my editor and paste it into a new file; this is my access key ID, guys, and this is what my secret access key looks like. So this is my access key, this is my secret access key, and these will be used to connect my website to AWS S3.
Okay, now let me show you how my index file and the rest of my code look, guys. This is the code which I use to upload files onto S3; as you can see, the key and the secret key are not filled in as of now, so let us fill in the key first — the key is this, let's enter it over here — and the secret key is this. Once I enter the access key and the secret access key, my website will be able to authenticate itself to S3, and it should be able to upload objects into a bucket. Which bucket are we talking about? Let me quickly show you: there's a bucket that I've just created on S3, and its name is test-intellipaat. As you can see, there are no objects in this bucket as of now. Now let me refresh the website, and it now says 'new record created successfully' — that is, the image that I chose earlier should now be uploaded over here. So if I do a refresh, I can see there is one image that has been uploaded. Let me upload one more image for the sake of understanding: let's upload this particular image and click on Submit. What happens is that the moment it picks up a file, it changes the name of the file into a random name and then uploads it over here, so if I refresh you can see there's one more image which has just been uploaded. Now I can go back to my website and click on Check List. This gives me a list of the files which have been uploaded to my S3; if I click on an entry in this list, I can download the file from S3 — and if I click on it, you can see this is the file I uploaded. Similarly, if I click over here, this is the other file that I uploaded.
Similarly, let me upload one more file so that it's clear for everyone. Let's not take an image this time; let's try to put up something else — say this test.jar that I can upload. Let's click on Open and click on Submit; this is the file, I click on Submit, and now if I do a refresh over here I should have that file. Okay, that file might be a little larger in size, which is why it's not uploading, so instead let me take this particular image and submit it. As you can see, a new record has been created successfully; if I click on Check List there is a new image which has been added, and if I click on this image I can clearly see that this is the image I uploaded. Similarly, let us try some other file as well: let's choose this Excel file, upload it and click on Submit. When I do that, a new record is created — great. If I check the list, this is the Excel file that has just been uploaded; if I click here the Excel file is downloaded, and if I click Open, the file opens. Great, guys, so I think our website is working fine. But the problem is that this website lives on my localhost, and right now it is feeding data into my local MySQL instance: if I do a SELECT * FROM names — 'names' is the name of the table — you can see all the values that are in the table, and these are the same values you can see over here on the website. So the first thing we should do is deploy a database on AWS to which my website will connect. For that I will head on to RDS, and since the database I'm using is MySQL, let us deploy a MySQL database on AWS.
So for that I will click on Create Database, and the type of database that I want is MySQL — let's select that and click on Next. Next, I want a dev/test environment, because this is just a POC, so I've selected this and I'll click on Next. Then I can select the MySQL version; let's leave it at default for now. I want to enable only the options which fall under free tier usage, so let's select that option, and everything gets filled in automatically. Let's identify our DB with a name — let's say it's web-demo — then the master username — let's say it is hemant — specify the master password as well, and finally click on Next. Now it asks which VPC I want to put my instance into; I have the default VPC where my instances are launched, and that's fine. The second thing is public accessibility: do I want the internet to be able to access my RDS instance? Yes, I want public accessibility to be enabled. If I select No, then no public IP will be assigned to my RDS instance, and only instances launched in the same VPC in which my RDS instance resides would be able to access it. When I say VPC, that is a Virtual Private Cloud — basically a virtual network. So if I do not enable public accessibility for my RDS, it will only be reachable from machines residing in that particular network, not from the internet. But because our website right now is on my localhost, I need to be able to upload data from localhost onto the database in AWS, and for that I need public accessibility. Do I want to create a new security group? No, let us select the default security group. What should the database name be? The same one that I've given for my local MySQL instance, which is 'images'. The rest you can just leave at default. Backups — I don't want any, so let's select zero — disable monitoring, we don't want that, and we don't want any automatic upgrades made to my database. I think that's it; now let's finally click on Create Database.
Now, the database instance takes around three or four minutes to create. Meanwhile, while this database is being created, let me tell you the next step that we have to do. Since this website lives on localhost, I want it to run on AWS — I want this website to be uploaded to AWS — and for that let us use Elastic Beanstalk, which is the platform-as-a-service offering on AWS. So let's open the AWS management console and head on to Elastic Beanstalk. In Elastic Beanstalk you can basically upload your website — I'll show you how — and you don't have to configure anything on the instance: every piece of software will be configured by Elastic Beanstalk itself. As you can see, when you reach this page you just click on Get Started, and it asks you for the application name — let's say the application name is web-demo. What platform is my website based on? It's based on PHP. Do I want a sample application deployed first? Yes, I do, so I'll just click on Create Application now. So guys, this will create a web app for me in Elastic Beanstalk. We have done this earlier as well; we're doing it once more so that we can upload our own website onto this particular Elastic Beanstalk application. Okay, it will again take, guys, three to four minutes for Elastic Beanstalk to get deployed; meanwhile, let's check whether our RDS is ready. I'll just head on to RDS — yes, I can see this is the instance which is running, but it is still in the creation phase. Once the creation phase is over, you will get an endpoint over here; an endpoint is basically a URL through which you will be able to connect to your database. So let us wait for this database to be ready, and once it is, I will try to connect to it from my localhost and see how that goes. Right now it's in the creating phase, and similarly my Elastic Beanstalk is also being created.
Meanwhile, let me show you my code, guys, and explain it a little bit. This is my main file. In this file I am using PHP as the backend language, and I have imported the AWS SDK; you can just google 'AWS SDK for PHP' and you will be able to download it. I have included this particular folder in the root directory of my website, which is over here; this is the folder which has all the libraries, and my index.php includes this particular library. The service that I want to connect to is S3, so we are using the S3 libraries over here. My bucket resides in the Oregon region, and the code for that is us-west-2. This is my key and this is my secret access key; don't worry guys, I will be deleting the user account, so don't try using these keys, they will no longer work once I have that user deleted. Apart from that, it's pretty simple; it's a very straightforward piece of code. Right now the database that I am connecting to is on localhost, which is why the server name is localhost. If I want to connect to my RDS instance, all I have to do is change this server name to the endpoint of RDS, and then it should work like a charm. That's it; this is my index.php. In my list.php, where I get the list, I'm basically just connecting to that same database, which again is on localhost, and I'm reading everything from the table. This is the field that I am reading, and in front of that name I am attaching this URL. This is the URL for my bucket, and it remains the same for each and every object which gets uploaded, so I attach this URL to the name and store the result in an href tag, which basically gives me a link on my website. If I can show you over here, as you can see this is the list, and this is the link; the link is basically an href in which I have embedded this URL together with my file name. Once I've done that, it works great; it works like it is supposed to.
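The demo site itself is written in PHP with the AWS SDK for PHP, but to make the S3 side of this logic concrete, here is a hedged Python/boto3 illustration of the same idea; the bucket name, region, and URL format are placeholders, not the demo's real values:

```python
# Hedged illustration of the S3 part of the site's logic in Python/boto3.
import boto3

BUCKET = "my-demo-bucket"                          # assumed bucket in the Oregon region
BUCKET_URL = f"https://{BUCKET}.s3-us-west-2.amazonaws.com/"

s3 = boto3.client("s3", region_name="us-west-2")   # "us-west-2" is the Oregon region code

def upload(local_path, object_name):
    # What index.php does on submit: push the chosen file into the bucket
    s3.upload_file(local_path, BUCKET, object_name)

def link_for(object_name):
    # What list.php does: prefix the bucket URL to each stored file name,
    # so the name becomes a clickable download link on the page
    return BUCKET_URL + object_name
```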
Now I guess we should go ahead and check if my RDS is ready. Yes guys, my RDS is now in the available state. So how do you connect to RDS? There is a database which has been created on RDS, but the table is still not created, right? So first I'll have to create a table on this RDS instance which is exactly like the table I have on my localhost. How do I connect to my RDS? Just copy this endpoint, guys, and then go to CMD. Once you are on the CMD, the next step is to go to the bin directory of your MySQL installation; my MySQL installation is under Program Files, so I'm going to go in there. Now I'm inside the bin directory, and the next thing is to call mysql with -h for the host name, which in my case is the Amazon endpoint, -u for the username of this RDS instance, which is hemant, and -p for the password, which I'll specify right over here. Let's hit Enter, and if everything has gone well, I should be able to connect to my RDS instance. Let's wait. While this is happening, guys, if it gets stuck like this, it could be that you're not able to connect to your instance, and the reason for that could be the security group, so you'll have to check if the inbound rules for our security group are open to accept traffic. Let's click on Inbound, and yes, this is the problem over here: it's allowing all traffic, but only from this particular security group. So what I'll do is change the source to Anywhere and save it. Once I do this, let us come back here and run the command again; I'll enter the password and hit Enter, and as you can see, I have successfully connected to my MySQL instance which resides on AWS. Now, this instance should have a database called images; let's use that, and let's create a table which is the same as what I had on my localhost: the table name will be names, the field name will be name, and the type of information that can go into it is varchar, so let's specify that. Okay, so my table is now successfully created, guys, and my database is ready to take in data.
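If you'd rather do this connectivity check and table creation from a script instead of the mysql CLI, a hedged Python sketch could look like this; it assumes the pymysql library is installed, and the endpoint and password below are placeholders for the values shown in the RDS console:

```python
# Hedged sketch: connect to the RDS instance and create the same 'names' table.
import pymysql

conn = pymysql.connect(
    host="web-demo.xxxxxxxxxxxx.us-west-2.rds.amazonaws.com",  # placeholder RDS endpoint
    user="hemant",
    password="<master-password>",
    database="images",
)
with conn.cursor() as cur:
    # Same table as on localhost: one VARCHAR column holding the uploaded file name
    cur.execute("CREATE TABLE IF NOT EXISTS names (name VARCHAR(255))")
    cur.execute("SHOW TABLES")
    print(cur.fetchall())
conn.commit()
conn.close()
```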
Now, let us go back to RDS, copy this endpoint, and make our website interact with my RDS instance. It's pretty simple: just change the server name to the endpoint of RDS; the username in my case is hemant, and the password is the master password I set earlier. That's it, guys, that's all we have to do, so let's save this code. Similarly, in my list.php I have to change the values from localhost to these same values; I'll save it, and now, before going back to my website, let us also open the local MySQL instance. This was my local MySQL instance, and as you can see there are only four entries over here. Right now, let's choose a file, try to upload the same Excel file, and click on Submit. It says 'New record created successfully', but let us check here whether this is where my data has been entered. No, my data has not been entered over here, so let us check on the RDS whether my data has been entered correctly there: select * from names. Yes, the data has been entered over here with this particular name, and if I click on check list, as you can see, even here I get the same value; you can compare that this is the name I'm getting here and this is the name I'm getting on my website, and if I click on it I'm able to download that Excel file successfully. Great, guys, this is what I wanted. So now my website is connected to my database instance on AWS; it was that simple. The next step is to put this website on AWS for good, so that everybody in the world can access it. Now, how can I do that?
This is my Elastic Beanstalk, guys, and this is the dashboard I get when I use the platform-as-a-service instance. It says 'Upload and Deploy' here, so that's what I'm going to do: I'll click on 'Upload and Deploy', and now it asks me to choose a file. The way you upload your website over here, guys, is that you go to your website's code and then zip those files, like this; once the files have been zipped, this zip has to be uploaded over here. So I'll choose a file; let me go into the folder where I have the zip. Here's the folder, so let's click on Open. For the version label, let's give this version the label 1.0, and now let's click on Deploy.
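If you prefer to script the packaging step rather than zipping by hand, a small hedged Python sketch is shown here; the file and folder names are placeholders for whatever your site's code actually contains, and the important detail is that the files sit at the top level of the zip, not inside a parent folder:

```python
# Hedged sketch: package the site for Elastic Beanstalk with files at the zip root.
import os
import zipfile

SITE_FILES = ["index.php", "list.php", "aws"]       # code files plus the SDK folder (assumed names)

with zipfile.ZipFile("site-1.0.zip", "w", zipfile.ZIP_DEFLATED) as bundle:
    for entry in SITE_FILES:
        if os.path.isdir(entry):
            for root, _, files in os.walk(entry):
                for f in files:
                    path = os.path.join(root, f)
                    bundle.write(path, arcname=path)  # keep folder-relative paths inside the zip
        else:
            bundle.write(entry, arcname=entry)        # plain file goes at the zip root
```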
So now my website is getting deployed to AWS Elastic Beanstalk. It will hardly take two or three minutes for my website to be ready on this platform, and once it is, we will be able to use it via this particular link; what I've shown you on localhost will then be available on this link. So let me close all the unnecessary windows and wait for this to be ready. It will take around two to three minutes, like I said, and whatever version label I specified will be reflected over here, so it will show 1.0. If in the future I make any kind of change to my code, I will be able to upload it here in the same manner: I just click on 'Upload and Deploy' and increment the version. I will show you that as well; let this complete and then we will go ahead and check. So, as you can see, my running version is now 1.0. Great, this is what I wanted. Now it's the moment of truth: let's go to this URL and see if our website is working or not. Great, guys, my website is now available on this particular URL. Let's check if it is able to upload everything. First, let's check the list: we have the one file in my database which was uploaded earlier. Now let's choose a file and try to upload the zip file itself; I'm not sure if it will be able to upload it, so let's check. The upload has started, and it says 'New record created successfully'. Awesome. Let's go here, refresh, and as you can see a new entry has been made. Let's try to download this, and as you can see the zip is being downloaded; if we go to our S3 bucket and refresh over here, you can see the zip is present there as well, and my zip has also been downloaded. If you want to verify whether all the contents are fine, let's do one simple check: let's create a new folder here and extract the zip into it. We extract all the files, click on OK, and all the files are now extracted; let's do a Ctrl+X, paste them here, and delete the old files. So basically, at localhost/hello my website should be up and running. So, localhost/hello: as you can see, I can see my website over here, and if I open list.php here, I can see the list as well. That means the files that were uploaded to S3 are working correctly, and I have also successfully migrated my website from my localhost onto Elastic Beanstalk without even going to the terminal and without installing any software on the server. It is now up and ready, and anyone who visits this particular URL will be able to access my website. As simple as that, guys; it is also hosting my files, so my files are now available on this particular link, and anybody anywhere in the world who goes to this link will be able to access these files; they just have to click on one and they'll be able to download it.
All right, thank you guys; I think that is it for this part. Let me come back to my slides. As I already told you in the session, we will be looking at another hands-on, so before moving on, let me briefly show you what the output of our hands-on is going to be. This is our localhost, and this is our Elastic Beanstalk. First, on localhost, I'm going to upload this doc1.pdf, and I upload it; the file has been uploaded successfully. Now let us open the Amazon Web Services account, and I'll open my S3 buckets. First I'll open the S3 service, and inside it there are already buckets which I have created for this POC. Let me go to the 'from' bucket first, because in this bucket I'm going to store all the files which are uploaded. We can see there was already a doc1, and right now another file has been uploaded; because it has the same name, the new file is stored with the current time attached to the file name, so we now have one more file here. Let me show how it works on Elastic Beanstalk too; it is the same operation on the Elastic Beanstalk copy of the site. Let me upload another file here; I upload it, and the file has been uploaded successfully. In S3 now, if I reload, you can see that the file has been uploaded. So the final output is going to be this: if we go to the 'to the bucket image' bucket, you can see I already had two images, and now there is another image which we just uploaded; the 'from' bucket holds everything we uploaded, and in the 'to the bucket PDF' bucket, if you go inside, you can see the doc1, and this one was uploaded from the localhost. We can see here that it has the same name as this one, so it is exactly the same file which was uploaded here and was copied to that bucket. So how did we do that? Using AWS Lambda, and that is what we'll see in this session. First let us learn the theory concepts, and then we'll move on to the hands-on part.
Okay guys, now let us begin. Typically, an application comprises three components: the front end, the back end, and the database server. Before cloud technologies emerged, when a company wanted to host an application, they hosted all the software components, that is, the front end, the back end, and the database service, on one individual server. So what happens here? Whenever you want to do an operation, consider that this website is for storing photos, like Google Photos: you upload a photo and then press the upload button on the website. The website itself does not process it; the request goes to the backend service. Whenever you click something here, the process goes to the backend and some code is triggered. That code runs, and if you wanted to upload the photo, it is stored somewhere with the backend's help, and the response is shown on your website. Also, some information needs to be stored in the database, because there is a name for the photo, there is a link to where the photo is stored, and there are other properties like the size and so on; these are stored in the database service. So what happens here is that all the software components are hosted on an individual server: whenever you want to do an operation, you click some button on the website, which triggers a service in the backend; if any data needs to be stored, it is stored in the database, and the output is shown on the website. But what is the problem with hosting all these software components on a single server? The main problem is that it has limited resources. Let me explain. Consider that your website is getting a lot of traffic and it is using 80% of the CPU resources, and at the same time the backend service needs 50% of the CPU resources to run its operations. What happens here is that the front-end service uses 80%, so the backend service only gets 20% of the CPU resources to work with, and if the database service then wants to use some resources, there are none left for it to use. So the system falls into a deadlock and the whole server is in trouble. The components cannot be scaled independently, and this is also why such websites crash. Hosting all the software components on a single server is quite easy, but it comes with all these possible drawbacks and demerits. So what is the solution for this? Let me provide you the solution with an example.
Let us see that. The solution to this is using a distributed application architecture. So what is a distributed application architecture? It is basically using dedicated servers for each of the software components: the front-end component is hosted on its own dedicated server, and so are the backend and the database. The front-end server only hosts the website, the backend server only does the backend operations, and the database server is only used when there is a need for the database service. But how does this improve the situation? Let me explain. Consider the backend service under a lot of traffic or workload. On an individual server where all the software components are hosted, what happened was that the backend consumed most of the CPU resources and the other services did not have enough, which leads to a crash. But here, only the backend server carries that workload; the other two servers are left unharmed. So even if the backend server has used all of its resources and has crashed, the other two servers are not affected: the front-end server is still hosting the website, and the website is still visible to the users. Even though the backend services won't be running at that time, and the users will not be able to use the services provided by the website, the website will still be visible in the browser. How does this solve the scaling problem? In this architecture, you only have to scale the particular servers which you actually want to scale. Here, we considered that the backend server was using all of its resources, and if you want to scale that particular server, you can just do that: you scale only your backend server instead of scaling all three components, which obviously reduces a lot of cost and time. Similarly, if you only want to increase the space for your database service, you can scale just that. To understand this better, let me give you a real-life example; let us consider a photo application like Google Photos. Here you can see the website: this is served by the front-end server, the images which are being retrieved use the backend service, and the links for these images along with their data, that is, the name of the image and the size of the image, are all stored in the database service. Whenever we retrieve a particular image, the database service provides the link, and the backend service uses that link to show the image to us. Let me explain how all these services work together. Whenever you search anything in the search bar, say I have two images with similar names and I search for the common part of the name over here, you can see there are photo IDs one and two, the two image names, and the site links: this is the site link for the first image and this is the site link for the second image. The backend service retrieves this information and produces it over here. This is how the front end, the back end, and the database service are clubbed together. So now we have learned what a distributed application architecture is and how the front end, the backend, and the database work together.
Now let us move on and see what AWS Lambda is. AWS Lambda is a serverless compute service, which means you don't need to worry about servers while building and running applications; you just have to code as per your needs, save it in Lambda, and relax, and it will take care of everything else, like provisioning servers. We will learn this in more depth as we move along. So now, what distinguishes Lambda from EC2 and Elastic Beanstalk, the other compute services? There is a difference; let me give you an idea about it. First let us compare Lambda and EC2; later we'll compare it with Elastic Beanstalk. The first difference: Lambda is a platform as a service, while EC2 is infrastructure as a service. Lambda provides you a platform to run and execute your backend code, but EC2 provides you virtual computing resources. The second difference is that Lambda restricts you to a few languages, like Python, Java, C#, and a few more; there are no such restrictions in EC2, where you can install any software you want on the machine given to you. In Lambda you simply choose the environment, like Node.js or .NET, and push your code into it, but in EC2 you have to decide the operating system, install all the required software, and then upload your code or write it there and save it. Moving on, Lambda does not give you the luxury of choosing your own instance configuration, but in EC2 you can configure every single aspect, like different instance types and security preferences. So this is what makes them different; I hope you understood the difference between these two services. Now let us discuss what distinguishes Lambda from Elastic Beanstalk. The first point: Lambda can only run your backend code, while Beanstalk can run your entire application without you worrying about the infrastructure it runs on. Secondly, Lambda provides the resources based on your workload, but in Beanstalk you have the flexibility to choose the instance type and other configurations; in Lambda you don't need to worry about any configurations or instance types. The last difference is that Lambda is a stateless system and Elastic Beanstalk is a stateful system. So what is the difference between those? A stateless system is one where the output is based only on the current inputs, without using any inputs that were seen before, while a stateful system may provide different values for the same inputs by comparing them with older inputs. Being stateless gives Lambda the ability to create a huge number of parallel executions and process many different inputs simultaneously. All right, so now we understand how the EC2 and Elastic Beanstalk services differ from AWS Lambda.
Moving on, we'll first take a look at the benefits Lambda provides, and then we'll move on to the limitations of Lambda. The first benefit is that it provides a serverless architecture, so you don't need to worry about provisioning and managing servers; you just have to concentrate on building and running the application. To put it very simply, you are given a console, you choose a language, write your code, and run it, and Lambda chooses the compute capacity according to the required processing power. The next point is 'code freely': there are multiple programming runtimes and editors, so you can write your code freely, just like you would in an offline editor such as Visual Studio or Eclipse. The next point is that no virtual machines need to be created: we don't need to create and configure any EC2 virtual machines because, as I already told you, they are provided by Lambda according to the processing power needed for your function. The next point is pay-as-you-go. Pay-as-you-go is a feature provided across AWS services, and what it means in Lambda is that you will only be charged for the number of requests and the seconds your Lambda function runs, and that's it; you won't be charged anything more. The fifth point is that you can monitor your performance: you have a default monitoring option in Lambda which is connected with CloudWatch, and a Lambda function generates multiple logs and metrics for you to visually and practically understand what is going on inside it. Now, whatever advantages a service provides, there will always be some limits to it, so let us take a look at the limitations of AWS Lambda. The first limitation is that the maximum disk space provided for the runtime environment is 512 MB; this means the /tmp directory storage can only hold 512 MB at a time. Why do we need this /tmp directory storage in Lambda? Because it acts as temporary storage for the current Lambda function's inputs and outputs, and for the same Lambda function there is no guarantee that it will run in the same environment two or more times. The next limitation is that the amount of memory available to Lambda during execution is 128 MB to 3,008 MB; this is the amount of RAM you can choose, between 128 and 3,008, in 64 MB increments: 128 plus 64 and so on. The third limitation is that the function timeout is capped at 900 seconds, so the maximum time a Lambda function can execute is 15 minutes, and the default timeout is 3 seconds. If you want your Lambda function to execute for more than 15 minutes, that is not possible; it can only run up to 15 minutes. Maybe your Lambda function's execution completes in one second, but with the default setting it is allowed up to 3 seconds before it is timed out. The fourth limitation is that only the languages supported by Lambda can be used for writing the code; these languages are Python, C#, Java, Node.js, Go, and Ruby.
All right guys, we've seen the pros and cons of AWS Lambda, so now let me give you a brief idea of how Lambda actually works. First, you write your code in the Lambda editor, or upload it as a zip file, in a supported programming language. It is not that you can create only one Lambda function; you can create any number of Lambda functions your application needs. After that, Lambda executes the function, or the code, on your behalf; you don't need to run it yourself. But to run the code, you need to trigger the Lambda function, right? So how do you trigger it? It doesn't run automatically; you need an external AWS service which can trigger and invoke the Lambda function, for example an S3 bucket or a database. Say you want your function to be triggered whenever a write operation occurs in the database: you set that as a trigger, and whenever a new record is inserted into the database, the Lambda function is automatically triggered and can retrieve the information you need from it. After that, we know that the Lambda code is running, but where does it run? It needs a server or a computer to run on, right? So what Lambda does is provision servers and also monitor and manage them. How does it provision servers? Lambda functions contain various kinds of code: if your code requires a lot of processing power, Lambda will choose compute capacity with more processing power and RAM, or else, if your Lambda code only executes for 2 seconds, it will choose the lowest possible capacity, which saves you money and time. Okay guys, now we understand how Lambda actually works.
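To make the flow concrete, here is a minimal hedged sketch of a Lambda function; it is not the demo code, just an illustration of what you type or upload in the console, and Lambda runs it whenever the configured trigger fires:

```python
# Minimal Lambda handler sketch (illustrative only).
def lambda_handler(event, context):
    # 'event' carries whatever the trigger sends (an S3 notification, an API call, ...)
    # 'context' carries runtime information such as the request ID and remaining time.
    print("Function started, request id:", context.aws_request_id)
    return {"status": "ok", "received_keys": list(event.keys())}
```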
So what are the various concepts in Lambda? Let us look into them. We are going to see four concepts: functions, runtimes, layers, and logs. So what is a function? A function is a script or a program that runs in AWS Lambda; whenever Lambda is invoked, this function runs. You can see here, this is the function name, and here is where you write the code; the function processes the event and then returns a response. Let us look at some function settings. First, the code: the code is the logic you use in the Lambda function, which you write over here. Then the runtime: the Lambda runtime executes your function, so whatever runtime you choose, the code you write here is executed by that runtime. Then the handler: the handler is where you mention the function's name along with the file's name, so whenever the Lambda function is invoked, that particular function is executed. Then tags: tags are key-value pairs which you can attach to any AWS resource, just to track their cost or their metrics. Then the description: the description is a short text describing your function, which you can give while creating a function in Lambda. And then the timeout: you have to set the timeout between 3 seconds and 900 seconds, because that is the allowed range for a function, which I've already discussed in this session. Let us move on to runtimes. A runtime allows functions written in different languages to run in the same base execution environment; that means in the same environment you can run a Python file, a Java file, and a Node.js file. The runtime sits in between the Lambda service and your function code, so whatever code you send, Lambda uses the correct runtime for your file: if it is Python, it uses the Python runtime; if it is Java, it uses the Java runtime; it runs the code and gives you the executed response. You can take a look at the various runtimes: the latest supported runtimes that were announced are .NET Core 2.0 with C#, the Go programming language, Java, Node.js, Python 3.7, and Ruby 2.5, and the other supported versions are an older Node.js (8.10) and older Python versions (2.7 and 3.6). Okay, now let us see what layers are. Lambda layers are a distribution mechanism for libraries, custom runtimes, and other dependencies. Instead of bundling everything alongside the code which you write in your Lambda function, you can create layers and store in them the libraries or custom runtimes which you want to run your program with; you can store them in multiple layers, up to five layers per Lambda function, and upload them so that there is no confusion while the code is choosing a particular library or custom runtime. For example, if your code needs a particular library for writing information to an Excel sheet or a CSV file, you upload that library to a layer and keep the layers in order, so that your code chooses the appropriate layer and gets the libraries out of it. Layers also let you manage your in-development function code independently from the unchanging code and resources that it uses: you don't need to change your code every time a shared resource changes, you can just upload the dependency as a zip file in a layer and use it from there. You can create multiple layers (a maximum of five per function), and you can use layers provided by AWS, layers published by other AWS customers, or layers you create yourself. Then we come to the logs: this is the part where you monitor the Lambda function. Normally, Lambda automatically monitors your function invocations and reports metrics to CloudWatch. If you don't want to watch only the metrics, you can write logging statements in your Lambda function code so that you get a log line for each and every step your function goes through, and you can follow the execution flow, see how your Lambda function is performing, and check whether it is working properly or not. So, moving on, let us now see how AWS Lambda works with S3.
Here we're going to see how exactly an operation on an S3 bucket can trigger an AWS Lambda function. Consider a user trying to access a website which they can use to upload photos. If they upload a photo here, it is stored in the AWS S3 bucket the site is connected to, and whenever a put operation occurs, that is, a file is uploaded to the S3 bucket, the Lambda function is triggered. You can use a put operation or a get operation as the trigger; consider it is a put operation for now. So when the user uploads a photo here, it gets uploaded to the S3 bucket, and once it's uploaded, the Lambda function is triggered. The Lambda function's code can be anything; you can make any microservice out of it. If you want to store the file's name and location, you can store them in a database using the Lambda code; you can watch the CloudWatch metrics, and you can also look at the logs which you have written into your program. You can also copy this particular file into another S3 bucket using your Lambda code. Similarly, if it was a get operation, that is, the user is trying to download a photo, the photo is downloaded from the S3 bucket where it is stored, and you can use that as a trigger for the Lambda function as well and make any microservice out of it. So this is how S3 is used as a trigger with Lambda functions. Now that we understand how this works theoretically, it is time to move on to the practicals. We are now going to do a hands-on: creating an S3 bucket and then using a Lambda function to copy uploaded objects into multiple other S3 buckets. Why does an object go to a particular bucket? Because we check the file extension: if it is an image it goes to one bucket, if it is a PDF it goes to a different bucket, and so on; whatever the file extension is, it goes to its own bucket. Let me explain that part now.
Okay guys, now we are going to do a hands-on using multiple AWS services, so let me show you exactly what we are going to do before moving on with the hands-on part. I already told you how Amazon S3 works with AWS Lambda, so we are going to create a simple website which can upload a file to an S3 bucket, and whenever a file is uploaded to the S3 bucket, the Lambda function is invoked. We are going to upload three types of files: a .jpg file, a .pdf file, and a .txt file. Whenever an image file is uploaded it goes to the image bucket, whenever a PDF file is uploaded it goes to the PDF bucket, and whenever a text file is uploaded it goes to the text bucket, and we will do all of this with a simple Lambda code. We are going to use the S3 upload, that is, a put or post operation, as a trigger, and whenever an object is uploaded to the S3 bucket, the Lambda function will be triggered. Now let us see this on the AWS management console and how to do it practically. Guys, at the beginning of this part I told you that I'm running this site both on the localhost and on Elastic Beanstalk, so to understand it better, first let me explain the localhost part. Let me show you the code first. This is the code; there are two files, one is index.php and the other is file-logic.php. Let me explain briefly: this is a simple form which has an input and a button. With the input you choose a file, and when you press the button, it uses file-logic.php to execute the upload to S3. Here I'm mentioning the bucket which I'm going to upload to, and here you can see my credentials: this is my region, the version I've given is 'latest', and these are my secret key and access key, so that the code can access my AWS account, reach that particular bucket, and upload to it. Here you can see that if an object with the same name already exists, the code adds the current time in front of it; normally, if the object does not exist, the key is just the file name, but if a file with the same name exists, say doc1.pdf already exists and I'm again uploading doc1.pdf, it attaches the current time in front of the file name. This is why, at the beginning of this hands-on, we saw there was one doc1 and another doc1 file with a time stamp in front of it. Now that we've seen this code, let me show you the Python code which is used for the Lambda function; in the Lambda function, using this code, we send the uploaded objects from one bucket to various buckets based on their extensions. Let me explain that briefly as well, and then let us move on to how to do this using the AWS infrastructure. So, this is the Python code.
infrastructure so this is the Python key and first I'm importing some things
which I need actually so I'm inputting chess and I am importing Oh a start path
this is for getting the extension and I'm importing both o three this is
mainly important because go to 3 is the AWS DK for python this allows Python
developers to build and run applications on AWS using this so you can write
Python applications but using Python programming language when you include
this particular go to 3 so and I will explain this briefly so this is the
source bucket getting the name here this is the file
name I'm getting it here and copy source I'm giving bucket search bucket and Kiki
so this is why I imported Jason so I'm recording the copy source as the
bucket which I'm going to upload it from and this is the object which is going to
be uploaded and this is our cloud version 4 so if I printed over here it
will be available in the cloud watch streams like you can put to that log
streams and check out these things when the function has started so when the
function started this will be there and after that this information will be
given so and then here comes the logic logic is pretty simple you first get
check whether the object exists if it doesn't yet check whether extension
first you get the extension or the last four characters that is is a dot jpg dot
PDF or dot txt so even if it is a file with a different X technician if it is a
dot PNG the file will only be available in that particular bucket it won't be
copied to any other bucket if it is got PA JPG it will go to tow
the bucket image if this PDF to the packet PDF if it is checks you go to to
the packet txt so s3 dot copy object bucket this is the destination bucket
this is the file name and this is the copy source that is the file which are
going to upload flow from the bucket that is from the bucket is the name of
that particular bucket so also if it does not exist so this will be printed
in the cloud watch log stream okay guys now we have seen the code let us move on
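For reference, here is a hedged reconstruction of the kind of handler just described; the bucket names are hyphenated placeholders modeled on the demo (the instructor's exact bucket and variable names may differ):

```python
# Hedged reconstruction of the copy-by-extension Lambda handler described above.
import json
import os.path
import boto3

s3 = boto3.client("s3")

DESTINATIONS = {
    ".jpg": "to-the-bucket-image",
    ".pdf": "to-the-bucket-pdf",
    ".txt": "to-the-bucket-txt",
}

def lambda_handler(event, context):
    print("Function started")
    # The S3 trigger passes the source bucket and object key inside the event.
    # (Keys containing spaces or special characters may need URL-decoding.)
    record = event["Records"][0]["s3"]
    source_bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    copy_source = {"Bucket": source_bucket, "Key": key}
    print(json.dumps(copy_source))            # visible later in the CloudWatch log stream

    extension = os.path.splitext(key)[1].lower()
    destination = DESTINATIONS.get(extension)
    if destination:
        s3.copy_object(Bucket=destination, Key=key, CopySource=copy_source)
        print(f"Copied {key} to {destination}")
    else:
        # Any other extension (.png, .docx, ...) just stays in the source bucket
        print(f"No destination bucket for {key}; nothing copied")
```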
Okay guys, now that we have seen the code, let us move on to the AWS implementation part, starting from the beginning. First, you have to go to the IAM management console, because you have to create a role. So the first thing we do is create a role; I've already created a role called 'test', so I'm going into that, and you have to attach a policy to it. I have already created a policy called 's3_move', so you have to create such a policy and use this particular JSON code inside it. Let me explain what it means: I am giving the effect 'Allow' here, and also over here. In the first statement the actions are logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents, so I am allowing all of these log events to happen in CloudWatch whenever this policy is used. In the S3 statement I am allowing all operations, like post, get, put, whatever operation it is, and again the effect is 'Allow'. Each statement should also have a resource, so I'm providing that. Without this particular policy and role, you can't do what I am going to do right now: we cannot copy S3 objects into multiple S3 buckets without it. So you have to create this role and policy and use it in the Lambda function.
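For reference, here is a hedged sketch of creating a similar policy programmatically with boto3; the policy name is a placeholder, the statements follow the description above, and in the demo this was actually done in the console:

```python
# Hedged sketch: create an IAM policy allowing CloudWatch Logs and S3 actions.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {   # allow the function to write its logs to CloudWatch
            "Effect": "Allow",
            "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "*",
        },
        {   # allow all S3 operations (get, put, copy, ...) used by the demo
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
        },
    ],
}

policy = iam.create_policy(
    PolicyName="s3_move",
    PolicyDocument=json.dumps(policy_document),
)
print(policy["Policy"]["Arn"])
```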
So, the first thing we are going to do is create four different S3 buckets: one is 'from the bucket', which is the main source bucket, and we want to create three more buckets, for image, PDF, and txt. Before moving on with the S3 part, let me also show you how to do this without copying and pasting that JSON code. First, you create a role which lets Lambda call other AWS services: after clicking on 'Create role', you choose Lambda so that it can call other AWS services, and next you give it permissions. You can see here there are a lot of managed policies available. Earlier we created the policy 's3_move' and pasted that JSON to allow access to CloudWatch Logs and S3; what I'm going to do now instead is give full access for S3 by searching 's3 full' here, so I've selected full access for S3, and then I'm searching 'cloudwatch full', and here you can see I've got CloudWatch full access too, so I've attached both. Next come the tags; I don't need any tags, so I move on to the review. I have to provide a role name, and I'm going to call it 'practice', because I'm not actually going to attach this one; I'm just explaining how to create a role and how to attach policies to it. You can see here there are two policies attached, AmazonS3FullAccess and CloudWatchFullAccess, and now I'm going to create this role. I've clicked on 'Create role', the role has been created, and, as I showed you before, you can see it here. I can also go into the policies and show you the JSON: this one allows all the actions for S3, and the other policy allows all CloudWatch actions, as you can see in the JSON; it allows everything for CloudWatch. That is what I wanted to show you.
Now let us move on with the S3 part. I'm opening the S3 management console, and I already have one bucket; this is one I created earlier for Beanstalk. So now let's create four more buckets, as I told you. The first bucket is 'from the bucket'; you don't need to configure anything, you don't need to give any tags, so you just click Next, it already sets the permissions to block public access, and, if you want, you can review it once and then create the bucket. This is how you create a bucket; it's actually pretty simple. Then I am creating 'to the bucket image', and I've created that. Next, I'm creating 'to the bucket PDF' with Next, Next, Next, and Create, and finally I am creating 'to the bucket txt' in the same way, and now I've created all the buckets. So right now we have the four buckets which we need, the ones which are referenced in the code.
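If you prefer to create the buckets from a script instead of the console, a hedged boto3 sketch is shown here; real bucket names must be globally unique, so treat these hyphenated names as placeholders:

```python
# Hedged sketch: create the four demo buckets with boto3.
import boto3

s3 = boto3.client("s3", region_name="us-west-2")

buckets = [
    "from-the-bucket",        # source bucket the website uploads into
    "to-the-bucket-image",    # destination for .jpg files
    "to-the-bucket-pdf",      # destination for .pdf files
    "to-the-bucket-txt",      # destination for .txt files
]

for name in buckets:
    s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
    )
    print("created", name)
```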
What I'm going to do now is move on and create a function in Lambda. First, we'll go to the Lambda management console and create a new function. You can see here that I already have a function, so let me create a new one. The options are 'Author from scratch', 'Use a blueprint', and 'Browse serverless app repository'. Using a blueprint means there are already many templates available, for example an 's3-get-object' Python blueprint or config-change-triggered functions; there are many of them. We are going to author it from scratch, because we are using our own code. I am giving the function name as 'aws-lambda-demo', and I'm writing the code in Python, so I'm giving Python 3.7 as my runtime. One more thing: as I told you, we have created the role and attached a policy to it, so we have to use an existing role here, and the existing role I told you about was 'test', so I am using that particular role and creating the function. The function has been created successfully. Let me first explain the dashboard: here is the configuration and here is the monitoring, where you can see CloudWatch metrics and CloudWatch Logs Insights. Right now it is loading and will be empty; after the function starts executing we will see some data over here, and we can also see the logs in CloudWatch, which I'll show later. First, let's configure it. I am adding a trigger, and our trigger is going to be S3. The source bucket is 'from the bucket', so whenever an object is uploaded to 'from the bucket', the function has to be triggered. The event type is put, post, or copy; however it happens, if an object is uploaded to 'from the bucket', the trigger fires and the function starts executing, so I select 'All object create events'. Here I don't need to give anything for the prefix or suffix; if you only want to trigger on one particular suffix, for example only .jpg files, you can mention it over here, but right now I am implementing that logic in the code, so you don't need to give it. Also tick 'Enable trigger' here and add it. The trigger will be added, and you can see here that it has been successfully added.
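For completeness, the same notification can also be wired up from code; here is a hedged sketch, in which the function name, ARNs, and account id are placeholders, and two steps are needed: allow S3 to invoke the function, then register the bucket notification:

```python
# Hedged sketch: grant S3 permission to invoke the function, then add the bucket trigger.
import boto3

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

function_arn = "arn:aws:lambda:us-west-2:123456789012:function:aws-lambda-demo"  # placeholder

lambda_client.add_permission(
    FunctionName="aws-lambda-demo",
    StatementId="allow-s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::from-the-bucket",
)

s3.put_bucket_notification_configuration(
    Bucket="from-the-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],   # covers put, post, copy, multipart upload
            }
        ]
    },
)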
Okay, so the next thing is that we have to put our code here. You can see there is already a simple default code written; let us look at the settings for now. The runtime is Python 3.7, this is the editor to edit the code, and this is the handler which I showed you: lambda_function.lambda_handler, that is, the file name lambda_function and the function name lambda_handler. My function's handler matches this, so I don't need to change anything there. Let me copy my code and paste it over here, and let us save this particular Lambda function. I'm copying the code and pasting it over here; the code has been copied, and you can see the imported libraries, the print statements, the CloudWatch info, and the logic to copy from one S3 bucket to another. Now let us save this, and once we save it, it is saved. So from now on, whenever an object is uploaded to the S3 bucket 'from the bucket', this process will happen. First, let me show whether it happens or not. You can see here that 'from the bucket' is empty and 'to the bucket image' is empty, and the other two are empty as well because we just created them. So right now I'll manually upload a file to 'from the bucket': I click on Add files, add this image, click Next, Next, Next, and Upload, so I am uploading this file to the 'from the bucket' S3 bucket, and you can see here that the file has been uploaded. Now let us cross-check with 'to the bucket image': the file should have been copied there, because our Lambda will have been triggered just now, that function will have run, and this process should have happened. Let us verify it, and you can see the .jpg file has been copied here, so it works when done manually. Now, what I have done is created a simple web page which can upload files to S3, after which this process happens automatically; so what we are going to do next is take the same web page that was running on the localhost and make it run on Elastic Beanstalk with a URL, so that anyone can upload a file, and it gets segregated according to the extension and copied over.
and it gets copied so I told you after execution of the
function you may see you can see the cloud words matrix so let me go to
monitoring so here you can see cloud watch matrix and here you can see some
data so this is invocations one so the function has run one time and the
duration was the duration is given here and also the success rate was 100% there
was no errors so you can see that here also let me show you the logs
so a lot has been created I am opening that so you can clear so function start
cloud watch so let me open the code first okay so here function start cloud
watch function start cloud watch and you can see the details wing dividers or log
stream name log group name request ID so log stream name lop group name and
request ideas window so after the request ended it is ending the report
and the report has been sent and you can see here it took five hundred and sixty
six point one zero milliseconds and duration was 600 milliseconds so it is
founded so memory size the maximum memory size was 128 MB so there was no
more memory needed for this function so now let us move on with elastic
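If you'd rather pull those same log lines from a script instead of the console, a hedged boto3 sketch follows; it assumes the log group follows Lambda's usual /aws/lambda/<function-name> pattern, with the function name being a placeholder:

```python
# Hedged sketch: read the function's recent CloudWatch log lines with boto3.
import boto3

logs = boto3.client("logs")

response = logs.filter_log_events(
    logGroupName="/aws/lambda/aws-lambda-demo",   # placeholder function name
    limit=20,
)
for event in response["events"]:
    # "Function started", the copy-source JSON, and the REPORT lines all show up here
    print(event["message"].rstrip())
```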
Now let us move on to Elastic Beanstalk, where we will create and deploy an Elastic Beanstalk application. Let us get started; let me click on 'Get started'. The first thing I have to do is create an application: my application is going to be 'aws-lambda-demo', and I have to choose the platform. My code is in PHP; if your code is in another language, for example Java, Go, or Python, you can choose that, but right now I'm choosing PHP. What I'm going to do is create the application first and upload the code later, to make it clearer. The application is being created right now; it will take some time, so let us wait for the process to finish. Okay guys, now you can see it has been created successfully. Let me show you what is actually there before we do anything: there is a default PHP page, and this is that default page. What we are going to do now is upload and deploy our application here.
application here before that let me explain what actually that is a little
change in the code because the forgiveness before uploading our
application and deploying it let me show you and explain you the simple change
which I made in the code so this is the code so here you can see index dot PHP
so this is there is no change in this but in philological PHP you have to
change the path because you are going to upload it to animes on Linux environment
and the path changes there so the slash where slash app slash current slash your
file so it index dot PHP and file logics dot PHP will be in this directory and
whatever file you upload to elastic Beanstalk will be in this directory so
what I'm going to do is you have to click on all the files and
choose over here and you have to archive it into a zip file so if you ask why
should we zip it rather than make the file a rad file or a tar file because
elastic means the environment only accepts those zip files so you can just
do it in a normal way or instead of using WinZip you can just click on them
and you can just send them to a compressed file which will automatically
create a zip file so I've already done that so now let us upload this file to
the elastic Beanstalk so let us upload now so I click on this
button so I have to choose a file so I would have to do is I have to go back - aw stem so this is the file which have
to upload so AWS uploads file logic and index files or within this particular
zip file you should not create a file on top of this and give that you have to
create a zip file just clicking on these files so I'm opening it and I'm naming
I'm naming this version 'aws-lambda-demo', and I am going to deploy it. Deploying takes some time, so let's wait until then, and afterwards we'll check out our application. Okay guys, now the file has been uploaded; you can see the running version is 'aws-lambda-demo', which was the name I gave for this particular version. Now let me open the URL and show you the website which is running. Here you can see that our website is running fine, so what I am going to do now is upload a few files and check whether they get uploaded to our source S3 bucket and are moved to the respective S3 buckets for image, PDF, and txt. But before uploading, let us check whether the S3 buckets are empty, to make sure there were no files beforehand and only the files uploaded right now show up. I'm going to 'from the bucket', it's empty, and I'm checking all the other buckets just to make sure; okay, all the buckets are empty. Now let us upload a few files: I'm going to upload three different types of files and check whether each of them goes to its respective bucket. First, I'm going to upload a PDF, and it is successful; then I'm going to upload an image, and it is successful too; and then I'm going to upload a text file. First, let us check whether all these files have been uploaded here: we can confirm that the PDF, the .jpg, and the .txt files have all been uploaded to the source S3 bucket. Now let us check whether each file, with its different extension, got copied to its dedicated S3 bucket. First let me go to 'to the bucket image'; let me refresh it, and you can see the .jpg file over here. Next, the PDF bucket, and you can see the PDF file over here. And then finally the text bucket; I'm refreshing it, and you can see the .txt file over here. So right now our Lambda function is working exactly as intended: Elastic Beanstalk is successfully running our application on the instance it gave us, whenever we upload a file through the Elastic Beanstalk site it gets uploaded to 'from the bucket', the Lambda function is triggered, and using this particular logic the file goes to 'to the bucket image', or PDF, or text. This is what's happening in our hands-on.
Before finishing this hands-on, let me recap the complete process which we followed to make this happen. First, we created a policy, then we created a role and attached the policy to it, to allow everything we wanted to do with these buckets. Then we created four different S3 buckets, one as the source and three as destinations. Then we created a Lambda function and uploaded the code, which basically copies an object from one particular S3 bucket to the S3 bucket we name as the destination, and we also created a trigger on S3, which fires whenever an object is created; so whenever an object is created in 'from the bucket', this particular trigger fires and this function code runs, and whenever the function code runs, the file from the source S3 bucket is copied to the destination S3 bucket. Then we launched our application: we uploaded our local application into Elastic Beanstalk and deployed it, and we now have a URL to run our application from. Now let us go and look at CloudWatch; let me refresh it once more. We uploaded multiple files after the first one, and you can see there are multiple log entries over here: this was the first run, and then, whenever the next upload happens, 'function start' appears again, marking the next execution, and the function keeps running this way. You can see the log stream name, the log group name, and the request ID; this block is one particular function execution, this is another, and this is another, so you can see how many times the function has executed, starting from the first run. This is how you use CloudWatch logs, and you can also see the CloudWatch metrics on the Monitoring tab: here, in the CloudWatch metrics, you can see there are five invocations in total, so the function has been invoked and has run five times, and here you can see the success rate has been 100% with no errors, because all the files we uploaded got uploaded and the function never hit an error and always ran successfully. So now, I hope you guys know how to use Lambda to write code and how to use other services like S3 as a trigger to run your particular application. We have learned multiple things: we learned IAM, since we had to create a policy first; we know how to create S3 buckets; we know how to write code in the Lambda editor; we know how to use CloudWatch log streams and metrics; and we also learned how to upload a local application into Elastic Beanstalk so that you get your own URL and can access it from anywhere in the world. Now, let us look at the use cases of Lambda.
There are various use cases of Lambda, but right now we will discuss three of them: the first is serverless websites, the second is automated backups, and the third is filtering and transforming data. Let us start with serverless websites. I already told you what a serverless architecture is: you just have to write code, and the provisioning and management of the servers and the underlying infrastructure is taken care of completely. What you can do here is host your static website on S3. A static website is basically HTML, CSS, and JavaScript/TypeScript files; it cannot run server-side scripts, only client-side scripts, so server-side scripts like PHP and ASP.NET cannot be hosted on S3 as a static website. Using S3 for static hosting is very cheap, and you can write Lambda functions and connect them with S3 to complement it: by writing some code in AWS Lambda you can add dynamic behaviour, for example so that users can keep track of the resources being used on the website. Next is automated backups; the name tells you everything. You can create Lambda events, schedule them at a particular time on a particular day, and create backups automatically in your AWS account. To create backups, you can check whether there are idle resources, take that content, back it up, and delete it from its original place; you can also generate reports using Lambda in no time, either in code or by connecting it with CloudWatch, so you can produce regular reports showing how much data has been backed up and how much has been deleted, and manage and schedule all of it very easily. The third use case is filtering and transforming data. You can connect Lambda with other Amazon services like S3, Kinesis, and Redshift, and with database services like RDS, DynamoDB, or Amazon Aurora, and you can filter the data before sending it to any Amazon storage or database service: you filter it in code, transform it easily, and load the data between Lambda and all of these services.
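As an illustration of the automated-backup idea, here is a hedged sketch of a Lambda you could schedule with a CloudWatch Events/EventBridge rule; the bucket names and the 30-day threshold are made-up placeholders:

```python
# Hedged sketch: scheduled Lambda that moves idle objects into a backup bucket.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
SOURCE, BACKUP = "active-data-bucket", "backup-data-bucket"   # placeholder bucket names

def lambda_handler(event, context):
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    moved = 0
    for obj in s3.list_objects_v2(Bucket=SOURCE).get("Contents", []):
        if obj["LastModified"] < cutoff:                       # object has been idle
            s3.copy_object(Bucket=BACKUP, Key=obj["Key"],
                           CopySource={"Bucket": SOURCE, "Key": obj["Key"]})
            s3.delete_object(Bucket=SOURCE, Key=obj["Key"])
            moved += 1
    print(f"Backed up and removed {moved} objects")            # visible in CloudWatch, usable in reports
    return {"moved": moved}
```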
these services now let us discuss land uprising first let me explain what is
The free tier is provided by AWS for a 12-month period, and in that period you can use the free-tier-eligible services provided by AWS: you can use services like EC2, S3, and even Lambda for free, but they have their own limitations. For example, in Lambda you can use 1 million requests per month and 400,000 GB-seconds of compute time per month for free, and anything exceeding that will cost you. You might be wondering what GB-seconds and 1 million requests mean. 1 million requests means the Lambda function is triggered 1 million times. A GB-second is one gigabyte of allocated memory used for one second of execution time; it measures compute time, not a transfer rate. So, 400,000 GB-seconds of compute time per month are allowed for free across your Lambda functions. For requests, as I told you, 1 million requests are free, and after that, every additional 1 million requests costs $0.20. For duration, 400,000 GB-seconds per month are free, and after that, every GB-second you use will cost you the number given there, which is $0.0000166667. So, that is all about Lambda pricing.
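Before we move on, here is a small worked sketch of that pricing math. The rates match the free-tier numbers just described, while the workload figures (memory, duration, invocation count) are invented purely for illustration.

```python
# Rough sketch of the Lambda billing formula described above.
invocations  = 3_000_000   # requests this month (illustrative)
memory_gb    = 0.5         # 512 MB allocated
avg_duration = 1.0         # seconds per invocation

gb_seconds = invocations * memory_gb * avg_duration   # 1,500,000 GB-seconds
billable_gb_seconds = max(gb_seconds - 400_000, 0)     # free tier subtracted
billable_requests   = max(invocations - 1_000_000, 0)

duration_cost = billable_gb_seconds * 0.0000166667
request_cost  = (billable_requests / 1_000_000) * 0.20
print(round(duration_cost + request_cost, 2))          # ~18.73 USD for the month
```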
Today, in this session, we are going to discuss the top AWS questions that can be asked to you in your next AWS interview. We'll start this session by first going over the domains from which we have collated these questions. These domains are directly mapped to the AWS exam blueprint, which was recently updated in June 2018, so there is a high possibility that your next AWS interview might contain questions from these domains. So, I want you to pay the utmost attention so that you can gain as much knowledge as you can from this session. All right, let's take a top-down approach and start from the simplest questions, that is, some general questions on AWS that can be asked to you in an interview.
All right, so the first question says: what is the difference between an AMI and an instance? So guys, an AMI is nothing but a template of an operating system. It's just like a CD of an operating system that you can install on any machine on the planet. Similarly, an AMI is a template, or an installation, of an operating system which you can install on any servers that fall into the Amazon infrastructure. You have many types of AMIs: you have Windows AMIs, Ubuntu AMIs, CentOS AMIs, etc. There are a lot of AMIs present in the AWS Marketplace, and you can install them on any servers that are there in the AWS infrastructure. Coming on to instances, what are instances? Instances are nothing but the hardware machines on which you install the AMI. So, like I said, AMIs are templates which can be installed on machines, and these machines are called instances. Again, instances also have types based on the hardware capacity; for example, a machine with 1 vCPU and 1 GB of RAM is called t2.micro. Similarly, you have t2.large and t2.xlarge, then you have I/O-intensive machines, storage-intensive machines, and memory-intensive machines, and all of these have been classified into different classes depending on the hardware capability. So, this was the difference between an AMI and an instance.
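If you want to see the relationship in code, here is a minimal sketch assuming boto3 and a placeholder AMI ID: the AMI is the template you pass in, and the instance that comes back is the machine it was installed on.

```python
import boto3

ec2 = boto3.client("ec2")

# The AMI is the template; the instance is the machine it becomes.
# The ImageId below is a placeholder, not a real AMI ID.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # the AMI (template) to install
    InstanceType="t2.micro",           # 1 vCPU / 1 GB, as mentioned above
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])   # the new instance's ID
```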
Our next question asks us: what is the difference between scalability and elasticity? All right, so guys, scalability versus elasticity is a very confusing topic if you think about it. Scalability is nothing but increasing a machine's resources; for example, if your machine has 8 GB of RAM today and you increase it to 16 GB, the number of machines is not increasing, you are basically just increasing the specification of the machine, and this is called scalability. When we talk about elasticity, we are basically increasing the number of machines present in an architecture; we are not increasing the specification of any machine. For example, we decide that we require a 3 GB machine with around 8 GB or 10 GB of storage; any replica that is made, or any auto scaling that happens, will only change the number of machines, and it will be nowhere related to the specification of the machine. The specification of the machine stays fixed while the number of machines goes up and down, and this is called elasticity. Scalability, on the other hand, is basically termed as a change in the specification of the machine: you are not increasing the number of machines, you are just increasing the specs of the machine, for example the RAM, the memory, or the hard disk, and this is the basic difference between scalability and elasticity.
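A minimal boto3 sketch may help fix the distinction; the instance ID and Auto Scaling group name are placeholders, and the two calls simply contrast changing one machine's spec with changing the machine count.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Scalability (scale up): change the specification of one machine.
# The instance must be stopped before its type can be modified.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",        # placeholder ID
    InstanceType={"Value": "t2.large"},      # bigger box, same machine count
)

# Elasticity (scale out/in): change the *number* of machines, spec stays fixed.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",          # illustrative group name
    DesiredCapacity=6,
)
```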
Moving forward, our next question is: which AWS offering enables customers to find, buy, and immediately start using software solutions in their AWS environment? Now, you can think of it as, say, you want a deep learning AMI, or you want a Windows Server AMI with specific software installed on it. Some of them are available for free, but some of them can be purchased in the AWS Marketplace. So, the answer for this is AWS Marketplace: it is basically a place where you can buy all the AWS or non-AWS software that you require to run on the AWS infrastructure. So, the answer is AWS Marketplace.
Moving on, our next question falls under the domain of resilient architectures, so all the questions we'll be discussing henceforth in this domain will deal with the resiliency of an architecture. All right, so: a customer wants to capture all client connection information from their load balancer at an interval of 5 minutes; which of the following options should be chosen for this application? Let me read out the options for you. Option A says enable AWS CloudTrail for the load balancer, option B says CloudTrail is enabled globally, option C says install the Amazon CloudWatch Logs agent on the load balancer, and option D says enable CloudWatch metrics on the load balancer. Now, if you think about it, CloudTrail and CloudWatch are both monitoring tools, so it's a bit confusing, but it becomes straightforward once you understand how CloudTrail works and how CloudWatch works, which is actually not that difficult. The answer for this is A, that is, you should enable AWS CloudTrail for the load balancer. Option B is not correct because CloudTrail is not enabled by default, or globally, for other services. Options C and D you will not even consider, the reason being that we are talking about logging client information: which clients are connecting to the load balancer, which IP addresses are connecting to it, and so on. CloudWatch deals with the local resources of the instance that you are monitoring; for example, if you are monitoring an EC2 instance, CloudWatch can monitor the CPU usage or the memory usage of that particular instance, but it cannot take into account the connections coming into your AWS infrastructure. On the other hand, CloudTrail deals with exactly these kinds of things: client information, or any kind of data that can be fetched from a particular transaction, can all be recorded in the CloudTrail logs. Hence, for this particular question, the answer is to enable AWS CloudTrail for the load balancer.
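For reference, here is a minimal sketch of turning CloudTrail on with boto3, since it is not enabled globally by default. The trail name and bucket are placeholders, and the bucket is assumed to already exist with a bucket policy that allows CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# CloudTrail is not on by default; you create a trail and point it at an
# S3 bucket that will hold the logged API/connection activity.
cloudtrail.create_trail(
    Name="elb-client-audit",             # illustrative trail name
    S3BucketName="my-cloudtrail-logs",   # bucket must already exist
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="elb-client-audit")
```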
Moving on, our next question is: in what scenarios should we choose a Classic Load Balancer versus an Application Load Balancer? For this question, I think the best way to answer it is to understand what exactly a Classic Load Balancer is and what exactly an Application Load Balancer is. A Classic Load Balancer is an older load balancer which does nothing but round-robin-based distribution of traffic, which means it distributes traffic equally among the machines that are under it. It cannot recognize which machine requires which kind of workload or which kind of traffic; whatever data comes to a Classic Load Balancer will be distributed equally among the machines that have been registered to it. On the other hand, an Application Load Balancer is a new-age load balancer which deals with identifying the workload that is coming to it. It can identify the workload based on a couple of things; for example, it can identify it based on the path. Say you have a website which deals in image processing and video processing, so a request might go to intellipaat.com/images or /videos. If the path is /images, the Application Load Balancer will route the traffic only to the image servers, and if the path is /videos, it will automatically route the traffic to the video servers. This is what an Application Load Balancer does. Hence, whenever you are dealing with multivariate traffic, that is, traffic which is meant for a specific group of servers, you would use an Application Load Balancer. On the other hand, if you have servers which all do the exact same thing and you just want to distribute the load among them equally, then in that case you would use a Classic Load Balancer.
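Here is a minimal sketch of what that path-based routing looks like when configured through boto3; the listener and target group ARNs are placeholders for an ALB and target groups you would already have created.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Path-based routing rule: /images/* requests are forwarded to the image
# servers' target group. ARNs below are placeholders.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/images/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/image-servers/...",
    }],
)
# A second rule with "/videos/*" would forward to the video servers' target group.
```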
Our next question says: you have a website which performs two tasks, that is, rendering images and rendering videos. Both of these pages are hosted in different parts of the world, but under the same domain name. Which AWS component will be apt for your use case among the following? I think this is an easy question, the reason being we just discussed this. The answer is the Application Load Balancer, because the kind of traffic coming in is specific to its workload, and this can be differentiated easily by an Application Load Balancer. Okay, so we are done with the resilient architecture questions.
Now, let's move on to the performant architecture domain, where we will be discussing architectures which are performance-driven. So, let's take a look at the first question. The first question says: you require the ability to analyze a customer's clickstream data on your website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and ad click-through. Which option meets the requirement for capturing and analyzing this data? The options are Amazon SNS, AWS CloudTrail, Amazon Kinesis, and Amazon SES. Let's first start with the odd-one-out options. We have Amazon SNS, which deals with notifications; since we basically want to track user data, SNS would not be the apt choice, because sending multiple notifications in a short amount of time would not be appropriate. Similarly, SES would also not be the apt choice, because then we would be getting emails about the user behavior, and this would amount to a lot of emails, so it is not an appropriate solution either. Then we have AWS CloudTrail and Amazon Kinesis. Actually, both of these services can do this work, but the keyword over here is real time: you want the data to be in real time. Since the data has to be in real time, you will choose Kinesis; CloudTrail cannot pass on logs for real-time analysis, whereas Kinesis is specially built for this particular purpose. Hence, for this particular question, the answer will be Amazon Kinesis.
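As a small illustration of the producer side, here is a sketch of pushing one clickstream event into a Kinesis stream with boto3; the stream name and event fields are made up for the example.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# One clickstream event pushed into a Kinesis stream for real-time analysis.
event = {"user_id": "u-42", "page": "/products/123", "ad_clicked": True}
kinesis.put_record(
    StreamName="clickstream",                     # illustrative stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],                # keeps one user's clicks in order
)
```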
Moving on, our next question is: you have a standby RDS instance; will it be in the same Availability Zone as your primary RDS instance? The options are: it's only true for Amazon Aurora and Oracle RDS; yes; only if configured at launch; and no. I want you to think about it like this: a standby RDS instance is only needed when your primary RDS instance stops working. Now, what could be the reasons that your RDS instance stops working? It could be a machine failure, or a power failure at the place where your server has been launched, or it could even be a natural calamity striking the data center where your server exists. All of these could be reasons that lead to disruption in the RDS service. Now, if your standby RDS instance is in the same Availability Zone as your primary, these situations cannot be tackled. So, it is always logical to have your standby machines in some other place, so that even if there is a natural calamity or a power failure, your instance is always up and ready. Because of that, AWS does not give you the option of launching your standby RDS instance in the same Availability Zone; it always has to be in another Availability Zone, and that's why the answer is no, your standby RDS instance will not be in the same Availability Zone as your primary instance.
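A minimal boto3 sketch shows how this works in practice: you only ask for Multi-AZ, and AWS places the synchronous standby in a different Availability Zone on its own. The identifier, class, and credentials below are placeholders.

```python
import boto3

rds = boto3.client("rds")

# With MultiAZ=True, AWS itself puts the standby in another Availability Zone;
# you never get to pick the standby's AZ.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",           # illustrative identifier
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder credential
    MultiAZ=True,
)
```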
All right, so our next question is: you have a web application running on six Amazon EC2 instances, consuming about 45% of resources on each instance. You are using Auto Scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times; you want the load to be distributed evenly between all instances, and you also want to use the same Amazon AMI for all instances. Which of the following architectural choices should you make? This is a very interesting question. Basically, you want to run six Amazon EC2 instances, they should be highly available in nature, and they would be using one AMI, of course, because they are auto scaled. So, which among the following would you choose? The options are: deploy six EC2 instances in one Availability Zone with an ELB; deploy three EC2 instances in one region and three in another region with an ELB; deploy three EC2 instances in one Availability Zone and three in another Availability Zone with an ELB; or deploy two EC2 instances in each of three regions and use an Elastic Load Balancer. Now, the correct answer would be C. The reason is that AMIs are not available across regions: if you have created an AMI in one region, it will not be automatically available in another region; you will have to do some operations, and only then will it be available in the other region. That is reason number one, so the options that mention multiple regions get cast out. Second, if you look at the first option, which deploys six EC2 instances in one Availability Zone, that defeats the purpose of high availability, because, like I said, if there is any natural calamity or a power failure at a data center, then all your instances will be down. So, it's always advisable to have your servers distributed, but since we have the limitation of using one AMI, and the limitation that it is not accessible across regions, we would choose distributing our instances among Availability Zones. Here we just had the option of two Availability Zones; it could also have been three Availability Zones with two servers deployed in each, and that would also amount to high availability. And of course, because you want to load-balance the traffic, if you apply an ELB on top of the Availability Zones, it will work like a charm; across regions it can become a problem, but across Availability Zones it definitely works, and it will work perfectly. So, the answer for this question is that you would be deploying EC2 instances among multiple Availability Zones in the same region, behind an ELB.
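For a rough idea of how option C could be expressed with boto3, here is a sketch of an Auto Scaling group holding six instances across two Availability Zones behind a target group; the group name, launch template, AZ names, and ARN are placeholders, and the launch template is assumed to reference your AMI.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Six instances from one AMI, spread across two AZs in one region,
# registered behind a load balancer's target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=6,
    MaxSize=6,
    DesiredCapacity=6,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    TargetGroupARNs=["arn:aws:elasticloadbalancing:...:targetgroup/web/..."],
)
```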
All right, so our next question is: why do we use ElastiCache, and in what cases? The answer is basically related to the nature of the ElastiCache service. As the name suggests, it is basically a cache which can be accessed faster than your normal application backend. For example, take a database instance from which you are gathering information: if you are always dealing with the same kind of query, say you are always fetching the password for particular users, then with ElastiCache that data can be captured, or cached, inside ElastiCache, and whenever a similar request comes in asking for that kind of data, your MySQL instance will not be disturbed; the data will be relayed directly from ElastiCache. That is the exact use of ElastiCache. So, you use ElastiCache when you want to increase the performance of your systems, whenever you have frequent reads of similar data. If you have frequent reads of similar data, you will probably be querying the same kind of data every time, and that will increase the load on your database instance. To avoid that, you can introduce an ElastiCache layer between your database and your front-end application, and that would not only increase the performance but also decrease the load on your database instance.
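Here is a minimal cache-aside sketch of that pattern, assuming a Redis-flavored ElastiCache cluster and a MySQL backend; the endpoint, table, and column names are placeholders, and db_conn is assumed to be an already-open MySQL connection.

```python
import json
import redis

# Check ElastiCache (Redis) first, fall back to the database only on a miss.
cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379)

def get_user(user_id, db_conn):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # served from ElastiCache, DB untouched
    with db_conn.cursor() as cur:            # cache miss: hit MySQL once
        cur.execute("SELECT name, email FROM users WHERE id=%s", (user_id,))
        row = cur.fetchone()
    cache.setex(key, 300, json.dumps(row))   # keep it cached for 5 minutes
    return row
```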
So, this was all about performant architectures, guys. Our next domain deals with secure applications and their architectures. Let's go ahead and start with the first question of this domain. It says: a customer wants to track access to their Amazon Simple Storage Service buckets and also use this information for their internal security and access audits. Which of the following will meet the customer's requirement? So basically, you just want to track access to the S3 buckets. If you want to track access, let's see what the options are: you can enable CloudTrail to audit all Amazon S3 buckets; you can enable server access logging for all required Amazon S3 buckets; you can enable the Requester Pays option to track access via AWS billing; or you can enable S3 event notifications for PUT and POST. I would say the answer is A. Why is the answer not B? Because server access logging is actually not required when you want to deal with tracking access to the objects present in the S3 bucket. The Requester Pays option to track access via AWS billing, again, is not required, because there is a very simple feature of CloudTrail available to all the buckets across S3, so why not use that? And using S3 notifications will not be apt, the reason being that there will be a lot of operations happening, so rather than sending notifications for each and every operation, it is better to log those operations so that we can take whatever information we want out of the log and ignore the rest. So, the answer is to enable AWS CloudTrail to audit all Amazon S3 buckets.
Okay, our next question says: imagine you have to give AWS access to a data scientist in your company. The data scientist basically requires access to S3 and Amazon EMR. How would you solve this problem from the given set of options? So, you basically want to give an employee access to particular services, and we want to know how we would do that. The options are: give them the root credentials; create a user in IAM with a managed policy for EMR and S3 together; create a user in IAM with managed policies for EMR and S3 separately; or give them the credentials for an admin account and enable MFA for additional security. Okay, so a rule of thumb, guys: never give root credentials to anyone in your company, not even yourself. You should never use root credentials; always create a user for yourself and access AWS through that user. That was point number one. Second, whenever you want to give permissions for particular services to people, you should always use the policies that pre-exist in AWS. When I say that, I basically mean never merge two policies. For example, if you handle EMR and S3 together, that basically means you create one policy document that gives the required access for both: in one document you mention the access for EMR, and in the same document you mention the access for S3 as well. This is not suggested. The reason is, first, that you have policies created and tested by AWS, so there is no chance of any leak in terms of the security aspect. The second thing is that needs change: if tomorrow your user says they don't want access to EMR anymore and they probably want access to EC2 instead, what will you do? If you had everything in the same policy document, you would have to edit that document. But if you attach a separate document for each and every service, all you have to do is remove the document for EMR and add the document for the other service they require, probably EC2; you just add the document for EC2, and your S3 document is not touched. So, this is much easier to manage than writing everything in one document and editing it later to give permissions for the specific services required. Hence, the answer is to create a user in IAM with the managed policies for EMR and S3 attached separately.
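Here is a minimal sketch of that answer in boto3; the user name is illustrative, and the EMR policy name shown is the older AWS-managed one, so double-check the current policy name in your account before relying on it.

```python
import boto3

iam = boto3.client("iam")

# Create the data scientist's IAM user and attach the two AWS-managed policies
# separately, instead of writing one merged custom policy document.
iam.create_user(UserName="data-scientist")
for arn in [
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/AmazonElasticMapReduceFullAccess",  # older managed policy name
]:
    iam.attach_user_policy(UserName="data-scientist", PolicyArn=arn)
# Revoking EMR later is a single detach_user_policy call; S3 access stays untouched.
```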
All right, let's move on to the next question: how does a system administrator add an additional layer of login security to a user's AWS Management Console? Okay, this is a simple question; the answer is to enable multi-factor authentication. Multi-factor authentication basically deals with rotating keys: the keys are always rotating, so every 30 seconds a new key is generated, and this key is required while you're logging in. Once you've entered your email and password, it will not straight away log you in; it will give you a confirmation page with a code that you have to enter, which is valid only for those 30 seconds. This can be done using apps: there is an authenticator app from Google, and there are apps from other third-party vendors as well. These apps are compliant with AWS, and you can use them to get the keys which change every 30 seconds. So, enabling multi-factor authentication is the best way of adding a security layer over the traditional username and password that you enter.
All right, so our next domain deals with cost-optimized architectures, so let's discuss these questions as well. The first question is: why is AWS more economical than traditional data centers for applications with varying compute workloads? Let's read out the options: Amazon Elastic Compute Cloud costs are billed on a monthly basis; Amazon EC2 costs are billed on an hourly basis, which is true; Amazon EC2 instances can be launched on demand when needed, also true; and customers can permanently run enough instances to handle peak workloads. Because this question is talking about the economical value of AWS, I'll say option B and option C are correct. The reason is that you're charged according to the hour, and at the same time you can have instances on demand: if you don't need them after two hours, you just pay for those two hours, and you don't have to worry about where that server went. This is very economical compared to buying servers whose need finishes, say, after one or two years when their hardware gets outdated, which becomes a bad investment on your part. That is the reason AWS is very economical: it charges you according to the hour and also gives you the opportunity of using servers on the basis of on-demand pricing. So, option B and option C would be the right answers for this particular question.
Moving further, our next question says: you're launching an instance under the free tier usage from an AMI having a snapshot size of 50 GB. How will you launch the instance under free usage? The answer for this question is pretty simple: it is not possible. You have a limit on how much storage you can use that falls into the free tier, and 50 GB is a size which does not fall under the Amazon free tier rules; hence, this is not possible.
All right, our next question says: your company runs a multi-tier web application, and the web application does video processing. There are two types of users who access the service: Premium users and free-edition users. The SLA for the Premium users for video processing is fixed, while for the free users it is indefinite, that is, with a maximum time limit of 48 hours. How do you propose the architecture for this application, keeping in mind cost efficiency? To rephrase this question, basically you have an application with two kinds of traffic: one is free traffic and one is premium traffic. The premium traffic has an SLA that the task should be completed in, say, one or two hours; for the free traffic, there is no guarantee of when it will finish, and it has a maximum SLA of 48 hours. If you were to optimize the architecture for this at the back end, how would you design it so that you get the maximum cost efficiency possible? The way we can deal with it is that there is a thing called Spot Instances in AWS, which basically deals with bidding: you bid for AWS servers at the lowest price possible, and as long as the server price stays in the range that you specify, you have that instance for yourself. So, all the free users coming to this website can be allotted to Spot Instances, because there is no strict SLA; even if the prices go high and the systems are not available, it does not matter, and the processing can wait if you're dealing with free users. But for premium users, since there is an SLA and you have to meet a particular deadline, I would say you use On-Demand Instances. They are a little more expensive, but because Premium users are paying for their membership, that should cover that part. Spot Instances would be the cheapest option for the people who come to your website for free, because they do not have any urgency for their work and hence can wait, if required, when the prices are too high.
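As a rough sketch of splitting the two tiers, here is what that could look like with boto3; the AMI ID, instance type, counts, and bid price are all placeholders, and in a real setup you would more likely attach Spot options to a launch template or an Auto Scaling group.

```python
import boto3

ec2 = boto3.client("ec2")

# Free-edition video jobs go to cheap Spot capacity (no strict SLA).
ec2.request_spot_instances(
    InstanceCount=4,
    SpotPrice="0.02",                           # max price you are willing to bid
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",     # placeholder AMI
        "InstanceType": "c5.large",
    },
)

# Premium jobs go to plain On-Demand capacity so the fixed SLA can always be met.
ec2.run_instances(ImageId="ami-0123456789abcdef0",
                  InstanceType="c5.large", MinCount=2, MaxCount=2)
```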
All right, so our next domain talks about operationally excellent architectures, so let's see what questions are covered in this particular domain. Imagine that you have an AWS application which is monolithic in nature. Monolithic applications are basically applications that have the whole codebase on one single computer; if that is the kind of application you're dealing with, it's called a monolithic application. Now, this application requires 24/7 availability and can only be down for a maximum of 15 minutes. Had your application not been monolithic, I would say there would be no downtime, but since it is a monolithic application, the question mentions there is an expected downtime of, say, 15 minutes. How will you ensure the database hosted on your EBS volume is backed up? Since it's a monolithic application, even the database resides on the same server as the application. So, the question is how you will ensure that the database is backed up in case there is an outage. For this, I will say the answer is pretty easy: you can schedule EBS snapshots for the EC2 instance at particular intervals of time, and these snapshots will basically act as a backup to your database instances which have been deployed on EC2. Hence, the answer is EBS snapshots.
All right, our next question is: which component of the AWS global infrastructure does AWS CloudFront use to ensure low-latency delivery? Now, AWS CloudFront is basically a content delivery network, which means that if you are in the US and the application you're accessing has servers in India, it will probably cache the application on a US server, so that you can access that application faster than sending traffic packets over to India and receiving them back. This is how CloudFront works: it caches the application on your nearest server so that you get the minimum latency possible, and it does this using AWS edge locations. Edge locations are basically the servers located near your place, or near a particular Availability Zone, which cache the applications that are available in different regions or at far-off places.
Okay guys, a quick info: if you want to become a professional AWS Solutions Architect, you can take up the AWS Solutions Architect course provided by Intellipaat. In this course, you will be learning all the concepts required to crack the AWS certification exam, and there are also three major projects involved for you to understand it better. Okay guys, we have come to the end of this session. I hope this session was informative for you, and if you have any doubts regarding this session, please feel free to comment about it below, and we would love to help you out. Thank you.