Hey guys! This is Hemant from Edureka. Today in this session, we will be talking
about Amazon Web Services, so without wasting any more time let's move on ahead and see
what is our agenda for today. So, we will be following a top-down approach,
we will start off with what is cloud and then move on to what is AWS. After that we will be discussing the different
domains which are there in AWS, followed by the AWS services. So, I will be briefing you guys up on all
the AWS services, which are out there, after that we will be discussing the AWS pricing
options. So, we will be discussing all the key points
which are there in AWS pricing. Once we are done with all the learning, I
will teach you guys, how to migrate your applications to AWS infrastructure and in the end, we will be solving a real-life problem using the AWS knowledge that we will be getting today. So guys, this is our agenda for today, are
we clear? Alright. I am getting yeses. Michael says yes, Sandeep says yes, Neeraj
says yes; alright guys, let's go ahead then. So, before starting off with what is cloud,
let us see what were the problems that people faced before Cloud computing. So, before Cloud computing, if you had a use
case, if you had to host a website, 'How would you do that'? So, first of all you have to buy a lot of
servers. Why servers? Because obviously you will be hosting a website
on something, right? So, you have to buy a lot of servers, and
every website has a peak time. So, keeping that peak time in mind, you have
to buy some more servers. Thirdly, you have to monitor and maintain
these servers, since these are your own servers. You have to continuously monitor them, so
that your application does not experience any down time and once you encounter a problem,
you have to maintain it, right? These are the things that you had to do. Now, these things led to problems. What were those problems? Let us go through them. So, first of all, this set-up was very expensive
since you are buying a lot of servers, and guys, servers are not cheap, they are very
expensive, right! So this set-up was very expensive. Secondly, since you had to troubleshoot problems yourself,
it used to conflict with your business goals, because your aim should be making your
application better, right? But if you are also thinking about, whether
your application is or is not facing any down time or if your server hardware configuration
is up to date or not, you are not investing that much time on your application as you
may need to, right? So, it used to conflict with your business goals. Thirdly, since your traffic is varying, your
servers will be idle most of the time. What this basically means is, since you bought
a minimum number of servers keeping the peak time in mind, now, what about the time when
the peak time is gone, right? Let's take an example, say suppose your peak
time is between 4 p.m. to 6 p.m. and when I said peak time, its time when a website
is experiencing the most traffic, right? So, if your website is experiencing the most
traffic between 4 p.m. to 6 p.m. and you are placing say 5 servers to handle that kind
of traffic, what about the time after 6 p.m.? After 6 p.m., when the traffic goes down, say you need only 2 servers, which can
handle that traffic. Now, what about those extra 3 servers? Right? They become a liability on your investments
because you've invested a lot in those servers, but you cannot utilize them now because
you do not have the need for them after 6 p.m. Right? So, it becomes a bad investment, hence it
was a problem. So, these are all the problems that people
faced before cloud computing. Now we have to fix these. So, let's see, how do we fix them? So, we came up with Cloud computing. So, now in Cloud computing, instead of buying
servers, you actually rent them, right? So, instead of burning a lot of your investment
on buying servers, you can actually invest it in some other things, maybe a better
idea or maybe hiring more people, right? So, first of all, the foremost thing is that
renting servers cuts your cost down to a fraction. Secondly, scalability was a problem. So, now with Cloud computing, you can scale
up or scale down according to your needs. So, you don't have to foresee what kind of
future your application will have or what kind of traffic will be coming in the future,
as and when the traffic comes in, you can scale up or scale down as required, right? So, scalability was again an issue, which
has been solved with Cloud computing. Thirdly, now, you don't have to manage your
servers; all of that will be handled by your Cloud provider. You just have to focus on your application. So, your Cloud provider will manage all the
updates and security patches which are required to ensure that your application is
not facing any downtime. The only thing you have to do is choose the right
Cloud provider, because, if you choose a new player which is out there in the market, just
for saving a few dollars maybe, your application may not become that much successful because
new Cloud providers are not equipped to handle the kind of problem that you might face. Right? So, you have to be very careful when you are
choosing a Cloud provider. So, guys, this is how we address the problems
that we faced before Cloud computing. Now, any question until now, any question
that you have that should be answered before we move ahead? Alright. I have a question from Michael, so Michael
is asking me, 'On which parameters should we decide which Cloud provider to choose?' Alright, Michael, good question. Michael, you have to take a lot of
things into consideration; say, suppose, let us talk about AWS. So, AWS came into the Cloud computing market in
2006, right? And if you compare it with other Cloud providers,
say suppose Azure. So, Azure came in 2010, right? So compared to Azure, AWS has a more mature
model of infrastructure, and if I were you, I would choose AWS over Azure, because AWS
has seen a lot since it has started, it has become better in terms of handling problems,
which come up when you are hosting someone else's application. Apart from that, you also take into account
what kind of server capacity does a Cloud provider have and what kind of companies are
associated with the specific Cloud provider. Say, suppose if you talk about AWS, AWS is
hosting applications for Netflix, which is a very successful video streaming application. So, if Netflix is not facing any downtime,
you can be assured that your application is in safe hands as well. Right? So, these are all the parameters that you
take into account when you are choosing a Cloud provider. So, Michael, does that answer your question? Alright. Michael says yes. Any other question guys, anything related
to Cloud computing? Anything about how Cloud computing became important, or when
and why Cloud computing came into the picture? Alright, since it is a yes from all of you,
let's move ahead. So, since now we have understood what was
the need that led to Cloud computing, let us understand what Cloud computing exactly
is. So, 'What is Cloud computing'? So, Cloud computing is the use of remote servers
to store, manage and process data. So, you do three things; you store, you manage,
and you process data. So, when I say store, you are storing a file,
say on a file system on the Cloud. So, when I say manage, you are managing
data using databases on the Cloud. You process your data, so you are using computing
power on the cloud to process your data. Say, suppose you have a huge chunk of file
that has to be processed, right? And you don't have that kind of machine in
your own infrastructure. So, you can always rent a server from AWS,
with the right kind of configuration and you can use that machine to process the data or
process the file that you want to process, right? And once you are done with that you can always
terminate your machine and you will pay AWS according to the number of hours that you
have used this for, and that is the power of Cloud computing. You don't have to buy that computer, exclusively
to process that file. You can rent that server from AWS, use it
and pay them according to your usage, right? So guys, this is what Cloud computing is all
about. Any questions, any kind of doubts that you
have regarding Cloud computing? So, Michael is asking me, 'So is it pay by
use?' Yes, Michael, you pay according to your usage. Yes, bang on, its pricing model is pay
by use. Good to know that you know about Cloud computing
now Michael. Any more questions, any more suggestions guys,
anything more to add to what I just told you guys? Alright. Michael says no. Sandeep says all good. Alright. I am getting confirmations. Alright guys, let us move ahead then. So, we talked about Cloud computing and how
it is a successful model. Right? So, it is bound to have competition in the
market. So, if you want to become a cloud provider,
you will have a lot of competition in the market. Because this concept is huge, it's pretty
advantageous, it's very successful. A lot of people are trying their hand at
Cloud computing, and it's a fact that there are a lot of Cloud players out there, but
why are we discussing AWS? Why are you learning about AWS? Why do you want to become a solution architect
for AWS? Right? So, let us shed some light on that and discuss
why AWS? So, these are three parameters that we will
be discussing. So, first off, we have AWS global cloud computing
market share. So AWS has a global cloud computing market
share of 31%, as compared to its competitors, which have a cloud computing market share
of 69%. Now you would say, 31% is nothing compared
to 69%, right? But, then the thing here to understand is
that AWS alone has a share of 31% in the global cloud computing market share, and no one,
not even Azure, which is the closest competitor to AWS has a number even near to 31%, so this
number is huge. AWS is leading by a very huge margin. Let's talk about the second parameter, which
is the server capacity. So let's consider that all of its competitors
combined, and when I say 'its', I mean AWS's. So, say all of AWS's competitors combined
have a server capacity of 'x'. AWS alone has a server capacity of 6x, which
is 6 times the server capacity of all of its competitors combined, right? So if you have an application which is very
successful and if you foresee that there is going to be more traffic in the future, your
safest bet would be on AWS, because they have that kind of infrastructure for your application
to grow, and that is why AWS has such a large user base. Let's come on to the third point, which is
flexible pricing. Now, any enterprise big or small wants flexibility
in its pricing, right? They want to cut down their cost. They want value out of their money, right? And that is what flexible pricing is all about. So, AWS charges you by the hour, right? So, when you use AWS servers and suppose you
use them for 3 hours, you don't have to pay for the whole day, or for the whole month. You just pay for 3 hours and with this kind
of flexibility in the pricing, it has attracted a lot of customers, right? So, this is the reason that AWS is so successful
today in the market and that is why you guys are learning about AWS, but that does not
mean that the other Cloud providers are not performing well. If you look at Azure, which was launched
only in 2010, it is the closest competitor to AWS, but if
you see the current scenario, AWS is still leading. AWS has more job opportunities and AWS is
more successful than any other Cloud provider. Maybe 3 years or 4 years down the line, other
Cloud providers will start coming at par with AWS, or maybe they'll surpass AWS. Like I said, if you see the current scene,
AWS is the thing and that is the reason, we are learning about AWS. That is the reason you are here, that is the
reason I am teaching you guys about AWS. So, guys, are we clear on why we are learning
about AWS, or why you guys are taking AWS training rather than training for some other Cloud provider? Alright. Michael says yes. The others, Neel and Sandeep, say all good. So, everybody is giving me a "Yes". So, let's go ahead. So, since now we have understood the 'why'
of AWS, let us understand the 'what', so 'what is AWS'? So, AWS is a secure Cloud Services platform. It's a platform on which Amazon offers its
Cloud services and it offers its Cloud services in Compute, Database, Content Delivery and
other Domains. So, AWS is a secure cloud services platform,
so it is a platform on which Amazon offers its Cloud services and what it offers as its
Cloud services? It offers Cloud services in Compute, it offers
Cloud services in Database, and a host of other Domains. So, having said that, let us discuss the different
domains in which AWS offers its services. But, before that guys, any questions that
you have regarding what AWS is about? Alright. Neel says No, Sandeep says No, Michael says,
No. Alright guys, let's move ahead then. So, let's now discuss the different domains
in AWS. So, these are all the different domains in
which AWS offers its services. So, first off we have Compute. So, in Compute, there is a service called
EC2. EC2 is the Elastic Compute Cloud. So, it's just like a raw server. You can configure this server to be anything. You can use it to host a website. You can use it as your work environment;
it's a clean slate. It's just like a new PC that you buy. You install a fresh operating system on your
PC, and then you can configure it to be anything, install any software you want, and then
it can serve you as you require. Right? And that is what EC2 does as well. Right? So, this is the Compute Domain. Next up, we have the Migration domain. So, the Migration domain is for when you want to transfer
your data to the AWS infrastructure or transfer your data back from the AWS infrastructure. So, the migration services are used to transfer
data to and from the AWS Infrastructure. So, if you have petabyte-scale data in
your data center and you want to send it to the AWS Infrastructure, you will be using services
in the Migration domain. Now, there is a service called Snowball in
migration, which is used to physically transfer your data to the AWS Infrastructure. Right? So, basically what AWS does is, it sends
a physical device, which is just like a hard drive, to your premises; you transfer your
data onto it, and then the device is shipped back to AWS, which uploads the data to its Infrastructure. Now, this is Snowball, and it is one of the services offered in the Migration
domain. Now, you would ask me, why are we sending
our data physically to AWS Infrastructure? Why not through the Internet? So, like I said, if you have petabytes scale
of storage in your Data Centre, and you have to send it to AWS Infrastructure, it is better
to send it physically rather than sending it on the internet. You can imagine it, just like giving some
data to your employees, would you give it on a hard drive, on an external hard drive,
if the data is large? Or would you send them an e-mail regarding
that? Right. So, just like that, you can imagine this scenario. Next up, we have the Security and Identity Compliance domain.
In Security and Identity Compliance, you have
services like IAM, which is used to authenticate users and define user rights to them. Say, suppose you are running a company and
you have a root AWS account. Now, you want other employees to work on AWS
account as well. But you want them to have restricted access. Say, suppose you want user 1 to maybe just
launch instances and user 2 can only edit instances, but not launch instances, maybe
user 3 can only review the instances, and not launch or edit these instances, right? So, all of these granular permissions can
be given to your users using IAM, and that is what the Security and Identity Compliance
domain is all about.
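Just to make that idea concrete, here is a minimal sketch of how such a granular permission could be attached with boto3 in Python; the user name 'user1' and the policy contents are purely hypothetical, and a real launch permission usually needs a few more actions than shown here.

    import json
    import boto3

    iam = boto3.client("iam")

    # A granular policy for the "user who may only launch and view instances" idea above.
    launch_only_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:RunInstances", "ec2:DescribeInstances"],
            "Resource": "*",
        }],
    }

    # Attach it as an inline policy to the (hypothetical) IAM user "user1".
    iam.put_user_policy(
        UserName="user1",
        PolicyName="launch-instances-only",
        PolicyDocument=json.dumps(launch_only_policy),
    )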
Next up, we have the Storage Domain. So, the Storage Domain would include services like S3, which is Simple Storage Service. So it's a file system, it's an object-based
file system, in which you can store your files and access them as and when required, right? People usually get confused between Storage
domain and the Database domain, because basically both are storing data, right? So, why 2 different domains? So, Storage, like I said, would include
services like S3, so it's a file system. Now, what is the difference between a file
system and a database? So, a database is not meant to store your raw
files. So, say suppose you have an image file. So, that image file would not be stored in
a database, its better to store that image file in a file system and hence, access that
image file using a path, which can be stored in the database, right? So, this is basically the difference between
a file system and a database. So, like I said, it includes services like
S3, so S3 is an object based file system in which you have buckets and objects which we
will be discussing further in our slides. Moving on, we have the Networking and Content
Delivery domain, so it includes services like Route 53. So, Route 53 is a domain name system which
basically takes the domain name that you purchase from, say, a domain-selling
website like GoDaddy, and redirects your traffic to your instances or your servers which are hosting
your web application. Why do we do this? Because you cannot remember the IP addresses,
right? You need something solid, you need something
simple, and that is what a domain name system is all about. It translates the simple domain name into the IP address
and redirects your traffic to that IP address, right? So, this is about Route 53.
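As a tiny illustration of that name-to-IP translation, here is a one-line Python sketch; it just uses your local resolver, not Route 53 itself, and example.com is only a stand-in domain.

    import socket

    # Resolve a human-friendly name to the IP address the traffic is actually sent to.
    print(socket.gethostbyname("example.com"))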
Next up, we have the Messaging domain. So, this Messaging domain is all about services like, say, the Simple Email Service. It is used to send emails in bulk to your
customer base, right? So, if you have an application where you have
to notify your customers about a new update. So, rather than sending emails to each and
every customer, with the click of a button, you can send it using SES, and you can also
handle the replies that customers give, right? So, all of that can be managed using SES,
which is the Simple Email Service, which comes under the Messaging domain.
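For a feel of how such a notification might be sent with boto3, here is a hedged sketch; the addresses are hypothetical, and SES normally requires the sender identity to be verified first.

    import boto3

    ses = boto3.client("ses")

    # Send one update notification; a real bulk send would loop over your customer list.
    ses.send_email(
        Source="updates@example.com",                      # hypothetical, verified sender
        Destination={"ToAddresses": ["customer@example.com"]},
        Message={
            "Subject": {"Data": "A new update is available"},
            "Body": {"Text": {"Data": "Hi, we have just released a new update."}},
        },
    )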
Next up, we have the Database domain. So, the Database domain would include services like RDS, which is the Relational Database Service. So, the Relational Database Service basically manages
some databases for you. It's not a database in itself, but a managing service which manages
databases for you, so it can manage databases like MySQL, and it can manage databases like PostgreSQL. And when I say manage, it can automatically
update the DB engines or automatically install security patches. All of that is managed using a management
service in Amazon, which is RDS, which comes under the Database domain. Let's move on to the last domain, which is Management
tools. So, management tools are basically tools using
which you can manage your AWS resources. So, in this domain, you have services like
CloudWatch, which is an all-in-one cloud monitoring tool. So you can use these tools and monitor all
the AWS Resources that you are running in your AWS Infrastructure. Right? So guys, these are the different domains in
AWS. These are the different areas in which AWS
offers its services. Any doubt regarding, any of the domains that
we just discussed? Alright, I have a question from Neel. So, Neel is asking me, 'What is content delivery'? So, good question Neel; I mentioned content
delivery and did not explain what content delivery is actually about. So, Content Delivery is basically a caching
service. So, what it basically does is this: if there is
a user who is far from the server which he is trying to access, the content of that server is cached at a location
near the user, so that the latency becomes low and
the response time is faster, right? And that is what content delivery is all about. I will be covering content delivery in detail
in the coming slides. So, just have a little patience, I will explain
everything in detail. Any more question guys, anything related to
the domains that we just discussed? Right. Neel says, all clear. Michael says no. Sandeep says no. Alright guys, let's move ahead then. So, since now we have discussed in what areas
AWS offers its services, let's discuss the AWS Services. Right? So, the first domain in AWS services is the AWS
Compute Domain. So, let's discuss the services under Compute. So, the first service is EC2. So EC2 is the most important service in the
whole of the Compute domain. So, why do I say it's the most important service? Because EC2 is the base and the other services
which are Lambda and Elastic Beanstalk, are just advanced versions of EC2. How? Let's discuss that. So, EC2, like I said before, is just like a
raw server. Now you can configure this raw server to be
anything, right? You can configure it to be a web server like
I said, or your work environment, or something else. Now, this server can be resized according
to your needs. The instances or the server that you have
launched, they can be replicated, as in, you can launch multiple servers of the same configuration
or you can also increase the configuration, right? So this is the kind of resizability that
you get with EC2 and this is what EC2 is all about. So guys, are we clear with what EC2 is? So, the first service in the computing domain
is EC2. So, EC2 like I said is just like a raw server. You can configure EC2 service to be anything. So, it can be configured to become a web server,
it can be configured to become your work environment. The software that you require can be installed,
and the server can be configured as required, right? And that is the kind of independence you get when
you are using EC2. But what is the difference? You can resize the server as and when required,
and you can also change the number of servers that you are using.
As in, if you are using, say, a particular configuration and you want to deploy the same configuration
across a number of servers to host or serve your application, you can do that. Otherwise, you can increase your configuration
on your particular instance. Now, the way to understand this is: just like upgrading from an i3 processor
to an i5 on your own PC, on AWS you can move to a bigger instance configuration; you can
do that kind of resizing as well. So, this is the kind of flexibility you get
when you are using EC2. So guys, are we clear with what EC2 is all
about now? Right, people are giving me a yes. So, let's move ahead. So, let's move on to a second service now,
which is AWS Lambda. So, AWS Lambda, like I said, is an advanced
version of EC2, so it's based on EC2, but the difference between EC2 and Lambda is that Lambda
cannot be used to host your application. Lambda can be used only to execute your background
tasks. Now, what are your background tasks? Say, suppose you have an application, right. Your application is all about images, so when
you upload an image, the image is compressed and it is stored on a file system, right? So, your image first will be uploaded to the
file system, right. So, the uploading of the image is performed
by the application. Now, the tasks which have to be done in the
background like compression, maybe you have some more tasks like applying filters and
everything, these tasks are background tasks and these tasks can be executed using AWS
Lambda. Now, the way AWS Lambda functions is like
this: Lambda responds to events. So, there are triggers that you set up in
AWS Lambda, and in response to these triggers AWS Lambda executes your code, right? So, in this case, in the example that we
took, we are uploading a file, or rather an image. So, the moment that image gets uploaded to
say, suppose S3 which is the file system for AWS, a trigger is generated and that trigger
is being listened for by AWS Lambda. So, when that event is picked up by AWS Lambda,
it responds to that event using the code that you provide; it executes that code and then
sits again and waits for another event to happen. Right? Now that code would include your code for
compression, applying filters etc., and that is how AWS Lambda functions.
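To picture that flow, here is a minimal sketch of what such a Lambda function could look like in Python; the 'compressed/' prefix is hypothetical and the actual compression logic is left as a placeholder.

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Invoked by the S3 "object created" trigger described above.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            image_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            # ... background task: compress the image, apply filters, etc. ...
            compressed = image_bytes  # placeholder for the real compression step
            s3.put_object(Bucket=bucket, Key="compressed/" + key, Body=compressed)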
So guys, are we clear with how AWS Lambda functions, what AWS Lambda is all about, and what the difference between EC2 and AWS Lambda is? Right. People are saying, yes. Alright guys, let's move to the next service now. So, the next service is Elastic Beanstalk. Elastic Beanstalk is again an advanced version
of EC2. But the difference between Elastic Beanstalk and Lambda, and between Elastic Beanstalk
and EC2, is this: first of all, Elastic Beanstalk is used to host an
application. So, if you compare it with AWS Lambda, this
is the difference. Elastic Beanstalk is used to host an application,
Lambda is not used to host an application. So, this is the difference between Lambda
and Elastic Beanstalk. Now, let's talk about EC2 and Elastic Beanstalk. So, Elastic Beanstalk is an automated form
of your EC2. How? With Elastic Beanstalk, you don't have to
configure all the details, and you don't have to set up your environment yourself. Say, suppose you have a PHP website that you
want to host on EC2. Now, for your PHP website to be hosted, you
first have to create a PHP environment in your EC2. Right? But with Elastic Beanstalk, you don't have
to do that; you just have to select what kind of environment you want, and AWS will install
all the configuration files required and will give you the environment on which you just
have to upload your code and your application or your website will be deployed. Right? So, this is how simple, Elastic Beanstalk
is. You create your environment, and then you
upload your code, that is it. Nothing else is required. So as you can see in the diagram, say suppose
you have a PHP code. So, you create a PHP environment. First, launch it, once that environment is
created, you upload the PHP code, and your application is deployed. As simple as that. So, guys, are we clear what Elastic Beanstalk
is all about? Right. People are saying, yes. Neel has a question, so Neel is asking me,
'When would you use EC2 and when would you use Elastic Beanstalk'? Very good question Neel. So, Elastic Beanstalk has a limited number
of environments. Right? So, if you have an environment or an application,
which has to be hosted and the environment is listed in Elastic Beanstalk, you should
go ahead with Elastic Beanstalk. But, say, suppose your environment is not
there in Elastic Beanstalk, maybe, Elastic Beanstalk is not ready to host your environment
yet, or maybe your use case is not about hosting an application; in that case you will be using
EC2, right? You would not be using Elastic Beanstalk. So, these are the differences, Neel. Any doubts in what I just explained or do
you need any further explanation on this? Alright. Any other question guys regarding the differences
between EC2, Lambda and Elastic Beanstalk, any confusion that you guys have? So, Michael is asking me, 'Does this mean
that configuration is easy in EC2'? No, Michael, I mean it the other way round. With Elastic Beanstalk, the configuration
is easy because, like I said, you just have to select: if you want to host a PHP website
on your server using Elastic Beanstalk, you have to select the PHP environment. But if you were to do that in EC2, which is
a raw server, you can do the same thing in EC2 as well, but you first have to install
the PHP software in EC2 and by PHP software, I mean you have to install the PHP environment
in EC2, so that your machine is now ready to understand PHP, right? And also, since you have to host a website,
you have to configure your firewall to be secure, you have to configure your firewall
to allow incoming traffic on to your server. Right? So, all this configuration has to be done
in EC2. But, with Elastic Beanstalk, you do not have
to do all these configurations; everything is done automatically. You just choose whether you want a work
environment or you want to create a server for website hosting. You select website hosting, you choose your
environment, and you upload your code; you don't have to deal with the firewalls. Everything is managed automatically. So, does that answer your question, Michael? Alright. Michael says yes. Any other question guys, any doubts that you
have? Alright. People are giving me a go. Ok guys, so let's go ahead. So, let's discuss what Elastic Load Balancer is. Elastic Load Balancer is basically used
to distribute your workload among a number of instances, right? Say you have 5 or 6 instances; now, the traffic which will be coming on to
these instances has to be distributed equally among these 5 or 6 instances, right? And this is what Elastic Load Balancer does. Now, why is this important? Say suppose you
have 4 or 5 servers running and all the traffic is directed to your first instance, right? So, it doesn't make sense, because all your
other 4 servers are idle. You have the capacity with you, but you have
not set up the mechanism using which the traffic can be distributed among these 5 instances,
and that mechanism is Elastic Load Balancer. So, Elastic Load Balancer distributes the
workload equally among the instances, so that the work is done efficiently and also
the experience is consistent; as in, if I am using, say suppose, a website which is being hosted
in AWS, I should experience the same kind of response time as you are experiencing using
the same website, right? Say suppose, my request is going to the 5th
server, which is less busy and your request, is going to say the 1st server which is more
busy. In that case, your response time and my response
time would become different. That is why we use Elastic Load Balancer. So, if I am using your website and you are
using the same website, you and I will experience the same kind of response time, because the traffic
is being distributed equally among the instances and the instances are busy at the same level. So, this is what Elastic Load Balancer is
all about.
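Just to visualize the idea of spreading requests evenly, here is a toy round-robin sketch in Python; the real Elastic Load Balancer is a managed service with its own routing algorithms, so this only illustrates the concept, not the implementation.

    from itertools import cycle

    # Five (hypothetical) servers sitting behind the load balancer.
    servers = cycle(["server-1", "server-2", "server-3", "server-4", "server-5"])

    def route(request):
        # Each incoming request goes to the next server in turn,
        # so no single instance stays busy while the others sit idle.
        return next(servers)

    for i in range(10):
        print(f"request {i} -> {route(i)}")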
Any questions, guys, regarding Elastic Load Balancer, any kind of doubt that you have regarding this service? Alright. Let's move on to our next service, which is
AutoScaling. So, AutoScaling is a service which is used
to scale up and down automatically, without your manual intervention. Now, how do you do that? You set up metrics. Now, say suppose you have a website running
and that website is running on 5 servers, okay? And you configure a metric that whenever the combined
CPU usage goes beyond 70%, launch a new server. Right? So, whenever your CPU usage goes beyond
70%, it will launch a new server and then, focus guys, then the traffic will be distributed
among the 6 instances, right? I said distributed. So, this work is done by Elastic Load Balancer
and that is why AutoScaling and Elastic Load Balancer go hand-in-hand. They have to be used together. So, if you are using AutoScaling, you have
to use Load Balancer as well, right? And like I said, you can scale up using that
metric and you can also set a metric for scaling down; say suppose your combined CPU usage
goes below 50% or goes below 10%, so, you can configure your AutoScaling to decommission
a server in that case and hence you can scale down from the number of instances that you
are running and again, your work load will now be distributed to, if you have 5 servers
before, it will now be distributed to 4 servers, which again incorporates Elastic Load Balancer. So, like I said, AutoScaling and Elastic Load
Balancer have to be used together.
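A hedged boto3 sketch of the 'add a server when combined CPU crosses 70%' metric could look like this; the Auto Scaling group name is hypothetical, and this is only one of several ways to wire the metric up.

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Scale out by one instance when the alarm below fires.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-website-asg",   # hypothetical group of the 5 servers
        PolicyName="scale-out-on-high-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
    )

    # Alarm on the group's average CPU going beyond 70%.
    cloudwatch.put_metric_alarm(
        AlarmName="cpu-above-70",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-website-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )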
So guys, any question regarding any of the services that we just discussed? Alright. Neel says all clear. Sebastian says all clear. Michael says all clear. Alright, guys, let's move ahead. Before moving ahead, let us make it interesting;
that was a lot of theory, so let me deploy a new EC2 instance for you guys. Let's go ahead
and deploy a new EC2 instance. So, I will go on to my AWS Console and Sign
in. So, this is what your AWS dashboard looks like;
these are all your services, right? So, since we have to deploy a new EC2 server,
we will be clicking on EC2, which you can find under the Compute domain. So, let's click on EC2. Now, this is the EC2 dashboard; over here
you can monitor your EC2 instances. As you can see I have two running instances
as of now and 4 snapshots, 4 volumes and 51 key pairs. Right? So, you can monitor your EC2 services from
over here. Since, we are launching a new instance, we
will click on 'Launch Instance'. Alright, now you have to choose an AMI. So, an AMI is nothing but a machine image, basically the operating system
that you would want on your EC2 server. Right? For now, let's launch a Windows server, so
let's click on 'Windows'. Now you would be asked for the kind of instance
that you have to select, alright. You don't have to worry about the instances,
as of now you can select the t2.micro. And don't worry because we will be discussing
everything in detail as we move ahead in our journey and when we come down to the EC2 Module,
we will be learning all about these different instances. So, for now, just click on t2.micro and click
on Next. Now its asking me, 'How many instances do
I want'? Okay, since its a demo option, since its a
demo, I will launch only one instance and then it is asking me the networking settings
and everything. Okay guys, don't worry. Everything will be discussed in detail in
the EC2 Module. For now let us just focus on deploying this
instance and click on Next. Ignore if you don't understand, I will explain
everything to you guys. Alright. Now we come to Add Storage; so with Windows,
by default, you have to launch with a minimum of 30GB. You can also expand it over here, as
per your requirement. So, I am okay with 30 GB. Let's click on Next. So, now we have to Add Tags, so what do you
want your instance to be named, right? So, you can give your name over here. So,
the Key is Name, and the value could be windows-server, right? Let's click on Configure Security Group. Alright. A Security Group is basically like a firewall
on your instance, it is used to control the inbound and outbound traffic, which comes
on your server. So, how to do that? We will do that in the modules that will be
coming in the coming weeks. For now, just understand what Security Group
is all about. So, it is used to control the inbound and
outbound traffic. If you do not understand that as well, it
is okay. We will just click on Review and Launch. So, we will review all the settings which
we have just done. Everything looks fine to me, let's click on
Launch. Alright. This is a very important step. Now, the way you get authenticated to your
instance is with a private key and a public key configuration. So, its called a key pair. So, basically you can create a new key pair,
as in, a new public key and a private key or you can use an existing one. So, the public key is kept with AWS and the
private key you can download, and whenever you want to connect to your instance, you
will be using the private key. You will be uploading a private key to the
console that you have for launching your instance, and the private key will be matched against the
public key, and you will be given a password, and using that password you can connect to
your instances, right? Whenever you are creating a new key pair, you have
to keep your private key file handy and you have to keep it safe, because once you lose it,
you cannot connect and your data is gone, right? So, you have to be very careful with your
private keys. So, for now, let me create a new key pair
for you guys. So, let's click on create a new key pairs
and give it a name say, aws-demo. Right? So, let's download our private file, it says
that aws-demo already exists. Okay, let me give it aws-demo1 and click on
download key pair, right? My private key file has now been downloaded. Cool. Let's now launch my instance. So my instance is now launching. Meanwhile, let's go back to the EC2 dashboard
and check if our instance is being listed there. So, we will go to EC2. So, as you can see, we had 2 running instances
before, but now we have 3. So, since we have launched a new EC2 server,
we can see 3 running instances, right? So, let's click on it and it will list
all the instances which are running. So, we named our instance 'windows-server',
so here it is, then instance type is t2.micro, you can also look at the key pair that we
have launched it with, so it is called aws-demo1, you can see the time stamp here and you can
see the security group here. So, once you click on your instances or once
you select your instances, you can see all the details attached to your instance. Say, suppose the public IP. So this is the IP that you will be using to
connect to your instance, so your instance type, your instance ID, your VPC, subnet,
everything can be monitored over here. So, the way we will be connecting to this Windows
instance is using a remote desktop connection. So, once you open your remote desktop connection,
it will ask for an IP address. Right? So, that IP address has to be copied from
here. So, this is your IP address that you will
be copying, so once you copy this IP address it will ask you for the user name and password. So, by default, Windows uses a user name called
Administrator, and for the password you have to upload your pem file, which is over here,
on your launch console, and it will decrypt the password for you by matching the private
key with the public key, which is there with AWS, and then using that password you can
connect to your Windows instance. Sounds simple, right? So, let's do it. It's still initializing; let's wait for it
to initialize and then we will start with the demo. Meanwhile, guys, do you have any questions
as of now about the EC2 service which we just launched? Do you have any doubt about whatever steps I just
showed? Right, people are saying no. Alright, guys. Let's just wait for the instance to be launched
then. So, our Windows server is now running. We can see that it shows green over here and
it says running. Alright, guys, let's launch or connect to
our instance now, so you will be clicking on actions and you will be clicking on Connect,
right? Then you will be prompted with a further screen,
where you click on Get Password. Once you do that, you will be asked for the
key pair path, let's choose our key pair now. So, we will go to desktop and we will choose
the key pair, which is aws-demo1. We will click on open and we will click on
decrypt password. So, I can see the password here now, and I
can see the user name, right? Let's now connect to our instance, so let's
copy the password somewhere. So, this is our IP address. So, like I said, we will copy this IP address
and we will launch the remote desktop connection. So, it is asking me the IP address. I will paste the IP address over here and
click on Connect. So, its asking me for my credentials. So, since my user name is Administrator and
then I will copy the password that I decrypted. Then click on 'OK'. So, as you can see it's now connecting to
my server, which I have launched on the AWS Infrastructure and this is it. So, this is your desktop guys, this is the
server that you have launched on AWS Infrastructure. So, just like a fresh operating system, you
can click on Start. You can see it just the way you would see it on your own
machine; now you can install any software over here and you can configure this server
to be anything. Pretty cool, right? So, guys, this is what EC2 is all about and
this is how you can connect to your EC2 server once you launch it.
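By the way, everything we just clicked through can also be done from code; here is a rough boto3 equivalent, where the AMI ID is just a placeholder you would swap for a Windows AMI in your region.

    import boto3

    ec2 = boto3.client("ec2")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder: a Windows AMI in your region
        InstanceType="t2.micro",
        KeyName="aws-demo1",               # the key pair we created in the demo
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "windows-server"}],
        }],
    )
    print(response["Instances"][0]["InstanceId"])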
Alright, guys, let's come back to our slides. So, we just learnt how to launch an EC2 server on the AWS Infrastructure. We learnt how to connect to a Windows instance
on AWS. Right guys? Any kind of doubts, that you had in the demo
session or any doubts in the services which we just saw? Alright, Michael says no. Neel says no. Sandeep says all ok. Alright, guys, let's move on to our second
domain which is AWS Storage Domain. Alright. So, let's discuss the services in the Storage
Domain, so first up we have S3, which is the Simple Storage Service. So, S3 is a file system; it is an object-based
file system, which basically means that all the files that you upload on S3 are treated
as objects, right? And these objects have to be stored in a bucket. Now, what a bucket means is, you
can consider the bucket to be a folder. So, the root folder has to be a bucket, right? You cannot just upload files onto S3. You have to first create a bucket, and inside
this bucket, the secondary folders are called folders as in the normal lingo, but the first
folder, the folder in the root is called the Bucket. Now, once you create a Bucket, its very simple,
you can upload your files and these files in the AWS lingo are objects. Alright. So, you can upload your objects and these
objects will have a certain path, which you can incorporate in your application and that
is how you can access your files from your file system. This is what S3 is all about, pretty simple.
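Here is a small hedged sketch of that bucket-and-object idea in boto3; the bucket name is made up (bucket names are globally unique), and outside us-east-1 create_bucket also needs a region configuration.

    import boto3

    s3 = boto3.client("s3")

    bucket = "edureka-demo-bucket-12345"          # hypothetical, must be globally unique
    s3.create_bucket(Bucket=bucket)

    # The local file becomes an object; its key ("images/photo.jpg") is the path
    # you would then reference from your application.
    s3.upload_file("photo.jpg", bucket, "images/photo.jpg")

    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": "images/photo.jpg"}
    )
    print(url)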
Let's move on to our next service, which is CloudFront. So, CloudFront is that content delivery network
that I told you guys I would explain. So it's a caching service, like I said, so as
you can see in the diagram if a user wants to connect to a website which is very far
from the user's location, that website can be cached at a location which is near the user
and from that location the user can access that website. Now, why would you do this? Its because the response time becomes less
in this case. So, say suppose if you were using that website
from that far-off server, you were getting a latency of, say, around 0.7 seconds
or 0.8 seconds. Now, with this caching, you can get the same website in around 0.2 seconds or 0.3 seconds. Now, this is very significant when you compare it
website for around 0.2 seconds or 0.3 seconds. Now, this is very huge when you compare it
at a global level and that is what Content Delivery Networks are all about. So, guys, any question on the Content Delivery
Network or Amazon Cloudfront which we just discussed? Alright, Michael has an interesting question. So, Michael is asking me, 'What are Edge Locations'? So, someone has done their self-study. Alright Michael, so this server that we are
talking about, this server that has been used to cache your website, that is a web server
which is near your location, is called an Edge Location, right? So, when the user is trying to access a
website which is far off, and if the Content Delivery Network has been enabled for it,
that website is cached at a location which is near the user, and this location is called
an Edge Location. So, basically, this location would comprise
a group of servers on which the data is cached, and Edge Location is nothing but
the name given to this group of servers. So, Michael, does that answer your question? Alright. Any more questions, guys? Alright, everybody is giving me a go, so let's
move on to our next service. So, our next service is Elastic Block Storage. So, Elastic Block Storage is basically like
a hard drive to EC2. So, when you are using EC2 instances, obviously
your operating system or your software is being stored somewhere, right? So, EC2 is backed by EBS for that matter,
so EBS basically acts as a hard drive for EC2 and it cannot be used independently, it
has to be used with EC2 only. Another interesting fact about EBS volumes
is that one EC2 instance can be connected to multiple EBS volumes, but the vice-versa
is not true. One EBS volume cannot be connected to n number
of EC2 instances, or more than one EC2 instance. Each EBS volume can be connected to only one
EC2 instance, and the reason why is that you can imagine it like this: if you have one hard
drive, then at one time that hard drive can only be connected to one computer or one mother
board, right? You cannot connect that same hard drive to
2 or 3 motherboards, correct? And that is the same case with EBS as
well, so it's pretty logical when you think of it like that.
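In boto3 terms, creating a volume and attaching it to a single instance looks roughly like this; the availability zone and instance ID are hypothetical, and the zone must match the instance's zone.

    import boto3

    ec2 = boto3.client("ec2")

    # A 30 GB volume, like the root volume we saw in the EC2 demo.
    vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=30)
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

    # One volume attaches to exactly one instance at a time.
    ec2.attach_volume(
        VolumeId=vol["VolumeId"],
        InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
        Device="/dev/sdf",
    )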
Any questions, guys, regarding EBS? Let's go ahead. Let's move on to our next service, which is Amazon Glacier. So, Amazon Glacier is a data archiving service.
from your say, suppose S3 or EC2 instance, you back it up on Amazon Glacier. So, why would you back it up on Amazon Glacier
is because it uses magnetic tapes, and these magnetic tapes are cheap, and hence your data
storage on Amazon Glacier becomes cheaper, right? Now, why would you store on Amazon Glacier? Which data would you store on Amazon Glacier? So, you would store data which is not that
frequently accessed. Your use case would be something like a hospital,
wherein you have to store the test records of all your patients, right. But, what about the test records which are
more than 6 months old? Those patients are not returning, right? So, you can store that data on Amazon Glacier
and if tomorrow, or maybe sometime later, that patient returns, the data can always be retrieved
from Amazon Glacier. Obviously, since it's cheaper, the retrieval
time will be longer if you compare it with S3 or with EC2, but then it's worth it because
it is cheaper, so this is how you would use Amazon Glacier. Any doubts guys, about 'What Amazon Glacier
is'? or 'which data would you store on Amazon Glacier'? Alright. Michael says no, Sandeep says no, Neel says
no. Alright guys, good going. Let's move on to our next service, which is
Snowball. So, Snowball is a way of transferring your
data to the AWS Infrastructure or transferring your data back from AWS Infrastructure. Now, how do you do it is like this. So, you have your data in your datacenter,
right? Your Snowball device is connected to your
data centre and all of your data is transferred to the Snowball device. Now, this Snowball device is then shipped
back by AWS to its infrastructure and then your data is uploaded. Now, 'When will you use it'? When you will have large amount of data or
you have petabytes scale of data, which if you are trying to send it across using the
internet, it will take a long time. But if you are using Snowball, using Snowball
this data transfer process becomes more fast because you are physically transferring the
data and the data is being transferring in bulk, right? And that is where Snowball comes in handy. So, guys this is what AWS Snowball is all
about. Any question related to AWS Snowball? Alright, people are saying no questions, all
clear. Alright guys, let's move ahead then. Okay, Michael says, 'Can you explain it one
more time?' Alright Michael, let me go over Snowball once more. So, Snowball is basically a physical device
which is used to transfer data from your datacenters to the AWS Infrastructure. Now, say, suppose you have an application
and you decide to make a move to the Cloud, right? You are hosting your application on your own
for now, but you have decided okay, I want my application to be on the AWS Infrastructure,
right? Now, your application is already running,
it is very successful, it has a huge data set that is being served by your servers on
your own datacenter. Now, this data set has to be migrated to the
AWS Infrastructure, right? Now, you can do it in 2 ways: you
can either transfer it online using the Internet, or you can do the transfer using a physical
device. Right? Now, this physical device is Snowball. So, when will you use Snowball and not the
internet? When you have a huge amount of data, right? When you have, say, data at the petabyte scale. In that case you would use Snowball to transfer
your data, right? Now, how this process happens is like this: you have to request a Snowball device from AWS,
right? So, that Snowball device will come to your
premises, you transfer the data on to that Snowball device and then that Snowball device
is shipped back to the AWS Infrastructure, where the AWS experts will upload the data
to their own datacenters, right? Now, if you were to use the internet for a petabyte-scale
transfer, it could take a lot of time. But with Snowball, all of this can happen
within 10 days, so you are saving cost on your internet. You are saving bandwidth and your process
is happening faster. So, this is how Snowball comes in handy.
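A quick back-of-the-envelope calculation shows why; this assumes a hypothetical sustained 1 Gbps internet link, and your real link speed will differ.

    # 1 petabyte over a (hypothetical) 1 Gbps link, versus roughly 10 days quoted for Snowball
    data_bits = 1 * 10**15 * 8      # 1 PB expressed in bits
    link_bps = 1 * 10**9            # 1 gigabit per second
    days = data_bits / link_bps / 86400
    print(round(days, 1))           # about 92.6 days of continuous transfer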
Now, any questions, guys, regarding what Snowball is? Alright. People are saying all clear. Let's move on to our next service, which is
Storage Gateway. So, Storage Gateway is a service which is
used between your datacenter and your Cloud. Alright. It can be used between your datacenter's resources
as well. Now, how is it used? It is used like this. Say, suppose you have database servers and
your application servers, right? So, now your Storage Gateway will sit in between
your database servers and your application servers, and it will keep on taking snapshots
of your database and will keep on storing it on S3. Now if you have say, suppose 3 or 4 database
servers and you have Storage Gateway installed, and your 4th database server gets corrupted
because of some reason. Now, what Storage Gateway would do is, it
will recognize that a failure has happened. It will take the relevant snapshot of the respective
server and restore that server from the snapshot that was taken, and that is how Storage
Gateway works. This is about your private resources when
you are using your own database server and your application server, the same can be done
in AWS Infrastructure as well when you are using EC2 and RDS which is a database service. So, storage gateway can sit in between these
2 services and can serve the same purpose. So, this is what Storage Gateway is all about. Guys, any questions related to Storage Gateway? Also, this brings us to the end of the storage services. So, any questions related to the services
which we just discussed? Alright. Everyone is saying all clear. Alright, guys. Let's go ahead then. So, let's move on to the next domain, which
is the AWS Database domain. So, the Database domain would include all
of these services, so the first service in the Database domain is RDS, which is a Relational
Database Service. So, the thing to understand here is that
RDS is not a database itself; it is a database management service, so it manages databases for you. Now, which databases does it manage? It manages relational databases for you, right? So, it manages databases like MySQL, it manages
databases like Oracle, MariaDB, PostgreSQL, Microsoft SQL Server, and Amazon Aurora, right? Now, what are the management tasks that we
are talking about? So, it updates the DB engines automatically,
it installs the security patches automatically, so everything that had to be done manually
or would be done manually if you were hosting a database server yourself, RDS does that automatically
for you and that's why it is called a Management Service, right? So, this is RDS. Now, the difference between RDS and a service
that we will be discussing some time later, that is DynamoDB, is that they both are management
services, but RDS is for Relational Databases and DynamoDB is for non-relational databases,
so we will see that in detail later. So, for now, let's move on to our next segment,
which is Amazon Aurora. So, Amazon Aurora is a database, which has
been developed by Amazon itself. So, it is included in RDS, i.e., it is relational
database which is also managed by RDS. But, what is the difference between Amazon
Aurora and the other databases which are already out there is this. So, Amazon Aurora is actually based on MySQL. It means the code that you are using with MySQL
will work with Amazon Aurora as well. But then Amazon claims that Amazon Aurora
is 5 times faster than MySQL. So, if you are using MySQL and you replace
MySQL server with an Amazon Aurora server, you will experience a 5 times boost in your
performance, and this is what Amazon claims and that is what Amazon Aurora is all about. So, if you are using Amazon Aurora and you
were using MySQL before, you don't have to change your code, because it will work exactly
the same as it did when you were working with MySQL. There is no change in the code between
MySQL and Amazon Aurora, but you will get a performance boost when you
are using Amazon Aurora.
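To see what 'no code change' means in practice, here is a hedged sketch using the PyMySQL client (any MySQL client behaves the same way); the endpoint is a made-up Aurora cluster endpoint, and the credentials mirror the demo values we will use later.

    import pymysql

    # The only thing that changes is the host: point the same MySQL code at Aurora.
    conn = pymysql.connect(
        host="edureka-demo.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
        user="Edureka",
        password="edureka123",
        database="edureka_demo",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT NOW()")
        print(cur.fetchone())
    conn.close()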
So, guys, are we clear about what Amazon Aurora is all about? Okay, so Michael has a question: 'So, do I first need to host a database like MySQL
and then take the RDS service to manage it?' No, Michael, it's not like that. I will actually
launch an RDS database, just have patience for a while and then everything will be clear
to you. Alright Michael? So, for now, for a general
understanding, you can take it that RDS is a database management service. The tasks that are performed by RDS, that is,
auto-installing security patches, auto-updating the DB engine, handling backups and rollbacks, everything
is done automatically by RDS, alright? Any questions related to Amazon Aurora guys,
what Amazon Aurora is? Any doubts? Alright, people are saying all clear. Alright guys. Let's move ahead then. So our next service is DynamoDB. So, like I said, DynamoDB is also a management
service, but it manages non-relational databases for you. So, when I say non-relational databases, I mean
NoSQL databases. So, if you have unstructured data that has
to be stored in a database, so where will you store it? You cannot store it in a relational database;
you have to store it in a non-relational database, and the non-relational databases are managed
by DynamoDB, right? So, DynamoDB is actually a NoSQL database,
which also gets managed automatically. Alright? So, when I say managed, its gets updated,
the security patches are installed automatically, everything is done by DynamoDB itself and
there is no manual intervention required. Also, the thing with DynamoDB is that you
don't have to specify the amount of space that you will be needing, the moment more
data comes in, the database automatically scales, right? So, it grows automatically. Your manual intervention is not required. If the storage is coming low, nothing is required
in DynamoDB, you just go to the DynamoDB console and you start creating new tables, it doesn't
ask you for anything. You don't have to configure any storage property
or anything like that. It grows automatically and shrinks automatically
as well. So, this is about DynamoDB guys. Any question regarding DynamoDB? Alright. The next service in the database domain is
DynamoDB. So, like I said, RDS is a database management
services for relational databases, right? DynamoDB is a database management service
for non-relational databases. So, when you have unstructured data, you store
your unstructured data in non-relational database. You cannot store your unstructured data in
the relational database, because relational databases consist of data that can be structured
in a table, but when you talk about unstructured data, say, suppose, let's talk about something
random, let's talk about your post on facebook, they are all random, right? So, if you have to analyze your posts on facebook,
you just have to take that data and you feed it into a non-relational database and it makes
sense out of it. This is the power of a non-relational database,
and DynamoDB is a non-relational database management service. So, its a service which manages a non-relational
database for you. The other thing with DynamoDB is that you
don't have to specify the storage space that you will be needing. As and when the storage requirements increase,
your database scales automatically. So, if you have a storage requirement of 10GB
today and tomorrow you are feeding data of say 5GB more, so your database will grow automatically
and you will be charged according to that. If you are using 15GB, you will be charged
for 15GB and so forth. So, you don't have to manually intervene to
increase your storage; DynamoDB does that automatically. This is about DynamoDB, guys.
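A minimal boto3 sketch of that 'no storage sizing up front' behaviour might look like this; the table name and attributes are hypothetical.

    import boto3

    dynamodb = boto3.resource("dynamodb")

    # No storage size anywhere: you only describe the key and start writing.
    table = dynamodb.create_table(
        TableName="posts",                                            # hypothetical table
        KeySchema=[{"AttributeName": "post_id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "post_id", "AttributeType": "S"}],
        BillingMode="PAY_PER_REQUEST",
    )
    table.wait_until_exists()

    # Items can carry whatever unstructured attributes they like.
    table.put_item(Item={"post_id": "42", "text": "a random post", "tags": ["aws", "nosql"]})
    print(table.get_item(Key={"post_id": "42"})["Item"])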
Any questions related to DynamoDB? Since you guys are all clear, let's move on to our next service, which is ElastiCache. So, ElastiCache is a caching service; it
is used to set up, manage and scale a distributed in-memory cache environment in the cloud. What that basically means is, say suppose,
you have an application, right? Now, the way databases work is like this: you query something from the database, the database
processes that query, and in turn gives you a result. But, what if there is a lot of demand for
a specific kind of result set? So, the same query is running again and again
and again, so it increases the overhead on your database in getting the same results
again. So, with ElastiCache what you can do is, as
you can see in the diagram, the user was first getting this query answered from the database, but now,
since it has been analyzed that this query is being asked very frequently, the result
set is stored in ElastiCache, and whenever that query comes in, ElastiCache directly serves that result. So, the DB's overhead, or the database's overhead,
is reduced, because the request never reaches the database; it has already been processed
and it is stored in ElastiCache and the ElastiCache in turn, serves the user with the result and
hence this process becomes faster. And this is what ElastiCache is all about. Any question guys regarding what ElastiCache is? Alright, people are saying all clear.
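As a rough illustration of the cache-aside pattern just described, here is a minimal Python sketch, assuming a Redis-flavoured ElastiCache cluster and the redis and pymysql libraries; the endpoints, credentials and query are all placeholders, not values from this session.

```python
import json
import redis
import pymysql

# Placeholder endpoints and credentials, for illustration only.
cache = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com", port=6379)
db = pymysql.connect(host="mydb.example.rds.amazonaws.com",
                     user="edureka", password="edureka123", database="edureka")

def get_images():
    # 1. Try the cache first.
    cached = cache.get("recent_images")
    if cached is not None:
        return json.loads(cached)          # cache hit: the database is never touched
    # 2. Cache miss: run the query against the database.
    with db.cursor() as cur:
        cur.execute("SELECT * FROM image")
        rows = cur.fetchall()
    # 3. Store the result set in the cache for the next request (expire after 60 seconds).
    cache.set("recent_images", json.dumps(rows, default=str), ex=60)
    return rows
```

Alright guys, let's move on to the last service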
in the database domain, which is RedShift. So, RedShift is a data warehouse service; it's a petabyte-scale data warehouse service. So, it can be fed data from RDS, it can be fed data from DynamoDB, and it does analysis on that data. It's an analysis tool; it is a data warehouse
service which can be used to analyze all the data that you have stored on your database,
say like RDS or DynamoDB. Alright? So, it's an analytics tool. So, guys, any question related to RedShift? It's a data warehouse service, like I said, which is used to analyze data, and you can feed it data from RDS and DynamoDB as well. Any question guys?
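Just to show what analyzing data in RedShift can look like in practice, here is a minimal sketch using the psycopg2 library, since a RedShift cluster speaks the PostgreSQL wire protocol; the cluster endpoint, credentials and table are placeholders, not resources created in this session.

```python
import psycopg2

# Placeholder cluster endpoint and credentials, for illustration only.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,                      # default RedShift port
    dbname="analytics",
    user="edureka",
    password="edureka123",
)

with conn.cursor() as cur:
    # A typical warehouse-style aggregation over data loaded from RDS/DynamoDB.
    cur.execute("SELECT upload_date, COUNT(*) FROM image_events GROUP BY upload_date")
    for upload_date, total in cur.fetchall():
        print(upload_date, total)
```

Alright. Michael says no, Neel says no, Sandeep says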
all clear, Sebastian says all clear. Alright guys, so this is the end of the database services, ending with RedShift. Let us now launch an RDS service and, like Michael asked, he had a doubt. So, let's see if we can clear his doubt when
we launch an instance in the RDS. So, let's go back to our AWS Management console,
let's go to the dashboard for AWS. So, under the database domain we can find
RDS over here, we click on RDS. Alright? We will click on instances, so there are no
instances as of now, so let's launch a DB Instance. So we have to select which database we want to be managed, right? So, Michael was asking me whether first we have to
host a database and then we incorporate RDS, is it like that? So, no Michael, the moment you click on RDS
and you try to launch an instance, it will ask you which database do you want to manage,
right? Say, suppose I want to manage MySQL, or I have my databases, say, hosted on a MySQL server on my localhost, right? If I want them to be migrated to the AWS infrastructure, I would want the same database over there as well, right? So, I will choose MySQL over here and click on Select. Now it will ask me which environment I want to launch it in. So, there are two kinds of environments, one
is the prod environment and the other is the Dev/Test, that is the development or the test
environment, since ours is a demo, so we will launch it in a test environment and click
on Next. So, over here we will be configuring our instance,
so it will ask me for DB Instance class, let's select the minimum which is db.t2.micro, so
we will be discussing all of these instance types when we reach the database modules,
so don't worry about it, just go with the flow. Select db.t2.micro and then it will ask me
for Multi-AZ deployment, so multi-availability-zone deployment is something related to availability: if you want your database to also be there in a different availability zone, so that your database stays available if one availability zone goes down, you would choose the "Yes" option over there. Since ours is a demo we'll select "No", and now it is asking for the storage type, so it's SSD, Provisioned IOPS or Magnetic. Basically the options here are given according to the use case, so if you have an application which needs low latency, you would choose accordingly, and we will discuss all of these in the database module, don't worry. Let us keep it at default for now and
let's go in the settings. So, it is asking me how will a database be
identified, so let's give it the name as 'edureka-demo', alright? Let's give the master username as 'Edureka',
master password as "edureka123" and let's confirm the password again. Alright guys, let's click on the next step
now, so it is now asking me for VPC, now guys this is a step which is very important. If you have an application that you want to
deploy in AWS and you want that application to interact with the database as well, you have to keep both these services together. Say suppose you host an application on EC2; now EC2 and the RDS instance, or whatever service you are using for your database, have to be included
in the same VPC so that they can interact. This has to be kept in mind when you are trying
to deploy an application on the AWS infrastructure. So, let us keep it at default for now. So, I have a VPC called 82a742e5, so let's
keep it at that, and it's asking me VPC Security Group, so let's select the default security
group, DB name let's leave it blank, we will launch our own DB or we will create our own DB, okay, so over here you can select which version you want, which is 5.6, everything else looks fine, you don't have to change anything here because we have not studied
anything of this yet, so once you do that I will explain you guys all of this, right? Let's click on launch DB now, Launch DB instance,
so it says my DB instance is being created. It will take a while for my DB instance to
be launched, let's see if we have our instance listed in our RDS Dashboard. Alright, so my instance is created now, so
it will take a while, let's look at the things over here. So, once you click here, you will see all
the data associated with your RDS instances. Now the way you will connect to your RDS instance is using your command line, right? So using a command line, you will use a command called mysql, so for using that command you first have to go to the bin directory of your MySQL installation, so yes, you should have MySQL installed on your local machine from which you are trying to connect to your RDS instance. So you will enter the command mysql once you are in the bin directory in command prompt, followed by -h, and -h is for your hostname, right? And the hostname is nothing but your endpoint. I will show you guys exactly what to do, but since it is being created I'm explaining you guys the process. So the endpoint is basically the hostname that you will get, so you will get that once this is created, right. So, when I say mysql -h followed by the endpoint, then -P, which is the port number using which you will connect to RDS, so you have to mention the port number, and that will also be mentioned in the endpoint, and then you have to enter the username with -u. So, if you guys remember, I entered the username as Edureka, and it will be followed by -p (small p) which is for the password; the full command looks like mysql -h <endpoint> -P 3306 -u Edureka -p. Hit Enter, and on the next line it will ask you for the password and you will enter the password
that you have specified while launching your instance, in our case it was "edureka123",
so we will enter "edureka123" over there and then hit Enter and if everything goes fine
we would be able to connect to our RDS instance. So, guys this is the whole process, in a few
minutes I will explain you guys how we will be doing that. We are just waiting for a DB instance to be
created, so let's wait a little longer. Alright guys, our RDS instance is launched,
let's now try and connect to our RDS instance, so for that let's launch a command prompt and navigate to my MySQL directory; for me it's in wamp64, in the bin directory, alright. Now I'm in the bin directory of my MySQL installation, now I will enter the following command to connect to my RDS instance, so the command is mysql -h, where -h is for the host name, so now I will be entering the host name, that is the end
point that AWS provides me on the console, so you will select your RDS instance and you
will click on your end point, and you will hit copy, alright. So, once you have copied it, go back to command
prompt and paste it here, so the 3306 is actually your port number, so you have to enter it separately, so just delete the 3306 part and enter -P; remember guys it is capital P, it's not small p, small p is for the password, capital P is for your port number, alright. So you'll enter -P and then 3306 which is
your port number, once you have done with that then you will enter your user name, so
for that you will enter the tag as -u which is the username, so if you guys remember it's
edureka followed by -p which is the password which we will enter in the next line. So, you will hit enter just after -p now,
so in the next line it will ask you for enter password, right? Over here you will enter your password which
you have specified in the installation, hit Enter and if everything works fine you will
be connected to the MySQL instance. Yes, so we are connected to our RDS instance now, so
you can enter the MySQL commands as you enter in your normal MySQL, so we will enter 'show
databases', so these are all the databases which are there for now on my RDS instance.
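By the way, if you prefer to connect from code rather than the mysql client, here is a minimal Python sketch using the pymysql library with the same kind of connection details; the endpoint shown is a placeholder standing in for whatever your own RDS console gives you.

```python
import pymysql

# Placeholder endpoint: copy the real one from the RDS console (without the :3306 part).
conn = pymysql.connect(
    host="edureka-demo.abc123xyz.us-east-1.rds.amazonaws.com",
    port=3306,                 # the same port that is shown next to the endpoint
    user="Edureka",
    password="edureka123",
)

with conn.cursor() as cur:
    cur.execute("SHOW DATABASES")      # the same check we just did from the command prompt
    for (db_name,) in cur.fetchall():
        print(db_name)

conn.close()
```

So this is how you connect to your RDS instance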
once you have launched it. So, Michael, like I said, you had a question
whether you have to host the MySQL service first and then launch an RDS instance to manage
it, so Michael, does that answer your question? Does this practical explain what your doubt was? Alright guys, so Michael says yes. Any doubts that you guys have, other than Michael? Alright. Everybody is giving me a go. Alright guys, so let's move ahead, let's come
back to our presentation. So we are done with all the database services
now. Let me give you guys a recap what we just
learned, so RDS is a relational database management service, Aurora is a database which is built by Amazon, which is based on MySQL and performs 5 times faster than MySQL, alright. DynamoDB is a database management service for NoSQL databases, ElastiCache is a caching environment, it is used to cache results which
reduces the latency and reduces the overhead on databases as well. RedShift is a data warehouse service which
can be used to do analysis on data and the data can be fed from RDS and DynamoDB as well. So guys, these were the database services. Any questions with any of these services which
we just discussed? Alright, you guys are giving me a thumbs up. Alright guys, let's move on to our next domain. Our next domain is the Networking domain,
so let's see what all services are offered by the networking domain. So, Networking domain basically offers 3 kind
of services, the VPC, Direct Connect and Route 53. Let's discuss each one of them. So, VPC is a virtual private cloud, so it's
a virtual network. If you include all your AWS resources that
you have launched inside one VPC, then all these resources become visible to each other
or can interact with each other once they are inside the VPC. Now, the other use for VPC is that when you
have a private data center, and you are using AWS infrastructure as well, and you want your
AWS resources to be used as if they were on your own network, in that case you will establish
"virtual private network" that is a VPN connection to your virtual private cloud in which you
have included all the services that you want on your private network. You will connect your private network to the
VPC using the VPN and then you can access all your AWS resources as if they were on
your own network, and that is what VPC is all about. It provides you security, it makes communication between the AWS services easy and it also helps you connect your private data center to the AWS infrastructure.
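For a rough idea of what creating your own virtual network looks like programmatically, here is a minimal boto3 sketch that creates a VPC and one subnet; the CIDR ranges are arbitrary examples, and the VPN / private data center side described above is not shown here.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a VPC with a private address range of our choosing (example CIDR).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve out one subnet inside that VPC; resources launched here can talk to each other.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print("Created VPC", vpc_id, "with subnet", subnet["Subnet"]["SubnetId"])
```

So guys, this is what VPC is all about, any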
doubts in what VPC is. Alright, people are giving me a heads up. Let's go ahead on to our next service which
is Direct Connect. So, Direct Connect is a replacement to an
internet connection; it is a leased line, a direct line to the AWS infrastructure, so
if you feel that the bandwidth of internet is not enough for your data requirements or
networking requirements you can take a leased line to the AWS infrastructure in the form
of the Direct Connect service, so instead of using the internet, you would now use the
Direct Connect service for your data stream to flow between your own datacenter to the
AWS infrastructure and that is what Direct Connect is all about, nothing much further
to explain. Let's move on to a next service, which is
Route 53. So, Route 53 is a domain name system, so what
is a domain name system. Basically whatever URL you enter has to be
directed to a domain name system, which converts the URL to an IP address; the IP address is of the server on which your website is being hosted. The way it functions is like this: you buy
a domain name and the only setting that you can do in that domain name or the setting
which is required in the domain name are the name servers, right. Now, these name servers are provided to you
by Route 53. These name servers that Route 53 provides you are to be entered in the settings of that domain name. So, whenever a user points to that URL, he will be pointed to Route 53. With that, the domain name settings are done. You will have to configure Route 53 now. Now that your request has reached Route 53
it has to be pointed to the server on which your website is hosted, so on Route 53 now
you have to enter the IP address or the alias of the instance to which you want your traffic
to be directed to. So you feed in the IP address or you feed
in the alias, and it's done. The loop is now complete. Your URL will now get pointed to Route 53
and Route 53 in turn will point to the instance on which your application or website is being
hosted. So, this is the role which Route 53 plays. It's a domain name system, so it basically redirects your traffic from your URL to the IP address of the server on which your application or website is hosted.
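To give you a feel for what entering the IP address or alias in Route 53 looks like outside the console, here is a minimal boto3 sketch that adds a simple A record to a hosted zone; the zone ID, domain and IP address are all placeholders, not values from this session.

```python
import boto3

route53 = boto3.client("route53")

# Point www.example.tk at the server's IP address (all values are placeholders).
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.tk",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```

So, guys any question related to what Route 53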
is? People say all clear. Alright guys, so we are done with the networking
domain. Let's move to our next domain which is AWS
Management Domain. So the management domain includes all these
services. Let's start of with the first service, which
is CloudWatch. So, CloudWatch is basically a monitoring tool,
which is used to monitor all your AWS resources in your AWS infrastructure. Now, how you can monitor them is, let's take
an example - say suppose you want to monitor your EC2 instance, you want to be notified
whenever your EC2 instance's CPU usage goes beyond say 90%, right. So, you can create an alarm in CloudWatch, and whenever your usage crosses 90%, it will trigger an alarm, and that alarm in turn will send you a notification, maybe by email or through whatever channel you set, so this is what CloudWatch is all about. You can set alarms and trigger actions based on the metrics you get from your AWS resources.
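As a rough sketch of that CPU alarm, here is what it could look like with boto3, assuming an existing EC2 instance and an SNS topic to notify; both the instance ID and the topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the average CPU of one instance stays above 90% for 5 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="edureka-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:notify-me"],        # placeholder topic
)
```

So guys, any questions related to CloudWatch? Alright. Let's go ahead. So, our next service is CloudFormation, so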
CloudFormation is basically used to templatize your AWS infrastructure, now why would you
templatize your AWS infrastructure is when you have different environments and you want
to launch the same infrastructure in different environments, right? So, if you create infrastructure and you don't
want to create that again, you can always take a snapshot of it using the CloudFormation
and then you can templatize this infrastructure and use it in other environments. Say, suppose you have a Test environment, a Production environment, and a Development environment, and you want the same infrastructure in all of these 3 environments. We can do that using CloudFormation. So, CloudFormation is a tool using which you can templatize your AWS infrastructure.
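Here is a very small sketch of launching a templatized piece of infrastructure with boto3; the template below describes just one S3 bucket, purely as an illustration, and the stack and bucket names are placeholders. In real use the same template could describe your whole environment and be launched into Test, Production and Development alike.

```python
import json
import boto3

cloudformation = boto3.client("cloudformation")

# A tiny template: in real use this would describe your whole environment.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "edureka-cf-demo-bucket"},  # placeholder name
        }
    },
}

# Launch the same template into any environment you like, as many times as you like.
cloudformation.create_stack(StackName="edureka-demo-stack", TemplateBody=json.dumps(template))
```

So, nothing much more to explain. Let's go ahead and see our next service which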
is CloudTrail. So, CloudTrail is a logging service from AWS. So, you can log all your API requests, and
API responses in CloudTrail and why would you log them is because when you want to troubleshoot
a problem. Say suppose you get some error while you are
using an application, and since there can be countless cases where you don't get an error, but in one particular case you do get an error, now you have to track down the problem. So, how will you track it down? You will track it down using the logging service. So, since every request is logged using CloudTrail, you can go to that particular log where the error occurred, and hence pin down the problem and pin down the line where the error is occurring and then solve it. So, this is what CloudTrail is all about. Now, how are the logs stored? Now, say suppose you have CloudTrail enabled
on a particular service. So, CloudTrail will generate logs and will
store those logs in S3, which is a file system provided by AWS. So, this is how the whole process happens. So guys, are we clear why would we need an
AWS CloudTrail service? Or why do we need a logging service in that
matter? Alright, people are giving me a go. Alright guys, let's move on to our service
now, which is AWS CLI. So, CLI is a command line interface, which
is just basically a replacement to the GUI interface that you have. I showed you guys the AWS dashboard, right? So, that is the GUI. But, say suppose you are comfortable with
command line. So you can also use command line to deploy
instances and the way to do that is by using the AWS CLI. Right? So, there is nothing here to explain, it's just a replacement to the GUI that you would otherwise use for any request. So guys, any questions, any confusions in
AWS CLI? Alright. You guys seem to be pretty smart understanding
everything. Alright guys, let's move ahead to our next
service which is OpsWorks. So AWS OpsWorks is a configuration management
tool. So, it consists of two parts. It consists of stacks and it consists of layers,
right. So, layers are basically different AWS services
that you have combined together and when you combine them together this whole system is
known as a stack. Now, where would you need a configuration
management tool. Now, imagine a scene like this. You have an application which is using a host
of AWS service, right. And you want to change something very basic. Now, one way to change this thing is by going
to each and every service particularly and changing that setting. The next way to do this is using OpsWorks. If you have deployed your application using
OpsWorks, one basic setting that you have to change in all of your infrastructure can
be done at the stack level. So, basically all your resources, all the services that you are using, are different layers in that stack, and that combination is known as a stack, right? That AWS infrastructure as a whole would be known as a stack, and if you have to change a setting, you change it at the stack level and automatically it will be applied to all the layers. So, this is how OpsWorks functions, right. It's a configuration management tool. So, any question related to OpsWorks guys,
any part that you don't understand about OpsWorks? We are good to go. Alright, let's move on to our next service,
which is the Trusted Advisor. The Trusted Advisor is just like a personal
assistant to you in the AWS infrastructure. So, how it advises you is like this. It advises you on your monthly expenditure. Say, suppose there is a best practice, and if you follow that practice you can reduce your expenditure; then Trusted Advisor would advise you to do that thing. It would also advise you on using IAM policies: if it recognizes that a lot of users, a lot of different people are using your AWS account and you have not set up any IAM policies on your AWS account, it will advise you to create those IAM policies, and hence this will enable you to manage your AWS account better. Right? And these are the kinds of advice that the
trusted advisor will give you in AWS. So, guys, are we clear with what Trusted Advisor
does? Alright guys. So you guys are giving me a go. Alright guys, so we are done with the management
services, any doubt in any other services, which we just discussed. Alright. So let's move ahead. So, our next Domain is the Security Domain. So, let's discuss the services in the Security
Domain. So, the Security Domain includes 2 services. The first service is IAM. So, IAM is the Identity and Access Management tool. So, what this basically does is, like I said,
if you have an enterprise and your users are using your AWS account to control the AWS
infrastructure. You can provide them with granular permissions. Say suppose, you want a user to just review
what all instances are there, so you can give him that access. If you want a user to just be able to launch
instances and not delete them, you can give that particular user that access, so these
are the kinds of access that you can give using IAM, and that is what IAM is all about. It authenticates you to your AWS account, or in this case your employees to the root AWS account, in a fashion that you want.
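To make granular permissions concrete, here is a minimal sketch of attaching an inline policy to a user with boto3, so that the user can only view EC2 instances and nothing else; the user name is a placeholder I have invented for illustration.

```python
import json
import boto3

iam = boto3.client("iam")

# Allow this user to *view* EC2 instances only; no launch, no terminate.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:DescribeInstances"],
        "Resource": "*",
    }],
}

iam.put_user_policy(
    UserName="review-only-user",                 # placeholder user
    PolicyName="ec2-describe-only",
    PolicyDocument=json.dumps(read_only_policy),
)
```

So guys, any question related to what IAM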
does? Right, Michael seems pretty clear. Neel says all well. Alright guys, let's move on to the next service,
which is the Key Management Service. So, AWS KMS is the Key Management Service. So, basically any instance that you have launched in AWS is based on this infrastructure: there will be a public key, you guys will be provided with a private key, and the public key stays with AWS. Whenever you want to connect to your instance, you have to provide the private key, and then AWS will match your private key with the public key, and if it matches, it will authenticate you to your AWS instances. So there is nothing more to explain. This is what KMS basically does. So, KMS basically assigns you the private
key. You can create a new key pair or you can use
an existing one, but guys, you have to be very careful with your private keys. If you lose your private keys in any case,
there is no way you can gain access back to your particular AWS resource, which will be
using that private key. So, guys are we clear with what KMS is. Alright. Michael says yes, Neil says yes, Sebastian
says yes. Alright guys, so we are done with the security
services. Let's move on to our next domain, which is
the AWS Application Domain. Right? So, the Application Domain includes 3 services. The first service is Simple Email Service. So, like I said, if you have a large user
base and you want to send emails to them, you can do that at the push of a button using SES; also, if you want the replies to be automated, that can be done using SES as well. So, it's a very simple service, nothing much more to explain.
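Here is a minimal sketch of sending one such email with boto3, assuming the sender address has already been verified in SES; all addresses are placeholders.

```python
import boto3

ses = boto3.client("ses")

# Send a simple notification email (addresses are placeholders; the sender must be verified in SES).
ses.send_email(
    Source="noreply@example.com",
    Destination={"ToAddresses": ["user@example.com"]},
    Message={
        "Subject": {"Data": "Your image has been uploaded"},
        "Body": {"Text": {"Data": "Hi! Your image was uploaded and is being processed."}},
    },
)
```

Any doubts in this particular service guys,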
SES. Alright. Let's move ahead then. The next service is a pretty interesting service called the Simple Queue Service. So, the Simple Queue Service acts as a buffer. Now the way it functions is like this. Now, say suppose, you had that application,
you had that image processing application, right? Now, whenever you upload an image. Say suppose you have to do 5 tasks. Now these 5 tasks will be listed in your Simple
Queue Service or your simple queue, and a server will keep a reference with this queue
and see what all tasks are left to be done on the image. Now, how does this help? This helps when you have multiple servers
running for your processing, right. And say, suppose, your first 2 operations
are done by the first server and the next 3 operations are may be to be done by the
some other server, right? So, the next server should know what all operations
are already done, and this knowing is actually referenced through your SQS. So, whenever a task is done that task is removed
from the queue, and the next task is queued and that is what SQS basically does. So guys, are we clear with what SQS is and
how it functions? So, Michael is asking me whether this works on a priority basis. Michael, when you list those tasks, they will be taken up in order. The first task that you have listed will be executed first. So, yes, it is based on priority, and the priority is first in, first out. So, the first thing that you list has to be
executed first, right? So, say suppose you have 5 images pending,
so the 2nd image is after the first image, the third image is after the second image,
so the first image will be processed first according to the queue, and then the second
image will be processed, and then the third image will be processed, so it works like this.
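Here is a minimal sketch of that task queue with boto3, assuming the queue already exists; the queue URL and task payloads are placeholders. Messages are received, processed and then deleted, which is how "a task is removed from the queue" works in practice.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/image-tasks"  # placeholder

# Producer: list the tasks for a newly uploaded image.
for task in ["resize", "watermark", "thumbnail", "tag", "publish"]:
    sqs.send_message(QueueUrl=queue_url, MessageBody=task)

# Worker: pick up the next task, do the work, then remove it from the queue.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in response.get("Messages", []):
    print("processing", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```

So, Michael, are we clear with what SQS is? Any more questions guys, regarding the SQS service? Right, you guys are saying all clear. Alright. So, let's move ahead. So, the next service is SNS, which is the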
Simple Notification Service. So, it basically sends notification to other
AWS services. Now, how can it be used, it can be used like
this. Say, suppose that application that we just
discussed about image processing, you upload the image, right? And now you also want an email to be sent whenever an image is uploaded. Right? Now, how will you do it using SNS? So SNS will send a notification to SQS and SES that an image has been uploaded. Now, when the notification is sent to SQS, that notification can also include the number of tasks or the tasks that have to be done on that image, right? So SNS sends a notification to SQS with the details that have to be added to the queue, and SNS can also send a notification to SES that an image has been added, so it sends the respective email to the respective person,
right? So, this is how SNS functions: it sends notifications to different AWS services, and this is what SNS is all about.
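A minimal sketch of that fan-out with boto3: one topic with an SQS queue and an email address subscribed to it, so a single publish reaches both; the queue ARN and email address are placeholders.

```python
import boto3

sns = boto3.client("sns")

# One topic that fans out "image uploaded" events.
topic_arn = sns.create_topic(Name="image-uploaded")["TopicArn"]

# Subscribe the task queue and an email address to the same topic (placeholders).
sns.subscribe(TopicArn=topic_arn, Protocol="sqs",
              Endpoint="arn:aws:sqs:us-east-1:123456789012:image-tasks")
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="user@example.com")

# Publishing once notifies both subscribers.
sns.publish(TopicArn=topic_arn, Message="Image 1486721420 has been uploaded")
```

So, guys, are we clear about all the services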
that we have discussed so far, such as SES, SQS and SNS. Any doubts in any of these 3 services. Alright. Sebastian says no. Michael says no. Neel says no. Alright guys, so we are done with the services
now. So, let's move on to our next section which
is, AWS Pricing. So, we are done with all the services. We have learnt what all AWS has to offer us. Now, let's see how we charge using these services. So AWS has these models. So, the first model is, 'pay as you go', so
it means you pay for what you use. So, let's take an example of a file system here: you don't have to forecast your requirement and buy a 50 GB chunk for yourself, even if you are using say only 10 GB out of it. If you are using some other service, if you are not using AWS and they don't follow the 'pay as you go' model, you have to foresee what your requirement is; you have to foresee, say, that you might need 50 GB in a month, right, but you end up using say only 10 GB, but since you asked for the 50 GB you have to pay for that whole 50 GB, right, even if you are using only 10 GB out of it. But, with AWS, this is not the case; you pay
according to your usage. So, if you are using S3 and you have just
6 or 7 GB of data on it, you will be paying for that 6 or 7 GB, and in future if you upload
more data on to it, you will be charged according to that data or that storage that you are
using. Alright, so this is what the 'pay as you go'
model means. The next model is pay less by using more. So, this is a very interesting concept. Let's understand this. So, we have included S3 as the example for pricing. So if you are using S3 up to 50 TB of storage, you are charged at $0.023 per GB per month, right? If your usage goes beyond 50 TB and it lies between 51-500 TB of storage, your rate reduces to $0.022 per GB per month. And if you use 500 TB plus of storage on S3, you will be charged at $0.021 per GB per month. Right? So as you are using more, your prices are dropping down, the rates are becoming less, and this is what 'pay less by using more' means.
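To see how the tiering works out, here is a tiny sketch that computes a monthly S3 storage bill from the example rates above, treated purely as illustrative numbers rather than current AWS pricing.

```python
# Example tiered rates in $ per GB-month, taken from the slide above (illustrative only).
TIERS = [
    (50 * 1024, 0.023),       # first 50 TB (expressed in GB)
    (450 * 1024, 0.022),      # next 450 TB, i.e. 51-500 TB
    (float("inf"), 0.021),    # everything over 500 TB
]

def monthly_cost(total_gb):
    cost, remaining = 0.0, total_gb
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

# 600 TB stored: the first 50 TB, the next 450 TB and the last 100 TB are billed at different rates.
print(round(monthly_cost(600 * 1024), 2))
```

So, let's move on to our next model which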
is, 'save when you reserve', now this is again a very interesting model, save when you reserve. When you reserve any instances in AWS, you
have an option to reserve them for one year or a 3 year term. So, if you have that kind of usage, you can
reserve your instances and you can save cost of up to 75%. So, as is evident in the picture or in the diagram below, there is a lady who didn't know which servers she needs and for how long she will be using that service. If she ends up using that service on demand for, say, one year, she will have to pay this much amount; but in the other scene, she knows she will be using it for, say, one year or 3 years, and which instance she wants, so she can save cost of up to 75%. Now this is the power of reserving, but it makes sense for you only when your use case is like this, that you might be requiring those instances for a term of 1 year or 3 years, right? For example, our company, Edureka, has its website
called edureka.com, right? So, it makes sense for us to reserve the instances
because our website will be there as long as our company exists, right? So we can reserve instances and host the website
on reserved instances, but then since I am giving you guys some demos today, right? So, these demos should not be reserved, I
might delete them tomorrow, I might not be using them maybe after a week or month. So, it doesn't make sense for me to reserve
instances and that is what reserving instance is all about. So, if you know that you will be using your
instances for a minimum of 1 year or maybe 3 years down the line, you can reserve them
and you can save up to 75% on that. So guys, this brings us to the end of the
AWS pricing section. Are we clear with all the pricing models that
we just discussed? Alright guys, I am getting a yes from you
guys. So, let's move to our next section. So, this is the fun part guys. We are done with all the theory, let's now discuss a problem and let's see how we can solve it using the knowledge that we have gained today, right? So our problem would be hosting a website
and we will be hosting it on the AWS infrastructure. So, let's see how we will go about it. So, let's first discuss the problem. So, we have to host a website on which we
can upload images, and these images should be displayed on the homepage once you have uploaded them; this website should also auto scale and should be highly available, right? So this is our use case, and we have to host this application on the AWS infrastructure. So, let's see how we will architect the way this website will function. So, the user will point to a website address, that website address will point to the website, and that website will actually be dealing with a file server and a database. Why the file server? Because you will be uploading your images onto something, right? So that's why the file server, and you have to remember the paths of those images, and for that you will be using a database, right?
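The heart of that architecture is: store the file in the file server, store its path in the database. The actual demo code is PHP, but here is a rough Python sketch of the same flow using boto3 and pymysql, with the bucket name, RDS endpoint, credentials and the image table's column name all assumed for illustration.

```python
import time
import boto3
import pymysql

s3 = boto3.client("s3")
db = pymysql.connect(host="edureka-demo.abc123xyz.us-east-1.rds.amazonaws.com",  # placeholder
                     user="Edureka", password="edureka123", database="edureka")

def handle_upload(local_path):
    # 1. Store the image itself in the file server (bucket name is a placeholder).
    key = str(int(time.time()))
    s3.upload_file(local_path, "edureka-rds", key)
    # 2. Remember the path of that image in the database (column name assumed).
    with db.cursor() as cur:
        cur.execute("INSERT INTO image (name) VALUES (%s)", (key,))
    db.commit()

handle_upload("images/photo1.jpg")
```

So, guys, this is our architecture that we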
will be constructing using the AWS resources. So guys, do you have any doubt in this architecture,
which we just discussed. Alright. Shubham says no. Michael says no. Neel says no. Alright guys, so if it's clear with you let's
move on to the AWS services. Alright guys, so let's see how we will architect
this architecture which we just discussed using the AWS resources. So, user had to point to a website address,
which will point to a website. Right? So, like we said, we will need a domain name
system here, so will be incorporating Route 53. So, our user will be actually interacting
with Route53, which will in turn point to a server. So, our server is actually the place where
we are hosting a website, right? Now, my application or my website is a PHP
application, so like I said if you have an environment, which is there in Elastic Beanstalk
you should always go for it. Right? So since PHP is listed in Elastic Beanstalk,
I'll be using Elastic Beanstalk instead of EC2, right? Now, what is Elastic Beanstalk exactly under
the hood, let's discuss that. So, Elastic Beanstalk will have EC2 instances
under the hood which are automated, so you will have the PHP environment automatically
configure on your EC2 instances and your EC2 instances will be controlled using the Auto
Scaling and the Load Balancer. So, whenever there is a requirement for more instances, more EC2 instances will be launched and the traffic will be distributed accordingly using the Elastic Load Balancer, so this is how it functions under the hood, but you don't have to worry about that. If you don't fully understand what I just said,
let's consider Elastic Beanstalk as a black box. Right? Now, your website has to interact with RDS
and S3, so why RDS? Why S3? S3 is a file system which we discussed in
the architecture above and RDS is the database that you are trying to connect with. Now, since you have to store the paths to
the files that you are uploading, those paths can actually be stored in a structured manner, right? And since it is structured data, you will
be using RDS which is Relational Database Management service. So, you can store your data in say suppose
MySQL database, right? That is why we are using the RDS service. Also if you are connecting a third party application
to your S3 file system, you need to authenticate that application to S3 and for that, you would
be needing access keys and those access keys are provided by IAM. Alright guys. So these are the AWS services that we will
be requiring, so any doubt in why we are using these particular services and why not others,
any doubt guys. Alright, people are saying all clear. Alright guys, so let's move on to fun part
now. Let's create this architecture and let's host
our website on the AWS. So, first let me go back to my browser and
let me show you guys the website. So guys, this is my website. I click on upload files. I click on upload image. I have this folder called images on my desktop. I'll go to that folder, let's upload this
image. Click on open and click on upload image. So, it says S3 upload complete that means
my file has been successfully uploaded. Let's see if I can see it on the home page. So yes, I can see it on homepage. Now what basically happened here is that the
entry to the database is made on my local host for now. So, let us see if the entry has been made. Let us launch my MySQL. So, this is my local host MySQL. My database name is Edureka. So, let's use that database and my table name
is image. So, let's 'select * from image' and let's
see if we have our image listed here, so yes there is an entry, so let's go back and see
if we have a file named like this in my S3 bucket. Right, so let's go to the S3 service. So, you can find S3 under the Storage domain. So I have configured my code to interact with
the bucket called Edureka RDS. So let's go to this bucket. So yes, it has a file. Let's compare the file name and see whether
the same file is there in my bucket. So, this is my MySQL. This is my local host MySQL. So, this is the filename 1486720933, let's
compare it with file name over here, 1486720933, so this is the same file. So, let's see if it is the file that we uploaded. So, we will select the file, we will click
on Properties and will click on the link now. Let's see if it is the same file. So yes guys, this is the file that I have
uploaded. Let me verify it on my website. So, as you can see, this is the same file
that we uploaded. So our website is actually working fine; for now it is updating my MySQL on localhost, it is uploading files to S3, and this website is hosted on my localhost as well. So, now I have to upload this website on the
AWS infrastructure, that is Elastic Beanstalk and I have to migrate this database to RDS. Right. So first let's launch an Elastic Beanstalk
environment, right? So let's minimize all of this and go back
to our AWS dashboard. Let's select Elastic Beanstalk, so it should
be under the Compute domain. So let's click on Elastic Beanstalk, so there
are no environments which currently exist, so let's create an environment. Let's click on create environment, so we need
a web server, so we will click on create web server. So we have to select a platform now, so ours
is PHP, so let's select PHP, and since we want load balancing and auto scaling enabled, let's keep it at the default, click on Next, select a source for your application version, let it be a sample application for now. We will upload our code later, and let us not touch any of this, because we haven't taught you this yet, so let us not touch this
and click on Next. So, now you have to name your environment,
let's name it as, edureka-demo. Right. Let's check the availability. So this URL is available, let's click on Next. Like I said, you have to create this environment
inside a VPC, let's check that and click on Next. So, which instant type, let it be t1.micro. Let's select key pair, so we created edureka-demo
1, remember? So, let's select that, it should be listed
here, edureka-demo1, sorry the key pair's name was aws-demo1, so let's select that,
email address is not required, application, nothing else required here. Okay, everything seems done. Let's click on Next. Now you would have to enter the name for your
instance, right? So let's name it as edureka-demo. Right? Let's click on Next. So, now you have to select the VPC, so like I said you have to select the same VPC as your RDS instance, so let's check the name of our RDS instance's VPC: let's go to RDS, our instances, let's see which VPC my RDS is included in, so it's 82a742e5, so let's select the same VPC here, it's 82a742e5, so let's check if the security group is the same as my RDS, so it's sg-91bb, so it should be
same, sg-91bb. Alright guys, so this is same. Let's click on Next now. Alright, so you have to select all of these
instances. I'll explain you guys later why you do that
when we reach the EC2 module. So, now let's select all of this and click
on Next. You don't have to touch anything here, click
on Next. Now, you have to review it and launch your
application. So everything seems fine to me. Let's launch our application. Now, this might take a while to launch. Let me teach you guys how to migrate your
database to the AWS infrastructure, right? So for that, before migrating my database, let me show you one more time whether or not this is being updated. Right? So, let's upload a file, let's choose a file
now, I'll choose a second image, click on Open, click on upload image, says S3 upload
complete, let's go back and check whether our file has been uploaded, so yes our file
has been uploaded, let's check in our database. This is my local host database guys, you can
see the address, so let's check if an entry has been made over here. Yes, there are 2 images now. Now, let's go to our S3 bucket and check whether
an image has been added over there. So, we will go to S3, we will go to edureka
RDS bucket. Let's compare and check whether it's the same
file, so it's 1486721420, 1486721420, so it's the same file, but then let's confirm it. So, click on Properties and click on the link,
so this is my image, let's verify it. So, yes it's the same image that I have uploaded. So, once I migrate my database to the AWS
infrastructure, this will no longer be updated, right? So, this is the thing that you have to keep
in mind. So let's take a backup of this database first,
so for that I have to close this and go to my command line. Open the command line, I will go to the bin
directory of my MySQL installation. Alright. So, I have reached the bin directory. Now for taking the backup of my database, I have to enter this command: it is mysqldump -u, so you have to enter the username for the localhost MySQL service, which is root, and then the password; since there is no password, I will not enter anything, and then you have to enter your database name, which is edureka, right? And I have to export it, so you will be using the > symbol, which basically means that you are redirecting the output, that is, you are exporting the file, right? So you are exporting the file to a filename called edureka.sql; the full command is mysqldump -u root edureka > edureka.sql, so this is it. Let's hit enter and if everything works fine,
this will work without errors. Alright. The command executed successfully. Let's check if we have the .sql file in my bin directory for MySQL. So, I'll go to the bin directory of MySQL
and I should have a file called edureka.sql, so yes it has the file. So, hence a file has been made. Now you have to connect your RDS instance
and then migrate this file over there, so that the database is created over there, right? So, I have already told you guys how to connect
your RDS instance, right? You have to write 'mysql -h' followed by the hostname; let's copy the hostname from the RDS console, so it is over here, let's quickly come back, paste it here, remove the port number from the end, then -P 3306 (capital P, guys, remember), then you enter -u, that is the username, which is edureka, and then -p for the password, and then you will hit enter, so it will ask for your password, so let's enter the password, it's edureka123. You are connected to your MySQL instance. Now, let's create a database. So, you have exported the file that you want
to be imported to RDS, right? Now, you have to first create a database here,
where you want the details to be copied or where you want the database to be, right? So you create the database; the command is create database edureka, right. So, my database has now been created, let's
check if it is empty for our satisfaction. Alright, so let's say show tables, so there
are no tables in my RDS database for now. Okay. So let's now exit RDS and let us now migrate
my database file to RDS, right? So for that, you enter the same command as
you do to connect and then you will enter the < which means you are now importing to
RDS, right? And what file are you importing? It's called edureka.sql, and where are you importing it? You want to import it into the database called edureka on RDS. So let's do that now. So, basically, you enter the same connection command, then the database name edureka which I had just created in my RDS MySQL, and then < edureka.sql, which is the file whose data I want copied into that database. Alright guys, so now let's hit enter, it will
ask for the password for the RDS, let's enter the password edureka123, hit enter and if
everything works fine, it will not give me error. So, it will take a while for my file to be
uploaded, so just be patient. Okay, so it didn't give me any error that
means my file has been uploaded successfully. So, let's check if everything went well, let's
connect to our RDS instances, enter the password, my database name was edureka, which I just
created in RDS, right? Now let's check if our table is being listed
over here. So I'll enter show tables, so yes it has a
table name called 'image'. So let's check if there is any data in my
table, select * from image, so yes, it has the records which were in my local MySQL as
well, so let's launch local MySQL and see if I can verify my data. So this is my local MySQL and this is my RDS. Right? So, as you can see, the data is same, the
table name is the same, everything is the same, so I have successfully migrated my database onto the AWS infrastructure. Now, I have to tweak my code so that it can connect to the RDS instance
now, right? So let's jump to our code now. So my code is over here, these are my code
files, so I'll have to change index.php, so the host name was local host. Now, I have to change it to the end point
that RDS had provided me. So let's do that, so this is the end point. Let's copy this into my code, remove 3306. It is now asking me for the username. So the username is edureka and the password
is edureka123. Right? Everything else seems fine, let's save our
index.php and let's see what all other changes we have to do. So I have changed index, now I have to change
the upload part as well, let's do that. So, I am connecting to the database over here,
so let's correct this, the host name is this, without the 3306 part. The username is edureka and the password is edureka123; anything else? Seems we have configured it all. Alright, everything seems fine. Let's check if it's working on my localhost
now. Alright. I am getting the home page which means my
connection is working, but then let's be sure, let's choose the file. Let's upload this third image, click on open,
click on upload image, it says S3 upload complete. Let's go back and check whether it's there
on my homepage. So yes, I can see this image on my homepage, and let's see if we can see it in my S3: yes, the third image has been added. Let's check if I can see it in my RDS. So, yes, there are 3 records added, so my code can now interact with the RDS instance and it can successfully add more records over here. So, the connection to RDS is working fine. Let's be doubly sure and see whether my local
host MySQL is not being updated. So, this is my local host. I use edureka; I will check whether the image
table has been updated, so no, since my code is not pointing to this MySQL anymore, the records have not been updated. So, I have successfully migrated my database
on to the AWS infrastructure. My code is now interacting with the AWS infrastructure. Let us now do the final step that is upload
my code to the AWS infrastructure, right? So I think our Elastic Beanstalk is ready
now, let's check. So the health says OK. Alright guys? So, I can upload my code over here now. So for that, the way you can upload your code onto Elastic Beanstalk is by zipping your code. So let's go back over here, copy these files. Go to my desktop. Create a new folder. Copy the files here, zip this folder. Let's call it upload. Right? So, now I will be uploading this folder to
Elastic Beanstalk, so let's do that. So, we will click on upload. Choose the file. Go to desktop. Upload this folder. Click on Deploy. So it will take a while for Elastic Beanstalk
to deploy my application. So let's just be patient. Once my code is up and running, it will show
a green sign over here and I can access my website using this URL. If my website works fine, our next step would
be configuring our Route 53 to point to a specific domain name; I basically found a website which provides a free domain name to use. So I'll be using that domain name to connect to my instance, so we will do that in a jiffy. Let's just wait for our code to be deployed
here first. Alright. My code has been uploaded now, it shows a
green symbol. Let's check if it's working. So, I click on this URL and if everything
is working fine, my website, I can see my website, so, yes! I can see my website. So, I have successfully migrated my website
as well to Elastic Beanstalk. So, let's confirm this, by uploading a file. Let us click on upload files. Choose a file. Go to desktop. Click on images and upload this file. Click on upload and it says your upload complete. Let's go back and check. So yes, my Elastic Beanstalk is now showing
image. If I can see this image here, this basically
means that my database has updated successfully. My S3 bucket has also been updated successfully. So, everything is working fine guys. Now, the next step is to get a URL. So, I have got my free URL from the site called
'my.dot.tk'. So, this website basically gives you a free
domain name, right? So I have got mine. Login. Alright, so I will go to the 'My Domains' section first and click. So, this is the domain name that I have got for free. So let's click on Manage Domain, click on Management Tools, then Nameservers. So basically these are the default nameservers,
which are there for now, and you have to use custom nameservers, which will be provided
to you by Route 53. So, let's do that, let's go to Route 53 now. So Route 53 can be found under the networking. So, here it is, let's click on that. So let's get started, create a hosted zone. So click on create hosted zone and you have
to enter the domain name. For me it's edureka.tk. Right? So let's create, now you have to enter the
Record Set. Now these Record Sets will point to your application. So before that, let's copy the nameservers
to my domain name, right. So, these are the name servers that Route
53 provides us and these name servers have to be copied to my name servers section over
here. Right, so let's do that. So let's copy the first nameserver first. So we are using custom nameservers and let's
delete all of them. So let's enter the first nameserver and the
second one as well. Let's check if these have been saved successfully. So yes, let us add the remaining ones. Let's copy the third nameserver as well and
the fourth one as well. So, let's click on change nameservers. So it says changes saved successfully. So my domain name is now configured to connect
to my Route 53. Let us now configure our Route 53 to connect
to my instances. So, let's do that. So, let's create a Record Set, which is asking
me a name. Now, there is this concept guys. Now, any website that you visit, you can visit
it in two ways. Say suppose you have facebook.com, right? So you can either go for 'facebook.com' or you can go for 'www.facebook.com'. Same is the case here. So, if I go to edureka.tk, I should be pointed
to the same website, if I go to www.edureka.tk, I should be pointed to the same website as
well. Right? So we have to create both the record sets. So let's create a record set for edureka.tk
only right now. So, we click on Alias. Now, when you are launching an application in your Elastic Beanstalk environment, you don't get a fixed IP address, you get an alias. Right? So, you click on Alias and you will select
your Elastic Beanstalk Environment, so this is it. Click on it and click on create, so my record
set has now been added. Let's add one more with www in it and let's
select the same Alias. So guys, this is it. It has been done. I have configured my Route 53 to point to
my servers, that are my instances and I have configured my domain name to point to my Route
53. Now let's check if our URL is working, so
it was edureka.tk, so it's pointing to my website. Congratulations guys! This is your first website. Let's check if we are pointed to the same place if we enter the www address as well. So yes, we have been pointed to the same website. Let's upload a file and check whether everything
is working fine. Let's click on upload image, open, upload
image, it says S3 upload complete. Let's go back and check whether our file has
been added. So yes, our file has been added. So our website is functioning. Fine guys, congratulations! We have successfully completed our practical
and we have migrated our application to the AWS infrastructure fully. Right? So let's go back to our slide and just recap
what all we did. So we launched Elastic Beanstalk Environment
and this environment is basically hosting our website. Route53 is pointing the URL to the servers
or the instance that is hosting our website. RDS is the MySQL server, which is storing the paths for our images, S3 contains the images, and IAM is giving our application or our website access to the S3 file system. So guys, any question related to any part that
we have discussed now? Sebastian says thank you, very nice session. You're welcome Sebastian. Michael says no. Michael says nice practical. Thank you, Michael. Right, Neel says all clear. Alright guys. Let's do a recap of what we just discussed. So, we started off with what is Cloud computing
and then we discussed what is AWS. After that we discussed different domains
which are there in AWS, followed by the AWS services. After that we discussed the AWS Pricing and
in the end, we did a use-case, where we migrated our application successfully on to the AWS
infrastructure. So, any part of today's session guys that
you don't have clarity in, you can discuss right now, I will explain you guys again. Alright people are saying no. Neel says all clear. Sebastian says all clear. Shubham says thank you. Alright, since everybody is giving me a go,
let's wrap it up. So first of all, thank you guys for being
a part of this session, for being patient with me. I hope you enjoyed the session and I hope
you learned something new today. For the next session, I will be handing you
guys out the assignments, so they will be uploaded to your LMS. I want each and everyone of you to do those
assignments and come back to me in the next session. Also, I want you guys to try out this practical on your own. If you have any problems, you have our support
team at your disposal, you can contact them 24x7. See you in the next session guys. Alright, have a good day. Bye Bye!