How to Crack 🔥AWS Certified Cloud Practitioner Exam🔥 in 8 Hours?

Captions
In the introduction we will start by talking about what AWS certifications are currently available, and also cover the recommended order if you choose to stick with AWS for a longer time. We will next cover the AWS Certified Cloud Practitioner official exam blueprint and highlight important information related to the exam. By the end of this module you will also have your very own AWS Free Tier account created, ready to be used when we progress through hands-on labs during the course. We will wrap up module 1 after installing a few applications on your Mac or Windows PC that will come in real handy later in the course. With that said, let's get started.

The rest of the course is quite hands-on, so you will need an AWS account. Please open a browser and navigate to aws.amazon.com/free, and we will go through the registration process right now. You are now on the AWS Free Tier account landing page; just click on "Create a Free Account". First things first: create an AWS account. You will need to fill in the email, password, and confirm password fields, and give your AWS account a name; after that, just click on Continue. Next in the process is the contact information page. You'll need to choose between a professional and a personal account type: select Professional if you intend to use this AWS account within your company, educational institution, or organization; otherwise select Personal. For this training you will need a personal account, no need for professional. Fill in the name, phone number, and full address, then agree with the terms by ticking the box. When you're done, please click on "Create Account and Continue". Next is the payment information page: just in case you go over the AWS Free Tier limits, AWS should be able to charge you somehow, so that's why they are asking for the credit card details now. Please go ahead and fill in all of the details, and when you're done click on "Secure Submit". Next, let's confirm our
identity. As you can see, before you can use your AWS account you must verify your phone number. When you continue, the AWS automated system will contact you with a verification code, so fill in the cell phone number and then go through the verification AWS is asking for. Great, your identity has been verified successfully; please click on Continue. Now you need to select a support plan. AWS offers a selection of support plans to meet your needs; choose the one that best aligns with your AWS usage. For this course the Basic plan, which is free, is everything that we need, so please click on Free. Excellent, the registration process was successfully completed. We can now just click on "Sign in to the Console" and get our first look at the AWS Management Console. Congratulations, well done: you're now in the AWS Management Console. Thank you, and see you in the next section.

In this section we will install some additional software on your Mac. If you're a Windows user, you may just skip this section and move on to the next one, which is dedicated to Windows operating system users. Throughout the course we will deploy some test web servers, which are actually EC2 instances in AWS, so virtual machines, and we will use the Terminal app, the built-in app that comes by default with macOS. Once we deploy the VM instances, so the EC2 instances, we will build different web pages on these web servers, and we will use some kind of a text editor: either the default one or a more advanced one like TextMate or TextWrangler. Let's now go ahead and install the TextMate application. One of the options is TextMate from MacroMates: if you go to macromates.com you will be able to download TextMate 2.0 as of today, so let's do that right now. Here it is, I'm using TextMate, and let's make a short comparison against the default TextEdit application. As you can see, this will be, for example,
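As a reference point for this editor comparison, here is the kind of tiny page we will be editing in the course. The contents below are a hypothetical sketch created from the shell; the actual course files are provided later.

```shell
# Create a minimal index.html like the ones edited later in the course
# (hypothetical contents; the real course files are provided later).
cat > index.html <<'EOF'
<html>
  <head><title>My first AWS web server</title></head>
  <body>
    <h1>Hello from my EC2 web server!</h1>
  </body>
</html>
EOF
cat index.html
```

In a plain editor like TextEdit this is just text; in TextMate the tags, header, and body text get distinct colors.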
the index.html file that defines our first web server on AWS. If you're using the default TextEdit application, there is nothing here making it special. On the other hand, TextMate is a more advanced application that can really differentiate between the parts of your code: for example, the HTML body and the header are shown in different colors, as opposed to the plain text that defines the web page. As I just mentioned, when connecting to any EC2 instance on AWS you don't need to install any additional software; we will use the Terminal app, which comes by default with macOS. Just click on it, it will open, and it is ready to use. Thank you, and see you in the next section.

In this section we are going to prepare your Windows PC for the rest of the course. Let's now talk about authentication. Usually when we hear authentication we think of a username and password; one good example is when you authenticate to your email account: you provide your username, which may be your email address, and your password, and then you are able to access your emails. With your AWS EC2 instances, as you will see in the course, authentication is done a bit differently. Indeed, you are going to use a username, but instead of a password you are going to use a file: this is called the key pair file. If you are going to use the PuTTY software program, which we will install shortly, then you will need the key pair file in the PPK format. Now let's talk about the Amazon EC2 authentication process. Amazon EC2 uses public key cryptography to encrypt and decrypt login information. The public key is used to encrypt a piece of data, which means that, as the name implies, the key is public: anyone can encrypt data and send it to you. On the other hand, you as the recipient have the private key, which is different from the public key, and only you can decrypt the data and actually read what's inside the packet. The public and
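To make the public/private key idea concrete, you can generate a throwaway key pair locally with ssh-keygen. This is only an illustration of the two halves of a key pair, not how the AWS key pair is created (AWS generates the EC2 key pair for you, and the file name demo-key here is made up).

```shell
# Illustration only: generate a local RSA key pair to see both halves.
# "demo-key" is a made-up name; AWS generates your real EC2 key pair.
ssh-keygen -t rsa -b 2048 -f demo-key -N "" -q
ls -l demo-key demo-key.pub   # demo-key = private half, demo-key.pub = public half
```

The .pub file is the half that can be shared freely; the other file is the one you must keep secret, just like the .pem you will download from AWS.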
private keys are known as a key pair; this is why it's called a key pair, because there are actually two keys used in the authentication process. The public key, again, is used for encrypting the data, and the recipient uses the private key to decrypt the data and read the clear text. For the Windows operating system tools we are going to use PuTTY and also MobaXterm, and for the text editor, Notepad++. We are going to create a key pair in AWS and export it; by default, when you export it, it comes in the PEM format. If for the rest of the course you are going to use PuTTY, and I will show you right away what it is, then you will need to convert the key pair file from the PEM format to the PPK format, and we are going to use PuTTY or MobaXterm, which is my favorite one, in order to authenticate and get access into the EC2 instances. Now, what is it with this SSH? This is a new term if you're new to the IT world: SSH means Secure Shell, and it is a protocol that allows you to securely access resources (servers, VMs, machines) that are remote, at quite a distance from you; they may be in your same on-premises company data center or a thousand kilometers away. For the text editor, the most common or most widespread one for Windows is Notepad++, and this again is for the code we are going to build. By the way, do not worry about the code building the web pages that I mentioned earlier: I'm going to provide you all the code necessary, so no coding experience is necessary here. But again, it's a more advanced text editor, and it will differentiate between the actual code and the regular text that you're going to put in your files. So now let's go ahead and install this software. Here I am on my Windows machine, in Google Chrome. Let's search for PuTTY and go for the PuTTY download; you can download PuTTY here, it will land you on this page. Most probably you have a
64-bit machine, so click on the PuTTY 64-bit installer, and the PuTTY software is downloaded. The next one is Notepad++: just click on Download and then go for the big green download button. And the last one, MobaXterm: click on Download, go for the Home Edition, which is free, click on Download, then go for the installer edition, and that's it. Now please go ahead and install all these software packages; I will not waste your time, please do it on your own, it's absolutely simple, you know that already. We will continue now with the generation of the AWS key pair. Alright, I'm now in the AWS Management Console, and before we create the actual key pair there is one thing to note. In the next module we will talk about regions, availability zones, and edge locations, but for now please note that at the top right corner you can select what region you want to work in, and a region, for now, means data centers in a specific geographical region of the world. So choose one region and then let's create the AWS key pair, but please note that the key pair is only available in that specific region. For example, if I now create the key pair in order to authenticate to EC2 instances in US East, then I'll have to create another key pair if I want to use one for, say, Asia Pacific or EU (Frankfurt). I'm leaving the default, which is the oldest region in AWS, North Virginia. Now let's go to Services, then Compute, and EC2, the Elastic Compute Cloud. In here we go to the left-hand side, and under Network and Security please go to Key Pairs. We will now click on Create Key Pair and give it a name; in my case I will call it "excess", and I'll click on Create. As you can see, the PEM file has been downloaded and is available to use. What we need to do now is convert the excess.pem file into something.ppk. Let's go into Start and scroll down to PuTTY. Let's find
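If you prefer the command line, the same PEM-to-PPK conversion can be sketched with the puttygen tool from the putty-tools package. This is an assumption-laden sketch: the package may not be installed on your machine, and the key below is a locally generated stand-in, not a real AWS key.

```shell
# Stand-in for the .pem downloaded from AWS (illustration only).
ssh-keygen -t rsa -b 2048 -m PEM -f excess.pem -N "" -q
# CLI equivalent of the PuTTYgen GUI steps; needs the putty-tools package.
if command -v puttygen >/dev/null 2>&1; then
  puttygen excess.pem -O private -o excess.ppk
  ls excess.ppk   # PuTTY private key file, usable by PuTTY and MobaXterm
else
  echo "puttygen not installed; use the PuTTYgen GUI instead"
fi
```

Either way, GUI or CLI, you end up with a .ppk file that PuTTY can use in place of a password.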
PuTTY; here it is, and if you expand it you have multiple icons, but you want to go with PuTTYgen, the PuTTY key generator. First we need to load our private key file in PEM format: go to Downloads, select All Files here, and then, in my case, excess.pem. "Successfully imported foreign key", great. Now we would like to save the private key; yes, I'm absolutely sure I want to save this without a passphrase. This is the PPK, and for the file name I will use the same name, excess, and save it. As you can see, a new file appears with a little server icon: this is the PPK file, and its type is PuTTY Private Key File. This is what you're going to use. Coming back to what I was mentioning earlier: when authenticating to any EC2 instance you are going to use the username and, instead of a password, this PuTTY private key file, if you are going to use PuTTY. Thank you, and see you in the next section.

In this section we will cover the AWS Certified Cloud Practitioner official exam blueprint. Let's talk a little bit about the exam. The exam format: you receive multiple choice and multiple answer questions, the exam time is 90 minutes, and the exam cost is 100 US dollars. You can schedule the exam at either Pearson VUE or PSI exam centers around the world. As you can see, I haven't noted anything related to the number of questions, because this can vary, but you should expect something around 65 questions in the exam. The AWS Certified Cloud Practitioner landing page can be found at this URL: https://aws.amazon.com/certification/certified-cloud-practitioner. We will now cover the exam guide, and we will also go through the sample questions, which are very, very important, so I'll click on it and it will open right now. This is the information we have covered already, the exam overview, and here you have the exam resources. Let's first take a look at the exam guide: you have here the introduction and also the exam code, and this will be important
when scheduling the exam. In terms of exam preparation, some trainings are listed here, which do cost money and are not cheap. Very important, massively important, absolutely critical to your future success when going for the exam, are the AWS whitepapers, in either Kindle or PDF format. You will be given access to download these AWS whitepapers; you will find them in the course at the right moment. But if you don't like to read, because some of us really do not like reading documentation and are more into watching video content, like you're doing now, do not worry: honestly, all of the whitepapers will be covered in the course extensively, so you're really in good hands. Continuing with the exam content: as I mentioned, there are two types of questions. Multiple choice means you have to choose one correct response while three are incorrect, so one out of four; the second type, multiple response, means you choose two out of five options. Very important also is the passing score: as you can see here, it is 70%, or 700 out of a total of 1000. Going further, you can also take a look at the content outline. There are four domains covered in the exam, and also in the preparation in this course: domain one, Cloud Concepts, which is 28%; Security, 24%; Technology, 36%; and the last one, Billing and Pricing, 12%. If you want a deeper look at what each of the domains covers, you can take a look right here. The second thing I wanted to talk about is the sample questions. Going through the sample questions is absolutely important, massively important again I would say, because you will understand the AWS wording: how do they really ask when they need to ask something, do they use some kind of weird or uncommon wording, is it straightforward? You have to get used to this type of questions and the answers that you
will provide. Also, going through the sample questions is one thing, that's 10 questions; another thing that you might want to do before showing up at the real exam is to go through the practice test. This is something like 25 to 30 questions in the real exam style; it doesn't mean that you'll have the same questions, but you will get accustomed to how AWS will throw questions at you. Understanding the question is critical, so let's go through one or two examples. The first sample question is this one: why is AWS more economical than traditional data centers for applications with varying compute workloads? "More economical" means it's related to billing and pricing, and "varying compute workloads" means the application is, for example, requesting high compute power now, and then in 10 minutes requesting low compute power, and so on. The options: option A, Amazon EC2 costs are billed on a monthly basis; this is a true fact. Option B, customers retain full administrative access to their Amazon EC2 instances; this is also true, you have full access and full control over your Amazon EC2 instances, or VMs. Option C, Amazon EC2 instances can be launched on demand when needed; this is also true. And option D, customers can permanently run enough instances to handle peak workloads; also true. So all of the options from A to D are true, but you have to relate the options to your question. The question was, again, which one is most cost effective when you have an application with varying compute workloads, and you should go with option C, Amazon EC2 instances can be launched on demand when needed. We will be covering billing and pricing in a separate module, where you will understand more, but option C is the one you want to choose, because you have this flexibility and elasticity with EC2 instances; and yes, you can
launch them whenever you need them, or whenever the application demands it, so for this one, option C is what you want to choose. Now the second question: which AWS service would simplify migration of a database to AWS? This is a great example of why you need to understand and retain the information coming in a couple of modules from now, in the AWS services overview. The majority of the questions will be delivered like this one: you need to know what an AWS service, in this case, say, Storage Gateway or Database Migration Service, does, and just choose the right service that will accomplish your goal or your business need. This concludes our discussion on the exam blueprint. Again, you can go to the Cloud Practitioner landing page and take a look at the information covered there as well, but really don't worry: everything in the blueprint will be covered in depth, so that you will be comfortable and will pass with flying colors. Thank you, and see you in the next section.

As of today there are 10 certifications available; in this course we will be covering the AWS Certified Cloud Practitioner. The next level of certifications is the associate one: Solutions Architect Associate, Developer Associate, and SysOps Administrator Associate will represent, let's say, the foundation of your AWS certification progress. If you finish with those, you will probably want to go on to the specialty exams. These are kind of newer, and the last one on the list, Machine Learning Specialty, is quite new: so, AWS Certified Security Specialty, Big Data Specialty, Advanced Networking Specialty, and, as I said, Machine Learning Specialty. If you make it up to this point, you definitely want to go for the last level of certifications, the professional one: DevOps Engineer Professional, and the master exam, so to say, Solutions Architect Professional. So we will start now with the Certified Cloud Practitioner.
Let's see what AWS says in regards to this certification: this certification provides individuals in a larger variety of cloud and technology roles with a way to validate their AWS Cloud knowledge and enhance their professional credibility. Honestly, this is a high-level, overview-of-AWS-services certification: you are not tested on how the services really work, or on the more advanced functionality of the AWS services. Typically you will get questions like "what does this service do?", or "you have to accomplish this task; which service from this list will help you accomplish it?", and so on. This is the foundational AWS certification, and it's not that technical; it is a great place to start for anyone. Whether you are technical or not, you should probably start with the Certified Cloud Practitioner if you want to be in the AWS game. Now let's go over the AWS associate level certifications. First one: AWS Certified Solutions Architect Associate. This certification validates your ability to effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies. This is probably the best AWS cert to continue with after the Practitioner exam, and it adds emphasis on the following AWS services; let's call them core services, because they are core. There are some miscellaneous, less important ones too, but the ones you see here are very important: IAM, Identity and Access Management; EC2, the single most important service on AWS, Elastic Compute Cloud; VPC, the Virtual Private Cloud; S3, the storage service; RDS, the Relational Database Service; and SQS for queuing. This exam assesses your ability to architect on AWS, and it's probably the most valuable in terms of market demand as of today; if you pass this exam, you'll be in a very good position. The next certification on our list is the Developer Associate. This certification validates proficiency in
developing, deploying, and debugging cloud-based applications using AWS. This is the best next AWS certification to continue with, and actually it's the simplest, the easiest of all the associate exams. This certification adds emphasis on some AWS services, for example DynamoDB, Elastic Beanstalk, SQS, and SNS, but really not very deep; many topics overlap with the Solutions Architect Associate exam, and you will see when preparing for the Developer exam that it is really not that big of a stretch from the Solutions Architect Associate exam. The next one is the SysOps Administrator Associate. This certification validates your technical expertise in deployment, management, and operations on the AWS platform. Honestly, this is the toughest associate AWS certification, but still it's an associate. Many topics overlap with the Solutions Architect Associate exam and also the Developer exam; probably in a couple of weeks, if you put in the necessary effort, you'll get the SysOps Administrator Associate as well. Focus on Identity and Access Management (IAM), VPC (Virtual Private Cloud), Elastic Compute Cloud, CloudWatch, S3, and some other miscellaneous services. This concludes the AWS associate level certifications, and we will now move on to the specialty certification exams. The first one you may want to go for is the security one. This certification validates your technical expertise in securing the AWS platform. Some of the exam milestones would be AWS data protection mechanisms, data encryption methods, secure internet protocols, and AWS security services and features, but also making decisions with regard to cost, security, and complexity of deployment. For this exam there is a focus on services like Identity and Access Management, KMS (Key Management Service), CloudWatch, CloudTrail, VPC, and also WAF, the Web Application Firewall. Next on our list is the Certified Big Data Specialty. This certification validates your technical expertise in designing and
implementing AWS services to derive value from data. As the name implies, this exam focuses on core AWS big data services: you will design and maintain big data solutions and leverage tools to automate data analysis. Focus on services like Elastic MapReduce, Redshift, and Kinesis, but also some topics you may consider unrelated when you see them in the exam, like KMS, Machine Learning, and IoT, among others; but anyway, this is for big data. Now, Advanced Networking Specialty: as of today it's, let's say, the most challenging one in the specialty track. This certification validates your technical expertise in designing and implementing AWS and hybrid IT architectures at scale, so on-premises, public cloud, and the mix, which is the hybrid IT architecture. Some of the exam milestones: design and maintain network architecture for all AWS services, implement core AWS services in accordance with basic architecture best practices, but also automation, so automating networking tasks. Focus on services like VPC (Virtual Private Cloud); routing, either static or dynamic (BGP in this case); Direct Connect; and DNS, which is Route 53 in AWS. The last one in the specialty track is the AWS Certified Machine Learning Specialty. This certification validates your technical expertise in building, training, tuning, and deploying machine learning (ML) models using the AWS cloud. Some of the exam milestones: design and implement scalable, cost-optimized, reliable, and secure machine learning solutions, and choose the right machine learning approach to solve a specific topic or business problem. Focus on services like Machine Learning, Comprehend, DeepLens, Lex, and some others. This concludes the specialty exams, and we will now cover the last two, the professional level certifications from AWS. AWS Certified DevOps Engineer Professional: this certification validates your technical expertise in provisioning, operating, and managing distributed application systems on the AWS platform. You will see some of the exam
milestones are managing, implementing, and automating security controls, compliance, and governance processes; you will also learn to deploy and manage monitoring, metrics, and logging on AWS, and to implement highly scalable, available, and self-healing systems in AWS; so if something breaks, the system has to heal by itself with no human interaction. Many topics overlap with the Solutions Architect Professional exam, which is next. This certification validates your advanced technical skills and experience in designing distributed applications and systems on the AWS platform. This is the broadest AWS certification, and it is the most valuable too. You will design and deploy dynamically scalable, highly available, fault-tolerant, and reliable applications on AWS, migrate workloads from on-premises to the cloud, and also implement cost-saving architectures. What do I mean by this: not all AWS services are tested in the exam, so if you go for the professional exam you will be tested on the most important services, in depth, but not, let's say, up to a hundred percent. It is the most challenging one, and you'll have to study quite a bit for it, but it definitely pays off. Now for the last part, the recommended path. We are studying here for the AWS Certified Cloud Practitioner; my recommendation is to go to the Solutions Architect Associate exam next, and then move on to the Developer and SysOps Administrator exams. Once you're done with the foundation, so the Cloud Practitioner and all of the associate exams, should you choose to move further, I advise you to go to the specialty exams: start with the Security exam, then move on to Big Data, Advanced Networking, and, last one, Machine Learning. Should you choose to go further still, you should start with the DevOps Engineer Professional, and finish with the most important, the must-have exam: Solutions Architect Professional. Thank you, and see you in the next section. Well done, this concludes module 1, the course introduction. The module completion
section at the end of every module in the course includes a summary of topics covered in the respective module and critical exam hints, what I call hot topics: topics that you need to know 100% when you see the real Cloud Practitioner exam. Please make sure you cover the module completion sections at the end of each module throughout the course. With that said, please join me in our next module, module 2, AWS Cloud Introduction.

Welcome to module 2, Introduction to AWS Cloud Computing. This module provides an introduction to AWS cloud computing. We will start by talking about what cloud computing is, and then I will introduce you to the current cloud computing models and the different cloud computing deployment models. Working in AWS brings great value to any business, from startups to large enterprises, so next we will discuss the six advantages of running your applications in the AWS cloud. By the end of this module you will also have a good understanding of the AWS global infrastructure and be able to describe what availability zones, regions, and edge locations are. We will wrap up module 2 after going through the AWS management interfaces and understanding the options for interacting with the AWS cloud platform. With that said, let's get started.

In this section we will go through an introduction to AWS cloud computing. So, what is cloud computing? Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the internet, with pay-as-you-go pricing. The Amazon AWS cloud services platform provides rapid access to flexible and low-cost IT resources. I did mention pay-as-you-go pricing: this means that you pay only for what you use, and when you use it. Let's now concentrate on applications and data centers. Applications and services typically run on servers, which are comprised of CPU (the processor), RAM (memory), and storage (hard drive or SSD), where HDD is the legacy version
and SSD, or solid state drive, is the newer version of hard drives. When I refer to applications, I'm referring to, let's say, email or web servers (like running your company's website on a web server), databases that can serve the web server (maybe you have a database of your clients), an FTP server (File Transfer Protocol), and so on. How can you run these services? You can either run them in your company DC, or data center, or you can literally rent the compute power and move it into the cloud. This is what cloud computing is: you literally rent what you need, you run the compute and storage power, and you pay as you go. Now, running services in the AWS cloud: with cloud computing you don't need to make large upfront investments in hardware and spend a lot of time provisioning it. You provision exactly the right type and size of computing resources you need to power and run your services; you can run, say, one server or tens of thousands of servers in minutes, as you need, almost instantly, and only pay for what you use. With AWS you can stop guessing. Let's now go through a real-world example of how AWS can help you and your business. Imagine that you are the CEO of the example.com corporation and you're running an online business: you have an online marketplace of thousands of products. In order to run the business, you are running the example.com website on your own infrastructure, so on your own servers in your data center. Currently you have three servers running in your DC; let's consider that two of them are web servers and the third one is your database. Now it's November and you are going to run a big campaign for Black Friday. Fortunately, this is a well-run campaign, so well done to the marketing department. Unfortunately, the web traffic hitting your web servers has doubled, and you even registered peaks with three times more traffic than in a usual month of the year. This has led to bad performance and really bad
user experience, which resulted in your company losing money. The next year you decide to invest more in your data center, and you buy two more servers, having now four web servers. Once you launch this year's Black Friday campaign, everything runs smoothly: excellent user experience, not even one complaint registered. After this great and successful campaign, you realize that for the rest of the year you don't need the additional two servers that you bought in January, and this is a great example of a bad investment. Let's consider now that you're running your business in the AWS cloud, with Amazon EC2 (Elastic Compute Cloud) and, for example, one database using DynamoDB. Once you launch the Black Friday campaign, you start experiencing a traffic increase hitting your EC2 instances. This time, using AWS Auto Scaling technology, two more servers are deployed automatically by AWS, and potentially even more, plus one more database server, a DynamoDB one. Excellent user experience for your clients, no upfront cost for you, and no initial investment. Once the Black Friday campaign is over, traffic returns to its usual values, and AWS automatically shuts down the servers that are no longer needed. Your AWS infrastructure, as you can see, is elastic and can automatically adapt to changes in your business. Your AWS infrastructure is scalable: it is able to scale up or down depending on your business and the dynamic workloads you may encounter at different times. And one last thing: with AWS you only pay as you go, and pay as you and your business grow, and this is really fantastic. Let's now talk about the different cloud computing models. There are three major types of cloud services available: IaaS, or Infrastructure as a Service; PaaS, or Platform as a Service; and SaaS, Software as a Service. The differences between them consist of functionality and, very importantly, task ownership and flexibility: who's in charge of what, and what is your flexibility with your chosen model, IaaS, PaaS, or
SaaS. Now let's have an example. Working in your own on-premises data center is the same as doing the maintenance of your personal car: if you want a better, more powerful car, you then have to buy a new one, right? When you lease a car, you choose the car that you want and drive it whenever and wherever you want, but the car isn't yours; if you want a more powerful car, you just lease another one that suits your desires and needs. This is the same as IaaS, or Infrastructure as a Service: you can rent compute power, storage, and other AWS services as you wish and run them when you want to, and if you need to run a more powerful machine, you can upgrade it on the fly, or in just a couple of minutes. Now, when you get a taxi, you don't drive it yourself, but you can still go where you want; this is the same as Platform as a Service, or PaaS. The last flavor is SaaS: using SaaS is like traveling by bus. The bus has a pre-assigned route that can't be changed, and you share the ride with other passengers. So this is just an application: you either choose to use it as it is, or you leave it. Let's now talk about each of the options. Infrastructure as a Service, or IaaS, contains the basic building blocks for cloud IT and typically provides access to networking features, computers, and data storage space. IaaS provides the highest level of flexibility and management control over the infrastructure, so you can literally do whatever you want with your virtual machines, and the best example here for us is Amazon EC2, Elastic Compute Cloud. Platform as a Service, or PaaS, removes the need for your organization to manage the underlying infrastructure, so hardware and operating systems, and allows you to focus on the deployment and management of your applications. This helps you become more efficient, as you don't need to worry about resource procurement, waiting for equipment, capacity planning,
software maintenance, or patching. The PaaS service in AWS is Lambda, and Lambda is really fantastic; we will talk more about it in the upcoming modules. The last one, Software as a Service, or SaaS, provides you a complete product that is run and managed by the service provider. With SaaS you don't have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use the application, and whether it helps you run your business. A common example of a SaaS application is web-based email, for example Gmail. Now let's talk about cloud computing deployment models, so what are the possibilities? Three cloud deployment models are currently available. The first one, on-premises, is when you run everything in your own data center; this is also called the private cloud. Hybrid is when you run some of your applications in your data center and some in the AWS public cloud. The last one is literally cloud: you run all your applications in the AWS public cloud. Let's talk now about on-premises, also known as private cloud. Resources are deployed in your on-premises data center using virtualization and resource management tools. When I say virtualization, think of VMware, Hyper-V from Microsoft, and the open-source option, OpenStack. The private cloud option offers the ability to provide dedicated resources, not split between users or end customers, so only your applications will sit on the actual hardware, as opposed to the public cloud, where on a given piece of hardware, a server, your applications run alongside those of other customers. In the private cloud you have your own equipment and it is dedicated: only you run your applications there. You have full control over your infrastructure, and you are responsible, which is also important, for management
and operating system patching, so applying fixes, or patches, for bugs. Now, hybrid: the hybrid deployment can be an intermediate step while you are on your way to fully migrating to the AWS cloud. A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud; simply put, it is just a mix between on-premises and cloud. The most common method of hybrid deployment is between the cloud and your existing on-premises infrastructure, in order to extend or grow your organization's infrastructure. What most companies do is first run in a hybrid deployment model, where they put, say, their testing or DevOps infrastructure in the cloud and stick to on-premises for the rest of their applications and services. The last option is the AWS cloud: the application is deployed in the cloud and all the components of the application run in the cloud. Applications in the cloud have either been created in the cloud or have been migrated from an existing infrastructure to take advantage of the cloud's benefits. In terms of terminology, migrating an application from on-premises to cloud is typically called lift and shift, and this refers to taking the application as it is, without modifying it, and running it on cloud-native resources. Thank you, and see you in the next section. In this section we will go over the six advantages of AWS cloud computing. We will talk about: trade capital expense for variable expense, benefit from massive economies of scale, stop guessing about capacity, increase speed and agility, stop spending money running and maintaining data centers, and go global in minutes. The first advantage is trade capital expense for variable expense. Instead of having to invest heavily in data centers and servers before you know how you're going to use them, you can pay only when you consume
computing resources, and pay only for how much you consume. There is no upfront commitment, and you only pay as you use. For example, Samsung saved 34 million US dollars by using AWS to build the Smart Hub app. Benefit from massive economies of scale: by using cloud computing, you can achieve a lower variable cost than you can get on your own, because usage from hundreds of thousands of customers is aggregated in the cloud. Providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices for the end customer. Stop guessing about capacity: eliminate guessing about your infrastructure capacity needs; remember the Black Friday example from earlier, right? While guessing, you often end up either sitting on expensive idle resources or dealing with limited capacity, which is a bad experience for the end user. You can access as much or as little capacity as you need, and scale up and down as required. Increase speed and agility: with AWS you reduce the time to make IT resources available to your developers, for example from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop are significantly lower. Think of the time to production for a new server: you order the server, then you wait for it to arrive, then you unpack it, rack it in your data center, and configure it, and in a couple of days or weeks, maybe, it's online. Really. Stop spending money running and maintaining your data centers: with AWS you can focus on projects that differentiate your business, not on the infrastructure. Let AWS take care of the infrastructure: AWS will take care of the actual data center rooms, power (redundant power, that is), cooling, racks, servers, cabling, storage, networking, and security equipment and guards, and some others too, but these are the main ones. You really need to focus on your business and not on the data center itself.
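The capacity-guessing point can be made concrete with a toy simulation. Everything here is invented for illustration (the request numbers, the per-server capacity, the ceiling-division scaling rule); it is not AWS Auto Scaling's actual algorithm, just the idea of provisioning for the peak versus scaling with demand:

```python
def servers_needed(requests_per_min, capacity_per_server=1000):
    """Smallest number of servers that can absorb the load (ceiling division)."""
    return max(1, -(-requests_per_min // capacity_per_server))

# A toy year of traffic: quiet months, then a Black Friday spike.
monthly_load = [900, 950, 1100, 4200, 1000]

# On-premises guessing: buy enough servers for the peak, keep them all year.
fixed_fleet = max(servers_needed(load) for load in monthly_load)
# Auto scaling: capacity follows demand month by month.
autoscaled = [servers_needed(load) for load in monthly_load]

print("fixed fleet all year:", fixed_fleet)   # peak-sized, mostly idle
print("autoscaled per month:", autoscaled)    # grows for the spike, shrinks after
```

The fixed fleet sits at peak size for the whole year, while the autoscaled one pays for five servers only in the spike month and one server the rest of the time: that is the "trade capital expense for variable expense" advantage in miniature.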
Now, going global in minutes: with AWS you can easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost. When I say global reach, I'm referring to regions and Availability Zones, and low latency is achieved through edge locations; these topics will be covered next. Thank you, and see you in the next section. In this section we are going to talk about the AWS global infrastructure: regions, Availability Zones, and edge locations. The AWS global infrastructure's building blocks are regions, Availability Zones, and edge locations, but we will also briefly touch on regional edge caches. Let's briefly cover each of these components in order to understand what they are and what real value they bring to the customer. We will start with the AWS Availability Zone. An Availability Zone represents one or more discrete data centers, each data center with redundant power, networking, and connectivity, housed in separate facilities. Now, what's the real benefit for the end customer? Well, by running your applications or services in multiple Availability Zones, you can easily achieve high availability, fault tolerance, and scalability, and this is not possible if you run your applications in a single on-premises data center. For example, if that data center fails, your application will not work, while if you run your application in multiple Availability Zones, or maybe also multiple regions, which we will talk about in a couple of minutes, then if one Availability Zone fails, your application will continue to run, because it is deployed in another Availability Zone, or in multiple ones. Now, specifically for the Certified Cloud Practitioner exam, please note that one Availability Zone equals one data center. As you can see on the screen, there are multiple buildings, let's say four
buildings; these buildings each represent a data center, so an Availability Zone is comprised of one or more data centers. But for the exam there is a high possibility that you get this kind of question: what is an Availability Zone? Yes, it is a data center. Now, what's really inside the box? Inside a data center, as you can see in the picture of a real data center, you will find a lot of servers, networking, storage, security, and load balancer equipment, so really a lot of stuff that AWS can take care of, while you just focus on your real business. Now let's also talk about the region. What is an AWS region? An AWS region is a physical location in the world that consists of multiple Availability Zones, and this means two or more. All AWS regions are completely isolated from each other, which brings the highest standards of fault tolerance and stability. So regions are isolated from each other, and Availability Zones are isolated from each other, but the Availability Zones in the same region are connected through low-latency and really high-bandwidth links. Again, inside an AWS region you'll have two or more AZs. We will now go through an example. In the AWS Management Console, in the top right corner, you can select in what region you want to deploy your services. I'm now talking about US East (Northern Virginia), which currently has the most AWS AZs, or Availability Zones, in it: us-east-1a, 1b, 1c, 1d, 1e, and 1f. As I mentioned just a bit earlier, a region has multiple Availability Zones, two or more, and they are connected through low-latency, high-speed links, which is the case for US East (Northern Virginia) as well. Now, what's the current inventory? 60 Availability Zones are available and 20 regions as of now, with 15 more AZs and five more regions planned for 2019, in Bahrain, Cape Town, Hong Kong, Jakarta, and Milan. If you want to know more about the global infrastructure, you can follow this link: aws.amazon.com, About
AWS, Global Infrastructure. Now let's talk about edge locations and how they can really help you as an end customer. Amazon CloudFront is a fast content delivery network, also called a CDN: a service that securely delivers data, videos, and applications to customers globally, with low latency and high transfer speeds. Now, about the CloudFront network: Amazon CloudFront uses a global network of 166 points of presence, which means 155 edge locations and 11 regional edge caches, in 65 cities across 29 countries. Again, more information is at aws.amazon.com, under CloudFront features. Now let's go through an example of the CDN. An edge location is simply an AWS endpoint that will cache content locally. Let's say that a user comes online in the Seattle region and requests a file from a far distance, say an Amazon S3 bucket in Melbourne. The file will be delivered from the Amazon S3 bucket in Melbourne through the Amazon content delivery network, and in the end the file will be presented to the user. The thing is that now the file is also available in Seattle, at the Seattle edge location. Why is this important? Because when another new user, or group of users, requests the same file, the request will hit the AWS network, but now the file will be served locally, which means really low latency, higher speed, and a better user experience. More information is available by following the URL, and I'm referring to edge locations: if you navigate there, you'll find information about the edge locations and also the regional edge caches, which we will briefly touch on next, for each of the main geographical regions. For example, for EMEA, Europe, Middle East, and Africa, there is a complete list of the edge locations: in Amsterdam there are two edge locations, Berlin two, Cape Town one, Copenhagen one, Dubai one, Dublin one, Frankfurt, Germany, eight, and so on. So again, more information if you follow the aws.amazon.com, About AWS, Global Infrastructure
URL. Now, there is a big difference between an edge location and a regional edge cache, and I want to go through a comparison and then through an example. CloudFront helps you deliver your web content faster to your end users, thus providing a better user experience; that's very simple. CloudFront edge locations bring the web content closer to your viewers and make sure that popular content can be served quickly. CloudFront regional edge caches really help when the content is not popular enough to stay at the CloudFront edge location, and improve delivery performance for that content. I'm sure you'll understand more once we go through this example. A user opens his or her laptop and tries to navigate to website.com. This request will be routed through the AWS DNS service, which is called Route 53, to an edge location. Now the edge location will ask itself: do I have this content that is being requested, is it cached here locally? If it is, then it will be delivered straight away to the user. If the content is not available, then the edge location will ask the regional edge cache, and that's the difference: it will send a request, and if the content is available at the regional edge cache, then the content will be delivered to the edge location, and the edge location will deliver the requested content to the user. Now, what's going to happen if the edge location and the regional edge cache really do not have the content cached locally? Well, the edge location will ask the origin: the web server, for the HTML files, for example, or the Amazon S3 bucket, if the content is hosted statically in an S3 bucket. The AWS cloud will deliver the requested files to the edge location and also to the regional edge cache, and of course, in the end, the content will be delivered to the end
user. And that's pretty much all that you need to know about edge locations and regional edge caches. Thank you, and see you in the next section. In this section we will go through the different AWS management interfaces in order to understand how we can interact with the cloud platform. AWS provides three distinct options for interacting with the AWS cloud platform, and these are the Management Console, the command line interface or CLI, and the AWS software development kits, or simply SDKs. Let's start with the Management Console. The AWS Management Console is a graphical user interface for accessing a wide range of AWS cloud services and managing compute, storage, and other cloud resources. The Management Console is a web application that comprises a broad collection of service consoles for managing AWS services. To access it, you navigate to https://console.aws.amazon.com, and we actually did that when we created your free tier account in module 1. Now, here is a screenshot of how the Management Console looks; specifically, I'm now in the VPC menu. At the top you have the navigation bar, and very important is the region selection: this should probably be one of the first things that you do before you start deploying services in AWS. You have to select your region, so where you want to deploy all of the services, and the selection is very much related to where the vast majority of your users are. For example, if the majority of your users are in the US, it wouldn't make much sense to select a region somewhere in Asia. Now, on the left you have the navigation pane, and there is also a little orange bar that highlights the current menu selection. The next option that we can use to interact with AWS is the command line interface, or simply CLI. The AWS Command Line Interface is a unified tool to manage your AWS services: with just one tool to download and configure, you can control multiple
AWS services from the command line and automate them through scripts, which is absolutely fantastic and helps a lot in large environments. After installing the AWS CLI tool, you can begin making calls to your services from the command line. Here is an example: I'm logged into the CLI and I type aws ec2 describe-instances. This returns information on all of the EC2 instances that I have defined in my account. This could also be done in the Management Console, so in the web user interface, but now we are referring to a different method, and this is the CLI. Another command, aws ec2 start-instances, where you then provide the ID of the instance, will just start the machine, the EC2 instance. And a last example: aws s3 ls, followed by the name of the S3 bucket, lists the contents of your S3 bucket in a directory-based listing. Very simple and pretty much self-explanatory. Now, the last method is the SDKs, the software development kits. So what are these SDKs? You may have heard this terminology; it's pretty common nowadays. A software development kit, or SDK, is really nothing more than a set of tools that allow developers to create software or applications for a specific platform, operating system, computer system, or device. Using an SDK, you can access and manage AWS services with your preferred development language or platform, and the offering from AWS is really large. If you want, you can go to aws.amazon.com/tools and you will see what's now on your screen. Let's read it: simplify using AWS services in your application with an API tailored to your programming language or platform. So we can work with AWS if we already know Java, .NET, Node.js, PHP, Python, Ruby, browser JavaScript, Go, C++, and so on. I think now it's a good idea to have an example with Python. Why? Because Python is popular and I like it very much. So for Python you have to install
boto3. Boto3 is the Amazon Web Services SDK for Python; it enables Python developers to create, configure, and manage AWS services such as Elastic Compute Cloud, Simple Storage Service, and others too. The example below shows how to describe one or more EC2 instances using the describe_instances call. It may not look so friendly to you; it's really about programming languages and learning how to program. Anyway, if you're not familiar with programming, Python is a really good start for you, as opposed to other programming languages that may seem more complicated in the beginning. So the syntax: import boto3, then ec2 = boto3.client('ec2'), then response = ec2.describe_instances(), and then we print the response, meaning the result: what are the EC2 instances in my AWS account? Do not worry about it; this is really not so necessary now. You just have to know what the methods are, and honestly, of these three, we'll play the most with the Management Console, and we'll also do some examples in the CLI in the upcoming modules. Thank you, and see you in the next section. [Music] Before we wrap up module 2, let's go through a quick recap of the most important topics and exam hints covered in this module. We started our discussion with AWS cloud computing: cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the internet, with pay-as-you-go pricing. Think of cloud computing as renting the hardware with no initial investment; you pay as you go and as you grow. Next we talked about cloud computing models. You should know the three cloud computing models: Infrastructure as a Service is the first one, Platform as a Service the second, and Software as a Service, or SaaS, the third. Infrastructure as a Service, or IaaS, contains the basic building blocks for cloud IT and typically provides access to
networking features, computers, and data storage space; Amazon Elastic Compute Cloud, or EC2, is the AWS service that fits in this category. Platform as a Service, or PaaS, removes the need for your organization to manage the underlying infrastructure, hardware and operating systems, and allows you to focus on the deployment and management of your applications; AWS Lambda is an awesome AWS service in this category, and we will talk about it in a later module. Software as a Service provides you a complete product that is run and managed by the service provider; think of Gmail, a SaaS application that you probably use daily. Now, cloud computing deployment models: this is different. There are three cloud deployment models currently available: on-premises, which means that you run everything in your own data center; hybrid, which means you run some of your applications in your data center and some in the AWS public cloud; and the last category is just cloud, where you run all your applications in the AWS public cloud. So, on-premises: everything in your own data center. Cloud: everything in the public cloud. Hybrid: a mix of on-premises and cloud. Make sure that you know the six advantages of AWS cloud computing when you see the real exam. These advantages, as we have discussed, are: trade capital expense for variable expense, benefit from massive economies of scale, stop guessing about capacity, increase speed and agility, stop spending money running and maintaining data centers, and go global in minutes. Next we talked about the global infrastructure: regions, Availability Zones, and edge locations. An Availability Zone represents one or more discrete data centers, each data center with redundant power, networking, and connectivity, housed in separate facilities. An AWS region is a physical location in the world that consists of multiple Availability Zones, and we have said that there are at least two AZs in a region. Edge locations
are AWS endpoints that cache content locally, and regional edge caches store even more cached content locally. The last topic: AWS management interfaces. AWS provides three distinct options for interacting with the AWS cloud platform; please retain all of them for the exam: the AWS Management Console, the AWS Command Line Interface or CLI, and the AWS software development kits or SDKs. Now, with that said, please join me in our next module, module 3, AWS services high-level overview. [Music] [Music] Welcome to module 4, AWS core services, the backbone. This module covers the AWS core services relevant to the Certified Cloud Practitioner exam, highlighted in the official exam blueprint. We will start by creating a billing alarm for your AWS account, so that you are aware of any charges incurred by your AWS services spending, and we will continue with Identity and Access Management, or IAM, and Virtual Private Cloud, or VPC, topics. For every topic covered in this module, we will first lay down the foundation from a theoretical perspective, and then we will jump to the AWS Management Console or the AWS CLI for hands-on labs. By the end of this module, you will have a good understanding of, and also gain hands-on experience with, services like Elastic Compute Cloud or EC2, security groups or SGs, Elastic Block Store or EBS, and Amazon Simple Storage Service or S3. We will wrap up module 4 after going through a fast recap of all topics covered in this module and exam hints relevant for the AWS Certified Cloud Practitioner exam. With that said, let's get started. [Music] In this section we will create a billing alarm for your AWS account. Remember, in module 1 we created your AWS free tier account, which means that you now have 12 months of access to AWS, but there are also some limits on how much you can use a specific service; remember, for EC2 there were some 750 hours per month that you can use, and so on. Now, the idea is that we will create a billing alarm, so that if you use, in
your AWS account, services that go over the free tier limit, then you will have to pay. All right, that's fine, but I want to know, for every month, what my current spending is, and if it goes over a specific threshold or value, which we will set just now, then I want to receive an email from AWS. So let's switch over to the console and take a look at creating a billing alarm for your AWS account. All right, here I am, logged in to the AWS Management Console. The first thing that you need to do is go over to your account: at the top right corner, just click on your username and go to My Account. Then you have to go to billing preferences, and there you have to enable Receive Billing Alerts. Just as it says: turn on this feature to monitor your AWS usage charges and recurring fees automatically, making it easier to track and manage your AWS spending. Great, so enable this one and then just click on Save Preferences. In my case, because I enabled it some time ago, and billing alerts cannot be disabled once turned on, I cannot demonstrate enabling it now. Anyway, click it, and this way you will enable billing alerts; click on Save Preferences, and that's it. All right, we have enabled receiving billing alerts; now let's create the actual alarm. Go to Services, and under Management and Governance, go to CloudWatch. Click on CloudWatch, and now we want to create an alarm: under Alarms, I will click on Create Alarm. Select a metric to alarm on; our metric is USD, so dollars. Clicking on All Metrics and then Total Estimated Charge, I will just click on USD and Select Metric. Now I'll click on New List, and let's see: in my case, my email, and let's say 25, so when my total AWS charges for the month exceed this value. Just put here whatever best fits your needs. Then an email will be sent to this specific email address, and that's it: just click on Create Alarm, and now you will be sent an email in order to confirm this subscription. So check your email
inbox for a message with the subject AWS Notification Subscription Confirmation, and just confirm this email; I'll do it later. And that's it: now you have a complete billing alarm that will alert you when your AWS spending goes over, in my case, 25 US dollars. Thank you, and see you in the next section. In this section we will cover Identity and Access Management basics. AWS Identity and Access Management, or just IAM, is a web service that helps you securely control access to AWS resources. You use Identity and Access Management to control who is authenticated, which means signed in, and authorized, which means what permissions are allowed or given to the authenticated operator, and also what resources they can then access. The key to understanding IAM is represented by these two concepts, as you will see in just a moment: authentication and authorization. So let's dig more into the subject. In order to understand the IAM concept, which is really massively important in AWS, we need to define and understand the following four concepts: user, group, role, and policy document. The first three, user, group, and role, are related to authentication, and the policy document is related only to authorization. So let's now continue. A user is a permanent named operator. What is that? It can be a human, a machine, or another AWS service. A group is a collection of users and usually contains multiple users; this is a simple name, plain English. A user can belong to multiple groups. Now, a role, which may seem a little more complicated, is an operator too, so it is just another authentication method, just like a user; a role can likewise be a human or another AWS service. So you may say: okay, stop, what is really the difference between a user and group, and the role? And this is the key: for users and groups, which are collections of users, authentication credentials are permanent; once defined, these do
not change; they are available from start to end, so they are permanent. For the role, however, authentication credentials are temporary. Let's have a short example on users, groups, and roles. Say we have an EC2 instance that will create a snapshot, so a backup of all data, and we would like to store this snapshot in Amazon S3, the Simple Storage Service. In order to do that, the EC2 instance, once it authenticates, needs to have the right permissions, which means that a policy document will also be attached to permit storage of a file, of multiple files, of incremental snapshots, and so on. Now, with a user and group, if I just define on the EC2 instance a username and password, so something static that doesn't change, it means that if the EC2 instance is hacked, then the hacker will have access to everything in my AWS account. On the other hand, and this is the key, if I attach a role to the EC2 instance, and in that role I define that the EC2 instance will only be allowed to access S3, then if that EC2 instance is hacked, it will no longer provide the hacker access to everything in my account, only to S3, and I can also limit the period when I define the role: all right, so it has access to S3 for this period of time. This is something that is really great, and it makes the difference: again, for the user and group, authentication credentials are permanent, while for the role, authentication credentials are temporary. Now, once a user or role is authenticated by AWS, it will be given permissions, which means it will be authorized, based on the policy document or documents that are attached to it. Policy documents, which come in JSON format, can be attached to a user, group, or role. If a policy is attached to a group, once a user joins the group, or is added by the administrator to that group, they will inherit the attached policies, so not only one but all of the policies. And by the way, JSON comes from
JavaScript Object Notation. So you may ask: what does a policy actually look like? Let's have an example. You now have on the screen a policy named AdministratorAccess, which, as the name says, provides full access to AWS services and resources, just as it is written in the description. The big advantage of the JSON formatting of these policy documents is that they are easy to read and understand. So let's have a look. The version is easy to understand: it's the document version. Next comes the statement: the effect is Allow, so permit; the action is an asterisk, which is basically a wildcard, meaning that I am permitting any action; and next is the resource, which, again, in the AdministratorAccess policy, AWS sets to an asterisk, meaning any resource. So let's go again from start to end: the document version dates from 2012; I am permitting anything on any AWS resource. It is easy to read and understand, and it totally makes sense, because this is the AdministratorAccess policy. Now let's have a quick recap, because we have the complete picture. A principal, or operator, which again can be a human or another AWS service, makes a request for an action on an AWS resource, and this is what is called an API call. Coming back to our example, the EC2 instance will call for storing, so putting, the snapshot into Amazon S3. First, the user is authenticated, either based on a username and password pair, which refers to human access, or on an access key ID and secret access key, which refers to services and is also known as programmatic access, for the CLI, API, and software development kits. This is something that you can select when you create your user: you create your user and you can then say, okay, I'm going to grant access with a username and password, or I can also add programmatic access. Next, the user's action will be permitted, or authorized, based on the attached policies, and, very importantly, every API call will be recorded
in AWS by CloudTrail, and this is something we covered in the previous module. Thank you, and see you in the next section. In this section we will configure Identity and Access Management for your account. Following the theoretical lecture just a moment ago, we'll create users and groups, we'll enable multi-factor authentication, and we'll attach a policy to your account. So let's switch over to the console and take a look at what we need to do right now. Alright, so I have logged into the AWS Management Console. Before we start the configuration of Identity and Access Management, please note that the Northern Virginia, so us-east-1, region is currently selected. This is important because when you click on Services, navigate to Security, Identity, and Compliance, and click on IAM, you will see that a region is no longer selected here. IAM does not require a region selection; it is a global AWS service. The goal of this section is to have five green ticks here. The root access keys item relates to the root user, the email address that you use in order to sign in to this particular AWS account. We will now continue and click on "Activate MFA on your root account": activate MFA on your AWS root account to add another layer of protection to help keep your account secure. So this is what we want to do, just another layer of security. Clicking on Manage MFA, continuing to Security Credentials, and then again Multi-factor Authentication, Activate MFA. Now we have three options to choose from, and we will use a virtual MFA device, an authenticator app installed on your mobile device or computer. What we will use is the Google Authenticator app, so you need to go to your smartphone, either iOS or Android, and install the Google Authenticator application. I'll click on Continue. So first you need to install the Google Authenticator app, and then use your virtual MFA app and your device's camera to scan the QR code. So here I am in the App Store; I have installed Google Authenticator, I will now click on Open, and I will begin the setup. So,
clicking on setup and scan barcode, I enter the first generated code, then I have to wait until a second code is generated. Now I have another one, 388703, so I enter 388703 and click on Assign MFA. "You have successfully assigned virtual MFA." Perfect, so I click Close, and if I now click on Dashboard, I now have two green ticks. That's excellent. Let's continue and create individual IAM users. Clicking on this and Manage Users: I currently have no Identity and Access Management users defined, so I'll click on Add User, user one. As I mentioned in the previous section, I can now select what kind of access I want to provide to this user, user one. Programmatic access enables an access key ID and secret access key for the AWS API, CLI, SDKs, and other development tools, while AWS Management Console access enables a password that allows users to sign in to the AWS Management Console. So the second option is mainly for a human, let's say an operator, to connect to the Management Console, while the first one, programmatic access, is going to be used, let's say, in roles, when providing one AWS service access to another, or something like that. I will also leave the autogenerated password selection here, and I will allow the user to reset their password; anyway, I am requiring a password reset, so everything is good. Now let's click on Permissions in order to assign permissions to this specific user. I could add the user to a group, I could copy permissions from an existing user, which is not the case because this is the first user I'm creating, and I could also attach existing policies directly. What I want to do is create a group, so I will do that right now. So, click Create Group. For the group name, because I want this user to be an administrator, let's say with the permissions of an administrator, I will just name it "administrator access group." Now why is that? I'm looking here at the AdministratorAccess policy; I will just expand it right now, and this is the policy we looked at in the JSON
format in the previous section. So this permits, or allows, any action on any resource in my AWS account, and that is right: administrator access. I'm going to select it. So the group is "administrator access group," and I am attaching the AdministratorAccess JSON policy to this group, which means that user one, which is in this group, will inherit all of the permissions of this group and of this policy. I will click on Create Group, then Add User to the Group: yes, this is what I want to do. I could also add some tags, for example Name: user 1, Department: IT admin, and so on. Next I'm going to review everything I have configured and click on Create User. I am now being provided an access key ID and also a secret access key and a password. Again, the password is going to be used to log into the AWS Management Console, while the access key ID and the secret access key will be used for programmatic access. What I can do now is click Download CSV and have all of this information in a CSV file, and I can also send an email to the specific user: dear user, these are your credentials in order to use your AWS account. I will not do either of those now, it's not needed; I'll just click on Close. And if I go to the Dashboard, I now have four green ticks. The last thing needed in order to complete everything in this lab is to apply an IAM password policy. Clicking on it and then Manage Password Policy will give me the option to, let's say, enforce my password policy: for example, require at least one uppercase letter, one lowercase letter, one number; password expiration, if it makes sense to me or not; prevent password reuse; and also "password expiration requires administrator reset." Anyway, it makes sense for me to have a more complicated password, but this really relates to your company policy, or really what you want to do here. Clicking on Apply Password Policy and then going to the Dashboard, I can see that now I have everything in green. So what did we do? We activated MFA on your root account, we created one
user and one group, we gave full access, or administrator access, permissions to that user by attaching the policy to the group, and the last thing, we applied an IAM password policy. One last thing before we wrap up this section is the IAM users sign-in link, and you can take a look here. If you do not change or customize it, it will be in the format https://, then your AWS account number, then .signin.aws.amazon.com/console. What you can do is click on Customize and replace your AWS account number with something more friendly, and then you can just hand this link over to your users to sign in to AWS. Thank you, and see you in the next section. In this section we will cover Virtual Private Cloud, or VPC, basics. Amazon Virtual Private Cloud enables you to launch AWS resources into a virtual network that you define. This virtual network is similar to a traditional network that you would operate in your own data center, with the benefits of using the scalable infrastructure of AWS. So literally, what you are doing is moving your traditional data center into Amazon Web Services, the public cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC. To get started: a virtual private cloud, or VPC, is a virtual network dedicated to your AWS account, and you will see, when we start the Launch VPC Wizard, that we will have to select the IPv4 address range that we want to allocate to this VPC. Now let's go over some basic terminology for IPv4 and also topics related to VPC. A subnet is a range of IP addresses in your VPC; again, as an example, the 10.0.1.0/24 IPv4 address space. Please note that this is a shorter address range, so this is a subnet of the overall virtual private cloud. A route table contains a set of rules, called routes, that are used to determine where network traffic is directed, so where it will go. An internet gateway allows communication from your instances to the internet. Now let's see exactly how this looks from a
visual perspective, and what we will do next in this section and in the upcoming sections as well. So literally, what you do first is log in to your AWS account, and then you select the region. For the upcoming sections and the rest of this course I will use us-east-1, the Northern Virginia region. Now, after you select your region, you will have to define the VPC, the virtual private cloud, and we will define a VPC with a single public subnet. But wait: we said that, as an example, for the single public subnet we will select 10.0.1.0/24, and you may already know that this subnet is not a public one, it's a private one. This subnet, 10.0.1.0/24, could be allocated anywhere around the world; it is literally a private subnet, and we will see in a moment what really makes it a public one. Next we will define an EC2 instance and allocate a static IP, something that we literally define, 10.0.1.100, and we could deploy this in one availability zone, where the address space for AZ1 is 10.0.1.0/24. But for high availability and redundancy reasons, we could also install this EC2 instance in another AZ, so AZ2, and please note that we have a different IPv4 address range here: 10.0.2.0/24.
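The address ranges used in this walkthrough can be checked with Python's standard ipaddress module. Here is a quick sketch using the lecture's example ranges (the VPC, the two per-AZ subnets, and the static instance address):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")         # the whole VPC range
subnet_az1 = ipaddress.ip_network("10.0.1.0/24")  # public subnet in AZ1
subnet_az2 = ipaddress.ip_network("10.0.2.0/24")  # subnet in AZ2
instance_ip = ipaddress.ip_address("10.0.1.100")  # the static EC2 address

print(subnet_az1.subnet_of(vpc))        # True: the /24 is a slice of the /16
print(subnet_az2.subnet_of(vpc))        # True
print(subnet_az1.overlaps(subnet_az2))  # False: the two AZ ranges don't overlap
print(instance_ip in subnet_az1)        # True: 10.0.1.100 lives in AZ1's range
print(vpc.num_addresses)                # 65536 addresses in a /16
print(instance_ip.is_private)           # True: RFC 1918 private address space
```

This also illustrates the point made above: 10.0.1.0/24 is, by itself, just private RFC 1918 space; nothing in the addressing makes it public.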
Now, communication between availability zones, so between these two EC2 instances, is performed by what is called a router, and this is also managed by Amazon, by AWS; you don't have to do anything. Communication between EC2 instances happens locally, inside the VPC, and is handled by this router. Communication outside of the VPC is handled by an internet gateway in this case, or it can also be a NAT gateway, but that is outside the scope of the Certified Cloud Practitioner exam. The EC2 instance, as I mentioned, has a route table, and in the route table we define destinations: where will my traffic go? If it goes locally, inside the VPC, to 10.0.0.0/16, it will be handled by the router; but if the traffic needs to leave the VPC and go out to the internet, then it will use the internet gateway, and you can see the green route in the route table, 0.0.0.0/0, where the target is the igw, the internet gateway ID. So basically the internet gateway performs two roles. It will NAT the traffic, so it will perform network address translation for the traffic leaving the VPC, for example from the EC2 instance in availability zone 1: once the packets reach the internet gateway, it will change the source IP address from 10.0.1.100 to something public, a real public IP address, as the traffic leaves toward the internet. Now, when the traffic returns, it will again change the destination, this time from the public IP address back to the real private IPv4 address, 10.0.1.100.
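These two internet-gateway roles, route selection and static NAT, can be sketched as a toy model in Python. This is a simplified illustration only, with hypothetical gateway and public addresses, not how AWS actually implements routing:

```python
import ipaddress

# Simplified route table from the lecture: local traffic stays inside
# the VPC; everything else (0.0.0.0/0) goes to the internet gateway.
routes = {
    "10.0.0.0/16": "local",
    "0.0.0.0/0": "igw-12345",  # hypothetical internet gateway ID
}

def route_target(dest_ip):
    """Pick the most specific (longest-prefix) matching route."""
    addr = ipaddress.ip_address(dest_ip)
    matching = [ipaddress.ip_network(cidr) for cidr in routes
                if addr in ipaddress.ip_network(cidr)]
    best = max(matching, key=lambda net: net.prefixlen)
    return routes[str(best)]

print(route_target("10.0.2.50"))  # local: handled by the VPC router
print(route_target("8.8.8.8"))    # igw-12345: leaves through the gateway

# Static (one-to-one) NAT at the internet gateway: the private source
# address is swapped for a public one on the way out, and swapped back
# on the return traffic. The public address here is illustrative.
nat = {"10.0.1.100": "203.0.113.10"}  # private -> public
reverse = {pub: priv for priv, pub in nat.items()}
print(nat["10.0.1.100"])        # outbound: source becomes 203.0.113.10
print(reverse["203.0.113.10"])  # return: destination becomes 10.0.1.100
```

The longest-prefix rule is why traffic to 10.0.2.50 stays local even though it also matches 0.0.0.0/0: the /16 route is more specific than the /0 route.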
So the internet gateway performs static NAT, a one-to-one mapping between the private IPv4 address, 10.0.1.100, and the allocated real public IPv4 address. And this is everything I wanted to talk about from a theoretical perspective; in the next section we will literally start and configure our first Amazon VPC. Thank you, and see you in the next section. In this section we will create the virtual private cloud in AWS, so let's switch over to the console and get started. Alright, so I have logged in to the AWS Management Console, and in order to get started, let's navigate to Services, scroll down to the Networking and Content Delivery section, and select VPC. So click on VPC, and before we create our first VPC, let's take a look at what is already configured. In the Your VPCs section, if you click on it, you will see that there is already one default VPC created by Amazon, and it has an address range defined here as 172.31.0.0/16. It also has a main route table, and the main route table is the route table associated with every subnet that you define in your VPC: if you define a specific, custom route table that applies to your subnet, then that subnet will not use the main one, but if you don't create a custom one, all of the subnets will use the main route table. ACLs and security groups are something we will discuss later in this section. Also, let's click on Internet Gateways. I can see this internet gateway with a specific ID, the state is attached, and it is attached to this VPC whose ID ends in 9e9. And if I take a look again at my VPCs, this is the VPC ending in 9e9, which means this VPC is a public one, meaning that the subnets defined in, or associated or attached to, this VPC can connect to the internet through an internet gateway. And I was mentioning in the previous section how it is that, when I define a VPC and say this is a VPC with a public subnet attached, and that specific subnet is
a private one, what makes it a public one? Well, traffic from any specific subnet that leaves and connects to the internet through an internet gateway makes it what is called a public subnet. Now let's continue and create our own VPC. We can click on Create VPC here and provide these options, or, which is what we will do, go to the VPC Dashboard and click on Launch VPC Wizard. We are now presented with four options. First, "VPC with a Single Public Subnet"; let's read what it says: your instances run in a private, isolated section of the AWS cloud with direct access to the internet, and this is what we want. Network access control lists and security groups, which will be covered later in this module, can be used to provide strict control over inbound and outbound network traffic to your instances. So literally, this option creates a /16 network for the VPC with a /24 subnet for our public subnet. Public subnet instances will use Elastic IPs or public IPs to access the internet, and that is what an Elastic IP is: just a real public internet IP, for the internet. Now, the other options: "VPC with Public and Private Subnets," and please note that the picture here changes in order to provide a meaningful visualization of the description. This one has two subnets, one public and one private; for example, the public one for your web servers and the private one for any databases running in the back end. Another option is "VPC with Public and Private Subnets and Hardware VPN Access"; you would select this one if you want to connect your AWS private cloud to your corporate data center, so to your on-premises data center, and that would make sense for a hybrid model, when you run some of your applications in the cloud and some other applications in your traditional data center. And the last one, "VPC with a Private Subnet Only and Hardware VPN Access": only some resources run in the cloud, with no connectivity to the internet, but you want to
connect these resources, or these applications, to your traditional data center. Anyway, we will go for the first option, VPC with a single public subnet, so I will click on Select. Now I have to define the IPv4 block, and I will use the default option that I am being presented with here, 10.0.0.0/16. The /16 notation means that I'll have 65,536 IP addresses available to use. I will also give a name here, let's say "AWS CCP VPC"; this is the name I'm providing for the VPC I'm creating. Now, the public subnet's IPv4 address range will be, let's say, 10.0.1.0/24. As I mentioned in a previous section, I can also select the availability zone: I can leave it at No Preference and AWS will select one of these six availability zones, or I can select my own, us-east-1a, it's a good choice, why not. And I can give the subnet a name here; this is a public subnet. Again, 10.0.1.0/24 is literally a private range, so RFC 1918 private IP address space, but it will become a public subnet because it will connect to the internet through an internet gateway, and that is the definition of a public subnet. So I will leave the default subnet name here, Public Subnet. Enable DNS hostnames? Yes, why not. Hardware tenancy: Default; I will not change anything here. I'll just click on Create VPC: creating VPC, route tables, subnets, attaching internet gateway, and so on. Now if I click on Your VPCs, I see that I have two VPCs here: the AWS CCP (Certified Cloud Practitioner) VPC and the default one. I have a VPC ID, the state is Available, and I can also see the IPv4 address space associated with this VPC. I also have a main route table; if I click on it, the selection here in the left menu changes from Your VPCs to Route Tables. This is the summary; I can click on Routes, and I can see here the 10.0.0.0/16 address range with the target Local, so anything routed within this address range, for example between EC2 instances, will happen locally, will
not leave the VPC, and will be handled by a router. Internet Gateways: as you can see, I have two; one is attached to the default VPC, and another one, with this specific ID, is attached to my AWS CCP VPC. This internet gateway will perform the static NAT I was talking about, between the private address and the public address, this way allowing traffic from the EC2 instance inside the VPC, inside the availability zone, to go out to the internet, and also permitting us, from our laptop or PC, to connect to the EC2 instance, because, again, the internet gateway performs a static one-to-one mapping between the private and the public IP address. One more thing: Elastic IPs. Elastic IPs are real public IPv4 addresses, and we can attach Elastic IPs to our EC2 instances, or we can rely on AWS to allocate a real public IPv4 address from a DHCP pool, and that is what is going to happen when we define our EC2 instance in the next two modules. So first we'll talk about it from a theoretical perspective, just to have a good technical background and foundation, and then, as you will see, we will go over and configure it in AWS. Thank you, and see you in the next section. In this section we will cover Elastic Compute Cloud, or EC2, basics. Amazon Elastic Compute Cloud provides scalable computing capacity in the Amazon Web Services, or AWS, cloud. AWS virtual compute environments are called instances, so from now on, for the rest of the course, we will use the terminology of instances, EC2 instances, and try to avoid anything like "virtual machines." Amazon Machine Images, or AMIs, are available to choose from, and you will see that these are pretty much preconfigured templates for EC2 instances that you can choose from when you decide to launch an EC2 instance in Amazon. Instance types: now let's also cover some basic terminology. Instance types are different configurations of CPU, memory, storage, and networking capacity. Also, secure login is assured by
Amazon, so secure login to EC2 instances, and we will use key pairs: you store the private key and AWS stores the public key. These are the basic concepts of PKI, public key infrastructure, terminology and the way it works with certificates in the PKI world. You can attach storage volumes to your EC2 instances. Instance store volumes are ephemeral storage, while persistent storage volumes for your data are available through Elastic Block Store, or EBS, and we will cover Amazon EBS volumes in just a couple of sections. Now, you can store data in multiple locations, and we have learned about regions and availability zones. You can define basic security using AWS built-in firewalls, and these are called security groups. In a security group you can define rules, like protocol, port, and source IPs, that permit or deny access to your EC2 instances, and we'll also cover this in the upcoming sections of this module. Elastic IP address: this is a static public IPv4 address that you can attach to your EC2 instance, for example in order to use it for a website. The difference between the Elastic IP address and the dynamic public IP address that Amazon will allocate to our EC2 instance is that the Elastic IP address, again, is static, so it will not change if you stop your instance and then start it again, and that makes sense, for example, if you use it for a website, so that the IP address stays the same and you don't have any kind of problems accessing your EC2 instance. You can also create and attach tags, or labels, to your EC2 instances. Now, when you launch an EC2 instance, you first have to select an AMI, and again, this is the Amazon Machine Image, which basically represents the software selection: you can start your EC2 instance with a Linux operating system, or you can start it with a Windows operating system, and this is one thing you have to decide on before going further in your EC2 instance initialization. All AMIs are categorized as either
backed by Amazon EBS or backed by instance store. Most probably you would like to go with the first option, Amazon EBS, in order to not lose your data if you reboot your instance. What I mean is that, for an AMI with the root volume backed by EBS, data is deleted only when the instance terminates, which is not the case for instance store volumes, where data persists only while the instance is live. So again, if I stop and then restart the instance, or reboot it, while using instance store volumes, then all the data on my EC2 instance is lost and I have to restart my work. The next step is to select the hardware, and I'm referring now to the instance type. Each instance type offers different compute, memory, and storage capabilities, and they are grouped into instance families based on these capabilities. If you want to learn more about instance types, you can follow this link: aws.amazon.com/ec2/instance-types. Now, in this course we will use the free tier EC2 instance type; we don't want to pay while we are preparing for the exam, and honestly this is more than enough in order to get started and learn the basics and what you need for the Certified Cloud Practitioner exam. Also very important, as you will see, we will cover pricing details and pricing information for the core services and also for key services in our next module. So now let's cover the pricing models for Amazon EC2. There are four ways you can pay for Amazon EC2 instances, and this refers to On-Demand Instances, Reserved Instances, Spot Instances, and Dedicated Hosts. With On-Demand Instances, you pay for compute capacity per hour or per second, depending on which instances you run. The On-Demand Instances option is basically the option you go for when you authenticate to the AWS Management Console and literally start an EC2 instance; this is the On-Demand Instances type. The next one is Amazon EC2 Spot Instances. Spot Instances allow you to request spare Amazon EC2 computing capacity at up to 90 percent off the
On-Demand price. So when Amazon has some computing capacity left over and not being utilized, they provide you the option to use this spare capacity to run your workload, but this is like an auction: the price is not fixed, and you can set, let's say, a threshold, a price threshold, in your AWS Management Console, and when the Amazon pricing reaches that threshold, you will be allocated resources. So, for example: I want to be allocated EC2 Spot Instances if the price is at most five dollars, and five dollars could be per hour, per week, or whatever, depending on what you set there. Now, some common use cases for EC2 Spot Instances: applications that have flexible start and end times. You could not run your website on a Spot Instance, right? A website has to run 24/7, so Spot is not a good option for websites. Applications that are only feasible at very low compute prices: you could have, let's say, a fleet of EC2 instances that you want to run big data on, something which requires a lot of EC2 instances, but you need to keep the pricing low, and that's why you choose Spot Instances, even if you have to wait for the right time, when the computing capacity is available. And users with urgent computing needs for a lot of additional capacity; this is just another use case. Another option is Reserved Instances. Amazon EC2 Reserved Instances provide you with a significant discount, up to 75 percent, compared to On-Demand Instance pricing. For applications that have predictable usage, Reserved Instances can provide significant savings compared to On-Demand Instances. This choice is best for customers who commit to using EC2 over a one- or three-year term in order to reduce their total computing costs. The last option is Dedicated Hosts, and this refers to having one physical server and not splitting or sharing the hardware with another customer, so it is dedicated only to you. Dedicated Hosts
can help you reduce costs by allowing you to use your existing server-bound software licenses, for example Windows Server licenses or SQL Server and so on, and this type of instance can also help you meet compliance requirements. Now, in the last section, this is where we left off: we created a VPC, we allocated 10.0.0.0/16 as the IPv4 address space block, and we will now create an EC2 instance in one availability zone. This EC2 instance connects to the internet through an internet gateway, and it will also have a public IPv4 address. Now, this address will help us connect, and this is what we will do, through SSH, first from a Mac and then from a Windows PC, and see exactly how we can get access to the EC2 instance. We will first start in the next section with the deployment of the EC2 instance; at the end of that section we will connect through SSH from a Mac, and then in the next one we will connect from a Windows operating system, so that you see how to do that as well. Thank you, and see you in the next section. In this section we will launch our very first Elastic Compute Cloud instance and connect to this instance from a macOS PC, so let's switch over to the console. Alright, so here I am, logged in to the AWS Management Console. In order to get started with Elastic Compute Cloud, you need to go to Services and, under the Compute category here, click on EC2. Now, I currently don't have any running instances, so we will do it together: click on Launch Instance, and now we are going to cover all of the steps from the theoretical lecture just a moment ago. First step: you need to choose an Amazon Machine Image, or AMI, and remember, we have talked about the root device type; in this case it is EBS, so Elastic Block Store, which means that everything you store on your volumes, in your machine, in your EC2 instance, will not be lost if you, for example, stop the machine and start it again or
reboot the machine, and this is good news. Now, in order to continue, please click on Select, and now we have to choose an instance type, which, again, as I mentioned earlier, refers to the hardware configuration. For this course we will use t2.micro, and this is free tier eligible, which means we will not pay for this usage. Now, if you hover the mouse over it, you will see that for the first 12 months following your AWS sign-up date you get up to 750 hours of micro instances each month, so we have pretty much enough time to play with. Now, other families: as you can see here, this is the General Purpose family; general purpose instances provide a balance of compute, memory, and network resources, etc. Other families, let's just scroll a little bit: this is Compute Optimized; compute optimized instances have a higher ratio of virtual CPUs to memory than other families and the lowest cost per vCPU among all Amazon EC2 instance types. Others are, for example, graphics optimized, with GPUs, graphics processing units, and another family is Memory Optimized, with a lot of memory; we can see here, for example, 768 gigabytes of RAM. And here we have the memory column, which is in gigabytes, not gigabits. So we will continue with General Purpose; it is free and absolutely enough for our testing: one virtual CPU and one gigabyte of RAM. This is EBS-backed, and that's good news. So now let's also configure the instance details. We will start with one instance. The network: we can now select the VPC; we could use the default VPC, or we could use the VPC that we created previously and named AWS CCP VPC. Now, for the subnet we could have multiple options here, but because we deployed a VPC with one public subnet, we have only one option here, so there it is. This is also very, very important: Auto-assign Public IP. In order for this instance to connect to the internet, and also for us to be able to access it
through the internet, and we will use SSH, Secure Shell, we have to set this to Enable: yes, I want a public IP address to be assigned by AWS to this instance. We could also do various other things here that are really not related to the Certified Cloud Practitioner exam. Capacity reservation, also very interesting. IAM role: this is where I could select a predefined, preconfigured IAM role; for example, if this EC2 instance will store snapshots, so backups, in S3, it will need some credentials in order to access S3, and this is where I would attach that IAM role. Shutdown behavior, Stop or Terminate: well, if you choose Stop, then you have the option to start the instance again at a later time; if you choose Terminate, it means you stop it and everything is deleted. And some other options as well. In order to have an IP on the eth0 interface, I could leave this blank to have an IP automatically provisioned from the 10.0.1.0/24 network defined in the VPC, or I could define it statically here if I want, for example, to know exactly what the IP address of this EC2 instance will be. And that's pretty much everything you need to know, actually more than you need to know, for the CCP, Certified Cloud Practitioner, exam. Moving on to the storage: this is the root volume, and it is eight gigabytes. The volume type: remember, we have talked about different storage types; I can select SSD General Purpose 2, that's fine, or Provisioned IOPS, or Magnetic, which is the legacy, or old, version of a hard drive. Anyway, I will leave the default, gp2. I could also select encryption here, if I want to encrypt this volume: either the default key or other keys that I previously defined, but anyway, I don't want to encrypt the volume. I could also add new volumes, but this will be covered in a later section of this module, when we talk about volumes and EBS,
Elastic Block Store. We could also add some tags: for example, when I deploy some resources for a project, I can give them a tag; let's say the Name is "AWS CCP VPC." Although this is not a VPC, I can tag every resource that's related to a project with a specific tag, and afterwards I can search for resources or services deployed in AWS with that tag and get a complete list. Now let's also configure the security group; this is also going to be covered in a later section of this module. I can assign a security group: I can create a new security group or use an existing one. So let me create a new one here and, yes, let's use the same name, "AWS CCP VPC"; I don't need the description. Security groups are basically firewalls, built in by default on the AWS side, and at this moment I am permitting SSH, which is TCP on port 22, from what source? From anywhere: 0.0.0.0/0 means that I can connect through SSH to this EC2 instance from anywhere around the world. We now have the chance to review, so for the description, let's say, why not, "AWS CCP VPC," and I will review all my selections. I could also expand these and take a look at every setting, and I will now click Launch. In order to launch this instance, I have to select an existing key pair or create a new key pair. In the first module, in the Windows-dedicated section, we created a key pair; it doesn't matter if you use that one or create a new one, the idea is that you have to select something, so in this case let me just use this existing one. I will acknowledge and launch the instance. So now everything is being deployed, very, very fast as you will see. I will click on View Instances, and now I have the complete view of this specific instance; as you can see here, I'm under Instances, in the Instances menu. Let's have a quick look. Very, very important: I have here the IPv4
public IP, automatically assigned: 3.85.22.136. In order to connect to this instance I have to do one thing first — bear with me, you will see it in a moment — but I also want to see it running, and it is running, everything is good. All right, to connect, select the instance (in case you have multiple ones) and click on Connect, and here you have a step-by-step guide for what you need to do. First you need to change the permissions of the .pem file: you have to change them to 400. Again, I'm talking to macOS users here; if you're a Windows user you can just skip the rest of this section and move on to the next one, which will be dedicated 100% to you Windows users. Continuing for Mac users: change the mode of the .pem file and then connect to the instance with this SSH command — ssh -i, then the name of your .pem file, and then the user, which is ec2-user, followed by @ and the full DNS name or the IP of the EC2 instance. Let's do that right now; I will start my terminal. All right, I am now in the terminal. Let's see exactly where I am: I'm in the AWS-CCP folder, great. Let's take a look with ls, or list, at my current .pem files; the one I'm using now is this one. Let me type ls -la, and for this file I have only the read permission. So I'll run chmod 400 followed by the file name; if I run ls -la again it should show the change. Good. Now I can connect to the EC2 instance. The idea is this: ssh -i, then the .pem file, then ec2-user@ followed by the IP address of your EC2 instance. Click Copy to Clipboard — you could also select it, right-click and copy, but this is easier — then go to the terminal, paste it, and hit Enter. Are you sure you want to continue connecting? I will type yes, and here I am.
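The key-permission step above can be sketched as a small shell snippet. The file name my-key.pem and the IP address are placeholders, not values from the video:

```shell
# Create a stand-in for the downloaded key file (placeholder only)
touch my-key.pem
chmod 777 my-key.pem             # simulate overly-open permissions

# SSH refuses private keys readable by others, so restrict to owner read-only
chmod 400 my-key.pem
stat -c '%a' my-key.pem          # prints the new mode: 400

# The actual connection (shown as a comment -- needs a real running instance):
# ssh -i my-key.pem ec2-user@3.85.22.136
```

The same two-step flow — tighten the key file, then pass it with `-i` — is exactly what the Connect dialog in the console walks you through.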
As you can see, this is the Amazon Linux 2 AMI. Let's do some basic testing. If I type ifconfig, I can see my interfaces, and eth0 has the IP address that I configured when I was launching the EC2 instance, going through the EC2 launch wizard. Good. Do you think this EC2 instance has connectivity to the internet? Well, obviously I don't even have to test it — of course it has, because we have just connected to this instance over the internet. How am I connecting to the internet? Through an internet gateway, so let's verify that right now. All right, I have this EC2 instance running in us-east-1a, the type is t2.micro, and I have a dynamically assigned public IPv4 address. Now, if you take a look here in the VPC list, we can see that we have two VPCs. In this first VPC we will also see some route tables — there are two of them: one is the main route table, which applies to all of the subnets in this VPC, and one is what is called a custom route table. If I click on the custom one and look at the routes, I can see that for traffic that stays inside this VPC the routing is done locally; if I want to route traffic anywhere else — out to the internet, for example — I will use an internet gateway. And if I click on this internet gateway I see that its state is Attached, and yes, it is attached to the AWS-CCP-VPC VPC. So this is how everything ties together. Thank you and see you in the next section. Now, in this section you will learn how to connect to the EC2 instance if you are a Windows user — this is only for Windows users, so if you're on macOS you can just skip this lecture and move on to the next section. All right, I am now connected to a Windows VM. Let's see exactly how we can connect from a Windows operating system to the EC2 instances we have running in AWS. We said that there are two options, or at least two
options we will see in this course — and there are more than enough. The first option is PuTTY, and this is something you may have heard of if you've been in the industry, not necessarily on the technical side; people use it a lot when they connect to different machines or equipment. PuTTY will use — let's remember what we did in the first module — not the .pem file but the .ppk file, and because we launched this EC2 instance in Amazon with that key pair, it means that to connect from this Windows machine we will need to use the same .ppk file. So, what you need to do in PuTTY: for the host name or IP address, copy it from the AWS Management Console — either select it or just use Copy to Clipboard — then go to PuTTY and paste it here. But we will not authenticate using a password; we'll authenticate using the .ppk file. Just expand the SSH menu, then go to Auth, and here is what we need to do: for "Private key file for authentication", browse for the file. I'm going to look in the Desktop, in the AWS-CCP folder, select the .ppk file, click Open, and now click Open to connect. I will click Yes. Let me increase the font a little so that it's more visible to you — let's say 16 — and apply. I'm now being asked for a username; the username is the same, ec2-user, so just press Enter. It is authenticating using a public key, an imported OpenSSH key — good — and I'm now connected to the EC2 instance. Again, ifconfig works, and of course I can ping, say, amazon.com, so everything is working great. Now, another option for connecting to EC2 instances is MobaXterm, which we installed in the first module of the course. Here is what you need to do — and let me type exit here first; I have now successfully disconnected from the EC2 instance. Why is this a good option? MobaXterm is useful,
for example, if you have an EC2 instance that you regularly connect to. Let's say you have a web server — and we will configure one in the upcoming sections as well. If the IP is static, won't change, and you regularly connect to that specific IP, maybe you'd like to use something like MobaXterm. If I click on Session and then on SSH — Secure Shell is the protocol we will use for this connection — for the remote host I put the IP here. Actually, this is not the IP, so let me grab the IP again. Yes — for the remote host I'll put the IP. I do want to specify the username, ec2-user, and under the advanced SSH settings I will say: yes, I want to use a private key, and I will select it — Desktop, then AWS-CCP — but this time I will use the .pem file, and that should be it. Great; if I now click OK, it's authenticating, and yes, that's it, I'm connected. What you can also do here in MobaXterm — actually you can do it in PuTTY too, but it's not as nice, colorful and easy to use, with multiple tabs as you can see here — is save the session, so that when you connect later, a day or a week after, you have it available and you connect through the saved session without entering the IP or DNS name of the EC2 instance and the key and everything again. That's why you might use MobaXterm here. Maybe you also know about SecureCRT — that is another great piece of software, and so on. Thank you and see you in the next section. In this section we will configure the EC2 instance to act like a little web server, and we will do this in order to highlight the functionality of security groups, which will be covered in the next two sections. Now let's switch to the AWS Management Console. I have logged into the AWS Management Console, and before we start let's go to EC2 — either by clicking EC2 here in the recently visited services or, as you already know,
from Services and then EC2, or under Compute, EC2 — multiple possibilities here. Now I'm clicking on the running instances, and please note that I have changed the name to "web server"; we did say this is going to be a web server, so it makes sense to rename it. I will copy the IPv4 public IP to the clipboard in order to connect from the terminal. Here I am in the terminal. I will now use the syntax already presented: ssh -i, then the .pem file, then ec2-user@ and the public IP — paste — and I should now be connected to the EC2 instance, and yes, I am. As a best practice, before you start using any kind of VM or hardware that runs Linux, as this EC2 instance does, you should update all of the software packages and apply the latest security patches. To do this we type yum update, and so that we don't have to confirm every question during the update we can add -y. I'm doing this, and it says that you need to be root to perform this command. Right now I'm logged in as ec2-user, but I need, let's say, the most advanced privileges, and those are called root privileges. I can get them by typing sudo su. As you can see, the user has changed and I am now logged in as root, which means administrator: I can do whatever I want on this machine. Now let's type yum update -y again, and the machine will be updated. Luckily, no packages are marked for update, and that's very good. We can clear the screen by typing clear and Enter, and now let's do the following: we will install a small software package on this EC2 instance in order to run a web server, and it is called httpd. So: yum install httpd, and again -y. The software is being installed, and as you can see it now says Complete. In case you're new to the Linux world, the "d" in httpd comes from daemon — a little process that runs in the background. Can we check the
status of this httpd daemon, or process? We can, by typing service httpd status. Why am I doing this? Because it says here that it is inactive, so we need to start the process. Let's do that right now: service httpd start. Then let's check the process again — service httpd status — and as you can see it now says active (running). Great. Now what we'll do is create a little web page. We'll create the index.html file that any web server uses, and we'll put it into the specific folder from which it will be served to anyone who visits the web server on its IP address. Let's see how to do it. First we have to navigate to a different folder: if you use the pwd command you can see that we are currently in /home/ec2-user, and we'll change that with cd /var/www/html. We should now create the index.html file. If you're more advanced in Linux you can use the traditional vi; if you're new you can use nano or pico or whatever. Let's try pico index.html — pico is not installed; let's try to install it — no package available. Let's see if we have nano: nano index.html — nano is here, great. Now let's do this: we type html, then body — so, the body of this page — and also a header, h1, where we can type a text, for example "This is my first web server on AWS EC2", and why not an exclamation mark. Now we have to close everything that we opened at the beginning — this is plain HTML, so we have to close html, body and h1, and we do it from the innermost tag out: first close h1, then body, then html. To save the page, as you can see at the bottom left corner, it's Ctrl+X. So Ctrl+X, "save modified buffer" — I will say yes, so press Y — and what's the name? It's index.html, so just press Enter.
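The page created in nano above can also be written non-interactively with a heredoc. This is a minimal sketch of the same file; the install/start commands stay as comments because they need root on a real instance, and here the file is written to the current directory rather than /var/www/html:

```shell
# On the instance (as root) you would first run:
#   yum install httpd -y
#   service httpd start
# and then create the page in /var/www/html. Here we write the same
# index.html locally just to show its contents:
cat > index.html <<'EOF'
<html>
<body>
<h1>This is my first web server on AWS EC2!</h1>
</body>
</html>
EOF

cat index.html    # print the file back, as done with `cat` in the video
```

A heredoc avoids the interactive editor entirely, which is handy when the same page must be created by a script (for example in EC2 user data).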
If you now type ls, which means list, I can see the new file. If you want to see what's inside — just to print the file — you can type cat index.html, and I see the text we just typed. Great. Now we would like to actually see the web server. Take the public IP address, paste it into the browser, and press Enter. Well, I'm telling you, it will not load, and this is because we do not have the necessary permissions in place to access this website. So that's it: in the next two sections we will address security groups, both from a theoretical perspective and, more importantly, we will modify the security groups currently applied to this instance so that we can access the web server. Thank you and see you in the next section. In this section we will cover security group basics. AWS security groups act as a virtual firewall for your EC2 instances to control inbound and outbound traffic. What do I mean by inbound and outbound traffic? Inbound traffic is traffic that originates outside the EC2 instance — for example, on the internet — and arrives at the EC2 instance. Outbound traffic is traffic that leaves the EC2 instance and goes, for example, back out to the internet. Security groups enforce security at the instance level, not at the subnet level; different EC2 instances can have different security groups applied. In a security group you add rules that control inbound traffic to instances, and separate rules that control outbound traffic — traffic leaving the EC2 instance. Here is the AWS Management Console interface and how it looks: you configure inbound and outbound rules in different tabs, click Add Rule, then Create, and that's it — it's very simple, and you'll see in the next section how we do it. Now let's continue with more security group basics. When you first create a security group, it
has no inbound rules, which means that no traffic is permitted to the EC2 instance. I'm referring to a newly created security group here, because there is also a default security group, like the one you have seen for the VPC. When defining rules you can only specify allow rules, not deny rules: you define what you want to permit, and anything else is denied, or blocked, by default. All outbound traffic is permitted by default. Now let's look at what kinds of rules you can actually define in a security group, if you want to customize it rather than rely on the defaults you get with a new security group or the default one. For inbound rules you define the protocol, the port range (or a single port number), and the source IP address — where the traffic is coming from. Again, 0.0.0.0/0, four zeros, is the catch-all address: it means you permit traffic coming from anywhere on the internet. Outbound rules are the same idea, except you define where traffic is allowed to travel to — the destination: again the protocol, the port range or single port, and the destination, where 0.0.0.0/0 means anywhere. Thank you and see you in the next section. In this section we will modify the security group and permit access to the web server. Before we start, let's go through a short recap of what we have done up to now and where we are. We have created a VPC and launched an EC2 instance; the EC2 instance has a public IPv4 address assigned, so it has internet connectivity, and we can access it through SSH from the internet, going into the VPC and connecting to the instance. We have logged into the instance through SSH and configured it to act as a web server.
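The allow-only model described above — explicit allow rules plus an implicit default deny — can be sketched with a toy rule check. The rule list below is hypothetical, not the exact group from the video:

```shell
# Toy security-group evaluation: only explicit allow rules exist,
# and any traffic that matches no rule is denied by default.
RULES="tcp/22 tcp/80"            # hypothetical inbound allow rules

allowed() {
    for rule in $RULES; do
        [ "$1" = "$rule" ] && return 0   # matched an allow rule
    done
    return 1                             # implicit default deny
}

allowed tcp/22  && echo "tcp/22 (SSH): allowed"
allowed tcp/443 || echo "tcp/443 (HTTPS): denied by default"
```

Note there is no way to express "deny tcp/443" in this model: you never write deny rules — you simply do not allow the traffic, which is exactly how security group inbound rules behave.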
We have also created the index.html file there. Next, we will modify the security group that is attached to this instance — we actually created this security group when we launched the EC2 instance, right? — and we will modify it to permit access from the internet through HTTP. Now let's switch over to the AWS Management Console. All right, I have logged into the AWS Management Console, and before we actually fix what is wrong, let's analyze the current status a little. I'm going to EC2 now, and I want to see which security group is attached and what the inbound and outbound rules are. I am now in the EC2 dashboard; if you look on the left you have the Network & Security category, and under it, Security Groups. Before going there directly, I want to open the running instances and look at what is configured now and which settings are applied. So, where is the security group? Right here: Security Groups, and I have this security group applied, SG_AWS-CCP-VPC. I can click on it, or I can just see the inbound rules here. Clicking on Inbound Rules, I can see the traffic that is permitted to the EC2 instance — again, the security group is applied at the instance level, not at the subnet level. The traffic currently permitted is only TCP port 22, or SSH, coming from where? From anywhere around the world. And this security group, again, is SG_AWS-CCP-VPC. Now, in order to permit traffic to this web server we need to add some other services here. Let's stop for a moment and think: what are the protocol and port numbers that web services run on? If we are talking about plain HTTP, then we're using TCP and port 80.
If we are talking about secure HTTP, or HTTPS, then we also need to allow TCP port 443. Now, if I click on the security group, it is the same as if I had gone through Network & Security, Security Groups: I can see the security group, the VPC, the owner, the group name, and so on. Here I also have the description and the inbound and outbound rules. I'll start with the outbound rules: all traffic, as it is by default, is permitted — any protocol, any port range, to any destination. Let's now modify the inbound rules. We click Edit, and I will click Add Rule in order to permit web traffic: HTTP and HTTPS need to be allowed. HTTPS is here — TCP and 443, any source, that's good — but I also want to allow HTTP, and I'm not sure I'm going to find it — yes, there it is: HTTP, TCP 80, from any source, and this is good. Now I'll click Save, and I have an updated list of rules for traffic coming to this specific instance — these are the inbound rules. Next, I'll go to Instances, grab the public IP address with Copy to Clipboard, and move to a browser to test connectivity to this web server. I have pasted here the public IPv4 address dynamically assigned to this EC2 instance, configured as a web server through yum install httpd, the HTTP daemon. If I press Enter — well, this is it: "This is my first web server on AWS EC2". The web server was functioning all along, but we were not permitting the traffic through the security groups, which again are firewalls built in by AWS — you don't have to create them, they are there for you to use. To permit traffic to the web server, HTTP or HTTPS — we usually don't know in advance which one it will be, although nowadays the secure version is normally used — to be on the safe side I think it is good to add both HTTP and HTTPS. Also, before we wrap up, please note that when you add a rule for HTTP and IPv4
is the source, a rule will automatically also be added for IPv6, which is ::/0. Thank you and see you in the next section. In this section we will cover Elastic Block Store, or EBS, basics. Amazon Elastic Block Store (EBS) provides block-level storage volumes for use with EC2 instances, and this is totally different from what the Simple Storage Service, or S3, offers. S3 is good for storing objects, which means files — movies, pictures, documents, whatever — while EBS, Elastic Block Store, is storage for volumes. It is as if you put multiple hard drives or SSDs into your PC: that is EBS, multiple volumes on your running EC2 instance. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. As you will see in the next section, when we do a lab on Amazon EBS, you have to create the EBS volume and deploy it in the same Availability Zone where your EC2 instance lives. EBS volumes attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. This means, for example, that if you decide at some point to stop or terminate the EC2 instance, you can detach the Amazon EBS volume and attach it at will to another EC2 instance — the life of the EC2 instance is not tied to the life of the Amazon EBS volume. Amazon EBS provides two volume types, which differ in performance characteristics and price. SSD volumes offer high IOPS — input/output operations per second — while the legacy hard drive (HDD) volumes, the second option, offer throughput over IOPS. If you need performance, you go with the first option, SSD, for highly accessed, I/O-intensive applications; if you need storage throughput, you go with hard drives. Now, SSD volumes come in two flavors,
and we saw that just a moment ago when we deployed the EC2 instance: the root volume was General Purpose SSD, or gp2, and this is a good option as a balance between price and performance. The second option is Provisioned IOPS SSD, the highest-performance SSD volume you can get in AWS. HDD, or hard drive, volumes also come in two flavors: Throughput Optimized HDD, also known as st1, a low-cost HDD volume designed for frequently accessed, throughput-intensive workloads; and Cold HDD, sc1, the lowest-cost hard drive volume, designed for less frequently accessed workloads. Now, for security reasons, data stored on EBS volumes may need to be encrypted. You can decide to encrypt it or not, but the option is there: you can launch your EBS volumes as encrypted volumes. Remember that when we went through the EC2 launch wizard, we could choose to encrypt the root volume, either with default keys or with keys we create. If you create an encrypted EBS volume and attach it to your EC2 instance, the stored data and its snapshots are encrypted — this is known as encryption at rest, and it refers to data that is not moving, just sitting on the volume. With data encrypted on your EBS volumes you also ensure security for data in transit: data in transit refers to data traveling from one location to another — from an EBS volume to another EBS volume on another EC2 instance, to a file system, to Amazon S3, and so on. You can take point-in-time snapshots of the data on your Amazon EBS volumes and store them in Amazon S3. What are snapshots? Snapshots are incremental backups, which means that only the data on the volume that has changed since your last snapshot is saved. You take a complete snapshot at the beginning, and after that you take incremental snapshots, which copy only the data that has changed since the previous snapshot.
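The incremental idea above can be simulated locally by hashing fixed-size "blocks" of a scratch volume image and re-checking the hashes after a change. This is a toy illustration of the concept, not how EBS stores snapshots internally:

```shell
# Build a 4 MB scratch "volume" and take a full "snapshot" of its block hashes
dd if=/dev/zero of=vol.img bs=1M count=4 status=none
split -b 1M vol.img blk_
md5sum blk_* > snap1.md5

# Change data inside just one 1 MB block of the volume
printf 'new data' | dd of=vol.img bs=1M seek=2 conv=notrunc status=none

# Re-split and see which blocks differ from the last snapshot:
# only those would need to be copied in an incremental snapshot
rm blk_*; split -b 1M vol.img blk_
changed=$(md5sum -c snap1.md5 2>/dev/null | grep -c 'FAILED')
echo "$changed block(s) changed since the last snapshot"
```

Only one of the four blocks fails the hash check, so an incremental snapshot would copy just that block rather than the whole 4 MB image.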
Each snapshot contains all of the information needed to restore your data to a new EBS volume. Let's now talk a little about Amazon EBS pricing — this is also important for your Certified Cloud Practitioner exam. Amazon EBS pricing depends on the following. First, volumes: you are charged for the total storage of all EBS volumes, per gigabyte per month. For snapshots, the total snapshot storage consumed in Amazon S3 influences your monthly AWS bill. EBS snapshot copying between regions is also charged — if you copy a snapshot from, say, us-east-1 to Europe or Asia, you will be charged. Also, as a rule of thumb for AWS: data coming into AWS, so inbound, is free, but data leaving AWS, so outbound, is charged — you pay for anything you take out of AWS. For example, if you decide at some point to put all your documents and private pictures into S3, that's fine, you will not be charged for the transfer in; but if you decide later to take your data out, you will be charged. Thank you and see you in the next section. In this section we will create an EBS volume and attach it to the EC2 instance. All right, I have logged into the AWS Management Console. Before we create a new EBS volume, let's examine what we currently have. If I go to EC2, I have a running instance, and please note that it now has a different IP address — this is because I stopped the instance between recordings, and when I started it again it was allocated a new public IP address. On the left side, scroll down to Elastic Block Store and click on Volumes: this is our eight-gig root volume — also an EBS volume, the one created when we launched the EC2 instance. In terms of attachment information — let me expand this a little — it is attached to our web server. What we need to do now is create a new volume and attach it to our web server EC2 instance.
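The per-GB-month charging described above can be sketched as simple arithmetic. Both rates below are made-up placeholders, not real AWS prices — check the AWS pricing pages for current numbers:

```shell
VOLUME_GB=100          # total provisioned EBS storage
SNAPSHOT_GB=40         # total snapshot storage consumed in S3
VOLUME_RATE=0.10       # hypothetical $/GB-month for gp2
SNAPSHOT_RATE=0.05     # hypothetical $/GB-month for snapshot storage

cost=$(awk -v v=$VOLUME_GB -v s=$SNAPSHOT_GB -v vr=$VOLUME_RATE -v sr=$SNAPSHOT_RATE \
    'BEGIN { printf "%.2f", v*vr + s*sr }')
echo "estimated monthly EBS bill: \$$cost"   # 100*0.10 + 40*0.05 = 12.00
```

The point of the sketch is the billing unit: you pay for provisioned volume capacity and consumed snapshot storage per GB-month, regardless of how much of the volume you actually fill.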
Then we will also make it usable — you will see exactly what I mean in a moment. Click on Create Volume. Now we have to define things like the volume type: General Purpose SSD, Provisioned IOPS, Cold HDD, Throughput Optimized, or Magnetic — something we have already talked about for EBS. I will pick gp2, leave it as the default, say that the size is 10 gigs, and — very, very important — the Availability Zone: when you create a new volume, it has to be created in the same Availability Zone where your EC2 instance lives. Because my EC2 instance is in us-east-1a, I will leave this as the default. Encryption, if I want to encrypt the volume, is not important here — and pretty much that's it, so Create Volume and Close. Once the volume is set up it will appear here, so let's wait for it to be deployed. This one is in use, and this one is available; I will rename it, for example "new EBS volume", and press Enter. It is available now. In order for this volume to be used by our EC2 instance, we need to attach it to that specific instance. This also means that if you later decide to stop or terminate the EC2 instance but use this specific EBS volume with another EC2 instance, that's possible. Now, with the new EBS volume selected, I go to Actions and then Attach Volume. I have to select which instance to attach it to — I have only one, so I will click on the web server — and then I have to select how I would like to identify it. If you correlate this with Windows — maybe you're more familiar with Windows — you know that you have C:\ as the first volume there, and maybe other partitions like D:\ and E:\ and so on. We have sda currently in use — well, let me just show you: here is the attachment information. The root volume is
attached to the web server as /dev/xvda. The new volume also needs a device path, similar to D:\ in Windows. So let's continue: I'm selecting the new volume, clicking Attach Volume, and choosing where I want to attach this EBS volume. Here are the currently available options: for Linux devices I can use anything from /dev/sdf through /dev/sdp, so I'll leave the default, /dev/sdf, and click Attach — very simple — and now the EBS volume state is "in use". Before we make the EBS volume usable on the EC2 instance, let's go through an analogy. Imagine you have a Windows PC with one hard disk inside, and the partition is C:\. It makes sense to add another hard drive if, for example, you have no free space and need more, so you buy another hard drive and connect it physically in your PC — laptop, tower, whatever. Then, to make it available, you have to format it; maybe you're familiar with FAT32 or NTFS. If you don't format it, it will not be recognized by the operating system, by Windows, so that's what you do: you format it, and after formatting you need a letter for the partition — you cannot have the same letter, like C:\, for both hard drives. This is similar to mounting in Linux. If I want to copy a file to this Windows PC with two hard drives, and both partitions or drives were recognized by Windows as C:\, the operating system would not know where to copy the file; whereas with different mount points, or different letters defining the partitions, like C:\ and D:\ and so on, the operating system can distinguish between the two and know exactly where you want to
copy the new file. The idea is that we will have to format this EBS volume and then define a mount point, and you will see exactly what I mean in a moment. What we need to do now is go to the EC2 dashboard, grab the IPv4 public IP, and connect from the terminal. So let's connect to the EC2 instance: ssh -i, then the .pem file (the private key), then ec2-user@ and the public IPv4 address — and yes, here I am on the EC2 instance. Before we start any configuration, we need to see exactly what the current status is. We will use the lsblk command — ls comes from list and blk comes from block. We have two block devices: xvda, the root volume, eight gigs, and another one, xvdf, which we have just attached to this EC2 instance. The mount point, as you can see in the last column, is / for xvda — this is, let's say, the root point you can mount on — while for xvdf we currently have nothing; it is blank. What we will do now is define a file system on xvdf in order to make it usable for the EC2 instance. Type sudo su, and as you can see the prompt has changed to root — we have gone from ec2-user to root, which means we now have full control over this EC2 instance. Let me clear the screen and continue. We can run file -s /dev/xvda and see some information there, that's fine; with lsblk and file we can see it carries the XFS file system — good. Now, if you do the same for xvdf, you can see that nothing is present: we currently have no file system defined on this new volume. So let's define it now. We'll do this with mkfs — make file system — then we define the type, -t, and this is xfs, the same as the first volume, the root one, and then we say which volume we actually want to define
the file system for, and this is /dev/xvdf. Great, so we have gone through this step as well. Now, I said that we have to mount this new volume — again, as an analogy with Windows, we have to give it a letter, like D:\. How are we going to do that? We will use the mount command, saying what we want to mount — /dev/xvdf — plus a mount point, and for that we first create a folder: mkdir, make directory, and let's call it /awsccp. All right, now we mount: mount /dev/xvdf /awsccp. Clear. Let's look at the list block command, lsblk, and as you can see, for xvdf — let's say partition, though the correct terminology is volume — we now have a mount point, /awsccp. Great. Now let's do some testing: let's create a file with nano, something like a text file, textfile.txt, and write "this is a text file", Ctrl+X, yes. ls — where are we now? We are in /home/ec2-user. Let's list the contents of /awsccp, and here it is, the text file. We can see its content with cat and the file name: "this is a text file". Great, so it looks like we have a new volume attached. lsblk: we have xvda, the root volume — both devices are EBS type — and we have xvdf, where we have defined the file system, XFS, the same as for the first partition, the root one, and we have also defined a mount point. Now, as I mentioned earlier when talking about storage and pricing with AWS, anything you leave in your AWS account and do not terminate or delete will come with some charges, some costs for you. So what we'll do now is unmount this specific volume, the xvdf one, and then detach it from the EC2 instance.
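The format step above can be tried safely on a plain file instead of a real device. This sketch formats a small disk image — no root is needed for the format itself; the mount from the video does need root and a real device, so it stays a comment. It uses ext4 rather than xfs, since mkfs.xfs may not be installed everywhere:

```shell
export PATH="$PATH:/sbin:/usr/sbin"  # mkfs tools live in sbin on many distros

# Create a 64 MB file that stands in for the attached EBS device
dd if=/dev/zero of=ebs.img bs=1M count=64 status=none

file ebs.img                     # reports just "data" -- no file system yet,
                                 # exactly like the blank xvdf in the video

# Put a file system on it (the video uses: mkfs -t xfs /dev/xvdf)
mkfs.ext4 -q -F ebs.img

file ebs.img                     # now reports an ext4 file system

# On the real instance, as root, you would then mount it:
#   mkdir /awsccp
#   mount /dev/xvdf /awsccp
```

The takeaway is the order of operations: a freshly attached volume is raw blocks, a file system makes it usable, and a mount point makes it reachable from the directory tree.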
And then we will delete it, right? So let's do that right now. Let's unmount it: umount /awsccp. Good. lsblk: we now have no mount point for the second volume, xvdf, which means that, since it's no longer mounted, we can detach it from the EC2 instance and then actually delete the volume. Now let's switch to the AWS Management Console. We are in Instances; let's scroll down to the Elastic Block Store category and click on Volumes. We have two volumes here; the new EBS volume, which is 10 GiB, is the first one, so I'm selecting it, going to Actions, and choosing Detach Volume, and yes, detach it. This is the moment where, if you wanted to, you could take this volume and attach it to another EC2 instance. That's not the case for us; we will just delete it once it's available for deletion. So let's wait for the state to change from "in use" to "available". Great, the state is now "available", so with the volume still selected I'll go to Actions, choose Delete Volume, and yes, delete it. It is now in the "deleting" state, and once it's done it will no longer show in this list. Thank you, and see you in the next section.

In this section we will cover Amazon Simple Storage Service, or S3, basics. Amazon S3, or Amazon Simple Storage Service, provides object storage through a web service interface: with Amazon S3 you can store and retrieve any amount of data, at any time, from anywhere on the web. So first of all, it is a web storage service, which means you can store files, videos, pictures, documents, whatever, but you cannot store an operating system on it; that is what Elastic Block Store is for. In this section we will focus on Amazon S3 concepts and Amazon S3 features, and we will wrap up with Amazon S3 pricing information, which is very important for your Certified Cloud Practitioner exam. First, let's start with a bucket. A bucket is a container for objects stored in Amazon S3, and every object in S3 is contained in a bucket.
Here is how the URL looks for any object stored in Amazon S3: https://, then the service, s3.amazonaws.com, then the bucket name, then the object name. I would like you to think of a bucket as a folder where you can store objects or files; we will see that in the next section when we define a bucket. Bucket names are globally unique, which means you cannot have two buckets in AWS, anywhere in the world, with the same name. Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata: object data is the actual data, while metadata is data about the data. For example, if you upload a file to Amazon S3, you get metadata like you can see on the screen now: when it was last modified, what the storage class is, whether it has any ETags, whether it has server-side encryption configured, what the size is, and so on. When you create an object, you specify the key name, which uniquely identifies the object in the bucket; every object in a bucket has exactly one key, and as you can see here, it is actually the name. In this case I have uploaded picture.png to Amazon S3, and you can also see the object URL; the key equals the name. Now let's talk about what Amazon calls the data consistency model, so please pay attention. Amazon S3 provides read-after-write consistency for PUTs of new objects in your S3 bucket, in all regions; this means you can access the object immediately after it was copied into an S3 bucket. Let's have an example: if you upload a document to an S3 bucket, it will be available for reading, so for displaying its content, in another region around the world immediately. Now, this is totally different from eventual consistency. Amazon S3 offers eventual consistency for overwrite PUTs and DELETEs, in all regions, and this means that if you update or delete an object in an S3 bucket, the change will eventually be propagated and visible to everyone.
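The path-style URL pattern described above can be assembled with simple string interpolation; the bucket and key names below are just examples in the style of this course.

```shell
# Path-style S3 object URL: https://s3.amazonaws.com/<bucket-name>/<key>
BUCKET="awsccp-v1"
KEY="picture.png"
URL="https://s3.amazonaws.com/${BUCKET}/${KEY}"
echo "$URL"   # → https://s3.amazonaws.com/awsccp-v1/picture.png
```

Because bucket names are globally unique, this URL unambiguously identifies one object in all of AWS.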
As an example, if you are in Seattle and you modify an existing object in your S3 bucket, the change will become visible to users in, let's say, Asia, somewhere far from the actual location. The change will be propagated; it will not take an hour, it will take something like one, two, or three seconds, so a short amount of time, but the change is only eventually propagated and visible to users in other regions around the world. Now let's talk about storage classes, also very important for your exam. Amazon S3 offers a range of storage classes for the objects that you store; you choose a class depending on your use case scenario and performance access requirements, and all of these storage classes offer high durability, which means that you will not lose your data. Each object in Amazon S3 has a storage class associated with it; as an example, as you can see here, the storage class is Standard for this specific object in S3. Let's go through the different storage classes that are available in Amazon S3. The first one is Standard: this is the default storage class when you upload any object to S3 and do not change the default settings. The Standard storage class is used for performance-sensitive use cases, those that require millisecond access times, and for frequently accessed data. The next category of storage classes is infrequent access, or IA, and we have two types here: Standard-Infrequent Access and One Zone-Infrequent Access, designed for long-lived but infrequently accessed data. Amazon S3 charges a retrieval fee for these objects, so they are most suitable for infrequently accessed data. This means that if you store data in the infrequent access classes or, as you will see in a moment, in archiving storage classes like Glacier, then retrieval will not be instantaneous.
It will take a couple of seconds, minutes, or hours, depending on the storage class, and you will also be charged for retrieving data that has been stored for a longer time. The Standard-IA and One Zone-IA storage classes are suitable for objects larger than 128 KB that you plan to store for at least 30 days. Now let's talk about Standard-IA: Amazon S3 stores the object data redundantly across multiple geographically separated AZs (Availability Zones), so objects are resilient to the loss of an Availability Zone; if one AZ fails, you will still have access to your data. This is not the case for One Zone-IA: Amazon S3 stores the object data in only one AZ, and the data is not resilient to the physical loss of that AZ. In other words, if the AZ fails, the data will not be available until the Availability Zone is restored and available again. Another category of storage classes is data archiving, where we have two options: Glacier and Deep Archive. Glacier: data stored in this storage class has a minimum storage duration period of 90 days and can be accessed in one to five minutes using expedited retrieval. Deep Archive: the minimum storage duration period is 180 days, so half a year, and a default retrieval time of up to 12 hours can pass until you have access to your data; this is the lowest-cost storage option in AWS. Now let's compare all of these storage classes. First is durability: you get eleven nines of durability, 99.999999999%, which means your data will not be lost; there is a very low probability that you lose anything you put in S3. In terms of availability, access to your data starts at 99.5% for the One Zone-Infrequent Access storage class and goes up to 99.99% for most of the other storage classes. Availability Zones: for One Zone-IA, just a short recap, the data is stored in one Availability Zone.
For the rest, data is stored in a minimum of three Availability Zones. Now some more Amazon S3 features and discussion. Let's talk about bucket policies. Bucket policies, or permissions, provide centralized access control to buckets and objects based on a variety of conditions. With bucket policies you can allow or deny permissions across all, or even a subset, of the objects within a bucket. Only the bucket owner is allowed to associate a policy with a bucket; so this is about defining permissions, who can access what. Now some interesting features: Transfer Acceleration and cross-region replication. Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket, using Amazon CloudFront's globally distributed edge locations. Let's have an example: a user somewhere in North America decides to upload a file to an Amazon S3 bucket in Melbourne. Because that is far away, it will take some time, and in order to accelerate the transfer you can enable the Amazon S3 Transfer Acceleration feature. The file is then uploaded to a nearby edge location, in this case Seattle, and from there the data is transferred quickly to the Amazon S3 bucket in Melbourne, which means the total time is going to be considerably less. If you do some testing in the AWS Management Console, you'll see that depending on how far the destination is, it can be up to three times faster. Amazon cross-region replication: cross-region replication enables automatic, asynchronous copying of objects across buckets in different regions. Again, an example: if I decide to replicate all the content of my Amazon S3 bucket in Seattle, I enable cross-region replication, and once a file reaches the S3 bucket in Seattle, it is immediately copied to the faraway bucket, in this case the Melbourne Amazon S3 bucket. Let's now wrap up this section with Amazon S3 pricing.
First, an overview: Amazon S3 is AWS's object storage service, built to store and retrieve any amount of data from anywhere. It is designed to deliver eleven nines, 99.999999999%, of durability, storing data for millions of applications. Amazon S3 provides the simplicity and cost-effectiveness of pay-as-you-go pricing, and you are already accustomed to this: you pay only for the storage you use, and there is no minimum fee. Let's now go through estimating your costs. The total price you will pay per month depends on the following. Storage class: you would start with the Standard storage class, then probably move your data to Standard-Infrequent Access in order to reduce your cost by storing less frequently accessed data there, and move to S3 Glacier storage for archiving data at very low cost. Storage cost also depends on the number and size of objects. You are also going to be charged for requests: GET requests come with charges, which means that when users and clients access your data in Amazon S3 they perform GET requests, and you are charged for those. Also, data transfer: you are going to be charged for the amount of data transferred out of an Amazon S3 region. This is something I have mentioned in previous sections as well: anything that leaves the Amazon cloud in general, you are going to be charged for. Amazon Glacier starts at $0.004 per gigabyte per month. Amazon Glacier allows you to archive large amounts of data at a very low cost; you pay only for what you need, with no minimum commitments or upfront fees, and other factors determining pricing include requests and data transfers out of Amazon Glacier. So incoming data transfers are free; outgoing transfers, again, are charged. Amazon Snowball: with AWS Snowball you pay a service fee per data transfer job, plus the cost of shipping the appliance.
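As a quick illustration of the pay-as-you-go arithmetic, storing, say, 500 GB in Glacier at the $0.004 per GB-month rate quoted above works out like this (the 500 GB figure is just an example, not from the course):

```shell
GB=500        # example amount of archived data
PRICE=0.004   # Glacier price per GB-month quoted above
COST=$(awk -v gb="$GB" -v p="$PRICE" 'BEGIN { printf "%.2f", gb * p }')
echo "Monthly Glacier storage cost: \$${COST}"   # → Monthly Glacier storage cost: $2.00
```

Retrieval requests and data transferred out of Glacier would be billed on top of this storage figure.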
Remember, the Snowball is the actual suitcase that you can get from AWS in order to quickly move your data into AWS. Each job includes the use of a Snowball appliance for 10 days of on-site usage for free; anything more than 10 days and you will be charged. Data transfer into Amazon S3 is free; data transfer out of Amazon S3 is priced per region. Just as a fact, and you will not be tested on this in the exam: the 50 TB Snowball will cost you $200 per job, and the 80 TB Snowball will cost you $250. Thank you, and see you in the next section.

In this section we will create a bucket in AWS and also run some operations with objects, or files. So let's switch over to the AWS Management Console and get started right away. All right, I have logged into the AWS Management Console, and before we start, please note that we are currently working in US East 1, Northern Virginia. This is really important, because if we now click on Services and go under Storage to S3, you will see that the region indicator changes and now says "Global". S3 does not require region selection, and this is something that you really have to bear in mind, and keep there for your exam as well. Another AWS service like this is IAM, Identity and Access Management: that service does not require any region selection either. Now let's create our first bucket. I'll click Create Bucket, and the very first thing you need to do is enter a bucket name. You must enter a DNS-compliant bucket name, and the name has to be unique: for example, if I type "test" and click Next, it says that the bucket name already exists, so somebody has already created a bucket with this name. I'll try "awsccp"; that bucket exists too, so let's add "-v1", version one. I'll leave the region as it is: let's imagine we have some applications, some servers, that will use data stored in this bucket, and the servers are deployed in Northern Virginia, so that's why I'm leaving this region selected.
Let's configure some options now. I could enable versioning, to keep multiple versions of our documents in the same bucket; this is not what we need, just know that it exists. We could also enable server access logging, add tags, enable object-level logging for even deeper auditing, or enable default encryption for this bucket, but we will not change anything; we'll keep the defaults and click Next. In terms of permissions, there are the Block Public Access bucket settings: if I leave "Block all public access" checked, it will be very hard to enable access to this bucket, and I want to do some testing after I upload the documents, so I will untick it and click Next. Now we can review our selections and create the bucket. We now have our very first bucket in AWS, awsccp-v1, and it is similar to a folder. If I click on it, I immediately get the Properties, Permissions, and Management tabs; on the right I can also choose "Edit public access settings", and as you can see, the Access column says that objects "can be public", which means they are not necessarily public. Let's explore the options a little. "Edit public access settings" just takes me back to the page I saw when I created the bucket, so nothing really new there. Inside the bucket we have Properties, Permissions, and Management. Clicking on Properties shows the different features I can enable for this bucket: versioning, to keep multiple versions of an object in the same bucket; server access logging; static website hosting, which is very nice, hosting a static website that does not require server-side technologies; object-level logging; default encryption; and some advanced settings like Transfer Acceleration, which we talked about in the theoretical lecture as well. In terms of Permissions, these are the Block Public Access settings, which we already know.
We can also add a bucket policy and paste it here; this is more advanced configuration, and we'll address it in the associate-level course. The CORS configuration is nothing to bother with for now. Under Management we can add some management features for this bucket, for example a lifecycle rule: while the objects start in the Standard storage class, after some time I can transition them to, let's say, Infrequent Access, and maybe to Glacier or Deep Archive later. There is also replication, the cross-region replication feature we mentioned, plus analytics, metrics, and a kind of inventory. So these are the main features of the bucket that we can explore for now. We can also go and upload some content: on my desktop, in an AWS CCP folder, I have prepared two objects, textfile.txt and the Cloud Practitioner badge, so I will choose these to upload. I could upload them right away, or I can go through the options, the same way I did for the bucket: for example, in the properties I can select which storage class I want, Glacier, Glacier Deep Archive, or just leave the default, Standard. Let's click Next: it is not encrypted, there is some metadata and a tag here, and the storage class is Standard, so I'll now just click Upload, and in a moment I should have two objects here. There they are: cloud-practitioner.png and textfile.txt. If I click on cloud-practitioner.png, I get the object URL, and if I copy it and try to access it, it says that access is denied. Why is that? Let's investigate a little. In order to make this object public, I can choose "Make public", and there it is: if I now click on the link, here is the AWS Certified Cloud Practitioner badge. Now let me go back and look at the second object, the text file. Another way to make it public would be to go to Permissions, click on "Everyone", and enable "Read object".
All right, this object will now have public access, so I click Save, and if I go to the text file and click the object URL, https://s3.amazonaws.com/, then the bucket name, awsccp-v1, and then the key of the object, the name textfile.txt: here it is, "this is a text file". One last thing before we wrap up this section: the properties, which this time relate to the object, not to the whole bucket: storage class, encryption, metadata, tags, and object lock; these are things I can configure specifically for this object only. In terms of permissions, I, as the account owner, have read-object and write-object permissions, while in terms of public access, everyone has permission only to read the object and nothing more. Thank you, and see you in the next section.

In this section we will work with the AWS CLI and copy a file from the EC2 instance to Simple Storage Service, or S3. All right, first, let's SSH to the instance: ssh -i, then the .pem file, then ec2-user@ and the public IP, and I should be connected. Perfect. Next, I have to create a file in order to upload it to S3. We have no files yet, good, and I'm in /home/ec2-user. Good. Let's use nano, and because I have mentioned quite a few times previously about copying snapshots from EC2 to S3, let's create a file just like that: snapshot.txt, containing "this is my first snapshot, follow-ups only incremental". Good; Ctrl+X, then yes, and Enter. Now ls lists snapshot.txt, and if I want to see the file, cat snapshot.txt. Great. Now I would like to upload this file, and this is basically a simulation of uploading a real snapshot of the EC2 instance into S3. I want to upload it into my S3 bucket, so let's see how we can do that. The command is aws s3, so that is the service, then cp for copy, and then I have to specify the destination:
s3://, then the bucket name, which was awsccp-v1; and I forgot to say exactly what to copy: cp snapshot.txt. If I run this, it says "unable to locate credentials", so it needs credentials to log in to S3. If I just say aws s3 ls, to list the content in Simple Storage Service, and hit Enter, it says "Unable to locate credentials. You can configure credentials by running 'aws configure'." So let's do that: aws configure, Enter, and now I'm asked for the AWS access key ID. This refers to what we saw when defining a user with programmatic access in Identity and Access Management, so let's go to IAM and grab the access key ID and secret access key. All right, under recently visited services I'll click on IAM, then go to Users. I have here user1, which is in the administrator access group; that's good, it will have permission to copy a file from the EC2 instance to S3. I click on the user, and under Security Credentials I can see the access key ID, but I no longer see the secret, so in order to have it available I will make this key inactive, or just click here to delete it, and choose Create Access Key again. The access key ID and secret access key are available only when you actually create them, so I'll click Show, and I will also download the CSV file. I paste the access key ID and press Enter, then copy the secret access key and paste it as well. For the default region we are using us-east-1, and nothing for the output format; Enter. Now if I say aws s3 ls, I can see the bucket awsccp-v1. That's good. What I need now is to copy the file, but I can also create a bucket, and I try this: aws s3 mb awsccp-v2, let's say version two, and see if it creates the bucket. No, the format is not right.
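The credential setup and the upload above can be summarized as the following CLI session. This is a sketch, assuming an AWS account, an IAM user with programmatic access, and this lab's bucket name; it will not run without those.

```shell
# One-time setup: paste the access key ID and secret access key created
# for the IAM user (they are shown only once, at key-creation time)
aws configure
#   AWS Access Key ID [None]: AKIA................
#   AWS Secret Access Key [None]: ....................
#   Default region name [None]: us-east-1
#   Default output format [None]:

# List the buckets in the account, then copy the file into the bucket
aws s3 ls
aws s3 cp snapshot.txt s3://awsccp-v1
```

The keys typed into aws configure are stored on the instance; the role-based alternative later in this section avoids that.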
I have to say s3:// and then the bucket name: make bucket. If I run aws s3 ls again, I can now see two buckets in my AWS account. Good. Now let's copy the file to one of the buckets. I have snapshot.txt here, so: aws s3, the command is cp for copy, what I want to copy is snapshot.txt, and the destination is s3:// plus the name of the bucket, so let's try awsccp-v1, and it says that it has uploaded the file. Let's take a look in the web console: I'll go to the AWS home page, then to S3, and look in awsccp-v1, and here it is, snapshot.txt. If I click on it, it's obviously not accessible yet, because I first have to choose "Make public"; then I can access it, but that's a different matter. So, coming back, this is how you work with buckets: aws s3 and then the command, cp for copy or mb for make bucket, and so on. We have now accessed S3, Simple Storage Service, from EC2 using the credentials of a user that has been configured with administrative access. Another option would be to use roles; we talked about roles at the beginning of this module. First, let's delete the credentials that I have defined on this EC2 instance. We can do this by going to the folder which holds the credentials: if I look at the credentials file, I can see exactly the access key ID and the secret access key that I defined earlier. So I get out of that folder, ls, OK, I'm here, and I run rm -rf on it, removing the folder recursively and forcibly. Let's try to access it again: the folder is gone, and if I say aws s3 ls, it now says "Unable to locate credentials. You can configure credentials by running 'aws configure'." What we can do now is attach a role that has permissions to S3 in order to do the same thing. So let's go to the AWS Management Console, go to Services, and under Security, Identity & Compliance go to IAM, then to Roles. We have some roles here.
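The credential cleanup above boils down to a few commands; this is a sketch, assuming the keys were stored by aws configure in the default location (note that rm -rf is destructive):

```shell
# Credentials written by `aws configure` live in ~/.aws
cat ~/.aws/credentials   # shows aws_access_key_id and aws_secret_access_key

# Remove the folder so no long-lived keys remain on the instance
rm -rf ~/.aws

# With no stored credentials and no role attached, this now fails:
aws s3 ls   # "Unable to locate credentials..."
```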
Actually I have one already, but I will create another one so that you can see the exact process. Choose the service that will use this role: it will be an EC2 instance, and I click Next. For permissions, I search for S3 and attach AmazonS3FullAccess, then Next; no tags. The role name will be S3_administrator_access, with underscores, and just Create Role. This role, S3_administrator_access, allows EC2 instances to call AWS services on your behalf. Great; now we have to attach this role to the EC2 instance. Let's go to Services, then to EC2, just under Compute, and open our running instance, our web server. Click on Actions, then Instance Settings, and here is "Attach/Replace IAM Role", listing the IAM roles you have defined. The IAM role is S3_administrator_access; Apply and Close. And here it is, the IAM role: this is something you can also define when you launch the instance, but we hadn't defined one then, so we have attached it now. Coming back to my AWS CLI, if I retype the command again, I now have access to these buckets, awsccp-v1 and awsccp-v2.
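The role was attached through the console here; for reference, the equivalent CLI call would look roughly like this. The instance ID is a placeholder, and this assumes an instance profile with the same name was created alongside the role, as the console does automatically.

```shell
# Associate the instance profile wrapping the S3_administrator_access role
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=S3_administrator_access

# The instance can now reach S3 without any stored credentials
aws s3 ls
```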
Now let's stop for a bit before we wrap up and think: why are roles more secure? Why would you go for roles and not use a user that you have previously defined? If, for example, the EC2 instance gets hacked, it means the hacker has access to your credentials; when using roles, as you see now, I have no credentials stored on this machine, it is simply using a role. And if you recall, we talked in the theoretical lecture about how roles are the same as users and groups in that they perform authentication, and not storing the credentials on the EC2 instance makes sense from a security perspective, as you can understand. Thank you, and see you in the next section.

[Music] This concludes module 4, AWS Core Services: the backbone of AWS. Congrats on your progress in the course; you have really learned a lot in this module. Before sitting the AWS Certified Cloud Practitioner exam, please make sure you are comfortable with the AWS core services covered in this module. Let's now go over the most important topics covered in this module and the exam hints. We started our discussion in this module with Identity and Access Management, or IAM. AWS Identity and Access Management helps you securely control access to AWS resources: you use IAM to control who is authenticated, which means signing in, and authorized, which grants permission to use resources. IAM authentication and authorization have both been discussed in this module: we talked about user, group, and role, which relate to authentication, and the policy document, which relates to authorization. A user is a permanent named operator; it can be a human or just another AWS service, and, very important, it has permanent authentication credentials. A group is a collection of users, and usually contains multiple users; a user can belong to multiple groups. A role is an operator as well, just like a user; a role can also be a human or another AWS service, but it has temporary authentication credentials.
And last, policy documents enforce authorization, in other words they handle permissions, for the user or AWS service that is authenticated. Next, we also talked about VPCs, Virtual Private Clouds. A Virtual Private Cloud, or VPC, is a virtual network dedicated to your AWS account, for example 10.0.0.0/16. Amazon Virtual Private Cloud enables you to launch AWS resources into the virtual network that you define; an AWS VPC is literally your data center in the AWS cloud. Next on our list is Elastic Compute Cloud, or EC2. AWS Elastic Compute Cloud provides scalable computing capacity in the Amazon Web Services cloud. These are literally the virtual machines in AWS that you will build, and please note that the correct terminology, which you should use both for the exam and in the real world, is "instances", not "virtual machines". There are four ways to pay for Amazon EC2 instances: On-Demand Instances, Reserved Instances, Spot Instances, and Dedicated Hosts. With On-Demand Instances, you pay for compute capacity per hour or per second, depending on which instances you run. Amazon EC2 Spot Instances allow you to request spare Amazon EC2 computing capacity at up to a 90 percent discount off the On-Demand price. Amazon EC2 Reserved Instances provide you with a significant discount, up to 75 percent, compared to On-Demand Instance pricing. An Amazon Dedicated Host is a physical EC2 server dedicated to your use: this means you will not share this resource, a Dedicated Host, with another customer; it is only for you. We also talked about security groups, or SGs. AWS security groups act as a virtual firewall for your EC2 instances, controlling inbound and outbound traffic. Security groups enforce security at the instance level, this is very important, and not at the subnet level; different EC2 instances can have different security groups applied. In a security group, you add rules that control inbound traffic to instances and separate rules that control outbound traffic.
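The lab configured security group rules in the console, but for illustration, an inbound rule can also be added from the AWS CLI. This is a sketch: the security group ID below is a placeholder, and it assumes working CLI credentials.

```shell
# Allow inbound SSH (TCP port 22) from anywhere on a security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 0.0.0.0/0

# Review the group's inbound and outbound rules
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0
```

In production, the 0.0.0.0/0 source would normally be narrowed to a known address range.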
Elastic Block Store, or EBS: Amazon Elastic Block Store provides block-level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone; we saw in the lab that you create an Amazon EBS volume in the same Availability Zone in order for it to be available for attachment to the EC2 instance. EBS volumes attached to an EC2 instance are exposed as storage volumes that persist independently of the life of the instance. There are a couple of types of EBS volumes. The SSD volume types: General Purpose SSD, or gp2, the best balance between price and performance; and Provisioned IOPS SSD, or io1, the highest-performance SSD volume you can get with AWS. Then the legacy ones, the HDDs, or hard drives: Throughput Optimized HDD, or st1, which is good for low-cost, frequently accessed, throughput-intensive workloads; and the last one, Cold HDD, or sc1, the low-cost choice for less frequently accessed workloads. Now, in terms of pricing, very important for your exam: Amazon EBS pricing depends on the following. Volumes: the total storage of all EBS volumes, charged in gigabyte-months. Snapshots: the total snapshot storage consumed in Amazon S3; EBS snapshot copying between regions is also charged. And please know that inbound data transfer is free, while outbound, out of Amazon, is charged. Last, we talked about Simple Storage Service, or S3. Amazon S3, or Amazon Simple Storage Service, provides object storage through a web service interface: with Amazon S3 you can store and retrieve any amount of data, at any time, from anywhere on the web. You can use S3 to store files, documents, pictures, videos, whatever; this is what I am referring to when I say object storage, and it is not suitable for operating system storage, which is what Elastic Block Store, or EBS, was built for.
Now, for the Amazon S3 key concepts, let's first talk about the container: a bucket is a container for objects stored in Amazon S3, and every object in S3 is contained in a bucket. I would like you to think of a bucket as a folder where you can store objects or files. Very, very important: bucket names are globally unique, and this is what Amazon calls DNS-compliant. In terms of the data consistency model, Amazon S3 provides read-after-write consistency for PUTs of new objects in your S3 bucket, in all regions; this means you can access the object immediately after it was copied, or put, into an S3 bucket. Amazon S3 offers eventual consistency for overwrite PUTs and DELETEs, in all regions, meaning that if you update or delete an object in an S3 bucket, the change will eventually be propagated and visible to everyone around the world. In Amazon S3 we also talked about storage classes. The Standard storage class is used for performance-sensitive use cases and frequently accessed data. Then we have the infrequent access storage classes, with two types, Standard-IA and One Zone-IA: these are designed for long-lived but infrequently accessed data, and you should plan to store the data for a minimum of 30 days. With Standard-IA, Amazon S3 stores the object data redundantly across multiple geographically separated AZs, so objects are resilient to the loss of an AZ. With One Zone-IA, Amazon S3 stores the object data in only one Availability Zone, and the data is not resilient to the physical loss of that Availability Zone: if the AZ fails, you will not be able to access your data. Next, the data archiving storage classes. Glacier: data stored in this storage class has a minimum storage duration period of 90 days and can be accessed in one to five minutes using expedited retrieval. The last one is Deep Archive: a minimum storage duration period of 180 days and a default retrieval time of up to 12 hours; this is the lowest-cost storage option you can get in AWS.
can get in AWS we have also talked about two cool features in AWS S3 you could speed up the transfer of any files that you would upload in an S3 bucket that is far far away enabling transfer acceleration the main point is that you would upload the um the file to a location that is near to you to an edge location and then AWS will use the CDN Network in order to speed up the transfer of your file into the far distance S3 bucket now the next one is the cross region replication so you can Define here that the content in your S3 bucket should be replicated once any file arrives there to another bucket and it was an example with Seattle and Melbourne S3 buckets the last thing to talk about is Amazon S3 pricing Amazon S3 provides the Simplicity and cost effectiveness of pay-as-you-go pricing you pay only for the storage you use with no minimum fees price will depend on the following storage class so you upload your files first in the start in the standard storage class and then your files can be moved to infrequently accessed or deep archive storage classes if you choose to do so you'll also be charged based on stories of number and size of objects based on requests you'll be charged with getrequest that come from your clients and you'll also be charged on the data transfer so this is a rule of thumb for AWS data transferred out of Amazon S3 will come with charges for you so in our next module we will start to Deep dive on each of the AWS key services with a real Hands-On and practical approach so please be ready to use your AWS account extensively with that said please join me in our next module module 5 AWS key services that you need to know following the same approach as in this module really Hands-On and I do think it's going to be a lot of fun thank you and see you in the next module [Music] welcome to module 5 AWS key services that you need to know this module covers AWS key Services relevant to the certified Cloud partitioner exam highlighted in the official exam 
blueprint. We will start this module covering the AWS Route 53 service and continue with CloudFront, Application Load Balancer and Auto Scaling. For every topic covered in this module we will first lay down the foundation from a theoretical perspective, and then we will jump into the AWS Management Console or the AWS CLI for hands-on labs. By the end of this module you'll have a good understanding of, and hands-on experience with, services like Relational Database Service, AWS Lambda, Elastic Beanstalk, CloudFormation, Simple Notification Service and also AWS CloudWatch. We will wrap up module 5 after going through a fast recap of all topics covered in this module and exam hints relevant for the Certified Cloud Practitioner exam. With that said, let's get started. In this section we are going to talk about the Route 53 AWS service, but it's honestly going to be more of a discussion about DNS, because that is what Route 53 is: AWS's DNS service. DNS stands for Domain Name System and acts as the phone book of the internet. When you want to call someone and you don't know the phone number, you look it up in the phone book, the Yellow Pages or something similar, and you find there an association between the person's name and the phone number. As an example, I'm looking for William Stone's phone number, and I can find in the phone book that it is +1 (for the USA, international) 123 456 7890, and then I can make the call: you can call Mr. William because now you know this gentleman's phone number. DNS solves a similar problem, but for the worldwide internet. Let's have an example. This user is trying to access aws.amazon.com, and because in the IT world we communicate using IPv4 or IPv6 addressing, this user needs to know the IP address, again IPv4 or IPv6, of aws.amazon.com. Because the laptop doesn't know that, it will ask something that is called a DNS server, and the DNS server keeps a mapping between the name, in this case aws.amazon.com, and a real public IPv4 address; that is actually a real IP address, I just found it by pinging aws.amazon.com. So the laptop asks the DNS server: hey Mr. DNS server, what's the IP address corresponding to this name? The DNS server handles this request and returns the IP address, which means the user can now access the specific page, aws.amazon.com. And that's all you need to know about DNS. Now, returning to Amazon Route 53: Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to internet applications by translating names like example.com into the numeric IP addresses, like 192.0.2.1, that computers use to connect to each other. What else does Route 53 do? You can use Amazon Route 53 to register a domain name, and this is something we will do in the next section: we will register a new domain and we will also deploy a static website. Now, in terms of pricing, Amazon Route 53 pricing is based on two things. The first is something called a hosted zone; you'll see in the next section exactly what it is, although it's not relevant for the Certified Cloud Practitioner exam. You will pay 50 cents per hosted zone per month (anything in Amazon is on a monthly basis). You are also going to be charged based on the number of queries: 40 cents per million queries per month. Thank you and see you in the next section. In this section I will register a domain on AWS Route 53 and then use that domain to create a static website on Amazon S3, so let's switch over to the AWS Management Console and get started. Please click on Services, then go down to Networking and Content Delivery, and here it is, AWS Route 53. Click on Route 53, and then I can select from these four options. For our first step we need to register a new domain, so: domain registration, yes, let's get started. Now I have to either register a domain or transfer one; we want a new domain, so I'll click on Register Domain. Now I can choose a domain name, for example awstrainingbootcamp.com. Yes, this is it, awstrainingbootcamp.com. Let's check the availability, and it looks like the domain is available, great. Before we start with the registration, please note that in order to serve content from an S3 bucket and basically transform that bucket into your website, you need a bucket with the same name as the domain you register, so in my case awstrainingbootcamp.com. Before I go ahead and do the registration, I will check whether that bucket name is available. So I'll right-click on Services, open a new tab, and go to S3. This is where you search for a specific bucket: click on Create Bucket, enter the DNS-compliant name, and let me see if it's available. It looks like it is, so awstrainingbootcamp.com, and I will click on Create. The first thing we can see for this specific bucket is that it says "bucket and objects are not public", and we want to change this: either select it, go to Edit Public Settings and click Save, or just enter the bucket, go to Permissions, and here it says On; I'll click on Edit, uncheck it, and Save. Again I have to type "confirm", so here it is, and confirm. It should be good now: public access settings have been updated successfully, great. Let's click on Overview and go back to Amazon S3. Now it says that objects can be public, which really doesn't mean that objects will be public. What you need to do is go to Permissions and then to Bucket Policy. A bucket policy uses a JSON-based access policy language to manage advanced permissions on your Amazon S3 resources, and if you search in the AWS documentation you will find this one, "Permissions required for website access": the sample bucket policy there grants everyone access to the objects in the specified folder. This is something that we need, so I will copy it; the bucket policy will also be available for download if you want to do the registration yourself. Let me get back to S3, go to Bucket Policy, paste it there and click Save. The policy has an invalid resource, which means I have to put the Amazon Resource Name (ARN) here: I will copy it and replace the example bucket, paste, and try again, Save. And this is totally different: "This bucket has public access. You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket." But this is exactly what we want, because we want to transform this bucket into a website. Going back to Amazon S3, we can see that the Access column has changed: the status is no longer "objects can be public", the access is really public now. Before we continue with the domain name registration, let's do this: I will go into the bucket and upload some files. I'll go into my folder, into the website folder, take these files, choose them, and say Upload, so these are going to be uploaded, perfect. One more thing I forgot to do, in Properties, very important: this is the feature that we will use, static website hosting, so hosting a static website which does not require server-side technologies. If you click on it you can say "use this bucket to host a website"; the endpoint is used as the website address, and you can see that you can specify two documents. I have uploaded index and error.html.
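The public-read policy applied above can also be sketched in code. Below is a minimal Python sketch, assuming the bucket name from this walkthrough (awstrainingbootcamp.com), that builds the same kind of JSON document as AWS's sample policy for static website hosting; you would paste the printed JSON into the Bucket Policy editor.

```python
import json

BUCKET = "awstrainingbootcamp.com"  # bucket name used in this walkthrough

# Public-read bucket policy modeled on the AWS sample for static website
# hosting: everyone ("Principal": "*") may GET any object in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            # The ARN must point at the objects (/*), not the bucket itself;
            # this is exactly the "invalid resource" fix from the console.
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Note the `/*` at the end of the resource ARN: replacing only the bucket name but dropping the object wildcard is the usual reason the console rejects the policy.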
For index.html, it says that this is the home or default page of the website, and the second one is returned when an error occurs. You can name these whatever you want, but I have set the first one to index.html and the second one to error.html. I'll click Save, and yes, bucket hosting is now ticked, so everything looks great. Coming back: this is public, we're hosting in us-east-1, Northern Virginia; here I have index.html and error.html, and these are the two PNGs, the pictures, that are available. Now let's get back to Route 53 and continue. I'll add to cart and click Continue, and now I have to fill in all of the contact details. For this first domain, my registrant, administrative and technical contacts are the same, so let me just fill these in. All right, registration was successful, as you can see here: thank you for registering your domain with AWS Route 53, your registration request for the following one domain has been successfully submitted, and here is the domain, awstrainingbootcamp.com. So what next? Going to Domains, it says that domain registration is in progress, and it should be fulfilled in a couple of hours, maybe 24 hours, although the previous page said it could take up to three days. Anyway, I will just pause the recording and come back once I receive the confirmation from AWS. All right, I'm back in the AWS Management Console. Honestly, AWS services have improved a lot and are continuously being improved: it took around 15 minutes until I received the confirmation from AWS. Let me refresh the page and go to Registered Domains. It is no longer in pending requests; awstrainingbootcamp.com now shows up in Registered Domains. If I click on awstrainingbootcamp.com I can go ahead and manage DNS. I'll select it and choose Create Record Set, which is what we want. For awstrainingbootcamp.com I will create an alias; this is a Type A DNS record, which means I now have the possibility to select a target, and the target is actually the S3 bucket. I will select it and say Create, and here I have the DNS record: it is Type A, an alias pointing to this bucket. Going back to the previous section, we went through an example: if a user tries to go to awstrainingbootcamp.com, that request will be forwarded to the S3 bucket, which is now serving static content to the user. So let's try to connect to the awstrainingbootcamp.com website. Enter, and here it is: welcome to the AWS Certified Cloud Practitioner training bootcamp. If I instead request something that is not okay, some path that doesn't exist on the site, and press Enter, then I am presented with the error.html content: resource not available. Pretty much that's it with registering a new domain with AWS Route 53, then going to S3, creating a bucket with the same name as the domain name, making it public, and configuring it with the static website hosting option. That is everything you should do if you want to host a static website with AWS. All in all, you don't have to do this in order to pass the Certified Cloud Practitioner exam; it's just a great example of how you can use different AWS services to create cool stuff in the cloud. Thank you and see you in the next section. In this section we are going to talk about CloudFront basics. This topic is not new: we addressed CloudFront and also the AWS content delivery network in module 2.
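Before we dive into CloudFront, one note on the alias record we just created in Route 53: the same record can be expressed as the change batch you would pass to the Route 53 API or CLI. This is a hedged sketch: the `DNSName` and the hosted-zone ID `Z3AQBSTGFYJSTF` (the fixed zone ID AWS publishes for S3 website endpoints in us-east-1) are assumptions you should verify against the current AWS documentation for your region.

```python
import json

DOMAIN = "awstrainingbootcamp.com"

# Change batch for an alias A record pointing the apex domain at the
# S3 website endpoint, roughly what you would pass to
#   aws route53 change-resource-record-sets --change-batch file://alias.json
# Z3AQBSTGFYJSTF is the hosted-zone ID AWS lists for S3 website
# endpoints in us-east-1 (an assumption here; check the docs per region).
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": DOMAIN,
                "Type": "A",  # alias records to S3 are Type A
                "AliasTarget": {
                    "HostedZoneId": "Z3AQBSTGFYJSTF",
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }
    ]
}

print(json.dumps(change_batch, indent=2))
```

The console hides all of this behind the "Alias: Yes" radio button, but seeing the structure makes it clear why the bucket had to carry the exact same name as the domain: the alias resolves to a shared regional website endpoint, and S3 picks the bucket by the Host header.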
When we discussed the global infrastructure, we covered regions, Availability Zones and also edge locations. Amazon CloudFront is a web service that speeds up the distribution of your static and dynamic web content to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency, so that content is delivered with the best possible performance. So again: edge locations are AWS endpoints that cache content locally. Let's go through an example. Consider that a user comes online in North America, somewhere around Seattle, and requests a file that is stored in an Amazon S3 bucket all the way in Melbourne. The content will be delivered, that's for sure, but it will take some time. In order to accelerate this and provide a better user experience, you could enable the CDN, so CloudFront with the content delivery network, and when a user requests the same file, the content will be delivered locally through an edge location, meaning through the CDN, the content delivery network. That's all you have to know for the Certified Cloud Practitioner exam, and we can move on; more information about it, and about the global infrastructure, is at aws.amazon.com/about-aws/global-infrastructure. Some conclusions now: CloudFront helps you deliver your web content faster to your end users, thus providing a better user experience. CloudFront edge locations bring the web content closer to your viewers and make sure that popular content can be served quickly. If the content is not popular enough, it will be aged out, meaning it will no longer be cached locally at the edge location. CloudFront regional edge caches really help when content is not popular enough to stay at a CloudFront edge location, and improve delivery performance for that specific content.
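To make the hit-versus-miss behavior concrete, here is a toy, self-contained Python model of an edge location: the first request for an object pays the distant origin's latency, and later requests are served from the local cache until the entry ages out. The latency numbers, TTL and class name are made-up illustration values, not CloudFront internals.

```python
import time

ORIGIN_LATENCY_MS = 150   # e.g. fetching from a far-away S3 bucket (Melbourne)
EDGE_LATENCY_MS = 30      # e.g. serving from a nearby edge location (Seattle)

class EdgeCache:
    """Toy model of a CloudFront edge location with TTL-based aging out."""

    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (content, time cached)

    def get(self, key, fetch_from_origin):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0], EDGE_LATENCY_MS        # cache hit: served locally
        content = fetch_from_origin(key)            # cache miss: go to origin
        self.store[key] = (content, now)            # cache it for next time
        return content, ORIGIN_LATENCY_MS

edge = EdgeCache(ttl_seconds=60.0)
origin = lambda key: f"<contents of {key}>"

_, first = edge.get("index.html", origin)   # miss: fetched from the origin
_, second = edge.get("index.html", origin)  # hit: served from the edge cache
print(first, second)
```

Unpopular content simply stops being requested before the TTL renews it, so it ages out of the store, which is exactly the "popular content stays at the edge" behavior described above.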
In terms of CloudFront pricing: you have signed up for one year of the Free Tier account at AWS, and for every month during this year you will receive 50 gigabytes of data transfer out of AWS and 2 million HTTP or HTTPS requests. After this period you'll go with on-demand pricing, again per month: for example, for the first 10 terabytes you will pay something like 8 cents up to 25 cents per gigabyte, so this is extremely low cost. There are also other tiers, so after the first 10 terabytes you have the next 40 terabytes, and so on; you can find the complete list at aws.amazon.com/cloudfront/pricing. Thank you and see you in the next section. In this section we will create a CloudFront distribution and then test the website speed. I am now in the AWS Management Console, and before we configure the CloudFront distribution I would like to do some testing and see exactly what the current status is. This is the website, and it is functional: we can serve static content from the Amazon S3 bucket. I will now go into Web Inspector, and this kind of option is available in other browsers as well; I'm using Safari, and in Safari, if you select Network, you can see the current latency when you refresh the website. In this case it's 133 milliseconds, and I can keep on refreshing: it stays somewhere around 130 to 150 milliseconds. The idea is that if we implement the CloudFront distribution we should get less than that, and I've seen cases where latency between 300 and 400 milliseconds has been improved to 30 or 40 milliseconds, which is really great. It depends on the distance between you, the current user, and the S3 bucket where the content is stored. In order to proceed, please go to the AWS Management Console, go to Services, and down under Networking and Content Delivery, here it is, CloudFront, so please click on CloudFront. All right, here is the start page, Amazon CloudFront Getting Started. You just click on Create Distribution, and there are basically two steps: step number one, select the delivery method, and step number two, create the distribution. I will click on Get Started, and now I have to select the origin domain name. I can see the Amazon S3 buckets here; I have three buckets, and the one we are interested in is awstrainingbootcamp.com.s3.amazonaws.com. This is the S3 bucket where we defined the static website and where we put the content for this website. I will select it, and honestly, just for this testing, there's nothing else I have to modify here. There are also the distribution settings, where you select how much of the content delivery network and edge locations you'd like to use: you could say use only US, Canada and Europe; or US, Canada, Europe, Asia, Middle East and Africa; or all of them, which means the best performance but also the highest price you will pay. I will leave everything at the defaults, scroll down, and click on Create Distribution. It will take some time until the CloudFront distribution is created; currently it says the state is Enabled and the status is In Progress. When everything is done, in order to connect to our website through CloudFront, we will take everything in the Domain Name column and connect to the website with that specific, let's say, fully qualified domain name. So we will not use the domain we created, awstrainingbootcamp.com, but what we have as the domain name within the CloudFront distribution. What I'll do now is click on Distributions and wait until the status changes from In Progress; I will pause the recording and get back to you when it's done. All right, so deployment has
been successful. Everything has been deployed and we are ready to test: as you can see, the status is Deployed and the state, of course, is Enabled. I will click on this CloudFront distribution; I want to use the domain name, so I will take it and open a new tab in order to test the speed and latency of the website, delivered through CloudFront this time. Now I am using the CloudFront distribution domain name, and the page is loading just fine. Let's examine the latency: Develop, then Web Inspector, and let's look at the Time column. I will now just refresh, and as you can see it's around 30 milliseconds, as opposed to what we had when we were using the awstrainingbootcamp.com domain directly: 130, 160, 160 and so on. Thank you and see you in the next section. In this section we will cover Application Load Balancer basics, so let's first start with an overview. With AWS Elastic Load Balancing you can achieve fault tolerance for any application by ensuring scalability, performance and security. Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, for example EC2 instances. AWS Elastic Load Balancing supports three types of load balancers: Network Load Balancers, Classic Load Balancers, and Application Load Balancers, which is our subject in this section. If you want to search for a comparison between these types of load balancers, here is the link you can follow: aws.amazon.com/elasticloadbalancing/features. You'll see that there is a comparison on a per-feature basis, with the Application Load Balancer first, then the Network one, and then the Classic one. The Application Load Balancer is good for application traffic, as the name says, HTTP and HTTPS, and if you are familiar with the OSI model from the networking field, the Application Load Balancer really understands applications: you can define complex rules with URLs and things like that, so it is good when we need layer 7 visibility, as related to the OSI model. The Network Load Balancer is good for intensive traffic, and it has visibility up to layer 4, which means IP plus protocol and port number. The last one, the Classic Load Balancer, is now considered legacy and is not used that much. Let's now go through the components of the Application Load Balancer architecture. The load balancer is the single point of contact for the clients, and when I say clients I mean people, users from the internet, sending traffic to a specific application that sits behind a load balancer. The load balancer distributes incoming application traffic, let's say web traffic, across multiple targets, such as EC2 instances, in multiple Availability Zones, which results in increased availability of your application. The listener checks for connection requests from clients, using the protocol and port number that you have configured, and forwards these requests to one or more target groups; you will define rules for traffic forwarding, including target groups, conditions and also priorities. The target group, or TG, routes requests to one or more registered targets, such as EC2 instances, again using the protocol and port number that you have configured, and a target can be registered with multiple target groups, as you will see in just a moment. Health checks are run on all targets registered to a target group. Now, this may be confusing, or at least not that clear, if you're new to networking and to the cloud, so I think it's a good idea to go through a visual representation of all of these components. Let's say a user comes online and initiates HTTP, so TCP port 80, traffic, and it reaches an Application Load Balancer. Inside the Application Load Balancer, as you will see, we will configure a listener, and this listener is a process that listens for traffic arriving at HTTP port 80. If this is the case, the Application Load Balancer will forward the traffic to the listener, and within the listener you will configure a rule that says: if I receive any traffic from any user on HTTP port 80, then I forward that traffic to a target group configured for HTTP 80. Inside the target group you will have one or multiple targets, or registered targets, which in our case will be EC2 instances. We will also have something called a health check, and the health check is really useful for the listener to verify whether the targets are available to receive traffic: if one target, for example, is not available, then it makes sense for traffic not to be sent to that specific EC2 instance. That is, in a nutshell, everything you need to know about the Application Load Balancer. As a comparison, on the right side we have two target groups, configured for HTTP port 8080 and HTTPS 443, and we have a listener defined here with two rules: traffic arriving at the listener with 8080 as the port number will be forwarded to the left target group, and traffic that originates from any user and is destined for HTTPS 443 will arrive at the right target group. Also please note that we have a target, an EC2 instance, that sits right on the boundary between the two, and this is because a target can be a member of multiple target groups. Now, for our next section, let's also have a visual representation of what we'll configure in order to test these technologies and actually see how they work. We will configure two web servers, web server number one and web server number two, and these will be deployed in different Availability Zones in order to achieve high availability and redundancy for our web traffic. For our web servers we will configure an Application Load Balancer and a listener that waits for traffic arriving on port 80, so TCP 80 or just simple HTTP, and we also have a rule here saying: if I see any traffic coming in on HTTP, then I will forward it to a target group that has these two web servers registered as targets. We'll also have a health check, and the health check will be: if I can reach the index.html file on the web server, it means the instance is available; if it is not available, then the health check fails and I will not forward traffic to that specific web server. As a verification, we will initiate traffic from our PC or laptop, going over HTTP port 80 to the DNS name of the Application Load Balancer, as you will see in just a moment, and the traffic will be forwarded to web server number one. If we refresh the page it will be load balanced (again, we are talking about a load balancer, right?) and it will reach web server number two, then again web server number one, and web server number two, and that's it. Thank you and see you in the next section. In this section we will create an Application Load Balancer and test load balancing within the AWS cloud, so let's switch over to the AWS Management Console and get started. All right, I am logged into the AWS Management Console. Before we start, let's take a look at what we currently have available in our account. What I mean by this: if we go to VPC now, just as a small and short recap, we have the one VPC that we have used up to now, AWS CCP VPC. In terms of subnets, when we defined this virtual private cloud it was the first option, with one public subnet, and because we said that the two instances will be deployed in two different Availability Zones, it means we need to configure another subnet. So what we will do now is create another subnet; let's call the existing one public subnet 1, for example, and we'll just create another one. Before we do that, let's take a look at the settings of this subnet. If you look at
the route table, you can see that two routes are available: for anything routed inside the VPC, 10.0.0.0/16, the built-in router is used, and for anything leaving the VPC, or coming from the internet, the internet gateway is used. As I was saying, the internet gateway basically performs static NAT for traffic leaving the VPC and for traffic coming from the internet into the VPC. Now I'll create a subnet: let's say the subnet name will be public subnet 2, the VPC will be AWS CCP VPC, and the Availability Zone will not be us-east-1a but us-east-1b. The IPv4 space is 10.0.2.0/24 this time; that's good, and I will just click on Create, then Close, and take a look. We now have two subnets associated with this AWS CCP VPC. If I click on public subnet 2, I currently see that I only have local routing enabled, so I need to do something in order to have the default route here as well. If I go to Route Tables and take a look... actually, let me get back to Subnets: this is public subnet 2's route table, so Edit Route Table Association. This one ends in a2fe; I will select the other one, ending in c43, and now I have the same routing table associated with both subnets in this specific VPC. Just click Save and Close. So I use the same routing table for both of the public subnets, as they are named; just look at the Route Table column, I have the same routing table, great. Now I can start and deploy the two EC2 instances. I'll go to EC2, and you will now also see how to shut down, terminate and delete a specific instance: with the old web server selected, I'll go to Actions (or you can right-click, that's the same), Instance State, Terminate, and I will click Yes, Terminate. Now we will deploy web server 1 in the first Availability Zone and web server 2 in the second one. Click on Launch Instance; we will use the Amazon Linux 2 AMI, so click Select. The instance type is going to be the t2.micro, which is Free Tier eligible, so we will not pay for the usage. I will click on Configure Instance Details, and now we have the possibility to make some choices: I will pick our VPC, and for the first subnet I will use public subnet 1, in the us-east-1a Availability Zone. For Auto-assign Public IP I will select Enable, because I want to be able to connect to this specific instance. The network interface: I'll not do anything here, so let's continue. Add Storage: I'll leave the default here and click on Add Tags. As a tag I can see Name here, and this is going to be, let's say, web server 01. I will click on Configure Security Group and select an existing security group: I will go with the SG AWS CCP VPC security group, and as you can see in the inbound rules, traffic arriving at this specific instance on port 80, so TCP 80 or HTTP, is permitted; from what source? From any source around the globe. I'll now click on Review and Launch. This is the review; now I have to select an existing key pair, and I will use the one I have used up to now, so I'll just acknowledge and click on Launch Instances. View Instances, and here is the one being deployed now: web server 01, t2.micro, us-east-1a, and pending. If I click on it I now have access to the public IP; I will copy it and connect through SSH in my terminal. All right, let's connect through SSH. Let me see if I have it here, and I do; here is the .pem file, the private key. So: ssh -i, then we specify the .pem file, the authentication private key file, then the username, ec2-user, and then the specific IP. I'll connect to that and say yes, and now I am in the EC2 instance. I will say sudo su, which means that now I am the root user in this specific instance, so I can do whatever I want. First I will run yum update -y, which updates this specific instance. Now that I have the EC2 instance updated, I would like to install Apache, so yum install httpd, and I'll add -y in order to have the yes option selected during the installation, and there it is, good. Now if I run service httpd status, it says that it is inactive, so it is not active and we have to start it: service httpd start, and if I check the status again, it now says active and running, in green, perfect. Now, going to /var/www/html, I have nothing here yet, so I will define the index.html file: nano index.html. Let me grab the code for this (the code will be available as a resource for this specific section); I'm just saying that when any user accesses this specific web server, the page displayed says web server number one, or web server 01. Now Ctrl+X, press Y for yes, then Enter, and that's it with the configuration. If I run cat index.html, I can see this specific HTML code. Now let's get back to the AWS Management Console and test our web server. I'll refresh, and everything is green; I can take this IP, just copy it, or I can take the public DNS name, the full name, copy it, open a new tab and test functionality, and here it is: web server number one, or web server 01. What we want to do next is create the second server, so let's do that right now. Again, Launch Instance; we will use the Amazon Linux 2 AMI, so I'll select it. The instance type will again be t2.micro, we don't have to use anything bigger than that. Let's configure the instance details: now we have to
select the VPC, and for this specific one we'll use the other subnet, Public Subnet 2, which is in us-east-1b — a different Availability Zone. We also have to enable the public IP, which is what we want. IAM role — nothing here. Now, under Advanced Details there is something you can use if you want to script your configuration: user data. What you'll see in just a moment will be available as a downloadable resource if you want to test it on your own. Let me paste it here and take a look. The first line informs the EC2 instance that this is a script; then we switch to the root user, update the box, install httpd (Apache), start the service, go into /var/www/html, and echo this text into index.html — and this one says "Web Server 02". This means that when we launch this instance, we will not have to SSH in and perform all of these steps manually. You will not be tested on this in the Certified Cloud Practitioner exam; it's just to show you that the option exists, and it will make your life easier in a production environment. Now, Add Storage — I'll leave the defaults and not modify anything. Add Tags — the Name this time is WebServer02, not 01. Configure Security Group — I will select the existing SG-AWS-CCP-VPC group; again, this one permits HTTP traffic, so it's fine. Review and Launch — we can take a look at everything we have configured, acknowledge that we will use the same key pair, and click on Launch Instances. I'll click on View Instances, and now we have WebServer01 and WebServer02. As you can see, they are in different Availability Zones, and they are also using different IPs and a different public
subnet as well. What we'll do now is take the IP and start refreshing the page, so that we hopefully see Web Server 02 as the result. Nothing yet — I will keep refreshing, and once the status checks pass we should see Web Server 02. Yes, here it is. So now we have two web servers, WebServer01 and WebServer02, deployed in two separate Availability Zones, and we are ready to test our configuration with load balancing. To do that, please navigate in the side menu down to Load Balancing and click on Load Balancers. Now let's deploy our first load balancer. I will click on Create Load Balancer, and here are the options we talked about in the previous, theoretical section: the Application Load Balancer, the Network Load Balancer, and the Classic Load Balancer, which is stated here to be previous generation. For the Cloud Practitioner exam we will use the Application Load Balancer, so click on Create. Let's put a name here — this is going to be an Application Load Balancer. Next we can also define the listeners. A listener is a process that checks for connection requests using the protocol and port that you have configured; we are going to configure a load balancer that listens for HTTP on port 80.
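The same load balancer and listener can also be created from the AWS CLI. The sketch below mirrors the console steps above; the load balancer name, subnet and security group IDs, and ARNs are placeholders, and real credentials would be required:

```
# Sketch: create an Application Load Balancer spanning two public subnets
# (all IDs below are placeholders)
aws elbv2 create-load-balancer \
  --name aws-ccp-alb \
  --type application \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0

# A listener checks for connection requests on the configured
# protocol and port -- here HTTP on port 80 -- and forwards them
# to a target group (ARNs are placeholders)
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/aws-ccp-alb/abc123 \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/TG-webservers/def456
```

Again, none of this CLI knowledge is tested on the Cloud Practitioner exam; it is only to show that everything done in the console can be scripted.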
Now, here are the Availability Zones that the load balancer will route traffic to. If we select the AWS-CCP-VPC we have only two Availability Zones — this is why we also configured a second public subnet for this specific VPC. I will select both, meaning that the load balancer will route traffic to targets in these Availability Zones only. Next, Configure Secure Settings: this is related to HTTPS — inspection, decryption, and then re-encryption. We will not do that; we'll use simple HTTP, so I will click on Next: Configure Security Groups. I can use an existing one, which is what I will do — I will select the second one, SG-AWS-CCP-VPC. Now we have to configure routing, which means defining a target group. Let's call it TG-WebServers. The target type — where traffic is really going to be destined — is Instance, covering one or multiple instances. The protocol is HTTP, and the port is 80.
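The routing configuration just described can likewise be sketched with the CLI. This is illustrative only; the VPC ID, target group ARN, and instance IDs are placeholders:

```
# Sketch: a target group of instances, health-checked over HTTP
# against /index.html (all IDs are placeholders)
aws elbv2 create-target-group \
  --name TG-webservers \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type instance \
  --health-check-protocol HTTP \
  --health-check-path /index.html

# Register both web servers as targets on port 80
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/TG-webservers/def456 \
  --targets Id=i-0aaa1111,Port=80 Id=i-0bbb2222,Port=80
```

The health-check path matters for what follows: if /index.html stops responding on a target, that target is marked unhealthy and stops receiving traffic.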
Now, here are the health checks. It says this is the protocol the load balancer uses when performing health checks on targets in this target group. I want to check HTTP, and the path is /index.html — this specific file exists, so it will be checked, and if it responds, the target will be used to receive traffic. If the target doesn't respond to the index.html verification, it means the instance is not available. If you look at the Advanced Health Check Settings, there are multiple options; I will just lower the interval so the instance becomes available faster for responding to HTTP requests coming from the internet. Next we have to register targets — who is actually going to receive the requests. In this case it will be both WebServer01 and WebServer02, on port 80, so I'll click on Add to Registered. Now Review, and we're almost done — I'll click on Create, see that it's being created, and click on Close. Let's take a look, for example, at Listeners: I have only one, expecting only HTTP port 80 traffic, and this is the rule I was mentioning — the rule says that traffic is going to be forwarded to this specific target group, TG-WebServers. If I click on TG-WebServers, or on Target Groups in the side menu, I can see exactly what the targets are — the instances, in this case, that will receive this traffic. I can see that the target registration is in progress: the availability of these two instances is being checked, and when everything completes successfully you will see the status change to Healthy. Great. Under Health Checks I'm checking for that specific file, and under Tags — no, we don't have any tags for
this target group. So now let's wait for the targets to register, and I'll come back to you when this is done. All right, the status has changed to Healthy. If I take a look here, it says that this target is currently passing the target group's health checks, so we are good — both of them are fine, which means both of them will receive and be able to respond to the clients' requests. Now let's go back to Load Balancing, click on Load Balancers, and take a look at our load balancer. As you can see, its DNS name is an A-type record — remember from the Route 53 section, an A record is a DNS record that says that for this specific DNS name, this endpoint name, there is an IPv4 address it will resolve to. So it doesn't matter if the IP changes; if we have a DNS name available, we can use that. I will copy it with a click and paste it into the browser — this is the Application Load Balancer's fully qualified domain name. If I keep refreshing, as you can see, we get Web Server 01 and Web Server 02, then 01 and 02, and 01, and so on. The last thing I want to test with you is to delete the index.html file, or just rename it, so that the health checks fail — which means we should see only WebServer01 or WebServer02, depending on which index.html file we delete or modify. Here I am back in WebServer01. Let me run ls — this is the index.html. If we run mv index.html index1.html and then ls again, the index file has been renamed, and the new name is index1.html. Now let's get back to the AWS Management Console: in the EC2 dashboard, under Running Instances, I select WebServer01, take the public IP, open a new tab, paste, and go. Well, I can see that
Apache is installed — this is Apache 2.4 — but the index file is no longer there, so we do not see the Web Server 01 page. Good. Now let's get back to the load balancer: in the side menu I'll click on Load Balancers, then Target Groups, then Targets, and as you can see, WebServer01 now has a status of Unhealthy. Why? Because we have health checks defined that check the full path to the index.html file, and if that file isn't there, the target is considered unavailable — maybe the instance is rebooting, or it has been deleted. What we can do now is the following: go back to the Application Load Balancer and keep refreshing, and we will see that only Web Server 02 answers our requests — not because WebServer01 is unavailable, but simply because it doesn't pass the health checks. If I get back to the terminal now and rename the file back to index.html, the health check should pass and I should see traffic served by WebServer01 as well. Let's watch the status — I'll keep refreshing; it will take around 30 seconds, up to one minute, so let me pause the recording. Now both of the web servers are healthy, which means that if I refresh, then yes, I again have both web servers answering the clients' requests — 01, 02, 01, 02, and so on. Thank you and see you in the next section. [Music] In this section we will cover AWS EC2 Auto Scaling basics. First, let's try to answer this question: what is Amazon EC2 Auto Scaling? Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application — which means going up or down, so not necessarily increasing the number of EC2 instances. EC2 instances are grouped,
as you will see, in Auto Scaling groups, with something like a minimum number, a desired number, and a maximum number of EC2 instances. These are different levels we can configure in Auto Scaling groups, and you will understand in just a moment what they are. Scaling policies will automatically launch or terminate instances as your application demands. Now let's go over the minimum, maximum, and desired EC2 capacity — this is probably the key to understanding Auto Scaling groups. You define an Auto Scaling group — as we will actually do in the next section, where we lab these theoretical topics and get some hands-on experience — and in that Auto Scaling group you say, as an example: I would like this web server to be sustained by three EC2 instances, maybe in three Availability Zones, in order to achieve redundancy and also high availability. This is the desired capacity: three EC2 instances. But at one point you may see that the capacity needed could be only one, because there's not so much traffic hitting your web server, and you can also define that — the minimum capacity. You can lower the capacity of the Auto Scaling group manually, or you can define a scaling policy, as you will see. In a scaling policy you can say something like: if the average CPU of my Auto Scaling group reaches some value — 10 percent, 80 percent — then increase or decrease my number of EC2 instances. At the same time, as in the Black Friday example we went through, we can also set a maximum capacity, so if the application demands it, I will approve the increase — scaling out — to a maximum of five EC2 instances, as you can see on your screen now. So again: minimum, desired, and maximum. You start with the desired one — let's say three in this case — and then the group can adapt, scaling in or
scaling out depending on the application. Now let's continue. As you will see in the next section when we lab these technologies, we will start by defining a launch configuration. A launch configuration is an instance configuration template that an Auto Scaling group will use to launch EC2 instances, and you can see that in the launch configuration we include basically everything you would define when launching an EC2 instance yourself: the AMI ID, instance type, key pair, security group, and also the block storage volumes. Now, Auto Scaling groups: an Auto Scaling group contains, as you have seen in the previous slide, a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. Maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionalities of the Amazon EC2 Auto Scaling service. The size of an Auto Scaling group depends on the number of instances you set as the desired capacity: you start with a desired capacity, and then the number changes — manually, toward the minimum or maximum, or automatically, depending on the scaling policy. The Auto Scaling group starts by launching the desired number of EC2 instances — again, in our example, three. You can use a scaling policy to increase or decrease the number of instances in your group dynamically to meet changing conditions; when the scaling policy is in effect, the Auto Scaling group adjusts the desired capacity between the minimum and maximum capacity values and launches or terminates instances as needed. Again, our example is between 1 and 5, and this is exactly what we'll do in the next section. In the previous section we played with WebServer01 and WebServer02 and load
balancing; for the purpose of solving this next lab, we'll take WebServer01 and create our first AMI — an Amazon Machine Image. Then we will create the launch configuration, set three desired EC2 instances, create the Auto Scaling group, and also define dynamic auto scaling capabilities through scaling policies, as you will see in just a moment. Thank you and see you in the next section. [Music] In this section we will create a launch configuration and also an Auto Scaling group, and we will wrap up by playing a little with the scaling policies. Now let's switch over to the AWS Management Console. All right, let's get started by navigating to the EC2 console. We currently have two running instances — WebServer01 and WebServer02 from the previous section, when we addressed the load balancing capabilities. What I will do now is terminate WebServer02: Instance State, Terminate, and yes, I would like to terminate this instance. Now I will use WebServer01 to create an AMI — an Amazon Machine Image — and we will use that specific image when defining the launch configuration. With WebServer01 selected, I go to Actions, then Image, then Create Image. As the image name let's use WebServerAMI, and for the image description, "simple web server". For the volumes I will just keep the one currently attached to WebServer01, so nothing to change here, and I will click on Create Image. "Create image request received", which means it will take some time. If you go to the left side menu and click under Images, then AMIs, you will see that the image is currently being created — the status is Pending. Once this is done we can go ahead and create the launch configuration and also the Auto Scaling group, which we can find by scrolling down in
the side menu to Auto Scaling, Launch Configurations. I'll click on Launch Configurations and start with Create Launch Configuration. Here, instead of going with the default Amazon Linux AMI or any other stock image, I'll go to My AMIs, where the image we created is available — WebServerAMI, simple web server. I'll select it, and yes, I want it to be a t2.micro instance type. I will click on Configure Details. For the name of the launch configuration, let's call it WebServerLaunchConfiguration. No monitoring; in the Advanced Details there's no need to configure anything except the IP address type — I'll choose "Assign a public IP address to every instance" rather than only to instances launched in the default VPC and subnet, because we want to work in our own VPC, AWS-CCP-VPC. For the security group I could create a new one or choose an existing one; let me choose the one we have used up to now, which has SSH, HTTP, and HTTPS enabled — that's fine. I'll now click on Review, where we can check everything we have configured: the launch configuration, the On-Demand purchasing option, EBS optimization, storage — everything we selected is here. I'll click on Create Launch Configuration, choose the existing key pair we already have (that one is fine), and click on Create Launch Configuration. Now we can also go ahead and create an Auto Scaling group with this launch configuration, which is what we want, so I'll click on that option. I have to provide a group name — let's say WebServerAutoScalingGroup. I would like to start with three instances, and I will work in AWS-CCP-VPC, using the only two subnets we have — Public Subnet 1 and Public Subnet 2.
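The way the group's desired capacity stays bounded by its minimum and maximum — the key idea from the previous section — can be illustrated with a tiny helper function. This is purely illustrative and not part of any AWS tooling:

```shell
# Illustrative only: an Auto Scaling group always keeps its desired
# capacity clamped between the configured minimum and maximum.
clamp_capacity() {
  local desired=$1 min=$2 max=$3
  if   [ "$desired" -lt "$min" ]; then echo "$min"
  elif [ "$desired" -gt "$max" ]; then echo "$max"
  else echo "$desired"
  fi
}

clamp_capacity 7 1 5   # a scale-out request for 7 is capped at the maximum, 5
clamp_capacity 0 1 5   # a scale-in request for 0 is floored at the minimum, 1
```

So with min 1, desired 3, and max 5 — the numbers we use in this lab — a scaling policy can move the group anywhere between one and five instances, but never outside those bounds.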
All right — the health check grace period and monitoring are fine. Let's now go to Configure Scaling Policies. We can keep this group at its initial size, or use scaling policies to adjust the capacity of this group. Let's say that, as in our example earlier, I want to scale between one and up to five EC2 instances, depending on the average CPU utilization, with a target value of 80 percent. Because we will not put any load on these EC2 instances, we will see that after the deployment of the three desired instances, the Auto Scaling group will most probably scale in to one or two instances. Next, Configure Notifications: here I can configure an email to receive alarms, or just notifications, when something happens with the Auto Scaling group in AWS. I can also create some tags, so let's set Name to "web server Auto Scaling group setup". Now, Review: I can check everything — minimum is one, desired is three, maximum is five, and I'm using the two subnets available in AWS-CCP-VPC. Click on Create Auto Scaling Group — initiating, creating — perfect, click on Close. Now let's analyze a little of what's happening. In the left menu, under Auto Scaling, then Auto Scaling Groups, we have only one — WebServerAutoScalingGroup. Perfect. We can take a look at Scaling Policies, and at Instances first: as you can see, three EC2 instances are already being deployed, matching the desired number we configured. If I go to Instances I will see WebServer01, which is still here, WebServer02, which we have terminated, and — let me expand a little — another three web servers being deployed, as per our configuration. I can just click on refresh,
and now I have three EC2 instances with the status Healthy. If I take a look at Scaling Policies, I see that the policy type is Target Tracking Scaling: execute the policy as required to maintain an average CPU utilization of 80 percent, and the action that will be taken is to add or remove instances as required. All right — it will take some time, 300 seconds, to detect that there is no traffic and, most probably, scale in and reduce the number of EC2 instances. In the meantime, if I take any instance from the three, copy its public IP, and paste it in the browser, once deployment is done I should see the index.html page of the web server. Let's see — still initializing, none of them yet — so let's just wait a little for the deployment of these three EC2 instances to finish, and also to see what happens with the number of instances once the scaling policy detects that three instances are too much for the current load. All right, now all of the EC2 instances are in the running state and the status checks look fine — two out of two. As you can see, I'm now just refreshing the page of one of the EC2 instances we created, and indeed I am served the index.html content we configured in the previous section with WebServer01. Going back, we should analyze a little of what's happening here: the scaling policy mentioned 300 seconds to warm up after scaling, so after everything is done and all three instances are available, there are 300 seconds of waiting. I will again pause the recording and get back to you when we see that some instances are either initiated or some of the existing ones will be, let's
say, terminated. All right, it took around five to six minutes, and now I can see that two of the instances are in the Terminating state. If I check the instances and refresh, I can see that this web server from the group is terminated, this one is also terminated, and we now have only one web server left in the Auto Scaling group setup. Getting back — here is what we have: only one, and that one terminating, and that's it. If you look at the scaling policy, it says that it will execute the policy as required to maintain the average CPU utilization at 80 percent, and because I'm not doing anything on these machines — the CPU is probably at five or ten percent — only one machine will solve this equation of keeping the CPU under 80 percent. If I look at the Activity History, I can see first the launching of the EC2 instances and then another two actions, terminating those two instances. Thank you and see you in the next section. [Music] In this section we will cover Relational Database Service, or RDS, basics. First, let's start with: what is Amazon RDS? Amazon Relational Database Service, or RDS, is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. Remember from the overview module — I think it was module three — a database is just a location to store and retrieve data. Microsoft Excel is a great example: think of spreadsheets, where information is stored in columns and rows. Relational databases can use information from multiple tables and combine it — that is, create relations between the tables. The example was: in Excel we have three tables — Courses, Students, and Registration — and with the information stored (or at least part of it) in the Courses table and the Students table, we can create a Registration table using some information from both, which represents the relations. Now let's continue with the advantages of AWS RDS. You can
easily allocate or increase resources as you need them, on the fly — and I'm referring to CPUs, memory, and also storage. You can literally forget about backups, operating system patches, and recovery: this is fully managed by AWS, and that's why it's called an AWS managed relational database service. Automated or manual backups, and database restoration, are also managed by AWS. You can also achieve high availability with a primary database and a synchronous secondary database. You can use read replicas to increase read scaling — just to put a note on this, read replicas are Amazon RDS instances that are going to be used only for reading information, not for writing. And of course, you control who can access your database with AWS Identity and Access Management. Now let's talk about Amazon RDS database instances. The database instance is the basic building block of AWS RDS — a database instance is really just a database environment in the cloud. Each database instance runs something called a database engine; it's maybe not the most correct analogy, but you can loosely think of it as an operating system, even though it's not. AWS supports popular engines like MySQL, MariaDB, PostgreSQL, Oracle, and also Microsoft SQL Server. The database engines differ in terms of features, and the database engine controls the databases that it manages. Now let's continue. The database instance class determines the CPU and memory the database will use — you can choose here, as you will see in the next section's lab, a database with one virtual CPU and one gigabyte of RAM, or more than that. When you select the storage for the database, you can choose from magnetic, General Purpose SSD, and, for the best performance, Provisioned IOPS. Each database instance has minimum and maximum storage requirements, and this depends on the
storage type and the database engine it supports. Now, very important for Amazon RDS redundancy and high availability: the Multi-AZ feature. Let's go through an example of a Multi-AZ deployment. In a VPC you have a load balancer that distributes traffic to your web server application, which may be deployed in Availability Zones one and two in order to achieve high availability, redundancy, and also fault tolerance. Usually, if not most of the time, the web server application has some database running in the back, and you can literally replicate the content of your primary database to a secondary database — a secondary Amazon RDS instance — hosted in a different Availability Zone, in order to achieve the same availability, redundancy, and fault tolerance for your whole setup. Now let's talk about Amazon RDS security. RDS security is also implemented through security groups, and you can allow access to the database by specifying IP address ranges or Amazon EC2 instances that will be allowed to access your database. Three types of security groups can be used: a DB security group controls access to a database instance that is not in a VPC; a VPC security group controls access to a database instance inside a VPC; and an Amazon EC2 security group controls access of an EC2 instance to the database. One more thing — how can we interact with RDS? Through the AWS Management Console, the Command Line Interface (CLI), and also the AWS Software Development Kits. In terms of monitoring, you can use the free Amazon CloudWatch service to monitor the performance and health of a database instance; performance charts are shown in the Amazon RDS console. Now, the last part of this section: Amazon RDS pricing. Amazon RDS costs depend on the following. Clock hours of server time — you are going to pay for what you use and nothing more, with no initial commitment. The database instance type also counts. The database purchase type — either On-Demand or
Reserved, paid in advance for a big discount. The number of databases — this one is of course a no-brainer. Backup storage, charged per gigabyte per month. And three more: the number of input and output requests; the deployment type — either Single-AZ, or Multi-AZ for redundancy and high availability, which also means multiple instances; and data transfer — inbound data transfer is free, and for any outbound data transfer you are going to be charged by AWS. Thank you and see you in the next section. [Music] In this section we will deploy an Amazon RDS database instance running MySQL. I have logged into the AWS Management Console, and to get started with the database installation I will go to Services and then down to RDS, under Database services — so click on RDS. Once the console loads we are in the Amazon RDS console, and we'll click on Create Database. We have to choose from different engine options, and we said that we are going to run a MySQL engine, so I'll click on MySQL and then Next. Here I have to select the use case — do you plan to use this database for production purposes? That is the recommended option, but we are just testing, so I'll go for Dev/Test MySQL and click Next to continue. Now I have to specify some database details. I will leave the default license model — general public license — and I'll not modify the database engine version. Scrolling down a little, I can select the database instance class, which comes with different performance and costs; because this is only a test, I will go up — sorry about that — and select the db.t2.micro. That's fine. Multi-AZ deployment — we have gone through an example, but I will not go for Multi-AZ, so I'll leave the default selection of No. This is also where you can select the storage type: General Purpose SSD, or for better performance, the highest one, Provisioned IOPS, also SSD. Anyway, I will leave the General
Purpose SSD with 20 GB — that's fine — and here I have the estimated monthly costs, which come from the database instance and the storage, with an approximate total of around 15 dollars. Now we have to specify some details in the Settings. For the DB instance identifier — this is the AWS CCP database, so let's just call the RDS database that. For the master username — no, it doesn't accept any spaces, so I will just write it without them; let's see if it's good — great, the master username is awsccpdb. For the master password I will use the same value everywhere, so I'll type the password here and confirm it. Good. Now let's click Next to continue with some advanced settings. I'm going to run this database in a VPC, and I will use our AWS-CCP-VPC rather than the default VPC; the subnet group is fine. Is it going to be publicly accessible? I will say yes, because I want to connect at the end of this session to validate the installation. For the Availability Zone, I can say us-east-1. For the VPC security groups I can create a new one or choose an existing one, so I will use our SG-AWS-CCP-VPC. For the database name, I'll just paste awsccpdatabase. The port — I'm connecting on port 3306. The DB parameter group I will not change, nor the option group, and I will not use IAM database authentication. In terms of backups, I will not select any backup retention, because again this is just for testing purposes. I can also select the log types to publish to Amazon CloudWatch Logs, but I will not select anything; maintenance settings are fine, and I'll now click on Create Database to continue. "Your DB instance is being created — it may take several minutes." I will click on View DB Instance Details, and here is our RDS database — this is the name we defined at the beginning
of creating the database if I click on databases I can see here now the status is creating so I will now pause the recording and wait for the database to be deployed by AWS all right so now the database is available as you can see in the status column so in order to connect to the database I will just click on the name and I'm now being provided in the connectivity and security reports so right here on the under endpoint and Port this specific endpoint name so if I take it here all right and the port is 3306 I'll now open an app and this is MySQL workbench so this is something for a Mac OS operating system but there are different other softwares for Windows as well so anyway you will not be tested on this on the exam it is just for testing for this specific section so I'll click on the plus sign in order to create another MySQL connection to this specific RDS instance and in the connect connection name I will just say AWS CCP and DB for database I'm connecting instead of an IP I'm providing the endpoint name the port is 3306 for the username we said that AWS CCP DB is our username in terms of password I will say here the same AWS CCP and DB and I'll now click on OK and I'll just say test connection so let's see if the connection is going to be successful or not so trying trying and the idea is that it will not be it will not be successful and the question is why so why am I not able to connect to my newly deployed RDS database instance and let's wait now for the error so it says failed to connect to mySQL at this specific endpoint name on 3306 the port number so I'll now click on OK and go back to AWS Management console now I'm back in AWS Management console and let's now navigate to services and ec2 and let's examine a little bit from a security perspective what's happening in this setup so I'll go to security groups and I will select the SG Security Group awsccp VPC that we have worked with up to now so clicking on inbound I see that I'm permitting only HTTP 
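To make the troubleshooting logic concrete, here is a tiny sketch of how inbound security-group evaluation behaves; the rule set mirrors this walkthrough, but the helper function is purely illustrative and not part of any AWS SDK:

```python
# Toy model of security-group inbound evaluation: traffic is allowed only
# if some rule matches its protocol and port range (illustrative helper,
# not an AWS API).
def is_allowed(rules, protocol, port):
    return any(
        r["protocol"] == protocol and r["from_port"] <= port <= r["to_port"]
        for r in rules
    )

# Inbound rules on the SG-AWS-CCP-VPC security group before the fix:
rules = [
    {"protocol": "tcp", "from_port": 80,  "to_port": 80},   # HTTP
    {"protocol": "tcp", "from_port": 443, "to_port": 443},  # HTTPS
    {"protocol": "tcp", "from_port": 22,  "to_port": 22},   # SSH
]

print(is_allowed(rules, "tcp", 3306))  # MySQL port -> False, so the client times out
rules.append({"protocol": "tcp", "from_port": 3306, "to_port": 3306})  # the rule we add next
print(is_allowed(rules, "tcp", 3306))  # -> True
```

With no rule covering TCP 3306, MySQL Workbench simply never gets an answer, which is exactly the failure we just saw.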
MySQL traffic is not in that list, so we will need to add another rule to permit traffic to this application. Let's click on Edit and then Add Rule. I could select Custom TCP with port 3306, or simply pick MySQL from the protocol list, which is the same port, 3306 (being careful not to pick Microsoft SQL). I will permit inbound traffic to this instance from Anywhere and click on Save. Now let's get back to MySQL Workbench and try again to connect to the RDS instance. I'll click on the plus sign again: the connection name is AWS-CCP-database, the hostname is the endpoint name, the username is awsccpdb, the password is the same, and after clicking OK I'll say Test Connection again. And right, we have successfully made the MySQL connection, so I'll click on OK, and the connection is now added under MySQL Connections. Clicking on it opens the SQL editor, and from this point on, if you're using MySQL in your daily work, your database is ready to use in just a couple of minutes in the AWS cloud. Thank you, and see you in the next section. In this section we will cover AWS Lambda basics. AWS Lambda is a compute service that lets you run code without provisioning or managing servers. For example, if you have a website that you were going to host in AWS on an EC2 instance, you can instead choose to run it serverless, with no servers in the equation: you take your code and run it with AWS Lambda, and literally that's it. You will not have to manage any EC2 instances, and again, this is called serverless. AWS Lambda executes your code only when needed and scales automatically. You pay only for the compute time you consume; there is no charge when your code is not running, and this is really nice.
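To make the pay-per-use idea concrete, here is a toy cost sketch. The per-request and per-GB-second rates below are assumptions based on the published us-east-1 prices at the time of writing, and the free tier is deliberately ignored to keep the math simple:

```python
# Sketch of the Lambda pay-per-use model: cost = requests + compute time.
# Rates are assumed us-east-1 list prices; the free tier is ignored.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD
PRICE_PER_GB_SECOND = 0.0000166667  # USD

def monthly_cost(requests, avg_duration_s, memory_mb):
    gb_seconds = requests * avg_duration_s * (memory_mb / 1024)
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# 3 million invocations, 120 ms each, at 512 MB of memory:
print(round(monthly_cost(3_000_000, 0.120, 512), 2))  # -> 3.6
```

Notice that zero invocations really does mean zero cost, which is the whole point of the model.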
AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources. Maybe you're wondering what's happening in the back: AWS, on your behalf, takes care of server and operating system maintenance, capacity provisioning, automatic scaling, code monitoring, and also logging. You can use AWS Lambda to run your code in response to events; as an example, you can run your code in response to HTTP requests using Amazon API Gateway. In order to get a better hands-on understanding of Lambda, you can go to the AWS documentation website and search for "Build a Serverless Web Application". This is a really nice project that takes somewhere between one and two hours, and you will get to use Amazon S3, Amazon Cognito, Amazon API Gateway, AWS Lambda, and also DynamoDB. Long story short: you host the static content in an Amazon S3 bucket, which we have also done in a previous section, and that will be your website. When users come to your website, they are able to register with a username and password, and this is done using an Amazon Cognito user pool, so the registration service is Cognito, but you define the user pool. Once users start to register, a Lambda function stores the data it generates in an Amazon DynamoDB table, and you integrate everything with Amazon API Gateway, which provides access to this functionality from the outside world. I really encourage you, if you like the idea of serverless, to try this setup: the step-by-step guide is already there, you don't have to provision too much, and I think it's a nice thing to do, but maybe just after your Certified Cloud Practitioner exam.
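The shape of a Lambda function itself is very simple; here is a minimal Python handler, invoked locally with a fake API Gateway-style event just to show the idea (the event fields and the response format are the usual proxy-integration shape, but this sketch never touches AWS):

```python
import json

# A minimal AWS Lambda handler: Lambda calls this function once per event.
def lambda_handler(event, context):
    # Pull a query-string parameter out of an API Gateway-style event.
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello {name}"})}

# Invoke it locally with a fake event (the context object is unused here):
resp = lambda_handler({"queryStringParameters": {"name": "AWS CCP"}}, None)
print(resp["statusCode"], resp["body"])  # -> 200 {"message": "hello AWS CCP"}
```

That single function is all you deploy; everything else, servers included, is AWS's problem.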
Now, in terms of pricing, with AWS Lambda you pay only for what you use: there is no charge when your code is not running, and you are charged based on the number of requests for your functions and the time it takes your code to execute. Lambda registers a request each time it starts executing in response to an event notification or an invoke call. Thank you, and see you in the next section. In this section we will cover AWS Elastic Beanstalk basics. First, let's answer this question: what is AWS Elastic Beanstalk? With Elastic Beanstalk you can quickly deploy and manage applications in AWS without having to learn about the infrastructure that runs those applications. This is a great starting point for developers who have their app ready and want to run it in AWS, but don't know exactly which services to use in order to run it. You can just start with Elastic Beanstalk, which will automatically provision every AWS service that is needed to run your application. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning and more, like load balancing, scaling, and also application health monitoring. Elastic Beanstalk will provision one or more AWS resources in order to run your application, the most basic example being Amazon EC2 instances: if your app needs an EC2 instance to run on, the service will automatically provision it and put everything that is needed on the instance. Now, in order to use Elastic Beanstalk, you create an app and upload an application version as a package, for example an archive with everything included that defines your application, and then provide some information about the application in the Beanstalk console, as you will see later. Elastic Beanstalk automatically launches an environment and creates and configures the other AWS resources needed to run your code, and that is really awesome, so after
your environment is launched, you can then manage the environment and deploy new application versions as you progress with your web app, or whatever app you're deploying, in order to have incremental updates. Now, in terms of pricing, there is no additional charge for Elastic Beanstalk usage: you pay only for the underlying AWS resources that your application consumes. For example, if you have an app that deploys, say, 10 EC2 instances and a couple of RDS database instances, you do not pay for Elastic Beanstalk itself, which helped you create those resources, but you do pay for the resources that have been created. So if deploying the app with Elastic Beanstalk fires up several EC2 instances, you will only pay for the EC2 usage and not for Elastic Beanstalk. Thank you, and see you in the next section. In this section we will create a sample app using Elastic Beanstalk. All right, we are back in the AWS Management Console; in order to continue, please click on Services, and under Compute you can find Elastic Beanstalk. Please click on it, and you will be presented the landing page of the AWS Elastic Beanstalk service. In order to continue, please click on Get Started, and now we can create our web app. For the application name, let's call this AWS-CCP-app, and now we can choose the platform we need in order to run our application: let's choose, for example, PHP, or Tomcat, why not. Next we can either select the sample application or upload our own code. If we had our application ready to be deployed to the AWS cloud, we would say Upload, and you can see that a zip or war file is expected, so either a zip archive or a Java web archive. In this case we'll just use the sample app, so I'll select that, and in order to continue I'll click on Create
Application. Now everything is going to be deployed automatically by AWS, so it is creating the environment, and we should wait for the deployment to finish and then take a look at the result. All right, the deployment has completed successfully, so now let's examine a little what has happened behind the scenes, literally what has been delivered by the Elastic Beanstalk service. If you take a look at Configuration in the left menu, you'll see exactly which AWS resources have been deployed with this app. The most important and relevant one is under Instances: we can see that a t2.micro EC2 instance has been deployed in order to run this specific app. If, for example, you used Elastic Beanstalk to deploy a WordPress website or blog, then depending on what you want to use it for, you would also see some RDS database instances created under Databases, and so on. The environment type here is a single instance, and this is something you may want to review after using Elastic Beanstalk. On the other hand, if you take a look at the top, at the environment ID and the URL, you can click on that URL and your newly deployed web app will be shown to you. Congratulations, your first AWS Elastic Beanstalk application is now running in your own dedicated environment in the AWS cloud. Coming back to the WordPress example: if you were going to do that, you would either deploy the database as well directly with Elastic Beanstalk, or deploy the front end, so your WordPress website, with Elastic Beanstalk and then deploy an RDS instance that will be your database working in the back of your web app, so your backend, and connect the two in order to have a complete website application. Now, before we wrap up this section, let's take a look at
Services and EC2. We should see at least one EC2 instance here, and here it is, deployed by AWS Elastic Beanstalk. We can take a look at all of the details as with any other EC2 instance, like the instance type, Elastic IPs, availability zone, and so on. Before continuing on to the next section, let's just terminate this instance: Instance State, then Terminate, and yes, I want to terminate this one. Let's also get back to the Elastic Beanstalk console; this is our app, so clicking on it, let's go to Actions and say Terminate Environment, which by itself would also have terminated the EC2 instance. I will type the environment name, because that is what it is asking for, and click on Terminate, and in a couple of minutes everything will be deleted from your account and you'll be ready to continue with the next section. Thank you, and see you in the next section. In this section we will address CloudFormation basics, and we will deploy a WordPress website using CloudFormation. Let's start with: what is AWS CloudFormation? With AWS CloudFormation you create a template that describes all the AWS resources that you want, for example EC2 instances or RDS databases, and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; literally, AWS CloudFormation handles all of that. So now let's switch over to the AWS Management Console and deploy a WordPress website right away. All right, I'm back in the AWS Management Console, and before we start, let's take a look at the AWS CloudFormation documentation. If you visit this page, there is something you may want to look at, namely the sample templates. If I click on the US East (Northern Virginia) region, which is the one we have worked in up to now, and then on Sample Solutions, we have here something
that is called WordPress Basic, for a single instance, and also WordPress Scalable and Durable, which installs and deploys WordPress onto Amazon EC2 instances in an Auto Scaling group, with a Multi-AZ Amazon RDS database instance for storage. This is what we will use when deploying the WordPress website. Alternatively, from the Management Console, go to Services, and in the Management and Governance category click on CloudFormation. What we can do now is Create Stack and select a sample template; from the list we can see WordPress blog, so this is one option, and we continue with Next. Or, from the Sample Solutions documentation web page, we can click Launch Stack; if you are logged into the AWS Management Console, it will land you on this same page and basically launch the template from an S3 bucket that AWS is hosting, and again you can say Next and continue with the installation. So let's get back to the AWS Management Console with the template selected: indeed, I am going to install a WordPress blog, or WordPress website, so I will click on Next. Now, for the stack name, I will use something like AWS-CCP-WP, and in fact I will use awsccpwordpress everywhere for the database name, user, and password. For the instance type I will use the t2.micro, and for the key name I will use the same key pair as I did up to now in the course, then click on Next in order to continue. There are several options here that I can define; for example, the IAM role may be important, and it says that you can choose an IAM role that CloudFormation uses to create, modify, or delete resources in the stack; if you don't choose a role, CloudFormation uses the permissions defined in your account. As I'm working as root, I will not have to define an IAM role here, so I'll just click on Next. I have the chance to review my selections, and it all looks fine, so what I can do now is just click on Create.
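The console wizard is essentially just collecting template parameters. Assembled programmatically they would look roughly like this; the names and values mirror the walkthrough, the key pair name is a placeholder, and nothing here is sent to AWS:

```python
import json

# Parameters for the sample WordPress template, as the console collects them.
# Values mirror the walkthrough; KeyName is a placeholder for your own key pair.
stack_parameters = [
    {"ParameterKey": "DBName",       "ParameterValue": "awsccpwordpress"},
    {"ParameterKey": "DBUser",       "ParameterValue": "awsccpwordpress"},
    {"ParameterKey": "DBPassword",   "ParameterValue": "awsccpwordpress"},
    {"ParameterKey": "InstanceType", "ParameterValue": "t2.micro"},
    {"ParameterKey": "KeyName",      "ParameterValue": "my-key-pair"},
    {"ParameterKey": "SSHLocation",  "ParameterValue": "0.0.0.0/0"},
]

print(json.dumps(stack_parameters, indent=2))
```

This key/value list is the same shape a programmatic stack-creation call would accept, which is why clicking through the wizard and scripting a deployment are interchangeable.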
No, I don't want to save any passwords. While the WordPress installation is being deployed, let's take a look at the current options and the information that is being displayed. The Overview tab is nothing fancy. Under Outputs you will see the WordPress URL that is going to be available, so we can click it and navigate to the website. In terms of Resources, we see exactly what resources CloudFormation deploys for us: as of now it has deployed a security group and is also deploying an instance, which is going to be the web server. If I click on Services and open EC2 as well, I can see under Running Instances that something is being deployed, and this is probably my instance; its ID ends in ddb0, and indeed, taking a look in CloudFormation, this is the one, so it is currently deploying the web server. If you take a look at Events, you have the possibility to see, step by step, what CloudFormation is doing. And now, very nice, there is the Template tab: if you take a look at the template, you will see exactly what it does. There is the AWSTemplateFormatVersion, and it defines several parameters, like KeyName and the instance type, which here is going to be a t2.small; then SSH is going to be allowed from any location; the database name, which is the WordPress database; the database user and database password; and mappings; everything it is going to do, in order, is present here. Under Parameters we see the database name and database password, and we know we have put the same value for everything, plus the key name, SSH location, and so on. It says the stack is complete now, so if I take a look in the Outputs, here is the website URL; I'll open it in a new tab, and I can see here that it says: your server is running
PHP version 5.3.29, but WordPress 5.2.1 requires at least 5.6.0. So what we need to do now, unfortunately, is manually update our PHP version to 5.6; let's do that right now. I will go to EC2, take the public IP, and SSH from my terminal. All right, I'm now in the terminal on my Mac, so let's SSH into the web server: ssh -i, then the key pair, then the user, ec2-user, and of course the IP. I will type yes, and I am now on my EC2 instance. Let me say sudo su, so now I'm the superuser, and let's do the following. First, yum update -y, to update everything on my box; there are some packages to be updated, so let's just wait for everything to finish. Great, the box has been updated, and the latest packages and patches have been applied. Before I update my PHP version, let me verify what it is now: php -V, and I can see that it says PHP 5.3.29; after we are done, I should see PHP 5.6 here. First let me stop the Apache web server: I will say service httpd status, and the current status is running, so I will say service httpd stop. Next I will remove anything currently installed related to the HTTP daemon: yum erase httpd httpd-tools, plus some other options, and I will make these commands available as a downloadable resource in case you want to test the WordPress website installation yourself. I'll run this now and say yes. Let me also remove the current PHP installation, again answering yes, and it is done. Now let me install PHP 5.6, with -y in order to accept everything during the installation, and it is complete: if I say php -V, it now tells me that I am running PHP 5.6.40, which is good, and searching for php56 shows a lot of packages related to PHP 5.6.
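As a quick aside, the check that blocked us earlier is nothing more than a version comparison; a sketch of it (not WordPress's actual code) looks like this:

```python
# WordPress's minimum-PHP check boils down to comparing version tuples.
def version_tuple(v):
    return tuple(int(part) for part in v.split("."))

installed = "5.3.29"  # what `php -V` reported before the upgrade
required  = "5.6.0"   # the minimum for WordPress 5.2.1

print(version_tuple(installed) >= version_tuple(required))  # -> False: must upgrade
print(version_tuple("5.6.40") >= version_tuple(required))   # -> True after the upgrade
```

Comparing tuples element by element is why 5.3.29 fails against 5.6.0 even though 29 is bigger than 0: the second component decides it.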
So I will do the following: yum install php56-*, with a star for everything, and -y to accept, and a lot of packages will be installed now, so let's wait for it to complete. Great. Now let's say php -V again, and yes, it is 5.6, so I will start the Apache web server with service httpd start, and service httpd status confirms it is running. Let's get back to the web server page and do a refresh, and great, here is the WordPress installer, now up and running; that really did not take much, just a couple of minutes. For the site title, let's say AWS CCP WordPress. Now, I forgot which username I configured, so what I can do is go back to CloudFormation and take a look in Parameters, and I have it here: awsccpwordpress. Let me get back and use that as the username, and the same as the password, confirming the use of a weak password since this is only a test; your email should be a real one, and I'll say Install WordPress, and "not now" to the prompts, and here it is: WordPress has been installed, thank you and enjoy, and I'm sure going to enjoy it. Again, username and password, remember me, and Log In, and here it is, the dashboard of WordPress. So why don't we write our first blog post? Clicking on the plus sign, choosing a paragraph block, typing "this is our first post", and I will say Publish. If I go back to the site, AWS CCP WordPress, here is our first post. The installation is not that complicated, as you have seen; you just have to update the PHP version in order to run the WordPress website, and that should be it. In order not to consume any resources, you may want to shut down everything related to the WordPress website installation, and what you could do is delete or terminate the resources one by one, which is an option, or I can go to Cloud
Formation, and because I have this stack there, go to CloudFormation and Stacks, select the awsccpwordpress stack, then Actions, and I can delete it; not from the Overview, but by clicking on it and choosing Delete Stack. It is going to ask: are you sure you want to delete this stack, with this stack name, and yes, I will click Delete. As you can see, the status now changes to "delete in progress", so everything looks good. If I now get back really quickly to EC2 Instances, I already have zero instances running, because the instance is being terminated and shutting down. That should be everything you need to know if you want to just run a WordPress website in AWS. Thank you, and see you in the next section. In this section we will cover Simple Notification Service, or SNS, basics. Let's try to answer this question: what actually is AWS SNS? Amazon Simple Notification Service, or Amazon SNS, is a web service that coordinates and manages the sending and delivery of messages to subscribing endpoints or clients; endpoints here refer to other AWS services, and clients can literally be humans, so users. In Amazon SNS there are two types of clients, publishers and subscribers, also referred to as producers and consumers. Publishers communicate asynchronously with subscribers by producing and sending a message to a topic, which is a logical access point and also a communication channel. Subscribers, which can be web servers, email addresses, or Amazon SQS queues, consume or receive the message or notification over one of the supported protocols, the possible ones being Amazon SQS, HTTP or HTTPS, email, and SMS, when they are subscribed to the topic. What we will do next is configure an S3 bucket to send a notification through SNS when any new object upload takes place within my S3 bucket. This was a very short section; let's now get to the AWS Management Console and get started with the SNS configuration. Thank you, and
see you in the next section. In this section we will configure SNS, the Simple Notification Service, to send a notification for any new S3 object uploaded to our S3 bucket. All right, back in the AWS Management Console; in order to get started, please go to Services, scroll down to Application Integration, and click on Simple Notification Service. Here is the Amazon SNS console, and the first thing we need to do is create a topic, so I'll click on the next step. I will name this topic, for example, s3-upload-new-object, so upload of a new object; the display name is optional, and I will just click on Create Topic. We also have to go to Subscriptions, so on the left side menu click on Subscriptions. I currently have no subscription, so I'll say Create Subscription; the topic I am subscribing to is s3-upload-new-object, and the protocol is email. Now please type your email address here and then click on Create Subscription. The subscription is created, and as you can see, the status is "pending confirmation", so you need to go to your email client and confirm the subscription; this is how the AWS notification email looks, and you just have to click the Confirm Subscription link, and that's it. Now I am back in the Amazon SNS console, and we need to do one more thing: I will click on the S3 topic, go to Access Policy, and click on Edit. I will just edit this policy: I'll delete the existing one and copy and paste a new one, which will be available as a downloadable resource in this specific section. What you need to do is replace the bucket ARN and the SNS topic ARN. Let's go to Services and then S3, and we will use our first bucket, aws-ccp-v1: if I select it, I can copy the bucket ARN, get back to the policy, and replace the bucket name there, so the bucket ARN in the policy is complete now, and I also need to replace the SNS topic ARN.
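By the way, the policy we are pasting boils down to a single statement that lets S3 publish to the topic. Built in Python it would look like this; both ARNs are placeholders for your own bucket and topic, and the statement mirrors the standard S3-to-SNS publish permission:

```python
import json

# The statement S3 needs in the SNS topic access policy so the bucket may
# publish notifications. Both ARNs are placeholders for your own resources.
topic_arn  = "arn:aws:sns:us-east-1:123456789012:s3-upload-new-object"
bucket_arn = "arn:aws:s3:::aws-ccp-v1"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "SNS:Publish",
        "Resource": topic_arn,
        # Only events coming from our bucket may publish to the topic:
        "Condition": {"ArnLike": {"aws:SourceArn": bucket_arn}},
    }],
}

print(json.dumps(policy, indent=2))
```

The SourceArn condition is the important part: without it, any S3 bucket could publish to your topic.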
To do that, let me go again to Services and Simple Notification Service, open the topic, copy the complete ARN shown there, get back to the policy, and replace the topic ARN as well. What I also need to do now, of course, is Save Changes. As the last thing, we will go into the bucket, so Services and S3, and enable notifications: for this specific bucket I will go to Properties, then to Events, and say Add a Notification. I would like to name this event, let's say, "new S3 object upload", and the event type is a PUT; I will not set anything for prefix or suffix, and I will send the notification to an SNS topic, where the topic is s3-upload-new-object. I'll click on Save, and I can see that I now have one active notification. What I will do now is go to the bucket and upload a new object, and we will see what happens with the notification, whether I receive an email or not. I'll click on Upload, then Add Files, go to the awsccp folder, choose this text file, "new object upload to S3", and click on Upload. The upload is successful, so I should now receive an email notification announcing that a new object has been uploaded to my S3 bucket. After you confirm the subscription, you receive an email similar to this one, announcing that you're now subscribed to that specific topic, and for any new upload you receive a notification from Amazon S3 with specific information about that upload; for example, you will see in the email the bucket name and also the name of the object, and so on. Thank you, and see you in the next section. In this section we will cover AWS CloudWatch basics. Let's start by answering this question: what is AWS CloudWatch? With Amazon
CloudWatch, you can monitor your Amazon Web Services (AWS) resources and the applications you run in AWS in real time. You can create alarms, which watch metrics and send notifications, or automatically make changes to the resources you are monitoring when a specific threshold is breached. As an example, you can watch the CPU usage of your EC2 instances and use that in Auto Scaling groups in order to scale up or down the number of EC2 instances. You can also use this data to stop underused instances to save money, with Auto Scaling policies; remember the minimum, desired, and maximum EC2 instances example we have gone through. AWS CloudWatch can be accessed and used with the AWS CloudWatch console, the CLI, the CloudWatch API, and also the AWS software development kits. So where can you use this CloudWatch service, and is it useful or not? Let's decide together. First, Amazon EC2 Auto Scaling: you can use AWS CloudWatch with Amazon EC2 Auto Scaling in order to automatically launch or terminate EC2 instances based on user-defined policies. You can also use CloudWatch along with the CloudTrail service, which writes log files to the S3 bucket specified when you configured CloudTrail. And you can use CloudWatch with Amazon SNS, the Simple Notification Service, to send messages when an alarm threshold has been reached. Thank you, and see you in the next section. This concludes module 5, AWS key services that you need to know. Congrats on your progress through the course; you have learned quite a lot in this module. Before sitting the AWS Certified Cloud Practitioner exam, please make sure you are comfortable with the key services covered in this module. Let's now go over the most important topics covered in this module and the exam hints. We will start with Amazon Route 53.
DNS stands for Domain Name System and acts as the phone book of the internet: DNS helps you resolve names to IP addresses. Amazon Route 53 is a global, highly available, and scalable Domain Name System web service. First of all, it is global: it does not relate to any particular region you are going to work in. You can use Route 53 to resolve domains, which is its basic function, and also to register new domains, like we did in the lab for awstrainingbootcamp.com. Now let's move on to Amazon CloudFront. Amazon CloudFront is a web service that speeds up the distribution of your static and dynamic web content to your users; CloudFront delivers your content through a worldwide network of data centers called edge locations. CloudFront regional edge caches really help when the content is not popular enough to stay at a CloudFront edge location, and they improve delivery performance for that content. Also related to CloudFront, the origin is where CloudFront gets the files from, and that could be an Amazon S3 bucket or just another website or web server. When you want to use CloudFront to distribute your content, you create a distribution, for lower latency and an improved user experience, like we did in our lab in order to have a better latency of around 30 milliseconds. We have also talked about the Application Load Balancer. With AWS Elastic Load Balancing you can achieve fault tolerance for any application by ensuring scalability, performance, and security. Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, for example EC2 instances. There are three flavors of load balancers: the Network Load Balancer, the Classic Load Balancer, and the most advanced one, working up to layer 7 of the OSI model, the Application Load Balancer, which is what we used in the labs. Now let's move on to Auto Scaling. Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load of your application.
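The scale-out and scale-in behavior can be sketched as a clamped adjustment; the CPU thresholds and group sizes below are made up purely for illustration:

```python
# Toy sketch of a scaling decision: add or remove one instance based on
# average CPU, clamped between the group's minimum and maximum sizes.
# Thresholds and sizes are illustrative, not AWS defaults.
def next_desired(cpu_percent, desired, minimum=1, maximum=4, high=70, low=30):
    if cpu_percent > high:
        desired += 1  # scale out under heavy load
    elif cpu_percent < low:
        desired -= 1  # scale in when underused
    return max(minimum, min(maximum, desired))

print(next_desired(85, desired=2))  # -> 3 (high CPU, scale out)
print(next_desired(10, desired=1))  # -> 1 (already at the minimum)
```

The clamp at the end is exactly the role the minimum and maximum group sizes play: the policy can never scale past them, no matter what the metric says.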
This means scaling up or down, not necessarily only up. The instances are grouped in Auto Scaling groups, and we have defined and talked about the minimum number of EC2 instances, the desired number, and also the maximum number of EC2 instances in an Auto Scaling group. Scaling policies will automatically launch or terminate instances as your application demands. We have also talked about the Relational Database Service, or RDS, in AWS. Amazon Relational Database Service is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. Amazon RDS is a fully managed relational database service, so AWS takes care of all of the hard work for you. A database instance is simply a database environment in the cloud that runs a database engine. Databases come in different sizes, which we discussed as the database instance class, with multiple storage options like HDD or SSD, and provisioned IOPS as well. Our next service is AWS Lambda, very fast: AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically. You pay only for the compute time you consume; there is no charge when your code is not running. You can use AWS Lambda to run your code in response to events, as an example running your code in response to HTTP requests using Amazon API Gateway. Elastic Beanstalk is next on our list. With Elastic Beanstalk you can quickly deploy and manage applications in AWS without having to learn about the infrastructure that runs those applications. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Elastic Beanstalk will provision one or more AWS resources for your app to run smoothly, for example Amazon EC2 instances and also
databases and so on now CloudFormation with AWS CloudFormation you create a template that describes all the AWS resources that you want so for example EC2 instances and again databases and AWS CloudFormation takes care of provisioning and configuring those resources for you you don't need to individually create and configure AWS resources and figure out what's depending on what AWS CloudFormation handles all of that for you now Simple Notification Service so SNS Amazon SNS is a web service that coordinates and manages the sending or delivery of messages to subscribing endpoints or clients publishers communicate asynchronously with subscribers by producing and sending a message to a topic which is a logical access point and also a communication channel subscribers consume or receive the messages or notifications over one of the supported protocols so for example an email we have seen that in our example in our lab when they are subscribed to that specific topic now CloudWatch with Amazon CloudWatch you can monitor your Amazon Web Services resources and the applications you run on AWS in real time you can create alarms which watch metrics and send notifications or automatically make changes to the resources you are monitoring when a threshold is breached and as an example you can watch the CPU usage of your EC2 instances and use that in Auto Scaling groups with that said please join me in our next module module 6 billing pricing and AWS support levels thank you and see you in the next module welcome to module 6 billing pricing and AWS support levels this module complements the information covered in the previous module relevant to the billing and pricing in AWS and for the certified Cloud practitioner exam in the previous module we covered pricing and billing information for some of the AWS services as it is presented in the AWS pricing white paper we will start this module by wrapping up the discussion around billing and pricing and we will cover AWS fundamentals of pricing and
continue with cost optimization through reservations by the end of this module I will also introduce you to two other interesting services or tools from AWS and these are the AWS cost calculator and the AWS Trusted Advisor we will wrap up module 6 after covering AWS support plans and a fast recap on all topics covered in this module and exam hints relevant for the AWS certified Cloud practitioner exam with that said let's get started in this section we will cover the fundamentals of pricing in AWS AWS provides agility helps you reduce your IT costs and reach global coverage in minutes with AWS you can optimize your costs continuously in order to match your needs and environment and let's now think about ROI or return on investment AWS offers pay as you go on-demand pricing with the best ROI for each specific use case now in regards to the key principles for AWS pricing these are as follows understand the fundamentals of pricing start early with cost optimization maximize the power of flexibility and use the right pricing model for the job let's now go over each of the above so we will start with understanding the fundamentals of pricing for the vast majority of AWS services the following impact the cost with AWS compute storage and also outbound data transfer there is no charge for inbound data transfer I have talked about it many times in the previous sections and modules data transfer outbound is charged outbound traffic is aggregated monthly on your bill you will see it as AWS Data Transfer Out and you will be charged per gigabyte storage is paid on a per gigabyte basis and compute is paid by the minute or by the hour now let's talk about start early with cost optimization it's really never too early to start with cost optimization you should start managing your costs from the beginning of your implementation start date the complexity grows as you move forward and scale your project so if at the beginning you have
like a couple of EC2 instances and maybe some databases there I don't know let's say also an elastic load balancer and so on as you progress with your project if you don't think about or you don't have in mind the cost optimization from the beginning it will be very very hard to keep up with the progress of the project it's easier and recommended to put cost visibility and control mechanisms in place before the environment becomes large and complex now maximizing the power of flexibility with AWS you pay for exactly what you need with no minimum commitments or long-term contracts you can choose to save money through a reservation model for example using a pay as you go model procurement complexity is reduced which enables your business to be fully elastic like the cloud is you don't pay for services that are not running and this refers to cost efficiency so I've seen multiple environments with my clients that have implemented a schedule for the EC2 instances and when that specific service is not supposed to be offered to their end clients then the EC2 instances are automatically powered off and basically they are on for about nine to ten hours per day and not 24 right so that means cost efficiency within your business now using the right pricing model for the job with AWS you can choose the pricing model that best fits your business needs as well different pricing models are available for EC2 instances of Elastic Compute Cloud let's start with the first one on demand pay and use EC2 with no upfront payment or long-term contract next dedicated instances AWS hardware is dedicated to you and only you you will not share the host so the physical machine with any other customer next spot instances purchase spare computing capacity at discounted hourly rates and finally reservations pay for compute capacity ahead of time and receive a discount of up to 75 percent for EC2 RDS DynamoDB and many many other AWS services thank you and see you in the next section
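The reservation savings described above can be sketched with a small calculation. This is a minimal illustration with made-up example prices, not real AWS rates; the point is only how an upfront payment blends into an effective hourly rate.

```python
# Illustrative comparison of EC2 on-demand vs. all-upfront reserved pricing.
# The dollar amounts below are assumed example numbers, NOT real AWS prices.

HOURS_PER_YEAR = 24 * 365

def effective_hourly(upfront: float, hourly: float, term_years: int) -> float:
    """Blend an upfront payment into an effective hourly rate over the term."""
    total_cost = upfront + hourly * HOURS_PER_YEAR * term_years
    return total_cost / (HOURS_PER_YEAR * term_years)

on_demand_hourly = 0.10  # assumed on-demand rate, $/hour

# Hypothetical 3-year all-upfront reservation: pay once, nothing hourly.
ri_3yr = effective_hourly(upfront=1000.0, hourly=0.0, term_years=3)

savings_pct = (1 - ri_3yr / on_demand_hourly) * 100
print(f"effective RI rate: ${ri_3yr:.4f}/h, saving {savings_pct:.0f}% vs on-demand")
```

With these example numbers the all-upfront reservation works out to roughly a 62 percent saving over on-demand, which is in the same ballpark as the discount figures discussed in this module.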
in this section we will briefly cover cost optimization through reservations companies can achieve significant cost savings by using reserved instances and other reservation models for compute and data services with reserved instances you commit in advance for usage which in return means a lower price to you with reservations you can choose to pay with no upfront partial upfront or all upfront which means you pay everything in advance the larger the upfront payment the bigger the discount EC2 reserved instances allow you to reserve capacity and receive a big discount on your instance usage compared to running in an on-demand pricing model which means you log into your AWS account and just start an EC2 instance that is on demand with EC2 reserved instances you can predict compute costs over the contract term when you want to use the capacity you reserved you launch an EC2 instance with the same configuration as the reserved capacity that you purchased and AWS will automatically apply the discounted price now let's have an example with EC2 reserved instances the first at the top is the standard one-year term and as you can see at the bottom is the standard three-year term now you can see that the price is literally going down as you pay more in advance so no upfront and again partial upfront or all upfront and for the three-year term as you can see when you pay everything in advance all upfront the discount goes to 62 percent which is a lot now the difference between no upfront and all upfront is around six percent but when you have a large fleet of EC2 instances this seemingly small value of six percent can mean a lot of money so again this is one way that you can save money within AWS go for reserved instances and pay in advance now the Amazon pricing wrap up so very very important this is what Amazon says about their pricing philosophy you pay as you go pay for what you use pay less as
you use more and pay even less when you reserve capacity in order to estimate your monthly bill you can use the AWS simple monthly calculator which will be covered in the next section thank you and see you in the next section in this section we will cover AWS cost calculators so there are two options available the AWS cost calculator and you can access that at the following URL so calculator.aws and the AWS simple monthly calculator that I have mentioned in the previous section so now let's switch over to the AWS website in order to check these two options alright so the first option is the AWS pricing calculator and here is the landing page what you can do is click on create estimate and then give it a name so let's say my estimate for the region we have worked up to now through the course in Northern Virginia great and I can click on create estimate then I can just go on and add different services that I currently have in my infrastructure or maybe I intend to add in my overall setup so clicking on add service will land you on this page it says here select the service and a small info browse the list of AWS services that AWS pricing calculator provides estimates for or search services by keyword anyway this is not that extensive as you can see all right we can use the Elastic Load Balancing Amazon EC2 the Storage Gateway and so on but it will provide you some information I do not really recommend this one I say that it would be best to estimate your monthly usage using the simple monthly calculator as opposed to the previous one as you can see here on the left side menu this is an extensive list of different services that you can use in AWS so my proposal is that let's do a simulation now maybe we decide to deploy a WordPress website and we will work in US East Northern Virginia region so for the compute Amazon EC2 instances I will say that I will go for the on demand so I'll click on this one I want one instance and the type is in
this case let me see if I have the t2.micro let's see going down and here it is so I will go for t2.micro close and save so I will use the smallest let's say options that are available both for the EC2 instance and also for the RDS database in terms of the Amazon EBS so storage for the root I will say the storage is let's say 30 gigs IOPS all right snapshots let's say I will have let's say 15 gigs per month of storage going down elastic IP I will have one elastic IP so for my website data transfer let's say well data transfer in like 10 gigs inter region not that much so let's say five gigs data transfer out remember that this is being charged so 10 gigs out inbound costs nothing and that should be it in terms of Amazon Route 53 let's have let's say one hosted zone so we will buy the domain and we'll have it hosted through Route 53 that would be all for the hosted zones let's go down resolver no Amazon CloudFront we will enable Amazon CloudFront so data transfer out let's say again 10 gigs out of origin let's say again 10 gigs Edge location traffic distribution I will leave for United States 50 and Europe 50 just to have an idea of what's happening with this setup in terms of SSL certificates so yes we want the website to load through an SSL so HTTPS connection so I will type here one let's now go over to Amazon RDS I will choose Amazon RDS on demand this is a MySQL right and I will go for the t3.micro just as small as the EC2 instance so this is my database in terms of data transfer out not that much let's say again five gigs and continue with Amazon Elastic Load Balancing so I will use one classic load balancer or maybe not a classic load balancer I want to make an intelligent decision so I will use an application load balancer average connection I don't know what to say here it doesn't matter network load balancing anyway one application load balancer should be fine and CloudWatch Simple Notification Service transcoders as you can see the list
is pretty extensive as opposed to the previous one Elastic MapReduce Snowball Direct Connect Amazon VPC let's say data transfer out the same 10 gigs EFS SimpleDB anyway that should be everything that we need to know about our setup so here we have our estimate for our monthly bill around 600 US dollars but anyway the most is here with CloudFront as you can see this is the most that you will pay for if you don't enable the CloudFront service then obviously you're gonna pay around 20 US dollars so expanding the CloudFront service we can see here that the custom SSL certificates cost the most so if I say here no then I will be left with the transfer out only 86 cents and out of the origin 20 cents so depending on what you need for your setup the custom SSL certificates in this case are the most that you can pay for anyway so this is something that I advise you to use in your production for calculating your monthly costs it is roughly around what you will pay but after the first month after the second month and so on you'll get a better understanding and you will be better and better at estimating your costs within AWS thank you and see you in the next section in this section we will briefly cover AWS Trusted Advisor AWS Trusted Advisor is the AWS service that provides you real-time guidance to help you provision your resources following AWS best practices so there are five directions categories or pillars within the Trusted Advisor and these are cost optimization performance security fault tolerance and also service limits if you want to take a look at this service the Trusted Advisor please follow the link aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist so now let's switch over to the AWS Management Console and take a look at this specific service alright I'm now in the AWS Management Console and I would like to navigate to the Trusted Advisor service so in order to do that click on
services and type here in the search bar Trusted Advisor and click on it now once you do that you will be provided the landing page so the console of the Trusted Advisor and again there are five pillars cost optimization performance security fault tolerance and the service limits I have some problems here in security so I will click on this one and I see a red flag for security groups specific ports unrestricted I can go ahead and expand it and take a look exactly at what it says so in terms of alerts I see here that the green is fine so access to ports 80 25 443 or 465 is unrestricted and a red flag as I have for the security group AWS CCP VPC right it means that I have provided access to FTP and other services like 3389 so RDP and so on so clicking on this specific security group AWS CCP VPC I can see exactly what I have defined here so clicking on inbound will provide me a list of what I have defined here so I will say edit and let me just remove some of them or do something else so let's say that I will provide access from this specific IP address 1.2.3.4 and I am referring to I'll just kick the IPv6 out and let's say that I'm providing access from a specific IP address like I'm doing now so I'll click on save and then go back to the Trusted Advisor Management Console and I have this refresh option here so I will just click on refresh and let's see if anything changes now within the security group options all right so it says that this specific option so the security one has been refreshed checks have been refreshed so let's see now if we have the same red flag here no so no red flags now I have the Amazon S3 bucket permissions here so let me just expand this one as well so I can see the bucket name awstrainingbootcamp.com that we created when we also played with this domain name so we have defined a static website so yes of course this is open and I can do some restrictions here as well in terms of
security groups let me just expand this as well I have only one so the auto scaling security group number one so clicking on this one let's see exactly what is our configuration so yes for the auto scaling group I should not permit administration through SSH the secure shell to anybody so what I can do here is to configure not Anywhere but My IP and if I click My IP then my real public IP will be populated here and because this is an auto scaling group maybe I would like to add HTTP access from anywhere and maybe let's see where it is HTTPS so maybe this is a website so just add different services that I want to make public to the world so if I do that then as you can see here in the Trusted Advisor what I need to do is also refresh again and the changes will be populated and I can see if I'm now compliant if I'm all green or if I am recommended to do any changes within my AWS account so something similar if you want to optimize your cost you can go here but it says upgrade your support plan to unlock all Trusted Advisor recommendations so this is not available for the free tier account we should upgrade our account in order to have access to the cost optimization as well we'll not do this now this is just so that you get an understanding on what the Trusted Advisor service can do for you and how it can help the same is for performance checks let's see fault tolerance the same and also the service limits this is something that we can use so let's take a look here auto scaling groups it checks for usage that is more than 80 percent of the auto scaling group limits values are based on a snapshot so your current usage might differ so I can take a look here what is my limit what is my current usage we have played just a bit with the auto scaling so I can see here that in this specific region us-east-1 I am within the limit account so I hope that you get an understanding now with the Trusted Advisor this is something you
may want to use again in your daily job in your let's say production networks in AWS thank you and see you in the next section in this section we will go over and compare the different AWS support plans so there are two types of support plans within AWS the first one is basic basic support is included for all AWS customers and includes the following customer service and this one is 24 7 access so at any moment during the day AWS Trusted Advisor but this comprises only seven core Trusted Advisor checks and guidance for following the best practices and also the AWS Personal Health Dashboard so this is a personalized view of the health of AWS services that alerts you when your resources are impacted now the second option is the premium support plans three options are available developer business and Enterprise support plans differ in terms of how many let's call them add-on services you get from AWS how much the AWS team gets involved in your projects and of course they differ in terms of pricing let's now switch to the AWS website and have a comparison between the premium support plans developer business and Enterprise alright so following the URL that you have seen earlier on the slide will land you on this specific web page compare AWS support plans so I was saying that there are three developer business and Enterprise so let's start the comparison AWS Trusted Advisor best practice checks so for the developer there are only seven core checks and let's see exactly what this means so what are currently the checks in AWS Trusted Advisor best practice checks so again cost optimization and as you can see there are quite a lot in terms of security some of them are here and there are even more fault tolerance so this one is here as well and the last one is the performance so if you go for a more advanced let's say support plan like the business one you will have this one activated as well so let's go back and continue enhanced
technical support so in case you have any kind of problems within your account you can just open a ticket to the AWS let's say support team so unlimited cases one primary contact for developer in terms of the business and Enterprise premium support plans you have 24 7 phone email and chat access to support engineers the same here and also unlimited cases and unlimited contacts let's go down and talk about case severity and also very important response times so for the developer you should expect less than 24 business hours in order to get an answer for the general guidance and if you have any kind of problems less than 12 business hours response time for business and Enterprise of course you're gonna pay more and this is let's say a faster response that you get from AWS so if you have a production system down then you should expect a response in less than an hour or for the Enterprise support plan you can expect less than 15 minutes when you encounter any business critical system down issue in terms of architectural guidance for the Enterprise you can get consultative review and guidance based on your applications and for the business contextual guidance based on your use cases now let's go down very very important also in regards to the questions that you may get in your exam this is the technical account management the option is available only for the Enterprise option so you really have a dedicated technical account manager in order to proactively monitor your environment and assist you with optimization so this option is available only for the Enterprise support plan yes you get some access to training and also to support and now of course is pricing so for the developer support plan you start at 29 US dollars per month for the business one you start at 100 US dollars per month and for the Enterprise you start at 15,000 US dollars per month and it can go even higher thank you and see you in the next section this
concludes module 6 billing pricing and AWS support levels before sitting the AWS certified Cloud practitioner exam please make sure you are comfortable with the AWS billing and pricing concepts you can definitely expect questions in the exam relating to billing and pricing also now let's go over the most important topics covered in this module and the exam hints so we started with fundamentals of pricing Amazon AWS offers pay as you go on-demand pricing with the best return on investment for each specific use case AWS key pricing principles are understand the fundamentals of pricing start early with cost optimization maximize the power of flexibility and use the right pricing model for the job we continued with cost optimization through reservations with reserved instances you commit in advance for usage which in return means a lower price to you you can choose to pay with no upfront partial upfront or all upfront the larger the upfront payment the bigger the discount next we have introduced two more AWS services with the AWS cost calculators you can easily estimate your cost on a monthly basis and AWS Trusted Advisor provides you real-time guidance to help you provision your resources following AWS best practices and this refers again to cost optimization performance security fault tolerance and also service limits now let's talk about AWS support plans there are two types of support plans within AWS basic this comes with all AWS accounts and premium which also means an extra paid service of the premium support levels there are three developer starts at 29 US dollars per month and you have an SLA or service level agreement which is less than 12 hours now business this starts at 100 US dollars per month and an SLA under one hour the most expensive and complex is the Enterprise premium support plan it starts at 15,000 US dollars per month the SLA is under 15 minutes you get the full AWS team support and also a dedicated technical account manager or TAM with that said please
join me in our next module module 7 Security in AWS thank you and see you in the next module welcome to module 7 Security in Amazon Web Services this module provides a brief introduction to AWS security we will start this module by covering general guidelines related to AWS security and also a massively important topic within AWS the shared responsibility model please make sure you understand the AWS shared responsibility model before taking the cloud practitioner exam by the end of this module I will also introduce you to several other security related services within AWS relevant both in real world scenarios and of course for the cloud practitioner exam we will cover AWS WAF or web application firewall Shield and Firewall Manager and we will also cover AWS Inspector we will wrap up module 7 after going through a fast recap on all topics covered in this module and exam hints relevant for the AWS certified Cloud practitioner exam with that said let's get started in this section I will provide you an introduction to AWS security so let's start now AWS delivers a scalable cloud computing platform designed for high availability and dependability security is AWS top priority AWS helps you to protect the confidentiality integrity and availability of your systems and data AWS architecture has been built following two key principles flexibility and security providing an extremely scalable and flexible cloud platform AWS uses redundant and multi-layered controls continuous validation and testing with built-in automation that helps monitoring and keeping customers safe and secure the same level of automation and security is contained and replicated in any AWS data center and this equals an availability zone if you remember from the start of the course so again one data center equals one availability zone with AWS you get a resilient fault tolerant architecture designed for security able to satisfy the requirements of even the most security sensitive customers now
let's start our discussion about the shared responsibility model security and compliance is a shared responsibility between AWS and the customer the customer assumes responsibility and management of the guest operating system including updates and also security patches as well as the configuration of the AWS provided security group firewall while AWS takes care of the cloud and this brings us to something that is known as the following security of the cloud which is AWS's task and also security in the cloud which is the customer's task or job let's talk now about the security of the cloud AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS cloud this infrastructure is composed of the hardware software networking and facilities or data centers that run AWS cloud services on the other hand customer responsibility for security in the cloud now customer responsibility will be determined by the AWS cloud services that a customer selects and this means if the customer for example chooses EC2 it has to do quite a lot in order to secure this service but on the other hand if it chooses a managed service this means that the AWS service is managed by AWS well it doesn't have to do too much in order to secure that that's why it's called a managed service it will be managed by AWS so this determines the amount of configuration work the customer must perform as part of their security responsibilities if we now take a look at the shared responsibility model let's discuss so starting from the bottom up we will start with AWS so responsibility for security of the cloud so there are two things that AWS takes care of first is the hardware or the AWS global infrastructure and remember we have talked about this quite a lot so regions availability zones and also Edge locations the second thing is the software software for compute storage database and networking it could be the operating system so the
hypervisor of all of the hardware that the cloud is running on or it could be the software in case we are talking about or we are referring to any managed service now on the other side the customer so responsibility for security in the cloud has to do quite a lot and it is not always very very easy for the customer to understand this so again starting from the bottom up the client-side data encryption and data integrity authentication so this is something that the client has to deal with server-side encryption so the client has to take care of the encryption on the server and also networking traffic protection so encryption integrity and identity and you can think of this last item just like a VPN so let's say that the customer defines a VPN connection from the cloud going to the on-premises data center well this means that it has to be configured correctly in order to provide encryption integrity and identity and AWS cannot do that so the thing is that the customer needs to take care of this in order for it to be configured correctly and securely next is the operating system network and firewall configuration so the operating system let's consider that the end customer just starts an EC2 instance and configures like we did a WordPress website or I don't know let's say just a website right so it has to make sure that that specific server will be patched all of the latest security releases will be applied there and again continuing on with the firewall configuration so this refers to security groups of course it is the customer's job to configure correctly the security groups AWS really cannot connect to your environment and I don't know let's say close an SSH connection that is now open to the world right and also close other specific management ports it is also the customer's job to take care of the platform applications IAM identity and access management and the customer data as well I really advise you to take a look at the
following URL so aws.amazon.com/compliance/shared-responsibility-model it is just a short read and I advise you to do that before attending the real Cloud practitioner exam all right so I have switched to the AWS website and here is the shared responsibility model it starts with an overview and then also talks about AWS responsibility for security of the cloud and then moves on to customer responsibility for security in the cloud and here is the diagram that you have seen also earlier so it is really not that big it will probably take a couple of minutes and I really advise you to read this before sitting the real exam now let's continue with security products and features AWS offers a lot of tools and features that can help you meet your security objectives and we have mentioned up to now probably at least half of what I'm going to talk about in the next couple of minutes AWS provides security specific tools and features across network security configuration management data encryption access control and also monitoring and logging so let's start with AWS network security AWS provides security capabilities and services that can help you secure and protect your data with built-in firewalls and these are the security groups also encryption in transit using TLS VPNs for dedicated private connections for example to your on-premises data center and also DDoS mitigation technologies and we will cover DDoS in just a couple of minutes in a next section now let's continue with inventory and configuration management AWS offers several tools that you can make use of so for example deployment tools for creation and decommissioning of AWS services and resources inventory tools so you can see a lot of things related to your usage of different services in dashboards and also template definition in order to create custom EC2 instances with specific config that you can replicate so it takes some time in the beginning to set up your EC2 instance you apply patches
you do whatever you need and when it's done you can just create as you have seen an AMI so an Amazon machine image if it's a Linux machine or any other type of machine so defining the template with everything that you need in your setup and just then reuse that so this is possible within AWS continuing with data encryption AWS offers the possibility to define encryption at rest for your data and again encryption at rest means encryption of data that is not traveling so that sits there for example in S3 or maybe on a database or on an EBS volume so data encryption capabilities are available for AWS storage and database services flexible key management so AWS or you can manage the encryption keys and there are also hardware-based cryptographic key storage options this is for the most sensitive customers now access control AWS gives you full control over access to the AWS services that you are using IAM so identity and access management to define individual user accounts with custom permissions multi-factor authentication and also integration and federation with corporate directories so all these are options available in Amazon Web Services now the last one monitoring and logging AWS provides multiple tools that can help you with monitoring and logging so for example for deep visibility this is the CloudTrail AWS service and this is the service that will monitor every API call that happens in your AWS environment and this is really really helpful when you want to search for an event for example also log aggregation so this is CloudWatch and you can also receive multiple notifications through alerts so through emails now let's also talk about the AWS security guidance AWS provides customers with guidance and expertise through both online tools and AWS personnel or even partner personnel so there are AWS professionals worldwide that can help you with your implementation if you're not very hands-on so AWS Enterprise
AWS Enterprise Support, which we mentioned in the previous module, comes with a 15-minute SLA, 24/7 availability, and a dedicated TAM (Technical Account Manager). There is AWS Trusted Advisor, which we have also covered, and AWS Professional Services, meaning AWS expert employees will configure your setup the way you want or need. The last one I want to cover is the AWS compliance program. AWS computing environments are continuously audited, with certifications from different accreditation entities across the world, across geographies and verticals. Some popular examples you may have heard of are ISO 27001 and PCI DSS. In a traditional data center, common compliance activities are often manual, periodic activities (maybe you have gone through this kind of auditing) and include verifying asset configurations and reporting on administrative activities. Moreover, and this is also kind of funny, the resulting reports are out of date before they are even published. If you want to take a look at the certifications AWS has currently been awarded, you can go to aws.amazon.com/compliance/programs. So let's switch now to the AWS website and have a look. Alright, I'm now on the AWS website, and here are the AWS compliance programs. As you can see, there are multiple certifications AWS has now been awarded. There are global ones, like the ISO 27001 or 9001 here, and this one is also very popular, for payment card standards, the PCI DSS Level 1. There are also some region-specific ones, for example FIPS for the United States, ones for Asia Pacific, and ones for Europe, and so on. Maybe you receive questions from your end customers; say you're doing an implementation, helping them migrate some workloads into AWS, and you get a question like "is Amazon Web Services compliant with this and this and this?". This is the web page that you need to check.
Most probably, AWS will be compliant with whatever the end customer is going to request. Thank you and see you in the next section. In this section we will cover three AWS services: WAF (Web Application Firewall), Shield, and also Firewall Manager. AWS WAF is a web application firewall that monitors connections forwarded to your web application. A WAF is a Layer 7 (application layer) defense, and this relates to the OSI model; it is not designed to defend against all types of attacks. As opposed to typical network firewalls, web application firewalls understand traffic from the application perspective. With a WAF you can monitor, for example, HTTP and HTTPS traffic, which is more than just the TCP protocol and port numbers 80 or 443. That's why they are called web application firewalls: they understand traffic at Layer 7, from the application perspective. Now let's continue: what does a web application firewall do? A WAF typically protects web applications from attacks such as cross-site request forgery, cross-site scripting, file inclusion, and also SQL injection. This method of attack mitigation is usually part of a suite of tools which together create a holistic defense against a range of attack vectors. Deploying a WAF in front of a web application literally means installing a shield between the web application and the internet users. Now, a WAF provides several options that you can configure: you can allow all traffic except for specific requests coming from your users, you can block all traffic except requests that you permit, and you can also monitor and count requests with properties that you define. Some of the general benefits of AWS WAF: a WAF brings several benefits that you may want to take into consideration in a real-world implementation. For example, the WAF brings an additional level of security for your web applications, and you can define custom rules to protect your web applications.
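The three rule options just described (allow all except specific requests, block all except permitted requests, and monitor/count) can be sketched as a tiny rule-evaluation loop. This is an illustrative sketch only; the rule structure, names, and paths below are invented and are not the real AWS WAF API:

```python
# Toy sketch of WAF-style rule evaluation: each rule matches a request
# property and carries an action -- ALLOW, BLOCK, or COUNT (monitor only).

def evaluate(request, rules, default_action="ALLOW"):
    """Return (final_action, counted_rule_names) for a request dict."""
    counted = []
    for rule in rules:
        if rule["match"](request):
            if rule["action"] == "COUNT":
                counted.append(rule["name"])  # monitor only, keep evaluating
                continue
            return rule["action"], counted    # ALLOW or BLOCK terminates
    return default_action, counted

# "Block all traffic except requests that you permit":
rules = [
    {"name": "permit-home", "action": "ALLOW",
     "match": lambda r: r["path"] == "/"},
    {"name": "log-admin", "action": "COUNT",
     "match": lambda r: r["path"].startswith("/admin")},
]

action, seen = evaluate({"path": "/admin/login"}, rules, default_action="BLOCK")
# action is "BLOCK" (the default), and "log-admin" was counted along the way
```

The COUNT action illustrates the "monitor and count requests" option: it records the match but never decides the request's fate, which is exactly why it is useful for testing a new rule before enforcing it.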
You can also use the WAF API for automated administration, in order to ease your daily administration. Now, what is AWS Shield? AWS Shield helps you protect against DDoS attacks. So what's a DoS attack, and what's a DDoS attack, in case you're not familiar with this terminology? A denial of service (DoS) attack is a type of cyber attack in which a hacker aims to make a computer or server unavailable to its users by interrupting the device's normal functioning or normal behavior. A distributed denial of service (DDoS) attack is a DoS attack that comes from many distributed sources: for example, a community of hackers around the world that just decide, at 6 AM in the morning, "we will attack that specific server", and that would be a DDoS attack. AWS Shield is the AWS service that helps you stay protected from DDoS attacks. And the last one is AWS Firewall Manager. AWS Firewall Manager simplifies your AWS WAF administration and maintenance tasks across multiple accounts and resources. With AWS Firewall Manager you set up your firewall rules just once; the service automatically applies your rules across your accounts and resources, even as you add new resources. Now let's switch over to the AWS Management Console in order to check these three AWS services. Alright, I'm now in the AWS Management Console, and in order to check these three services you have to go to Services, and under Security, Identity and Compliance you have this option here, AWS WAF & Shield. Click on this and you will be presented with the landing page of AWS WAF and AWS Shield. So these are the three options we have: WAF, Shield, and Firewall Manager. AWS WAF is a web application firewall service that helps protect your web applications from common exploits that could affect app availability, compromise security, or consume excessive resources. Going to AWS WAF now in order to see how it looks, we can go ahead and configure a web ACL; this is the landing page for setting up a web access control list.
But this is outside the scope of this exam, so we will just navigate back and also take a look at AWS Shield. Going now to AWS Shield, let's take a look; here is the most important thing. There are two tiers of service related to Shield. The one we are in now is activated and it's called AWS Shield Standard, and we see that only two ticks are enabled here: network flow monitoring, which relates to the active monitoring category, and, for the DDoS mitigations, protection from common DDoS attacks such as the TCP SYN flood and UDP reflection attacks. In order to have full protection against DDoS attacks we would need to activate AWS Shield Advanced, which will allow us to be protected from many more types of attacks. The thing is, though, that it comes at 3,000 US dollars per month, so this is something that you may want to take into consideration before activating this specific service. And the last one is Firewall Manager. AWS Firewall Manager simplifies your AWS WAF administration and maintenance tasks across multiple accounts and resources. It is here to help if you have multiple AWS accounts: we would consolidate them into AWS Organizations, and it will really make your life easier when configuring your whole setup. Thank you and see you in the next section. The last AWS service that we will cover in this module is Amazon Inspector. So now let's start: what is Amazon Inspector? Amazon Inspector tests your Amazon EC2 instances from the network accessibility perspective, and also tests the security state of the applications that run on those instances. Amazon Inspector assesses applications for exposure, vulnerabilities, and also deviations from best practices. After running an assessment, Amazon Inspector delivers a detailed list of security findings that is organized by level of severity.
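That severity-ordered findings list can be sketched with a small sort. The finding titles and severity names below are made up for illustration and are not real Inspector output:

```python
# Sketch: ordering assessment findings the way the console groups them,
# High before Medium before Low. Finding dicts here are invented.
SEVERITY_RANK = {"High": 0, "Medium": 1, "Low": 2, "Informational": 3}

def by_severity(findings):
    """Return findings sorted most severe first."""
    return sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])

findings = [
    {"title": "Port 22 reachable from the internet", "severity": "Medium"},
    {"title": "Instance missing security patches",   "severity": "High"},
    {"title": "Unused security group rule",          "severity": "Low"},
]

ordered = by_severity(findings)
# ordered[0] is the High finding
```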
With Amazon Inspector, you install a small software package, called an agent, in the operating system of the EC2 instances that you want to assess. The agent monitors the behavior of the EC2 instances, including network, file system, and also process activity. Still, you are responsible for the security of applications, processes, and tools that run on AWS services; remember, that's the security in the cloud. Now let's switch over to the AWS Management Console and take a look at Amazon Inspector. I am now in the AWS Management Console, and in order to get started please click on Services, and under Security, Identity and Compliance please click on Inspector. You will now be presented with the landing page of Amazon Inspector; let's take a look at what it says. Amazon Inspector enables you to analyze the behavior of your AWS resources and helps you identify potential security issues. In order to continue, I will click on Get Started. Welcome to Amazon Inspector: Amazon Inspector assessments check for security exposures and vulnerabilities in your EC2 instances, and we have two options here, one where the Inspector agent is not required and this one where the Inspector agent is required; I will not select the second option. I might add that I have just fired up an EC2 instance, I have applied the security group that we have used throughout the course, the SG-AWS-CCP-VPC, and I will now click Run Once. You have chosen to run the following assessment: check for ports reachable from outside the VPC. The assessment will start now; pricing is based on monthly volume usage, and I could click here in order to learn more, but I will not do that, I will just click on OK, and now the assessment is running. If you just leave it to run for a couple of minutes, depending on how big your environment is, at one point it will say that the analysis is complete. So I will click on Recent Findings, and we have here some severity levels; clicking on this one will just rearrange them. Let me now go to the medium ones.
If I expand this a little bit, I will see here that this port, which is associated with SSH, is reachable from the internet. I can expand this line here and see that the finding is exactly what I have just read. Maybe this is important to the client, maybe it's not; it really depends on what the purpose and the goal of that specific EC2 instance is. Maybe this is a web server, and it should not be open on port 22 to everybody on the internet, but only to specific IP addresses, such as the IP address of the IT administrator. Thank you and see you in the next section. This concludes module 7, Security in Amazon Web Services. Before sitting the AWS Certified Cloud Practitioner exam, please make sure you are comfortable with the AWS security concepts, and in particular go through the AWS shared responsibility model. Let's now go over the most important topics covered in this module and the exam hints. We started this module with AWS security fundamentals, and honestly, the most important topic is the AWS shared responsibility model. Please make sure you go through the information covered in this URL, the official page with the shared responsibility model; it will take literally just a couple of minutes, but it is absolutely relevant and important for the Certified Cloud Practitioner exam. Moving on, we talked about the AWS Web Application Firewall. AWS WAF is a web application firewall that monitors connections forwarded to your web application. A WAF typically protects web applications from attacks such as cross-site request forgery, cross-site scripting, and also SQL injection. Deploying a WAF in front of a web application is literally a shield placed between the web application and the internet users. Next we covered the AWS Shield service. AWS Shield helps you protect against DDoS attacks. We defined what a DoS attack and a DDoS attack are.
A denial of service (DoS) attack is a type of cyber attack in which a hacker aims to make a computer or server unavailable to its users by interrupting the device's normal functioning or behavior. A distributed denial of service (DDoS) attack is a DoS attack that comes from many distributed sources. AWS Shield helps you stay protected from DDoS attacks. The last service we covered is Amazon Inspector. Amazon Inspector tests your Amazon EC2 instances from the network accessibility perspective and the security state of the applications that run on those instances. Amazon Inspector assesses applications for exposure, vulnerabilities, and also deviations from best practices. After running an assessment, Amazon Inspector delivers a detailed list of security findings that is organized by level of severity. With that said, please join me in our next and last theoretical module, module 8, Architecting for the Cloud: Best Practices. Thank you and see you in the next module. Architecting for the cloud best practices: this module provides a brief introduction to AWS best practices for architecting in the cloud. We will start this module by going over a comparison between traditional architectures and AWS cloud computing. Please make sure you are comfortable with AWS best practices for architecting in the cloud before taking the Cloud Practitioner exam. By the end of this module I will have introduced you to several design principles within AWS, relevant both in real-world scenarios and, of course, for the Cloud Practitioner exam. We will cover design principles like scalability, disposable resources, automation, loose coupling, and some others too. We will wrap up module 8, the last theoretical module of the course, by highlighting the recommended reading, the AWS white paper "Architecting for the Cloud: Best Practices". With that said, let's get started. In this section we are going to cover some of the differences between cloud architectures and traditional environments. We will start with an introduction.
Migrating applications to AWS even without significant changes, which is also called lift and shift (taking your applications as they are today and moving them to the cloud), provides organizations the benefits of a secure and cost-efficient infrastructure. But for the immediate benefits like agility and elasticity that are possible and available with cloud computing, architectures need to be changed and updated. The following are best practices that have emerged when thinking about cloud versus traditional computing and their differences. The first one: IT assets become programmable resources. In a traditional data center, resource provisioning is done by guessing and making assumptions about the maximum peak load. This results in either idle, expensive resources not being utilized, or insufficient capacity to handle the traffic. This is totally different with cloud computing: you use the right amount of capacity, dynamically scale up or down when needed, and pay as you go, only for what you use. AWS services are up and running in minutes, and you can use them for as much or as little time as needed, with no time limits or constraints. Next, global, available, and unlimited capacity. When you deploy your app in the cloud, several best practices should be considered: proximity to your end users, compliance or data residency constraints, costs, and some others too. In order to achieve low latency for your applications, you may want to use the Amazon CloudFront content delivery network, which we covered in the previous modules too. For high availability and fault tolerance for your apps, by using the AWS global infrastructure you can deploy into multiple data centers, using multiple Availability Zones and also multiple regions. With AWS there is virtually unlimited capacity to use; you don't have to worry about that, AWS will handle it for you behind the scenes. Next, higher-level managed services. AWS services are instantly available to use: compute, storage, databases, analytics, deployment services.
Using managed services from AWS helps you lower operational complexity and, of course, also the cost. Reducing risk for your project implementations is easy, as all AWS managed services are designed for scalability and high availability, and this is a very big plus. Now let's also mention security. With AWS, cloud governance capabilities that enable continuous monitoring of configuration changes to your IT resources are always on and available. This is different from traditional infrastructure, where auditing processes are periodic and manual. Solution architects can use quite a lot of native AWS security and encryption features and services, which leads to meeting higher levels of compliance and also data protection. Thank you and see you in the next section. In this section we start our discussion about scalability in regard to AWS design principles. We will start with an overview. Systems that are expected to grow over time need to be built on top of a scalable architecture. Scalable architectures provide the ability to grow your environment when needed, for example when the number of users increases, traffic throughput increases, and so on. Cloud computing allows virtually unlimited growth, but the underlying architecture must be designed to support this. You can either scale vertically or horizontally, and we will cover what these mean next. We will start with scaling vertically. Scaling vertically means increasing the capacity of your current server. As an example, at some point you discover that your current server can no longer process the amount of data that is constantly increasing, so as a result you need to scale, to grow. So again, an example: you are running your website on an AWS EC2 instance, let's say an a1.medium EC2 instance, which comes with one virtual CPU and two gigabytes of RAM; these are the hardware resources allocated to an a1.medium EC2 instance.
Because you need to grow and you need to scale, you may decide to move to one of the largest EC2 instances in AWS currently, maybe an m5.24xlarge, which provides 96 virtual CPUs and a lot of RAM, 384 gigabytes. This is what scaling vertically means. Scaling horizontally now: scaling horizontally means increasing the number of current resources, for example adding more EC2 instances to support your website. This is not always possible, depending on whether the underlying architecture can distribute traffic to multiple resources, and we will analyze different scenarios now: stateless applications, stateless components, stateful components, and also distributed processing. In order to understand more, let's start with stateless applications; but first, what do stateless and stateful mean? The key difference between stateful and stateless applications is that stateless applications don't store any data, and connections are independent from one another. A stateless application is an application that needs no knowledge of previous interactions and stores no session information. As an example, an application provides the same response to any user with the same input: no matter how many users try to access https:// followed by the website URL, the first thing they will all get is the landing page. That is an example of the same response to any user with the same input, in this case going to that specific URL. Again, stateless applications: why is this important? Well, stateless applications are a great candidate for horizontal scaling. Simply add more EC2 instances in order to run your application, and terminate EC2 instances when they are no longer needed. The easiest and most popular way to distribute traffic to an EC2 fleet, a fleet of EC2 instances, is through an Elastic Load Balancer. We have played quite a lot with ELBs in the course; they are quite fun, and a very powerful AWS service.
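The stateless-fleet-behind-a-load-balancer idea can be sketched as a simple round-robin rotation. The instance IDs here are invented, and a real ELB is of course far more sophisticated; this is just to show why statelessness makes horizontal scaling easy:

```python
from itertools import cycle

def make_balancer(instances):
    """Rotate requests across a fleet; any instance can answer any request."""
    rotation = cycle(instances)
    def balance(request):
        instance = next(rotation)
        # Stateless: same response for the same input, whoever serves it.
        return instance, f"landing page for {request}"
    return balance

fleet = ["i-aaa", "i-bbb", "i-ccc"]
balance = make_balancer(fleet)

served_by = [balance("/")[0] for _ in range(6)]
# served_by == ["i-aaa", "i-bbb", "i-ccc", "i-aaa", "i-bbb", "i-ccc"]
```

Because no instance holds session state, growing the fleet is just adding another name to the list, and shrinking it is just removing one.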
Let's now talk about stateless components. Most applications need to maintain some kind of state information; web applications, for example, need to track whether a user is signed in, this is just an example. Some web applications use HTTP cookies to store data on the client side; other scenarios require storing larger files, and for that second option Amazon S3 or the Elastic File System could be used, again just as an example. Talking about stateful components: there are cases where you cannot change all the components in your architecture to be stateless. As an example, take real-time multiplayer online gaming, where users are connected to the same server so that low latency and the best experience can be achieved. This is a stateful application, but horizontal scaling can still be achieved in this case by using what is called session affinity, which refers to binding all the connections from a specific user to only a single server. So this can be achieved too. Our last topic, distributed processing: this is similar to breaking a problem into smaller pieces. When a single compute resource cannot process some information, maybe because it is too large, the work is distributed, split into small fragments handed to more instances. This use case is absolutely common for the big data scenarios that are covered in AWS: processing of large-volume data sets. Thank you and see you in the next section. Disposable resources, the second AWS design principle. So what exactly does disposable mean? Disposable resources are temporary resources; you can think of disposable resources as one-time-use resources. You use one when you need it, and that's it: then you terminate it, you don't use it anymore, you don't store any information related to it. In traditional data center environments you work with fixed resources or servers, and this translates into a high upfront cost and also a high time to production.
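Going back to distributed processing for a moment, splitting a large data set into fragments for a fleet of workers can be sketched like this. Plain functions stand in for instances here, purely for illustration:

```python
def split(data, n_workers):
    """Split data into n_workers roughly equal fragments."""
    k, m = divmod(len(data), n_workers)
    fragments, start = [], 0
    for i in range(n_workers):
        end = start + k + (1 if i < m else 0)  # first m fragments get one extra item
        fragments.append(data[start:end])
        start = end
    return fragments

def process(fragment):
    """Stand-in for the work one instance would do on its fragment."""
    return sum(fragment)

data = list(range(100))        # 0 + 1 + ... + 99 == 4950
fragments = split(data, 3)     # fragment sizes: 34, 33, 33
total = sum(process(f) for f in fragments)
# total == 4950, the same answer as processing everything on one machine
```

The point is the shape of the pattern: when one resource cannot hold the whole problem, partition it, fan the parts out, and combine the partial results.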
So what does time to production mean? Let's say you're running your whole setup in a traditional environment, and at some point you need a new server, maybe to grow or just to replace a faulty server in your data center. Well, then you will have to raise a PO, a purchase order, and then your financing department will order a new server from whatever vendor you're using, HP or Dell or whoever. Then it will take time to get the server, and when the server finally gets to you, you have to unpack it, rack it, and stack it in your data center, maybe also configure the hypervisor, and only then will you be in a good position to hand it over to your developers, for example, so they can start using it. Well, this takes some time, and this is not what happens in AWS: with Amazon, you launch as many servers as you need, use them as long as you need them, and pay accordingly. Let's now talk about configuration drift and immutable infrastructure. In a traditional data center there is something called configuration drift: configuration changes and software patches can be applied inconsistently, which leads to different, maybe even untested, configurations across your resources in the data center. You can end up having 100 servers with none of them sharing the same configuration, which makes it really tough to manage your infrastructure. Immutable infrastructure is something that can solve this issue: instead of patching and modifying the initial configuration of your servers when needed, you just replace the old server with a new one that has the new software packages applied. Now, automation, and this is related to infrastructure instantiation. Manually setting up your infrastructure is time consuming and also error prone; we are humans, and we do make mistakes.
So automating your setup is a great idea and will help you avoid many possible errors. Ideally, any new environment setup, or scaling up of existing infrastructure, should be done automatically. In AWS you can use what are called bootstrapping and golden images; you can use them one at a time, or maybe both at the same time. Let's start with bootstrapping. When you launch an EC2 instance from an AMI, an Amazon Machine Image, the instance starts with a default configuration. Now, remember when we configured the EC2 instance to act as a web server? What we did was connect through SSH, so we were on the CLI of the EC2 instance, and we ran some commands: "sudo su" to change to the root user, then we installed the HTTP daemon, so the HTTP service, then we started it, then we went to /var/www/html and created the landing page, the index.html. Well, all of these commands can be written into a bootstrap script, which means that when the EC2 instance starts, all of these commands will be run in order without you having to type them in. This is a great thing to do and will help you stay away from errors. Now, when your web server is ready, up and running with all security and operating system patches applied, you can then create a snapshot of this EC2 instance. The snapshot, also called or known as the golden image, may then be used to create an Amazon Machine Image. The AMI could be used, for example, in an Auto Scaling group, so that the resources sustaining your app can scale up or down as needed, as you configure in the Auto Scaling policy. Thank you and see you in the next section. In this section we are going to cover automation. So why should I use automation in my environment, what's in it for me, why should I care? We have briefly touched on this in the previous section.
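The bootstrap script from the web-server example can be sketched as user data held in a string, which is what you would hand to the instance at launch. The commands mirror the manual steps described above, but treat this as an illustrative template, not a tested launch script:

```python
# User data for the web-server lab: the commands we typed by hand,
# collected into a bootstrap script that the instance runs at first
# boot, so nothing has to be typed in. Illustrative template only.
USER_DATA = """#!/bin/bash
yum update -y                    # apply OS patches
yum install -y httpd             # install the HTTP daemon
systemctl start httpd            # start the web server now
systemctl enable httpd           # and again after every reboot
echo "<h1>My landing page</h1>" > /var/www/html/index.html
"""

# With a tool like boto3 you would pass this string as the UserData
# parameter when launching the instance.
```

The same string can be pasted into the user data field of the EC2 launch wizard; either way, every launch produces an identically configured web server.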
Automation brings less manual work, fewer possible errors, and improved system stability and efficiency. Let's go over some examples of how you can automate your work in AWS. We talked briefly earlier in the course about AWS Elastic Beanstalk: you just upload your application code, and provisioning, load balancing, auto scaling, and monitoring are done automatically by AWS with Elastic Beanstalk. Another example is Amazon EC2 auto recovery: you can monitor your EC2 instance, and if it fails, AWS will create an identical EC2 instance for you. Auto Scaling: you can scale your EC2 fleet capacity up or down depending on conditions you define in your Auto Scaling policy. Another one, Amazon CloudWatch alarms: you can define alarms that can trigger other actions, as you configure them, of course. Then Lambda scheduled events: you can execute a task at a specific time of day, or as a result of another thing that happens in your AWS environment. And the last one is AWS OpsWorks lifecycle events, which supports continuous configuration through events; as an example, you update your instances' configuration as a result of an event, and that is also called a trigger. Thank you and see you in the next section. In this section we will cover the loose coupling design principle. So what exactly does loose coupling mean? Breaking your application into smaller pieces or components, in such a way that there are little to no dependencies between them, leads to a loosely coupled system. So how can loose coupling be implemented, what are the options? We will explore this right now. Let's talk first about well-defined interfaces. Communication between the components should be implemented through open mechanisms, and open here means that these protocols are not developed or created by any specific vendor; they are created by a community and can be used by anyone with no restrictions. Using open, non-vendor-specific communication interfaces gives developers the possibility to modify and adapt configuration on the fly, during or after project implementation.
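Stepping back to the automation examples above, the alarm-triggers-action pattern (a CloudWatch-style alarm firing an Auto Scaling action) can be sketched like this. The metric, threshold, and fleet here are all invented for illustration:

```python
def alarm_fires(cpu_samples, threshold=70.0, periods=3):
    """Fire when the last `periods` samples all exceed the threshold."""
    recent = cpu_samples[-periods:]
    return len(recent) == periods and all(s > threshold for s in recent)

def scale_out(fleet_size):
    """Stand-in for the Auto Scaling action the alarm would trigger."""
    return fleet_size + 1

fleet = 2
samples = [55.0, 62.0, 80.0, 91.0, 88.0]   # CPU % over recent periods
if alarm_fires(samples):
    fleet = scale_out(fleet)
# fleet is now 3: the alarm fired and one instance was added
```

Requiring several consecutive breaches before acting is the same idea as a CloudWatch alarm's evaluation periods: it keeps one noisy sample from triggering a scaling action.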
Now let's also talk about service discovery. Implementing loose coupling means that you will have a lot of services that need to communicate, either with each other or with other services in your environment (and when I say services, I'm referring to AWS services). There needs to be a way to address or call any service in a unique way, loosely, so that no interdependencies are created. As an example, think of load balancers: you can call a load balancer by using its endpoint name, which is totally different from an IP address. When we tested the web server, we just used the load balancer endpoint name, which pointed to the web server itself, running on the EC2 instance; again, we used the endpoint name of the load balancer. Now, about asynchronous integration. Asynchronous integration refers to integrations between different services in your infrastructure, and what exactly does asynchronous mean? If two services can work independently of each other, but together as a system, the system is asynchronous. As an example, service A could be SNS, the notification service, and service B could be SQS, the queuing service. Or, even simpler than that, think of sending an email: sending the email is service A, and the email arrives at the mail server, which then sends the email on to the destination, and this is service B. If, for whatever reason, the mail server is busy and does not send the email right away, that does not stop you from sending the mail; these are two different, decoupled steps of the whole process, sending and receiving. It's the same with the asynchronous integration I've talked about. Now, graceful failure. Graceful failure is another method to increase loose coupling: when a failure occurs, the failure should be communicated through the system, and all components should be aware of it.
Rerouting of traffic to healthy services should then take place. As an example, Route 53 can reroute client traffic to a healthy EC2 instance that hosts your website, should the EC2 instance that regularly hosts your website fail for whatever reason. Thank you and see you in the next section. In this section we are going to talk about the services design principle, and it is actually going to be a very short section. The main idea is that you should use AWS managed services and migrate to serverless architectures as much as possible. Managed services, meaning services managed by AWS, making your life easier, include databases, machine learning, analytics, queuing, email, notifications, and even more than that. As an example, take Amazon S3: you can store literally any amount of data without worrying about capacity, availability, data replication, and actually more. The second topic is serverless; this is the next big thing happening now. Serverless means that you can actually run your application code with no servers. AWS Lambda is the AWS compute service that will run your code on your behalf using the AWS infrastructure, and as a bonus, it's even cheaper than traditional cloud computing: with AWS Lambda you are charged for every 100 milliseconds your code executes and for the number of times your code is triggered. Thank you and see you in the next section. This concludes module 8, Architecting for the Cloud: Best Practices. Before sitting the AWS Certified Cloud Practitioner exam, please make sure you are comfortable with the AWS concepts related to architecting best practices. Please take the time and go over the white paper, and I'm referring to "Architecting for the Cloud: Best Practices"; it's an easy read, and it's really not that long. The white paper is available in the download section at the beginning of the course, between modules 1 and 2, or you can just follow this URL. With that said, please join me in our next and last module, module 9, AWS account cleanup. Thank you and see you in the next module.
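As a closing aside to this module, the Lambda billing model described above (per request, plus per 100 ms of execution) can be sketched as a small cost function. The per-unit prices below are placeholders for illustration, not current AWS pricing:

```python
PRICE_PER_REQUEST = 0.0000002   # placeholder US$ per invocation
PRICE_PER_100MS = 0.00000208    # placeholder US$ per started 100 ms block

def lambda_cost(invocations, avg_duration_ms):
    """Charge per trigger plus per started 100 ms block of execution."""
    blocks = -(-avg_duration_ms // 100)   # round up: 120 ms bills as 2 blocks
    return invocations * (PRICE_PER_REQUEST + blocks * PRICE_PER_100MS)

cost = lambda_cost(1_000_000, 120)
# one million 120 ms invocations: 1e6 * (0.0000002 + 2 * 0.00000208), about 4.36
```

Note that duration is rounded up to the next 100 ms block, so a 101 ms run bills the same as a 200 ms run under this model.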
Welcome to module 3, AWS services high-level overview. This module provides an introduction and a high-level overview of AWS cloud services, and it is based on the April 2017 AWS white paper "Overview of Amazon Web Services", indicated as a recommended resource for studying in the official exam guide. We will start by covering the core and key AWS services: compute, storage, database, migration, networking, content delivery, management tools, security, identity and compliance, and developer tool services, and we will close this first part with AWS messaging services. By the end of this module you will have a good understanding and be able to identify which AWS service can solve a particular task or job, just like we will be questioned during the AWS Certified Cloud Practitioner exam. We will wrap up module 3 after going through the different miscellaneous AWS services that are available under the following categories in the AWS console: analytics, artificial intelligence, mobile, application services, business productivity, desktop and app streaming, internet of things, and game development. With that said, let's get started. AWS compute services: these are the services that will be covered in this section. We will start with Amazon EC2, or Elastic Compute Cloud, then continue with EC2 Container Service, EC2 Container Registry, Amazon Lightsail, AWS Batch, AWS Elastic Beanstalk, and AWS Lambda, and we will wrap up this section with Amazon Auto Scaling. All of these services can be found under the Compute category in the AWS web console. So let's start now with Amazon EC2. First of all, what's with this name, EC2? Well, it comes from the two Cs in Compute and Cloud; there you go, the two Cs in the name Amazon EC2. Amazon Elastic Compute Cloud, or simply Amazon EC2, is a web service that provides secure, resizable compute capacity in the cloud. So Amazon EC2 makes elastic computing possible: you have full control, and you can apply whatever flexible configuration you want.
whatever you want. It's integrated with most AWS services, it's reliable (AWS guarantees 99.95% availability), and of course it is secure and inexpensive. Simply put, Amazon EC2 is the virtual machine offering, infrastructure as a service, from AWS. Let's continue with EC2 Container Registry. Amazon Elastic Container Registry, or simply ECR, is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications. Now, EC2 Container Service: Amazon ECS, or Elastic Container Service, is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. So ECS is the AWS service that helps run and scale applications in Docker containers. I keep saying containers, containers, containers, but maybe you're new to the game, so what is a container, actually? Think of a container as another form of virtualization. Maybe you have heard of virtual machines: these allow a piece of hardware to be split into different VMs, or virtualized, so that the hardware's power can be shared among different users and appear as separate servers or machines. Containers, on the other hand, virtualize the operating system, splitting it up into, let's say, virtualized compartments in which containerized applications run. Now, how does it work? This refers to Amazon ECR and ECS: Amazon Elastic Container Registry integrates with Amazon ECS, the container service, and the Docker CLI, allowing you to simplify your development and production workflows. You can easily push your container images to Amazon ECR using the Docker CLI from your development machine, and Amazon ECS can pull them directly for production deployments, as you can see
on your screen; this is a diagram made available by AWS. Let's continue now with Amazon Lightsail. What is it, and what's its purpose? Amazon Lightsail is the easiest way to get started with AWS for developers who just need virtual private servers. Lightsail includes everything you need to launch your project quickly: a virtual machine, SSD-based storage, data transfer, DNS management for name-to-IP resolution, and a static IP, for a low, predictable price. For example, when you start Lightsail in the AWS console, you first select the platform, either Linux/Unix or Microsoft Windows, and then you select a blueprint: either an application plus the operating system, or only the operating system. In this case I have selected the WordPress blueprint, and I will literally have a WordPress blog site, or whatever I choose to use it for, very fast. Let's continue now with AWS Batch. Again, maybe this terminology is new to you: batch jobs are jobs that can run without user interaction, and batch processing is for frequently used programs that can run with minimal human interaction. Think of scripting: you have ten things that you do recurrently, daily or maybe a couple of times per day, and you decide to automate them; those are batch jobs. AWS Batch enables you to run hundreds of thousands of batch computing jobs on AWS, so this is a very powerful tool if you want or need to automate your work. AWS Batch dynamically provisions the optimal quantity and type of compute resources, meaning CPU (the processor) and memory (the RAM), based on the volume and specific resource requirements of the jobs submitted. The next very nice and interesting service from AWS is Elastic Beanstalk. AWS Elastic Beanstalk helps you deploy, monitor, and scale an application quickly and easily.
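As a rough sketch of what such an uploaded application can look like: Beanstalk's Python platform, for instance, conventionally looks for a WSGI callable named `application` (typically in a file called `application.py`); the greeting text here is just made up for illustration.

```python
# application.py -- a minimal WSGI app of the kind Elastic Beanstalk's
# Python platform can run. The callable name "application" follows the
# platform convention; the response body is purely illustrative.
def application(environ, start_response):
    body = b"Hello from Elastic Beanstalk"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Zip a file like this, upload it, and Beanstalk provisions the environment around it.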
You can simply upload your code, written in Java, .NET, PHP, Node.js, Python, Ruby, Go, or Docker, and run it on servers such as Apache, Microsoft IIS, and others. Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. As an example, when you go to the AWS console and launch Elastic Beanstalk, you create a web app: I entered the name (this is a test app), I select the platform (my code is written in Java), I upload my code as a zip archive, and I just click on create application, and that's it. It will grow the application or shrink it depending on the traffic load and the configuration that I apply. AWS Lambda, very nice; we will definitely talk more about it among the core and key services, in modules 3 and 4. AWS Lambda lets you run code without provisioning or managing servers, a very powerful service from AWS. You pay only for the compute time you consume; there is no charge when your code is not running, so you only pay when the code is run by AWS Lambda. Just upload your code, and Lambda takes care of everything required to run and scale it. Fantastic. As an example, here is some code I can test in the AWS Lambda console: I click run and I see the result, in this case a simple hello world printed for the end user; this is basically how you test your code before running it in production in AWS Lambda. Now, Auto Scaling: this is the AWS service I talked about in the Black Friday example. AWS Auto Scaling can increase the number of Amazon EC2 instances during traffic demand spikes to maintain performance, and decrease capacity during quiet periods for cost reduction. AWS Auto Scaling can also help ensure you are running the desired number of EC2 instances; for example, I want to make sure that I always run at a minimum three
EC2 instances, because, let's say, I have the application deployed in a region that has three Availability Zones and I want at least one EC2 instance in each of them. So you can scale the number of EC2 instances up or down automatically, based on your desired configuration. Thank you for your time and see you in the next section. In this section we will cover AWS storage services: the Simple Storage Service, or S3; Amazon Elastic Block Store, or EBS; Amazon Elastic File System, or EFS; Amazon Glacier; and we will wrap up this section with AWS Storage Gateway. Storage services are covered under the storage category in the AWS console. Let's start with Amazon S3. The name S3 comes from the three S's in Simple Storage Service: there you go, Amazon S3. Amazon S3 is the AWS object storage service that stores and retrieves any amount of data. Please pay attention: this is an object storage service, which means you will not store operating system files; you will store documents, pictures, videos, and so on. Amazon S3 is designed for 99.999999999% (11 nines) durability, and durability means that your information will not be lost. Amazon S3 is simple to use and scalable: you don't have to worry about whether your data will fit; Amazon S3 is literally infinite, so you can store as much or as little as you want. It's secure, it provides 99.99% availability, it's definitely low cost, you can easily migrate data into or out of S3, and broad integration with other AWS services is provided; as you will see later in the course, we will cover S3 with some hands-on examples. It is easy to manage. When I say that you can easily migrate data into or out of S3, I also want to note that data coming into S3 is free, but data leaving S3 comes with charges, so you'll have to pay when you take data out of S3.
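To make that ingress-free, egress-charged model concrete, here is a toy cost calculation; the per-gigabyte rate below is an invented illustration, not an actual AWS price.

```python
# Toy illustration of S3's transfer pricing model: data in is free,
# data out is billed per gigabyte.
EGRESS_RATE_PER_GB = 0.09  # hypothetical $/GB, NOT a real AWS price


def transfer_cost(gb_in, gb_out):
    """Ingress costs nothing; egress is charged per gigabyte."""
    return gb_in * 0.0 + gb_out * EGRESS_RATE_PER_GB


print(transfer_cost(500, 0))   # uploading 500 GB costs nothing
print(transfer_cost(0, 100))   # downloading 100 GB costs about $9 at this rate
```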
Let's continue with Amazon EBS, or Elastic Block Store. Amazon EBS provides persistent block storage volumes for use with Amazon EC2 instances in the AWS cloud. This is the type of storage you'll use in AWS to create volumes, or partitions, as you may know them from your own machine: on your laptop or desktop you maybe have multiple partitions, the C drive on Windows operating systems and some other partitions for the different files and documents you store. Amazon EBS, Elastic Block Store, is the equivalent you will use in AWS. Amazon EBS offers high-performance volumes, 99.999% availability for each EBS volume, and encryption for data at rest and in transit, which means data on your EBS volume will be encrypted, and data traveling between EC2 instances, for example, will also be encrypted. EBS also offers access management, meaning you can define who can access what, and snapshots are available, so you can create point-in-time snapshots of your EBS volumes and store them in S3 for durability. Let's continue now with Amazon EFS, or Elastic File System. Amazon EFS provides simple, scalable file storage for use with Amazon EC2 instances in the AWS cloud. Storage capacity is elastic as you add or remove files from the file system, and you only pay for the space your files and directories use. An EFS file system can be mounted on a single EC2 instance or on multiple instances, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one EC2 instance. The next storage service is Amazon Glacier. Once data is stored in Amazon S3, it can be automatically moved into lower-cost, longer-term cloud storage classes, like Amazon S3 Standard-Infrequent Access (IA) and, for archiving purposes, Amazon Glacier. Amazon Glacier is a secure, durable, and extremely low-cost storage service for
data archiving and long-term backup. Amazon Glacier provides three options for access to archives, ranging from a few minutes to several hours, and pricing differs between these options. The last one is AWS Storage Gateway. AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage. Applications connect to the service through a VM (virtual machine) or even a hardware appliance. The gateway connects to AWS storage services such as S3, S3 Glacier, S3 Glacier Deep Archive, Amazon EBS, and AWS Backup, providing storage for files, volumes, snapshots, and so on. Thank you and see you in the next section. In this section we will talk about AWS database services: RDS, Amazon Aurora, Amazon DynamoDB, and Amazon ElastiCache. These services are covered in the database section of the AWS console. First, let's start with an introduction to relational, or SQL, databases; maybe you're new to this, so I thought it a good idea to start here. First of all, what is a database? A database is just a location to store and retrieve data. Microsoft Excel is a great example: think of spreadsheets in Excel, where information is stored in columns and rows. Now, relational databases can take information from multiple tables and combine it; this is like creating relations between tables, helping you to build complex database systems, and you will see in a moment what I mean by this. Let's say we have three tables in Excel: courses, students, and registration. In the courses table we have the course names math, history, and physics, each with a course ID; in the students table we have three students, each with a student ID; and we also have the third table, registration. The registration table has a unique field, the registration ID number, and also a course ID and a student ID, which are fields derived from the courses table and
students table. So the registration table is populated with information from the courses table and the students table, which represent input data for it, and these are the relations, or relationships, created between tables. Now let's talk about Amazon RDS, the Relational Database Service. Now that we have an idea what relational databases are: AWS Relational Database Service, or RDS, makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon RDS is fast and easy to administer, highly scalable, available and durable, inexpensive, and secure. For example, if you go to the RDS section of the AWS console, when you start your configuration you first create the database and select what kind of database engine you want: Amazon Aurora is the AWS offering, and then there are others as well, MySQL, MariaDB, PostgreSQL, Oracle, and even Microsoft SQL Server. Let's continue now with Amazon Aurora. Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Amazon Aurora is highly scalable (it can grow up to 32 virtual CPUs and 244 GB of RAM), highly available (providing 99.99% availability), and highly secure, because you can encrypt data at rest and also in transit through SSL, the Secure Sockets Layer. It is MySQL and PostgreSQL compatible, and it is fully managed by AWS, so you don't have to worry about managing or patching it; you just use it, if you choose to do so. Now let's also talk about NoSQL, or non-relational, databases: how are these different from what we have just seen?
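Before moving on, the three-table setup described earlier (courses, students, registration) maps directly onto a relational engine; here is a self-contained SQLite sketch (table and column names are my own illustration) showing how the registration table relates the other two.

```python
import sqlite3

# Recreate the courses / students / registration example in SQLite.
# Column names are illustrative; registration references the other two
# tables by their IDs, which is exactly the "relation" between tables.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE courses  (course_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE students (student_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE registration (
        registration_id INTEGER PRIMARY KEY,
        course_id  INTEGER REFERENCES courses(course_id),
        student_id INTEGER REFERENCES students(student_id)
    );
    INSERT INTO courses  VALUES (1, 'Math'), (2, 'History'), (3, 'Physics');
    INSERT INTO students VALUES (1, 'Mary'), (2, 'John'), (3, 'Gabriel');
    INSERT INTO registration VALUES (1, 1, 2);  -- John registers for Math
""")

# A JOIN combines information from multiple tables via the relations.
row = db.execute("""
    SELECT s.name, c.name
    FROM registration r
    JOIN students s ON s.student_id = r.student_id
    JOIN courses  c ON c.course_id  = r.course_id
""").fetchone()
print(row)  # ('John', 'Math')
```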
With relational databases, you need to have the database structure defined before you insert data into the database. What do I mean by this? Look at the three tables, courses, students, and registration: where can I insert, for example, a student's phone number? The field is not available, so you have to define a new column and then start populating the database with the new information as required. So with relational databases you first have to define the column and then populate the data, but with NoSQL, or non-relational, databases this is no longer needed: NoSQL databases give you the flexibility to insert data as you go, without defining columns beforehand as in the previous example. One example of what NoSQL databases use is key-value pairs; there are actually more types, or flavors, of NoSQL databases, and this is just one of them. You define a key, in this case let's say the course name, and the value can be math, history, or physics; you define another key, student name, with a value of Mary, John, or Gabriel; and now we also have the phone number, which you add as you go. You don't need to define it before working with the NoSQL database: course, student, phone number, address. NoSQL databases are widely recognized for their ease of development, functionality, and performance at scale. Amazon DynamoDB: Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit-millisecond latency at any scale, so you can scale almost infinitely, so to speak. Amazon DynamoDB is fast (again, single-digit-millisecond latency), highly scalable, fully managed by AWS (no worries about operating system patching and so on), flexible (it supports multiple NoSQL data models), and it supports AWS Lambda integration for event-driven programming.
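The key-value flexibility described above can be sketched in plain Python (the field names and the phone number are just the ones from our running example, invented for illustration); note how a new attribute is added to one item without touching the others, which is exactly the schema freedom relational tables lack.

```python
# Items in a key-value store are independent maps; there is no shared
# schema that every item must follow.
students = [
    {"student_name": "Mary", "course": "Math"},
    {"student_name": "John", "course": "History"},
]

# Add a phone number to one item only; no "column" had to exist first.
# (The number itself is a made-up example value.)
students[0]["phone_number"] = "555-0100"

print(students[0])  # now carries a phone_number
print(students[1])  # unchanged, and that's perfectly fine
```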
User access control is flexible as well: DynamoDB integrates with Identity and Access Management, or IAM. Now, Amazon ElastiCache. AWS ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches instead of relying entirely on slower disk-based databases. Just so you know, Amazon ElastiCache supports two open-source in-memory caching engines, named Redis and Memcached. Thank you and see you in the next section. In this section we will cover AWS migration services: AWS Application Discovery Service, Database Migration Service, Server Migration Service, AWS Snowball, AWS Snowball Edge, and we will wrap up this section with AWS Snowmobile. These services are covered under the migration and transfer category in the AWS console. Let's start with AWS Application Discovery Service, or ADS. AWS ADS can help you plan application migration projects by automatically identifying the applications running in your on-premises data centers, along with their dependencies and performance profiles. It's actually an inventory tool: it shows what you currently have in your on-premises data center, so that you know what you need to move to the AWS cloud. AWS ADS collects configuration and usage data from servers, storage, and networking equipment to develop a list of applications, how they perform, and how they are interdependent. Next on our list is AWS Database Migration Service. AWS Database Migration Service helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on it; keeping the original database fully operational until the migration is done is crucial and absolutely important. AWS Database Migration Service can
also be used for continuous data replication with high availability. What data replication means is that you copy the actual data in your database and create a replica, so you'll have the same information in another location. By doing so you achieve high availability, meaning that if the, let's say, primary database fails, the information is still available in a second location; it's the same concept as with Availability Zones and regions, namely high availability. Now let's continue with AWS Server Migration Service, or SMS. AWS Server Migration Service is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations. Now let's talk about Snowball. The AWS Snowball services are AWS data transport solutions that use secure appliances to transfer large amounts of data into and out of AWS. I want you to think of a Snowball as an extremely large USB stick, really large. You use the AWS Snowball services when the time it takes to transfer data between your on-premises data center and AWS, in or out, is insanely long, and the goal is to minimize the data migration time. Let's start with AWS Snowball, which you can see on your screen now. AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. And what is a petabyte? A petabyte is 1,000 terabytes. The use of Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. Data transfer with Snowball is simple, fast, and secure, and it can be as little as one-fifth the cost of high-speed internet, so it is a good option on price as well.
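To see why shipping a device can beat the network, here is a back-of-the-envelope calculation for moving 80 TB (one of the Snowball capacities); the sustained 1 Gbps link speed is an assumed example figure, and decimal units are used throughout.

```python
# Back-of-the-envelope: how long does 80 TB take over a sustained
# 1 Gbps link? (Assumed link speed; 1 TB taken as 10^12 bytes.)
terabytes = 80
bits_total = terabytes * 1e12 * 8   # total bits to move
link_bps = 1e9                      # assumed 1 Gbps, fully utilized

seconds = bits_total / link_bps
days = seconds / 86400
print(round(days, 1))  # roughly 7.4 days, before any protocol overhead
```

A week of saturating a fast link, versus shipping a box: that is the trade-off Snowball targets.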
Now, the Snowball Edge. AWS Snowball Edge is a 100-terabyte data transfer device, which again you can see on the screen, with on-board storage and compute capabilities. You can use Snowball Edge to move large amounts of data into and out of AWS, and here you can also see a comparison between Snowball and Snowball Edge. Comparing just the storage capacity: Snowball comes in two options, 50 and 80 terabytes, while the Snowball Edge is 100 terabytes in size. The last one is AWS Snowmobile, and yes, it's actually a truck; here you have it on your screen. AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. One exabyte is 1,000 petabytes, which is 1 million terabytes, and that is really huge; you can transfer up to 100 petabytes per Snowmobile, securely, fast, and cost-effectively. Thank you and see you in the next section. In this section we will cover AWS networking and content delivery services: Amazon VPC, Amazon CloudFront, Amazon Route 53, AWS Direct Connect, and we will wrap up this section with Elastic Load Balancing. These services are covered in the networking and content delivery section of the AWS console. Let's start with Amazon Virtual Private Cloud, or VPC. Amazon Virtual Private Cloud, simply put Amazon VPC, lets you provision a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define. Think of an Amazon VPC as your virtual data center in the cloud, because it's just like that: your VPC, your private data center in the AWS cloud. You have complete control over your virtual networking environment: you can select your IP address range, create subnets, configure route tables, and so on. Now let's continue with CloudFront; we have also covered CloudFront in module 2, introduction to the AWS cloud. Amazon CloudFront is a global content delivery network, or CDN, service that accelerates delivery of your websites, APIs, video
content, and other web assets. Amazon CloudFront can be used to deliver your entire website, including dynamic, static, streaming, and interactive content, using a global network of edge locations and regional edge caches, but I think you already know that, right? And you pay only for the content you actually deliver through the CDN, which is great. Amazon Route 53, the DNS service from Amazon: Amazon Route 53 is a highly available and scalable cloud Domain Name System, or DNS, web service. AWS Route 53 routes end users to internet applications by translating human-readable names, such as www.example.com, into numeric IP addresses, such as 192.0.2.1; this is because computers use IP addresses to connect to each other. For example, when one computer tries to reach a web service such as example.com, it doesn't know how to connect to the name directly: it uses DNS to find out the IP address corresponding to example.com and then connects to that IP address, in this example 192.0.2.1. Now, AWS Direct Connect. With AWS Direct Connect you can establish a dedicated network connection from your on-premises data center to AWS; this way you can reduce your network costs and increase bandwidth throughput between on-premises locations and the AWS cloud. Think of the hybrid cloud model, where you have some resources in your on-premises data center and some in the AWS cloud, and you need a high-bandwidth, low-latency connection between the two: that is a great example of when you would go for the AWS Direct Connect service. The dedicated connection is established using industry-standard 802.1Q VLANs (virtual LANs). AWS Elastic Load Balancing: Elastic Load Balancing, or simply ELB, automatically distributes incoming application traffic across multiple EC2 instances. ELB can handle the varying load of your application traffic, in a single Availability Zone (AZ) or across multiple Availability Zones. It enables you to achieve greater levels of fault tolerance in your applications,
seamlessly providing the required amount of load balancing capacity needed to distribute application traffic. Thank you and see you in the next section. In this section we will cover AWS management tools services. We will start with Amazon CloudWatch, then continue with Amazon EC2 Systems Manager, CloudFormation, CloudTrail, AWS Config, OpsWorks, Service Catalog, Trusted Advisor, and Personal Health Dashboard, and we will wrap up this section with AWS Managed Services. Services in this section are under management and governance in the AWS console. Let's start with Amazon CloudWatch. Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. With Amazon CloudWatch you can collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your resources. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. I want you to think of Amazon CloudWatch as your monitoring tool, and this is for your exam as well. Amazon EC2 Systems Manager: Amazon EC2 Systems Manager is a management service with which you can automatically collect software inventory, apply operating system (OS) patches, create system images, and configure Windows and Linux. By the way, EC2 Systems Manager is simple to use: you simply select the EC2 instances you want to manage and define the management tasks you want to perform. AWS CloudFormation: AWS CloudFormation enables you to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. CloudFormation provisions and manages stacks of AWS resources based on templates you create to model your infrastructure architecture; with CloudFormation you can script and automate future and recurrent deployments. For example, suppose you usually run tasks like creating a VPC and then creating an Elastic Load Balancer to distribute traffic to several EC2
instances in different Availability Zones, and maybe these EC2 instances are WordPress websites that also have relational databases connected. Then, instead of going through each of the steps I have mentioned, you can create a CloudFormation template and literally script your configuration: you launch the template with CloudFormation, just wait, and your configuration, or let's say architecture, is built automatically by AWS. Now let's continue with AWS CloudTrail. With AWS CloudTrail you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides an event history of your AWS account activity, including actions taken through the Management Console, SDKs, CLI, and other AWS services. A key note: AWS CloudTrail equals logging. With this AWS service you will have logs available to see what's happening with the services in your account, while with CloudWatch you gain monitoring of your AWS account. The next service is AWS Config. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. With AWS Config you can monitor and record your AWS resource configurations and review changes in configurations and in the relationships between resources. AWS Config simplifies compliance auditing, which is also very important, along with security analysis, change management, and operational troubleshooting. Now some of the, let's say, miscellaneous AWS management tools; not that they're unimportant, but they matter less for the Cloud Practitioner exam, and you should still know what they are. AWS OpsWorks uses Chef to automate how servers are configured, deployed, and managed across your EC2 instances. AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. AWS
Managed Services automates common activities, such as change requests, monitoring, patch management, security, and backup services, and provides full lifecycle services to provision, run, and support your infrastructure. The last one is AWS Personal Health Dashboard: this service provides alerts and remediation guidance when AWS is experiencing events that might affect you. Thank you and see you in the next section. In this section we will cover AWS security, identity, and compliance services. We will start with AWS Identity and Access Management, or IAM, continue with Key Management Service, AWS Shield, AWS WAF, Amazon Cloud Directory, AWS Directory Service, Amazon Inspector, AWS Organizations, and AWS Certificate Manager, and we will wrap up this section with AWS CloudHSM. Services in this section are covered under the security, identity, and compliance section in the AWS console. Let's start now with AWS IAM. AWS Identity and Access Management enables you to securely control access to AWS services and resources for your users. With IAM you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. Using Identity and Access Management, you can control who can access which resources, when, and how. AWS Key Management Service: AWS KMS, or Key Management Service, is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS is integrated with different AWS services to help you protect data used or stored by those services; KMS is also integrated with CloudTrail in order to provide logs of which encryption keys are being used. Let's continue now with the security services. AWS Shield: AWS Shield is a managed distributed denial of service, or DDoS, protection service that safeguards web applications running on AWS. But maybe you're not aware of DoS and DDoS: a denial of service, or DoS, attack is a malicious attempt to overwhelm an online service and make it unusable, so
literally, to shut the service down; a distributed denial of service, or DDoS, attack occurs when multiple systems, or hackers, orchestrate a synchronized DoS attack against a single target. You can protect against these kinds of DDoS attacks with AWS Shield. AWS WAF: AWS WAF, the Web Application Firewall, is a web application firewall that helps protect your web applications from common web exploits, or attacks. You can use AWS WAF to create custom rules that block attack patterns such as SQL injection, cross-site scripting, and others; with AWS WAF you can also define which traffic to allow or block to your web application. Now, Amazon Cloud Directory: Amazon Cloud Directory enables you to build flexible, cloud-native directories for organizing hierarchies of data along multiple dimensions. Maybe you're wondering what directories are: directories store information about users, groups, and devices, and IT administrators use them to manage access to information and resources based on these attributes. Amazon Cloud Directory is used for cloud-scale deployments of this kind; AWS Directory Service, on the other hand, is meant for smaller deployments. You should use AWS Directory Service for Microsoft Active Directory, either Standard or Enterprise Edition, if you need an actual Microsoft Active Directory in the AWS cloud. Some of the features: support for Active Directory-aware workloads and for AWS applications and services such as Amazon WorkSpaces and Amazon QuickSight, as well as LDAP support for Linux applications. The takeaway for this service is that AWS Directory Service equals Microsoft AD. Some of the other miscellaneous services: Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. AWS Organizations allows you to create groups of AWS accounts for centralized management of multiple accounts. AWS Certificate Manager is a service that lets you provision, manage,
and deploy SSL/TLS certificates for use with AWS services, so this is a kind of certificate authority, a CA. AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS cloud. This is different from using KMS, the Key Management Service, where you rely on the AWS service to generate the encryption keys: with CloudHSM you can use your own encryption keys. You can securely generate, store, and manage the cryptographic keys used for data encryption such that they are accessible only by you. Thank you and see you in the next section.
AWS Shield is a managed Distributed Denial of Service, or DDoS, protection service that safeguards web applications running on AWS. But maybe you're not aware of DoS and DDoS: a Denial of Service, or DoS, attack is a malicious attempt to overwhelm an online service and make it unusable, so literally to shut down the service, while a Distributed Denial of Service, DDoS, attack occurs when multiple systems or hackers orchestrate a synchronized DoS attack against a single target. You can protect against these kinds of DDoS attacks with AWS Shield. AWS WAF: AWS WAF is a web application firewall that helps protect your web applications from common web exploits and attacks. You can use AWS WAF to create custom rules that block attack patterns such as SQL injection, cross-site scripting and others. With AWS WAF you can also define which traffic to allow or block to your web application. Now, Amazon Cloud Directory. Amazon Cloud Directory enables you to build flexible cloud-native directories for organizing hierarchies of data along multiple dimensions. Now maybe you're thinking: what are directories? Directories store information about users, groups and devices, and IT administrators use them to manage access to information and resources based on these attributes. Amazon Cloud Directory is used for cloud-scale deployments of this kind. On the other hand, AWS Directory Service is going to be used for smaller deployments, so you should use AWS Directory Service for Microsoft Active Directory, either Standard or Enterprise Edition, if you need an actual Microsoft Active Directory in the AWS cloud. Some of the features: support for Active Directory-aware workloads and for AWS applications and services such as Amazon WorkSpaces and Amazon QuickSight, plus LDAP support for Linux applications. The takeaway for this service is that AWS Directory Service equals Microsoft AD. Now some of the other miscellaneous services. Amazon Inspector is an automated security assessment service that helps improve the security
and compliance of applications deployed on AWS. AWS Organizations allows you to create groups of AWS accounts for multiple-account centralized management. AWS Certificate Manager is a service that lets you provision, manage and deploy SSL/TLS certificates for use with AWS services, so this is some kind of a certificate authority, a CA. AWS CloudHSM is a cloud-based hardware security module, HSM, that enables you to easily generate and use your own encryption keys on the AWS cloud, and this is different than using KMS, the Key Management Service, where you rely on the AWS service in order to generate the encryption keys. With CloudHSM you can use your own encryption keys: you can securely generate, store and manage the cryptographic keys used for data encryption such that they are accessible only by you. Thank you and see you in the next section. [Music] In this section we will cover AWS developer tools services. We'll go through CodeCommit, CodeBuild, CodeDeploy and CodePipeline, and we will wrap up this section with AWS X-Ray. Services covered in this section are under developer tools in the AWS console, so let's start now. AWS CodeCommit is a fully managed source control service that makes it easy for companies to host private Git repositories. AWS CodeBuild is a fully managed build service that compiles source code, runs tests and produces software packages that are ready to deploy. CodeDeploy is a service that automates code deployments to any instance, including EC2 instances and instances running on-premises. CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates: CodePipeline builds, tests and deploys your code every time there is a code change. And the last one, AWS X-Ray, helps developers analyze and debug distributed applications in production or under development. With X-Ray you can understand how your app is performing, so you can identify and troubleshoot the root cause of performance issues and errors.
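To make the CodePipeline idea concrete, here is a toy, purely local sketch of the stage-by-stage behavior. The stage names and the `run_pipeline` helper are hypothetical, not the CodePipeline API; the point is simply that stages run in order and the pipeline halts at the first failure:

```python
def run_pipeline(stages):
    """Toy CodePipeline-style runner: stages execute in order and the
    pipeline halts at the first failed stage."""
    results = []
    for name, action in stages:
        ok = action()
        results.append((name, "Succeeded" if ok else "Failed"))
        if not ok:
            break  # later stages never run once a stage fails
    return results

# Hypothetical three-stage pipeline: source pull, build, deploy.
stages = [
    ("Source", lambda: True),   # e.g. fetch from a CodeCommit repo
    ("Build",  lambda: True),   # e.g. CodeBuild compiles and runs tests
    ("Deploy", lambda: False),  # e.g. a CodeDeploy rollout that fails
]
print(run_pipeline(stages))
```

Running it shows all three stages reported, with the pipeline stopping at the failed Deploy stage — any stage placed after Deploy would never execute.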
Thank you and see you in the next section. In this section we will cover AWS messaging services. Specifically, we will go through Amazon Simple Queue Service, or SQS, Amazon Simple Notification Service, or SNS, and Amazon Simple Email Service, or SES. These services can be found in the AWS console under the application integration and customer engagement sections. So let's start now. Amazon Simple Queue Service, or SQS, is a fast, reliable, scalable, fully managed message queuing service. Using SQS you can send, store and receive messages between software components. Amazon Simple Notification Service, or SNS, is a fast, flexible, fully managed push notification service that lets you send individual messages or fan messages out to a large number of recipients. The last service is Amazon Simple Email Service. Amazon SES is a cost-effective email service: with Amazon SES you can send transactional email, marketing messages or any other type of high-quality content to your customers. You can also use Amazon SES to receive messages, call your custom code via an AWS Lambda function, or publish notifications to Amazon SNS. Thank you and see you in the next section. In this section we will go over AWS analytics services. Now, services covered in this section: Amazon Athena, Amazon Elastic MapReduce, or EMR, Amazon CloudSearch, Elasticsearch Service, Kinesis, Redshift, QuickSight, Data Pipeline, and we will wrap up this section with AWS Glue. Services covered in this section can be found in the AWS console under the analytics category. Let's start now with the different miscellaneous analytics services. First, Amazon Athena: it is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL syntax. You point to your data in S3, define the schema and start querying using standard SQL. Amazon Elastic MapReduce, or EMR, provides a managed Hadoop framework that makes it easy, fast and cost-effective to process vast amounts of data across dynamically scalable EC2 instances, so this refers to Big Data frameworks.
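One practical detail worth illustrating about the SQS service mentioned above: its SendMessageBatch API accepts at most 10 entries per call, so larger message lists have to be split client-side. A small plain-Python helper (the name `chunk_for_sqs` is invented; no AWS calls are made here):

```python
def chunk_for_sqs(messages, batch_size=10):
    """SQS SendMessageBatch accepts at most 10 entries per call, so a
    larger message list has to be split into batches of that size."""
    return [messages[i:i + batch_size] for i in range(0, len(messages), batch_size)]

batches = chunk_for_sqs([f"msg-{n}" for n in range(23)])
print([len(b) for b in batches])  # [10, 10, 3]
# With boto3 (not used here), each batch would then become the Entries
# argument of one sqs.send_message_batch(...) call.
```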
Amazon CloudSearch is a managed service in the AWS cloud that makes it simple and cost-effective to set up, manage and scale a search solution for your website or application. Amazon Elasticsearch Service makes it easy to deploy, operate and scale Elasticsearch for log analytics, full-text search, application monitoring and quite a bit more. Now let's start with Kinesis and its variants. Amazon Kinesis is a platform for streaming data on AWS, offering powerful services to make it easy to load and analyze streaming data. With Kinesis you simply collect, process and store data continuously. So let's start with the first one, Kinesis Firehose, which covers the collect part. Amazon Kinesis Firehose is the easiest way to load streaming data into AWS: you can capture, transform and load streaming data into Kinesis Analytics, S3, Redshift and Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools. The next of the variants, Kinesis Analytics, is the easiest way to process streaming data in real time with standard SQL, without having to learn new programming languages or processing frameworks. And the last one, for storing data, is Kinesis Streams: Amazon Kinesis Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources. Let's continue now with Amazon Redshift. Redshift is a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all your data using your existing business intelligence tools, so please retain that Amazon Redshift is simply a data warehousing service. Amazon QuickSight is a fast, cloud-powered business analytics service that makes it easy to build visualizations, perform ad hoc analysis and quickly get business insights from your data, and this is something that you will probably use for your C-level executives: with QuickSight you can create stunning visualizations and rich dashboards. Now, the last two services.
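As a quick aside on sizing Kinesis Streams: each shard has documented write limits of 1 MB/s and 1,000 records/s, so a rough shard count is the larger of the two requirements. The helper name and example workloads below are invented for illustration:

```python
import math

def shards_needed(records_per_sec, avg_record_kb):
    """Rough Kinesis Streams sizing: each shard ingests up to 1 MB/s
    and 1,000 records/s, so take the larger of the two requirements."""
    by_throughput = math.ceil(records_per_sec * avg_record_kb / 1024)
    by_record_count = math.ceil(records_per_sec / 1000)
    return max(1, by_throughput, by_record_count)

print(shards_needed(records_per_sec=2500, avg_record_kb=2))    # 5 shards (throughput-bound)
print(shards_needed(records_per_sec=4000, avg_record_kb=0.1))  # 4 shards (record-count-bound)
```

The first workload is limited by bytes per second (about 4.9 MB/s), the second by record count — which is why both constraints have to be checked.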
AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals, at your own convenience. The last one, AWS Glue, is a fully managed ETL service that makes it easy to move data between your data stores. Thank you and see you in the next section. In this section we will cover AWS artificial intelligence services: Amazon Lex, Amazon Polly, Amazon Rekognition, and we will wrap up this section with Amazon Machine Learning. Services covered in this section can be found in the AWS console under the machine learning category. So let's start with Amazon Lex. Amazon Lex is a service for building conversational interfaces into any application using voice and text. Lex brings automatic speech recognition, or ASR, for converting speech to text, and natural language understanding to recognize the intent of the text. Amazon Polly is a service that turns text into lifelike speech. Polly is an Amazon artificial intelligence, or AI, service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice, and this is really powerful. Amazon Rekognition is a service that makes it easy to add image analysis to your applications. With Rekognition you can detect objects, scenes and faces in images. Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology. Amazon ML provides visualization tools and wizards that guide you through the process of creating ML models without having to learn complex ML algorithms and technology. Thank you for your time and see you in the next section. In this section we will cover AWS mobile services: AWS Mobile Hub, Amazon Cognito, Amazon Pinpoint and, the last service, AWS Device Farm. These services can be found in the AWS console under the customer engagement and mobile categories. So let's start with AWS Mobile Hub. AWS Mobile Hub provides an integrated console experience that you can use to quickly create and
configure powerful mobile app backend features and integrate them into your mobile app. Amazon Cognito lets you easily add user sign-up and sign-in to your mobile and web applications. With Cognito you also have the option to authenticate users through social identity providers such as Facebook, Twitter or Amazon. Amazon Pinpoint makes it easy to run targeted campaigns to drive user engagement in mobile applications. Pinpoint helps you understand user behavior, define which users to target, determine which messages to send, schedule the time to deliver the messages, and then track the campaign results. The last service, AWS Device Farm, is an application testing service that lets you test and interact with your Android, iOS and web applications on many devices at once, or reproduce issues on a device in real time for testing purposes. Thank you for your time and see you in the next section. In this section we will cover AWS application services. We'll go through AWS Step Functions, Amazon API Gateway, Amazon Elastic Transcoder, and we will wrap up this section with Amazon Simple Workflow Service, or SWF. These services can be found in the AWS console under the networking and content delivery, application integration and media services categories. So let's start with Step Functions: you build applications from individual components that each perform a discrete function, which lets you scale and change applications quickly. Step Functions provides a graphical console to arrange and visualize the components of your application as a series of steps. Amazon API Gateway handles all the tasks involved in accepting and processing concurrent API calls, including traffic management, authorization and access control, monitoring and API version management. Elastic Transcoder is media transcoding in the cloud: you can convert, or transcode, media files from their source format into versions that will play on devices like smartphones, tablets and PCs. The last service: Amazon Simple Workflow Service, or SWF.
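The Step Functions idea of arranging components as a series of steps can be sketched with a minimal definition shaped like the Amazon States Language — "StartAt" plus a "States" map — together with a toy walker. The state names and the walker are invented for illustration; in reality Step Functions interprets such definitions server-side:

```python
# A minimal Amazon States Language-shaped definition: each state points
# at the "Next" one until a state marked "End" is reached.
definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder":   {"Type": "Task", "Next": "NotifyCustomer"},
        "NotifyCustomer": {"Type": "Task", "End": True},
    },
}

def execution_order(asl):
    """Walk a linear definition and return the order states would run in
    (a local stand-in for what Step Functions does in the cloud)."""
    order, name = [], asl["StartAt"]
    while True:
        order.append(name)
        state = asl["States"][name]
        if state.get("End"):
            return order
        name = state["Next"]

print(execution_order(definition))  # ['ProcessOrder', 'NotifyCustomer']
```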
SWF makes it easy to build applications that use Amazon's cloud to coordinate work across distributed components. SWF tracks the state and coordinates the tasks of your applications' background jobs. Thank you and see you in the next section. In this section we will go over the AWS business productivity services. The services are Amazon WorkDocs, Amazon WorkMail and Amazon Chime. These services can be found in the AWS console under the end user computing and business applications categories. With Amazon WorkDocs, users can comment on files, send them to others for feedback and upload new versions. Users can take advantage of these capabilities using the device of their choice, including PCs, Macs, tablets and phones. Another similar product, from Google for example, is Google Docs, where you can work in the cloud with Word, Excel, PowerPoint files and so on. Amazon WorkMail is a secure managed business email and calendar service with support for existing desktop and mobile email client applications, so this is similar to Gmail, for example. Now, the last service is Amazon Chime. Amazon Chime is a communications service for online meetings: you can use Amazon Chime for online meetings, video conferencing, calls, chat, and to share content inside and outside your organization, and probably the most popular alternative would be WebEx from Cisco. Thank you for your time and see you in the next section. In this section we will go over AWS desktop and app streaming services. Only two services are covered in this section: Amazon WorkSpaces and Amazon AppStream 2.0. You can find these services in the AWS console under the end user computing category. Amazon WorkSpaces is a fully managed, secure desktop computing service that runs on the AWS cloud. You provide your users access to the documents, applications and resources they need from any supported device, and this is basically the AWS VDI (virtual desktop infrastructure) service alternative. Amazon AppStream 2.0 is a fully managed streaming service that allows you to stream desktop applications from AWS
to any device running a web browser. Other examples in the industry would be Citrix or Microsoft App-V, Google app streaming and others as well. Thank you for your time and see you in the next section. In this section we will cover AWS Internet of Things, or simply IoT, services. We will cover the AWS IoT platform and the two services AWS Greengrass and AWS IoT Button. These services can be found in the AWS console under the Internet of Things category. AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. AWS Greengrass is software that lets you run local compute, messaging and data caching for connected devices in a secure way. With AWS Greengrass, connected devices can run Lambda functions, keep device data in sync and communicate with other devices securely. The AWS IoT Button is a programmable button, so it is actually a piece of hardware. This simple Wi-Fi device is easy to configure, and it's designed for developers to get started with AWS IoT, AWS Lambda, Amazon DynamoDB, Amazon SNS and many other services without writing device-specific code. You can literally configure button clicks to do whatever you want: to count or track items, to call someone, to start or stop something, let's say to open the garage door, to order pizza, and many other interesting things. Thank you and see you in the next section. In this section we go over the AWS game development services, and it is actually a single service, in the game development category in the AWS console; the name is Amazon GameLift. Amazon GameLift is a managed service for deploying, operating and scaling dedicated game servers for session-based multiplayer games. Amazon GameLift makes it easy to manage server infrastructure, scale capacity to lower latency and cost, match players into available game sessions, and defend from DDoS attacks. Thank you and see you in the next section. [Music] Module 3, AWS services high-level overview. So congrats for your progress on the
course: you have learned quite a lot in these first three modules. You created your free tier AWS account and installed useful software on your PC. You have gone through the introduction to AWS cloud computing and learned about regions, availability zones and edge locations, and also about the different AWS management interfaces. Before sitting the real Cloud Practitioner exam, please make sure you know the different AWS core services and what tasks these services were designed to solve. You should at least be prepared to answer different questions related to the following service categories: compute, storage, database, migration, networking, content delivery, management tools, security, identity and compliance, and messaging services. All slides are available for download in their respective sections if you want to use them while reviewing the AWS services covered in this module. In the next two modules we will start to deep dive on each of the core AWS services with a real hands-on and practical approach, so please be ready to use your AWS account extensively. With that said, please join me in our next module, module 4, AWS core services, the backbone, where the course gets really, really hands-on and a lot of fun. So let's get started. [Music] In this module we will clean up your AWS account, so that we make sure you will not be charged for any AWS services that you may have left running in there. So now let's switch over to the AWS Management Console and get started. Alright, I have logged into the AWS Management Console, and remember that almost everything that we have configured is related to our VPC, so let's check our VPC first: Services, and then I will just go down to networking and content delivery and click on VPC. Here we can see our VPC, our own created VPC, and also the default one. So let's go on and click on VPCs, and I will see here my AWS CCP VPC. Now if I just click on this one and go to Actions and say delete VPC, it says that it is unable to delete this VPC because
different resources are tied to this VPC, and I have here an EC2 instance and also some network interfaces defined. So what we need to do, actually, is go to AWS — this is the landing page — and navigate to EC2, Elastic Compute Cloud, and go into running instances, with this one only being selected. It may differ from what you currently have, but you get the point: you have to delete everything that it says there in the VPC. In my case I have only this one, so I'll just click on Actions and Instance State and go to Terminate, and I will click yes, terminate, in order to validate my choice. It said that I also have some interfaces there, so let's now go over anything that is covered in EC2. Instances: I have no other instances. Volumes: let's see if we have any others. This one is in use; let's try to go and detach the volume — yes, detach. It is not being detached, right, because it is related to our EC2 instance that is currently being terminated, so this one should disappear once the instance is terminated and deleted. Now, if I go to Elastic IPs, I don't have anything here, and this is great. Network interfaces: this is something that the VPC was complaining about, and I can see here in security groups this one, and for the VPC it says that it is the same VPC. Alright, so I will select both and say detach, and force detachment — yes, detach. And again, let's go over the load balancers; I believe we should have at least one here. We have this application load balancer; I will just click on Actions and then say delete, and yes, delete. Good. Any target groups? We have target group one, so this is selected; click on delete and then yes to confirm. Let's go down to launch configurations — do we have anything here? We should have, because we have played with Auto Scaling groups. I have this one, and the name was web server launch configuration, so with this selected I will just say delete launch configuration, and yes, delete. It cannot be deleted, because it is still attached to an Auto Scaling group.
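This dance — the VPC refusing to delete until the instance, the load balancer and the RDS-owned network interface are gone — is essentially a dependency-ordering problem: everything attached to a resource must be removed before the resource itself. A toy sketch (the resource names and the `teardown_order` helper are hypothetical; no AWS calls are made):

```python
def teardown_order(blockers):
    """Return a deletion order in which every resource is removed only
    after the resources still attached to it are gone."""
    order, done = [], set()

    def delete(resource):
        if resource in done:
            return
        for attached in blockers.get(resource, []):  # attached things go first
            delete(attached)
        done.add(resource)
        order.append(resource)

    for resource in blockers:
        delete(resource)
    return order

# Hypothetical account state mirroring this walk-through: the VPC is blocked
# by the instance, the load balancer, and a network interface that itself
# belongs to the RDS instance (deleting RDS frees the interface).
blockers = {
    "vpc": ["ec2-instance", "load-balancer", "network-interface"],
    "network-interface": ["rds-instance"],
}
print(teardown_order(blockers))
```

The VPC always comes out last, and the RDS instance is deleted before its network interface — the same order the console forces on you.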
So now let's go to the Auto Scaling group, and I will say Actions, and delete, and yes, delete. It is now being deleted, and let's see if we can now delete the launch configuration — and yes, and yes, so we are good here. Great. Systems Manager: no, we're fine. So let's go again and take a look in VPC and see if it's still complaining or not. If I go again into VPCs and try to delete my first one here, it is complaining about this specific network interface, so I'm taking the name, and I will just go again to the landing page of AWS, go to EC2 and search for network interfaces. And it is saying — yes, this is the one; security groups, description, RDS network interface. Good. So now we'll have to go to the RDS databases: Services, and then just go down here to RDS, under database, and let's see if we have any databases here. If I click on databases I see this one, and the status is available, so do not forget to also delete your RDS database. I am selecting the database, going to Actions, and then saying delete — typing 'delete me' — and delete. And let's refresh: now it is saying deleting. If we go back to EC2, let's get back to EC2 and go down to network interfaces. Here are the network interfaces, and let me just say detach, and force detachment, and yes, detach. No — so I'll just wait for a couple of minutes in order for the RDS database to be deleted, and afterwards I should be able to delete this network interface as well. Now let's do a refresh on the database; it still says deleting. Let me just try again and delete the network interface: going to EC2 and then going down to network interfaces, it says here that no interfaces are available. So deleting the RDS instance actually deleted everything here in EC2. Great. Now, what else have we used? Let's get to the AWS landing page: we have also used IAM, so identity and access
management, and you can just leave it here — it will not charge you anything — but it is also a good idea to delete everything here, so that you have a clean account when you start just using the platform. So now let's get back to the VPC itself. Going to the recently visited services, I should now see again the two VPCs. Clicking on the number, I will select my VPC and then go to Actions and delete VPC. 'Are you sure you want to delete this VPC?' — and I'm provided the identifier and also the name: 'deleting this VPC will also delete these objects associated with this VPC in this region.' Yes, I really want to delete everything; I want to leave my account clean. So click on delete VPC, and everything is being deleted by AWS on your behalf. The VPC was deleted — close — and the page should now be updated. So now, what's next? Well, two exam tests are ready in order to test your knowledge: 65 questions in each of the tests, and 100 minutes in order to complete each, and this is just like in the real exam. The exam tests should be used as a learning tool also, and this means that you can just go over and over the tests until you master everything that is included there. Please make sure that you take the time and complete each of the tests with no interruption — I'm referring to a quiet place, no phones, emails or anything that can distract you. Well, good luck. Thank you, and see you in the last module of the course right after you finish these two tests, where we will just book your exam. Good luck again. [Music] Alright, so it's now time to book your exam. The starting point: https://aws.amazon.com/training. So let's start right now. Alright, this is the landing page of aws.amazon.com/training; in order to continue just click on get started. Now, if you don't have an account on the AWS training page, you'll have to create one. I will click on certification, and now I'll just click on go to your account. Clicking on this one will just redirect me to certmetrics.com, and from here it's only a single step: schedule new
exam — and your exam is the first one on the list, the AWS Certified Cloud Practitioner exam. So: PSI or Pearson VUE, whichever of the two suits you best. Good luck with your exam! [Music] Welcome to this AWS Certified Cloud Practitioner training bootcamp. In this first section of the course I would like to provide you a brief overview of the course content, so that you can understand more about what topics are covered in each of the course modules. We will also briefly touch on recommended study guidelines, so we will talk about the AWS white papers. We will start with module 1, the course introduction, and talk about the AWS certifications currently available — an overview, and also the recommended path if you choose to go for multiple AWS certifications. Very important: we will talk next about the AWS Certified Cloud Practitioner official exam blueprint — so what is the format of the exam, and what are the expectations from AWS's side when you will see the real exam. We will also create an AWS free tier account for you, so that you can practice every single topic in the course, and we will wrap up module 1 by installing some software on your Mac or Windows operating system. We will continue with downloading the course slides and white papers: just after module 1 you can download all 400-plus course slides and also the white papers — the recommended reading, and I will show you right now why this is important — and also download useful files, code and AWS policies that we will use throughout the course. So now let's get back to the AWS white papers. Alright, I'm now on the AWS Certified Cloud Practitioner official exam page, and the important part is the exam resources: if you just click on download the exam guide you'll be provided this specific document, and again, scrolling down to exam preparation, you have here a list of AWS white papers that you should read before sitting the exam. Now, the first one, the overview of Amazon Web Services white paper: the recommended reading edition is
from April 2017, but if you click on the white paper link you'll be provided the December 2018 white paper. Why is this important? Because the newer white paper is 88 pages long, while if you take a look at the overview of Amazon Web Services April 2017 white paper, you'll see that this one is only 48 pages long. I just assume that you want to read only what is needed for the exam and not more than that, so that your time is used really efficiently. So my advice is that you go into the download section — this specific section just between module 1 and module 2 in the course — and download the white papers as they are highlighted in the official exam guide. Alright, so now let's continue. Next is module 2, AWS cloud introduction: we will talk about cloud computing — what is cloud computing — and also the advantages of AWS cloud computing, and we will move next to the AWS global infrastructure: regions, availability zones and edge locations. We will wrap up module 2 after talking about the AWS management interfaces — so really, how can we interact with the AWS cloud platform through the Management Console, the CLI, SDKs — but anyway, we will talk more in that specific module. Module 3, AWS services high-level overview: this module covers a high-level overview of the most important AWS services, and this is based on the AWS white paper overview of Amazon Web Services, April 2017, as it is presented in the recommended reading list. It may seem dry — it is PowerPoint based — but it is what it is, and it is important for the Cloud Practitioner exam. It is easy to consume, though: sections vary in length between one and up to around five minutes in general, so you may just mix sections in this module with other sections in the course if you think it would be better for you, but please do not skip it — it is important for the exam. Content is covered by category as it is available in the AWS Management Console, so some examples now: compute services, storage services, database services, networking and content
delivery, and more than that, as you'll see in this module, module 3. Now continuing on with module 4, AWS core services, the backbone. In this module we literally deep dive into AWS, and I believe you'll like it really, really much. For every topic covered we will first lay down the foundation from a theoretical perspective, and then move on to the AWS Management Console or the CLI, the command line interface, for hands-on labs. We'll first create a billing alarm in order to monitor our potential spending in AWS, and continue with AWS core services like the following: Identity and Access Management, or IAM, Virtual Private Cloud, or VPC, Elastic Compute Cloud, EC2, security groups, Elastic Block Store, or EBS, and we wrap up module 4, the core services module, with Simple Storage Service, or Amazon S3. Module 5, AWS key services, follows the same approach as module 4: first understanding the technology — what it does, what problem it solves and why it was even invented — and then, of course, we move on to hands-on labs. AWS key services covered in this module: Route 53, AWS CloudFront, application load balancers and also auto scaling, Relational Database Service, or RDS, AWS Lambda, AWS Elastic Beanstalk, CloudFormation, and also Simple Notification Service, or SNS, and CloudWatch. In module 4 and module 5 we also cover information related to billing and pricing, and module 6 will just wrap up the discussion around billing and pricing: we will start with the fundamentals of pricing, cost optimization through EC2 reservations, the AWS cost calculators, AWS Trusted Advisor, and we will also talk about the AWS support plans, as indeed these types of questions appear in the exam. We'll move on to module 7, security in AWS. It is going to be a pretty short module, but it covers important information for the exam: we will start with an introduction to AWS security and talk also about AWS WAF, the web application firewall, Shield, and also the Firewall Manager. We will wrap up this module with Amazon Inspector. Module 8, AWS
architecture best practices: this module is based on the AWS white paper architecting for the cloud — AWS best practices, and I'm referring to the February 2016 edition. This module covers AWS best practices from the architecture perspective, so topics covered in this module include, for example, design principles related to scalability, automation, loose coupling and more than that. Module 9, AWS account cleanup: in this section we will clean up your AWS account. With AWS you pay as you use the service; when a service is not being used, you pay nothing, so we will delete resources and stop every running AWS service in your AWS account so that we will not get charged for anything. Module 10, final exam tests: it is time to test your knowledge and potentially learn even more. Two full practice tests in this module, 65 questions in each of the tests, 100 minutes in order to complete each, and an 80% passing score — anyway, in the real exam you'll have something like a 70%-ish passing score — and I'd like to wish you good luck. We will wrap up the course after we cover exam booking in module 11, so we'll talk about what portal to use, where to authenticate and do what, what the exam code is, and what the options are — you will see that you have two options, the PSI or Pearson VUE exam test centers. So thank you, and see you in the next section. [Music] Thank you. [Music]
Info
Channel: Cloud Guru
Views: 2,246
Keywords: aws certified cloud practitioner, aws cloud practitioner, cloud practitioner, aws ccp, AWS Certified Cloud Practitioner Exam, awscloudpractitioner, amazonwebservices, aws, aws certified cloud practitioner exam questions, aws certified cloud practitioner exam, aws cloud practitioner exam, aws cloud practitioner certification, aws cloud practitioner practice exam, aws practitioner, aws cloud certification, aws training and certification, aws exam questions, aws practice exams
Id: mO2JZRzibuI
Length: 495min 19sec (29719 seconds)
Published: Mon Oct 17 2022