AWS Certified Cloud Practitioner 2021 FULL COURSE for Beginners (2019 Course Updated)

Captions
hi everybody my name is qasim shah and i'm the instructor for this course now as an aws certified professional i have been involved in helping organizations digitize and realize the benefits of the cloud for over 14 years now now my goal in this course is to impart the knowledge and the techniques that i have gained throughout my experience and through becoming aws certified to make sure that you guys get comfortable and have enough knowledge to sit and pass the aws certified cloud practitioners exam now i have designed this course in a way to not only familiarize you with the concepts of aws but also give you the hands-on techniques that you require to configure different options that that are in aws so let me walk you through the agenda of what you can expect to get out of this course so what we're going to do is going to start off with a broad overview about all of the services that aws offers now there are lots of services that are within the aws framework but not all of them are going to be covered in the cloud practitioners exam so what i have done is i've given you an overview about all of the services but i have focused on those few that are going to be covered in the exam in more detail so we're going to do is we're going to begin off by learning how we can actually create an account in aws then we'll familiarize ourselves with the main management console that aws has to see what options there are then we'll dive deep into the compute section of aws and learn how we can configure and launch ec2 instances which are the basic framework of the aws environment we'll look at the different storage options that aws offer such as the s3 buckets such as elastic block storage and so on then we'll continue on to look at the different database options that aws offer such as the relational database service the dynamodb and the aurora services that are specific to aws then we're going to continue on and look at how aws has made itself into a highly available and fault tolerant environment so we'll look at different options such as a cloud front such as cloud watch such as cloudtrail and don't worry if these terms are foreign to you i will make sure that they are explained in detail so you guys can get comfortable in working in an aws environment now i have put a lot of effort in this course to make sure that i impart the knowledge and the techniques that i have learned to become certified to you while you guys are going through the course if there's any question that you guys have there's a q a section in this course website please post the questions in there i'll be more than happy to clarify any issues or any concerns that you guys might have in any of the lessons again i am super excited for you guys to be part of this course so without further ado let's get started hi everybody and thank you for taking this course to get yourself certified as a aws certified cloud practitioner in 2019 so i want to take you through what we'll be covering throughout the rest of this course to get you guys familiarized with the different topics that you can expect now i have tried to cover all the topics that are presented in the exam now the exam was changed back in february of 2018 to a new format and i've covered all of the topics that are in the new exam because the old exam was retired in august of this year so we're going to begin off by talking about computing we'll look at what is cloud computing and we'll cover all of the services at a high level that aws offers now the exam does cover your overall 
knowledge of aws and the services that are offered by it so we're gonna look at all of them but we'll concentrate specifically on the ones that you'll be tested on in a little bit more detail so the different things that we'll look at in terms of services that are offered by aws we'll look at storage so we'll look at the different storage options that are in aws specifically the s3 buckets the ebs and efs and don't worry i'll go through them in detail with you throughout the rest of this course we'll look at how aws handles users and accounts in terms of users groups and the policies that can be associated with both of them then we'll look at the main databases that are offered by aws we'll look at the relational database service fully managed service that aws offers and we'll look at aurora database which is unique to amazon and a very popular dynamodb if you have been involved with aws i'm sure you've heard of the name dynamodb we'll look at how aws has become a high availability platform and we'll look at the technologies that make it into a high availability platform we'll go on to look at virtual private clouds and they are covered in a little bit more detail in the cloud practitioners exam so i have concentrated on them a little bit more and then finally we'll look at the tools that are available for us to manage our cloud environment through aws now for most of these courses i have done a hands-on tutorial after the lectures to get you guys familiarized with aws and how we can navigate through the management console how we can set up storage accounts and users and databases and virtual private clouds so this is not a lecture course this is a hands-on course so you will be familiarized with all of the technologies that you will be tested on and also get a hands-on feel for what can be done in a real world scenarios now the test is a multiple choice exam you guys can see that it covers four main domains cloud concepts security technology and billing and pricing and then you guys can see the percentages of each of those domains and i've covered all of these domains in the topics of this course additionally at the end i have created a quiz and i've tried to make the questions as real as possible and as they appear in the actual exam to get you guys comfortable when you actually sit in the exam with all the topics that are covered now i highly suggest you guys go through all of these lectures in detail and thoroughly before doing the exam and treat the exam as you would when you're sitting in the actual test center additionally i have uploaded several resources for you several white papers and i do suggest that you should go through all of these white papers read them thoroughly before studying for the exam because there are some details in the white papers that could possibly appear on the exam so please prepare yourselves thoroughly in terms of reading the white papers going through these lectures and also going through the exam questions that appear at the end of this course because we should enjoy learning so thank you again for taking this course i hope you guys enjoy it and i hope you guys find it fruitful and it aids you in getting certified in your cloud practitioner in 2019 hi everybody and welcome to the first lesson in the aws certified cloud practitioners course in this lesson we're going to be looking at an overview of what is cloud computing so cloud computing essentially is an on-demand delivery of compute power database storage applications and other it resources through 
a cloud services platform via the internet with pay-as-you-go pricing that's offered by amazon now whether you're running applications that share photos to millions of mobile users or you're supporting critical operations of your business a cloud services platform provides rapid access to flexible and low-cost i.t resources with cloud computing you don't need to make large upfront investments in hardware and spend a lot of money on heavy lifting of managing that hardware instead you can provision exactly the right type and size of computing resources you need to power your newest and brightest idea you can access as many resources as you need almost instantly and only pay for what you use now cloud computing provides a simple way to access servers storage databases and a broad set of application services over the internet a cloud services platform such as the amazon web services owns and maintains the network connected hardware required for these application services while you provision and you use what you need via a web application so essentially there are six main advantages of the amazon cloud computing first one is trade capital expenses for variable expenses now instead of having to invest heavily in data centers and servers before you know how you're going to use them you can pay only when you consume competing resources and pay only for how much you consume second is a benefit from massive economies of scale now by using cloud computing you can achieve a lower variable cost than you can get on your own because usage from hundreds of thousands of customers is aggregated in the cloud providers such as amazon web services can achieve higher economies of scale which again translates into lower costs for you third is you can stop guessing at capacity eliminate guessing on your infrastructure capacity needs when you make capacity decisions prior to deploying an application you often end up either sitting on expensive idle resources or dealing with limited capacity with cloud computing these problems go away you can access as much or as little capacity as you need and scale up and down as required with only a few minutes notice next one is increased speed and agility in a cloud computing environment new it resources are only a click away which means you reduce the time to make those resources available to your developers from weeks to just minutes this results in a dramatic increase in agility for the organization since the cost and time it takes to experiment and develop is significantly lower the fifth benefit is you can stop spending money running and maintaining data centers instead you can focus on projects that differentiate your business not the infrastructure again this allows you to put your customers first and lastly go global in a matter of minutes you can easily deploy your application in multiple regions around the globe within just a few clicks this means you can provide lower latency and a better experience for customers at a minimal cost so there are various types of cloud computing three main ones that we're going to look at and that aws looks at especially in this exam now cloud computing provides developers and i t departments with the ability to focus on what matters most and avoid undifferentiated work such as procurement maintenance and capacity planning as cloud computing has grown in popularity over the last few years several different models and deployment strategies have emerged to help meet specific needs for different users now each type of cloud service and 
deployment method provides you with different levels of control flexibility and management and it's very important to get an understanding of what the difference is in these environments so the very first thing that you see on the left hand side which is referred to as enterprise it or the legacy it is where you manage everything on your own from the hardware all the way up to the applications then as you move on first you have infrastructure as a service or iaas now that contains the basic building blocks for cloud it and typically provides access to networking features compute whether virtual or on dedicated hardware and data storage space iaas provides you with the highest level of flexibility and management control over your it resources and is most similar to existing it resources that many departments and developers are familiar with today moving one notch up we have platform as a service or paas now that removes the need for your organization to manage the underlying infrastructure which is usually the hardware and the os and allows you to focus on deployment and management of your applications this helps you be more efficient as you don't need to worry about resource procurement capacity planning software maintenance patching and so on and the final level we have is software as a service or saas which a lot of you are probably the most familiar with because it provides you with a complete product that is run and managed by the service provider in most cases people referring to software as a service are referring to end user applications with a saas offering you do not have to think about how the service is maintained or how the underlying infrastructure is managed you only need to think about how you will use that particular piece of software again a common example of a saas application is web-based email which most of us use either hotmail or gmail so these are the three main computing platforms that we need to keep in mind especially for the exam which are iaas paas and saas now looking at the three different deployment models that come with the cloud computing environment first we have obviously the cloud and a cloud-based application is fully deployed in the cloud and all parts of the application run in the cloud applications in the cloud have either been created in the cloud or have been migrated from an existing infrastructure to take advantage of the benefits of cloud computing which we just discussed then we have on the other side of the scale the on-prem or on-premises deployment here the deployment of resources is on-premises using virtualization and resource management tools and is sometimes called a private cloud on-premises deployment doesn't provide many of the benefits of cloud computing but it is sometimes sought for its ability to provide dedicated resources or for compliance reasons and then in the middle we have kind of a best of both worlds which is referred to as the hybrid or the hybrid cloud and that's a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud the most common method of hybrid deployment is between the cloud and existing on-prem infrastructure to extend and grow an organization's infrastructure into the cloud while connecting cloud resources to internal systems now looking briefly at the global infrastructure that amazon web services offers now aws serves over a million active customers in more than 190 countries they're steadily expanding their global infrastructure to help customers achieve low
latency and high throughput now for the exam the two main things that you'll have to keep in mind are how the cloud infrastructure is broken up into regions and availability zones a region is a physical location in the world where they have multiple availability zones and availability zones also referred to as azs consist of one or more discrete data centers each with redundant power networking and connectivity housed in separate facilities and these availability zones offer the ability to operate production applications and databases that are more highly available fault tolerant and scalable now the aws cloud operates more than 57 availability zones within 19 geographic locations around the world now for the cloud practitioners exam we do not need to know every single availability zone or every single region just know the difference between a region and an availability zone whereas the region like i mentioned is a geographic location and an availability zone comes within a region and each region would have a minimum of two availability zones so here you guys see all of the services that are offered by amazon web services don't be alarmed by the number of services that you guys see on the screen for the cloud practitioners exam you will not be tested on each one of these the ones that we'll be looking at specifically for this exam are the ones that are in red which is compute storage database management security and identity and networking but i just wanted to show you guys all the different services that are offered by amazon in terms of cloud computing and again it allows you to do an a to z of your it environment using their services now in the next lesson i will actually go on and log into the management console so you guys get a little bit more view and information about each one of these but throughout this course we will be specifically concentrating on the ones that i've highlighted in red and the ones that are in blue are more for some of the advanced exams such as the solution architect or the devops we'll touch base lightly on what they are because the practitioner's exam does cover some of these in terms of a high level overview that you need to know of what is within analytics or what is within machine learning but they do not ask you specific details what they do ask you specific details on are the ones that we'll be looking at throughout the rest of this course so let's go ahead and log into the management console and see all of the different services that are offered by amazon welcome back super excited in this lesson i'm going to talk about the new aws management console and some of the areas where you may find some additional services and some additional tools so as soon as you log into your management console you will notice several sets of services so under all services there are several new ones because what amazon does is it keeps adding additional services and additional tools for example let's scroll down to a basic one such as security identity and compliance now in the previous aws management console version you may not have been able to find several of these services here you'll notice services such as amazon macie for example or cloudhsm all of these are newly added or aws signer which is one of the newest services that you can actually use similarly under management and governance you'll see a whole set of new services for example the personal health dashboard aws proton amazon managed grafana amazon managed service for prometheus and so on as you move within the aws
ecosystem you will notice that in the coming months or so you will see even more services and tools for it admins and security administrators so in this lesson just quickly scroll through all of these and again there are so many of them amazon continuously keeps adding all of these services under machine learning you'll have amazon codeguru which is excellent because it helps you review your code and reduce cost so if you have redundant code go ahead and take a look at that and there are several other services as well so let's not be overwhelmed by all of these tools and services as you work through the ecosystem and based on your specific niche or skill set you will only work with certain services so one of the other things in the management dashboard if i were to scroll up you'll notice on the top here there's a search bar which is available for you to search for services features marketplace products for example if you need to deploy a vm or some pre-configured amazon images you could simply navigate to the marketplace let me give a more specific example let's say if you were to deploy a wordpress website fully configured you don't have to do any configuration you could simply navigate to the marketplace and type wordpress it's going to give you wordpress images with best practices and plugins and this is really what i mean by pre-configured images a fully configured wordpress website all you need to do is just pick one up from the marketplace sometimes they're free but sometimes you may incur certain charges as well so just navigate through and take a look at these and lastly there's an icon called the cloud shell and this is fairly helpful something new within the aws management console so if i were to click on the cloud shell it opens up a new tab and what it does is it launches cloudshell which is simply a browser-based shell that gives me command line access to aws resources so if i need to let's say create a new bucket i could do so right from the cloud shell so this is really helpful if you need to do something fairly quickly if i were to close this and here's the cloud shell so for example if i were to execute a command such as aws it's going to give me the version okay similarly i can execute other commands such as aws configure and be able to provide access keys to a profile user in this way it's really helpful whether you're in us east northern virginia or elsewhere you can change the region and you can use other commands as well under actions you'll be able to open up a new tab split the terminal into rows or columns and so on or restart the aws cloudshell
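as a rough sketch of the commands just mentioned, here is what those cloud shell steps look like with the aws cli (the bucket name below is just a placeholder and cloudshell already picks up your console credentials, so aws configure is only needed if you want a separate named profile):

    aws --version                           # check which version of the aws cli cloudshell is running
    aws configure list                      # show the credentials and region currently in use
    aws configure --profile demo            # optional, set up a separate named profile with its own access keys
    aws s3 mb s3://my-example-bucket-12345  # create a bucket, the name is a placeholder and must be globally unique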
so let's switch back to our management console so once again just quickly going through some of the newer areas within the management console so go ahead explore all of these services and tools these are helpful if you're pursuing a certification or otherwise if you're working within let's say the devops pipelines machine learning or whichever area you're focusing on so i hope this helps if you have any questions post in the discussion area and with this let's move to the next lesson hi everybody and welcome to this first tutorial on working with amazon aws so the first thing that we want to do is go ahead and get our account created so we want to navigate to this url aws.amazon.com and on the top right we have the option to sign into our console now one good thing about aws is it does allow you free tier access where it gives you access to most of their services without any charges but again you are limited to the usage of those resources but there is no upfront cost associated with it so we're going to go ahead and fill out this information now please keep in mind that one thing that you will need to provide is a credit or debit card number now even though this is a free tier account sometimes some resources are chargeable so in order to create any account on aws there needs to be a credit or debit card number backed with it but not to worry in the next lesson i'm going to show you guys how you can set up billing alerts and we can specify a small threshold let's say 50 cents so whenever we start using resources that are chargeable we'll get notified right away and we can go ahead and stop those but unfortunately in order to create the account we do need a credit card number so as soon as it verifies the telephone number that you provided again it's going to give you a short call with a code as soon as you verify the code your identity and the phone number is verified and here are the different plans that are offered by aws we want to go ahead and stick with the free plan and there we go it's a pretty simple setup in order to get our account created and activated i'm going to go ahead and sign into the console all right so this is the main dashboard that is offered by amazon web services as you guys can see it has some basic services your recently visited services it gives us an option for resources exploring different services that aws has to offer and again it gives us an easy build a solution area where we can go ahead and start either launching a vm or building an app so depending on what we want to do they have different sections set up so we can dive right in and lastly towards the bottom it has some tutorials let's say if you want to learn how to build websites or databases or big data it has some very nice tutorials both video and text based that we can go ahead and watch and learn a little bit more about these different services that amazon offers within their aws framework up here on the top right you guys can see it says london right now if you click on the drop down arrow these are all of the regions that are within aws and if you guys remember from the previous lesson i showed you guys the picture of the globe where it showed all of the different regions that aws has so this is where we can go ahead and change regions if we want northern virginia ohio or we want an apac or eu or south america sao paulo which is a fairly new one now they do have some new ones coming in in 2018 i know bahrain is one of them since i'm based in the uae i know they are working on getting bahrain up and running and they are expected to get that up beginning of 2019 so this is where we can go ahead and change our region and specify which region we are in if we go ahead and click on the services this is where we can see all of the different services that are offered by amazon web services so if you guys remember that slide that had all of the services offered by aws this is basically it i showed you guys the headings but each heading has all of the granular details of all of the different services that come within management and governance or storage or compute so this is one view we can use or if you click on the top right the a to z view it also gives it in alphabetical order i always like to keep it grouped just so i can differentiate what different services i'm trying to use or there's always a search function if you want to search for a specific service we can also do that here lastly one thing i want to show you is if you guys see the pin up here now if there are certain services that you are going to be using or that you do use on a regular basis you can go ahead and pin them on your taskbar up on top so let's say if i use cloudwatch quite often or lambda quite often or codedeploy quite often i can go ahead and pin those up here so i have easy access to them rather than going into services and trying to find them or navigate to them every time they'll be easily and readily available on top for us so that's basically it again creating an account is a fairly simple and straightforward process in aws and next we're going to look at how we can go ahead and create the billing alarm so we get notified if we start getting charged for some of the services that we will be using throughout the rest of this course
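that billing alarm walkthrough is done in the console, but as a rough preview the same alarm can also be created from the command line once billing alerts are enabled in the account billing preferences; the sketch below assumes an sns topic already exists for the notification (the topic arn and account id are placeholders, the threshold is just an example, and billing metrics only live in the us-east-1 region):

    # alarm that fires when estimated charges go above 0.50 usd
    aws cloudwatch put-metric-alarm \
      --region us-east-1 \
      --alarm-name billing-over-50-cents \
      --namespace "AWS/Billing" \
      --metric-name EstimatedCharges \
      --dimensions Name=Currency,Value=USD \
      --statistic Maximum \
      --period 21600 \
      --evaluation-periods 1 \
      --threshold 0.5 \
      --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:sns:us-east-1:111122223333:billing-alerts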
hi everybody and welcome to this lesson on the amazon web services computing platform this is one of the main offerings that are utilized by most people when they're using amazon web services and the cloud practitioners exam does have a fair amount of questions on this so we'll cover a little bit of the finer details of what pertains to the aws computing platform so what makes up computing within aws as you guys can see there are a bunch of things that are within the computing platform which is offered by amazon web services now don't be alarmed you do not need to know each and every single one of these in detail for the cloud practitioners exam but it is good to know what all offerings are offered within the computing platform and we'll look specifically at a few of them that are concentrated on or that make up the core of the computing so the first one is ec2 which is also known as the elastic compute cloud there's ecr which is the ec2 container registry there's ecs which is the container service along with auto scaling there's eks which is a managed service for kubernetes and some of you might be familiar with docker kubernetes is an orchestration system for containers such as docker then we have beanstalk lambda lightsail and serverless applications so the ones that we'll be concentrating on in this lecture are ec2 we'll look at beanstalk we'll look at lambda we'll look at lightsail and then we'll briefly look at ecr and ecs but the main service offered by compute is something referred to as ec2 or an ec2 instance now ec2 is a web service that provides secure resizable compute capacity in the cloud it's designed to make web scale computing easier for developers so in layman's terms it's basically a computer or a server that you are using in the cloud now there are many benefits that are offered by ec2 first and foremost is the elastic web scale computing it enables you to increase or decrease capacity within minutes not hours or days you can commission 100 or even thousands of server instances simultaneously and when we go into the lab you guys can see how easy it is to commission an ec2 instance another benefit is that it's completely controlled you have complete control over your instance you have root access to each one and you can interact with them as you would any machine you can stop your instance while retaining the data on your boot partition and then subsequently restart the same instance using web service apis you can choose among multiple instance types operating systems and software packages ec2 allows you to select the memory configuration cpu instance storage and boot partition size that are optimal for your choice of os and application
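if you want to compare the cpu and memory of a couple of instance types without opening the console, a quick sketch with the aws cli looks like this (the instance types listed are just examples):

    aws ec2 describe-instance-types \
      --instance-types t2.micro t3.micro m5.large \
      --query "InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]" \
      --output table    # prints instance type, default vcpus and memory in mib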
and in the lab we'll look at all of the different options that are offered by amazon in terms of booting up an ec2 instance it's also fully integrated with most aws services such as the simple storage service the relational database service the virtual private cloud and again all of these we will look at in more detail later on in the course ec2 offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned the service runs within amazon's proven network infrastructure and data centers now the sla commitment for ec2 is 99.95 percent availability for each region and that's a highly available service additionally it's extremely secure ec2 works in conjunction with the amazon vpc to provide security and robust networking functionality for all of your compute services you can utilize security groups you can utilize policies you can utilize encryption so there are many different ways that you can implement security within your ec2 instances and lastly it is extremely inexpensive as compared to hosting something on-prem you pay a very low rate for the compute capacity you actually consume now there are three different types of purchasing options for an ec2 instance you have on demand which is you pay for compute capacity by the hour or by the second with no long term commitments you can increase or decrease your compute capacity depending on the demands of your application and only pay the specified hourly rate for the instances you use the use of on-demand instances frees you from the costs and complexities of planning purchasing and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs another option is a reserved instance which provides you with a significant discount sometimes up to 75 percent compared to on-demand instance pricing you have the flexibility to change families operating system types and tenancies while benefiting from reserved instance pricing when you use convertible reserved instances so with reserved instances you basically reserve machines that you'll be using and you are committing to a one or three year term lastly are spot instances now they're quite unique to amazon and they allow you to bid on spare ec2 computing capacity now since spot instances are often available at a discount compared to on-demand pricing you can significantly reduce the cost of running your apps and grow your application's compute capacity and throughput for the same budget so for spot instances you basically bid on a price that you want to pay for using an ec2 instance and when that price point is met you will automatically be commissioned ec2 instances and when the price goes above your spot bid they will be decommissioned automatically so most of the time companies such as big pharma organizations that do a lot of analytics and number crunching they use spot instances at 12 am or 1 am when most companies are not utilizing amazon services and they use that time to crunch the numbers or do analytics and they get ec2 instances at a very cheap price as compared to doing the same thing during the day or on-prem so depending on your business case or what type of business you are in or what type of computing you are trying to do using spot pricing can significantly decrease your cost
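as a small illustration of how you might check current spot prices before bidding, here is a rough aws cli sketch (the instance type and region are just examples):

    aws ec2 describe-spot-price-history \
      --region eu-west-2 \
      --instance-types t3.micro \
      --product-descriptions "Linux/UNIX" \
      --max-items 5       # show the five most recent spot price records for this instance type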
now this just shows you all of the different options that are within the compute services like i had mentioned in the beginning the most common one is an amazon ec2 instance which i just discussed the benefits of and which is covered the most in the cloud practitioners exam and also in the solution architects exam and here you guys can see all of the different options that are within an amazon ec2 instance i've discussed most of them for you just a few other ones to keep in mind the first is an ami which is an amazon machine image that's basically a pre-built image that you guys can purchase when you're commissioning an ec2 instance for example you can use an image of a windows server or a linux server with pre-built applications and pre-built configurations so they have an entire marketplace of amis that you can use and pick from for pre-built images this significantly reduces your cost and time to get up and running another compute service is something referred to as elastic beanstalk now that's an easy to use service for deploying and scaling web applications and services developed with java .net php node.js python and so on on familiar servers such as apache passenger and iis with this service you can simply upload your code and elastic beanstalk automatically handles the deployment from capacity provisioning to load balancing and auto scaling to application health monitoring at the same time you retain full control over the amazon web service resources powering your application and can access the underlying resources at any time there's also the aws lambda service which lets you run code without provisioning or managing servers you pay only for the compute time you consume there's no charge when your code is not running with lambda you can run code for virtually any type of application or backend service all with zero administration just upload your code and lambda takes care of everything required to run and scale your code with high availability you can set up your code to automatically trigger from other aws services or you can call it directly from any web or mobile app so let's go ahead and take a look at all of these services on the amazon web services console and let's go ahead and commission our first ec2 instance hi everybody and welcome to this lesson on launching our first ec2 instance so i've already logged into our console which i showed you in the last lesson how you can go ahead and get your account created for free so once we're logged in we want to go ahead and click on services and look in the compute section
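the launch that we're about to walk through in the console wizard can also be done with a single cli call, shown here only as a rough sketch (the ami id, key pair name and tag values are placeholders you would replace with your own):

    # image id, key pair name and tag values below are placeholders, pick a free tier eligible ami for real use
    aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type t2.micro \
      --count 1 \
      --key-name my-key-pair \
      --tag-specifications 'ResourceType=instance,Tags=[{Key=Department,Value=Finance}]'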
instance and once we select launcher instance it takes us to a pretty robust wizard in getting our instance up and running so the first step is choosing an ami or amazon machine image so these are basically images that amazon has created and are available for us to use that have pre-built os and applications on top so there are a host of images that are pre-built linux based red hat based and towards the bottom also windows based windows server based and so on so we can go ahead and select which image suits best our use case for example if you want a windows server that has microsoft sql server pre-installed on it we can go ahead and select these so there are a host of images that are already pre-built by amazon for us to use also on here on the left hand side there's an option for my amis now if your organization has created images they would show up here so if there are any specific machine images that are created for your organization that are pre that are pre-populated with some specific apps or configurations they will show up here so you can go ahead and select those and launch that instance and most organizations have their own customized amis that they've uploaded on aws that they use on a regular basis and then there's also the aws marketplace this is where you can go ahead and buy any software that runs on the aws cloud from various vendors such as sap microsoft and so on so we can go ahead and find and launch our software directly within the ec2 from this marketplace whether it's juniper barracuda trend micro so i suggest you guys go ahead and just browse through this section and see what all different softwares are available in the marketplace for you to use and they're also broken up by specific industry sections so if you're in the healthcare or financial services you can find those specific software's also here also so it's a very very robust system that aws has created in terms of launching ec2 instances so what i'm going to do is just go ahead and launch a basic linux system and here are all of the different types of ec2 instances that are available with this ami it gives you some basic information regarding the hardware of these instances for example the cpus the memory the storage the performance and whether it supports ipv6 or not so there's a host of different options that are available now for the cloud practitioners exams you do not need to know these specific instances that are available for example what t2 or t3 stand for or what m or c stand for they're just different hardware platforms that amazon offers based on specific use cases some are optimized for cpu usage some are optimized for memory usage some are optimized for gpu usage so it just depends on what type of applications and what type of use you will be doing for this ec2 instance that will determine what type of hardware will be required now for the purposes of this demonstration and since we are in a free tier i'm going to go ahead and stick with this free tier eligible one now just keep in mind that for the first 12 months following today's date is when i signed up for the account we get up to 750 hours of micro instances each month if so if we go above that that's when they start charging us for using it and they are charged on an hourly basis so just keep that in mind and again that's why setting up those billing alarms is very important especially if you're going to be testing and playing around within the aws platform it's very good and smart to set those billing alarms because sometimes when you 
get too involved you lose track of time and how much you've actually used these instances for during your testing phase so after that on the bottom we have several options i can either review and launch from right here so launch my instance based on this or i can go ahead and configure some additional information i just want to show you guys the screens that are available and the options that are available if we go ahead and do that so here we can select any number of instances that we want to launch whether it's 1 5 10 whether we want to launch any spot instances and if you guys remember spot instances are based on availability so here we can select what type of price we want to bid so once that price is reached these instances will be launched if the price goes above that these instances will be terminated the network this is where we can go ahead and select the vpc that we want to launch this instance into and again a vpc is a virtual private cloud i will discuss that towards the end of this course there is one vpc that's created for you by default so we're just going to stick with that placement groups capacity information iamroll which i will discuss next those are basically just your user accounts and your groups monitoring and again i will go through these when we come to cloud watch or when we come to monitoring and i am i will discuss these but this is where we can go ahead and configure that when we're launching our ec2 instance and if you want some quick information there is the i right next to the name if we click on that it'll give you some basic information let's say if you want to know what a placement group is if we forgot we can go ahead and click here and it'll give us a brief information in terms of what is the placement group which is so basically what you're doing is grouping a set of ec2 instances together so if one fails the other one will take over and towards the bottom we also have an option for advanced details now this is where bootstrapping comes into play let's say if we want our instance to go ahead and update certain softwares or apply certain patches when it's booted up we can go ahead and paste that information here we can also select the different storage that we want to add if you want to add additional volumes again the root volume comes with it and we've we can also change what type of volume we want we can also add tags to our group and a tag is basically a label that you assign to a resource and each tag consists of a key and an optional value both of which we can define and they basically enable us to categorize our aws resources in different ways for example by purpose by owner or by environment so let's say if we want to have a certain group of ec2 instances that's only used by finance we can tag those as finance for the finance department or ops for the operations department so it's just a way for us to go ahead and group our ec2 instances if they're going to be used by certain departments or certain groups and the last option that we are able to configure for the ec2 instance before launching it is the security group and the security group is basically the firewall of your ec2 instance now when i get into vpc we'll discuss security groups and the access control lists but just keep in mind this is where we can go ahead and configure the security groups for our ec2 instance whether we want to create a new group select an existing group so i'm going to leave everything as default for this one and go ahead and review and launch my instance so before 
launching it it allows us to review everything that we've selected if we've modified anything if we want to go ahead and change anything we can go ahead and do that but i'm going to leave everything as default and go ahead and launch my instance now when i click on launch it takes us to this screen that allows us to create our key pair of both a public and a private key so if you have an existing key you can go ahead and choose that here or you can go ahead and create a new key pair or proceed without a key pair now since i already have a keeper i'm just going to go ahead and use that and just as a reminder a key pair is basically going to be used if we're going to be connecting to our linux machine through ssh so here we have it we're back into our ec2 instances section and you guys can see that it's launching our instance for us it takes a few minutes for the instance to launch and once that's launched it will show up here as active and towards the bottom we can see the different information for this instance in terms of the public dns the public ip and a private ip and if you guys remember an ec each ec2 instance has both a public ip and a private ip along with a public dns that's used to access this ec2 instance additionally we can check the status of our ac2 instance we can set alarms which is what cloud watch is used for and i'll discuss that when we discuss cloud watch monitoring just gives you a basic health check or a resource monitor for your ec2 instance and again we can also create alarms for those and that's discussed when i talk about cloudwatch and then here's where we can see the tags that we've added and we can go ahead and add or edit tags after the fact also so if we forgot to tag this instance for the finance department we can do go ahead and do that here or if an ec2 instance is switching departments we can go ahead and edit those tags here so that's it it's pretty simple process in order to launch our ec2 instance hi everybody and welcome to this demonstration on iam or identity and access management so if you guys recall from the lesson i mentioned that we need to make sure that our root user is extremely secure and not use that for our daily access and right now i'm logged in as my root user which is the email address and credentials i use to create and set up my aws account so in order to show you guys what iam is and to take you through creating a different users and groups what we're going to do is perform a couple of tasks we're going to create an administrators group and give that group permission to access all of our aws account resources we're going to create a user for ourself and then add that user to that admin group and then create passwords for the users we can access and sign into the aws management console we're also going to be doing is granting the admin group permission to access all available aws account resources so basically the users in this administrators group that we're going to create will be will be able to access aws account information except for the aws accounts security credentials and those security credentials can only and and should only be accessed by the root user account which should be locked away somewhere so let's go ahead and create a administrator iam user and group console so as you guys can see i've already logged into my aws management console we're going to go ahead and go into services we're going to go ahead and go into iam and just like for the ec2 this is the main dashboard for iam as you guys can see this is the 
basic security status of iam and it's very important that we have a green checkbox on all of those in terms of making sure we delete our root access keys we have mfa also known as multi-factor authentication set up so it basically sends a text message in order for you to log in we create individual users and groups to assign permissions and then apply password policies so what we're going to do is go ahead and go into users i'm going to go ahead and add a new user and for this user i'm going to call it administrator and we'll be logging in with the management console now if you're going to be logging in through any api or the command line interface or any sdks or any other development tools for example let's say if you have a devops team that will require admin access you can also give them programmatic access which gives you the keys the public and private key but since we're going to stick with the management console we will only grant these admins access through the management console for the password we can use an auto generated password or a custom one we'll just go ahead and stick with a custom one and here we can either add the user to a group and right now we do not have any groups created that's why you don't see anything here we can copy permissions from an existing user i have one user already created so i can copy the permissions from this user onto this one or attach any existing policies directly so what this will do is instead of adding the user to a specific group this will attach specific policies that should be assigned to a group directly to this user now it's not recommended that we do this it's always recommended that we stick with creating groups and adding users to those groups i'm going to go ahead and create a group and here's where a group dialog box comes up for the group name i will give it administrators and here are all of the different policies that we can assign to this specific group you guys can see there are 404 pre-built policies that aws comes with that we can assign to groups now we are also able to create our own customized policies that's a bit advanced and out of the scope of the cloud practitioners exam but just keep in mind that if there is a certain policy that you cannot find which 99 percent of the time you will find here you are able to create customized policies and it's going to be a very cumbersome task to scroll through 404 different policies so we can either search let's say if you want to create or assign a policy specific to s3 we can search specifically for s3 or we can also filter policies by policy type whether it's customer managed or aws managed or by job function or by specific use so we want to do it based on job function because again this is for administrators and if we do that it narrows it down from 404 to 10 based on job function and we want to give it administrator access now if you guys click on the drop down arrow it tells you that it provides full access to aws services and resources and here is where we can see a basic json of what happens when this policy is applied to this specific group it's basically allowing every action on every resource i'm going to go ahead and create the group so now as you can see this user will be added to the administrators group and tags for users are the same thing as tags for ec2 instances which we looked at before in terms of the function that they serve so we're going to go ahead and click on review it gives us again one last chance to review the user that we're creating and we're going to go ahead and click on create user
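the same group, user and policy setup can also be scripted with the aws cli; a minimal sketch is shown below using the names from this walkthrough (treat the user name, group name and password as placeholders, administratoraccess is the aws managed policy we selected):

    aws iam create-group --group-name Administrators
    aws iam attach-group-policy --group-name Administrators \
      --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
    aws iam create-user --user-name administrator
    aws iam add-user-to-group --user-name administrator --group-name Administrators
    # give the user a console password (placeholder value) and force a reset on first sign in
    aws iam create-login-profile --user-name administrator \
      --password 'Placeholder-Passw0rd!' --password-reset-required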
now once the user is created it gives us a url that this user can use to log into the same management console so instead of using the root account credentials they can use the credentials that we just created for this administrator so i'm going to go ahead and copy this and open up a new window and see if we can log into our management console as an admin user rather than as a root user so as you guys can see the login looks a bit different than when we logged in with our root user so here it gives us the account id or alias which again the account id is the same as my name the iam user which is administrator and the password that we just set and if you see on the bottom we also have the option to sign in using the root account but again it is not recommended practice to do that and here you have it so now we're signed into our management console as administrator at qasimshah and again for your organization it would be administrator at whatever company name that you have and since we gave it admin access it will have access to all of these resources the same as we had for the root user now if we were to create some restrictive groups let's say people who only have access to the business applications or have access only to storage or just to databases then they would only see those specific options and again like you guys saw there are 404 different policies that aws has that we can utilize or we can create our own custom policies for restricting access to certain resources next there's also an option for roles now an iam role is basically very similar to a user in that it's an aws identity with permission policies that determine what the identity can and cannot do in aws however instead of being uniquely associated with one person a role is intended to be assumable by anyone who needs it also a role does not have standard long-term credentials such as a password or access keys associated with it instead when a user assumes a role temporary security credentials are created dynamically and provided to the user so basically a role can be utilized by a physical person or by an application so you can use them to delegate access to users applications or services that don't normally have access to the aws resources so if we create a role we can see all the different options that we have we can create roles for our ec2 instances we can create lambda roles and we can create roles for a host of different aws services or we can create a role for another aws account so let's say that you have a partner organization or let's say that you have auditors coming in that will be checking your books that require access to your s3 buckets or to information stored on aws you can create a specific role for them which will be temporary because they're not part of your organization we also have roles for web identity or saml 2.0 federation now saml federation or federation is basically utilizing the users and groups that you already have within your infrastructure let's say if you're using windows active directory most likely you already have all of your users and groups created so it does not or would not make sense for you to create them all over again in aws so this saml federation allows you to utilize the same credentials that you have and are using within your organization whether it's windows active directory or another platform and use those same credentials to give everybody access to the aws platform so roles come in very handy in terms of giving temporary access to either applications
to ec2 instances or to other organizations or to people that will be temporary working within your organization that might require access to aws for a short period of time and policies again when we create the group we saw all the policies that are available so this just gives us a good overview to see if there are policies that we need or if we want to create our own specific policy we can do that here we have two options we can either use a visual editor or a json so if you're familiar with coding you can go ahead and code in your own policy or use a visual editor to create a specific policy for a specific service and creating a policy is a bit outside of the scope of this course and the cloud practitioners exam in general but just know that you are able to create customized policies identity provider this is where we can create our saml identity providers if we want to link our on-prem active directory users and groups with aws account settings this is where we specify our password policy if you want to restrict passwords and this is highly recommended that you have a restrictive password policy in terms of making it complicated having it expired so depending on how secure your environment or what type of information you organization is working in or determine how restrictive your password policy is and lastly this report is very useful especially if you have a large host of users and groups and policies you can download a report that gives you a list of all of your accounts users and the status of their various credentials to see especially if you have users that are using private and public keys that are that have expiry dates you can use this report to see which keys have expired which keys need to be reissued and so on so that's basically your iam for the cloud practitioners exam it's very useful for you to be familiarized with users groups roles and policies and creating them so i suggest you go ahead and create a few users few groups policies just play around with the different configurations and also highly recommended to read this iam best practices that you guys see here they have some very good faqs which come in very handy for the exam hi everybody and welcome to this lesson on aws storage so in the next few lessons we're going to be looking at the different storage options that amazon web services offers so let's first take a look at all of the options that are available to us in terms of storage in aw in aws now cloud storage is a critical component of cloud computing holding the information used by applications big data analytics data warehouses iot databases and backup and archive applications all rely on some form of data storage architecture cloud storage is typically more reliable scalable and secure than traditional on-prem storage systems now aws offers a complete range of cloud storage services to support both application and archival compliance requirements you have the option to select from object file and block storage services as well as cloud data migration options to start designing the foundation of your it environment we're going to take a look at some of the most popular ones that are used and that will be covered in the exam so the five main storage options that aws offers are s3 which is by far the most popular there's the elastic block storage we have the elastic file system and then we have two of the lesser known or less used options which is glacier and the storage gateway so in this lesson let's go ahead and take a little bit deeper dive into the s3 
or also known as a simple storage service and just in exam tips sometimes aws is known to ask simple questions such as what does s3 stand for so just keep in mind that it stands for simple storage service now s3 is an object storage service that offers scalability availability security and performance so this means that customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases such as websites mobile apps backup restore archive and so on now s3 provides easy to use management features so you can organize your data and configure fine-tuned access controls to meet your specific business organizational and compliance requirements now some of the key benefits of s3 is first and foremost it's the industry-leading performance scalability availability and durability now i know that's a mouthful but just keep in mind the term known as 11 nines now s3 is designed for 99.9999 basically 11 9 of data durability of data durability because it automatically creates and store copies of all s3 objects across multiple systems this means your data is available when needed and protected against failures errors and threats the s3 also has a wide range of cost effective storage options which we'll look at later on in this lecture it has unmatched security compliance and audit capabilities you can store your data in s3 and secure it from unauthorized access with encryption features and access management controls it maintains compliance programs such as the pci dss the fedramp the eu data protection directive and many more and aws also supports numerous auditing capabilities to monitor access requests to your f3 resources which we can do through cloudtrail which we'll look at later on in this course additionally you can use management tools for granular data control you can classify manage and report on your data using features such as the s3 storage class analysis or the s3 life cycle or the cross region replication and again we will look at these when we do the lab towards the end of this lecture you can run big data analytics across your s3 objects and other data sets in aws with their query in place services and finally store and protect your data in s3 by working with a partner from the aws partner network now the s3 offers a range of storage classes designed for different use cases so there are five main ones that are there first you have the standard one which offers the high durability availability and performance object storage for frequently accessed data now because it delivers low latency and high throughput the standard is appropriate for a wide variety of use cases including cloud applications dynamic websites content distribution mobile and gaming applications and big data analytics now keep in mind that storage classes can be configured at the object level and a single bucket can contain objects stored across s3 standard tiering ia and so on there's also the s3 intelligent tiering now the tiering is something new that has been introduced by amazon just recently and the 2019 exam version is most likely going to be covering this also now this storage class is designed to optimize costs by automatically moving data to the most cost effective tier without performance impact or operational overhead it works by storing objects in two access tiers one tier that is optimized for frequent access and an other lower cost tier that is optimized for infrequent access the next one is the standard ia or the standard infrequent access is for data that is 
accessed less frequently but still requires rapid access when needed the ia offers the high durability the high throughput and low latency of the standard with a low per gb storage price and a per gb retrieval fee then you have the one zone infrequent access or one zone ia it's meant for data that is accessed less frequently but requires rapid access when needed unlike the other storage classes which store data in a minimum of three availability zones the one zone ia stores data in a single availability zone but it costs 20 less than the standard ia the one zone ia is ideal for customers who want a lower cost option for infrequently accessed data but do not require availability and resilience of the standard or the standard ia so it's a good choice for storing secondary backup copies of on-prem data or easily recreatable data because keep in mind one of the main drawbacks of the one zone ia is that it is not fault tolerant and then lastly you have the glacier which is a secure durable and low-cost storage class for archiving data you can reliably store any amount of data at costs that are competitive with or cheaper than on-prem solutions now keep in mind that the glacier provides three retrieval options that range from few minutes to a few hours you can upload objects directly to s3 glacier or use the s3 lifecycle policies to transfer data between any of the storage classes and we'll look at the lifecycle policies when we do the lab towards the end of this lecture so this table you guys see gives you a good comparison of the different options that are available and again there's an asterisk by the tiering because it's a newly introduced service by amazon but i highly suggest that you guys take a close look and memorize the different options the different s3 storing options that are available because there will be a few questions that are solely based on the storage classes for s3 so it's good to know the difference between standard ia and the one zone ia and the glacier for storage purposes now one thing to keep in mind for exam purposes and for real world application purposes is that the s3 is an object-based storage class it means that it cannot be a boot volume and it cannot hold applications it can only hold objects so think of an s3 bucket as a folder in which you can put files and other folders so you can put any amount of files any amount of folders within an s3 bucket and lastly another thing to keep in mind for s3 which again i will mention again when we do the lab is that the names have to be unique across the aws platform their names are dns based names so they have to be unique so let's go ahead and create our first s3 bucket hi everybody and welcome to this lesson on amazon s3 so before we can upload data to amazon s3 the first thing that we need to do is go ahead and create our bucket so i'm going to go ahead and navigate to services i'm going to find storage and what i want to do is s3 now this is the dashboard for s3 any buckets that we would have created will show up here now since there are no buckets nothing is showing up so first thing i want to do is go ahead and click on create bucket so the first thing we need to provide this bucket is a bucket name and please keep in mind that the bucket name has to be uni unique across all existing bucket names in amazon s3 that's across the entire amazon s3 not just your organization because the names are dns compliant hence they need to be unique next option is the region we can select which region we want the bucket to reside 
in as an optional setting if we already have set up a bucket that has the same settings that we want to use for this new bucket we can set it up quickly by choosing this option copy settings from an existing bucket and if we had an existing bucket all those options would show up here so this makes it easy for routine bucket creation so getting back to the name there are some things that we need to keep in mind again it needs to be unique it cannot contain any uppercase letters it must start with a lowercase letter or number and the name must be between 3 and 63 characters long and most importantly once the bucket name is created we cannot change it so make sure you choose the names wisely because if it's created and it's being used by your organization and then you find out that the name needs to be changed you will need to go ahead and create a brand new bucket because these names are not changeable so let's say if i were to name the bucket test now i'm sure there are other buckets by the name of test within amazon s3 so let's see what happens if i choose test as you guys can see it says the bucket name already exists now it does not exist within my account but within amazon s3 the bucket name test does exist so i'm going to name the bucket qasimshah-s3 the region i will keep in northern virginia that's one of the main regions for amazon aws and since i don't have any buckets i'm going to leave this blank and click on next now here's where we can choose different configuration options first option is versioning now what versioning basically does is it keeps track of all of the objects within the same bucket so let's say that you created an object and you updated it and somebody else updated it then you deleted it if you have enabled versioning it will keep track of every single thing that happens to that object whether it's updated whether it's deleted so if you want to keep a history of that object it's always good to enable versioning the server access logging provides detailed records for the requests that are made to your bucket and then the tags we can use cost allocation bucket tags to annotate billing for use of a bucket and each tag is a key value pair that represents a label that you want to assign to a bucket and if you guys recall these tags are also available for ec2 instances and for groups in iam there's also an option for object level logging this enables logging in cloudtrail and we'll discuss briefly what cloudtrail basically is a little bit later on in this course but just as an introduction cloudtrail is basically a way we can audit access and audit things that are going on within our aws account and then encryption which would automatically encrypt any object that is stored within aws additionally in some of the advanced settings we have an option for an object lock if you want to lock objects in the bucket now keep in mind that in order for this to happen you need to have versioning enabled so we're going to go ahead and enable versioning for the bucket and click on next now keep in mind for yourself and also for the cloud practitioners exam that by default every bucket is private so when you create a bucket and you've added objects into that bucket every single object will be private meaning it will not be accessible to anybody outside of your organization so if you're creating let's say a static website and hosting content on an s3 bucket by default all of that content will be private so nobody from the internet will be able to access content within this s3 bucket so we'll leave this as default for now and click on next and lastly we can review all of the configuration options we've selected go ahead and change them if we want and click on create bucket and there we have it we have our first bucket created we can see that the access is private it's not public the region it's in and the date and time that the bucket was created
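just as a quick aside and not something the console demo or the exam requires the same steps can be scripted with the boto3 python library here is a minimal sketch where the bucket name and region are placeholders so treat it as an illustration rather than the exact commands from the demo

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# bucket names are dns compliant and globally unique, so this placeholder must be changed
bucket_name = "qasimshah-s3-demo"

# create the bucket; in us-east-1 no location constraint is needed, any other region
# would need CreateBucketConfiguration={"LocationConstraint": "<region>"}
s3.create_bucket(Bucket=bucket_name)

# buckets and their objects are private by default; turn on versioning so every
# update or delete of an object keeps a history
s3.put_bucket_versioning(
    Bucket=bucket_name,
    VersioningConfiguration={"Status": "Enabled"},
)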
if we click on the bucket we can see that there are no objects within this bucket we can see the properties let's say if we did not have versioning enabled we could go ahead and enable it here as you can see it is already enabled same goes for server access logging object level logging and the default encryption which is what we saw when we were creating the bucket additionally it also gives us a neat way to host static websites so if we're going to be hosting static websites we can enable that here and one option that i do want to point out is transfer acceleration the exam sometimes does have questions related to transfer acceleration it basically enables fast and secure transfer of files to and from your s3 bucket so an exam question might say that you are trying to upload large amounts of data to an s3 bucket and you are under a time constraint what options are available in order to expedite the data transfer process and the answer would be transfer acceleration so in the permissions tab we can go ahead and change the settings of the s3 bucket if we want to make it public instead of private we're able to do that here and again always keep in mind that by default it is private additionally we have an option for access control lists basically a policy option to grant basic read write permissions to other aws accounts or even to public access so let's say if there are other aws accounts that we want to grant access to this s3 bucket we are able to do that here we can also write bucket policies and this is a simple json editor we can create simple jsons to grant people access to this bucket if you're not too familiar with writing json aws does come with some documentation if you would like to learn or a policy generator so through this policy generator we can select to create different types of policies let's say we want to create an s3 bucket policy we want to allow certain people access to the bucket and what actions we want them to do whether create or delete so it gives a host of actions that we are able to either allow or deny for these principals and once we do that it will generate a json policy for us to copy and paste and lastly there's the cors configuration also known as cross-origin resource sharing so let's say that your s3 bucket is being hosted in one region and the application or users that are trying to access it are located in another region this basically allows access from different domains within aws so let's say we want to upload any documents and again that's also a very simple process so we can upload any kind of documents here and once we upload we can either directly upload or if we click on next we are able to define certain permissions for this object that we're uploading we can make this specific object public or grant access to certain accounts specifically for this object so this comes in handy when you're creating folders within an s3 bucket and you want to allow access to certain folders to certain people you are able to do that here and we can also set properties for this object that we're uploading to amazon s3 in terms of the storage class whether we want standard intelligent tiering standard ia one zone ia glacier or reduced redundancy same goes for encryption so if you want to specify these additional options we would need to define them here when we're uploading the objects to s3 so we have our first object uploaded onto our s3 bucket and if you guys remember i had enabled versioning when i created the bucket so in the versions i can click on show and it will show all of the different versions of this object so let's say if i were to delete this object from my s3 bucket the object no longer exists within the s3 bucket but if i were to go into my versions i can see that the object still exists within s3 it's just not showing up in my bucket so this is very important and a very useful tool that we can use versioning for to keep track of our objects that are within our s3 bucket and also keep in mind if we are setting life cycle policies for our s3 bucket so life cycle policies are if we want to archive data that's not used let's say if we want to set a policy so that certain folders or certain objects that are not accessed within 30 days are moved to glacier so let's say if we want to archive data after 30 days we would need to have versioning enabled in order for archiving to work or in order for data to be moved from the s3 bucket to glacier versioning will need to be enabled
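and again purely as an illustration and not part of the console demo here is a hedged boto3 sketch of the kind of life cycle rule we just talked about the bucket name the 30 day transition and the 90 day clean up of old versions are all assumptions you would tune for your own data

import boto3

s3 = boto3.client("s3")
bucket_name = "qasimshah-s3-demo"  # placeholder name from the earlier sketch

# move objects to the glacier storage class 30 days after creation and expire
# old noncurrent versions (kept around by versioning) after 90 days
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket_name,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # an empty prefix applies the rule to the whole bucket
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
            }
        ]
    },
)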
hi everybody and welcome to the second part of the storage options that are offered by aws so in the previous lecture we looked at s3 and in this lecture we're going to look at the next one which is offered by amazon and that's referred to as the elastic block storage or ebs now ebs provides persistent block storage volumes for use with ec2 instances in the cloud each amazon ebs volume is automatically replicated within its availability zone to protect you from component failure offering high availability and durability the ebs volumes offer the consistent and low latency performance needed to run your workloads now with ebs you can scale your usage up or down within minutes all while paying a low price for only what you provision now ebs is designed for application workloads that benefit from fine-tuning performance cost and capacity typical use cases include big data analytics relational and nosql databases and stream and log processing applications just to name a few now some of the benefits of ebs are that it's reliable and secure storage each volume provides redundancy within its availability zone to protect against failures encryption and access control policies deliver a strong defense-in-depth security strategy for your data the amazon ebs general purpose volumes and the provisioned iops volumes deliver low latency through ssd technology and consistent i/o performance scaled to the needs of your application and we will take a look at the different options that are offered by ebs next you can also protect your data by taking point in time snapshots of your ebs volumes providing long-term durability for your data and boost the agility of your business by using the ebs snapshots to create new ec2 instances it also allows you to optimize your volumes for capacity performance or cost giving you the ability to dynamically adapt to the changing needs of any business they also provide the ability to copy snapshots across regions enabling geographical expansion data center migration and disaster recovery providing flexibility and protection for your business
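since snapshots and cross region copies came up a couple of times here is a small boto3 sketch of both the volume id the regions and the descriptions are placeholders and this is only meant to show the shape of the calls

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# point in time backup of an existing volume (placeholder volume id)
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="baseline backup before patching",
)

# copies are requested from the destination region, so create a second client there
ec2_dr = boto3.client("ec2", region_name="us-west-2")
ec2_dr.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snapshot["SnapshotId"],
    Description="disaster recovery copy of the baseline snapshot",
)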
and lastly the ebs optimized instance provides dedicated network capacity for the ebs volumes which gives the best performance for your volumes by minimizing network contention between ebs and your instance now ebs provides a range of options that allow you to optimize storage performance and cost for your workload these options are basically divided into two major categories the ssd-backed storage for transactional workloads such as databases and boot volumes and the hdd-backed storage for throughput intensive workloads such as mapreduce or log processing the ssd-backed volumes include the highest performance provisioned iops ssd or io1 for latency sensitive transactional workloads and the general purpose ssd or gp2 that balances price and performance for a wide variety of transactional data the hdd-backed volumes on the other hand include a throughput optimized one the st1 for frequently accessed throughput intensive workloads and the lowest cost cold hdd sc1 for less frequently accessed data and just to give you a little bit of an example in terms of the difference the io1 is designed to deliver a consistent baseline performance of up to 50 iops per gb to a maximum of 64 000 iops and provide up to 1000 megabytes per second of throughput per volume so that's an amazing performance that's offered by this type of volume in comparison the maximum iops for a gp2 volume is only 16 000 so that gives you a good baseline comparison of the difference between the performance of a gp2 and an io1 and the same holds true for the sc1 and the st1 now some of the other options that are available for the ebs volumes we have something called the data lifecycle manager for ebs snapshots which provides a simple automated way to back up data stored on ebs volumes by ensuring that snapshots are created and deleted on a custom schedule so you no longer need to use scripts or other tools to comply with data backup and retention policies specific to your organization or even your industry with lifecycle management you can be sure the snapshots are cleaned up regularly and costs are kept under control so you're getting the best of both worlds
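the data lifecycle manager itself can also be driven from code and here is a rough sketch of that idea the role arn the tag and the schedule values are all assumptions you would replace with your own

import boto3

dlm = boto3.client("dlm", region_name="us-east-1")

# snapshot every tagged volume once a day and keep the last seven copies;
# the execution role arn below is a placeholder that must already exist
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="daily snapshots of volumes tagged Backup=true",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [
            {
                "Name": "daily-3am-snapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 7},
                "CopyTags": True,
            }
        ],
    },
)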
then ebs also has elastic volumes which is a feature that allows you to easily adapt your volumes as the needs of your applications change they allow you to dynamically increase capacity tune performance and change the type of any new or existing current generation volume with no down time or performance impact so you can easily right-size your deployment and adapt to performance changes the snapshots like i mentioned are basically backups of your hard drive they give you the ability to save point in time snapshots of your volumes to an amazon s3 bucket now the snapshots are stored incrementally so only the blocks that have changed after your last snapshot are saved and you're billed only for the changed blocks as a rule of thumb you should always keep your original snapshot in a separate place and then keep your incremental snapshots in a separate bucket now for an additional fee amazon allows you to launch certain ec2 instance types as ebs optimized instances now these enable the ec2 instance to fully use the iops provisioned on an ebs volume these optimized instances deliver dedicated throughput between amazon ec2 and amazon ebs with options between 500 and 10 000 megabits per second depending on the instance type as for availability and durability these volumes are designed to be highly available and reliable at no additional charge these ebs volumes are replicated across multiple servers in an availability zone to prevent the loss of data from the failure of any single component now these volumes are designed for an annual failure rate of between 0.1 and 0.2 percent where a failure refers to a complete or partial loss of the volume depending on the size and performance of the volume and lastly there's encryption which offers seamless encryption of data boot volumes and snapshots eliminating the need to build and manage a secure key management infrastructure now this encryption enables data at rest security by encrypting your data volumes boot volumes and snapshots using the amazon managed keys or keys you create and manage using the aws key management service or kms so let's go ahead and log into the management console and see how we can provision ebs on the ec2 instance we created in the previous lab hi everybody and welcome to this lesson on the elastic block storage so in this lesson we're going to see how we can create an ebs volume using our aws management console so in order to create the ebs volume first we need to navigate to our ec2 dashboard because ebs volumes are connected to our ec2 instances and once we're there we're going to go ahead and select volumes under the elastic block store section and we want to go ahead and create our first volume here's where we can select the different configurations of the volume that we want to create the volume type and if you guys remember these are the main volume types that are offered by ebs whether we want a general purpose ssd a provisioned iops ssd a cold hdd a throughput optimized hard drive or a standard magnetic one so what type of use your ec2 instance or this ebs volume will be put to will determine what type of configuration you will use here and the size of the volume in gib we can specify that here now the availability zone becomes fairly important because ebs volumes can only be attached to ec2 instances within the same availability zone so let's say if we were to create this ebs volume in us east 1a but our instances are in us east 1c this ebs volume will not be able to be attached to those ec2 instances so it's important we keep track of which availability zones we are creating our ec2 instances and our ebs volumes in also we have an option for snapshots if we have previous snapshots that we want to use for this ebs volume we can select that here snapshots are basically point in time backups of hard drives so if we have those from previous ebs volumes that we want to replicate here we can do that and lastly encrypt this volume by default ebs volumes are not encrypted so we will need to encrypt the volume here if we want to do that so we'll leave everything by default and click on create volume so here we have it we have our first volume created obviously the volume is no good unless it's attached to an ec2 instance so here's where we can go ahead and attach our volumes to our ec2 instance which we created in a previous lab now keep in mind there are some volume limits to how many can be attached to certain instances for example linux supports up to 40 volumes whereas windows machines support less depending on what type of configuration you have so make sure you find out how many volumes can be attached to a specific instance if you will be attaching multiple volumes to certain instances so the actions menu is where you can modify the volume create a snapshot which again is a point in time backup of this specific volume delete the volume or attach the volume which is what we want to do
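and roughly the same create and attach flow in code again only a sketch with placeholder ids and the key detail is that the availability zone has to match the instance

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# the volume must live in the same availability zone as the instance it will attach to
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=8,               # size in gib
    VolumeType="gp2",     # general purpose ssd
)

# wait until the new volume is available before trying to attach it
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# attach it to an existing instance (placeholder id) as a secondary device
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)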
when we click on the instance since i only have one instance but if you have multiple instances this is where you will need to either put in the name tag or the instance id and then click on attach now i have two ebs volumes attached to my instance one which was done by default when i created the instance and one which i just created now a couple of other things in the elastic block storage section in the snapshots this is where we can create snapshots of our ec2 instances' ebs volumes and more importantly there's the lifecycle manager this is where we can automate the creation of snapshots of our ebs volumes so it's basically backing up our ebs volumes on an incremental basis so please keep in mind that all snapshots are incremental so it's recommended that you take an initial snapshot of your ebs volumes and either save that as an ami or put that away in an s3 bucket and then keep your regular snapshots saved in another bucket because each snapshot is an incremental version of the previous one so if you only have one place where all of your snapshots are being stored then you will have the latest version of that ebs volume but you will not have the original ebs volume so that's it it's a pretty simple process to create an ebs volume and have that attached to our ec2 instances hi everybody and welcome to the third and last part of the aws storage so we've looked at the s3 buckets we looked at the elastic block storage and in this one we're going to look at the elastic file system so the efs provides a simple scalable elastic file system for linux based workloads for use with the aws cloud services and on-premise resources it's built to scale on demand to petabytes without disrupting applications growing and shrinking automatically as you add and remove files so your applications have the storage they need when they need it it's designed to provide massively parallel shared access to thousands of ec2 instances enabling your applications to achieve high levels of aggregate throughput and iops with consistent low latencies now it's a fully managed service that requires no changes to your existing applications and tools providing access to a standard file system interface for seamless integration it's a regional service storing data within and across multiple availability zones for high availability and durability you can access your file systems across availability zones regions and vpcs and share files between thousands of ec2 instances and on-prem servers via direct connect or a vpn with amazon web services now you might be asking well what's the difference between efs and s3 or ebs well this table gives you a good comparison in terms of the differences between the three main storage services offered by aws now efs is basically file storage the s3 buckets are object storage and ebs is block based storage the majority of the differences come in terms of the access and use cases so first and foremost you have to look at your own business and your own it use cases and see what type of storage will suit your needs the best now just an exam tip most of the time when a question is asked in terms of the best storage s3 would be your best answer because that is the main storage service that is recommended by amazon and that is used by most businesses whereas efs and ebs are limited in terms of their scope and in terms of their use but the s3 can be used consistently throughout any services
that are offered by amazon web services so hopefully you guys got a good overview in terms of the different storage classes that are offered by aws the s3 the ebs the efs and the glacier which is the long term storage so let's go ahead and log into our management console and see where we can find efs and what different configuration options are available for us hi everybody and welcome to this lesson on the relational database service or rds that is offered by amazon web services so the rds makes it really easy to set up operate and scale a relational database in the cloud it provides cost efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning database setup patching and even backups it frees you to focus on your applications so you can give them the fast performance high availability security and compatibility they need in order to provide superior service to either your internal employees or your customers the rds is available on several database instance types either for optimized memory for performance or io and provides you with six familiar database engines to choose from such as the amazon aurora the postgresql mysql mariadb oracle database and the microsoft sql server and amazon also has a database migration service which allows you to easily migrate or even replicate your existing databases to amazon rds so they've made it very easy for you to start utilizing rds in a simple and easy fashion so let's go ahead and take a look at some of the advantages and benefits that rds has first and foremost it is extremely easy to administer it makes it very easy to go from project conception to deployment you can use the rds management console which we will look at when we go through the demonstration or the command line interface to access capabilities of a production-ready relational database in a matter of mere minutes so there's no need for infrastructure provisioning and no need for installing and maintaining a complex database software it's also highly scalable like most of the aws offerings the rds is extremely scalable and utilizes the global infrastructure that aws has it's also available and durable because it runs on the same infrastructure it's extremely fast and it supports the most demanding database applications you can choose between two ssd-backed storage options one optimized for high performance oltp applications and the other for cost effective general purpose use in addition when we look at the amazon aurora a little bit closely it provides performance on par with commercial databases at 1 10 of the cost the rds also makes it easy to control network access to your database the rds lets you run your database instances in a virtual private cloud which enables you to isolate your database instances and to connect to your existing it infrastructure through an industry standard encrypted ipsec vpn the engine types offer encryption at rest and encryption in transit lastly it is extremely inexpensive when compared to other options you pay very low rates and only for the resources you actually consume in addition you can benefit from the on-demand pricing that comes with aws which allows you no upfront or long-term commitments or even lower hourly rates if you want to use the reserved instance pricing so let's go ahead and take a look a closer look at the six different instances that are offered by amazon so the postgresql has literally become the preferred option the preferred open source relational database for many 
enterprise developers and startup now rds makes it easy to set up operate and scale deployments of postgresql in the cloud with rds you can deploy scalable deployments in minutes with cost efficient and resizable hardware capacity it manages the complex and time-consuming admin tasks such as the software installation and upgrades storage management replication for high availability and read throughput and backups for disaster recovery now the rds for pulsar sql gives you access to the capabilities of a familiar postgrad postgresql database engine this means that the code applications and tools you already use today with your existing databases can be used with the amazon rds so with just a few clicks in the management console you are able to deploy a postgresql database with automatically configured database parameters for optimal performance now once provisioned you can literally scale it up to 16 terabytes of storage and 40 000 iaps and the rds for postgresql also enables you to scale out beyond the capacity of a single database deployment for read heavy database workloads the mysql is the world's most popular open source relational database and rds makes it easy to set up operate and scale the mysql deployments in the cloud it frees you up to focus on the application development by managing time consuming database admin tasks such as it did with the posterior sql it supports mysql community versions 5.5 5.6 5.7 and 8.0 which means that the code applications and tools you already use today can be used with rds it also comes with a standard backup and recovery that amazon web services offers along with the high availability and read replicas which makes it easy to elastically scale out beyond the capacity constraints of a single database instance additionally the rds provides amazon cloud watch metrics for your database instances at no additional charge and the rds enhanced monitoring provides access to over 50 cpu memory file system and disk io metrics lastly just as with postgresql you are given an isolation and security with the utilization of a amazon vpc or through the amazon key management service now the mario db is not as popular as the postgresql or mysql but it was literally created by the original developers of mysql and it's got most of the same benefits that are given in the other database engines that come with rds one one main one that i want to point out is the high performance so you can provision again up to 16 terabytes of storage and 40 000 i apps per database and select instances with up to 30 with up to 32 cpus and 244 gib of memory so it's a very easy option to deploy a mariadb if that is what you are using on-prem and have it deployed on an aws infrastructure which allows you to not only scale it out but but give away some of the administration tasks to amazon web services such as the fault tolerance or such as the backups so let's go ahead and log into our management console and see how and see what the different options are in configuring and provisioning some of these database instances hi everybody and welcome to this lesson on the rds service or the relational database service offered by aws so we're going to do in this lesson is i will create a database instance and link it to the rds service in aws now if you guys remember from the previous lesson there are multiple databases that are supported by the rds service you can see them listed here such as mysql postgresql mariodb oracle or the ms sql server so what we're going to be doing is creating a database 
instance for mysql in this demonstration so i've already navigated to the rds service to our through my management console i'm going to go ahead and create the database to get started now just to keep in mind for rds it's a bit out of the scope of the cloud practitioners exam but that information is required when you if and when you are preparing for the solution architects exam there are some prerequisites that you need to keep in mind before creating the database instance for example you should have the iam users created with the default permissions that would be required to access the database instances given to them so we would need to create iam users and groups and make sure that the required permissions for them are there in order for them to access these database instances in rds now by default we have already created an administrator account that has admin privileges across all of the aws services so we're not going to go ahead and get into the details of creating additional users and groups we're just going to use our administrator account or our root user for the purposes of this demonstration additionally each database instance resides or will reside in a vpc or a virtual private cloud now the default vpc that is created by amazon when an ec2 instance is created automatically allows access for database instances within it but if you are creating a custom vpc then you will need to create within the security groups access for the database instances just keep those in mind when you are going through this and i'll cover the security groups in the vpc section so you can get familiarized with how you can create and configure them within vpc but just keep in mind that the default vpc that amazon creates automatically grants permissions for the rds instances we're going to create a database and i'm going to select the mysql service and it gives you a bit a snippet of information regarding mysql it supports up to 32 terabytes of information supports general purpose memory optimized and versatile performance instance classes supports automatic backup and point in time recovery and up to five read replicas per instance within a single region or cross region now here's where we can specify the instance details first is the license model which again we're going to keep the general public license if you have customized or specific licenses they're more geared towards if you have ms sql server they would be required here what version of ms sql you would want to use we're just going to stick with the version 5.6 and this is also since i've enabled it for a free tier usage it has some limitations it provides a single instance as well as up to 20 gigs of storage for testing purposes and for training purposes this is more than enough in the additional settings you have the db instance identifier here we can type a name for the db instance that's unique for the account in the region that we are working in the master username again just like the name suggests this will be the main account that you can use to log into the database instance in our advanced settings page we can provide some additional information that rds needs to launch the mysql database instance so first we can select which vpc we want to install this instance in again i do not have any user defined vpcs created we will create those in the vpc section but this is the default one that is created by aws and same goes for the subnet public accessibility if this database is going to be accessed by the public you can click on yes or no 
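before we carry on with the rest of the wizard here is roughly where all of these choices end up if you ever script them a hedged boto3 sketch where the identifier the credentials and the free tier style instance class are all placeholders and not the course's exact values

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# a small mysql instance along the lines of the console walk-through
rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql-db",
    Engine="mysql",
    DBInstanceClass="db.t2.micro",    # free tier eligible class
    AllocatedStorage=20,              # 20 gib, the free tier ceiling
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",
    PubliclyAccessible=False,         # keep the database off the public internet
    BackupRetentionPeriod=7,          # automated backups, the default retention
    DeletionProtection=True,          # matches the box we tick in the console
)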
now just as a general practice rule most databases should reside on a private subnet and not have public accessibility when we get into vpcs i'll explain that in a little bit more detail but usually the databases reside on a private subnet and they work through a nat or network address translation to get access or give access to people that are on the internet also we can specify which availability zone we want this instance to be in or if you don't have a preference then aws will automatically pick an availability zone to launch this instance in and also keep in mind that the default port for a mysql database is 3306 sometimes in some of the scenario questions in the exam they do refer to this port by the number and also keep in mind since rds is a fully managed relational database service offered by aws it provides automatic backups for your database instances and by default the backup retention period is 7 days but you are able to increase that if you want aws to retain the backups for longer periods of time we can also specify if we want aws to back up the database during certain times so if there are non-peak hours in your business environment you can select a specific window when this database will be backed up so it does not affect any performance we also have an option for enhanced monitoring again it just gives you additional metrics so you can monitor the access into and out of your database and here's where we can export specific logs audit error general and slow query logs to amazon cloudwatch again like i mentioned it is a fully managed service so you can also allow aws to automatically upgrade minor versions of the database instances when they come out so any security patches can automatically be applied to the database instance it's always recommended to make sure this is enabled but if you or the organization has specific rules in terms of applying new patches then you would want to disable it and then manually apply those patches or those upgrades and this deletion protection is very important make sure it is always enabled because surprisingly it's very easy to delete database instances from rds and that's it now if we click on view database instance it takes us back to the dashboard and this gives you the entire metrics of the mysql db that we just created in rds so once we have this created let's go ahead and see how we can delete this database instance so up here in the actions there's an option to either stop reboot or delete and we can also create read replicas of this database along with taking a snapshot or restoring a snapshot or migrating a snapshot to a different availability zone or to a different region so before we delete if you guys remember when we were creating the database instance we ensured that we checked that box making sure that we are not easily able to delete this database so let's see what happens when we actually click on delete we basically have to go back and modify the database in order for us to be able to delete it so if we click on modify here we can change all of the configurations for the database what we want to do is go ahead and find where we specified the deletion protection uncheck this box and then click on continue and here we can either apply this change during the next scheduled maintenance window or we can apply it immediately let's go ahead and apply it immediately and now we can go ahead and delete this instance it'll ask us if we want to take a final snapshot of this database or retain any automatic backups we do not want to do either one we simply type delete to confirm and it will delete our database instance for us
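scripted the same clean up looks something like this again only a sketch reusing the placeholder identifier from the sketch above

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# deletion protection has to be switched off first, and applied immediately
rds.modify_db_instance(
    DBInstanceIdentifier="demo-mysql-db",
    DeletionProtection=False,
    ApplyImmediately=True,
)

# then the instance can be removed; skipping the final snapshot and dropping
# the automated backups mirrors the choices made in the console
rds.delete_db_instance(
    DBInstanceIdentifier="demo-mysql-db",
    SkipFinalSnapshot=True,
    DeleteAutomatedBackups=True,
)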
hi everybody and welcome to this lesson on looking at the amazon aurora database now in the previous lesson we looked at all of the database instances which the amazon relational database service offers of which amazon aurora was one but i wanted to take a closer look at amazon aurora to demonstrate some of the benefits it has because the cloud practitioner exam does focus a little bit more on amazon aurora since it is a proprietary database which is offered by aws so essentially aurora is a mysql and postgresql compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost effectiveness of open source databases it's up to five times faster than standard mysql databases and three times faster than the standard postgresql database it provides the security availability and reliability of commercial databases at literally one-tenth of the cost it's fully managed by the relational database service which we looked at in the previous lesson which automates time consuming admin tasks like hardware provisioning database setup patching and backups aurora features a distributed fault tolerant self-healing storage system that auto scales up to 64 terabytes per database instance it delivers high performance and availability with up to 15 low latency read replicas point in time recovery continuous backup to amazon s3 and replication across three different availability zones additionally mysql and postgresql compatibility make aurora a compelling target for database migrations to the cloud so if your organization is migrating from mysql or postgresql amazon does offer support for them to move into aurora since they are pushing organizations to start using aurora they have made it a lot easier for companies to migrate their existing mysql or postgresql databases into aurora and for most organizations it would definitely make sense because first of all the cost is extremely low and secondly the performance is extremely high so it is a great option if an organization is moving to aws they might as well also think about and look at migrating their databases into aurora as well another benefit of amazon aurora is that it gives you custom endpoints which allow you to distribute and load balance workloads across different sets of database instances so for example you may provision a set of aurora replicas to use an instance type with higher memory capacity in order to run an analytics workload a custom endpoint can then help you route the analytics workload to these appropriately configured instances while keeping other instances isolated from this workload now there are a host of other benefits that come with aurora which get a little bit in depth and are out of the scope of this course that in-depth information is more geared towards the solutions architect associate and solutions architect professional examinations so we're not going to get into those but just keep these main benefits in mind for the cloud practitioners exam so let's go ahead and log into our management console and see all the different options that we are able to configure for amazon aurora hi everybody and welcome to this demonstration on looking at amazon's aurora database so the aurora db is basically amazon's proprietary database service that they're offering it does
offer as you guys saw from the lecture before some good benefits in terms of the other databases that are out and used by most organizations today so let's take a look at what we can configure and how we can get it and get an amazon aurora db up and running in aws so the first thing i want to do is go ahead and navigate to the aurora db it's currently located under databases or what we can do is go right into the rrds which is the relational database service which is offered by aws now since aurora is offered as a managed service with aws it is currently under the relational database service under which you can provision additional databases also such as the microsoft sql or the oracle and so on so what we're interested in is right now currently looking at the aurora database now since it is amazon's proprietary service as you guys can notice and if you have not done so before it's promoting it quite heavily within the aws platform for example right when you go to rds right on the top you see an ad kind of kind of an advertisement for amazon aurora letting you know what it what it is able to do in terms of its benefits and allows you to create a database in aurora right here from the top as compared to doing it in some of the other databases which you are able to do below so we're interested in is let's go ahead and take a look at how we can go ahead and create an aurora database so once we create a database again it takes the same screen as if we were to create a normal database what we want to do is go ahead and select the amazon aurora now for most databases if you're not familiar in terms of what is mysql or mariadb and so on if you guys notice on the bottom it gives you a quick snippet in terms of the main benefits that each one of these databases offers and and also a good indication of the pricing of what it will be charged and these change based on if you select the different databases now keep in mind if you are provisioning an amazon or a database for yourself and if you're practicing it is chargeable so it does not it is not included in the free tier so please keep that in mind if you will be practicing and if you do want to provision an aurora database you will start getting charged for it so what i want to do is go ahead and select amazon aurora in terms of the addition it supports multiple editions for example mysql 5.6 5.7 or the posts gray sql compatible versions or the engines so it depends which one you will be working with depends on again your organization and what kind of data you have what kind of engine you've been working with will help you determine which one you want to have it compatible with so we're just going to go ahead and choose the sql 5.6 i'm going to go ahead and click on next here i can specify various details for the rrodb now first i can define the capacity information now in terms of the capacity type now when you choose provision capacity type you basically manage the database instance class when your database for example workload changes you might need to modify the instance class to provide the appropriate resources now you can choose the provisioned capacity the aurora parallel query enabled to improve the performance of the analytic queries and when you choose a serverless capacity you basically specify the minimum and maximum resources required for your database cluster and aurora is automatically going to scale up or scale down the capacity based on the database load so it just depends on your management how you would like it to be managed 
either provisioned provisioned with the parallel query enabled or the serverless so again with the serverless you guys notice that the options below disappear because it is fully managed by aws whereas for the provisioned and the provisioned with or apparel query you are responsible for the database engine so it just depends again on how much control you want on your engine and how much or in comparison to how much ease you want in terms of administration so if you do not have a full-time database person it's probably preferential to do serverless so you don't have to worry about all of the underlying infrastructure in terms of the management but again if you do have a database person or database team then either provisioned or the provision with the parallel query enabled so if we do the provision we can specify the engine version and we can specify the specific class and here's where we can pick and choose what type of server in terms of hardware we want provisioned for our service so let's say i pick a dbr3 extra large another option is a multi availability zone that's what ac stands for multi-az deployment what it basically does it creates a replica in a different zone which bit which is going to provide for the high availability and redundancy you can opt out of this also if you prefer but as a best practice it's always good to have that extra redundancy in case something fails or in case there's a hardware failure it's always good to have a read replica in a separate zone here's where we can specify the settings in terms of the database instance identifier the username and the password after i specify all this information i'm going to go ahead and click on next here are some of the advanced settings that we can specify for our aurora db we can specify it had to have it launched in a specific vpc if we have additional vpcs created we can launch it within that specific one and if you have a for example if you are going to be using an aurora db or any database for that matter it's always good practice in terms of developing an architecture to have the database in a separate vpc or a separate virtual network a private virtual network away from the public one if you have any public facing resources additionally you can also do the same thing for the subnet and again since i only have a default vpc it's defaulting to a default subnet group and the public accessibility so here if you do yes the ec2 instances and devices again outside of the vpc that's hosting the dp will be able to connect so it just depends whether you want it to be publicly accessible or if you do not want it to be publicly accessible availability zone which specific availability zone you want this database to be launched into if you do no preference aws will randomly pick one of the availability zones within your region uh kind of this kind of like the same practice that happens with ec2 instances right where it picks randomly or you can specifically pick a availability zone here you can specify either use an existing security group or you can use or you can create a new security group specifically for this database and some additional advanced settings in terms of the cluster identifier the database name and the port and if you leave these default it'll what we'll do for example the default identifier is going to be used and if you do not specify a data database name the rds obviously is not going to create a database and then you will have to do that manually after aurora is provisioned in terms of the parameter group 
so the database parameter group is basically the configuration values for the engine that is currently being used and right now it's the default one since aurora is going to be a fully managed database so it's basically the engine configuration properties additionally we can also specify iam roles which again is identity and access management for database authentication we can enable it or disable it so we can choose to manage our database using credentials through iam or disable that option and again that would be dependent on how you have everything set up within aws then encryption we are also able to either enable or disable encryption so if we disable it obviously we don't have to specify any values if we enable it then we have that master key which is used to encrypt our database and all of that is managed through the key management service which is provided by aws then there is the failover priority every replica that you create is basically assigned a tier from 0 all the way up to 15 in terms of priority so this is where you specify which read replica you want to take priority so if you have a main read replica that you make sure always stays updated and is up and running you can specify in aurora that in case of a failover you want to fail over the main database to this read replica and you can specify a tiering value here additionally backups you can configure the backup retention as short as one day or as long as 35 days so again depending on your organization's backup policies backtrack is again kind of like what the name suggests going back to a point in time in a specific database you can enable it or disable it and if you enable it you have to specify a backtrack window which supports a maximum of 72 hours so you're able to backtrack up to 72 hours before that specific point in time and then there's monitoring through cloudwatch so you can enable enhanced monitoring or disable enhanced monitoring enhanced monitoring just basically allows you to get some more insightful metrics on your database so if your database is one of your primary business drivers you want to make sure that you have enhanced monitoring and you can set the granularity in terms of the frequency every one second if it's mission critical or every one minute if it's not that mission critical and then we can also export our logs to cloudwatch logs and we can pick and choose which specific log we want to log and monitor through cloudwatch and we can select either one or multiple ones depending on again our preference then maintenance we can let aws know to do minor version upgrades to the database instances or the database engines or we can disable it so if our database use is dependent on that specific db engine or db instance version then obviously we want to disable it and if the database work that we're doing is not dependent on the engine version or the instance version then we can enable it to have aws do minor upgrades and we can also specify a maintenance window so let's say if there is an upgrade that needs to occur aws will do it on a certain day at a certain time and obviously we can select when our traffic will be the lowest or if we select no preference aws will pick a day and time when traffic is expected to be the lowest to apply this update and then finally deletion protection very similar to our ec2 instances when it's provisioned we can have deletion protection to make it a little bit more difficult to delete our databases we have to specifically go in and disable deletion protection and then delete it if we are going to be deleting it and then we can go ahead and create our database and our aurora db is going to be launched so essentially these are all of the configuration options we can specify to have a fully managed amazon aurora db up and functioning in our aws ecosystem
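and if you ever need to stand one of these up outside the console here is a very rough boto3 sketch of the same idea remember aurora is not free tier so this would be billable and the identifiers credentials and instance class are placeholders

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# an aurora deployment is a cluster plus one or more instances inside it
rds.create_db_cluster(
    DBClusterIdentifier="demo-aurora-cluster",
    Engine="aurora-mysql",             # the mysql compatible edition
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",
    BackupRetentionPeriod=7,           # anywhere between 1 and 35 days
    StorageEncrypted=True,             # encryption at rest through kms
    DeletionProtection=True,
)

# the writer instance that serves the queries; read replicas would be added the
# same way, with PromotionTier (0-15) controlling failover priority
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-writer",
    DBClusterIdentifier="demo-aurora-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.large",
    PromotionTier=0,
)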
hi everybody and welcome to this lesson on migration in general because most of the time when organizations are thinking about or considering moving to aws migration plays a big role in anybody's job in terms of designing that move from on-prem to aws or from another hosting provider to aws so aws actually provides a migration hub which is a single location to track the progress of application migrations across multiple aws and partner solutions this hub allows you to choose the aws and partner migration tools that best fit your needs while providing visibility into the status of migrations across a portfolio of applications the hub also provides key metrics and progress for individual applications regardless of which tools are being used to migrate them for example you might use the database migration service the server migration service and partner migration tools such as atadata atamotion or cloudendure and so on to migrate an application comprised of a database virtualized web servers and bare metal servers so using this hub you can view the progress of all of these resources in the application this allows you to quickly get progress updates across all of your migrations easily identify and troubleshoot any issues and reduce the overall time and effort spent on the migration projects so let's go ahead and take a look at four of the main migration options that are available and which are also covered in the cloud practitioners exam so first the application discovery service which helps enterprise customers plan migration projects by gathering information about their on-prem data centers so planning data center migrations can involve thousands of workloads that are often deeply interdependent server utilization data and dependency mapping are important early first steps in the migration process this app discovery service collects and presents configuration usage and behavior data from your servers to help you better understand your workloads the collected data is retained in encrypted format in an aws application discovery service data store you can export this data as a csv file and then use it to estimate the tco or total cost of ownership of running on aws and to plan your migration accordingly additionally this data is also available in the migration hub which i just discussed some of the benefits that are offered with this application discovery service are first the reliability it's integrated with the migration hub making it easier for you to get a holistic view of your entire migration process additionally you can protect data with encryption both in transit and at rest and lastly you can engage with the migration experts so the aws professional services and migration partners usually help enterprise customers successfully complete their migration to the cloud now some of the features of this application
now some of the features of this application discovery service first is discovery which is exactly what it sounds like it discovers your on-prem infrastructure next it identifies the server dependencies so after it discovers all infrastructure it identifies the interdependencies between servers it measures server performance so it captures performance information about applications and processes by measuring host cpu memory and disk usage as well as disk and network performance this information establishes a performance baseline to use as a comparison after you migrate to aws finally after that's done data exploration in amazon athena begins where we are able to explore the data collected from your on-prem servers with a service called amazon athena by running pre-defined queries to analyze the time series system performance for each server the type of processes that are running on them and the network dependencies between different servers next we have the database migration service which helps you migrate databases to aws quickly and securely the source database remains fully operational during the migration which minimizes downtime to any applications that rely on it this service can migrate data to and from most widely used commercial and open source databases it supports homogeneous migrations such as oracle to oracle as well as heterogeneous ones such as from oracle to microsoft sql server for example or to amazon aurora so with this service you can continuously replicate your data with high availability and consolidate databases into petabyte-scale data warehouses by streaming data to amazon redshift and amazon s3 now one neat thing about this is when you're migrating databases to amazon aurora or amazon redshift or amazon dynamodb you can use the migration service free for up to six months so amazon is really trying to promote their aurora and their dynamodb databases and it makes sense also because they are considerably better for most use cases and some of the benefits that you guys can see on the screen first and foremost it's extremely simple to use through the management console minimal downtime because like i mentioned your host database is up and running during the migration process it supports widely used databases it's extremely low cost and fast and easy to set up and as with most amazon services it is extremely reliable now the migration service also has something called a schema conversion tool which makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects including views stored procedures and functions to a format compatible with the target database any objects that cannot be automatically converted are clearly marked so that they can be manually converted to complete the migration this conversion tool can also scan your application source code for embedded sql statements and convert them as part of the database schema conversion project during this process the schema tool performs cloud native code optimization by converting legacy oracle and sql server functions to their equivalent aws services thus helping you modernize the application at the same time as the database migration now once schema conversion is complete the tool can help migrate data from a range of data warehouses to amazon redshift which is amazon's version of a data warehouse using built-in migration agents now the table you guys see on the screen shows the conversions that the tool currently supports now don't worry you do not have to memorize or know these conversions i have just put them on there as food for thought it's good to know especially if you're looking to continue on your certification path and get the solutions architect or the devops or the sysops and this will come in very handy
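as an illustration only and not something configured in the lecture here is a minimal boto3 sketch of kicking off a migration with the database migration service the arns are placeholders and it assumes the source and target endpoints plus a replication instance have already been created

```python
import json
import boto3

dms = boto3.client("dms")

# the arns below are placeholders; endpoints and a replication instance would
# already have been set up in the dms console or via the api
task = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-demo",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    # full-load-and-cdc copies the existing data and then keeps replicating
    # ongoing changes, which is what lets the source database stay operational
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-everything",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
print(task["ReplicationTask"]["Status"])
```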
lastly two of the unique options that are offered by amazon are something called the snowball and something called the snowmobile now the snowball which is the device that you guys see on the bottom right is a petabyte scale data transport solution that uses secure appliances to transfer large amounts of data into and out of aws now how this came about is originally what companies did is they mailed their physical hard drives to amazon data centers where somebody unpacked them plugged them in and uploaded the data that became very cumbersome and as aws grew it became more and more cumbersome and unattainable so they've developed these highly secured devices so with snowball you don't need to write any code or purchase any hardware to transfer your data simply create a job in the management console and the snowball appliance will automatically be shipped to you now once it arrives attach the appliance to your local network download and run the snowball client to establish a connection and then use the client to select the file directories that you want to transfer the client will then automatically encrypt and transfer the files to the appliance at high speeds once the transfer is complete and the appliance is ready to be returned the shipping label will automatically update and then you can track the job status using your management console now the snowball uses multiple layers of security designed to protect data including tamper-resistant enclosures 256 bit encryption and the industry standard trusted platform module or tpm designed to ensure both security and full chain of custody of your data on a larger scale we have the snowmobile which is an exabyte scale data transfer service used to move extremely large amounts of data to aws you can transfer up to 100 petabytes per snowmobile a 45 foot long rugged shipping container pulled by the semi-trailer you guys see now snowmobile makes it easy to move massive volumes of data to the cloud just like the snowball the snowmobile also uses multiple layers of security designed to protect your data such as gps tracking alarm monitoring 24 7 video surveillance and an optional escort security vehicle while in transit and again all data is encrypted in transit and at rest so just an exam tip keep in mind that the snowmobile is an exabyte scale data transfer service and for a bit smaller transfer that's when you would use the snowball so let's go ahead and log into our management console and see what the migration hub looks like hi everybody and welcome to this lesson on looking at the virtual private cloud or vpc now the cloud practitioners exam does have a few questions related to vpc so it is a very important topic and then if you're moving on to the solutions architect that one focuses a lot more heavily on the virtual private cloud and the details associated with it so the vpc lets you provision logically isolated sections of the cloud where you can launch resources in a virtual network that you define you literally have complete control over your virtual networking environment including selection of your own ip address range subnets routing tables and gateways you can use both ipv4 and ipv6 in your vpc for secure and easy access to resources and
applications you can easily customize the network configuration for your vpc for example you can create a public-facing subnet for your web servers that has access to the internet and place your backend systems such as databases or app servers in a private facing subnet with no internet access you can leverage multiple layers of security including security groups and network access control lists to help control access to the ec2 instances in each subnet additionally you can also create a hardware vpn connection between your corporate data center and your virtual private cloud and leverage the aws cloud as an extension of your corporate data center now the vpc provides advanced security features such as security groups and access control lists to enable inbound and outbound filtering at the instance and subnet levels in addition you can store data in s3 and restrict access so that it's only accessible from instances in your vpc optionally you can choose to launch dedicated instances which run on hardware dedicated to a single customer for additional isolation you can create a vpc quickly and easily using the aws management console you can select one of the common network setups that best matches your needs and start the wizard and we'll take a look at the wizard and the different options that are available during the lab that's coming up next additionally it provides all of the same benefits as the rest of the aws platform in terms of scalability and reliability now a variety of connectivity options exist for the amazon vpc you can connect your vpc to the internet to your data center or to other vpcs known as vpc peering based on the aws resources that you want to expose publicly and those that you want to keep private so for example you can connect directly to the internet by using public subnets where you can launch instances or you can connect to the internet using a nat or network address translation private subnets can be used for instances that you do not want to be directly addressable from the internet instances in a private subnet can access the internet without exposing their private ip by routing their traffic through a network address translation or nat gateway in a public subnet and just an exam tip always keep in mind that a nat gateway always sits in a public subnet you can also connect securely to your corporate data center all traffic to and from instances in your vpc can be routed to your data center over an encrypted ipsec hardware vpn connection and again you can also privately peer vpcs which is connecting one vpc to another you can privately connect to saas solutions supported by aws privatelink now just as a side note privatelink basically simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public internet privatelink provides private connectivity between vpcs aws services and on-prem applications securely on the amazon network now just some vpc limitations to keep in mind especially for exam purposes you can have up to 5 non-default vpcs per aws account per region you can have up to four secondary ip ranges per vpc you can create up to 200 subnets per vpc you can have up to five vpc elastic ip addresses per account per region and you can have up to 10 hardware vpn connections per amazon vpc now for those of you who are not familiar with what an elastic ip address is it's basically a static ipv4 address designed for dynamic cloud computing so an elastic ip address is basically associated with your aws account with an elastic ip you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account an elastic ip address is a public address that is reachable from the internet so if your instance does not have a public ipv4 address you can associate an elastic ip address with your instance to enable communication with the internet for example to connect to your instance from your local computer and keep in mind that as of right now amazon does not support elastic ip addresses for ip version 6
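just as an illustration and not something from the lecture here is a minimal boto3 sketch of the remapping idea described above the instance id is a placeholder for an instance you already have running

```python
import boto3

ec2 = boto3.client("ec2")

# allocate a new elastic ip address for use in a vpc
allocation = ec2.allocate_address(Domain="vpc")
print("allocated:", allocation["PublicIp"])

# map the address to an instance; the same call could later point the address
# at a replacement instance, which is how an elastic ip masks a failure
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId="i-0123456789abcdef0",   # placeholder instance id
)
```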
so let's go ahead and go into our management console and check out some of the other options and configurations that we are able to do with a virtual private cloud hi everybody and welcome to this tutorial on virtual private clouds or vpcs so in this exercise what we're going to do is we'll create a non-default vpc with a single subnet and just as a reminder subnets enable us to group instances based on our security and operational needs so there are two different types of subnets there's a public subnet which is accessible by the internet or a private subnet which is accessible only within that vpc so after doing that we'll create a security group for our instance that allows traffic only through specific ports we'll launch an ec2 instance into our subnet and associate an elastic ip address with that instance which will allow our instance to access the internet so first and foremost we're going to go ahead and create our vpc so let's navigate to our vpc section and from our vpc dashboard we're going to launch the vpc wizard and here we have several options in order to launch vpcs we can launch it with a single subnet we can have vpcs with public and private subnets we can have vpcs with public and private subnets and hardware vpn access or with a private subnet only and hardware vpn access and the hardware vpn basically allows us to connect our on-prem network onto the vpc hosted by aws so we'll stick with the single subnet let's give this vpc a name and up here it mentions an ipv4 cidr block now this is a bit outside the scope of the cloud practitioners exam we'll need to know about this in the solutions architect course and exam so this basically gives us the number of ip addresses that we're able to use in binary format in total there are 65 531 ip addresses available so if you want to limit the number of ip addresses that are allocated to this subnet we can do that through the cidr block the availability zone gives us a choice of where we want to create this subnet based on the region and since i'm currently in london we'll just leave it as no preference we can also additionally give this subnet a name if we have multiple subnets it's always good to differentiate between private and public ones so in the service endpoint section we can select a subnet in which to create a vpc endpoint to amazon s3 in the same region so here we can either allow dynamodb or s3 we can limit the amount of access that these services will have to this vpc or customize access and for the custom one we just have to use either this json editor or we can use the policy creation tool so with the enable dns hostnames option when set to yes it ensures that instances that are launched into our vpc receive a dns hostname and the hardware tenancy option enables us to select whether instances launched into the vpc are run on shared or dedicated hardware just keep in mind that if you select dedicated tenancy it incurs additional costs
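as a quick aside on the cidr block mentioned a moment ago and purely as an illustration here is a tiny python sketch of the arithmetic behind it using the standard ipaddress module the 10.0.0.0/16 value is an assumption matching the wizard's usual default

```python
import ipaddress

# a /16 block gives 2 ** (32 - 16) = 65,536 addresses in total
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)        # 65536

# aws reserves the first four addresses and the last address in every subnet,
# which is why a /16 subnet shows up as 65,531 usable addresses in the console
print(vpc.num_addresses - 5)    # 65531
```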
we'll go ahead and click on create vpc so if we go into our vpcs we see there are two vpcs the one that we just created and the default one that was created when we launched our first ec2 instance so now that we have our vpc created let's go ahead and create a security group and if you remember a security group basically acts as a firewall a virtual firewall to control the traffic for associated instances so for a security group we need to add inbound rules to control incoming traffic to the instance and outbound rules to control the outgoing traffic from the instance now a vpc does come with a default security group any instance not associated with another security group during launch is automatically associated with the default security group of the vpc so let's go ahead and navigate to our security group section where we're going to create a security group and here we're going to select the id of the vpc that we just created and create the security group so here you can see the security group that we just created so once we select it on the bottom we can see a description and here's where we can specify the inbound rules and the outbound rules so in the inbound rules let's go ahead and click on edit we're going to add a rule and here are all of the rules that we can add so let's go ahead and allow http traffic and the source of 0.0.0.0/0 basically means from anywhere and let's also add in https and as you guys can see it automatically detects the default ports for both http and https now that we have our vpc created and we have a security group associated with it let's add instances to this vpc so if we go back to our dashboard where before we selected launch vpc wizard this time let's go ahead and select launch ec2 instances we'll just select the default instance that we created in the previous lesson so here is where we can select the vpc that we want this to be associated with if you guys remember the last one that we created was associated with the default vpc because we only had one now that we have the other vpc created we can associate this instance with this vpc and here also if we had multiple subnets they would show here but since we only have one only one is showing up and here is where we can associate security groups with this instance so here we can either create a new security group or select an existing security group we want to select an existing one because if you recall we just created this one previously so we'll associate this instance with this security group and here it defines what the security group allows it allows both http and https traffic inbound and outbound so once we have that instance launched we go back to our vpc dashboard and attach an elastic ip to that instance and an elastic ip address is basically a static ipv4 address designed for dynamic cloud computing so an elastic ip address is associated with our aws account and with it we can mask the failure of an instance or software by rapidly remapping the address to another instance in our account and just keep in mind that the elastic ip address is a public ipv4 address which is reachable from the internet so if we were to associate an elastic ip address with the instance that we just created it will allow http and https traffic from the internet to reach this instance basically now the instance is accessible from the internet with the elastic ip address being associated with it so those are the steps that we would need to do in order to create a custom vpc
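and again just as an illustration and not the lecture's own commands here is a minimal boto3 sketch of the same lab flow the cidr blocks and the ami id are placeholders you would swap for your own values

```python
import boto3

ec2 = boto3.client("ec2")

# 1. a non-default vpc with a single subnet
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

# 2. a security group that only allows http and https in, from anywhere
sg = ec2.create_security_group(
    GroupName="web-sg", Description="allow http and https", VpcId=vpc["VpcId"]
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)

# 3. an instance launched into that subnet with the security group attached
# (the ami id is a placeholder for whatever amazon linux ami is current)
instance = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1, MaxCount=1,
    SubnetId=subnet["SubnetId"],
    SecurityGroupIds=[sg["GroupId"]],
)["Instances"][0]

# 4. associating an elastic ip, as in the earlier elastic ip sketch, would
# then make the instance reachable from the internet
print(instance["InstanceId"])
```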
but just a few things to keep in mind in security we have both network access control lists or nacls and security groups now network access control lists are applicable at the subnet level so any instance in the subnet with an associated nacl will follow the rules of the nacl the security groups as we saw have to be associated with a specific instance by default our default vpc will have a default nacl which will allow all traffic both inbound and outbound so if you want to restrict access at the subnet level it's always good practice to create a custom network access control list also keep in mind that network access control lists are stateless unlike security groups which are stateful so in a security group let's say if you add an inbound rule for port 80 it's automatically allowed out meaning an outbound rule for that particular port need not be explicitly added but for the network access control list you need to specifically provide an explicit inbound and outbound rule lastly in security groups we cannot deny traffic from a particular instance by default everything is denied and we can set rules only to allow whereas in a network access control list we can set rules both to allow and to deny and the peering connections section is where we can do vpc peering if we want to connect two aws vpcs together we can accomplish that through peering the nat gateways allow subnets let's say if you have a private and a public subnet to access the internet through a nat gateway or network address translation gateway and the endpoints and endpoint services allow applications to connect to instances in the vpc
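to make the stateless point a bit more concrete here is a small boto3 sketch that is not from the lecture the vpc id is a placeholder and the rule numbers and ports are just example values

```python
import boto3

ec2 = boto3.client("ec2")

# a custom network acl in an existing vpc (placeholder vpc id)
nacl = ec2.create_network_acl(VpcId="vpc-0123456789abcdef0")
nacl_id = nacl["NetworkAcl"]["NetworkAclId"]

# because nacls are stateless we need BOTH an inbound rule for port 80
# and an outbound rule for the ephemeral response ports
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Protocol="6",   # tcp
    RuleAction="allow", Egress=False, CidrBlock="0.0.0.0/0",
    PortRange={"From": 80, "To": 80},
)
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True, CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},   # return traffic
)
```

with a security group only the first inbound rule would be needed because the response traffic is allowed back out automatically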
hi everybody and welcome to this lesson on looking at what cloudfront is so what is cloudfront it's basically a cdn that securely delivers data videos applications and apis to customers globally with low latency and high transfer speeds it's deeply integrated into aws both with physical locations that are directly connected to the global infrastructure as well as with other aws services it works seamlessly with services including aws shield which is the ddos mitigation s3 buckets load balancing and ec2 instances now looking at some of the benefits that it offers it's massively scaled and globally distributed the network has 150 points of presence and leverages a highly resilient amazon backbone network for superior performance and availability for end users it's a highly secure cdn that provides both network and application level protection your traffic and applications benefit through a variety of built-in protections such as aws shield standard at no additional cost you can also use configurable features such as the certificate manager to create and manage custom ssl certificates at no extra cost its features can be customized for your specific application requirements lambda functions triggered by events extend your custom code across aws locations worldwide allowing you to move even complex application logic closer to your end users to improve responsiveness it also supports integrations with other tools and automation interfaces for today's devops and ci/cd environments by using native apis or aws tools it's fully integrated with aws services like i mentioned such as the s3 buckets ec2 route 53 which we'll look at later on and the media services they're all accessible via the same console and all features in the cdn can be programmatically configured by using apis or the management console so this basically shows you the global footprint of amazon and specifically of what cloudfront leverages so to deliver content to end users with low latency it uses a global network of 150 points of presence which are 139 edge locations and 11 regional edge caches in 65 cities across 29 countries don't worry you do not have to have all of these memorized this is just for your information just to demonstrate and help you understand how cloudfront is able to decrease latency some of the other features that cloudfront offers it offers protection against network and application layer attacks in conjunction with aws shield the web application firewall also known as waf and route 53 they work seamlessly together to create a flexible layered security perimeter against multiple types of attacks including network and application layer ddos attacks it has ssl and tls encryption and https and the best part is that they are enabled automatically you can use the certificate manager to easily create custom ssl certificates and deploy them to your cloudfront distribution for free with cloudfront you can also restrict access to your content through a number of capabilities with signed urls and signed cookies you can support token authentication to restrict access to only authenticated viewers through the geo restriction capability you can prevent users in specific locations from accessing content that you're distributing through cloudfront and with the origin access identity feature you can restrict access to an s3 bucket to only be accessible from cloudfront it's also programmable and devops friendly it provides developers with a full featured api to create configure and maintain cloudfront distributions in addition developers have access to a number of tools such as cloudformation codedeploy codecommit and aws sdks to configure and deploy their workloads with amazon cloudfront lastly lambda@edge helps web developers mobile developers and cloudfront customers run their code closer to their users using lambda allows you to respond to requests at the lowest latency across aws locations globally for web or mobile requests the compute requests from your users can be delivered closer to them improving their overall experience and the best part is you pay only for the compute time you use there is no charge when your code is not running which makes this extremely cost effective so let's go ahead and take a look at the different options that we can configure for cloudfront in our management console but just keep in mind for exam purposes and exam tips that cloudfront utilizes edge locations in order to distribute content and achieve the low latency that it promises hi everybody and welcome to this lesson on route 53 and before you get worried no this does not refer to a road or any sort of transportation mode route 53 is basically amazon's dns system it's a highly available and scalable cloud domain name system web service it's designed to give developers and businesses an extremely reliable and cost effective way to route end users to internet applications by translating names like www.aws.com into numeric ip addresses like 168.254.1.1 that computers use to connect to each other now route 53 is fully compliant with ipv6 as well it effectively connects user requests to infrastructure running in aws such as ec2 instances load balancers or s3 buckets and can also be used to route users to infrastructure outside of aws you can use route 53 to configure health checks to route traffic to healthy endpoints or to independently
monitor the health of your application and its endpoints the route 53 traffic flow makes it easy for you to manage traffic globally through a variety of routing types including latency based routing geodns geoproximity and weighted round robin all of which can be combined with dns failover in order to enable a variety of low latency fault tolerant architectures using the route 53 traffic flow's simple visual editor you can easily manage how your end users are routed to your application's endpoints whether in a single aws region or distributed around the globe route 53 also offers domain name registration you can purchase and manage domain names such as qasim.com and amazon route 53 will automatically configure dns settings for your domains so it's a pretty robust dns offering that amazon has done with route 53. now the benefits of route 53 are fairly standard and comparable to most of the other services which are offered by aws in terms of it being highly available and reliable it's flexible it routes traffic based on multiple criteria such as endpoint health geolocation and latency and you are also able to configure multiple traffic policies and decide which policies are active at any given time it's especially designed for use with other amazon web services you can use route 53 to map domain names to your ec2 instances to s3 buckets or to cloudfront distributions by using aws iam or identity and access management with route 53 you can get fine-grained control over who can update your dns data you can use route 53 to map your zone apex to your elastic load balancing instance amazon cloudfront distribution elastic beanstalk environment or s3 website bucket using a feature called alias records it's extremely simple and easy to set up and it's cost effective when compared to offerings from other vendors some of the key features that are offered by route 53 you can get recursive dns for your amazon vpc and on-prem networks you can create conditional forwarding rules and dns endpoints to resolve custom names mastered in amazon route 53 private hosted zones or in your on-prem dns servers in terms of traffic flow easy to use and cost effective global traffic management like i mentioned before end users can be routed based on geoproximity latency health and a bunch of other considerations and some of the other ones that i've already mentioned previously in terms of the dns failover the health checks the domain name registration the geo and private dns and then finally the load balancing integration now for those of you that are not familiar with dns or the domain name system it's basically a globally distributed service that is foundational to the way people use the internet dns is a hierarchical name structure and different levels in the hierarchy are separated with a dot for example www.aws.com so in this example the com is a top-level domain the aws is a second level domain and there can be a number of lower levels such as www below the second level domain now computers use the dns hierarchy to translate human readable names such as aws.com into ip addresses like 192.0.0.1 that computers use to communicate and connect with one another route 53 is an authoritative dns system an authoritative dns system provides an update mechanism that developers use to manage their public dns names it then answers dns queries translating domain names into ip addresses so computers can communicate with each other now lastly in case you guys are wondering where and how the name route 53 came
from it's just good food for thought it came from the fact that dns servers respond to queries on port 53 and provide answers that route end users to your applications on the internet hence the name route 53 and lastly with route 53 you don't have to pay any upfront fees or commit to the number of queries that the service answers for your domain like most other amazon web services you pay as you go and only for what you use whether you're using managed hosted zones whether you're using it for serving dns queries or whether you're managing domain names so now it's time again to log into our management console and check out route 53 and how we can get this configured for our dns use hi everybody and welcome to this lesson on looking at route 53's management console so as you guys can see i've already logged into my aws management console and i've navigated to route 53 again you can search for route 53 and you can navigate to the main dashboard for route 53 where it allows you to do everything regarding your dns management in aws so this is the main dashboard for route 53 it allows you to see all of the zones that you are currently managing it allows you to do traffic management do some health monitoring and also register domain names right through the aws console as you guys can see if there's additional domain names that you want to register you can do that right here or you can transfer your existing domains to amazon's route 53 so everything can be managed from one console for your entire organization so one thing that i do want to show you in terms of route 53 is creating a traffic policy for managing traffic from different sources so let's say i want to create a policy where i route traffic based on geography so people that are in asia get routed to a certain server as compared to people that are in the us who get routed to a certain server so there are very intricate policies that we can create through route 53 to manage our internet traffic so i'm just going to give the policy a generic name and again you can create multiple policies and you would want to name them accordingly so that you can differentiate what each policy is used for additionally you are also able to add a description here so after giving it a policy name i'm going to go ahead and click on next now here is a very intuitive gui interface that allows me to create a traffic policy based on certain metrics so as you guys can see the starting point is the dns type now i can either start with an ip address a cname a mail exchanger and different dns records so depending on where i want to start from so let's say that i'm going to go ahead and start from an ip address right so i'm going to connect it to different types of rules so i can have a weighted rule failover rule geolocation latency multi-value geoproximity and a new endpoint which is the final result so again depending on what kind of rule that you would like to create we can pick either a weighted rule let's say 70 percent of the traffic goes here and 30 percent goes here and so on but what i want to do is show you guys how to create a geolocation rule so i'm going to select geolocation and as you guys can see i can select a different location so let's say that i want everybody from africa to connect to this specific endpoint and i want everybody from let's say asia to connect to this specific endpoint and again since i specified an ip address format i have to specify an ip address so
i'm just going to put in a generic ip address so what it's going to do is everybody from africa is going to get routed to this endpoint this web server and everybody from asia is going to get routed to this web server now optimally these should be the elastic ips that have been assigned to both of these web servers depending on where they're located and optimally this web server should be located in the african region and this web server should be located in the asian region just so we can minimize the amount of latency additionally you're also able to do health checks right so we can do health checks for both of these so we can evaluate the target health to determine if traffic should be routed so what it will do is it will periodically check the health of these machines and if the machines are healthy it will keep routing traffic to this machine if for example the health checks fail it will automatically revert to the next available healthy machine which will be the other one whether africa or asia so let's say this is the generic rule that i want to create for routing traffic one other thing that i want to mention you can also import a traffic policy so if you already have a traffic policy created you are also able to import a traffic policy file in json format so if you want to go ahead and code your own json policy in your text editor you can also do that and import it here so i'm going to go ahead and create there we go so here you guys can see the traffic policy has been successfully created now i have to create policy records for this traffic policy now the policy record again this is where we're going to specify the hosted zone right for example qasim.com or yourorganization.com that you want to associate this configuration with now we can create more than one policy record in the same hosted zone or in a different hosted zone by using the same traffic policy when you create this policy record what route 53 is going to do is create a tree of records the root record is going to appear in the list of records for your hosted zone and the root record has the dns name that you specified when we created the policy record now route 53 is also going to create records for the rest of the tree but it's going to hide them from the hosted zone now in terms of the policy records right so the dns name so when you're creating a policy record we have to enter the domain name or the subdomain name for which you want route 53 to respond to dns queries so let's say i'm just going to type in a dummy name as you guys can see i had already created a dummy custom domain name so right now this would be www.qasim.com but essentially this would be you know yourorganizationname.com time to live this is the amount of time in seconds that you want the dns recursive resolvers to cache information about this record now if you specify a longer value you're going to pay less for the route 53 service because recursive resolvers send requests to route 53 less often so right now it's 60 seconds and as you guys can see it's costing 50 dollars per month but if you were to increase this this cost would significantly go down and i can also add multiple policy records within this main policy so again depending on what i want to add i can add multiple domains multiple subdomains within this now just keep in mind if you are practicing if you do create this and you have an actual domain hosted which i do not then you will start getting charged so just keep that in mind if you're practicing but still have a live domain hosted in aws and you configure a traffic policy live you'll start getting charged per month
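as an aside that is not part of the lecture the same africa versus asia split can also be expressed with plain geolocation resource record sets instead of the traffic flow visual editor here is a minimal boto3 sketch where the hosted zone id the domain name and the ip addresses are all placeholders

```python
import boto3

r53 = boto3.client("route53")

# two geolocation records for the same name: africa goes to one web server's
# ip and asia goes to the other's (placeholder values throughout)
for continent, ip in [("AF", "198.51.100.10"), ("AS", "198.51.100.20")]:
    r53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 60,
                "SetIdentifier": f"geo-{continent}",
                "GeoLocation": {"ContinentCode": continent},
                "ResourceRecords": [{"Value": ip}],
            },
        }]},
    )
```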
so that's essentially how you would create a policy record and once you create the policy record all the records will show up here and all of the traffic policies are going to show up here so you can have a number of policies and again you can have a number of policy records additionally in the dashboard you're able to see the different domains that you have registered through aws or with aws you can transfer domains you can check out your domain billing report you can also see different pending requests that you have with aws so let's say if you've sent a request to register a domain you're able to see all of those requests live within this dashboard so again route 53 is basically your dns management system with aws in which it allows you to not only customize the traffic routing within your domain or within your web server or within your company's server but it also allows you to manage specific tasks such as domain registration your billing and everything all in one console so it allows you to have a comprehensive system within the aws ecosystem hi everybody and welcome to this lesson on elastic load balancing also sometimes referred to as elb now elastic load balancing automatically distributes incoming application traffic across multiple targets whether it be an ec2 instance a container an ip address or even lambda functions it can also handle the varying load of your application traffic in a single availability zone or across multiple ones it offers three different types of load balancers that all feature the high availability automatic scaling and robust security necessary to make your applications fault tolerant and we'll look at the three that are offered in the next slide what you guys see are some of the benefits that elb brings first and foremost it's highly available because it automatically distributes incoming traffic across multiple targets it can also load balance across a region routing traffic to healthy targets in different availability zones it's also extremely secure because it works with the virtual private cloud to provide robust security features including integrated certificate management user authentication and even ssl or tls decryption and some of the other features it has it's elastic and flexible and again like with most amazon web services features it has robust monitoring and auditing that you can do through either cloudwatch or cloudtrail now one of the good things about the load balancer is that it offers the ability to balance across aws and on-premises resources using the same load balancer so this makes it easier for you to migrate burst or failover on-prem applications to the cloud now one of the balancers that it offers is the application load balancer which operates at the request level or layer 7 load balancing routing traffic to targets such as ec2 instances ip addresses and lambda functions like i mentioned it's ideal for advanced load balancing of http and https traffic the application load balancer provides advanced request routing targeted at delivery of modern application architectures including microservices and even container-based applications it simplifies and improves the security of your application by ensuring that the latest ssl or tls ciphers and protocols are being used at all times the next one is the network load
balancer which is best suited for load balancing of tcp traffic where extreme performance is required this one operates at layer 4 and it routes traffic to targets within an amazon virtual private cloud and is capable of handling millions of requests per second while maintaining ultra low latencies it's also optimized to handle sudden and volatile traffic patterns and lastly the classic load balancer which provides basic load balancing across multiple ec2 instances and operates at both the request level and the connection level now the classic load balancer is intended for applications that were built within the ec2 classic network now there are a lot more features to each of these three and we're not going to get into those because they're a bit outside of the scope of this course but just keep in mind the basic benefits of an elastic load balancer and the three main balancers that are offered by amazon web services which are the application the network and the classic so let's log into our management console and see what different features the elastic load balancing can offer us hi everybody and welcome to this demonstration on elastic load balancers so now that we know what elastic load balancers are let's see how we can configure them within aws so the first thing we want to do is navigate to where the load balancers are so let's say if i want to find elastic load balancers i will for example type elastic load balancer and as you guys can see the only option that shows up is ec2 because if you remember elastic load balancers are configured within the ec2 dashboard so i'm going to go ahead and navigate to my ec2 dashboard and towards the left hand side if i scroll down a little bit here's where i can see where the load balancing options are i'm going to go ahead and click on load balancers and it will bring me to a dashboard where it will show all of the load balancers that your organization would have as you guys can see i don't have any at the moment so i'm going to go ahead and click on create load balancer it will take me to another screen that gives me an option to either create an application load balancer create a network load balancer or a classic load balancer now a classic one is used if you have an existing application that's running on an ec2 instance which is also classified as a previous generation one a network one will handle your network traffic and an application one will handle your application traffic so let's say if we want to create an application load balancer when i click on that it'll take me to a screen where i can specify different information for my load balancer so let's say i just give my load balancer a name next i have two options whether an internet-facing one or a purely internal one so an internet-facing load balancer if you bring your mouse pointer to the i it'll give you the information it's for requests from clients over the internet and an internal load balancer purely routes requests from clients to targets using private ip addresses so these options would depend on if you have traffic coming in from the outside or if it's purely an internal load balancer for you the ip address type i will leave as the default which is ipv4 or you can have a dual stack if you want both version 4 and version 6.
in terms of the listeners we can specify http or https or if we click on add listener we can have both the regular http and the secured https towards the bottom is where we can specify what vpc we want this load balancer to reside in if we have multiple vpcs they will be showing up here and here is where we can configure how many availability zones we want the load balancer to span across as you guys can see currently i'm in northern virginia and here's where all the availability zones within this specific region are so let's say i want the load balancer to span across multiple availability zones 1a and 1b here's where we are able to specify security settings so if i go back one screen if you guys remember we can specify both http and https if we were to do https for example and if i were to click next here's where we would specify the certificates now i'm going to only do http because i do not have any certificates specified or created within aws but if you are an organization that is working on a secured channel here is where you can specify either a certificate that's managed by aws or if you have your own certificates you can also upload them here but since i don't have certificates i'm going to just stick with http and now when i click on next again you guys see i don't have any options to specify certificates when i click on next here's where we can specify which security group this elb will be tagged against you can either create a new one or select an existing one so i'll just keep the default one that's there and here's where we can configure routing again the load balancer routes requests to targets in this target group using the protocol which we specified which was http and then it's also going to perform some health checks on the target using the settings that we specify here there are also additional health check settings we can specify in terms of the thresholds the timeout the intervals and so on so i'm going to just give this a name also and we can specify the target type whether it's an instance meaning an ec2 instance whether it's an ip or whether it's a lambda function now since i selected lambda function we can specify which lambda function it's going to use and obviously if you do not have a lambda function created you are also able to add one later so let's say i were to configure this and add a lambda function later i'm going to go ahead and click on review and it will take me to a basic summary of all of the information that i have specified for this load balancer as you guys can see the name the ports i've left all the other things as default there are no security settings because we're only working with http security groups are also default and then here is the routing information i'm going to go ahead and click on create and there we go our load balancer is successfully created now if i were to click on close here as you guys can see right now it's currently provisioning my load balancer and towards the bottom we can have an overview of the load balancer we can see where it's listening we can also view the monitoring to see the traffic and how the load balancer has handled the traffic that's coming in so it gives a host of information for you to see the traffic that's coming in and that's being handled by your load balancer and finally the tags where you can specify tags let's say if this load balancer is for specific applications specific departments and so on so that's basically a general overview of creating an elastic load balancer this specific one was an application load balancer but the information is very similar if you want to create a network load balancer
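before moving on here is a minimal boto3 sketch of the same application load balancer setup it is an illustration rather than the lecture's own steps and the subnet security group and vpc ids are placeholders an alb needs subnets in at least two availability zones

```python
import boto3

elbv2 = boto3.client("elbv2")

# an internet-facing application load balancer across two availability zones
lb = elbv2.create_load_balancer(
    Name="demo-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

# the target group that requests will be routed to, with instance targets
tg = elbv2.create_target_group(
    Name="demo-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]

# the listener ties the two together: requests on port 80 are forwarded
# to the target group
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
print(lb["DNSName"])
```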
hi everybody and welcome to this lesson on the management tools offered by aws now aws provides a set of management tools that allow you to programmatically provision monitor and automate all of the components of your cloud environment using these tools you can maintain consistent controls without restricting development velocity now there are multiple categories of tools which are provided by aws that you guys see on the screen i will go through six of the main ones that are used most of the time and that are also covered in the cloud practitioners exam the first one is cloudformation and this is a very neat service that provides a common language for you to describe and provision all infrastructure resources in your cloud environment it allows you to use a simple text file to model and provision in an automated and secure manner all the resources needed for your applications across all regions and all accounts once everything is modeled this text file serves as a single source of truth of your cloud environment you can also create a collection of approved cloudformation files in the aws service catalog to allow your organization to only deploy approved and compliant resources now what you guys see is the basic process that cloudformation takes first you code your infrastructure from scratch using a cloudformation template and when we log into the management console i will show you the template that is provided by aws and it's in either a yaml or json format once you've developed your template you can use cloudformation via the browser console or the command line tools or even apis to create a stack based on your template code and once you've defined everything that you want to do in your cloud environment cloudformation provisions and configures the stacks and resources you specified in your template so it's basically a codified way of provisioning and deploying your entire cloud environment next we have cloudwatch which is a monitoring service for aws cloud resources and the applications you run on aws you can use cloudwatch to collect and track metrics collect and monitor log files set alarms and automatically react to changes in your aws resources it can monitor resources such as ec2 instances dynamodb tables rds db instances as well as any custom metrics or log files generated by your applications it also provides a stream of events describing changes to your aws resources that you can use to react to changes in your applications so it basically gives you complete visibility into your cloud resources and your applications next we have something called cloudtrail and this is a service that enables governance compliance operational auditing and risk auditing of your entire aws account so with cloudtrail you can log continuously monitor and retain account activity related to actions across your aws infrastructure it provides event history of your aws account activity including actions taken through the aws management console any sdks command line tools and any other aws services this event history basically simplifies security analysis resource change tracking and troubleshooting so if you ever want to find out who has logged into the aws console or which applications or people have accessed resources such as s3 buckets this cloudtrail keeps a detailed record of
everything that is accessed in your entire cloud environment so while cloudwatch gives you an overview of the performance of your aws infrastructure cloudtrail keeps a detailed track for auditing purposes of what's going on and who's accessed what next we have the ec2 systems manager which basically helps you safely manage and operate your resources or your ec2 instances across your entire aws infrastructure you're able to create groups of resources across all aws services such as applications or different layers of an application stack and the best part is it gives you a very neat dashboard where you can visualize and aggregate data across your aws account or if you are managing multiple aws accounts you can also aggregate data across multiple aws accounts and then obviously this allows you to respond to insights and take operational actions across resource groups whether it be finance resources or hr resources so it depends on how you have them grouped within your infrastructure you're able to monitor everything with the ec2 systems manager so the ec2 systems manager is very easy to use you simply access the manager from the ec2 management console select the instances you want to manage and then define the management tasks now there are also multiple tools available for you to manage the ec2 instances for example there's a run command which provides a simple way of automating common admin tasks like remotely executing shell scripts or powershell commands installing software or even making changes to your os the state manager helps you define and maintain consistent os configurations such as firewall settings and anti-malware definitions to comply with your policies the inventory just like the name suggests helps you collect and query configuration and inventory information about your instances and the software installed on them so you can gather details about your instances such as the applications that are installed dhcp settings agent details and much much more the maintenance window lets you define a recurring window of time to run admin and maintenance tasks across your instances so this ensures that installing patches and updates or even making configuration changes does not disrupt business critical operations the patch manager again like the name suggests helps you select and deploy os and software patches automatically across large groups of instances and finally the automation simplifies common maintenance and deployment tasks such as updating amis also known as amazon machine images use the automation feature to apply patches update drivers and agents or bake applications into your ami using a streamlined repeatable and auditable process the next management tool we have is aws config and this is a fully managed service that provides you with an aws resource inventory configuration history and configuration change notifications to enable security and governance the config rules feature enables you to create rules that automatically check the configuration of aws resources recorded by aws config with this you can discover existing and deleted aws resources determine your overall compliance against rules and dive into configuration details of a resource at any point in time now these robust capabilities enable compliance auditing security analysis resource change tracking and even troubleshooting
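purely as an illustration of the config rules idea and not something shown in the lecture here is a minimal boto3 sketch it assumes a configuration recorder is already set up in the account and uses one of the aws managed rule identifiers

```python
import boto3

config = boto3.client("config")

# register an aws managed rule that checks every s3 bucket for versioning
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-versioning-enabled",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_VERSIONING_ENABLED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)

# later, check how the account is doing against that rule
result = config.describe_compliance_by_config_rule(
    ConfigRuleNames=["s3-versioning-enabled"]
)
for item in result["ComplianceByConfigRules"]:
    print(item["ConfigRuleName"], item["Compliance"]["ComplianceType"])
```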
now the next one is a very neat tool that comes within the management suite the trusted advisor which is basically an online resource to help you reduce cost increase performance and improve security by optimizing your aws environment the advisor provides real-time guidance to help you provision your resources following aws best practices so it basically scans your infrastructure compares it to what aws has defined as its best practices and provides recommended actions across five categories which are cost optimization performance security fault tolerance and service limits so just as an exam note please keep these five categories in mind which are cost optimization performance security fault tolerance and service limits because the exam sometimes asks you specific questions about the trusted advisor don't worry about the details of what it can do within these that's also outside of the scope of this course but just keep in mind and memorize these five categories then we have the personal health dashboard which provides alerts and remediation guidance when aws is experiencing events that might affect you now while the service health dashboard displays the general status of aws services the personal health dashboard gives you a personalized view into the performance and availability of the aws services underlying your aws resources the dashboard displays relevant and timely information to help you manage events in progress and provides proactive notification to help you plan for scheduled activities the personal health dashboard alerts are automatically triggered by changes in the health of aws resources giving you event visibility and guidance to help quickly diagnose and resolve issues there's also aws opsworks which you guys might come across that's a bit outside of the scope of this course and the exam really does not focus or ask about opsworks it's more geared towards the sysops exams but just as a note it's basically a configuration management service that uses chef as an automation platform that treats server configurations as code so just like with cloudformation where you can code your entire aws platform opsworks does the same thing but on a limited scale limited to how servers are configured and operated so let's go ahead and log into our management console and check out the different tools that are available for us to manage our aws platform hi everybody and welcome to this demonstration on looking at the different management tools that are available within aws so if i were to go ahead and go on to the services menu these are all of the different management tools that are available within aws now i'm not going to go through all of them because that would be a bit outside the scope of this course i'll just show you a couple of the main ones that are used quite often within aws one of them which you should already be familiar with is cloudwatch because if you have been going through this course in the beginning we showed you how to create different billing alarms which is also done through cloudwatch so let's say if i were to go into the cloudwatch dashboard now the overview just gives me a basic overview of what's going on within cloudwatch so let's say if i were to go into the dashboards here's where you can see all the different dashboards that are created for us to monitor our different metrics within aws so let's say i want to create a dashboard to monitor one of my s3 buckets i'll give this dashboard a specific name i'm going to go ahead and click on create and when i do that it gives me an option to choose what type of information i want to be displayed within that specific dashboard whether i want a line a stacked area a number text or
query results so what you are trying to monitor will determine how you want that data to be visualized so since it's an s3 bucket and i just want to monitor the progress of it i'm going to go ahead and select this specific line widget and click on configure when i do that here are all the different metrics that are available for us to configure within cloudwatch api gateways dynamodb and so on what i'm interested in is s3 and as you guys can see it also lets you know how many metrics are available within each of these functions so s3 has six metrics rekognition has 12 metrics and so on so when i click on s3 i have storage metrics as you guys can see i have three buckets that are created within my account and for each bucket i'm able to monitor the bucket size in bytes and the number of objects within that bucket so let's say if you have 30 buckets within your account here you will have 60 different options to choose from because let's say if i only wanted to monitor the size of the bucket i can select this or if i want to monitor the size and the number of objects i would select both of these options and i will create the widget and there we have it now when either the size of the bucket or the number of objects in that bucket increases this will also update another thing we can do within our cloudwatch metrics is specify different events so let's say if i want a specific rule to be triggered when something happens i can specify that here so if i were to go ahead and click on create rule i can create a rule for a service so let's say if i were to stick with s3 again i'm going to go ahead and find s3 and for the event i can specify bucket level operations or object level operations so let's say if i were to stick with bucket level operations i can see if anything happens within that bucket or a specific operation within that bucket and for a specific operation it'll give me more detailed information so let's say i want to be notified if anything happens to my bucket let's say if a bucket is deleted i want the administrator to be notified because again that is a big thing to happen if an entire s3 bucket is deleted so let's say i want to be notified if an s3 bucket is deleted and as you guys can see it also shows you the json event pattern that it's writing for you and here's where we can let aws know what will happen through a lambda function if this s3 bucket were to be deleted and here's where you can specify again a specific lambda function so let's say if you have configured a lambda function to trigger an sms or an ses email we can specify that here and then i can click on configure details to configure the rule details and create the rule so now every time a bucket is deleted it will trigger that lambda function which will send an email or whatever the lambda function is configured to do to let the specific person or the group know that a bucket has been deleted and again you can specify rules for any of the services that are within aws so that's an overview of cloudwatch
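here is a minimal boto3 sketch of roughly the same event rule just to illustrate it is not from the lecture the lambda arn is a placeholder and in practice the function would also need permission for events to invoke it

```python
import json
import boto3

events = boto3.client("events")

# a rule that fires whenever a deletebucket api call is recorded
# (bucket-level s3 api events reach cloudwatch events via cloudtrail)
events.put_rule(
    Name="notify-on-bucket-delete",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["DeleteBucket"],
        },
    }),
)

# point the rule at a lambda function that sends the notification
events.put_targets(
    Rule="notify-on-bucket-delete",
    Targets=[{
        "Id": "notify-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:notify-admin",
    }],
)
```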
now let's say if we were to go into cloudformation and if you remember from the brief overview of cloudformation we can basically configure an entire network through cloudformation stacks as you guys can see i've configured a couple of cloudformation stacks that are provided through amazon for sumerian and here's where it lets you know how many cloudformation stacks you have deployed you can create a stack and creating a stack is a bit outside the scope of this course but just to let you know there are templates provided by aws for creating stacks so you can use one of the sample templates that aws has if you want to let's say create a wordpress blog or you want to create a ruby on rails stack these are the default templates that are available from aws so let's say if you want to create a wordpress blog through cloudformation i'm going to go ahead and select that and i will click on next well before i do that if i go back here is where you can view what the cloudformation stack actually looks like so if i click on view in designer now here are all of the services that are being provisioned through this cloudformation stack on the bottom here it shows you the json so if you want to write the json template yourself you are also able to do that but it'll give you a visual overview of what the cloudformation stack looks like in terms of all of the different services that are going to be provisioned for example the load balancers the database instance the ec2 instances the web servers and so on so all of this is provisioned when i provision this cloudformation stack now if i were to close this and let's say i do not want to use one of the templates provided by aws that you guys can see here let's say you want to design your own cloudformation template you are also able to do that through the designer itself so if you go into the designer it will give you a blank slate to design your entire network and on the left side are all of the services that you are able to use so let's say if you want to start off with an ec2 host you just click and drag this out onto the screen and there we go and as you guys can see on the bottom is where we see the json file which we can modify if for example you want to modify the json through code or we can also use the visual designer on top so it's a very powerful tool for you to deploy and manage entire networks within aws and the best part about this is for example let's say if you already deployed a network in one of your locations and you want to duplicate that in a new location that your business is opening up you are able to do that through a cloudformation stack so you can have default cloudformation templates so whenever a new office opens up it'll simply launch this cloudformation stack and they will be ready to go so it's a very powerful tool for you to utilize and deploy entire networks within aws
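To make the idea of "infrastructure as code" concrete, here is a minimal sketch of launching a stack from Python rather than the console. This is not the WordPress sample template from the demo, just a tiny one-resource template; the stack name, AMI ID and instance type are placeholders you would replace for your own region and account.

```python
# Sketch: launch a tiny CloudFormation stack from code. Assumes boto3 and
# credentials are configured; the AMI ID below is a placeholder, not a real image.
import json
import boto3

cfn = boto3.client("cloudformation")

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal demo stack with a single EC2 instance",
    "Resources": {
        "DemoInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
                "InstanceType": "t2.micro"
            }
        }
    }
}

cfn.create_stack(
    StackName="demo-stack",
    TemplateBody=json.dumps(template)
)

# Block until the stack finishes creating before using its resources.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")
```

The same template body could be reused to stand up an identical stack in another region, which is exactly the "duplicate the network for a new office" idea described above.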
so if i close this designer the last thing that i want to show you within the management tools is cloudtrail now cloudtrail basically lets you monitor everything that's happening within aws so it's actually a great auditing tool for your organization to use to know what's going on within your aws network so here's where it'll give you a dashboard of all of the different trails that are configured what happened and the recent events that have been recorded through those trails if you click on view trails it will show you all of the trails that have been created by the organization you can also create new trails for anything that you want within your network we can save that data to s3 buckets or we can trigger lambda functions for something to happen so let's say if you want emails of these audit trails to be sent to management or your it administrators you are also able to do that so it's a very powerful tool for auditing purposes so if you are an administrator of your aws network you always want to be notified about everything that's going on and again most organizations also have periodic audits of their network so this would be a great tool for those meetings or for those reviews to let everybody know all of this has happened within the past month or within the past quarter and so on so that's a general overview of the different management tools and again all of these are available but most of them are a bit outside of the scope of this course but please do take the time to go through the rest of them get them configured and go through the steps to get you guys familiarized with the different management tools that are available within aws
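Before moving on, here is a hedged sketch of creating a trail like the one described above from code: it writes log files to an S3 bucket and starts logging. The trail and bucket names are placeholders, and the bucket would already need a bucket policy that lets CloudTrail write to it.

```python
# Sketch: create a CloudTrail trail that delivers logs to S3 and start logging.
# Assumes the destination bucket exists and its policy allows CloudTrail to write.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",                 # placeholder trail name
    S3BucketName="my-example-audit-logs",   # placeholder bucket
    IsMultiRegionTrail=True                 # capture events from all regions in one trail
)

# A trail records nothing until logging is started.
cloudtrail.start_logging(Name="org-audit-trail")

# Quick check: print the most recent management events recorded in this region.
for event in cloudtrail.lookup_events(MaxResults=5)["Events"]:
    print(event["EventName"], event["EventTime"])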
hi everybody and welcome to this lesson on the messaging tools that are offered by aws now there are three main tools that aws uses for messaging which are sqs sns and ses so let's go ahead and take a look at each one of these in a little bit more detail so first there's something called sqs which is the simple queue service it's basically a fully managed message queuing service that enables you to decouple and scale microservices distributed systems and serverless applications it eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating their work using sqs you can send store and receive messages between software components at any volume without losing messages or requiring other services to be available there are also two different types of message queues that are offered by sqs there's the standard queue which offers maximum throughput best effort ordering and at least once delivery and then there are the first in first out or fifo queues which are designed to guarantee that messages are processed exactly once in the exact order that they are sent so again how you are configuring sqs and what you'll be using it for will determine what type of queue you will deploy so again some of the benefits of sqs it manages all ongoing operations and underlying infrastructure needed to provide a highly available and scalable message queuing service the best part about sqs is that there is no upfront cost no need to acquire install and configure messaging software and no time consuming build out and maintenance of supporting infrastructure the queues are dynamically created and scale automatically so you can build and grow apps quickly and efficiently it's also extremely reliable like all aws services and it keeps sensitive data secure so you can use sqs to exchange sensitive data between applications using server-side encryption to encrypt each message body the sqs sse integration with the key management service allows you to centrally manage the keys that protect sqs messages along with the keys that protect your other aws resources and then like i mentioned you can scale elastically and cost effectively since there's no upfront cost related to sqs then we have sns also known as the simple notification service now this is a highly available durable secure fully managed publish and subscribe messaging service that enables you to decouple microservices distributed systems and serverless applications just like sqs does sns provides topics for high throughput push-based many-to-many messaging so using sns topics your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing including amazon sqs queues additionally sns can be used to fan out notifications to end users using mobile push sms and even email so you might be asking well what's the difference between sns and sqs well with sns you can send push notifications to apple google fire os and windows devices as well as to android devices you can use sns like i mentioned to send sms messages to mobile devices also while sqs is mainly used to decouple applications or integrate applications messages can be stored in sqs for a short duration of time a maximum of 14 days sns distributes several copies of a message to several subscribers such as sqs queues for example let's say you want to replicate data generated by an application to several storage systems you could use sns and send this data to multiple subscribers each replicating the messages it receives to different storage systems whether it be s3 or glacier so let's go ahead and log into our management console and see what some of the differences between sns and sqs are
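To tie the two services together, here is a minimal sketch of the fan-out pattern just described: one SNS topic pushing a copy of each published message into an SQS queue, which a worker then polls. The topic and queue names are placeholders, and in a real setup the queue would also need an access policy that allows the SNS topic to send to it.

```python
# Sketch: SNS -> SQS fan-out. One topic, one queue subscriber (you could add more
# queues, email addresses, etc.). Assumes boto3 and credentials are configured.
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

# Standard queue: maximum throughput, at-least-once delivery, best-effort ordering.
queue_url = sqs.create_queue(QueueName="demo-standard-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# SNS topic that fans messages out to its subscribers.
topic_arn = sns.create_topic(Name="demo-topic")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publish once; every subscriber (here just the one queue) receives its own copy.
sns.publish(TopicArn=topic_arn, Message="hello from sns")

# Consume from the queue the way a decoupled worker would.
messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5).get("Messages", [])
for msg in messages:
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

This also illustrates the difference in roles: the topic pushes copies out immediately, while the queue holds each copy until a consumer reads and deletes it.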
hi everybody and welcome to this tutorial on cloudfront so in this exercise what we're going to do is a basic configuration of cloudfront where we will store some documents and some objects in an s3 bucket and then we'll create distributions of that content such as text or graphics through cloudfront so in order to do that let's go ahead and first create an s3 bucket that we'll be using for this cloudfront distribution so after this bucket is created let's go ahead and upload some documents into this bucket so here i've added one picture and one pdf document and in the manage public permissions we want to make sure that we grant public read access to these objects we'll keep the storage class as standard so here we have the two documents uploaded now let's just confirm that they are accessible from the internet so if we go into this object it gives us a link that should be accessible from the internet so if i copy this link as you guys can see this picture is accessible through this link on the internet so after that let's go ahead and create our cloudfront distribution so i'm going to navigate to cloudfront and create a new distribution now when you're creating a distribution with cloudfront if you remember we have two options either web or rtmp we'll be using the web one because we're just distributing files and documents over http and https but if we were using adobe flash media server or the rtmp protocol we would use that distribution method instead now here for the origin domain name we want to select the s3 bucket that we just created and we want to make sure we don't restrict access to this bucket now basically if we accept the default values what cloudfront will do is it'll forward all requests that use the cloudfront url for the distribution to the amazon s3 bucket that we've specified it'll allow end users to use either http or https to request our objects it'll cache our objects at the cloudfront edge locations for 24 hours it'll forward only the default request headers to the origin and not cache the objects based on the values in the headers it'll exclude cookies and query string parameters if any when forwarding requests for objects to our origin which is the s3 bucket it's not configured to distribute media files in a streaming format and lastly it'll allow everyone to view our content here's where we can specify the distribution settings we can either use all edge locations or use only us canada europe and so on but obviously if it is for a global distribution you want to use all edge locations now aws waf the web application firewall allows us to block http and https requests based on criteria that we can specify and here we can choose a web acl or access control list to associate with this distribution but since we don't have any access control lists or waf rules defined no option will show up here in the cname field we can specify one or more domain names that we want to use in urls for the objects instead of the domain name that cloudfront assigns when we create the distribution here's where we can specify any ssl certificates if we have any custom ones or use the one provided by cloudfront and additionally you have options for logging and cookie logging and then the distribution state enabled or disabled now select enabled if you want cloudfront to begin processing requests as soon as the distribution is created or disabled if you do not want cloudfront to begin processing requests after the distribution is created so let's go ahead and create this distribution all right so now that the distribution is deployed and we can see the status is deployed let's go ahead and test our links now if you recall we created this s3 bucket for our cloudfront distribution and i added two different files to it one was a jpeg and one was a pdf now if i click on this jpeg in order to access this jpeg directly from the s3 bucket i pasted its link into my browser and it opened up this picture now that we've set up cloudfront i want to make sure that instead of accessing the image from this link which goes directly to our s3 bucket i access the image from our cloudfront cache so if we go back into our cloudfront distribution we see that there's a domain name that's been provisioned for it if i simply copy and paste this link we can see that this picture is also accessible via this new domain name on cloudfront so right now i'm accessing a cloudfront edge location in order to access this image i'm not going to my s3 bucket which is located in the london region additionally if we click on our distribution id we can get additional information and make changes and edit the origins and so on that gets a bit out of the scope of this course but if you want to add additional s3 buckets or change the behaviors we are able to do that here
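As a rough code-level companion to the test we just did in the browser, the sketch below uploads a publicly readable object (the way the demo set permissions in the console) and then fetches it through both the bucket's regional S3 URL and a CloudFront domain name. The bucket name, region and CloudFront domain are placeholders, and note that newer buckets block public ACLs by default, so this mirrors the older console behaviour shown in the demo rather than a recommended production setup.

```python
# Sketch: upload a public object and fetch it via S3 directly and via CloudFront.
# All names below are placeholders; the distribution must already exist.
import boto3
import urllib.request

s3 = boto3.client("s3")
bucket = "my-example-cloudfront-bucket"   # placeholder bucket from this walkthrough

with open("demo-picture.jpg", "rb") as f:
    s3.put_object(
        Bucket=bucket,
        Key="demo-picture.jpg",
        Body=f,
        ContentType="image/jpeg",
        ACL="public-read"                 # public read access, as granted in the demo
    )

# Direct-to-origin URL: always served from the bucket's home region (e.g. london).
s3_url = f"https://{bucket}.s3.eu-west-2.amazonaws.com/demo-picture.jpg"

# Same object served from the nearest edge location via the assigned domain name.
cloudfront_url = "https://d1234abcd5678.cloudfront.net/demo-picture.jpg"  # placeholder

for url in (s3_url, cloudfront_url):
    with urllib.request.urlopen(url) as resp:
        # CloudFront responses include an x-cache header (hit/miss from cloudfront);
        # the direct S3 response will not have one.
        print(url, resp.status, resp.headers.get("x-cache"))
```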
now let's review the well architected framework which we've touched on before so as we move forward the lectures are based on the well architected framework which is a collection of white papers now this forms primarily the essential knowledge that you need to not only pass the exam itself but also really become an effective solutions architect the well architected framework is simply a collection like i mentioned of best practices processes techniques and concepts that aws has gathered over the years through hands-on experience with various amazon clients from across the globe large enterprise companies such as netflix or 20th century fox so it captures how these enterprise companies have actually solved their challenges so the well architected framework is presented in five pillars you have the reliability pillar which talks about availability and building systems that are reliable and resilient you have the performance efficiency pillar which we are going to talk about in depth basically the efficient use of compute and storage resources and other tools within the aws ecosystem then you have the security pillar which talks about securing our applications our network and our data the cost optimization pillar guides decisions on how to save money for the organization and of course lastly we have the operational excellence pillar which is really about the processes methods and practices that the business uses to deploy and manage aws services so again as we move forward within this course the rest of the lectures are primarily based on these five pillars so we'll be designing architectures and we'll be talking in depth about these five pillars because that's really what the exam guide entails and also in my own real world experience designing architectures these five pillars are extremely important so as we move forward we'll be learning more and more about each of these areas so i would highly encourage you to watch certain lectures over and over again because sometimes you have to go back to a lecture and take another look at the concept and the architecture itself i'm going to highlight as we move forward which lessons are important and which lectures you need to review a few times so i hope this helps if you have any questions feel free to reach out with that let's move to the next lesson
Info
Channel: ClayDesk E-Learning
Views: 40,738
Rating: 4.9360638 out of 5
Keywords: AWS Certified Cloud Practitioner 2021 FULL COURSE for Beginners, aws full course claydesk, aws full course 2021, aws certified cloud, aws certified cloud practitioner training, aws certified cloud practitioner exam, cloud computing, aws tutorial for beginners, aws certified cloud practitioner training 2020 - full course, aws tutorial, s3, aws certified cloud practitioner 2019 full course for beginners, aws cli, aws certified cloud practitioner, aws for beginners, Aws certified
Id: SpfO55NPhx8
Length: 219min 54sec (13194 seconds)
Published: Thu Feb 04 2021