AWS Certified Solutions Architect - Associate 2020 (PASS THE EXAM!)

Captions
Hey, this is Andrew Brown from ExamPro, and I'm bringing you another free AWS certification course. This one happens to be the most popular and in-demand: the Solutions Architect Associate certification. If you're looking to pass that exam, this is the course for you. We are going to learn a broad range of AWS services, learn how to build applications that are highly available, scalable, and durable, and learn how to architect solutions based on the business use case. If you're looking at this course in 2020 and wondering if you can use it to pass, you definitely can, because the video content was shot in late 2019; the only difference is that the AWS interface has changed a little bit aesthetically, but it's more or less the same. If you want a high chance of passing, I definitely recommend that you do the hands-on labs here in your own AWS account. If you enjoy this course, I want to get your feedback, so share anything you experienced throughout the course, and if you pass, I want to hear that as well. I hope you enjoy the course, and good luck studying. Hey, this is Andrew Brown from ExamPro, and we are going to look at the Solutions Architect Associate and whether this certification is a good fit for you. The first thing to know is that this kind of role is about finding creative solutions by leveraging cloud services instead of reinventing the wheel. It's all about big-picture thinking, so you're going to need broad knowledge across multiple domains. It's great for people who get bored easily, because you'll wear multiple hats, and it's less about how we are going to implement this and more about what we are going to implement: you come up with an architecture using multiple cloud services, and then you pass it on to your cloud engineers to actually implement it. It's not uncommon for a solutions architect to be utilized within the business development team, so it's not unusual to see solutions architects who are charismatic speakers and extroverts, because they have to talk to the other companies they collaborate with. To give you a good idea of what a solutions architect does: they create a lot of architectural diagrams. Here I've pulled a bunch from the internet, and you can see the complexity and how they tie different services together. The role requires constant learning, because AWS is always adding new services, and figuring out how they all fit together is a common task. Advice I've gotten from senior solutions architects at large companies is that you're always thinking about pricing, and you're always thinking about whether you can secure whatever it is you're building. AWS has its own definition of the role built around the five pillars of the Well-Architected Framework, which we'll learn as we go along. So what value do we get out of the Solutions Architect Associate? Well, it is the most popular AWS certification of every single one. It's highly in demand with startups, because small-to-medium startups just need people who can fill any possible role and help wherever help is needed.
And because you're going to have broad knowledge, you're going to be considered very valuable. It is recognized as the most important certification at the associate level, and it's really going to help you stand out on a resume. I would not say the associate is going to increase your salary too much, but you're definitely going to see a lot more job opportunities; to see those increases in salary, you're going to have to get the professional and specialty certifications. If you're still not sure whether you should take the Solutions Architect Associate, let me give you a little more information. It is the most in-demand AWS certification, so it has the most utility of any certification because of that broad knowledge. It's not too easy, but it's not too hard: not too easy in the sense that the information you're learning is superficial, because it's actually going to be very useful on the job, but also not so hard that you risk failing the exam because you don't know the nitty-gritty of every service. It requires the least amount of deep technical knowledge, so if you're more of an academic or theory-based learner rather than someone with hands-on experience, you're going to excel at the Solutions Architect Associate. When in doubt, take this certification, because it gives you the most flexible future learning path. I always say that if you aren't sure which specialty you want to pursue, take the Solutions Architect Associate first so you can familiarize yourself with all the different kinds of roles you can encounter. So if you're thinking about doing big data, security, or machine learning, I would absolutely take the Solutions Architect Associate first. Of course, you can always do the Solutions Architect Professional if you want to keep going down this specific path. And if you are new to AWS and cloud computing in general, I strongly recommend that you take the CCP (Certified Cloud Practitioner) before the Solutions Architect Associate, because it's a lot easier and it gives you more foundational knowledge, so you'll have a much easier time with this exam. It's the direct upgrade path: everything you learn in the CCP is directly applicable to the Solutions Architect Associate. So how much time are we going to have to invest in order to pass the Solutions Architect Associate? That depends on your past experience, and I've broken it down into three archetypes to give you an idea of the time investment. If you are already a cloud engineer working with AWS on a day-to-day basis, you're looking at around 20 hours of study; you could pass this in a week. If you are a bootcamp grad, it's going to take you one to two months, so between 80 and 160 hours of study. If you have never used AWS or even heard of it, then you probably should go take the Certified Cloud Practitioner first; it has a lot more foundational information and will make things easier, because you might start here and feel overwhelmed by missing information. If you are a developer who has been working in the industry for quite a few years but has just never used AWS, then you're looking at about one month, or roughly 80 hours, of study.
That should give you an idea of how much time you need to commit. Now let's touch on the exam itself. The exam costs $150 USD, and you have to take it at a test center that is partnered with AWS, so you go through the portal, book it, and then go down to that test center to write the exam. You get 130 minutes to complete it, there are 65 questions, the passing score is around 72%, and once you have the certification it's valid for three years. Hopefully that gives you some perspective on whether the Solutions Architect Associate is right for you. Here on the right-hand side I have the exam guide, and I'm going to walk you through it quickly so you get a breakdown of what AWS recommends we learn, how the exam is broken up into domains, and how it's scored. Looking at the content outline first, you can see it's broken up into five domains, with a bunch of additional information: design resilient architectures, define performant architectures, specify secure applications and architectures, design cost-optimized architectures, and define operationally excellent architectures. I highlighted the words resilient, performant, secure, cost-optimized, and operationally excellent because they map to the five pillars of the Well-Architected Framework, which is a recommended read for studying here. So there is a rhyme and reason to this layout, which we'll talk about when we get to the white paper section. Let's look inside each of these domains. For resilient architectures: choose reliable and resilient storage, so that's Elastic Block Store, S3, and all the different storage options available to us; determine how to design decoupling mechanisms using AWS services, so that's application integration such as SQS and SNS; determine how to design a multi-tier architecture solution, so when you have tiers you'd have your database layer, your web layer, your load-balancing layer, and that's probably what they mean by tiers; and determine how to design highly available and/or fault-tolerant architectures, which means knowing how to use Route 53, load balancing, auto scaling groups, what happens when an AZ goes down, what happens when a region goes down, that kind of stuff. Next is define performant architectures: choose performant storage and databases, so that's knowing DynamoDB versus RDS versus Redshift; apply caching to improve performance, so that's knowing that DynamoDB has a caching layer and knowing how to use ElastiCache, or maybe CloudFront to cache your static content; and design solutions for elasticity and scalability, which sounds pretty much like auto scaling groups to me. Then we have specify secure applications and architectures: determine how to secure application tiers, so again there are the database, web, and network or load-balancing tiers, and there are obviously other tiers as well.
Mostly it's knowing which boxes to check to turn on security for those services and how that works from a general perspective. Then, determine how to secure data, so that's data at rest versus data in transit. Then, define the networking infrastructure for a single VPC application; this is about knowing VPCs inside and out, which we cover heavily in the Solutions Architect Associate and all the associate certifications, because it's so important. Then we have design cost-optimized architectures: determine how to design cost-optimized storage and cost-optimized compute. For storage they're probably really talking about S3, which has a bunch of storage classes that get cheaper the further down the list you go, and knowing when and how to use them; for compute, maybe they're talking about knowing when to use different kinds of EC2 instances, or using auto scaling groups to scale down when you don't have a lot of usage and reduce cost. The last one is define operationally excellent architectures: design features and solutions that enable operational excellence. I'm not even exactly sure what they're getting at there, but that's okay, because it's only worth 6% and it's definitely covered in the course; it's just funny wording. You can see the most heavily weighted domain is designing resilient architectures, and the last two are cost and operational excellence, so you're not going to be hit with too many cost questions, but you generally have to know when it makes sense to use X over Y. So that's the outline, and we'll move on to the next part: the response types. This exam, I believe, has 65 questions; I don't think the guide actually states it, but generally it's 65, and a lot of times the exams include additional unscored questions, because they're always testing out new questions. Questions come in two formats: multiple choice, the standard one out of four, and multiple response, which is choose two or more out of five or more. Generally it's two out of five, but sometimes you could get three out of six, so just be aware of that. The passing score is 720 points out of 1,000 points, so you can think of it as roughly 72%, a C minus, to pass. I put a tilde there, which means "about" or "around", because that value can fluctuate: it's not exactly 72%, so you could get 72% and still fail, or pass with slightly less, because AWS adjusts the cutoff based on how people taking the exam are performing. It doesn't fluctuate too far from that point, though; it's not going to jump to where you need 85%. The last thing here is the white papers. AWS recommends white papers for you to read, and they're not very clear about it. They list Architecting for the Cloud: AWS Best Practices, which you should definitely read.
It's not a very difficult read, so put it at the top of your reading list. Then there's the AWS Well-Architected webpage, and that page contains a bunch of white papers; this is the full list. There's the Well-Architected Framework paper, which covers the five pillars, then a white paper for each pillar, and then these other ones down below, which are newer additions. So do you have to read all of these? No. In fact, you could probably just read the top one, the Well-Architected Framework, and even reading half of it would still leave you in good shape. It is great to dive into the pillar papers, which is why they're still listed, but the last ones are 100% optional; I do not believe they are on the exam, but AWS just points you at the entire page, so it is a bit confusing. Hopefully that gives you a breakdown so you're prepared for what's ahead of you in your studying. Hey, this is Andrew Brown from ExamPro, and we are looking at Simple Storage Service, also known as S3, which is an object-based storage service. It's serverless storage in the cloud, and the key thing is you don't have to worry about file systems or disk space. To really understand S3, we need to know what object storage is: it's a data storage architecture that manages data as objects, as opposed to other storage architectures such as file systems, where you manage data as files within a file hierarchy, or block storage, which manages data as blocks within sectors and tracks. The huge benefit of object storage is that you don't have to think about the underlying infrastructure; it just works, and with S3 you have practically unlimited storage, so you just upload stuff and don't worry about disk space. S3 also comes with a really nice console that gives you an interface to upload and access your data. The two key components of S3 are S3 objects and S3 buckets. Objects are what contain your data, and they're kind of like files. An object is composed of a key, a value, a version ID, and metadata: the key is the name of the file or object, the value is the data itself as a sequence of bytes, the version ID exists if you're using versioning (you have to enable it on the bucket, and then each object you upload gets an ID), and the metadata is just additional information you want to attach to the object. An S3 object can be anywhere from zero bytes to five terabytes in size. Note that I really highlighted zero bytes, because that is a common trick question on the exam: a lot of people think you can't have zero-byte objects, and you definitely can. Then you have buckets, and buckets hold objects; they're kind of like top-level folders or directories. Buckets can also have folders, which in turn hold objects, so you can have objects directly in the bucket or inside those folders. When you name an S3 bucket, it uses a universal namespace, so bucket names must be unique; it's like having a domain name, so you have to choose a unique name.
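If you want to see buckets, keys, and "folders" in action before the console walkthrough later in the course, here's a minimal AWS CLI sketch, assuming a hypothetical bucket name like exampro-000 (bucket names are global, so yours would have to differ) and a local file named picard.jpg:

    aws s3 mb s3://exampro-000                                      # create the bucket
    aws s3 cp picard.jpg s3://exampro-000/enterprise-d/picard.jpg   # upload an object under a folder prefix
    aws s3 ls s3://exampro-000/enterprise-d/                        # list the objects under that prefix

Note that the "folder" is really just part of the object's key, enterprise-d/picard.jpg.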
The concept behind storage classes is that we trade retrieval time, accessibility, and durability for cheaper storage. When you upload data to S3, it uses Standard by default, and it's really fast: it has 99.99% availability, 11 nines of durability, and it replicates your data across at least three availability zones. As we go down this list, storage gets cheaper. We'll skip over Intelligent-Tiering and come back to it, and look at Standard Infrequent Access, also known as Standard-IA. It's just as fast as Standard; the trade-off is that it's only cheaper if you access files less than once a month, because there is an additional retrieval fee when you access the data, but the overall storage cost is 50% less than Standard, and the other trade-off is reduced availability. Then you have One Zone-IA, and as the name implies, it only stores your data in one AZ, so you have reduced durability and there is a chance your data could get destroyed; a retrieval fee applies just like Standard-IA, and availability drops to 99.5%. Then you have Glacier, which is for long-term cold storage. The trade-off is that retrieval takes minutes to hours, but you get extremely cheap storage; there is also a retrieval fee. Glacier is normally pitched as its own service, but really it's part of S3. Then you have Glacier Deep Archive, which is just like Glacier except it can take up to 12 hours before you can access your data; it's very, very cheap, the cheapest tier here, so it's suited for long-term archival data. Now, we glossed over Intelligent-Tiering, so let's talk about it: it uses machine learning to analyze your object usage and determine the appropriate storage class, so it decides for you which class to use so that you save money. Those are all the classes, and here is the comparison chart to make it easier to see what's going on. Across the board we have durability at 11 nines for all classes; reduced durability is noted for One Zone-IA, though I guess the chart is saying it still has 11 nines within that one zone, which confuses me a bit, but you have to figure that if you're only in one zone there is effectively reduced durability. For availability, it's 99.99% for Standard, dropping slightly for the IA classes, and 99.5% for One Zone-IA; for Glacier and Glacier Deep Archive it's listed as not applicable, because it takes so long to access those files that a percentage doesn't really apply. For AZs, everything runs in three or more AZs, from Standard through Intelligent-Tiering and Standard-IA; the only one that is reduced is One Zone-IA. I always wonder, if you're running in Canada Central, would it only use two, since there are only two availability zones there? That's a question at the top of my head, but anyway, it's always three or more AZs otherwise. You can also see there is a minimum capacity charge per object starting at Standard-IA, a minimum storage duration charge for all tiers except Standard, and retrieval fees that only come in with the IA tiers and Glacier. Then you have latency, that is, how fast you can access files, and "ms" means milliseconds, so for all the non-Glacier tiers it's super fast. And it's good to repeat: AWS gives a guarantee of 99.99% availability and 11 nines of durability for Standard. So there you go, that is the big comparison chart.
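The storage class is set per object at upload time. Here's a minimal CLI sketch, reusing the hypothetical exampro-000 bucket and made-up file names; any valid class name (STANDARD, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE) can go in the flag:

    aws s3 cp logs-2019.csv s3://exampro-000/archive/logs-2019.csv --storage-class STANDARD_IA
    aws s3 cp logs-2018.csv s3://exampro-000/archive/logs-2018.csv --storage-class DEEP_ARCHIVE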
Now we're taking a look at S3 security. When you create a bucket, it's private by default, and AWS really obsesses over not exposing public buckets: they've changed the interface three or four times, and they now send you email reminders telling you which buckets are exposed, because it's a serious vulnerability and people just keep leaving buckets open. So when you create a new bucket, all public access is denied, and if you want public access you have to explicitly allow it, for either your ACLs or your bucket policies. In S3 you can also turn on per-request logging, so you get detailed information on which objects were accessed, uploaded, or deleted, in granular detail; log files are generated, but they are put in a different bucket, not the same one. To control access to your objects and buckets, you have two options: bucket policies and access control lists. Access control lists came first, before bucket policies. They are a legacy feature, but they're not deprecated, so it's not a faux pas to use them; they're just simpler, and sometimes there are use cases where you might want to use them over bucket policies. An ACL is a very simple way of granting access: on a bucket or object you choose who gets access, so there's an option to grant all public access, and you can grant list-objects, write-objects, or read and write permissions, and it's as simple as that. Bucket policies are a bit more complicated, because you have to write a JSON policy document, but you get much richer, more complex rules. If you're ever setting up static S3 website hosting, you definitely have to use a bucket policy, and that's what we're looking at here: this is an example policy saying allow read-only access, via GetObject, to this bucket, used in a more complicated setup. So that's the difference: bucket policies are used more often and are more complex, and ACLs are just simple. Now, we've talked about security, and a very big part of that is encryption. When you upload files to S3, by default it uses SSL/TLS, so you have encryption in transit. When it comes to server-side encryption, meaning the actual data at rest, we have a few options: SSE-AES, SSE-KMS, and SSE-C, where SSE stands for server-side encryption. For the first option, AES-256 is the encryption algorithm, which uses a 256-bit key, and S3 does all the work here, handling the encryption for you. Then you have KMS, the Key Management Service, which uses envelope encryption, where the data key is itself encrypted with another key; with KMS, the keys are managed by AWS or by you. Then you have customer-provided keys, where you provide the key yourself; there's no console interface for it and it's a bit more complicated, but all you need to know is that the C stands for customer-provided. Finally, there's client-side encryption; there's no interface for that either, it's just you encrypting the files locally and then uploading them to S3.
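For reference, here's a minimal sketch of requesting the first two server-side encryption modes from the CLI at upload time, again with the hypothetical exampro-000 bucket and a placeholder KMS key alias; SSE-C is also possible, but you have to supply your own key material with every request:

    aws s3 cp secret.txt s3://exampro-000/secret.txt --sse AES256                                   # SSE with S3-managed keys
    aws s3 cp secret.txt s3://exampro-000/secret.txt --sse aws:kms --sse-kms-key-id alias/my-key    # SSE with a KMS key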
Now let's look at S3 data consistency. Sorry, I don't have any cool graphics here, because it's not a very exciting topic, but we definitely need to know what it is. When you put (write) data to S3 as a new object, the consistency behavior is different from when you overwrite or delete objects. When you write a new object, you get read-after-write consistency: as soon as you upload it, you can immediately read the data and it will be consistent. When you overwrite or delete an object, it takes time for S3 to replicate that change to all the other AZs, so if you immediately read the data, S3 may return an old copy. It usually only takes a second or two to update, so it may be unlikely to matter in your use case, but you have to consider that it's a possibility. Next, cross-region replication, which provides higher durability in the case of a disaster. You turn it on and specify a destination bucket in another region, and S3 automatically replicates objects from the source region to that destination region. You can also have it replicate to a bucket in another AWS account. To use this feature, you have to have versioning turned on in both the source and destination buckets. Speaking of which, in S3 you can turn on versioning at the bucket level, and what versioning does is allow you to version your objects; the idea is to help you prevent data loss and keep track of versions. Say you have a file, here an image called Terok Nor, where the name is the same as the key; it has a version ID, in this case 111111. When you put a new object with the exact same key, S3 creates a new version of it and gives it a new ID, say 121212. Now if you access the object, it always returns the one at the top, and if you delete that version, access falls back to the previous one. So it's a really good way of protecting your data, and if you need an older version, you can retrieve any version of the file you want, you just have to specify its version ID. When you turn on S3 versioning, you cannot disable it after the fact; you'll see it says enabled or suspended. Once it's turned on, you cannot remove versioning from existing files; all you can do is suspend versioning, and the objects you already versioned keep the versions they have.
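Since both versioning and cross-region replication hinge on versioning being enabled, here's a minimal sketch of turning it on and checking it from the CLI for the hypothetical exampro-000 bucket:

    aws s3api put-bucket-versioning --bucket exampro-000 --versioning-configuration Status=Enabled
    aws s3api get-bucket-versioning --bucket exampro-000      # should report "Status": "Enabled"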
S3 also has a feature called Lifecycle Management, which automates moving objects to different storage classes or deleting them altogether. Down below I have a use case: I have an S3 bucket, and I create a lifecycle rule saying that after seven days I want to move this data to Glacier, because I'm unlikely to use it during the year but I have to keep it around for compliance reasons and I want the cheaper cost. Then I create another lifecycle rule saying that after a year, go ahead and delete the data. Lifecycle Management works together with versioning, and a rule can apply to the current version or to previous versions; you specify which when you create the rule. Next, let's look at Transfer Acceleration for S3. It provides fast and secure transfer of files over long distances between your end users and an S3 bucket. The idea is that you're uploading files and you want them in S3 as soon as possible, so instead of uploading directly to S3, you send them to a distinct URL for a nearby edge location (an edge location is just a data center as close to you as possible), and from there the upload is accelerated on to your S3 bucket over the AWS backbone network, which is an optimized network path. That's all there is to it. Then there are pre-signed URLs, something you'll definitely use in practice when building web applications. The idea is that you can generate a URL that provides temporary access to an object, to either upload or download object data via that endpoint. Pre-signed URLs are commonly used to provide access to private objects, and you generate them with the CLI or SDK; in fact, that's the only way to do it. Here, using the CLI, I'm specifying the object and saying the URL expires after 300, which I believe is seconds, so it's only accessible for that period of time. It generates a very long URL, and you can see it contains an access key, sets the expiry, and has a signature, which authenticates us temporarily to do what we want with that object. A very common use case: your web application needs to let users download files from a password-protected part of the app, and you'd expect those files on S3 to be private, so you generate a pre-signed URL that expires after something like five seconds, enough time for that person to download the file. That's the concept. Finally, if you're really paranoid about people deleting your objects in S3, you can enable MFA Delete, which requires an MFA code in order to delete an object. To enable it you have to jump through a few hoops, and there are some limitations: you must have versioning turned on on the bucket or you can't use MFA Delete; you can only turn on MFA Delete via the CLI, and down below you can see the versioning configuration where I set MFA Delete to enabled; and only the bucket owner, logged in as the root user, can delete objects from the bucket. Those are the three caveats, but this is a really good way to ensure files do not get deleted by accident.
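If you want to try both of those from the CLI, here's a minimal sketch, assuming the same hypothetical exampro-000 bucket and key; the MFA device ARN and code are placeholders, and the MFA Delete call only works with versioning on and root credentials, as noted above:

    aws s3 presign s3://exampro-000/enterprise-d/q.jpg --expires-in 300    # URL valid for 300 seconds
    aws s3api put-bucket-versioning --bucket exampro-000 \
      --versioning-configuration Status=Enabled,MFADelete=Enabled \
      --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"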
Hey, this is Andrew Brown from ExamPro, and welcome to the S3 follow-along, where we're going to learn how to use S3. The first thing I want you to notice, in the top right corner, is that S3 shows as a global region. Most services are region-specific, and you have to switch between regions to see their resources, but not S3: you see all your buckets from every region in one view, which is very convenient. That doesn't mean buckets don't belong to a region; it's just that the interface is global. So let's create our first bucket. The bucket name has to be unique, so if we choose a name already taken by another AWS user, we won't be able to use it, and the name has to be DNS-compliant, just like registering a domain name, where certain characters aren't allowed; whatever is valid for a domain name or URL is what's valid here. I'm going to try to name it "exampro", in this region here. There are all these other options, but honestly everybody just creates the bucket and configures it after the fact, so we hit Create, and it says the bucket name has already been taken, which it definitely has, because I have it in my other AWS account. So I'll name it something else and try "exampro-000", hit Create, and there we go, we have our own bucket. Now, if we wanted to delete this bucket, and I want you to do that right away here, we go ahead and delete it; a prompt pops up asking us to type in the name of the bucket, so I copy it in and delete the bucket. So that's how you create a bucket and that's how you delete a bucket. But we need a bucket to learn about S3, so we'll make a new one: I put in exampro-000 again, create it, and now we have our bucket. We'll click into it and start uploading our first files. I've prepared some files for upload, and just before uploading them I'll create a new folder in the bucket (and I have a spelling mistake there, great). The images are of the characters from Star Trek: The Next Generation, and all I have to do is click and drag, hit Upload down here, give it a little time, and they're all in. Now that I have all my images in the bucket, I can click into an individual one and see who the owner is, when it was uploaded, the storage class, the size, and also an object URL, which we'll get to in a moment. If we just want to view the file in the console, we can click Open to view it, or hit Download to download it. But then there's this object URL, and this is why we said you have to have unique bucket names: they're literally used in URLs. If I take this URL and try to access it (I had it open a second ago), you can see we get Access Denied, because by default S3 buckets are private. If we want to make this public so anyone can access the URL, we'd hit the Make Public button, but we'll see that it's disabled.
If we want to be able to make things public, we have to go to the bucket itself, to Permissions, and allow public access. This is an additional security feature AWS has implemented, because people have a really hard time with accidentally making things public on their buckets and getting a lot of sensitive stuff exposed. We hit Edit, and there are a bunch of options, but first we untick "Block all public access", which allows us to make things public; I hit Save and type "confirm". Now if I go back into the bucket, into enterprise-d and the Data object, I have the ability to make it public, so I click Make Public, and now this file is public. If I go back and refresh: there you go, I could take this link and share it with anybody, and anyone in the world can view that file. So now that we've learned how to upload files and make a file public, let's learn about versioning. We uploaded all these files, but let's say we had newer versions of some of them and wanted to keep track of that; that's where versioning helps us in S3. You saw that I uploaded these characters from Star Trek: The Next Generation, and I actually have newer images of some of the characters, not all of them. When I upload them, I don't want the old ones to vanish, I want S3 to keep track of them, and that's where versioning comes into play. To turn on versioning, we go to exampro-000, go to Properties, and there's a box here to turn it on. Notice that when you turn on versioning, you cannot turn it off, you can only suspend it; that means objects still keep their version history, you just won't add additional versions while it's suspended. We enable versioning, and it's now turned on. You'll notice we now have a Versions toggle we can hide and show, which gives us additional information, the version ID. I go into enterprise-d and hit Show so I can see the versions, and it's a big mess, so we'll turn it off for a second. We upload our new files, which have the exact same names: click and drag, hit Upload, and there they go. Now let's hit Show, and we can see that the files where we uploaded new copies have additional versions. You'll notice that the first version of each file actually has no version ID, it's just "null", but the latest version does; that's because those were the initial files uploaded before versioning was enabled, so they show null, but from then on uploads get new version IDs. Let's see if we can view the new version: we were looking at Data before, and we want to see what he looks like now. I click Open, and it's showing us the latest version of Data.
Now, if we want to see the previous version, we drop down the versions (we see the latest and a prior date), click the older one, hit Open, and we see the previous version of Data. One other thing I want to check: if we go to this new version and click its link, is it accessible? No, it's not. And the previous example, which we had set to public, is it still public? It is, great. So what you're seeing is that when you upload new versions of a file, they do not inherit the original version's properties, like public access. If we want the new Data to be public, we have to set the new version to public, so we drop down the version, hit Make Public, and now if we open the file, he's public. So there you go, that is versioning. Now, what happens if we delete this file? Let's delete Data: I hit Actions and Delete, and notice it shows the version ID; I hit Delete, and Data still shows up here. If we go into that version and hit Open, we get "the specified version does not exist", so it still shows up in the console, but that version is no longer there. Now let's say we want to delete the original Data file; can we do that? Let's find out: Delete, and it still shows Data is there; we refresh, go in, look at this version and open it, and you can still see it's there. So the thing about versioning is that it's a great way to protect you against deletion of files, and it lets you keep versions of things, though properties, again, do not carry over from one version to the next. Actually, just before we move on to encryption, I want to double-check something. I was saying the initial versions are null because they were uploaded before we turned on versioning. So what happens when we upload a brand-new file for the first time with versioning turned on: will its version be null, or will it get its own version ID? That's a burning question in the back of my mind. So I've added another image, Keiko, and we'll upload Keiko and see whether the version is null or has an ID. And look, it actually has a version ID. So the only reason the others are null is that they existed prior to versioning; if you see null, that's the reason why. Once versioning is turned on, from then on every upload gets a version ID.
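If you'd rather inspect version IDs from the CLI, here's a minimal sketch, assuming the hypothetical exampro-000 bucket, an enterprise-d/data.jpg key like the one in the walkthrough, and a placeholder version ID:

    aws s3api list-object-versions --bucket exampro-000 --prefix enterprise-d/data.jpg
    aws s3api get-object --bucket exampro-000 --key enterprise-d/data.jpg --version-id 111111 old-data.jpg   # download a specific version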
Now that we've learned about versioning, I think it would be fun to learn about encryption, and explore how to turn on server-side encryption, which is very easy in S3. We go back to our bucket, go to Properties, and click Default Encryption. By default, no encryption is turned on, but we can turn on server-side encryption using AES-256 (which uses a 256-bit key for the encryption) or AWS-KMS. We'll turn on AES-256, because that's the easiest way to get started. But look at the warning: it says this property does not affect existing objects in the bucket. We turn on our encryption, get our nice purple checkmark, and go back to the bucket to see if any of our files are encrypted, which I don't think they will be, based on that message. We check out Data, and there is no server-side encryption. To turn it on for existing files, it's the same idea at the object level: go to Properties, there's Encryption, and turn on AES-256. So you can set encryption per object, and you can also set it per bucket as the default. We go ahead and encrypt Data. Now, if we access this URL, do we still have permission, given that the object is set to public? Remember, Data is public, but can we see it when encryption is turned on? And apparently we totally can. So encryption doesn't mean the file isn't accessible; because we made the file public, it's still served. It just means that when the data is at rest on AWS's servers, it is encrypted. That's how easy it is to turn on encryption. When it comes to accessing files encrypted with KMS via the CLI, there is a little more work involved, so there's a bit more of a story there, and if we get to it in the CLI section we'll talk about it.
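Here's a minimal CLI sketch of the same two steps, under the same hypothetical names: set the bucket's default encryption, then confirm an individual object's encryption with head-object, whose output includes a ServerSideEncryption field once SSE is applied:

    aws s3api put-bucket-encryption --bucket exampro-000 \
      --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
    aws s3api head-object --bucket exampro-000 --key enterprise-d/data.jpg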
Now I want to show you how to access private files using a pre-signed URL, but just before we get to that, this is a good opportunity to learn how to use the CLI with S3, and we'll work our way up to a pre-signed URL. Here I have my terminal open, and I already have the AWS CLI installed. The first thing we'll do is list all the buckets in our AWS account: we type aws s3 ls (ls stands for list), and we see the single bucket we have. If we want to see its contents, we type aws s3 ls and then provide the bucket name, and it shows us the contents, which is a single folder. And if we want to see inside that folder, you get where this is going: we put the folder on the end and hit Enter, and we see all our files. So that's how easy ls is. You'll notice a slightly different syntax here, using the s3:// protocol prefix in front; this is needed for certain commands, as we'll find out with cp in a moment, but not all commands require it, and with ls we've omitted it. Moving on to copying files, which is what cp stands for: we can download and upload objects to and from our desktop. Let's download Barclay from our bucket: I type aws s3 cp, using the protocol prefix (we definitely have to use it for cp or it will error out), then exampro-000, the enterprise-d folder, and barclay.jpg, and then the destination on the desktop. I hit Enter, it complains because I typed it manually and made a spelling mistake (we need an "r" there), and then it downloads the file; checking the desktop, there it is, downloaded from our S3 bucket. Now we want to upload a file: I have an additional file here called Q, and I want to get it into the bucket via the CLI. It's the same command in the reverse order: aws s3 cp, first the local file, q.jpg, and then the S3 destination, so the protocol, the bucket name, the enterprise-d folder (making sure there are no spelling mistakes this time), and q.jpg. We send it off, it uploads, we refresh, and there it is in our S3 bucket. Great. So now we know how to list things and upload or download things with S3, and we can move on to pre-signed URLs.
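Typed out, the commands from that walkthrough look roughly like this; the bucket, folder, and file names come from the example and would differ in your account:

    aws s3 ls                                               # list all buckets in the account
    aws s3 ls s3://exampro-000                              # list the bucket's top-level contents
    aws s3 ls s3://exampro-000/enterprise-d/                # list objects under the folder prefix
    aws s3 cp s3://exampro-000/enterprise-d/barclay.jpg .   # download an object
    aws s3 cp q.jpg s3://exampro-000/enterprise-d/q.jpg     # upload a local file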
We saw earlier that we had access to Data because he was public, so clicking that link works. But let's say we want to access Q, which we just uploaded; by default objects are private, so if I open it I get Access Denied, which is a good and sane default. But say I want to give someone temporary access; this is where pre-signed URLs come in. A pre-signed URL is a generated URL carrying the credentials needed to temporarily access the object. So I type out aws s3 presign, and we try to get access to the Q file: the bucket, enterprise-d, and q.jpg, plus an expires-in; by default I think it's an hour, but we want it to expire after 300 seconds, so these links don't stay around; they're temporary. I hit Enter, and I've made a mistake: I actually forgot to type the word "presign". With that fixed, it spits back a URL, and if we take that URL and paste it into the browser, now we actually have access. So that's a way to provide temporary access to private files. This is exactly the use case where you have paid content behind a web application that users sign up for, and you give them temporary access to whatever file they want. I'll also note that if we open the object from the console, the URL it generates has an access key ID and a security token in it, so it's doing something similar, though maybe not exactly a pre-signed URL. Either way, the point is that if you want to grant temporary access to private files, you use pre-signed URLs. Next: we uploaded all our files into this bucket, and they automatically went to the Standard storage class by default. Let's say we want to change the storage class for our objects; we don't do that at the bucket level, we do it at the object level. We go to Properties for one of the objects, choose the class we want instead of Standard, hit Save, and now we can start saving money. That's all it takes to switch storage classes.
But let's say we want to automate that process, because if we were handling a lot of log files, maybe after 30 days we don't really need them anymore, but we have to hold on to them for the next seven years; that's where lifecycle policies come into play. We go back to our bucket, go to Management, then Lifecycle, and add a new lifecycle rule; we'll keep it simple and call it the "30 day rule". We could limit the scope to certain files, for example just the enterprise-d prefix, but I'm going to apply it to everything in the bucket and hit Next. Then we configure the transition: first we decide whether it applies to the current version or previous versions, and I'll pick the current version. We add a transition that moves anything in Standard to Standard-IA after 30 days; I don't think you can go below that, and if I try seven, you can see the minimum value has to be 30. Those minimums match the storage duration minimums we saw in the storage class comparison, so if you're wondering where they come from, that's where. We hit Next, and we could also set an expiration, which is if we want to actually delete the files after some number of days; we don't, so we hit Next and confirm, and now we have a rule that automates moving our files from one storage class to another. Next we're going to set up cross-region replication, which lets us copy files from one bucket to another bucket, which can be in another region and even in another AWS account; there are a few different reasons you might want that, but let's just learn how to do it. We need a replication rule, but before that we need a destination bucket, so we go back to S3 and create a new bucket; I'll call it exampro-bbb and set it to Canada Central (if that name isn't available, come up with your own, but make sure it's in another region for this example). Now we have a bucket in the States and one in Canada, and we're almost ready; we just have to make sure versioning is turned on in both the source and destination buckets. We turn on versioning in the new bucket, and I already know it's on in the source, but we'll take a quick look, and yes, it's turned on. Now we can set up cross-region replication. We create the rule in our source bucket with the whole bucket selected, hit Next, and choose the destination bucket. There are a couple of options for what happens during replication: we can change the storage class, which is a good idea if the other bucket is just a backup bucket you don't plan to really use and you want to save money, and we can also replicate to someone else's bucket in another AWS account, for example if this bucket holds files you need to provide to multiple clients and you replicate them into their buckets. But we're just using our own bucket for now. We create a new IAM role so replication has permission to do its work, and name the rule "crr-us-to-canada". We get a nice summary, hit Save, and cross our fingers... and it says the replication configuration was not found. This sometimes happens and it's not a big deal; go back to Replication, and it actually did work. Sometimes the role isn't created in time, so sometimes this shows green and sometimes red, but double-check, because it definitely is set. So now we have replication set up.
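Both the lifecycle rule and the replication rule we just clicked through can also be defined from the CLI. Here's a minimal sketch of just the lifecycle piece, assuming the hypothetical exampro-000 bucket (the replication equivalent, put-bucket-replication, additionally needs an IAM role ARN and a destination bucket ARN):

    cat > lifecycle.json <<'EOF'
    {
      "Rules": [
        {
          "ID": "30-day-rule",
          "Filter": {"Prefix": ""},
          "Status": "Enabled",
          "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}]
        }
      ]
    }
    EOF
    aws s3api put-bucket-lifecycle-configuration --bucket exampro-000 --lifecycle-configuration file://lifecycle.json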
So let's say we wanted to deny anyone being able to upload new files in this bucket. I don't know why you'd want to do that. But maybe there's a use case. So we're gonna say denied, we're gonna give it a asterik here. So we can say this applies to everyone. The service is going to be s3, of course, and we're going to look for the actions here. So we're just going to look for the puts. So we'll say put bucket ACL. And there should just be a regular put in here. Oh, that's, that's bucket ACL. We want objects. So we say bucket, put object and put object ACL so we can't upload files. And we'll have to provide the Arn and they give you a bit of an indicator as to what the format is here. That's going to be exam pro hyphen, 000, at Ford slash Asterix, so it's gonna say any of the files within that bucket. And we're gonna go add that statement and generate that policy. And now we have our JSON. Okay, so we'll copy that, go back over here, paste it in, save it, cross your fingers, hope it works. And it has saved Yeah, so you don't get like a response here. I'm just gonna save again, and then just refresh it to just be 100% sure here and go back to your bucket policy. And there it persists. So our bucket policy is now in place. So we should not be able to upload new files. So let's go find out if that is actually the case. So here I am in the overview here and enterprise D. And I want to upload a new file to to see so let's go to our enterprise D and we have a new person here we have Tom Alok, he is a Romulan, and we do not want him in the enterprise D here. So we're going to drag it over here and see what happens, we're going to hit upload. Okay. And you're going to see that it's successfully uploaded. Okay, so we're going to go ahead and do a refresh here and see if it actually is there. And it looked like it worked. But I guess it didn't, because we do have that policy there. So there you go. But let's just to be 100% sure that our policy actually is working, because I definitely don't see it there. We're gonna go back there. And we're just going to remove our policy. Okay, so we're gonna go ahead here and just delete the policy, right? It's where the interface shows it to you as if you know, it's actually working. And so Tom lock is definitely not in there. But our policy has been removed. So now if we were to upload it, Tom should be able to infiltrate the enterprise D bucket here, we're going to do an upload here. Okay. Let's see if we get a different result. And there it is. So there you go. So our bucket policy was working, you can see that ABS can be a little bit misleading. So you do have to double check things that happened for me all the time. But there you go, that is how you set up. So we are on to the s3 cheat sheet. And this is a very long cheat sheet because s3 is so important to the eight of us associate certification, so we need to know the service inside and out. So s3 stands for simple storage service. It's an object based storage, and allows you to store unlimited amounts of data without worrying of the underlying storage infrastructure. s3 replicates data across at least three availability zones to ensure 99.9% availability and 11 nines of durability on just contain your data. So you can think of objects like files, objects, objects can be sized anywhere from zero bytes to five terabytes, I've highlighted zero bytes in red because most people don't realize they can be zero bytes in size. 
So now we're on to the S3 cheat sheet, and it's a very long one, because S3 is so important to the AWS Solutions Architect Associate certification; we need to know the service inside and out. S3 stands for Simple Storage Service. It's object-based storage, and it lets you store unlimited amounts of data without worrying about the underlying storage infrastructure. S3 replicates data across at least three availability zones to ensure 99.99% availability and 11 nines of durability. Objects contain your data; you can think of objects like files, and they can be anywhere from zero bytes to five terabytes in size. I've highlighted zero bytes in red because most people don't realize an object can be zero bytes. Buckets contain objects, and buckets can also contain folders, which in turn contain objects; you can also just think of buckets themselves as folders. Bucket names are unique across all AWS accounts, so you can treat them like domain names: your bucket name has to be unique in the entire world. When you upload a file to S3 successfully, you'll receive an HTTP 200 code. Then there's the lifecycle management feature, which lets you move objects between different storage classes and delete objects automatically on a schedule; you create lifecycle rules, also called lifecycle policies, to make that happen. Then you have versioning. This gives your objects version IDs, so when you upload a new object over top of an existing one, the old object still remains and you can access any previous version by its version ID. When you delete a versioned object, a delete marker is added and the previous version can be restored. Once you turn versioning on it cannot be turned off, only suspended. Then we have MFA Delete, which forces all delete operations to require an MFA token. You must have versioning turned on to use it, you can only turn on MFA Delete from the AWS CLI, and it's really only the root user who can delete these objects. All new buckets are private by default. Logging can be turned on for a bucket so you can track all the operations performed on objects. Then you have access control, which is configured using either bucket policies or access control lists. Bucket policies are JSON documents that let you write complex access control rules. ACLs are the legacy method; they came before bucket policies and they're not deprecated, so there's no foul in using them, they're just not used as often anymore; they let you grant access to objects and buckets with simple actions. Now we're on to the security portion. You get encryption in transit with S3 because files are uploaded over SSL/TLS. Then you have SSE, which stands for server-side encryption, and S3 has three options for it. There's SSE-S3, where S3 handles the key itself and uses the AES-256 algorithm as the encryption method. Then there's SSE-KMS, which, as the name implies, uses the Key Management Service, an envelope encryption service, so AWS manages the key and so do you. Then there's SSE-C, where the C stands for customer: it's a customer-provided key, you supply the key yourself, and you have full control over it, but you also have to manage it. S3 doesn't do client-side encryption for you; it's up to you to encrypt your files locally and then upload them to S3. You could store your client-side key in KMS, so that is an option, but it's not that important for the cheat sheet. You also have cross-region replication, which lets you replicate files across regions for greater durability; you must have versioning turned on in both the source and destination buckets, and you can replicate a source bucket to a bucket in another AWS account.
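To make the server-side encryption options above a bit more concrete, here's a small boto3 sketch; the bucket name, keys and KMS key ID are hypothetical placeholders, not resources from this course.

import boto3

s3 = boto3.client("s3")

# SSE-S3: S3 manages the key itself, AES-256 under the hood.
s3.put_object(
    Bucket="exampro-000",
    Key="enterprise-d/picard.jpg",
    Body=b"...",
    ServerSideEncryption="AES256",
)

# SSE-KMS: the encryption key lives in KMS; the key ID here is made up.
s3.put_object(
    Bucket="exampro-000",
    Key="enterprise-d/data.jpg",
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
)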
Then you have transfer acceleration, which provides fast and secure uploads from anywhere in the world: data is uploaded via a distinct URL to an edge location, and from there it's transported to your S3 bucket over the AWS backbone network, which is super fast. Then you have pre-signed URLs. This is a URL generated via the AWS CLI or SDK that provides temporary access to upload or download an object; pre-signed URLs are commonly used to give access to private objects. And the last thing is our storage classes; we have six different kinds. It starts with Standard, which is the default: it's fast, it has 99.99% availability and 11 nines of durability, you access files within milliseconds, and it replicates your data across at least three AZs. Then you have the Intelligent-Tiering storage class, which uses machine learning to analyze your objects' usage and automatically move them to the appropriate storage tier to help you save money; those tiers are the other classes we're covering now. Then you have Standard-IA, standard infrequent access. It's just as fast as Standard, and it's cheaper if you're accessing files less than once a month; access a file a second time and, with the additional retrieval fee, you're paying about the same as Standard or a little more. Storage is around 50% less than Standard, and the trade-off is reduced availability. Then you have One Zone-IA, and as the name implies it isn't replicated across at least three AZs, it lives in only one AZ. It's still fast and it's about 20% cheaper than Standard-IA, but now you also have reduced availability and durability, and again there's a retrieval fee. Then you have Glacier, which is long-term cold storage, archival storage, and it's very, very cheap; the trade-off is that it takes between minutes and hours to actually access your files when you need them. And then you have Glacier Deep Archive, which is the cheapest storage class on the list, and retrieval can take up to 12 hours before you can use your files. So that is the S3 cheat sheet; it was very long, but there's a lot of great information in it.
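Before we leave S3, here's a quick boto3 sketch of generating a pre-signed URL, since that came up in the cheat sheet; the bucket and key are just example names from this demo.

import boto3

s3 = boto3.client("s3")

# Anyone holding this URL can download the private object for the next hour,
# without needing AWS credentials of their own.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "exampro-000", "Key": "enterprise-d/data.jpg"},
    ExpiresIn=3600,  # seconds
)
print(url)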
Hey, this is Andrew Brown from ExamPro, and we are looking at AWS Snowball, which is a petabyte-scale data transfer service: you move data onto AWS via a physical, briefcase-sized computer. So let's say you needed to get a lot of data onto AWS very quickly and very inexpensively. Snowball helps you out there, because if you tried to transfer 100 terabytes over high-speed internet to AWS it could take over 100 days, whereas with a Snowball it takes less than a week. And on cost, transferring 100 terabytes over high-speed internet could cost you thousands of dollars, whereas Snowball reduces that cost to about a fifth. Let's go through some of Snowball's features. It comes with an E Ink display, which works like your shipping label but is digital, which is kind of cool. It's tamper-resistant and weatherproof. The data is encrypted end to end using 256-bit encryption. It has a Trusted Platform Module, a little chip which, as AWS describes it, is an endpoint device that stores RSA encryption keys specific to host systems for hardware authentication. And for security purposes, data transfers must be completed within 90 days of the Snowball being prepared. The data goes into S3, and you can either import or export, so not only can you use this to get data into the cloud, it can also be a way to get data out of the cloud. Snowball comes in two sizes, 50 terabytes and 80 terabytes, but you don't get to use all the space, so in reality it's about 42 terabytes and 72 terabytes of usable storage. And you'll notice I said this is a petabyte-scale migration service: they're suggesting you use multiple Snowballs to get to petabytes, so you don't transport petabytes in one Snowball, it takes several. Next, let's look at AWS Snowball Edge, which again is a petabyte-scale data transfer service that moves data onto AWS via a physical briefcase-sized computer, but with more storage and on-site compute capabilities. Looking at Snowball Edge in more detail, one aesthetic difference is the little orange bars, which is maybe the easiest way to distinguish Snowball from Snowball Edge; it's similar to Snowball but with more storage and local processing. Going through the features: instead of an E Ink display it has an LCD display, so again the shipping information, but with additional functionality. The huge advantage is that it can undertake local processing and edge computing workloads. It also has the ability to cluster, so you can get a bunch of Snowball Edges and have them work on a single job, kind of like having your own little mini data center, in groups of five to ten devices. It comes with three options for device configuration: storage optimized, compute optimized, or GPU optimized, and the amount of compute in the device changes based on what you need. Snowball Edge comes in two sizes: 100 terabytes, with 83 terabytes of usable space, and a clustered version with somewhat less usable space, but of course you'd be using that one in clusters, so there are good reasons for it. So that's Snowball Edge. Now let's look at Snowmobile, which is a 45-foot-long shipping container pulled by a semi-trailer truck, and it can transfer up to 100 petabytes per Snowmobile. To get to exabytes you're going to need a few of these, but it definitely is feasible. It has some really cool security features built in: GPS tracking, alarm monitoring, 24/7 video surveillance, and an escort security vehicle while in transit, which is an optional feature; I don't know if it costs more, but it definitely sounds cool. To wrap this up, AWS personnel will help you connect your network to the Snowmobile, and when the data transfer is complete they'll drive it back to AWS and import it into S3 or S3 Glacier. So there you are.
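Before the cheat sheet, if you want to sanity-check that "over 100 days" figure, here's the back-of-the-envelope arithmetic in Python; the 100 Mbps link speed is just an assumption for what counts as "high-speed internet".

# Rough transfer-time estimate for 100 TB over a sustained 100 Mbps link.
terabytes = 100
bits_to_move = terabytes * 1e12 * 8        # 8 x 10^14 bits
link_speed_bps = 100e6                     # assumed 100 Mbps sustained

seconds = bits_to_move / link_speed_bps
days = seconds / 86_400
print(f"{days:.0f} days")                  # roughly 93 days, before any overhead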
Now for the cheat sheet covering Snowball, Snowball Edge and Snowmobile, let's jump into it. Snowball and Snowball Edge are rugged containers that contain a storage device. Snowmobile is a 45-foot-long ruggedized shipping container pulled by a semi-trailer truck. Snowball and Snowball Edge are for petabyte-scale migration, whereas Snowmobile is for exabyte-scale migration. The advantages with Snowball: first, low cost. It costs thousands of dollars to transfer 100 terabytes over high-speed internet, and Snowball comes in at about a fifth of that price. Then there's speed: 100 terabytes can take over 100 days to transfer over high-speed internet, whereas with Snowball it takes less than a week. Snowball comes in two sizes, 50 terabytes and 80 terabytes, but the actual usable space is less, 42 and 72 terabytes. Snowball Edge comes in two sizes, 100 terabytes and 100 terabytes clustered, with usable space of 83 and 45 terabytes respectively. Snowmobile comes in one size, 100 petabytes per vehicle. You can both export and import data using Snowball, Snowball Edge or Snowmobile, and you import into S3 or Glacier. Snowball Edge can undertake local processing and edge computing workloads, it can be used in a cluster in groups of five to ten devices, and it provides three options for device configuration, storage optimized, compute optimized and GPU optimized, where the variation is in how much compute is on board and whether there are GPUs. So there you go, that is your Snowball, Snowball Edge and Snowmobile cheat sheet. Hey, this is Andrew Brown from ExamPro, and we are looking at Virtual Private Cloud, known as VPC. This service allows you to provision logically isolated sections of your AWS cloud where you can launch AWS resources in a virtual network that you define. Here we're looking at an architectural diagram of a VPC with multiple networking components inside it, and I just want to emphasize how important it is to learn VPC and all of its components inside and out, because it comes up in every single AWS certification with the exception of the Cloud Practitioner, so we definitely need to master it. The easiest way to remember what a VPC is for is to think of it as your own personal data center: it gives you complete control over your virtual networking environment. The idea is that we have the internet, traffic flows into an internet gateway, goes to a router, the router consults a route table, the route table passes traffic through a NACL, and the NACL sends the traffic to the public and private subnets; your resources can be contained within a security group, all inside the VPC. So there are a lot of moving parts, these aren't even all the components, and there are a bunch of different configurations we can look at. Looking at the core components, these are the ones we're going to learn in depth: internet gateways, virtual private gateways, route tables, NACLs, security groups, public and private subnets, NAT gateways and NAT instances, customer gateways, VPC endpoints and VPC peering. This section can feel overwhelming, but once you get it down it's pretty easy going forward; we just need to master these components and commit them to memory. Now that we have an idea of the purpose of a VPC, let's look at some of its key features, limitations and a few other details. On the right-hand side is the form to create a VPC; it's literally four fields, it's that simple.
You name it, you give it an IPv4 CIDR block, and you can optionally add an Amazon-provided IPv6 CIDR block; you don't type that one in yourself. And you can set the tenancy to default or dedicated, dedicated meaning the VPC runs on dedicated hardware; if you're an enterprise you might care about that. Here's what the IPv6 CIDR block would look like: you don't enter it, Amazon generates one for you. VPCs are region specific; they do not span regions. You can create up to five VPCs per region, every region comes with a default VPC, and you can have 200 subnets per VPC, which is a lot of subnets. As we said, you have to provide an IPv4 CIDR block, it's a requirement, and in addition you can provide an IPv6 CIDR block. It's good to know that creating a VPC doesn't cost you anything, and the same goes for route tables, NACLs, internet gateways, security groups, subnets and VPC peering. However, there are resources within the VPC that will cost you money, such as NAT gateways, VPC endpoints, VPN gateways and customer gateways. Most of the time you'll be working with the ones that don't cost money, so there shouldn't be too much concern about getting over-billed. One thing I do want to point out is that when you create a VPC, DNS hostnames are not turned on by default. If you're wondering what that option does: when you launch EC2 instances, as shown down below, an instance will get a public IP, but it will only get a public DNS hostname, which looks like a domain name, if DNS hostnames are enabled. So if you're wondering why your instance doesn't have one, it's probably because DNS hostnames are disabled, and they're disabled by default; you just have to turn that on. We were saying earlier that you get a default VPC in every single region, and the idea behind that is so you can immediately launch EC2 instances without having to think about all the networking you'd otherwise have to set up. But for the AWS certification we do need to know what's going on, and it's not just an empty default VPC; it comes with other things and with specific configurations, and we definitely need to know them for the exam. First, it creates a VPC with a /16 CIDR block. We also get default subnets with it: for every single AZ in that region we get one subnet per AZ, each with a /20 CIDR block. It creates an internet gateway and attaches it to the default VPC, which means our instances can reach the internet. It comes with a default security group associated with the default VPC, so if you launch an EC2 instance it will automatically use that security group unless you override it. It also comes with a default NACL associated with the VPC, and default DHCP options. One thing that's implied is that it comes with a main route table, because when you create any VPC it automatically gets a main route table, so that comes by default as well. So there you go.
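As a loose boto3 sketch of what that four-field form plus the DNS hostnames toggle boils down to, with a placeholder CIDR range:

import boto3

ec2 = boto3.client("ec2")

# Create the VPC with an IPv4 CIDR block (required).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# DNS hostnames are off by default; enable them so instances with a public IP
# also get a public DNS name. Each attribute is set with its own call.
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})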
Now I just want to touch on 0.0.0.0/0, which is also known as the default route: it represents all possible IP addresses. So when you're doing networking in AWS, you'll use 0.0.0.0/0 in a route pointed at the internet gateway to route traffic out to the internet, and when you set up the inbound rules on a security group, you can use 0.0.0.0/0 to allow traffic from anywhere on the internet to reach your public resources. Anytime you see it, just think of it as access from anywhere, or the internet. Next we're looking at VPC peering, which allows you to connect one VPC to another over a direct network route using private IP addresses. The idea is we have VPC A and VPC B, and we want them to behave as if they're on the same network; that's what a VPC peering connection gives us. It's very simple to create one: we give it a name, we choose the requester, which could be VPC A, and the accepter, which could be VPC B, and we can say whether the other VPC is in my account or another account, and in this region or another region. So peering allows VPCs in the same or different regions to talk to each other. There are some limitations around configuration. Peering uses a star configuration: you might have one central VPC with, say, four around it, and each pairing needs its own peering connection. There is no transitive peering; what does that mean? If VPC C wants to talk to VPC B, the traffic is not going to flow through A; you'd have to create another direct peering connection from C to B. Communication only happens between directly peered neighbours. And you can't have overlapping CIDR blocks: if both VPCs used 172.31.0.0/16, we'd have a conflict and they wouldn't be able to talk to each other. So that is VPC peering in a nutshell. Alright, now we're looking at route tables. Route tables are used to determine where network traffic is directed. Each subnet in your VPC must be associated with a route table, and a subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table. Down below I have the most common example of using route tables, which is allowing your EC2 instances to gain access to the internet: you have a public subnet where the EC2 instance resides, that subnet is associated with a route table, and that route table has a route whose target is the internet gateway, which allows access to the internet. So that's all there is to it for route tables.
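Tying peering and route tables together, here's a minimal boto3 sketch of peering two VPCs and adding the routes that make them reachable; every ID and CIDR below is a placeholder, assuming two non-overlapping VPCs.

import boto3

ec2 = boto3.client("ec2")

# Request a peering connection from VPC A (10.0.0.0/16) to VPC B (10.1.0.0/16).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",       # requester (VPC A)
    PeerVpcId="vpc-bbbb2222",   # accepter (VPC B)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The accepter side has to accept the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side needs a route pointing the other VPC's CIDR at the peering connection.
ec2.create_route(RouteTableId="rtb-aaaa1111",
                 DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-bbbb2222",
                 DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)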
Next we're looking at internet gateways. An internet gateway allows your VPC access to the internet, and an IGW does two things: it provides a target in your VPC route tables for internet-routable traffic, and it can perform network address translation, NAT, which we'll get into in another section, for instances that have been assigned a public IPv4 address. Down below I have a representation of how an IGW works: we have the internet over here, and to access it we need an internet gateway, but to route traffic from our EC2 instances it has to pass through a route table to get to the router, so we need to create a new route in our route table for the IGW. The route has the igw- ID as its target, identifying the internet gateway resource, and 0.0.0.0/0 as the destination. That's all there is to it. Now, we talked about how we could use NAT gateways or NAT instances to give internet access to EC2 instances that live in a private subnet. But let's say you wanted to SSH into one of those instances; it's in a private subnet, so it doesn't have a public IP address. What you need is an intermediate EC2 instance that you SSH into first, and then you jump from that box to the private one. That's why bastions are also known as jump boxes. The EC2 instance acting as the bastion is hardened, so it should be very secure, because it's going to be your point of entry into your private EC2 instances. Some people ask: since a NAT instance is just an EC2 instance, couldn't it double as a bastion? It's technically possible, but given the way you configure NATs, and from a security perspective, you'd never want to do that; you'd always want a separate EC2 instance as your bastion. Now, there is a service called Systems Manager Session Manager which replaces the need for bastions, so you don't have to launch your own EC2 instances for this; that's generally what AWS recommends. But bastions are still commonly used throughout a lot of companies, because they meet existing requirements and people are comfortable with them. So there you go. Next we're going to take a look at Direct Connect. Direct Connect is an AWS solution for establishing dedicated network connections from on-premises locations to AWS. It's extremely fast: depending on the configuration, the lower-bandwidth options run from 50 Mbps up to 500 Mbps, and the higher-bandwidth options are 1 Gbps or 10 Gbps. So the transfer rate between your on-premises network and AWS is considerably fast, and that can be really important if you're an enterprise and you want to keep the level of performance you're used to. The takeaway with Direct Connect is that it helps reduce network costs, increases bandwidth throughput, and provides a more consistent network experience than a typical internet-based connection. Okay, so that's all. Now we're looking at VPC endpoints, which are used to privately connect your VPC to other AWS services and to VPC endpoint services. I have a use case here to make it crystal clear. Imagine you have an EC2 instance and you want to get something from your S3 bucket. Normally you'd use the AWS SDK and make that call, and the request would go out through your internet gateway to the internet and then back into the AWS network to fetch that object from S3. Wouldn't it be more convenient if we could just keep the traffic within the AWS network? That's the purpose of a VPC endpoint: it keeps traffic inside the AWS network. And because the traffic never leaves that network, we don't need a public IP address to communicate with these services, and it can eliminate the need for an internet gateway; if the only reason we had the internet gateway was to reach S3, we can now remove it and keep everything private. So there you go.
There are two types of VPC endpoints, interface endpoints and gateway endpoints, and we're going to get into both. Let's look at the first type, interface endpoints. They're called interface endpoints because they actually provision an elastic network interface, an actual network interface card with a private IP address, and it serves as the entry point for traffic going to a supported service. If you read a bit more about interface endpoints, they are powered by AWS PrivateLink, which lets you access services hosted on AWS easily and securely by keeping your network traffic within the AWS network. The PrivateLink branding has always confused me, but you can basically think of interface endpoints and PrivateLink as the same thing. Interface endpoints do cost something, because an ENI is being provisioned: it's about $0.01 per hour, so over a month, if it's on the entire time, it works out to roughly $7.50. Interface endpoints support a wide variety of AWS services, not everything, but here's a good list of them. The second type of VPC endpoint is the gateway endpoint. A gateway endpoint is a target for a specific route in your route table, used for traffic destined for a supported AWS service. This endpoint is 100% free, because you're just adding an entry to your route table, and you'll be using it mostly for Amazon S3 and DynamoDB. That first use case I showed you, where we fetched an object from S3 without leaving the AWS network, was using a gateway endpoint. So there you go. Here we are at the VPC endpoint cheat sheet, and it's a quick one, so let's get to it. VPC endpoints help keep traffic between AWS services within the AWS network. There are two kinds of VPC endpoints: interface endpoints and gateway endpoints. Interface endpoints cost money, whereas gateway endpoints are free. Interface endpoints use an elastic network interface with a private IP address, powered by PrivateLink. Gateway endpoints are a target for a specific route in your route table. Interface endpoints support many AWS services, whereas gateway endpoints only support DynamoDB and S3. Next we're going to take a look at VPC flow logs, which allow you to capture IP traffic information going in and out of the network interfaces within your VPC. You can turn on flow logs at three different levels: at the VPC level, which is what we're doing right here, for a specific subnet, or for a specific network interface. The idea is that it all trickles down, so turning it on for the VPC monitors everything below it, and the same goes for subnets. To find VPC flow logs, you go to the VPC console and there's a Flow Logs tab; it's the same for subnets and network interfaces, where you can create a flow log. Here's the creation form to give you an idea of what you can do: you can choose to filter for only accepted traffic, only rejected traffic, or all, and I'm choosing all. You can deliver the logs to CloudWatch Logs, or to an S3 bucket if you'd prefer them to go there instead. That's the general setup, but note that once you create a flow log you can't edit it; all you can do is delete it. So there you go, now we know what VPC flow logs are for.
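A rough sketch of turning on a VPC-level flow log with boto3, assuming a CloudWatch Logs group and an IAM delivery role already exist; the IDs and ARNs below are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Capture ALL traffic (accepted and rejected) for the whole VPC and deliver it
# to a CloudWatch Logs group via a delivery role; both names/ARNs are hypothetical.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",                    # could also be "Subnet" or "NetworkInterface"
    TrafficType="ALL",                     # or "ACCEPT" / "REJECT"
    LogDestinationType="cloud-watch-logs",
    LogGroupName="bajor-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)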
Now let's actually take a look at what VPC flow log records look like. Here I have the structure of the data stored in a flow log; each record is stored on its own line, and immediately below is an example record with a full description of all the attributes. They're pretty straightforward; the main thing I want you to take away is that a record stores the source IP address and the destination IP address. There are exam questions, probably at the pro or specialty level, that ask whether VPC flow logs contain hostnames or IP addresses, and the answer is IP addresses. So that's the big takeaway. Now that we've learned all about VPC flow logs, here's your cheat sheet for when you go sit the exam. VPC flow logs monitor the in and out traffic of the network interfaces within your VPC. You can turn on flow logs at the VPC, subnet or network interface level. VPC flow logs cannot be tagged like other resources. You cannot change the configuration of a flow log after it has been created. You cannot enable flow logs for VPCs that are peered with your VPC unless the peer is in the same account. VPC flow logs can be delivered to S3 or to CloudWatch Logs. VPC flow logs contain the source and destination IP addresses, not the hostnames. And there is some instance traffic that does not get monitored: traffic generated by contacting the Amazon DNS servers, Windows license activation traffic, traffic to and from the instance metadata address, DHCP traffic, and any traffic to the reserved IP address of the default VPC router. So there you go. It's Andrew Brown from ExamPro, and now we're looking at network access control lists, also known as NACLs. A NACL is an optional layer of security that acts as a firewall for controlling traffic in and out of subnets. So NACLs act as a virtual firewall at the subnet level, and when you create a VPC you automatically get a default NACL. Just like security groups, NACLs have both inbound and outbound rules; the difference is that you have the ability to allow or deny traffic in either direction. With security groups you can only allow, whereas with NACLs you can also deny. When you create these rules, it's pretty much the same as security groups except for this thing called a rule number. The rule number determines the order of evaluation, and rules are evaluated from the lowest number to the highest; the highest rule number you can use is 32766. AWS recommends using increments of 10 or 100 when you number your rules, so you have the flexibility to insert rules in between later if you need to. Again, NACLs work at the subnet level, so for them to apply you need to associate subnets with NACLs, and a subnet can only be associated with a single NACL at a time. So where an instance can belong to multiple security groups, for NACLs it's a one-to-one relationship between a subnet and its NACL.
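Before we walk through the console use case, here's roughly what adding a numbered deny rule looks like in boto3; the NACL ID and the IP being blocked are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Inbound rule #10: deny TCP port 80 from one specific address (/32 = one IP).
# Lower rule numbers are evaluated first, so this wins over a later allow rule.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=10,
    Protocol="6",                  # 6 = TCP
    RuleAction="deny",
    Egress=False,                  # False = inbound rule
    CidrBlock="203.0.113.25/32",
    PortRange={"From": 80, "To": 80},
)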
Alright, let's look at a use case for NACLs; it's really about that ability to deny. Let's say there's a malicious actor trying to gain access to our instances and we know their IP address: we can add a rule to our NACL and deny that IP address. And let's say we know we never need to SSH into these instances and we want an additional guarantee, in case someone misconfigures a security group, that SSH access is denied: we just deny on port 22, and now we have both cases covered. So there you go. Now we're on to the NACL cheat sheet, so let's jump into it. Network access control list is commonly abbreviated as NACL. VPCs automatically come with a default NACL, which allows all outbound and inbound traffic. Each subnet within a VPC must be associated with a NACL. Subnets can only be associated with one NACL at a time, and associating a subnet with a new NACL removes the previous association. If a subnet is not explicitly associated with a NACL, it's automatically associated with the default NACL. NACLs have inbound and outbound rules just like security groups; rules can either allow or deny traffic, unlike security groups, which can only allow. NACLs are stateless, which means return traffic is not automatically allowed: you have to allow it with both inbound and outbound rules. When you create a custom NACL, it denies all traffic by default. NACLs contain a numbered list of rules that get evaluated in order from lowest to highest. And if you need to block a single IP address, you can do it with a NACL; you cannot do this with security groups, because security groups have no deny actions. Okay, so there you go. Hey, it's Andrew Brown from ExamPro; now we're looking at security groups, which help protect our EC2 instances by acting as a virtual firewall that controls inbound and outbound traffic. Security groups act as a virtual firewall at the instance level, so you have an EC2 instance and you attach security groups to it. So what does a security group look like on the inside? Each security group contains a set of rules that filter traffic coming into the instance, that's inbound, and going out of the instance, that's outbound. Here we have two tabs, inbound and outbound, and we can set rules with a particular protocol, a port range, and who is allowed access. In this case I want to be able to SSH into this EC2 instance, which uses the TCP protocol, and the standard port for SSH is 22, and I'm going to allow only my IP; anytime you see /32, that means exactly one IP address, mine. That's all you have to do to add inbound and outbound rules. There are no deny rules, so all traffic is blocked by default unless a rule specifically allows it. And multiple instances across multiple subnets can belong to the same security group. Here I have three different EC2 instances, all in different subnets, and security groups do not care about subnets; you just assign the EC2 instance to a security group, and in this case they're all in the same one, so they can all talk to each other. Next I have three security group scenarios, and they all pretty much accomplish the same thing, but the configuration differs, to give you a good idea of the variations on how you can achieve things. The idea is we have a web application running on an EC2 instance,
and it is connecting to an RDS database, running in a private subnet, to get its information. In the first scenario, we have an inbound rule on the database's security group allowing traffic on port 5432, which is the Postgres port, from a specific IP address, and that allows the EC2 instance to connect to the RDS database. The takeaway here is that you can specify the source as an IP range or a specific IP; this one is very specific, /32, which is a way of saying exactly one IP address. The second scenario looks very similar; the only difference is that instead of providing an IP address as the source, we provide another security group, so now anything within that security group is allowed inbound access on port 5432. In the last scenario down below, we have inbound traffic on port 80 and inbound traffic on port 22 for the SG-public group, and then the EC2 instance and the RDS database are within their own security group. The idea is that the EC2 instance is allowed to talk to the RDS database, and the RDS database isn't exposed to the internet anyway, because it's in a private subnet with no public IP address; but the point is that the EC2 instance can now receive traffic from the internet and also accept SSH access. The big takeaway is that an instance can belong to multiple security groups, and rules are permissive: everything is denied by default, and anything a rule allows overrides that, so you can stack multiple security groups onto one EC2 instance and their allow rules combine. Keep that in mind. There are a few security group limits I want you to know about. You can have up to 10,000 security groups in a single region; the default limit is 2,500, and to go beyond that you need to make a service limit increase request to AWS Support. You can have 60 inbound rules and 60 outbound rules per security group. And you can have 16 security groups per ENI, with a default of 5. If you're wondering how many security groups you can have on an instance, it depends on how many ENIs are attached to it: with two ENIs you'd have 10 by default, or at the upper limit of 16 per ENI you could have 32 security groups on a single instance. So those are the limits I thought were worth sharing. Now we're on to the security groups cheat sheet, so we're ready for exam time. Security groups act as a firewall at the instance level. Unless specifically allowed, all inbound traffic is blocked by default, and all outbound traffic from the instance is allowed by default. You can specify the source to be an IP range, a single IP address, or another security group. Security groups are stateful: if traffic is allowed inbound, the return traffic is automatically allowed outbound; that's what stateful means. Any changes to a security group take effect immediately. EC2 instances can belong to multiple security groups, and security groups can contain multiple EC2 instances. You cannot block a specific IP address with security groups; for that you need to use NACLs, because again, with security groups everything is denied by default and you can only add allow rules. You can have up to 10,000 security groups per region, with a default limit of 2,500. You can have 60 inbound and 60 outbound rules per security group. And you can have 16 security groups associated per ENI, with a default of 5; I can see I added an extra zero on the slide there, but don't worry, when you print out the security groups cheat sheet it will all be correct.
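Here's a rough boto3 sketch of the first two database scenarios from above: one rule sourced from a single IP and one sourced from another security group; the group IDs and IP address are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Scenario 1: allow Postgres (5432) from exactly one IP address (/32).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",            # the database's security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
        "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "app server"}],
    }],
)

# Scenario 2: allow Postgres from anything inside another security group instead.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],
    }],
)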
Now we're looking at network address translation, also known as NAT, which is a method of remapping one IP address space into another. Here you can see we have our local network with its own IP address space, and as traffic passes through the NAT, that IP address gets changed. Why would we want to do this? There are two good reasons. If you have a private network that needs outbound access to the internet, you use a NAT gateway to remap those private IPs. And if you have two networks with conflicting network addresses, maybe they're actually the same range, you can use NAT to make the addresses more agreeable for communication. When we want to run our own NAT in AWS, we have two options: NAT instances and NAT gateways, so let's go through the comparison. Before NAT gateways existed, all there was were NAT instances, and you had to configure that instance, which is just a regular EC2 instance, to do the remapping. Luckily the community came up with a bunch of NAT AMIs, so through the AWS Marketplace's community AMIs you can still launch a NAT instance, and some people have use cases for that. For a NAT instance to work it has to be in a public subnet, because it has to be able to reach the internet; if it were in a private subnet there'd be no way for it to get out. So you launch the NAT instance there, and there are a few more configuration steps, but that's all you need to know. NAT gateways, on the other hand, are a managed service: AWS sets up the underlying instances for you, you don't get access to them, and AWS manages them 100%. And it's not just one instance, you get a redundant instance behind it; when you launch your own NAT instances, if one gets taken down for whatever reason you have to do all the work to make sure your NAT scales with your traffic and has the durability you need, whereas NAT gateways take care of that for you. Again, you launch it in a public subnet. The one thing a NAT gateway doesn't do is launch automatically across other AZs, so you need a NAT gateway per AZ, but within an AZ you do get redundancy. Those are the two methods, and generally you want to use NAT gateways when possible, because it's the newer way of doing it, though you can still use the legacy approach. So we're on to the NAT cheat sheet, and there's a lot of information here; it's not that important for the Solutions Architect Associate, but it would definitely come up for the SysOps exam, where some of these details might matter, so we'll go through it. When creating a NAT instance, you must disable source and destination checks on the instance.
NAT instances must exist in a public subnet, and you must have a route out of the private subnet to the NAT instance. The size of the NAT instance determines how much traffic can be handled. High availability can be achieved using auto scaling groups, multiple subnets in different AZs, and a script to automate failover between them, so you can see there's a lot of manual labor: when you want availability, durability and scalability for NAT instances, it's all on you. Then we look at NAT gateways. NAT gateways are redundant inside an availability zone, so they can survive the failure of an EC2 instance. You can only have one NAT gateway inside one AZ; they cannot span AZs. They start at 5 Gbps and scale all the way up to 45 Gbps. NAT gateways are the preferred setup for enterprise systems. There is no requirement to patch NAT gateways, and there's no need to disable source and destination checks, unlike NAT instances. NAT gateways are automatically assigned a public IP address. Route tables for the NAT gateway must be updated. And resources in multiple AZs sharing a gateway will lose internet access if the gateway goes down, unless you create a gateway in each AZ and configure your route tables accordingly. So there you go, that is your NAT cheat sheet. Hey, this is Andrew Brown from ExamPro, and we are starting the VPC follow along. This is a very long section, because we need to learn about all of the networking components we can create: we're going to learn how to create our own VPC, subnets, route tables, internet gateways, security groups, NAT gateways and NACLs; we're going to touch it all. It's very core to learning AWS, and it's great to get it out of the way, so let's jump into it. Let's start off by creating our own VPC. On the left-hand side, click on Your VPCs, and right away you'll see that we already have a default VPC within this region of North Virginia. Your region might be different from mine; it does matter a bit which region you use, because different regions have different numbers of available AZs, so I'm going to strongly suggest you switch to North Virginia to make this section a little smoother for you. Just notice that the default VPC uses an IPv4 CIDR block range of 172.31.0.0/16. If I were to change regions, no matter which one, say us-west-2 in Oregon, we'd find that we already have a default VPC there as well, and it has the same CIDR block range. So just be aware that AWS gives you a default VPC so you can start launching resources immediately without having to worry about all this networking, and there's no foul in using the default VPC, it's totally acceptable to do so. But we definitely need to know how to do this ourselves, so we're going to create our own VPC. I'm a big fan of Star Trek, so I'm going to name it after the planet Bajor, which is a very well known planet in the Star Trek universe. And I'm going to have to provide my own CIDR block; I can't reuse that 172.31 range AWS was using for the default VPC, so I'm going to use 10.0.0.0/16. There's a bit of rhyme and reason to choosing these, and this one is very commonly chosen.
Now, you might be looking at this and wondering what the whole /16 on the end of the IP address is about. We'll explain that properly in a separate video, but the quick rundown is that you're choosing the IP address range you want, and the /16 says how many IP addresses you want to allocate within it; we'll cover that more later on. Next we have the option to set an IPv6 CIDR block; to keep it simple I'm going to leave it off, but obviously IPv6 is supported on AWS and it's the future of the IP protocol, so it's something you might want to turn on to be prepared. Then we have the tenancy option, which would give us dedicated hosts for our VPC; that's an expensive option, so we'll leave it on default and create our VPC. And there it is, created almost instantaneously. We'll click through to it, and now we can see our VPC named Bajor. Notice that we have our IPv4 CIDR range and no IPv6 set, and by default it gives us a route table and a NACL. We're going to override the route table, because we want to learn how to create one ourselves; the NACL isn't as important, so we might just gloss over that. There's one more thing we have to do, because if you look down below, DNS hostnames are disabled by default, and if we launch an EC2 instance it won't get a DNS hostname, which is basically a URL you can use to access that EC2 instance. We definitely want that, so I'll drop down Actions and set DNS hostnames to enabled. Now we'll get those hostnames, and it won't cause us pain later down the road. So now that we've created our VPC, we want to make sure the internet can actually reach it, which means we need to learn about internet gateways next. We have our VPC but no way to reach the internet, so we need an internet gateway. On the left-hand side, go to Internet Gateways and create a new one; I'm just going to call it bajor-igw, and adding IGW to the name doesn't hurt. Our internet gateway has been created, so we'll click through to it, and you'll see it's in a detached state. An internet gateway can only be attached to one specific VPC; it's a one-to-one relationship, so for every VPC you have one internet gateway. You can see it's detached and there's no VPC ID, so I'll drop this down, choose Attach to VPC, select Bajor, and attach it. Now it's attached and we can see the VPC ID associated. So we have an internet gateway, but that still doesn't mean things within our network can reach the internet, because we have to add a route to our route table. Closing this tab, you can see there's already a route table associated with our VPC, because it created a default route table for us.
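For reference, the same create-and-attach steps as a boto3 sketch; the VPC and route table IDs are placeholders for whatever your own Bajor VPC returns.

import boto3

ec2 = boto3.client("ec2")

# Create the internet gateway; it starts out detached.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]

# Attach it to the VPC -- a one-to-one relationship.
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-0123456789abcdef0")

# A route for 0.0.0.0/0 pointing at the IGW still has to be added to the
# route table before anything in the VPC can actually reach the internet.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)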
I'm just going to click through to that one here to show you, and you can see it's set as the main route table. But I want you to learn how to create route tables, so we're going to make one from scratch. Hit Create route table, and we'll name it our main route table, or internet route table, it doesn't really matter; we'll use RT to shorten it, then drop down, choose Bajor, and create that route table. Hit close and click off so we can see all of our route tables; here we have the main one that came with Bajor, and then the one we just created. If we click into our new route table, you can see that by default it has the local route covering the full scope of our network. Now I want to show you how to make this one the main route table, so we select it and choose Set as main route table; the main route table is the one that's used by default. Then we'll go ahead and delete the original default one, since we no longer need it. Next, select our new route table and edit its routes, and we're going to add one for the internet gateway. I'll enter 0.0.0.0/0 as the destination, which means take anything going to anywhere, then drop down, select Internet Gateway, choose Bajor's, and save the routes. Hit close, and now we have a gateway and a way for our subnets to reach the internet. So there you go. Now that we have a route to the internet, it's time to create some subnets so we have somewhere to actually launch our EC2 instances. On the left-hand side, go to Subnets, and right away you're going to see some subnets; these are the default ones created with your default VPC. You can see there are exactly six of them, exactly one for every availability zone in the region; North Virginia has six AZs, so you get six public subnets. The reason we know these are public subnets is that if we click on one and check auto-assign public IP, it's set to yes. If that's set to yes, any EC2 instance launched in the subnet gets a public IP address, and hence it's considered a public subnet. If we switch over to Canada Central, just to make a point: if you're in another region it will have a different number of availability zones, and Canada only has two, which is a bit sad, we'd love a third one there, and you'll see there's exactly one subnet per availability zone. So we'll switch back to North Virginia and proceed to create our own subnets. We want to create at least three public subnets if we can, because a lot of companies, especially enterprise companies, have to run in at least three availability zones for high availability; if one goes out and you only have one other, what happens if two go out?
So there's that rule of always having at least two additional AZs. We're going to create three public subnets and one private subnet; we're not going to create three private subnets, just because I don't want to be making subnets all day. Let's get to it. For the first subnet I'm going to name it bajor-public-a, select our VPC, choose us-east-1a, and give it a CIDR block of 10.0.0.0/24. Notice that this CIDR range is smaller than the one on the VPC; I know the number is larger, but in terms of how many IP addresses it allocates there are actually fewer, so you're taking a slice of the pie from the larger range. Just be aware that since the VPC is a /16, the subnet will always use a larger number than 16, which means a smaller range. We'll create our first public subnet and hit close. It's not public by default, because auto-assign public IP is set to no, so we'll go up to Modify auto-assign IP settings and enable auto-assign IPv4, and now it's considered a public subnet. We'll do the same thing for B and C. So bajor-public-b, choose us-east-1b, 10.0.1.0/24, create it, close, and enable auto-assign. Then the next subnet, bajor-public-c, us-east-1c, 10.0.2.0/24, create that one, close, and double check that auto-assign is set; it wasn't yet, so we'll modify that there. And then we'll create one more subnet, bajor-private-a, in us-east-1a, with 10.0.3.0/24; this is going to be our private subnet. So we've created all of our subnets. The next thing is to associate them with a route table; actually, we don't have to, because by default they use the main route table, so they're already automatically associated. But for our private subnet, we don't really want to use the main route table; we'd rather create a separate route table for our private subnets. So I'll create a new one, call it private-rt, drop down and choose Bajor, and hit close. The idea is that we don't need this subnet to reach the internet, so it doesn't make sense for it to have that internet gateway route, and we can add other things to it later on. So what I want you to do is change the association: edit the route table association for the private subnet and change it to our private route table. And now our route tables are set up, so we'll move on to the next step. Our subnets are ready, and now we can launch some EC2 instances so we can play around and learn some of these other networking components. Go to the top, type in EC2, and we'll go to the EC2 console.
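Before we launch instances, here's a condensed boto3 recap of the subnet layout we just built; the VPC and route table IDs are placeholders for whatever your own resources return.

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"           # the Bajor VPC (placeholder ID)

# Three public /24s across three AZs, plus one private /24.
layout = [
    ("10.0.0.0/24", "us-east-1a", True),   # bajor-public-a
    ("10.0.1.0/24", "us-east-1b", True),   # bajor-public-b
    ("10.0.2.0/24", "us-east-1c", True),   # bajor-public-c
    ("10.0.3.0/24", "us-east-1a", False),  # bajor-private-a
]

for cidr, az, public in layout:
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
    subnet_id = subnet["Subnet"]["SubnetId"]
    if public:
        # Auto-assign public IPs so instances launched here are reachable.
        ec2.modify_subnet_attribute(SubnetId=subnet_id,
                                    MapPublicIpOnLaunch={"Value": True})
    else:
        # Private subnet: associate it with the route table that has no IGW route.
        ec2.associate_route_table(SubnetId=subnet_id,
                                  RouteTableId="rtb-0private0123456789")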
We'll go to Instances on the left-hand side and launch ourselves a couple of instances. The first instance is going to be for our public subnet, so choose t2.micro, go next, and choose the Bajor VPC that we created, launching into public subnet A. We're also going to need a new IAM role, so I'm going to right-click here and create one, because we want to give the instance access to SSM, for Session Manager, and also to S3. Choose EC2 as the service, type in SSM and pick the managed policy at the top, then type in S3 and give it full access, go next, next, and name it my-bajor-ec2, then create the role. Now we have the role we need for our EC2 instance, so we just refresh the IAM role dropdown and choose my-bajor-ec2. We also want to provide a user data script for it to run; I have one pre-prepared that I'll provide to you, the public user-data.sh. If you take a peek at what it does, all it does is install an Apache server and serve up a static website page. Then we go to storage, nothing needs to change there, we don't need any tags, and for the security group we'll create a new one; I'll call it my-bajor-ec2-sg. We'll make sure we have access on HTTP port 80, because this is a website, and we'll restrict it down to just our IP; we might as well do the same for SSH. Then review and launch the instance. I already have a key pair created; you'll just have to create one if you don't have one. We'll launch that instance, and great, now we have an EC2 instance for our public subnet. Then we'll go ahead and launch another instance: Amazon Linux 2 again, choose t2.micro, and this time choose our private subnet. I do want to point out that the auto-assign public IP option here shows as disabled by default, because it inherits whatever the subnet's setting is, whereas on the first one, you might not have noticed, it was set to enable. We'll give this one the same role, my-bajor-ec2, and this time provide the other script, the private one; I'll open it up and show it to you. It doesn't actually need to install Apache, that's just an old leftover, so we'll remove that line. What it does is reset the password on the ec2-user to kaiwinn, after Kai Winn, a character from Star Trek: Deep Space Nine, and it enables password authentication so we can SSH into this instance using a password. That's all the script does.
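Stepping back for a moment: if you wanted to script the public instance launch instead of clicking through the wizard, it might look roughly like this with boto3. The AMI ID, key pair name, subnet, security group and instance profile below are placeholders standing in for the ones created in this walkthrough.

import boto3

ec2 = boto3.client("ec2")

# User data: install Apache and drop in a placeholder page (runs as root on first boot).
user_data = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
echo "<h1>Hello from Bajor</h1>" > /var/www/html/index.html
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # an Amazon Linux 2 AMI (placeholder)
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
    SubnetId="subnet-0123456789abcdef0",      # bajor-public-a (placeholder)
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IamInstanceProfile={"Name": "my-bajor-ec2"},
    UserData=user_data,
)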
So we'll choose that private script file, move on to storage, which is fine as-is, skip tags, and create a new security group. It's not strictly necessary, but I'm going to anyway, so I'll call it my-bayshore-private-ec2-sg, keeping Bayshore in the name so these all stay grouped together. It only needs SSH, since there's no website or anything running on this one and it has no internet access anyway. Review and launch, choose the key pair, and launch the instance. Now we just wait for these two instances to spin up, and then we'll play around with security groups and NACLs. So I just had a quick coconut water and now our instances are running; they don't usually take that long to start. We probably should have named them to make this easier, so let's figure out which is public and which is private. You can see right away that this one has a public DNS hostname and a public IP address, so that's the public one; I'll name it Bayshore public. The other one is the private one, so I'll name it Bayshore private. Just to reiterate: the public instance has a DNS name and a public IP, and for the private one nothing is set. Let's see if our website is working. Copy the public IP address, or the DNS name, it doesn't matter, and paste it into a new tab, and here we have our working website. If we check the private one, there's nothing to copy, so there's no way of reaching the website running on the private instance. It doesn't really make sense to run your website in a private subnet anyway, but it makes for a very clear example. Now that we have these two instances, it's a good opportunity to look at security groups. The reason we were able to reach the public instance is that our security group has an inbound rule on port 80, which is what websites run on, and it allows My IP, so my address was let in. To illustrate what happens when my IP changes, I have a VPN at the top here; it's the kind of service a lot of people buy so they can watch Netflix in other regions, though I use it for demos like this, not for that, so don't get any ideas. I'm going to turn it on and switch to what I think is a Brazilian IP. Once it connects, if I try to load the site again, it shouldn't work, and sure enough the page just hangs, because I'm no longer coming from the allowed IP. That's how security groups work. I'll turn the VPN back off.
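If you wanted to add that kind of locked-down rule from the CLI instead of the console, it would look roughly like this; the security group ID and IP address are placeholders:

    # allow HTTP only from a single IP address (/32 means exactly one address)
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 80 \
      --cidr 203.0.113.10/32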
Since I should be back on my original IP, the page should resolve instantly again, and it does. So that's how security groups work for inbound rules. For outbound rules, which cover traffic going out to the internet, they're almost always left wide open as 0.0.0.0/0, because you want the instance to be able to download things and so on; that's pretty normal. Now that we've seen that, let's compare NACLs to security groups. If we open this security group up, you can see that security groups can only allow things: everything is denied by default and you only ever add allow rules, so you can't add an explicit deny rule. Where NACLs are very useful is that you can use them to block specific IP addresses, or IP ranges if you like, and you can't do that with a security group. How would you even go about it? If I wanted to block just my own IP address with a security group, I'd have to allow every other IP address in the world except mine, and you can see what an undue burden that would be. So let's see if we can get a NACL to block just my IP address. Security groups are associated with the actual EC2 instances, whereas NACLs are associated with subnets. So to block my IP for this EC2 instance, we need to determine which subnet it runs in, which is Bayshore-public-a, and then find the NACL associated with that subnet. Going up to Subnets, I'll select public-a and see which NACL it uses, and here it is, with some rules we can change. Let's try blocking my IP address; I'll grab it from the security group rule. And just to note: see the /32 on the end? That's a CIDR range of exactly one IP address; /32 is how you specify a single address. I'm going to edit the NACL here; I'm opening it directly because I wasn't getting the edit options otherwise, I don't know why. Go to inbound rules and add a new rule. Rules are evaluated from the lowest number to the highest, so I'll add rule number 10, put my IP in as the CIDR range, set it for port 80, and make it an explicit Deny. That should stop me from reaching that EC2 instance. Back on the instances page, grab the public IP, paste it into the browser, and I no longer have access; the NACL is blocking it. So that's how you block individual IP addresses. Now I'll edit the NACL again, remove that rule, and hit save, then go back and refresh the page, and I have access again. So those are security groups and NACLs. The next thing to figure out is how we actually get access to the private subnet, and to the private EC2 instance we have sitting in it.
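For reference, the same deny rule could be added from the CLI, something along these lines; the NACL ID and IP address are placeholders:

    # NACL rules are evaluated lowest rule number first; deny HTTP from one IP
    aws ec2 create-network-acl-entry \
      --network-acl-id acl-0123456789abcdef0 \
      --ingress --rule-number 10 \
      --protocol tcp --port-range From=80,To=80 \
      --cidr-block 203.0.113.10/32 \
      --rule-action deny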
The private instance doesn't have a public IP address, so there's no direct way to reach it; we can't just SSH in from the internet. This is where we need a bastion, so let's go set one up. Launch a new instance (I'm opening a new tab so I keep the old one handy), hit Launch Instance, go to the AWS Marketplace, and type in Bastion. There are some options here, including a free Bastion Host SSH AMI, but I'm going to use Guacamole. It has an associated cost, although there is a trial version, so you can get away without paying anything for it. So I'll select Guacamole. Any time you use something from the Marketplace, the listing generally has instructions: if you click View Additional Details and scroll down to the usage information, you'll find usage instructions with more detail. I've done this a few times, so I already have that page open and remember where everything is. Hit Continue and start setting up the instance. This one doesn't let you go down to a micro, so we need a small, which means there's an associated cost. On the configure step, we want it in the same VPC as our private instance, and it has to be launched in a public subnet, so make sure you select one of the public ones here. We also need a new IAM role; this is part of the Guacamole instructions, because you have to give it some access so it can auto-discover instances. The instructions walk you through making that IAM role; we could launch a CloudFormation template to create it, but I'd rather make it by hand. So grab the policy document from the instructions, open a new tab, and make your way over to IAM. In IAM we'll create that policy: go to JSON, paste it in, review the policy, and name it what they suggest, GuacAWS, which seems fine to me. You can see it grants permissions for CloudWatch and STS. Go ahead and create the policy; in my case it says it already exists because I've done this before, so I'll skip that step, but you'll get through it no problem. Once you have the policy, create a new role: it's for the EC2 service, go next, and attach EC2 read-only access, and also attach that new GuacAWS policy. So I'll type AWS into the search here.
The search is giving me a hard time, so I'll just copy and paste the whole policy name, and there it is. So those are the two policies you need attached. Then name the role something like my-guac-bastion (I may have spelled bastion wrong there, but that doesn't really matter) and create it. That role now exists, so back in the launch wizard, refresh the IAM role dropdown and select my-guac-bastion. There's nothing to do on storage, skip tags, and on security groups you can see the listing comes with some default rules, so we'll leave those alone and launch the EC2 instance. It takes a bit of time, and as soon as it's done we'll come back and use this bastion to get into our private instance. So our bastion is now provisioned. Let's type Bastion in as its name so we don't lose track of it, and grab either the DNS name or the public IP; I'll take the DNS one. We get a connection-is-not-private warning, which is fine because we're definitely not using SSL here, so hit Advanced and click through to proceed. It may also ask for a browser permission; definitely allow that, since it's needed for the more advanced Guacamole functionality, which we might touch on at the end. Now we need the username and password. There's a default admin user, guacadmin, and the password is the instance ID; this is all in the usage instructions, I'm just speaking you through it. Hit login, and it has auto-discovered the instances in the VPC it was launched into, so here we have Bayshore private. Let's connect to it. As soon as I click, it opens a shell, and we can attempt to log in: the user is ec2-user, and the password is the one our script set, kaiwin. And we're in. That's how we gain access to our private instance. Before we start doing other things inside this private EC2 instance, I want to touch on some of the functionality of Guacamole, and why you might actually want a bastion at all. It's a hardened instance, it lets you authenticate via multiple methods, so you can require multi-factor authentication to use it, it can do screen recordings, so you can be sure what people are up to, and it has built-in audit logs and so on. So there are definitely good reasons to use a bastion, though we can also use Session Manager, which does a lot of this for us within AWS, with the exception of screen recording. Anyway, now that we're in the instance, let's play around and see what we can do. The first thing I want to show you is that it has no internet access: if I ping something like google.com, the command just hangs and we never get a ping back, because there is no route to the internet.
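Quick aside before we fix that missing route: if you'd rather create the bastion's role from the CLI than click through IAM, the shape of it is roughly this. The role name is just the one we picked above, AmazonEC2ReadOnlyAccess is the AWS managed policy we attached, and GuacAWS stands in for whatever the marketplace listing's policy ends up being called in your account:

    # trust policy so EC2 can assume the role
    cat > ec2-trust.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "ec2.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    }
    EOF

    aws iam create-role --role-name my-guac-bastion \
      --assume-role-policy-document file://ec2-trust.json
    aws iam attach-role-policy --role-name my-guac-bastion \
      --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess
    # plus: attach the GuacAWS customer managed policy from the listing's instructions

Now, back to that failed ping.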
The way we get a route to the internet is by creating a NAT instance or a NAT gateway. Generally you want to use a NAT gateway; there are cases for NAT instances, for example you can save some money by managing a NAT instance yourself, but we're going to use a NAT gateway because that's the way AWS wants you to go. Back in the console we're still on EC2, so we need to switch over to VPC, because that's where NAT gateways live. On the left-hand side, scroll down to NAT Gateways and create one. NAT gateways do cost money, though they're not terribly expensive, and we'll tear it down at the end. The idea is that the NAT gateway has to be launched in a public subnet, so I'm going to put it in Bayshore-public-a; it doesn't matter which one as long as it's one of the public ones. We also need an Elastic IP. I wasn't sure whether it was strictly required, but when I try to create the gateway without one it turns out it is, so just hit Create Elastic IP, which gives you a static IP address that never changes. With the EIP associated with our NAT gateway, create it, and it looks like it's been created. Once the NAT gateway exists, the next thing is to edit our route table so there's actually a way for the private instance to reach the internet. We created a private route table specifically for our private subnet, so edit its routes and add a route to the NAT gateway: destination 0.0.0.0/0, target the NAT gateway, and save the route. Now the NAT gateway is configured, and there should be a path for our instance to reach the internet. Back in our private EC2 instance, ping google.com again, and this time we get pings back, so that's all we had to do. So why would our private EC2 instance need to reach the internet? We don't want inbound traffic, but we definitely want outbound, because we'd probably want to update packages on the instance; if we ran sudo yum update, it wouldn't work without an outbound connection. So it's a way of getting internet access only for the outbound things we need. Now that we've set up an outbound connection to the internet, let's talk about how we can reach other AWS services from our private EC2 instance. S3 is a very common one, so I'm going to type S3 at the top, open it in a new tab, and try to access some S3 files. I should already have a bucket in here called exampro-000.
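The console steps we just did map onto a few CLI calls if you ever want to script them; roughly the following, with the allocation, subnet, route table and gateway IDs as placeholders:

    # allocate an Elastic IP and create the NAT gateway in a public subnet
    aws ec2 allocate-address --domain vpc
    aws ec2 create-nat-gateway \
      --subnet-id subnet-0aaa111122223333a \
      --allocation-id eipalloc-0123456789abcdef0

    # send the private route table's internet-bound traffic through the NAT gateway
    aws ec2 create-route --route-table-id rtb-0bbb444455556666b \
      --destination-cidr-block 0.0.0.0/0 \
      --nat-gateway-id nat-0123456789abcdef0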
I have some images already in that bucket that we should be able to access, and the IAM role we attached gives the instance permission to get at them. The AWS CLI comes pre-installed on the instance, so we can type aws s3; if we wanted to copy a file locally we'd use cp, but ls is enough here, so aws s3 ls, and there are our buckets. I don't think we need to go as far as copying things around, but you can definitely see we have a way of reaching S3 from the CLI. So, what would happen if we removed the NAT gateway? Would we still be able to reach S3? Let's go find out. I think you know the answer, but let's do it anyway, and then I'll show you a way you can still access S3 without a NAT gateway. So we'll delete the NAT gateway; you can't just turn them off, you have to delete them, and then wait for that to finish. Once it has deleted after a few minutes, hit the refresh button, because sometimes it still says deleting when it's actually done, and you don't want to be waiting around for nothing. Back on our EC2 instance, clear the screen, and the question is whether we can still reach S3 via the CLI. I run aws s3 ls, and I'm waiting, waiting, waiting, and it's just never going to complete, because the instance no longer has any way to reach S3. The way it works when you use the CLI is that the request goes out of the AWS network to the internet and then comes back into the AWS network to reach S3, and since there's no longer any outbound path to the internet, there's no way to get to S3. That seems a little silly, because you'd say, well, why wouldn't you just keep the traffic inside the network? We're already on an EC2 instance inside AWS, and S3 is inside the AWS network too. And that brings us to VPC endpoints, which are how we create our own little private tunnel within the AWS network so the traffic doesn't have to leave for the internet. So let's create an endpoint and see if we can reach S3 without any outbound connectivity. On the left-hand side choose Endpoints and create a new endpoint. This is where we select the service we want, which is going to be S3, but just before that, select the VPC you want this for down below, then scroll down and choose the S3 service, which gives us the Gateway option. Then we need to configure the route table so the connection actually gets routed: it asks which route tables to put it in, and we want the private one, because that's where our private EC2 instance lives. Down below there's a policy, full access by default, which is fine, so leave it as is and hit Create Endpoint. Hit close, and it looks like our endpoint is available pretty much immediately. So now let's go find out whether we actually have access to S3, back over in our private EC2 instance.
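Again, the CLI equivalent of that gateway endpoint is small; something like this, with the region, VPC and route table IDs as placeholders:

    # S3 gateway endpoint attached to the private route table
    aws ec2 create-vpc-endpoint \
      --vpc-id vpc-1234567890abcdef0 \
      --service-name com.amazonaws.us-east-1.s3 \
      --route-table-ids rtb-0bbb444455556666b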
I'll just hit the up arrow to re-run aws s3 ls, and look at that: we've created our own private connection to S3 without ever leaving the AWS network. So we had a fun time playing around with our private EC2 instance, and we're pretty much wrapped up here. There are other things under VPC, but at the associate level there isn't much reason to get into all of them. I do want to show you one more thing, though, which is VPC Flow Logs. Go over to your VPC and, from the actions up top, create a flow log. Flow logs capture the traffic going through your VPC, and it's just nice to know how to create one. You can set the filter to Accept, Reject, or All; I'm setting it to All. It can be delivered either to CloudWatch Logs or to S3, and CloudWatch is a very good destination for it. To deliver there we need a destination log group, and I don't have one, so let's go over to CloudWatch in a new tab and create one: Actions, Create log group, and call it something like bayshore-vpc-flow-logs. Create it, come back here, hit refresh, and that destination should now be available to us, and there it is. We also need an IAM role associated with this so it has permission to publish to CloudWatch Logs, so I'll pop back with those credentials in a moment. I wanted to collect a little bit of flow log data first so I could show you what it looks like. Under our VPC you can see flow logs are enabled from the one we just created, and to generate some data I took the public instance's IP address and refreshed its web page a few times; I don't know if we even looked at the actual page earlier, but here it is. Then over in CloudWatch, find the log group, and we should have some log streams. Opening one up, we have the raw records; I'll switch the view to text, and you can see the source and destination IP addresses, ports and protocol, and whether the traffic was accepted, along with other details. So that's just a quick peek into flow logs. Now that we're done with the VPC section, let's clean up whatever we created so we're not incurring any costs. Make your way over to EC2 instances; you can easily filter down to the instances in that VPC by filtering on VPC ID and selecting the Bayshore VPC. These are the three instances that are running, and I'm going to terminate them all, because we don't want to burn up our free credits or incur cost from that bastion. Hit terminate and they'll shut down. We also still have that VPC endpoint running, and double-check that your NAT gateway isn't still there.
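One more aside before we finish cleaning up: if you prefer doing that flow log setup from the CLI, it's roughly the following. The log group name is the one we picked; the VPC ID and the role ARN are placeholders, and that role has to allow writing to CloudWatch Logs:

    aws logs create-log-group --log-group-name bayshore-vpc-flow-logs

    aws ec2 create-flow-logs \
      --resource-type VPC --resource-ids vpc-1234567890abcdef0 \
      --traffic-type ALL \
      --log-group-name bayshore-vpc-flow-logs \
      --deliver-logs-permission-arn arn:aws:iam::123456789012:role/bayshore-flow-logs-role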
Back in the VPC section, we have our gateway endpoint for S3, so go ahead and delete that; I don't believe it costs us anything, but it doesn't hurt to get it out of the way. Also check Elastic IPs: we did create an EIP when we created the NAT gateway, and EIPs that aren't being used do cost money, so go ahead and release it. Double-check under NAT Gateways that nothing is still running; we deleted ours previously, so we're good. We can also attempt to delete the other bits and pieces; they're not as important and it might throw an error if there are dependencies, but let's see. Nope, they all deleted, great. Then we have our route tables, so delete those two, and we can get rid of the internet gateway: find it, detach it, and then delete it. Finally, we'll attempt to delete the VPC itself; if we haven't deleted everything it depends on it will complain, maybe about security groups, but we'll find out in a second. And it just deleted for us, so we're all cleaned up. There you go. Hey, this is Andrew Brown from ExamPro, and we are looking at Identity and Access Management, IAM, which manages access for AWS users and resources. So now it's time to look at the IAM core components, and the first of those are the identities: users, groups, and roles. Let's go through them. A user is an end user who can log into the console or interact with AWS resources programmatically. A group is what you get when you take a bunch of users and put them into a logical grouping so they have shared permissions; that could be administrators, developers, auditors, whatever you want to call them. Then you have roles: roles have policies associated with them, and the policies are what actually hold the permissions a role grants. And down below you have policies, which are JSON documents defining the rules for which permissions are allowed. Those are the core components, and we'll get into each of them in more detail next. Now that we know the core components, let's talk about how we can mix and match them. Starting at the top, we have a bunch of users in a user group, and if we want to apply permissions en masse, we bundle the permissions up as policies and attach them to that group, so every user in it gets the same permissions. That's great for administrators, auditors, or developers, and it's generally the way you want to use IAM when assigning permissions to users. You can also attach policies directly to an individual user, and there's a special case of that called an inline policy, which lives directly on that one user. Why would you do this? Maybe there's exactly one action you want to grant this user, for a temporary amount of time, and you don't want to create a managed policy because it's never going to be reused for anybody else. There are use cases for that, but generally you want to stick with the group approach at the top.
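To make that group-based pattern concrete, here's roughly what it looks like from the CLI, using a made-up user name and one of the AWS managed policies:

    # a developers group whose members all share PowerUserAccess
    aws iam create-group --group-name developers
    aws iam attach-group-policy --group-name developers \
      --policy-arn arn:aws:iam::aws:policy/PowerUserAccess

    aws iam create-user --user-name harry.kim
    aws iam add-user-to-group --group-name developers --user-name harry.kim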
A role can have multiple policies attached to it, and a role can also be attached to certain AWS resources. There are cases where resources have inline policies attached directly to them, and cases where roles are attached to, or somehow associated with, resources. If you were taking the AWS security certification, this stuff matters in detail, but for the associate and professional levels you just need to conceptually know what you can and cannot do. In IAM you have different types of policies. The first is managed policies, which are created by AWS as a convenience for the most common permissions you might need, for example AmazonEC2FullAccess. You can tell it's an AWS managed policy because it says it's managed by AWS, and an even further indicator is the little orange box icon. Then you have customer managed policies, which are created by you, the customer; they're editable, whereas AWS managed policies are read-only, they're marked as customer managed, and they don't have the orange box. Last are inline policies. You don't really manage inline policies, because they're one-and-done: they're intended to be attached directly to a user or directly to a resource, and they can't be applied to more than one identity or resource. So those are your three types of policies. Now it's time to actually look at a policy document and walk through its sections so we can fully understand how these things are put together. The first thing is the Version, which is the policy language version; if this changes, it means all the rules here could change, so it doesn't change very often. You can see the last time was 2012, so it will be years until they change it, and if they did, the changes would probably be minor. Then you have the Statement, which is just a container for the other policy elements; here I have an array, so there are multiples, but if you didn't want multiples you could drop the square brackets and have a single statement element. Going into an actual statement element, the first thing is the Sid, which is optional and is just a way of labeling your statements; it probably stands for statement identifier. Then you have the Effect, which can be either Allow or Deny, and that sets whether the rest of the statement grants or blocks access. Next is the Action: actions can be listed individually, like the single IAM action here, or you can use an asterisk to select everything under a service, like s3:*, and these are the actual actions the policy will allow or deny. In this example we have a deny policy, denying all access to S3 for a very specific user, which gets us into the Principal. The Principal is a conditional field as well: you can specify an account, user, role, or federated user to which you would like to allow or deny access, so here we're really saying, hey, Barkley, you're not allowed to use S3. Then you have the Resource, which is the actual thing we're allowing or denying access to; in this case, a very specific S3 bucket.
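Pulling those elements together, the statement we've walked through so far would look something like the document below. The account ID, user name and bucket name are purely illustrative, and the one element still missing is the condition, which is up next:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyBarkleyS3",
          "Effect": "Deny",
          "Principal": { "AWS": "arn:aws:iam::123456789012:user/barkley" },
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::examplebucket/*"
        }
      ]
    }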
The last element is the Condition, and what you can put in a condition varies based on the resource; the one here does something specific, but the point is just that conditions exist. So there you go, that is the makeup of a policy; if you can master these documents it will make your life a whole lot easier, but just learn what you need to learn. You can also set up password policies for your users: the minimum password length, the rules for what makes a good password, and password rotation, so passwords expire after X days and the user must then reset them. Just be aware that you have the ability to enforce password rules. Let's take a look at access keys, because this is one of the ways you interact with AWS programmatically, either through the AWS CLI or the SDK. When you create a user and allow programmatic access, an access key is created for you, which is an access key ID and a secret access key. One thing to note is that a user can only have up to two access keys. Down below you can see we have one; as soon as we add a second, the grey Create Access Key button vanishes, and if we want more we have to remove a key first. So just be aware that that's what access keys are, you can make them inactive, and you're only allowed two. Let's quickly talk about MFA. MFA can be turned on per user, but there's a caveat: the user has to be the one who turns it on, because turning it on means connecting it to a device, and your administrator is not going to have the user's device. So there is no option for an administrator to go in and simply switch on MFA for someone; it can't be enforced directly from an administrator or root account. What the administrator can do, if they want, is restrict access to resources so that only people authenticated with MFA can use them. So you can't force the user account itself to have MFA, but you can definitely restrict access to API calls that way. Hey, this is Andrew Brown from ExamPro, and we are going to do the IAM follow-along, so let's make our way over to the IAM console; just go up to Services, type in IAM, and we'll get right to it. Here I am on the IAM dashboard, and there are a couple of things AWS wants us to do: set MFA on our root account, and apply an IAM password policy so our passwords stay secure. Let's take that advice and go through it. I'm logged in as the root user, so we can go ahead and set MFA: drop down the root user menu, go to Manage MFA, get past the general getting-started disclaimer, which I never want to see again, so I'll hide it, then go to MFA and activate it. We have a few options: a virtual MFA device, which is what you're most likely going to use, on a mobile device or computer; or a U2F security key, like a YubiKey, which I actually have but won't use here; it's a physical device that holds the credentials, so you can carry the key around with you.
And then there are other hardware MFA devices, but we're going to stick with virtual MFA, so hit Continue. You'll need to install a compatible app on your mobile phone; if you look at the supported list, for Android or iPhone you have Google Authenticator or Authy for two-factor authentication, so go install one of those. When you're ready, show the QR code; I'm clicking that and showing it to you here, and now you need to pull out your phone. I know you can't see me doing this, but I'm doing it right now, and I'm not too worried about you seeing the code, because I'm going to change this MFA setup afterwards, so if you decide to add it to your own phone you won't get very far. In the authenticator app, hit plus, scan the barcode, point the camera at it, and it saves the secret, so it's now added to Google Authenticator. With it in the app, you need to enter two consecutive MFA codes, which is a little confusing and took me a while to figure out the first time I used AWS. The idea is that you enter the first code you see, which for me is 089265, then wait for the little countdown circle to expire so a new code appears, which takes a moment, and enter that second code, 369626. It's not the same number; it's two consecutive codes. Hit Assign MFA, and MFA is now set up on my phone, so when I log in it will ask me for an additional code, and my root account is protected. Back on the dashboard, let's move on to password policies, so take the recommendation down there and manage the password policy. A password policy lets us enforce rules on our users' passwords to make them a lot stronger: require at least one uppercase letter, one lowercase letter, at least one number, and a non-alphanumeric character; enable password expiration, so after 90 days they have to change the password; optionally make password expiration require an administrator reset, so users can't just reset it themselves; allow users to change their own password; and prevent password reuse, so for the next five passwords they can't reuse the same one. I'd probably set that to a big number so there's very little chance they reuse one. Hit Save Changes, and now we have a password policy in place. Finally, to make it easier for users to log into the IAM console, you can provide a customized sign-in link; by default it's built from the account ID, but we want something nicer.
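Small aside before we rename that sign-in link: the same password policy settings can also be applied from the CLI in one call, roughly like this, where the specific numbers are just illustrative choices similar to the ones above:

    aws iam update-account-password-policy \
      --minimum-password-length 8 \
      --require-uppercase-characters --require-lowercase-characters \
      --require-numbers --require-symbols \
      --max-password-age 90 \
      --password-reuse-prevention 5 \
      --allow-users-to-change-password

Anyway, back to that sign-in link.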
You can change the alias to whatever you want, so I'll call it deepspacenine, if I spelled that right, I think so. Now we have a more convenient link we can use to log in, so I'll copy it for later. You can name it whatever you like, but much like picking a Yahoo or Gmail address, it has to be unique, so you won't be able to use deepspacenine for as long as I have it. Next, let's move on to actually creating a user. Here I am under the Users tab in IAM, and we already have one existing user that I created for myself when I first set up this account. We're going to create a new user so we can learn the process, so fill in the name, Harry Kim, who is a character from Star Trek: Voyager. You can create multiple users in one go, but I'm just making one. I'm giving him programmatic access and also access to the console so he can log in, with an auto-generated password so I don't have to worry about it, and you can see it will require him to reset his password when he first signs in. Moving on to permissions, we usually want to put users in a group; we don't have to, but it's highly recommended. I already have one called admin, which has AdministratorAccess, but I'm going to create a new group called developers and give it PowerUserAccess, which isn't full access but gives quite a bit of control within the account. Create the group, so now I have a new group, add Harry to it, and proceed to the next step. Tags we can ignore, then review, and create the user Harry Kim. What it's done is create a secret access key and a password, so if Harry wants programmatic access he can use those, and we could send an email with this information along to him. Close that, and let's poke around Harry Kim for a bit. Just before we jump in, you can see he has never used his access key, his password was last used, well, set, today, there's no activity, and he does not have MFA. Clicking into Harry Kim, we can see he has policies applied to him from a group, and we can also attach permissions to him individually: we can grant permissions via a group, copy permissions from an existing user, or attach policies directly to him. So if we wanted to give him S3 full access we could do it here and apply those permissions, and now he has them. We also have the ability to add inline policies, so in here we can add whatever we want using the visual editor. I'm just trying to think of something to give him access to; some EC2, say. He already has access because he's a power user, but we're just going through the motions here. So I'll select EC2, give him just List access, and hit Review policy.
So we have that access selected, and we can name the policy, something like ListHarryEC2. When we go to create it, we get a maximum-policy-size-exceeded error for Harry Kim; I guess selecting everything put a lot of stuff in there, so I'll go back to the previous step and pare the selection down. Again, this is just for show, so it doesn't really matter what we select. Review the policy again and create it. So now he has an inline policy, a directly attached managed policy, and a policy that comes from a group. That's policies; we can also see what group he belongs to and add him to additional groups, tags are tags, and then we have his security credentials. Here we could change whether he has console access, fiddle with his password or reset it for him, and manage MFA, so we could set MFA for this user. Normally you want the user to do that themselves, because as an administrator you'd need their phone to set it up; although if you had a YubiKey, you could set up the YubiKey for them and then hand them the key. Then we have the access keys: you can have up to two access keys per user. I can go ahead and create a second one, and it's actually a good habit to create both of them. For security purposes, and this comes up if you take the AWS security certification, one way of compromising an account involves taking up that spare access key slot. You can also make keys inactive, but even then you can't create additional keys; you have to hit the X to delete one before you can create more. If we were using CodeCommit, we could upload an SSH key here, or generate credentials for CodeCommit the same way. Then there's Access Advisor, which gives you an overview of what services the user has access to and when they last accessed them, and there's the ARN for Harry Kim, which is something we might want to use elsewhere. So that's the full gamut; I'm going to go ahead and delete Harry, because we're pretty much done here. And there we are, that was the run-through for users. To wrap up this section, we'll cover roles and policies. First, policies: here we have the big list of policies that are managed by AWS. It says they're AWS managed over here, and you can also tell by the CamelCase names and that nice little orange box. These are policies you cannot edit, they're read-only, but they're a quick and fast way to start giving access to your users. If we take a look at one of them, say the EC2 read-only access policy, we can click into it and see a summary of what it grants, and if we want to see the real policy we can view the full JSON. We also have some additional tabs, for example to see who is actually using this policy.
We can see there's a role I created for a different follow-along that's using this policy right now. There are also policy versions: a policy can have revisions over time, up to five of them, so if you ever need to roll back a policy, or just want to see how it has changed, you can do that here. And there's the Access Advisor tab, which tells us who is utilizing the policy; again, for Amazon ECS we see that custom role I created. Let's copy the JSON here so we can go try making our own policy, because we created a policy for Harry Kim with the visual editor, and it would be nice to create one with JSON. Go back to Policies, create a new policy, go to the JSON editor, and paste it in. We cover this in the lecture content, but you have to specify the Version, then you need a Statement, and a statement can contain multiple elements that you define. Here we have one with an Allow effect, for the EC2 describe actions, on all resources. Go ahead and create that, name it something like my-read-only-ec2-access, and create the policy. If I search for it, you can see it's customer managed: it doesn't have the orange box, and it says customer managed. Now let's go ahead and create a role. When you create a role, you first choose who or what the role is for. We'll say this one is for EC2, but you could also set it up for another AWS account, which is how you create cross-account roles, or for a web identity or SAML. We'll stick with AWS services, choose EC2, and now we choose its policies; you can attach multiple, so I'll pick my read-only EC2 policy and, say, an S3 one as well. I'll skip tags, go to the review step, name it my-role, confirm it shows the policies that were attached, and create the role. Now that we have it, we can attach it to a resource, such as assigning it when we launch an EC2 instance, or it can be used by a user, but there you go. We're on to the IAM cheat sheet, so let's jump into it. Identity and Access Management is used to manage access for users and resources. IAM is a universal (global) system, applied to all regions at the same time, and it's a free service. The root account is the account initially created when AWS is set up, and it has full administrator access. New IAM users have no permissions by default until permissions are granted. New users get assigned an access key ID and secret when you give them programmatic access. Access keys are only used for the CLI and SDK; they cannot be used to access the console. Access keys are only shown once, when created; if lost, they must be deleted and recreated. Always set up MFA on your root account.
Users must enable MFA on their own; administrators cannot turn it on for them. IAM lets you set password policies to define minimum password requirements and rotate passwords. Then you have the IAM identities: users, groups, and roles. Users are the end users who log into the console or interact with AWS resources programmatically. Groups are a logical grouping of users that all share the same permission levels, so think administrators, developers, auditors. Roles bundle up permissions via policies, and those permissions are then picked up by the users, groups, or services that use the role. Then you have policies: a policy is a JSON document that grants permissions for specific users, groups, or roles to access services, and policies are generally attached to IAM identities. There are a few varieties of policies: managed policies, which are created by AWS and cannot be edited; customer managed policies, which are created by you and are editable; and inline policies, which are attached directly to a user. So there you go, that is IAM. Hey, this is Andrew Brown from ExamPro, and we are looking at Amazon Cognito, which is a decentralized way of managing authentication. Think sign-up and sign-in integration for your apps, and social identity providers like connecting with Facebook or Google. Amazon Cognito actually does multiple different things, and we're going to look at three in particular: Cognito user pools, which are a user directory that can authenticate against identity providers; Cognito identity pools, which provide temporary credentials for your users to access AWS services; and Cognito Sync, which syncs user data and preferences across devices. To fully understand Amazon Cognito, we have to understand the concepts of web identity federation and identity providers, so let's go through the definitions. Web identity federation is the exchange of identity and security information between an identity provider and an application. An identity provider is a trusted provider of your user identity that lets you authenticate in order to access other services. An identity provider could be Facebook, Amazon, Google, Twitter, GitHub, or LinkedIn; you commonly see this on websites that let you log in with a Twitter or GitHub account, and in that case Twitter or GitHub is the identity provider. They're generally powered by different protocols: whenever you're doing this with social accounts it's going to be OAuth, powered by OpenID Connect, which is pretty much the standard now, and there are other kinds of identity providers too. If you need a single sign-on solution, SAML is the most common one. The first thing we're looking at is Cognito user pools, which are the most common use case for Cognito: a decentralized directory of your users. It handles actions such as sign-up, sign-in, account recovery (like resetting a password), and account confirmation (like confirming your email after sign-up), and it can connect to identity providers. So it has its own email-and-password sign-in form, but it can also leverage Facebook, Amazon, and other providers if you want.
The way it persists a session after authentication is by generating a JWT (JSON Web Token); that's how the connection is maintained. Let's look at some of the options so the utility of user pools really sinks in. On the left-hand side there are a bunch of settings. Under attributes you can decide what the primary sign-in attribute should be, a username or an email address and phone number, and set the conditions around that, like whether someone can sign in if their email address hasn't been verified. You can set restrictions on passwords, like the length and whether special characters are required, and choose which attributes must be collected at sign-up, like birthdate or email. It can turn on MFA, so if you want multi-factor authentication it's a very easy way to integrate it. If you want to run user campaigns, the kind of thing you might otherwise do with MailChimp, you can easily integrate Cognito with Pinpoint, which handles campaigns. And you can override a lot of functionality using Lambda: any time a sign-up, sign-in, or password recovery is triggered, there's a hook that can invoke a Lambda function to do something with it. Those are some of the things you can do with Cognito user pools, but the most important thing to remember is that it's a way of decentralizing your authentication. Now it's time to look at Cognito identity pools. Identity pools provide temporary AWS credentials to access services such as DynamoDB or S3, so identity pools can be thought of as the actual mechanism authorizing access to AWS resources. The idea is that you create an identity pool, you say who's allowed to generate those AWS credentials, and then the application uses the SDK to generate the credentials and access AWS services. To really hit that home, I have some screenshots. First you choose a provider: it can be an authenticated provider, so Cognito or a variety of others, or it can be unauthenticated, which is also an option. After you create the identity pool, there's an easy way to use the SDK: you pick your platform from the dropdown and the sample code is there, ready to go get those credentials. And if you're wondering whether I put my real identity pool ID in that screenshot, it's not real; I always go in and replace these values, so if you're watching these videos and see things like this, they've been replaced. We'll just touch on one more, which is Cognito Sync. Sync lets you sync user data and preferences across all devices with one line of code. Cognito uses push notifications to push those updates and synchronize the data, and under the hood it uses Simple Notification Service to push the data to devices. The data, meaning user data and preferences, is key-value data, and it's stored with the identity pool; that's what gets pushed back and forth. But the only thing you really need to know is what it does: it syncs user data and preferences across all devices with one line of code. So we're on to the Amazon Cognito cheat sheet; let's jump into it. Cognito is a decentralized, managed authentication system.
When you need to easily add authentication to your mobile or desktop apps, think Cognito. Now the user pool points: a user pool is the user directory; it allows users to authenticate using OAuth 2.0 identity providers such as Facebook, Google, or Amazon to connect to your web applications, and a Cognito user pool is in itself an identity provider as well, so it can be on that list too. User pools use JWTs to persist authentication. Identity pools provide temporary AWS credentials to access services such as S3 or DynamoDB. Cognito Sync can sync user data and preferences across devices with one line of code, powered by SNS. For web identity federation, and they're not going to drill you on these definitions, but you should know them: it's the exchange of identity and security information between an identity provider and an application. An identity provider is a trusted provider of your user identity that you authenticate against in order to access other services. OIDC is a type of identity provider that uses OAuth, and SAML is a type of identity provider used for single sign-on. So there you go, we're done with Cognito. Hey, this is Andrew Brown from ExamPro, and we're going to take a look at the AWS Command Line Interface, also known as the CLI, which lets you control multiple AWS services from the command line and automate them through scripts. The CLI lets you interact with AWS from anywhere simply by using a command line. Down below I have a terminal, and I'm using the AWS CLI, whose commands all start with aws. To get it onto your computer, AWS provides a Python-based installer, and once it's installed you'll have the aws command available in your terminal, followed by a bunch of different subcommands. Things you can do from the CLI include listing buckets, uploading data to S3, launching, stopping, starting, and terminating EC2 instances, updating security groups, and creating subnets; there's an endless amount. I also want to point out a couple of very important flags, the things with the double hyphen before a name, which change the behavior of CLI commands. There's --output, which controls what's returned to us, with the option of JSON, table, or plain text, and there's --profile, which, if you switch between multiple AWS accounts, lets you specify which set of credentials from your credentials file to use, so you can quickly run CLI actions under different accounts. Next we'll take a look at the AWS Software Development Kit, known as the SDK, which lets you control multiple AWS services using popular programming languages. To understand what an SDK is, let's define it: it's a set of tools and libraries that you use to create applications for a specific software package. In the case of the AWS SDK, it's a set of API libraries that let you integrate AWS services into your applications, so that fits the definition pretty well. The SDK is available for the following languages: C++, Go, Java, JavaScript, .NET, Node.js, PHP, Python, and Ruby. And I have an example of a couple of things I wrote with the SDK.
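Just to make those two flags concrete before we get to the SDK examples, a couple of throwaway CLI calls might look like this; the profile name here is hypothetical and just has to match a section in your credentials file:

    # render the result as an ASCII table instead of JSON
    aws ec2 describe-instances --output table

    # run the same kind of command under a different account's credentials
    aws s3 ls --profile enterprise-d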
So now we're going to take a look at the AWS Software Development Kit, known as the SDK, which allows you to control multiple AWS services using popular programming languages. To understand what an SDK is, let's define it: it's a set of tools and libraries that you use to create applications for a specific software package. In the case of the AWS SDK, it's a set of API libraries that let you integrate AWS services into your applications, so it fits that definition pretty well. The SDK is available for the following languages: C++, Go, Java, JavaScript, .NET, Node.js, PHP, Python, and Ruby. I have an example of a couple of things I wrote with the SDK: one is Node.js and one is Ruby, and they're the exact same script, calling AWS Rekognition to detect labels, just to show you how similar the SDK is across different languages; more or less the syntax is going to be the same.

Before we can use the SDK (or the CLI), we have to do a little bit of work beforehand and enable programmatic access for the user we want to use these developer tools with. When you turn on programmatic access for a user, you get an access key and a secret, and with those you can utilize these services; down below you can see I have an access key and secret generated. Once you have these, you need to store them somewhere, and that place is a hidden directory called .aws in your user's home directory, in a file called credentials. Down below I have an example of a credentials file. You'll see we have default credentials; if we use the CLI or SDK without specifying anything, it uses those by default. But if we're working with multiple AWS accounts, we'll end up with multiple sets of credentials, and we can organize them into something called profiles. I have one here for enterprise-d and one for deep-space-nine, and there's an example of what that file looks like just below. So now that we understand programmatic access, let's move on to the CLI.
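For reference, this is roughly the shape of that ~/.aws/credentials file; the profile names come from my example and the keys are fake placeholders in AWS's documented example format.

[default]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[enterprise-d]
aws_access_key_id     = AKIA0000000000EXAMPLE
aws_secret_access_key = 0000000000000000000000000000000000EXAMPLE

[deep-space-nine]
aws_access_key_id     = AKIA1111111111EXAMPLE
aws_secret_access_key = 1111111111111111111111111111111111EXAMPLE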
Hey, this is Andrew Brown from ExamPro, and we are going to do the CLI and SDK follow-along. So let's go over to IAM and create ourselves a new user so we can generate some AWS credentials. We're going to create a new user, give them programmatic access so we get a key and a secret, and I'm going to name this user spock. We'll go next and give them developer permissions, which here is the power user group; if you don't have that group, just go ahead and create it, name it developers, and select the PowerUserAccess policy. I already created it earlier in the IAM follow-along, so we'll skip over tags, go to review, and create that user. Now we get an access key ID and secret, and I want to hold on to these, so I'm going to copy them and paste them somewhere for the moment.

Now we need an environment where we can do a bit of coding and use the CLI, and the best place to do that in AWS is Cloud9. So we'll make our way over to Cloud9 and spin up a new environment. I still have these two other environments here that I can't seem to get rid of; they generally delete with very little trouble, but because I messed with their CloudFormation stacks they're sticking around. You won't have this problem. I'm going to create a new environment called spock-dev, go to the next step, ignore these warnings, and use the smallest instance, the t2.micro, which is on the free tier, with Amazon Linux. Cloud9 will actually spin the instance down after 30 minutes of non-use, so if you forget about it, it will automatically turn itself off, which is really nice. We'll go to the next step, get a summary, hit create environment, and wait a few minutes for it to start up.

So our Cloud9 environment is ready, and we have a terminal connected to an EC2 instance. The first thing I'm going to do is switch the UI theme to classic dark, which is a lot easier on my eyes. Next we want to plug in our credentials so we can start using the CLI. The CLI is already pre-installed on this instance, so if I type aws, it's already there; but let's quickly walk through how you would install it. I've pulled up a couple of docs on the installation process; since the CLI is already installed here, I'm not going to uninstall it just to reinstall it, but I'll walk you through the steps. The CLI requires either Python 2 or Python 3, and Amazon Linux has Python available; if I check the version here, this one has Python 3.6.8. You install the CLI using pip, which is how you install packages in Python. It could be pip or pip3 depending on your system: there used to be Python 2, and when Python 3 came out they needed a way to distinguish them, so it was called pip3, but now that Python 2 is no longer supported, pip3 is effectively just pip. So you may have to play around based on your system, but generally it's just pip install awscli, and that's all there is to it. Getting Python installed will vary by system: Amazon Linux is a CentOS/Red Hat flavor of Linux, so it uses yum to install Python, and for most Debian-style distributions it's going to be apt-get. The commands are summarized just below.
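For reference, the install steps boil down to something like this; the exact package manager and whether you need pip or pip3 can vary a little by distro, so treat it as a sketch.

# Check which Python you have (the CLI needs Python 2 or 3)
python3 --version

# Install the AWS CLI with pip (may be `pip` instead of `pip3` on your system)
pip3 install awscli --upgrade --user

# Confirm the CLI is on your PATH
aws --version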
So now that we know how to install the CLI, I'm going to type clear, and we're going to set up our credentials. They're probably already set up, because Cloud9 is very good at configuring everything you need, but we'll go through the motions anyway. Before we do that, I'm going to install one thing in Cloud9: via the Node package manager I'll install c9, which lets us open files from the terminal into the Cloud9 editor. First, go to your home directory by typing cd ~/, then do ls -la to list everything in the directory; we're looking for a directory called .aws. If you don't have it, just type mkdir .aws to create it, but it already exists for us because, again, Cloud9 is very good at setting things up.

In there we expect to see a credentials file containing our credentials, so I'll type c9 credentials to open it in the editor, and you can already see there's a set of credentials in there. I'm going to add a new profile down below called spock, copy and paste the access key and secret we generated earlier into it, and save. So now I have a second set of credentials in the credentials file, organized as a profile, which lets me switch between credentials.

Back in the terminal, I'll clear the screen and type aws s3 ls --profile spock, and that lists my buckets using Spock's credentials. Now, if we want to copy something down from S3, we use aws s3 cp with the bucket and object; I have a bucket called exampro-000 with an enterprise-d folder and a data.jpg file in it, and I know that path from memory. Before I run it, I cd back to my home directory. My first attempt complains, first because I'm missing the g on the end of jpg, and then because with cp you have to specify both a source and a destination, so I add data.jpg as the output file, and that downloads the image. If you want to do the same thing, you'll need to set up your own bucket in S3 with an image in it first; if I hop over to the S3 console, you can see the exampro-000 bucket, the enterprise-d folder, and the images I'm grabbing from. I'll just move the downloaded file up into my environment directory with mv so I can see it in the editor, and open it to preview it. So that's how you go about using the CLI with credentials; the cleaned-up commands are below.
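Putting the fumbling aside, here's the general shape of what we just ran; the bucket name and paths are from my account, so substitute your own.

# List buckets using the spock profile instead of the default credentials
aws s3 ls --profile spock

# Copy an object down from S3: source first, then destination
aws s3 cp s3://exampro-000/enterprise-d/data.jpg data.jpg --profile spock

# Move the downloaded file into the Cloud9 environment directory so it shows in the editor
mv data.jpg ~/environment/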
So now let's move on to the SDK and use our credentials to do something programmatically. Now that we know how to use the CLI and where to store our credentials, let's actually go do something with the SDK. I recently contributed to the AWS docs for Rekognition, so I figured we could pull some of that code and have some fun. Go to Google and type in AWS docs Rekognition, click through to Amazon Rekognition, and go to the developer guide in HTML; apparently the docs have a new look, there's always something new, and I'm not sure I like it, but we need to find that code. I think it's under detecting faces, and specifically detecting faces in an image. The code I contributed was the Ruby and the Node.js examples, so we can choose which one we want; I'm going to do the Ruby one, because I think that's more fun and it's my language of choice. I'll copy that code, go back to our Cloud9 environment, create a new file called detect_faces.rb, and paste the code in.

We're going to need to supply our credentials; generally you want to pass them in as environment variables, which is a fairly safe way to provide them, so we'll give that a go. But to get this working we first need a Gemfile, because we need some dependencies. So I'll create a new file called Gemfile, and in it we provide the aws-sdk-rekognition gem, plus a few boilerplate lines that every Gemfile needs, which I'll grab off screen. What this is going to do is install the AWS SDK for Ruby, but specifically just the Rekognition module; there's a sketch of that file right below.
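This is roughly what that Gemfile ends up looking like; the gem name is the real per-service gem for Rekognition, and the rest is standard Bundler boilerplate.

# Gemfile: pulls in just the Rekognition portion of the AWS SDK for Ruby
source 'https://rubygems.org'

gem 'aws-sdk-rekognition'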
I also have the AWS SDK docs open here, for Ruby, Node.js, Python, and so on, and they tell you how to install the gems. For Rekognition, I'll do a quick search, or sometimes it's easier to navigate on the left-hand side, and find the Rekognition module; the docs usually tell you which gem you need to install, and it's the one we're installing. Then you click through to the client and you get an idea of all the operations you can perform. When I originally needed to figure out how to write this, I actually read through the CLI reference, pieced it together, and looked at the output, so nothing too complicated there.

We have everything we need now, so make sure you're in your environment directory, which is that spock-dev environment: cd ~/environment, then ls -la to make sure the script and the Gemfile are there, and then run bundle install. That installs the dependencies; you can see it installed aws-sdk-core and aws-sdk-rekognition, so now we have everything we need to run the script.

The only thing left is to provide an input. We can point it at a specific bucket and key; there's a way to provide a local file too, and we did download one, but let's use the bucket. The bucket is exampro-000, and the key is the enterprise-d folder with data.jpg in it. We can pass the credentials via environment variables; we could hard-code them into the script, but that's a bit sloppy, so we'll go through the full motions of providing them through the environment. I paste in the access key ID as the first variable, then the secret access key, and then run the script with bundle exec. The first attempt complains, and it turns out I just needed to put the word ruby in front of the script name, so bundle exec ruby detect_faces.rb; my bad. The next attempt fails, and since we're only a power user I wondered whether we simply didn't have enough permissions, so I went off screen to check. After playing around a little and reading the Ruby SDK documentation, the actual problem turned out to be a leading forward slash in the key that shouldn't be there. Take that out, run the same thing again, and we get output back showing that it detected a face: we get the bounding-box coordinates, and if we used an additional tool we could draw that box over the image to show where the face was detected. There's some interesting information in there: it detected that the person in the image was male and happy, and estimated the face to be between ages 32 and 48. To be fair, Data is an android with a very unusual skin color, so it's hard to determine his age, but I'd say that's an acceptable range for the actor at the time, so it totally makes sense.

So there you go, that's the programmatic way of doing it. Now, you don't ever really want to store your credentials on your server, because you can always use IAM roles attached to EC2 instances, and that will safely provide credentials to those instances. But it's important to know how to use the SDK, and whenever you're developing on your local machine, or maybe in a Cloud9 environment, you are going to have to supply those credentials. I'll recap the final invocation just below.
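For reference, the invocation we ended up with looks roughly like this; the key values are placeholders in AWS's documented example format, and the bucket and key are from my account, so substitute your own.

# Provide credentials through the environment rather than hard-coding them in the script
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# Run the docs sample against an object in S3 (bucket: exampro-000, key: enterprise-d/data.jpg)
bundle exec ruby detect_faces.rb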
Now that we're done with our AWS CLI and SDK follow-along, let's do some cleanup. I'm going to close the Cloud9 tab, go over to Cloud9, and delete that environment. Honestly, it's not going to be bad for you to have it hanging around; it won't cause you any problems and it shuts itself down, but if we don't need it, we might as well get rid of it. So I'll type the word delete, and hopefully this one deletes cleanly as long as I don't fiddle with its security groups before it has a chance to finish. Then we'll go over to our IAM user; since those credentials aren't being used anymore, what we really want to do is expire them, but I'm actually going to go ahead and delete the user entirely so they're 100 percent gone. And that's all the cleanup we had to do.

On to the AWS CLI and SDK cheat sheet, so let's jump into it. CLI stands for command line interface, and SDK stands for software development kit. The CLI lets you interact with AWS from anywhere by simply using a command line. The SDK is a set of API libraries that let you integrate AWS services into your applications. Programmatic access must be enabled per user via the IAM console to use the CLI or SDK. The aws configure command is used to set up your AWS credentials for the CLI. The CLI is installed via a Python script. Credentials get stored in a plain-text file, so whenever possible use roles instead of AWS credentials; I did have to put that in there. The SDK is available for the following programming languages: C++, Go, Java, JavaScript, .NET, Node.js, PHP, Python, and Ruby. For the Solutions Architect Associate they're probably not going to ask you questions about the SDK, but for the Developer Associate they definitely will, so just keep that in mind.

Hey, this is Andrew Brown from ExamPro, and we are looking at the Domain Name System, abbreviated as DNS, which you can think of as the phonebook of the internet. DNS translates domain names to IP addresses so browsers can find internet resources. So again, domain name servers are a service which handles converting a domain name, such as exampro.co, into a routable Internet Protocol address. Here we have an example of an IPv4 address, and this is what allows your computer to find specific servers on the internet automatically, depending on what domain name you browse. Here we can see it again: we have example.co, DNS looks up the domain name and determines that it should go to this IP address, which in turn points at this server. That's the process.

Next we need to understand the concept of the Internet Protocol, also known as IP. IP addresses are what uniquely identify computers on a network and allow communication between them. IP addresses come in two variations: IPv4, which you're probably most familiar with because it's been around longer, and IPv6, which looks a bit unusual but definitely has benefits. IPv4 is a 32-bit address space, which determines the number of available addresses, and the issue with IPv4 is that we're running out of addresses, because there's a limit to how many can be written this way. To combat that, IPv6 was introduced: it uses a 128-bit address space and has up to 340 undecillion potential addresses.
So basically they've invented a way that we will not run out of addresses. This is what an IPv6 address looks like: it's big and long, and not as easy to read as an IPv4 address, but we're never going to run out of them, so in the future we're going to see it implemented more. You can definitely use IPv6 on AWS as well as IPv4.

Domain registrars are the authorities who have the ability to assign domain names under one or more top-level domains. If you're wondering what some common registrars are, we have them listed down below: you've probably seen Hostgator, GoDaddy, Domain.com, and Namecheap before, and AWS is its own domain registrar through Route 53. Domains get registered through InterNIC, a service provided by the Internet Corporation for Assigned Names and Numbers (ICANN), which enforces the uniqueness of domain names all over the internet. When you register a domain name, it can be found publicly in the central WHOIS database, so if you've ever wondered who owns a domain, there's a good chance you could type it into WHOIS and find the registrant's contact information. You can pay extra (or, in the case of Route 53, I don't think there's any additional charge) to keep that information private; but if you have a registered domain name and wonder why somebody is calling you out of the blue, maybe they looked you up there.

Now we're looking at the concept of top-level domains. If you've ever typed in a domain name and wondered what that .com is, that is the top-level domain, and some domains also have second-level domains: in the example of .co.uk, the .co is the second-level domain. Top-level domains are controlled by the Internet Assigned Numbers Authority (IANA), so any time there are new ones, they're the number-one authority on them. These domains are managed by different organizations, and it might surprise you that there are hundreds upon hundreds of top-level domains you may never have heard of, because companies are just sitting on them: Disney has .abc, there's a .academy, and AWS has its own, which is .aws.

So when you have a domain name, you're going to have a bunch of records that tell it what to do, and one that's very important and absolutely required is the SOA, the start of authority. The SOA is a way for domain admins to provide additional information about the domain, such as how often it's updated, the admin's email address, and, if there's a failure responding from the master, how many seconds to wait before failing over to the secondary name server. So it can contain a bunch of information, which you can see on the right-hand side; you don't necessarily have to provide every field, but those are the options, and it all comes in the format of one big long string. You can only have one SOA record within a single zone; you can't have more than one. There's an example of what one looks like just below.
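Here's an example SOA record in standard zone-file notation, just to show the shape of that one big string; the host names, email, and timer values are placeholders.

example.com.  IN  SOA  ns1.example.com. admin.example.com. (
    2020010101   ; serial number, bumped whenever the zone is updated
    7200         ; refresh: how often secondaries check for changes (seconds)
    900          ; retry: how long to wait after a failed refresh
    1209600      ; expire: when secondaries stop answering if the master is unreachable
    86400 )      ; minimum / negative-caching TTL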
So now we're going to take a look at address records. A records are one of the most fundamental types of DNS records, and the idea is that they convert a domain name directly into an IP address. So if you had testdomain.com and you wanted to point it at a specific IPv4 address, you'd use an A record. One more thing to note is that you can use an A record on a naked domain name, also called the root domain, which is when you don't have any www. or other subdomain in front.

Canonical names, also known as CNAMEs, are another fundamental DNS record, used to resolve one domain name to another rather than to an IP address. So if you wanted, for example, to send all your naked domain traffic to the www record, you could do that here; in the slide I'm pointing the naked domain at what is supposed to be the www domain, except for some reason I typed four w's, but you can't give me a hard time because at least the error is consistent. The takeaway: A records point at IP addresses, and CNAMEs point at domain names.

Besides the SOA, the second most important records are the name server (NS) records. They're used by top-level domain servers to direct traffic to the DNS server that contains the authoritative DNS records; if you don't have these records, your domain name can't do anything. Typically you'll see multiple name servers provided for redundancy: GoDaddy gives you two, AWS gives you four, and the more the merrier. You can see an example down below: if you're managing your DNS records with Route 53, the NS records for the domain would point to AWS name servers, and there are four of them, one ending in .com, one in .net, one in .org, and one in .co.uk, so there's a lot of redundancy there.

Now I want to talk about the concept of TTL, time to live, which is the length of time a DNS record gets cached on resolving servers or on the user's own local machine. The lower the TTL, the faster changes to DNS records propagate across the internet. TTLs are measured in seconds. If that's not super clear yet, it will make more sense as we go; there's a small example of these record types below.
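Here's a tiny illustrative zone snippet pulling those record types together; the IP address, host names, and 300-second TTLs are placeholders.

testdomain.com.        300  IN  A      203.0.113.10          ; A record: name -> IP address
www.testdomain.com.    300  IN  CNAME  testdomain.com.       ; CNAME: name -> another name
testdomain.com.        300  IN  NS     ns-123.awsdns-45.com. ; NS: who is authoritative for the zone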
So it's time to wrap up DNS with another cheat sheet, so let's get to it. The Domain Name System (DNS) is an internet service that converts domain names into routable IP addresses. There are two types of Internet Protocol: IPv4, which is a 32-bit address space with a limited number of addresses, and IPv6, which is a 128-bit address space with an effectively unlimited number of addresses; we have examples of both. Then we talked about top-level domains, which are the last part of a domain, like .com, and second-level domains, which don't always exist but are usually the second-to-last part, so in .co.uk it's the .co. Then we have domain registrars, the third-party companies you register domains through, and name servers, the servers which contain the DNS records of the domain. Then we have some records of interest: the SOA, which contains information about the DNS zone and its associated DNS records; A records, which directly convert a domain name into an IP address; CNAME records, which let you convert a domain name into another domain name; and TTLs, the time that a DNS record will be cached for, where a lower time means changes propagate faster. And there you go.

Hey, this is Andrew Brown from ExamPro, and we are looking at Route 53, which is a highly available and scalable domain name service. Whenever you think about Route 53, the easiest way to remember what it does is to think of GoDaddy or Namecheap, which are both DNS providers; the difference is that Route 53 has more synergies with AWS services, so you get a lot more rich functionality than you would with one of those other providers. What can you do with Route 53? You can register and manage domains, create various record sets on a domain, implement complex traffic flows such as blue/green deploys or failovers, continuously monitor records via health checks, and resolve DNS queries between your VPCs and networks outside of AWS.

Here I have a use case, and this is actually how we use it at ExamPro: we have our domain name, which you can purchase through Route 53 or have Route 53 manage the name servers for, which then lets you set your record sets within Route 53. We have a bunch of record sets for subdomains, and we want those subdomains to point to different resources on AWS. Our app runs behind an Elastic Load Balancer; if we need to work on an AMI image, we could launch a single EC2 instance and point a subdomain there; for our API, if it were powered by API Gateway, we could use a subdomain for that; for static website hosting, we'd probably point www. at a CloudFront distribution; and for fun and learning we might run a Minecraft server on a specific elastic IP (elastic because we wouldn't want it to change) at minecraft.exampro.co. So that's a basic example, but we're going to jump into all the different complex routing we can do with Route 53.

In the previous use case we saw a bunch of subdomains pointing to AWS resources; how do we create that link so Route 53 points to those resources? By creating record sets. Here I have the form for record sets, so you can see the types of records you can create, and it's very simple: you fill in your subdomain (or leave the naked domain), then choose the type. In the case of an A record, you can point the subdomain at a specific IP address; you just fill it in, and that's all there is to it. Now, I do need to make note of this alias option, which is a special option created by AWS. In the next slide we've set alias to true, and what it allows us to do is directly select specific AWS resources: CloudFront, Elastic Beanstalk, ELB, S3, VPC endpoints, or API Gateway. Why would you want this over a traditional record type? The idea is that an alias has the ability to detect changes of IP addresses, so it continuously keeps pointing the record at the correct resource. So if and whenever you can use an alias, always use an alias, because it just makes it easier to manage the connections between resources via Route 53 records.
That's it, and the limitations are listed here on the slide. The major advantage of Route 53 is its seven types of routing policies, and we're going to go through every single one so we understand the use case for all seven. Before we get into that, a really good way to visualize these routing policies is through traffic flow. Traffic flow is a visual editor that lets you create sophisticated routing configurations within Route 53. Another advantage of traffic flow is that we can version these policy records: if you created a complex routing policy and wanted to change it tomorrow, you could save it as version one, version two, and roll one out or roll back to the other. Playing around with traffic flow does cost a few dollars per policy record (this whole diagram is one policy record), but they don't charge you until you actually create it, so if you want to experiment, just create a new traffic flow, name it, and you'll get the visual editor; you don't pay until you save it. It's a nice way to get an idea of all the different routing rules and how you can come up with creative solutions. Now that we've covered traffic flow and we know there are seven routing rules, let's go deep and look at what we can do.

Our first routing policy is the simple routing policy, which is also the default. When you create a record set (here I have one called random, with the A type), down below you'll see the routing policy box, which is set to simple by default. So what can we do with simple? The idea is that you have one record, and you can provide either a single IP address or multiple IP addresses. If it's a single address, the record goes to that IP address every single time; if you have multiple, it picks one at random, which could be a quick way to do something like A/B testing. It's as simple as that.

Next we're looking at weighted routing policies, and what a weighted routing policy lets you do is split up traffic based on the different weights assigned. Down below we have app.example.co, and we'd create two record sets in Route 53 with the exact same name, both set to weighted, with two different weights: one named stable at 85 percent, and a second record with the exact same subdomain named experiment at 15 percent. The idea is that whenever traffic hits app.example.co, Route 53 looks at the two weights and sends 85 percent of it to the stable record and 15 percent to the experimental one. A good use case is sending a small amount of traffic to a new version to minimize impact when you're testing experimental features; there's a small illustration of that pair of records below.
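Just to picture those two weighted records side by side, here's an illustrative layout; the names, weights, and target IPs are placeholders from the example.

Name             Type  Routing policy  Set ID       Weight  Value / target
app.example.co   A     Weighted        stable       85      203.0.113.10
app.example.co   A     Weighted        experiment   15      203.0.113.20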
Now we're going to take a look at latency-based routing, which allows you to direct traffic based on the lowest possible network latency for your end user, based on region. The idea is, let's say people want to hit app.exampro.co and they're coming from Toronto. We've created two latency records for this subdomain: one pointing to us-west, on the west coast, and one pointing to ca-central, which I believe is located in Montreal. Route 53 looks at these and asks which one produces the least amount of latency; it doesn't necessarily have to be the closest one geographically, just whichever returns the lowest time in milliseconds. In this case it's 12 milliseconds, and logically things that are closer by should be faster, so it routes the traffic to this load balancer as opposed to the other one. That's how latency-based routing works.

Now we're looking at another routing policy, this one for failover. Failover allows you to create an active/passive setup in situations where you want a primary site in one location and a secondary disaster recovery site in another. Route 53 automatically monitors your primary site via health checks to determine whether that endpoint is healthy, and if it determines the endpoint is in a failed state, all the traffic is automatically redirected to the secondary location. In the example below we have app.example.co with a primary location and a secondary one; Route 53 checks, determines the primary is unhealthy based on a health check, and reroutes the traffic to the secondary location. You create two record sets with the exact same domain and just set which one is primary and which is secondary; it's that simple.

Here we're looking at the geolocation routing policy, which allows you to direct traffic based on the geographical location the request originates from. Down below we have a request from the US hitting app.exampro.co, and we have a record set with geolocation set to North America; since the US is in North America, the request goes to that record set. As simple as that.

Next is the geoproximity routing policy, which is probably the most complex routing policy, and it's a bit confusing because it sounds a lot like geolocation, but it's not, as we'll see shortly. You cannot create it using ordinary record sets; you have to use traffic flow, because it's a lot more complicated and you need to see visually what you're doing, so it will be crystal clear once we walk through it. The idea is that you choose a region, either one of the existing AWS regions or your own set of coordinates, and you give it a bias around that location, and Route 53 draws boundaries. If we created geoproximity routing for these regions, this is what it would look like; if we give one region more bias, its boundary, which was a bit smaller, gets larger, and if we reduce the bias, the boundary shrinks. That's the idea behind geoproximity: you have these boundaries. Looking at it in more detail, you can set as many regions or points as you want; here I just have two as an example, and I have China chosen over here.
And it looks like we have Dublin chosen as well, so that gives you a simple example. Here's a really complicated one where I chose every single region, just so you can see how the map gets split up; you can choose as few or as many as you want, and you can also give it custom coordinates. Here I chose Hawaii: I looked up the coordinates, plugged them in, and turned the bias down to 80 so the boundary would sit right around it (I could have honed it in more). So that should give you a clear picture of how geoproximity works: it really is boundary-based, and you have to use traffic flow for it.

The last routing policy we're going to look at is multivalue, and multivalue is exactly like the simple routing policy; the only difference is that it uses a health check. The idea is that if it picks one record at random and that endpoint is unhealthy, it just picks another one at random. That's the only difference between multivalue and simple.

Another really powerful feature of Route 53 is the ability to do health checks. The idea is that you can create a health check, say for app.exampro.co, and it will check on a regular basis whether the endpoint is healthy, which is a good way to see at the DNS level if something's wrong with your instance, or to trigger a failover. Getting into the details: health checks run every 30 seconds by default and can be reduced to every 10 seconds; a health check can initiate a failover if the status returned is unhealthy; a CloudWatch alarm can be created to alert you of an unhealthy status; a health check can monitor other health checks to create a chain of reactions; you can have up to 50 health checks in a single AWS account; and the pricing is pretty affordable, about 50 cents per AWS endpoint per month, with some additional features at roughly a dollar per feature.

If you're using Route 53, you might wonder how to route traffic to your on-premises environment, and that's where Route 53 Resolver comes into play, formerly known as the .2 resolver. Resolver is a regional service that lets you route DNS queries between your VPCs and your network, so it's a tool for hybrid environments, on-premises and cloud, and you have the option of inbound and outbound, inbound only, or outbound only. That's all you really need to know about it, and that's how you do hybrid DNS.

So now we're taking a look at the Route 53 cheat sheet, and we're going to summarize everything we've learned. Route 53 is a DNS provider used to register and manage domains and create record sets; think GoDaddy or Namecheap. There are seven different types of routing policies, starting with the simple routing policy, which lets you provide a single IP address or multiple IP addresses and picks an endpoint at random; then weighted routing, which splits up traffic based on weights assigned as percentages; latency-based routing, which routes traffic to the region with the lowest possible latency for the user (not necessarily the closest geographic location, just the lowest latency); and failover routing, which uses a health check.
With failover you set a primary and a secondary, and it fails over to the secondary if the primary's health check fails. Then you have geolocation, which routes traffic based on the geographical location the request comes from, so something like North America or Asia. Then you have geoproximity routing, which can only be done in traffic flow and lets you set biases, so you basically get a map of boundaries based on the regions or coordinates you've chosen. And you have multivalue answer, which is identical to simple routing, the only difference being that it uses a health check. We looked at traffic flow, a visual editor for routing policies where you can version those policy records for easy rollback. We have alias records, AWS's smart DNS record, which detects IP changes for AWS resources and adjusts to them automatically; you always want to use an alias record when you have the opportunity. You have Route 53 Resolver, the hybrid solution that lets you connect your on-premises network and the cloud so DNS works between them. And then you have health checks, which can be created to monitor endpoints and automatically fail over to another endpoint, and health checks can monitor other health checks to create a chain of reactions for detecting issues.

Hey, this is Andrew Brown from ExamPro, and we are looking at Elastic Compute Cloud, EC2, which is a cloud computing service: choose your OS, storage, memory, and network throughput, then launch and SSH into your server within minutes. So, on to the introduction to EC2. EC2 is a highly configurable server; it's resizable compute capacity, it takes minutes to launch new instances, and just about anything and everything in AWS uses EC2 instances underneath, whether it's RDS, ECS, or Systems Manager; I strongly believe that at AWS they're all running on EC2. We said it's highly configurable, so what are some of the options? You get to choose an Amazon Machine Image, which determines your OS, whether you want Red Hat, Ubuntu, Windows, Amazon Linux, or SUSE. Then you choose your instance type, which determines how much memory you get versus CPU, and instances can get very large: here's one server that costs five dollars a month, and here's one that's a thousand dollars a month with 36 CPUs, 60 gigabytes of memory, and 10-gigabit network performance. Then you add your storage, so you could attach EBS or EFS, with different volume types. And then you configure your instance: you secure it, get your key pairs, and set up user data, IAM roles, and placement groups, which we're all going to talk about shortly.

All right, so let's look at instance types and what their usage would be. Generally, when you launch an EC2 instance, it's almost always going to be in the T2 or T3 family; all those little acronyms represent different instance type families, so we have broad categories and then families of instances that are specialized. Starting with general purpose: a balance of compute, memory, and networking resources, very good for web servers and code repositories, so you're going to be very familiar with this category. Then you have compute optimized instances.
These are ideal for compute-bound applications that benefit from high-performance processors; as the name suggests, they have more computing power, so think scientific modeling, dedicated game servers, and ad serving engines, and notice they all start with C, which makes them a little easier to remember. Then you have memory optimized, which, as the name implies, has more memory on the server: fast performance for workloads that process large data sets in memory, with use cases like in-memory caches, in-memory databases, and real-time big data analytics. Then you have accelerated optimized instances, which utilize hardware accelerators or co-processors; they're good for machine learning, computational finance, seismic analysis, and speech recognition, so a lot of really cool future tech uses accelerated instances. And then you have storage optimized, for high, sequential read and write access to very large data sets on local storage; use cases might be NoSQL databases, in-memory or transactional databases, or data warehousing. How important is it to know all these families? Not very important for the associate track; at the professional track you will need to know the specific families, but here all you need to know are these general categories, roughly which families fit where, and their general purposes.

Within each family of EC2 instance types, like the T2 family here, we have different sizes: small, medium, large, extra large, and so on. I just want to point out that, generally, the way the sizing works is that each size gives you roughly double the previous one; I say generally because it does vary, but the price is almost always double. So from small to medium the RAM has doubled and the CPU has doubled; from medium to large it isn't exactly double on every attribute, but the CPU has doubled and the price has pretty much doubled. So the general rule is: if you're wondering when you should upgrade, once you need double of what you currently have, you're better off just going to the next size.

Next we're going to look at a concept called the instance profile, which is how your EC2 instances get permissions. Instead of embedding your AWS credentials, your access key and secret, in your code so the instance has permission to access certain services, you can attach a role to an instance via an instance profile. The concept is: you have an EC2 instance, you have an instance profile, which is just the container for a role, and then you have the role that actually holds the permissions. And I do need to point out that whenever you have the chance to not embed AWS credentials, you should never embed them; that's basically a hard rule with AWS, and any time you see an exam question touching on that, remember it. The way you set an instance profile on an EC2 instance, if you're using the launch wizard, is the IAM role field: you choose or create a role and attach it. The thing people don't see is the instance profile itself, because it's kind of an invisible step; if you're using the console, it's created for you, but if you're doing this programmatically, through CloudFormation for example, you'd actually have to create the instance profile yourself.
So sometimes people don't realize that this thing exists. Now we're going to take a look at placement groups. Placement groups let you choose the logical placement of your instances to optimize for communication, performance, or durability; they're absolutely free and they're optional, so you don't have to launch your EC2 instances within a placement group, but you get some benefits depending on your use case. First, cluster: a cluster packs instances close together inside an AZ, which is good for low-latency network performance for tightly coupled node-to-node communication, so when you want servers really close together so communication is super fast; they're well suited for high-performance computing (HPC) applications, but clusters cannot be multi-AZ. Then you have partition: partition placement groups spread instances across logical partitions, and each partition does not share underlying hardware, so each partition is effectively on its own rack. They're well suited for large distributed and replicated workloads such as Hadoop, Cassandra, and Kafka, because those technologies use partitions themselves, and now you have physical partitions to match. Then you have spread, which is when each instance is placed on a distinct rack; that's for when you have critical instances that should be kept separate from each other. You can spread a maximum of seven instances (per AZ), and spread groups can be multi-AZ, whereas clusters cannot. So there you go.

User data is a script which will automatically run when launching an EC2 instance, and this is really useful when you want to install packages, apply updates, or do anything else at launch. When you go through the EC2 wizard, there's an advanced details step where you can provide a bash script to do whatever you'd like; here I have it installing Apache and then starting that server. If you were logged into an EC2 instance and didn't know whether a user data script had run at launch, you could curl the address 169.254.169.254 from within that instance, and the user-data path will return whatever script was run. That's just good to know; user data scripts are very useful and I think you will be using one.

Metadata is additional information about your EC2 instance which you can get at runtime. If you SSH into your EC2 instance and run that curl command with latest/meta-data on the end, you get all this information: things like the current public IP address, the AMI ID that was used to launch the instance, or the instance type. The idea is that, because you can get at this programmatically, you could combine user data and metadata in a bash script to perform all sorts of advanced operations at launch. So metadata is quite useful and great for debugging; there's a small example of both below.
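Here's a minimal sketch of the two ideas side by side; the user data script assumes an Amazon Linux style instance with yum, so adjust the package manager for other distros.

#!/bin/bash
# User data sketch: install and start Apache at launch
yum install -y httpd
systemctl enable httpd
systemctl start httpd

# From inside the running instance, the metadata service answers on a fixed local address:
curl http://169.254.169.254/latest/user-data                 # echoes back the script that ran at launch
curl http://169.254.169.254/latest/meta-data/                # lists the available metadata paths
curl http://169.254.169.254/latest/meta-data/instance-type
curl http://169.254.169.254/latest/meta-data/public-ipv4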
So it's time to look at the EC2 cheat sheet, so let's jump into it. Elastic Compute Cloud (EC2) is a cloud computing service: you configure your EC2 instance by choosing your storage, memory, network throughput, and other options, then launch and SSH into your server within minutes. EC2 comes in a variety of instance types specialized for different roles: general purpose, a balance of compute, memory, and network resources; compute optimized, which, as the name implies, gives you more computing power and is ideal for compute-bound applications that benefit from high-performance processors; memory optimized, for fast performance on workloads that process large data sets in memory; accelerated optimized, which uses hardware accelerators or co-processors; and storage optimized, for high, sequential read and write access to very large data sets on local storage. Then you have the concept of instance sizes: sizes generally double in price and key attributes, so if you're ever wondering when it's time to upgrade, think: once you need double of what you have, it's time to move up a size. Then you have placement groups, which let you choose the logical placement of your instances to optimize communication, performance, or durability; placement groups are free, and it's not so important to remember the individual types, because I don't think they'll come up on the Solutions Architect Associate. Then we have user data, a script that will automatically run when launching an EC2 instance, and metadata, which is data about the current instance; you can access metadata via a local endpoint when SSH'd into an EC2 instance, using that curl command against the metadata address, and it can tell you things like the instance type or current IP address. The last thing is the instance profile, a container for an IAM role that's used to pass role information to an EC2 instance when the instance starts. So there you go, that's EC2.

Now we're going to take a look at the EC2 pricing models. There are four ways we can pay for EC2: on-demand, spot, reserved, and dedicated, and we're going to go through each one to see where it fits. First, on-demand pricing: whenever you launch an EC2 instance, it uses on-demand by default, and on-demand has no up-front payment and no long-term commitment; you're only charged by the hour or by the second, and the rate varies by instance type. You might wonder what the use case is: on-demand is for applications where the workload is short-term, spiky, or unpredictable, so when you have a new app in development or you just want to run an experiment, on-demand is a good fit.

Next we're looking at reserved instances, also known as RIs, which give you the best long-term savings and are designed for applications with steady-state, predictable usage, or that require reserved capacity. What you're doing is telling AWS you're committing to use this capacity over a period of time, and in exchange they give you savings. The reduced pricing is based on three variables: term, class offering, and payment option, and we'll walk through each to see how they work. For class offerings we have standard, convertible, and scheduled. Standard gives us the greatest savings, up to 75 percent off compared to on-demand; the catch is that you cannot change the RI attributes, attributes being things like the instance type.
So whatever you have, you're stuck with it. Now, if you need a bit more flexibility, because you might need room to grow in the future, you'd look at convertible: the savings aren't as great, up to around 54 percent, but now you have the ability to, say, change your instance type to a larger size (you can't go smaller, but you can always go larger), so you get some flexibility there. Then there's scheduled, which is when you need a reserved instance for a specific time period; this could be the case where you always have a predictable workload every Friday for a couple of hours, and by telling AWS you'll be running it on a schedule, they'll give you savings that vary.

The other two variables are term and payment options. The term is how long you're willing to commit, a one-year or three-year contract; the longer the term, the greater the savings. For payment options you have all upfront, partial upfront, and no upfront. No upfront is the most interesting one, because you could say you're going to use the server for a year and just pay at the end of each month, and still get savings right off the bat; a lot of people don't seem to know that. So you mix those three variables together and that determines the final price, and I have a graphic here showing how selecting different options changes the estimated cost. A couple more things to know about reserved instances: they can be shared between multiple accounts within a single organization, and unused RIs can be sold in the Reserved Instance Marketplace, so if you buy into a contract you're not fully out of luck, because you can always try to resell it to somebody else who might want it. So there you go.

Now we're taking a look at spot instances, which have the opportunity to give you the biggest savings, with up to a 90 percent discount compared to on-demand pricing. There are some caveats, though. AWS has all this unused compute capacity, and they want to maximize the utility of their idle servers; it's no different from a hotel offering discounts to fill vacant suites or an airline offering discounts to fill vacant seats. There are EC2 instances just lying around, and it's better to give people discounts than to let them sit idle. The caveat is that when you use spot instances, if another customer who is willing to pay the higher on-demand price needs that capacity, your instance can be terminated at any given time; that's the trade-off. Looking at the termination conditions down below: instances can be terminated by AWS at any time; if your instance is terminated by AWS, you don't get charged for the partial hour of usage, but if you terminate the instance yourself, you will still be charged for any hour in which it ran. So that's the little caveat. What would you use spot instances for, if they can be interrupted at any time? They're designed for applications that have flexible start and end times, or applications that are only feasible at very low compute costs. You can see I pulled out the configuration graphic you get when you request spot instances.
Looking at the spot request configuration screen, it asks whether this is for load balancing workloads, flexible workloads, big data workloads, or defined-duration workloads, so you can see AWS gives you some definitions of the kinds of utility you'd get from spot. So there you are.

Next we're taking a look at dedicated hosts, which are the most expensive option in the EC2 pricing models. They're designed to meet regulatory requirements, or for when you have strict server-bound licensing that won't support multi-tenancy or cloud deployments. To really understand dedicated hosts, we need to understand multi-tenant versus single-tenant. Whenever you launch an EC2 instance and choose on-demand or any of the other options besides dedicated hosts, it's multi-tenant, meaning you're sharing the same hardware as other AWS customers, and the only separation between you and them is virtualized isolation, which is software. Then you have single-tenant, where a single customer has dedicated hardware and customers are separated through physical isolation. To compare the two, think of multi-tenant as everyone living in an apartment building, and single-tenant as everyone living in their own house. So why would you want your own dedicated hardware? Large enterprises and organizations may have security concerns or obligations about sharing hardware with other AWS customers, and it really just boils down to that. Dedicated hosts come in an on-demand flavor and a reserved flavor, where you can save up to 70%, but overall dedicated hosts are much more expensive than the other EC2 pricing options.

Now we're on to the EC2 pricing cheat sheet; it's a two-pager, but we'll make our way through it. EC2 has four pricing models: on-demand, spot, reserved instances (also known as RIs), and dedicated. Looking first at on-demand: it requires the least commitment from you, it's low cost and flexible, and you pay per hour. The use cases are short-term, spiky, or unpredictable workloads, or first-time applications; it's ideal for workloads that cannot be interrupted, whereas spot is for workloads that can be interrupted, and we'll get to that shortly. On to reserved instances: you can save up to 75% off, and they give you the best long-term value. The use case is steady-state or predictable usage. You can resell unused reserved instances in the Reserved Instance Marketplace, and the reduced pricing is based on three variables: term, class offering, and payment option. For terms, we have a one-year or three-year contract. For payment options, we can pay all upfront, partial upfront, or no upfront. And we have three class offerings: standard, convertible, and scheduled. Standard gets us up to 75% reduced pricing compared to on-demand, but you cannot change the RI attributes, meaning if you want to change to a larger instance type it's not possible; you're stuck with what you have. If you want more flexibility there's convertible, where you get up to 54% off and you keep that flexibility, as long as the new RI attributes are of equal or greater value. Then you have scheduled, which is reserved instances for specific time periods, so maybe you want to run something once a week for a few hours, and the savings there vary.
Now on to our last two pricing models. We have spot pricing, which is up to 90% off and gives you the biggest savings; what you're doing is requesting spare compute capacity, so as we said earlier, it's like the hotel filling vacant suites. If you're comfortable with flexible start and end times, spot pricing is going to be good for you. The use case is anything that can handle interruptions, servers randomly stopping and starting, so non-critical background jobs are a very good fit. Instances can be terminated by AWS at any time; if your instance is terminated by AWS, you won't be charged for that partial hour of usage, but if you terminate the instance yourself, you will be charged for any hour that it ran. And the last is dedicated hosts, the most expensive option: it's just dedicated servers. It can be utilized as on-demand or reserved, where you can save up to 70% off, and the use case is when you need a guarantee of isolated hardware, so think enterprise requirements. So there you go, we made it all the way through EC2 pricing.

Hey, this is Andrew Brown from ExamPro, and we are looking at Amazon Machine Images, or AMIs, which are templates used to configure new instances. An AMI provides the information required to launch an instance, and you can turn your EC2 instances into AMIs so that, in turn, you can create copies of your servers. An AMI holds the following: a template for the root volume of the instance, which is either an EBS snapshot or an instance store template, and that contains your operating system, your application server, your applications, everything that makes up what you want your AMI to be; launch permissions, which control which AWS accounts can use the AMI to launch instances; and a block device mapping, which specifies the volumes to attach to the instance when it's launched. In the diagram you can see the physical representation: an EBS snapshot is registered to an AMI, and then you can launch that AMI, or copy an AMI to make another AMI. AMIs are region-specific, and we'll get into that shortly. I just wanted to talk about the use cases of AMIs and how I utilize them. AMIs help you keep incremental changes to your OS, application code, and system packages. Say you have a web application or web server and you create an AMI of it with some things already installed. Later you have to come back and install Redis because you want to run something like Sidekiq, or you need to install ImageMagick for image processing, or you need the CloudWatch agent because you want to stream logs from your EC2 instance to CloudWatch. That's where you'd create those revisions, and you keep track of them through the names. AMIs are also commonly utilized with Systems Manager Automation, a service which will routinely patch AMIs with security updates and then bake new AMIs so you can quickly launch them. That ties into launch configurations: when you're dealing with auto scaling groups, they use launch configurations, and a launch configuration has to have an AMI.
So when you attach an AMI to a launch configuration and you update the launch configuration on your auto scaling group, it's going to roll those updates out to all of those instances. That's just to give you a bigger picture of how AMIs tie into the AWS ecosystem. I also quickly wanted to show you the AWS Marketplace. The Marketplace lets you purchase subscriptions to vendor-maintained AMIs; there can be free ones in there as well, but generally they're paid and come at an additional cost on top of your EC2 instance. So here, if you wanted to use Microsoft's deep learning AMI, you'd have to pay whatever it charges per hour. Generally, people are purchasing security-hardened AMIs from the Marketplace; those are very popular. Say you had to run Amazon Linux and you wanted it to meet the requirements of CIS Benchmark Level 1: there it is in the Marketplace, and it only costs $0.02, so about two cents per hour, or roughly $130 per year. Just wanted to highlight that. When you're creating an AMI, you can create it from an existing EC2 instance that is either running or stopped, and all you have to do is drop down Actions, go to Image, and choose Create Image; that's all there is to it.

Now let's look at how we go about choosing our AMI. AWS has hundreds of AMIs you can search and select from. They have something called community AMIs, which are free AMIs maintained by the community, and then there's the AWS Marketplace, which has free or paid AMIs maintained by vendors. The screen in front of me is where you'd actually go select an AMI, and I wanted to show you something interesting. Take an AMI, say Amazon Linux 2: if you look at it in North Virginia and compare it to another region such as Canada Central, you'll notice there's some variation. Even though AMIs are nominally the same, they differ to meet the needs of each region. You can see that Amazon Linux 2 in North Virginia can be launched on x86 or Arm, but in Canada Central it's only the 64-bit x86 option. The way we can tell these AMIs are distinct is that they have AMI IDs, and they're not one-to-one: AMIs are region-specific and will have different AMI IDs per region. You can't just take an ID from one region and launch it in another region; there are some steps you have to go through to get an AMI into another region, and we'll talk about that. The most important thing here is that we have hundreds of AMIs to choose from, with some variation between regions. When choosing an AMI we have a lot of filter options available to narrow down what we're looking for: the OS, whether the root device type is EBS or instance store, whether it's available in all regions or the current region, or the architecture. AMIs are categorized as either backed by EBS or backed by instance store, and this is a very important attribute; you'll notice it in the bottom-left corner, and I wanted to highlight it because it matters. You can also make copies of your AMIs.
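To make that concrete, here's a minimal CLI sketch of turning an instance into an AMI and then copying that AMI into another region; the instance and AMI IDs below are placeholders, not values from the course.

# create an AMI from an existing (running or stopped) instance
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "my-server-000" \
  --description "web server with Apache installed"

# copy that AMI into another region (AMI IDs are region-specific)
aws ec2 copy-image \
  --source-region us-east-1 \
  --source-image-id ami-0123456789abcdef0 \
  --region ca-central-1 \
  --name "my-server-000-copy"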
Copying is really important when we're talking about AMIs because they are region-specific: the only way to get an AMI from one region to another is to use the copy command. You do Copy AMI, and then you have the ability to choose which region to send it to. So there we are.

Now we're on to the AMI cheat sheet, so let's jump into it. An Amazon Machine Image, also known as an AMI, provides the information required to launch an instance. AMIs are region-specific; if you need to use an AMI in another region, you can copy it into the destination region via the Copy AMI command. You can create AMIs from an existing EC2 instance that is either running or stopped. Then we have community AMIs, which are free AMIs maintained by the community, and AWS Marketplace AMIs, which are free or paid subscription AMIs maintained by vendors. AMIs have AMI IDs, and the "same" AMI, say Amazon Linux 2, will vary in both AMI ID and options (such as the available architectures) across regions; they are not exactly the same. An AMI holds the following information: a template for the root volume of the instance, either an EBS snapshot or an instance store template, containing the operating system, application server, and application data; launch permissions that control which AWS accounts can use the AMI to launch instances; and a block device mapping that specifies the volumes to attach to the instance when it's launched. So there's your AMI cheat sheet, and good luck.

Hey, this is Andrew Brown from ExamPro, and we are looking at auto scaling groups. Auto scaling groups let you set scaling rules which automatically launch additional EC2 instances, or shut instances down, to meet the current demand. Here's our introduction: an auto scaling group, abbreviated to ASG, contains a collection of EC2 instances that are treated as a group for the purposes of automatic scaling and management. Automatic scaling can occur via capacity settings, health check replacements, or scaling policies, which is going to be a huge topic. The simplest way to use an auto scaling group is to work with just the capacity settings, with nothing else set, and those are desired capacity, min, and max. Let's talk through those three settings. Min is how many EC2 instances should at least be running. Max is the most EC2 instances allowed to be running. Desired capacity is how many EC2 instances you ideally want to run. So when min is set to one and you have a new auto scaling group with nothing running, it will spin up one instance, and if that server dies for whatever reason, because it went unhealthy or just crashed, the group will always spin at least one back up. Then you have the upper cap, where in this example it can never go beyond two, because scaling could otherwise trigger more and more instances; max is a safety net to make sure you don't end up with lots and lots of servers running. Desired capacity is what you ideally want to run, and the ASG will try to get to that value, but there's no guarantee it will always be at that value. So that's capacity. Another way automatic scaling can occur in an auto scaling group is through health checks.
There are actually two health check types here: EC2 and ELB. Let's look at EC2 first. The idea is that when this is set, the group checks the EC2 instance to see if it's healthy, based on the two status checks that are always performed on EC2 instances. If either of them fails, the instance is considered unhealthy, and the auto scaling group will kill that EC2 instance; if your minimum capacity is set to one, it will then spin up a new EC2 instance. That's the EC2 type. Now the ELB type: here the health check is performed based on an ELB health check. The ELB performs its health check by pinging an endpoint on the server, over HTTP or HTTPS, and expecting a response; you can say, for example, "I want a 200 back from this specific endpoint." That's actually what people commonly do: if you have a web app, you might make a page called /health_check that should return a 200, and if it does, the instance is considered healthy. If that fails, the auto scaling group will kill that EC2 instance, and again, with a minimum of one, it will spin up a healthy new instance.

The final and most important way scaling gets triggered within an auto scaling group is scaling policies, and there are three different types. We'll start with the target tracking scaling policy, which maintains a specific metric at a target value. What does that mean? Well, you choose a metric type, say average CPU utilization, and a target value, say 75%. If the metric exceeds the target value, the policy tells the group to add another server. Whenever we're adding instances, we're scaling out; whenever we're removing instances, we're scaling in. (There's a quick CLI sketch of a target tracking policy right after this part.) The second type is the simple scaling policy, which scales when an alarm is breached: you create whatever alarm you want, choose it here, and tell the group to scale out by adding instances or scale in by removing them. This scaling policy is no longer recommended, because it's a legacy policy; there's a newer policy that's similar but more robust that replaces it. You can still use it, and it's still in the console, but it's not recommended. The one that replaces it is scaling policies with steps: the same concept, you scale when an alarm is breached, but it can escalate based on the alarm's value as it changes over time. Where before you had a single action, now you can say: if the alarm value is between one and two, add one instance; between two and three, add another; beyond three, add another again. So it helps you grow in steps as that alarm value changes.
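As promised, here's a minimal sketch of attaching a target tracking policy from the CLI; the auto scaling group and policy names are just placeholders.

# scale the group to keep average CPU utilization around 75%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-server-asg \
  --policy-name cpu-target-75 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":75.0}'

The predefined ASGAverageCPUUtilization metric with a target of 75 matches the example above: go over 75% average CPU and the group scales out, sit well below it and the group scales back in.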
Earlier I showed you that you can do health checks based on ELBs, but I wanted to show you how you'd actually associate that load balancer with an auto scaling group. We have classic load balancers, and then application and network load balancers, and there's a bit of variation in how you connect them depending on the load balancer, but it's pretty straightforward. In the auto scaling group settings there are two fields: Classic Load Balancers and Target Groups. For classic load balancers, you just select the load balancer and it's associated; it's as simple as that. With the newer load balancers, there's a target group that sits between the auto scaling group and the load balancer, so you associate the target group instead. And that's all there is to it.

To give you the big picture of what happens when you get a burst of traffic and auto scaling occurs, let's walk through the architectural diagram. Say we have a web server with one EC2 instance running, and all of a sudden we get a burst of traffic. That traffic comes in through Route 53, which points to our application load balancer; the application load balancer has a listener that sends the traffic to the target group, and our EC2 instance is associated with that target group. We have so much traffic that our CPU utilization goes over 75%, and because we had a target tracking scaling policy attached that said "anything above 75%, spin up a new instance," that's what the auto scaling group does. The way it does that is it uses the launch configuration attached to the auto scaling group and launches a new EC2 instance. That gives you full visibility into the entire pipeline and how it actually works.

So when an auto scaling group launches a new instance, how does it know what configuration to use? That's what a launch configuration is. When you have an auto scaling group, you set which launch configuration you want it to use, and creating a launch configuration looks a lot like launching a new EC2 instance: you go through and set all the options, but instead of launching an instance at the end, it just saves the configuration, hence the name. A couple of limitations around launch configurations that you need to know: a launch configuration cannot be edited once it's been created, so if you need to update or replace it, you either make a new one or use the convenient button to clone the existing configuration and make some tweaks. There's also something known as a launch template, which is a launch configuration with versioning; it's AWS's newer replacement for launch configurations. Generally, when something is new I might recommend you use it, but so far most of the community still uses launch configurations, and the benefit of versioning isn't huge, so I'm not pushing you to use launch templates. I just want you to know the difference, because it's a bit confusing: they look like pretty much the same thing, just with versions.

Now let's review the auto scaling group cheat sheet. An ASG is a collection of EC2 instances grouped together for scaling and management. Scaling out is when you add servers; scaling in is when you remove servers; scaling up is when you increase the size of an instance, for example by updating the launch configuration with a larger instance size. The size of an ASG is based on the min, max, and desired capacity. A target tracking scaling policy scales when the target value of a metric is breached,
for example when average CPU utilization exceeds 75%. A simple scaling policy triggers scaling when an alarm is breached. A scaling policy with steps is the newer version of the simple scaling policy; it allows you to create steps based on escalating alarm values. Desired capacity is how many instances you ideally want to run, and an ASG will always launch instances to meet the minimum capacity. Health checks determine the current state of an instance in an ASG, and they can be run against either an ELB or the EC2 instance itself. When an auto scaling group launches a new instance, it uses a launch configuration, which holds the configuration values for that new instance, for example the AMI, instance type, and role. Launch configurations cannot be edited and must be cloned, or a new one created, and launch configurations must be manually swapped in by editing the auto scaling group settings. So there you go, that's everything for auto scaling.

Hey, it's Andrew Brown from ExamPro, and we are looking at elastic load balancers, abbreviated to ELB, which distribute incoming application traffic across multiple targets such as EC2 instances, containers, IP addresses, or Lambda functions. Let's learn a little about what a load balancer is. A load balancer can be physical hardware or virtual software that accepts incoming traffic and then distributes that traffic to multiple targets. It can balance the load via different rules, and those rules vary based on the type of load balancer. For elastic load balancing, we actually have three load balancers to choose from, and we'll go into depth on each one; for now we'll just list them: the application load balancer, the network load balancer, and the classic load balancer. To understand the flow of traffic for ELBs, we need to understand the three components involved: listeners, rules, and target groups, and these vary based on the load balancer, as we'll see very shortly. Let's quickly summarize what they are and then see them in context with some visualization. First are listeners: they listen for incoming traffic and evaluate it against a specific port, whether that's port 80 or 443. Then you have rules, which decide what to do with traffic; that's pretty straightforward. Then you have target groups, which are a way of collecting all the EC2 instances you want to route traffic to into logical groups.

Let's first look at the application load balancer and network load balancer. On the right-hand side, traffic comes in through Route 53, which points to our load balancer. The traffic hits a listener, which checks what port it's arriving on: if it's on port 80, I have a simple rule that redirects it to port 443, so it goes to that listener, and the 443 listener has a rule attached that forwards it to target group one, which contains the EC2 instances. Down below you can see where the listeners are configured: I have a listener at 443, and for the application load balancer you can also attach an SSL certificate there. If you look over at the rules, these rules are not going to appear for the network load balancer, but they are going to appear for the ALB, and for the ALB I can have more complex rules.
If you're using an NLB, it simply forwards traffic to a target group; you don't get the richer rule options, and we'll show those richer options in a future slide. But let's talk about the classic load balancer. The classic load balancer is much simpler: traffic comes in and goes to the CLB, you have your listeners listening on their ports, and then you have registered targets. There are no target groups; you just have EC2 instances directly associated with the classic load balancer.

Let's take a deeper look at all three load balancers, starting with the application load balancer. The application load balancer, also known as the ALB, is designed to balance HTTP and HTTPS traffic. It operates at layer 7 of the OSI model, which makes a lot of sense because layer 7 is the application layer. The ALB has a feature called request routing, which allows you to add routing rules to your listeners based on the HTTP protocol; the rules we saw previously, which only exist for the ALB, are that request routing. You can also attach a Web Application Firewall to an ALB, which makes sense because they're both application-specific. If you want a use case for the application load balancer, it's great for web applications.

Now let's look at the network load balancer, which is designed to balance TCP and UDP traffic. It operates at layer 4 of the OSI model, the transport layer, and it can handle millions of requests per second while still maintaining extremely low latency. It can perform cross-zone load balancing, which we'll talk about later on. It's great for things like multiplayer video games, or whenever network performance is the most critical thing for your application.

Let's look at the classic load balancer. It was AWS's first load balancer, so it is a legacy load balancer. It can balance HTTP or TCP traffic, but not both at the same time. It can use layer-7-specific features such as sticky sessions, and it can also do strict layer 4 balancing for purely TCP applications; that's what I mean when I say it can do one or the other. It can perform cross-zone load balancing, which we'll cover later. And I put this in here because it has shown up as an exam question (I don't know if it still does): it will respond with a 504 error in case of a timeout, when the underlying application is not responding, for example if the web server or the database itself isn't responding. The classic load balancer is not recommended for use anymore, but it's still around and you can utilize it; it's just recommended to use the NLB or ALB when possible.

Now let's look at the concept of sticky sessions. Sticky sessions are an advanced load balancing method that lets you bind a user's session to a specific EC2 instance. This is useful when you have session information that's only stored locally on a single instance, so you need to keep sending that user to the same instance. The diagram shows how this works: in step one, we route traffic to the first EC2 instance and it sets a cookie, and the next time that user comes through, we check whether the cookie exists and send them to the same EC2 instance. This feature only works for the classic load balancer and the application load balancer; it's not available for the NLB.
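If you ever want to flip stickiness on from the CLI for an ALB, it's configured as target group attributes; here's a minimal sketch, with a placeholder target group ARN.

# enable load-balancer-generated cookie stickiness on an ALB target group
aws elbv2 modify-target-group-attributes \
  --target-group-arn <target-group-arn> \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie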
If you need to set sticky sessions for an application load balancer, it has to be set on the target group, not on an individual EC2 instance.

Here's a scenario you might have to worry about. Say a user is requesting something from your web application and you need to know their IP address. The request goes through, and on the EC2 instance you look for it, but it turns out it's not actually their IP address; it's the IP address of the load balancer. So how do we actually see the user's IP address? Through the X-Forwarded-For header, which is a standardized header when dealing with load balancers. X-Forwarded-For is a common method for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or a load balancer. You just make sure your web application reads that header, and that gives you the user's real IP address.

Now we're taking a look at health checks for elastic load balancers. The purpose of health checks is to route traffic away from unhealthy instances and toward healthy instances. How do we determine whether an instance is unhealthy? Through the health check options, which for the ALB and NLB are set on the target group, and for the classic load balancer are set directly on the load balancer itself. The idea is that we ping the server at a specific URL, with a specific protocol, and expect a specific response back. If that fails a specified number of times over the interval we configure, the instance is marked unhealthy, and the load balancer stops sending traffic to it, setting it as out of service. One thing you really need to know: the ELB does not terminate unhealthy instances; it just redirects traffic to healthy instances.

Here we're looking at cross-zone load balancing, a setting you can toggle on the classic and network load balancers; we'll look at what happens when it's enabled and when it's disabled. When it's enabled, requests are distributed evenly across the instances in all enabled availability zones, so with EC2 instances spread across two AZs, the traffic is even across all of them. When it's disabled, requests are distributed evenly only across the instances within each availability zone, so traffic is evenly distributed within AZ A and, separately, within AZ B. Down below you can see how to enable cross-zone load balancing: it's under the Description tab, edit the attributes, and check the box for cross-zone load balancing.

Now we're looking at an application-load-balancer-specific feature called request routing, which lets you apply rules to incoming requests and then forward or redirect that traffic. There are six conditions we can check on: host header, source IP, path, HTTP header, HTTP request method, and query string. For actions, we can forward, redirect, return a fixed response, or authenticate.
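Under the hood, each of those rules hangs off a listener. A minimal sketch of adding a host-header rule from the CLI would look something like this; the ARNs and hostname are placeholders.

# forward requests for app.example.com to the "prod" target group
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 10 \
  --conditions Field=host-header,Values=app.example.com \
  --actions Type=forward,TargetGroupArn=<prod-target-group-arn>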
Let's look at a handful of concrete examples. One thing you could do is route traffic based on subdomain: the app subdomain goes to the prod target group, and the qa subdomain goes to the QA target group. You could also do it on the path, so /prod and /qa route to their respective target groups. You could do it with a query string, or by looking at an HTTP header, or you could say all GET methods go to prod (I don't know why you'd want to, but you could) and all POST methods go to QA. So that is request routing in a nutshell.

We made it to the end of the elastic load balancer section and on to the cheat sheet. There are three elastic load balancers: network, application, and classic. An elastic load balancer must have at least two availability zones for it to work. Elastic load balancers cannot go cross-region; you must create one per region. ALBs have listeners, rules, and target groups to route traffic; NLBs have listeners and target groups to route traffic; CLBs use listeners, and EC2 instances are registered directly as targets to the CLB. The application load balancer handles HTTP and HTTPS traffic, and as the name implies, it's good for web applications. The network load balancer is for TCP and UDP and is good for high network throughput, so think multiplayer video games. The classic load balancer is legacy, and it's recommended to use the ALB or NLB when you can. Then you have X-Forwarded-For, and the idea there is to get the original IP of the incoming traffic passing through the load balancer. You can attach a Web Application Firewall to an ALB (WAF has "application" in the name; the NLB and CLB do not support it). You can attach an AWS Certificate Manager (ACM) SSL certificate to any of the load balancers to get SSL. For the ALB you have advanced request routing rules, where you can route based on subdomain, header, path, and other HTTP information. And then you have sticky sessions, which can be enabled for the CLB or ALB, and the idea is they help the session remember which EC2 instance to go to, based on a cookie.

All right, it's time to get some hands-on experience with EC2, so make your way over to the EC2 console by going up to Services, typing EC2, and clicking through, and you should arrive at the EC2 dashboard. Take a look at the left-hand side, because it's not just EC2 instances under here: we have AMIs, we have Elastic Block Store, we have networking and security, so our security groups, our elastic IPs, our key pairs, and we have load balancing and auto scaling. A lot of the time when you're looking for these things, they happen to live under the EC2 console. Now that we've familiarized ourselves with the overview, let's go ahead and launch our first instance. We're going to get this really nice wizard and work our way through its seven steps. The first step is choosing your AMI, your Amazon Machine Image, which is a template that contains the software configuration, the operating system, application server, and applications, required to launch your instance.
We have some really sane choices here. There's Amazon Linux 2, which is my go-to, but if you wanted something else, like Red Hat or SUSE or Ubuntu or Microsoft Windows, AWS has a bunch of AMIs that they support and manage, and if you pay for AWS support and use these AMIs, you're going to get a lot of help with them. If you want more options beyond the ones provided by AWS, there's the Marketplace and also community AMIs. If we go to community AMIs and take a peek, we can see we can filter based on OS, architecture, and so on. Say we wanted a WordPress AMI, something preloaded with WordPress: we have some here. And if we wanted a paid one, provided by a vendor who supports it and charges some money to keep it in good shape, maybe with good security or just kept up to date, we could do that; here's WordPress by Bitnami, which is a very common one to launch for WordPress. But we're just going to stick to the defaults, go back to Quick Start, launch an Amazon Linux 2 AMI, and click Select.

Now we're on to our second step, which is choosing our instance type. This determines how many vCPUs we get, how much memory we have, whether we're backed by EBS or instance store, and whether there are limitations around network performance. You can filter based on the types you want, and as we learned earlier there are a bunch of different categories. We're going to stay in the general purpose family, which is listed first; t2.micro is a very sane choice and also a free choice if we have the free tier, so we'll select it and proceed to instance details.

Now it's time to configure our instance. The first option available to us is how many instances we want to run; if you wanted to launch 100 all at once and put them in an auto scaling group, you could do so, but we're going to stick with one to be cost-effective. Then we have the option to turn this into a spot instance, which would help us save a considerable amount of money, but for the time being we'll stick with on-demand. Next we need to determine which VPC and subnet we want to launch into: I'm going to stick with the default VPC and no preference on the subnet, so it will pick one at random for me. We definitely want a public IP address, so we'll leave that enabled. You could put this EC2 instance into a placement group (you'd have to create one first), but we're not going to because we don't need it. And we're going to need an IAM role, so I'm going to right-click, open a new tab, and create a new IAM role to give this EC2 instance some permissions. In IAM, choose EC2 as the service, hit Next, and type SSM for Systems Manager; we're going to pick the role permissions for SSM. Hit Next through to Review, name it my-ec2, create the role, close the tab, then hit the refresh button back in the wizard and associate this IAM role with our instance.
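If you'd rather script that role instead of clicking through the IAM console, a rough CLI equivalent looks like this. The role name my-ec2 mirrors the walkthrough, and I'm assuming the AmazonSSMManagedInstanceCore managed policy for Session Manager access; the console wizard may pick a slightly different SSM policy for you.

# trust policy letting EC2 assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name my-ec2 --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name my-ec2 \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# an instance profile is the container that actually attaches the role to EC2
aws iam create-instance-profile --instance-profile-name my-ec2
aws iam add-role-to-instance-profile --instance-profile-name my-ec2 --role-name my-ec2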
The reason I did that is because I want us to have the ability to use Systems Manager Session Manager. There are two different ways we can log into our EC2 instance after it launches: we can SSH into it, or we can use Session Manager, and to use Session Manager we have to have that IAM role with those permissions. The default shutdown behavior is Stop, which is fine by me. If we wanted detailed monitoring we could turn it on, at additional cost. If we wanted to protect against accidental termination, that's also a very good option, but this is a test and we'll be tearing it down pretty quickly, so we don't need it enabled. Then we have the tenancy option: a dedicated host would be very expensive, but it's useful if you're an enterprise organization that has to meet certain requirements, so there are use cases for it. The last option, under Advanced Details (you may have to expand it), is the user data script, which lets us set something up when the server first launches. We have a script we want to run, so I'll copy and paste it in: when the EC2 instance launches, it's going to set us up with an Apache web server and start that server, so we have a very basic website (I'll sketch roughly what that script does a little further down). With that done, we can move on to storage.

Now we're going to look at adding storage to our EC2 instance. By default we always have a root volume, which we cannot remove, but if we wanted to add additional volumes and choose their mount points, we could, and we have a lot of different volume types available to us. We only want one for the sake of this tutorial, so we'll remove the extra one. We can set the size; we'll leave it at 8 GB. If we wanted this root volume to persist when the EC2 instance is terminated, so the volume doesn't get deleted, we could uncheck Delete on Termination, and in most cases that's a very good idea, but we want cleanup to be very easy, so we'll leave it checked. Then we have encryption: we could turn it on here using the default KMS key, but we'll leave it off for the time being and proceed to tags. We're going to skip over tags. Tags are good to set; honestly, I'm just lazy and never set them, but if you want to group your resources or keep track of anything consistently, you want to set tags. We'll skip that and go on to security groups.

Now we need to configure a security group for this EC2 instance. We don't have one yet, so we'll create a new one (we could choose an existing one if there were one). I really don't like the default name, so we'll call it my-sg-for-ec2, and then we'll set some inbound rules. First is SSH, which we definitely want; I'm going to set the source to My IP, because I don't want anyone else to have the ability to SSH into this instance; I want to lock it down to my own address.
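As mentioned back at the user data step, here's roughly what that bootstrap script does on Amazon Linux 2. This is a sketch rather than the exact script from the video.

#!/bin/bash
# install Apache and start it for this boot
yum update -y
yum install -y httpd
systemctl start httpd
# note: this starts Apache but does not enable it on boot,
# which is exactly the gap we'll run into later when we bake an AMI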
We'll also add another rule, because this instance is running an Apache server and we do want to expose port 80. I'll drop that down and pick HTTP, which automatically selects port 80, and we do want this to be internet accessible, but to be very explicit I'll set the source to Anywhere. I'll also add a note on each rule, say "my home" on the SSH rule and "Apache" on the HTTP rule; it doesn't hurt to put those notes in. Then we'll go ahead and review.

Now it's time to review and make sure everything we set in the wizard is what we want it to be. Review it, and if you're happy with it, proceed to Launch. When you hit Launch, it's going to ask you to create a key pair, which is what lets us SSH into the instance and access it. I'm going to drop down, create a new key pair, call it my-ec2, download that key pair, and launch the instance. Now it says the instance is being created, so we can click through to it. It's spinning up; I suggest you untick the filter so you can see all of your EC2 instances. What we're waiting for is for it to move from the pending state to the running state, and once it goes green, we're waiting for the two status checks to pass, the system status check and the instance status check. When those two checks pass, the instance is ready and available, and it will have run the user data script we provided.

Once the checks are complete, we can use the public IP address, or the public DNS record, to see if our server is running; if that works, we're in good shape. Our two checks have passed, meaning the instance is ready, and we can access it via either the public IP or the public DNS record. I like using the public DNS record, so I'll copy it with the little clipboard icon, open a new browser tab, and paste it in, and I get the Apache test page, meaning our user data script worked and successfully installed and started Apache.

So now our instance is in good working order, but let's say we had to log into it to debug something or do some work on it. That's where we need to know how to either SSH in, which is the method most people like, or use Systems Manager Session Manager, which is the way AWS recommends. I'll show you both methods, starting with Session Manager. Just before we do that, let's name the instance to make our lives a little easier: click the little pencil and rename it to "my server". Then drop down Services, type SSM for Systems Manager, open it in a new tab, and we'll go to the Systems Manager console.
Now, on the left-hand side, we're looking for Session Manager, which is all the way down the menu; click it and start a new session. Choose the instance you want to connect to, which is "my server", hit Start Session, and you gain access to the instance immediately; there's next to no wait, and we're in. The only thing I don't like is that it logs you in as the root user, which to me is overly permissive, but we can switch to the correct user: Amazon Linux 2 instances always have an ec2-user, and that's the user you want to be doing things as, so I'll switch over to that user, and now I can go about doing whatever I need to do. So that's Session Manager; I'll hit Terminate to end the session. The huge benefit here is that you get session history, so you have a record of who logged in and went into the server to do something. The other benefit is that you don't have to share around the key pair: when you launch an EC2 instance you only have that one key pair, and you really don't want to pass it around, so Session Manager removes that obstacle. Also, because people have to log into the console to use it, when someone leaves your company you're also denying them access, and you don't have to retrieve that key pair from them. So it's a lot easier to use Session Manager. The downside is that it's a very simplistic terminal inside the browser, so if you're used to richer features from your OS terminal, that's a big drawback, and that's why people still SSH in, which is the next method we'll use to gain access to our EC2 instance.

To SSH in, we're going to need a terminal. I moved the key pair onto my desktop; when you downloaded it, it probably went to your Downloads folder, so I just moved it for convenience. We're going to use the ssh command and log in as ec2-user, because that's the user you should be logging in as when you SSH in. We'll grab the public IP; we could use the public DNS record, but the IP address is a bit shorter, so it's a bit nicer. Then we use the -i flag to specify the private key we want to pass along, which is on the desktop (I'm already in the desktop directory, so there's nothing extra to do). We hit Enter, wait, and we get a permission error. Now, if this is the first time you've connected to the server, it might ask you to confirm the host fingerprint, where you'd type yes; it didn't ask me, which is totally fine. But you'll see it's complaining that the private key file has 0644 permissions, which is too open: it's required that your private key files are not accessible by others, so AWS really wants you to lock down those permissions. If we do an ls -l, we can see the file has quite open permissions, and we can lock it down by typing chmod 400.
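Putting those two commands together, this is roughly the flow; the key file name matches the walkthrough, and the public IP is a placeholder for whatever your instance shows.

# tighten the key's permissions so ssh will accept it
chmod 400 my-ec2.pem

# log in as the ec2-user that Amazon Linux 2 ships with
ssh -i my-ec2.pem ec2-user@<public-ip>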
If we take a look again, the key is now locked down, so if we try to SSH in once more we should have better luck this time around. It's not as fast as Session Manager, but it gets you in the door. There we are, logged in as ec2-user, and we can go about our business doing whatever we want to do.

Now, we did talk in the theory section about user data and metadata, and this is the best opportunity to take a look. There's a private address that's only accessible from inside your EC2 instance, which you can use to get additional information. The first is the user data endpoint: if I paste in curl http://169.254.169.254/latest/user-data and hit Enter, it returns the script that was performed on launch. So if you were debugging an EC2 instance you didn't launch yourself, say you were working at another company, and you really wanted to know what was run at launch, you could use that to find out. Then we have the metadata endpoint, which gives us a lot of rich information: it's the same base address, just with /latest/meta-data/ on the end, and it lists all the different options. Say we wanted the public IP address of this instance: we just append public-ipv4, and there it is. So that's how you log into an instance via SSH or Session Manager, and how you get user data and metadata after the fact.

Now, when we launched this EC2 instance, we didn't actually encrypt the root volume. Just to quickly show you what I'm talking about: if you were launching an instance and went to the storage step, we simply didn't go down and select encryption. So let's say we had to retroactively apply encryption. It's not as easy as a dropdown; we have to go through a bunch of steps, but we definitely need to know how to do this. How would we go about it? We go to Volumes on the left-hand side and find our running volume; here's the volume we want to encrypt, and it's unencrypted. First, we create a snapshot of it; I'll name it "my volume" and create the snapshot, then go back and wait for the snapshot to complete, which takes a little bit of time. Once the progress is at 100%, we can see this snapshot is unencrypted. So we go up to Actions at the top and make a copy of the snapshot, and this is where we have the ability to apply encryption. We'll use the default KMS key, which is a pretty good default, and hit Copy, which initiates the copy. Back on the snapshots page, we wait for the copy to finish: the progress shows 100% and it's marked encrypted, even though it may briefly show pending at 0%; sometimes hitting refresh just brings the interface up to date. Turning off the filter, you can see we have our volume here.
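For reference, the same snapshot-then-encrypted-copy dance from the CLI looks roughly like this; the volume and snapshot IDs and the region are placeholders.

# snapshot the unencrypted root volume
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "my volume"

# copy that snapshot with encryption enabled (uses the default EBS KMS key)
aws ec2 copy-snapshot \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --encrypted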
And then we have our snapshots. I don't really like the auto-generated description, and no, it doesn't look like I can change it, but that's fine. We now have our unencrypted snapshot and our encrypted copy. If we want to launch from the encrypted one, all we have to do is create an image from it: hit Create Image, which creates an AMI from our EBS snapshot. I'll name it "my server", the defaults all look good to me, and we'll hit Create. Clicking through to our AMIs, it's essentially created instantly, so our AMI is ready to launch.

So, to run a version of our server that is encrypted, we go ahead and launch a new instance from that AMI, and the big wizard comes up again; we'll go through it quickly. t2.micro is good, one instance, drop down and attach our my-ec2 role, and as for copying in the startup script again... actually no: since we created this AMI from a snapshot of our instance, Apache is already installed, so we don't have to do that again. On the storage step, there's our encrypted volume. For security groups, we select the existing one, go to review, launch, and choose the existing key pair. Back on the instances page, checking the box, we'll actually have two instances running (you can see I have a few terminated ones, those are just old). Once this one is running, we'll double-check that it's working, and then we'll talk about how to manage launching multiple instances.

Our new instance is now running, and I'll name it "my new server" so we can distinguish it from the old one. I want to see whether that root device is actually encrypted, because that was our big goal here, so I'll open the volume in a new tab, and it is indeed encrypted; we were definitely successful there. Now the main question: is our server still serving the Apache test page? I grab the new IP for the new server, take a look, and Apache isn't running. So what happened? Why isn't it working? There's a really good reason. Going back to our user data script: when we used it on the first server, it installed Apache and then started Apache, but that doesn't mean Apache stays running across a restart. When we made an AMI, a copy of our volume, the installed part was baked in, but there's nothing that says "start Apache on boot." What we need to do is enable the service so that Apache always starts on boot, whether the instance is stopped, started, or restarted. So let's go turn that on and get our server running, and we're going to use Session Manager to do it.
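The fix itself is two commands on the instance, assuming Amazon Linux 2 where Apache runs as the httpd service:

# start Apache right now
sudo systemctl start httpd

# enable it so systemd creates the symlink and starts it on every boot
sudo systemctl enable httpd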
Head back to Systems Manager (if it's not handy, just type SSM into the services search and open it in a new tab), go down to Session Manager, and start a new session. Choose "my new server" — if you named it, it's a lot easier to find — and it gets us into that server lickety-split. We switch to ec2-user, because we don't want to do this as root. First we start the service, because it's nice to see it working: go back to the IP, and it's now serving. Then we handle reboots: I'll paste in the enable command, which creates the symlink for the service, so that when the server restarts, this should keep working. Now close that tab, leave the session open, go back to the instance, and reboot it. If the command does what we hope, Apache will come up on every reboot. It should be rebooting now... let me confirm that one more time, yes, reboot. There we go. Was it really that fast? I think it just booted really quickly, so it's finished rebooting, and the page is still working, so great. I always get that confused, because if you stop and start an instance it takes a long time, but reboots can be very quick.

Now that this instance has the fix, the only issue is that if we create a copy of it, we want that new behavior baked in, so we need to create a new AMI from this instance. Go to Images, create a new image, call it "my server 000", and describe what we did: ensuring Apache restarts on boot. We create the image and... it failed? Interesting, I never get failures. That's fine; we'll refresh and just try again: "my server 000", restart Apache on reboot, create the image again, and give it a little time. Sometimes AMIs or snapshots can fail, and it's just AWS; in those cases you simply retry, though it rarely happens to me. See you after this is done.

Our AMI is now available, and if I remove the filter we can see our original AMI, the one where we installed Apache but it doesn't start by default; if we launched that one, we'd have the same problem as before, whereas this new one will always start Apache. Now, we have a couple of servers running, and I want you to kill both of them because we don't need them anymore; terminate them, and we're going to learn about auto scaling groups. Whenever you want a server always running, that's a good use case for one. Before we can create an auto scaling group, I want you to go create a launch configuration first.
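For the record, a launch configuration can also be created from the CLI in one shot; this sketch mirrors the settings used in the console walkthrough, with a placeholder AMI ID and security group ID.

aws autoscaling create-launch-configuration \
  --launch-configuration-name my-server-lc-000 \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-ec2 \
  --iam-instance-profile my-ec2 \
  --security-groups sg-0123456789abcdef0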
So I just clicked Launch Configurations down below, and we're going to create a configuration. We can choose an AMI, but we actually want to use one of our own, so I'll go to My AMIs and select the Apache server one. It's going to be t2.micro, we want the "my EC2" role, and we're going to name it "my server lc 000"; LC stands for launch configuration, that's just my convention, you can do whatever you want. The volume is going to be encrypted by default, because if you use an AMI that is encrypted, you can't un-encrypt it. We'll go to security groups, drop it down and select the security group we created previously, go to review, create the launch configuration, and choose our existing key pair. So we've created the launch configuration, and you'll notice the process was very similar to launching an EC2 instance; that's because it was saving all of those settings. An AMI doesn't save all the settings, but a launch configuration does. Now that we have our launch configuration, we can go ahead and create an auto scaling group, and this is going to help us keep a server continuously running. So we'll create our auto scaling group, use our launch configuration, go to the next step, and name it "my server asg"; ASG stands for auto scaling group. We're going to start with a group size of one instance, launch it into the default VPC's subnets, and we have to choose a couple, so we'll do A and B. We'll peek at advanced details and leave them alone, leave notifications and tags alone, go to review, and create the auto scaling group. Then we hit Close and check out that ASG, and look at that, it's set to one desired, one min and one max, and it's using that AMI, so it's just going to start spinning up that server. The way this works, if I go to the edit options, is that we have these three values. Minimum is the minimum number of instances the auto scaling group should have at any time, so there's always at least one server running. Max means we can never go beyond that many servers; with scaling policies an ASG could try to trigger and go beyond it, so the max is like a safety so we don't end up with too many servers. And then we have desired capacity, the desired number of instances we want running; this value actually gets adjusted by the auto scaling group as it scales. For a very simple application I might have a max of two and a min and desired of one. Anyway, this instance is automatically starting, and it looks like it's in service; if I go over to instances and take a look, there it is, initializing.
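If you prefer the CLI over the console, the same launch configuration and auto scaling group could be created roughly like this; every ID, name and subnet below is a placeholder:

aws autoscaling create-launch-configuration \
  --launch-configuration-name my-server-lc-000 \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --iam-instance-profile MyEC2Role \
  --security-groups sg-0123456789abcdef0 \
  --key-name my-key-pair

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-server-asg \
  --launch-configuration-name my-server-lc-000 \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

The min, max and desired values map directly onto the three fields discussed above.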
So the big benefit of ASGs is that if this instance got terminated for any reason, the ASG will spin up another one. We're just going to wait a little bit for it to pass its two status checks, and then we'll attempt to kill it and see if the auto scaling group spins up a new one. So the instance that was launched by our auto scaling group is running, and let's double-check that it's working, it's always good to do sanity checks, and our Apache page is still operational. Now the real question is, if we terminate this instance, will the auto scaling group launch a new one? It should detect that it's unhealthy and replace it. So it's terminating, and if we go to our auto scaling group and watch the monitoring, it can tell that the instance is terminating and unhealthy, it determines there are no instances, and it's going to start up another one shortly. There's also that health check grace period, so we're just waiting a little bit of time here... and great, now it's starting up a new EC2 instance because it determined nothing was running, so our ASG is working as expected. I think the next thing we want to do is change our Apache page, that index, to show some different text, and learn how to actually roll that out. We'll have to create a new AMI and then swap out the launch configuration so that the auto scaling group serves the updated page. So we'll just wait for this instance to finish launching so we can get into it. So we had terminated our previous instance, the auto scaling group spun up a new one, and it's now running; let's double-check that our Apache page is still there, and it is. Now let's go through the process of updating this page, so that when the auto scaling group spins up new instances, they all have the latest changes. What we're going to have to do is update our launch configuration, we're also going to have to bake a new AMI, and even before that, we need to get into an instance and update the default page. So go to Services, type SSM, and open up Systems Manager, then go down to Session Manager and start a new session. We're going to choose the unnamed instance, because that's the one launched by the auto scaling group, and we'll instantly get into it and switch over to the ec2-user, because we don't want to do this as root. If my memory is still good, Apache stores its pages in /var/www/html, and it does. So we're just going to create a new page; I'll do sudo touch index.html to create the file, and then we'll want to edit it. I'm going to use vi, but you can use nano here, nano is a lot easier; in vi every single thing is a hotkey, so you might regret launching vi, but it's what I'm most familiar with.
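Roughly, the commands about to be run in that session look like this; a minimal sketch assuming Amazon Linux 2, where Apache's document root is /var/www/html:

sudo su - ec2-user       # drop from root to the ec2-user account
cd /var/www/html         # Apache's default document root on Amazon Linux 2
sudo touch index.html    # create the custom page that overrides the test page
sudo vi index.html       # paste in the new HTML, then :wq to write and quit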
So I'm going to open that file, and I already have a page prepared, which you'll have access to for this tutorial series; I'm just going to paste it in and write and quit to save the file. Before we kill this session, let's double-check that it works, so we go back to the instance, grab that IP, paste it in, and there we are, we have our Star Trek reference page from the very famous episode "In the Pale Moonlight". So our page has been replaced, great. Now that we have this in place, the next thing is to make sure that when the auto scaling group launches a new instance, it has that page, and as I said, that means creating an AMI. So we select the current instance and create a new image, following our naming convention; I can't remember exactly what I called the last one, so I'm going to double-check it quickly, because you only get one opportunity to name these, so you want to get it right. We'll name it "my server 001", I really wish they'd show you the previous names, and the description will say that we created our own custom index.html page for Apache. We'll create that image, go to view pending AMIs, and wait until it's complete, and then we'll continue on to updating our launch configuration. So the image is now available, and if we click off the filter we can see all our AMIs: this one has the Apache server but it doesn't restart on boot so you have to start it manually, this one starts Apache but has the default page, and this one has our custom page. We need to get this new AMI into our auto scaling group, and the way we do that is by updating the launch configuration. Launch configurations are read-only, you cannot edit one and change the AMI, so we have to create a copy of the launch configuration. Be careful here, there are also launch templates, which are the newer way of doing launch configurations, but people still use launch configurations. So we go to Actions and create a copy of the launch configuration. It jumps all the way to the last step, which is not very helpful, so we go back to the beginning, deselect the 000 AMI and choose the 001 one; it will probably warn us, and yes, we do want to do this. We leave all the settings the same and go to configure details, because the copy gets a silly auto-generated name, and I'm going to rename it to end in 001; be careful here, because you do not get the opportunity to rename these later, so it's nice to keep the names consistent. We'll proceed to add storage to make sure everything's fine, yes it's encrypted, and at security groups it tried to make a new one, so be careful there and make sure it uses our existing one, we don't need to keep creating security groups we don't need. Then we review it, create the launch configuration, and associate it with our key pair as usual.
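The same bake step can be done from the CLI; a small sketch with a hypothetical instance ID, mirroring the console steps above:

# create the new AMI from the instance
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "my-server-001" \
  --description "custom index.html page for Apache"

# check when the new image flips from pending to available
aws ec2 describe-images --owners self \
  --query 'Images[].{Id:ImageId,Name:Name,State:State}'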
And now this launch configuration is ready. To actually roll it out, we need to go to our auto scaling group, edit it, drop down the launch configuration and choose the 001 version, and hit Save. If we want a new instance to take its place right away, we can just terminate the old one, and the auto scaling group should spin up a replacement using the new launch configuration. I'm a bit paranoid here, and you should be too in AWS, so I'm going to double-check that the setting actually took, because sometimes I hit Save and it just doesn't take effect, and yes, the new launch configuration is set on the ASG. So we'll go back, terminate this instance, and the new one that spins up should have the page we created; I'll talk to you again in a moment. Our new instance is running, and if we take a peek, it is showing our web page, so we're in good shape. We successfully updated the launch configuration for the auto scaling group, and now any time a new instance is launched, it will have all of our latest changes. So auto scaling groups are great because they ensure there's always at least one server running, but what happens if the entire availability zone goes out? The ASG isn't going to help if it only launches into that AZ. So what we're going to do is create high availability using a load balancer, so that we can run instances in more than one AZ at any given time. Let's go ahead and layer our auto scaling group in behind a load balancer. So make a new tab, and we're going to create a load balancer; it's not terribly hard, we'll just get through it. We're given three options: application load balancer, network load balancer, and classic load balancer, and we're going to make an application load balancer. I'm going to name it "my alb", it's going to be internet-facing, we'll use IPv4 because that's the easiest to work with, and we'll use the HTTP protocol on port 80, because we don't have a custom domain name yet, so we'll have to stick with that. We want to run this in multiple AZs, and it's always good to run in at least three public AZs, so I'm going to select those. We proceed to the load balancer security settings step, which doesn't have anything for us since we're on plain HTTP, so we'll go next, and we're going to create a new security group for the load balancer. We'll call it "my alb sg"; SG is the suffix I always like to put on the end, and we can leave the default description. We want anything on port 80 to be accessible from anywhere, which to me is a good rule for a public load balancer, and we'll go to the next step, where we have to create our target group. The target group is what points to the actual EC2 instances, so we'll just make a new one, and we'll call it my target group.
And we'll actually call it the production one, "tg prod", because you can have multiple target groups and it helps to label them. It's going to target instances and use the HTTP protocol, and the health check is going to be the default page, which is fine by me. Then it asks us to register targets, and while we could individually register instances here, we actually want to associate them via the auto scaling group, so we're not going to do it this way; we'll just go next and create the load balancer. It says it's provisioning, so we'll have to wait a little, but in the meantime we want to get the auto scaling group associated with our target group. The way we go about that is to go to our auto scaling group, go to Actions, edit, and associate it with tg prod; that's how the load balancer knows which instances belong to it, via the auto scaling group. We'll also change the health check type to ELB, because that's a better check, and save. Then we go back to load balancers and see how it's progressing; in my experience an ALB usually spins up fairly quickly, but it still says provisioning, so while that's going we have an opportunity to talk about some of the settings. The load balancer has listeners, and the wizard created one for us that listens on port 80. Listeners always have rules, and this rule always forwards to the target group we selected. If we want to edit or add more rules, we can hit Rules, and here we can see that, by default, everything on port 80 gets forwarded to that target group. We could add all sorts of rules here; for example, if the path was /secret-page, we could add an action that forwards it to a target group with some very special servers in it. So there are some quite advanced rules we can set in here, but we're going to leave them alone, I just wanted to give you a little tour. So we're just waiting for this to finish provisioning, and once it's done we'll see if we can get traffic through our load balancer. Our ALB is ready now; just make sure you press that refresh button up there, because a lot of the time these things are ready and you're just sitting there because the UI does not refresh itself. Let's see if our load balancer is working and routing traffic to our single instance. Down below we have the DNS name, which is how we access the load balancer, and there you go, everything is being routed through the load balancer. Now, if we were to go back to this EC2 instance, we might want to restrict traffic so that you can't go directly to the instance, only through the load balancer. So I'm going to copy this IP, and yes, I'm still able to access the instance through its IP address directly, which is not ideal.
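For reference, the two pieces wired up here, attaching the target group to the ASG and a path-based rule like the /secret-page example, look roughly like this from the CLI; all ARNs and names are placeholders:

# register the ASG's instances with the target group automatically
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name my-server-asg \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/tg-prod/abc123

# forward /secret-page requests to a different (hypothetical) target group
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/aaa/bbb \
  --priority 10 \
  --conditions Field=path-pattern,Values='/secret-page*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/secret-tg/xyz789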
But let's say I don't want it to be accessible that way, and it always has to go through the load balancer. The way we can do that is to adjust the security group so that it no longer accepts port 80 traffic from anywhere. I kind of like keeping the existing security group for the EC2 instance around, so what I actually want to do is create a new security group just for the auto scaling group. So we go to Security Groups, create a new one, and I'll call it "my asg sg"; I haven't been perfectly consistent with names here, but close enough. For this one we're going to allow SSH; honestly we don't really need SSH anymore because we can use Systems Manager, but in case we want it, we'll set that rule and allow inbound traffic that way. We'll also allow inbound traffic on port 80, but only from the load balancer, and the way we do that is by supplying another security group as the source. I don't remember exactly what the load balancer's group was called, but it starts with "alb", so I'll start typing that and select it, and now we're only allowing port 80 traffic that comes from the load balancer. We hit Create, and oh, I have to give it a description, so "my asg security group"; you always have to provide those descriptions, which is kind of annoying. Now we need to go back to our auto scaling group, and we're actually going to have to make a new launch configuration, because the security group is associated with the launch configuration. So we copy the launch configuration again; you can see this is a very common pattern. We're going to use the same AMI, we're not changing anything there, we just want to make sure it uses the new security group, so we select the ASG one. I'll double-check that all the other settings are the same, fix the silly copy name so it ends in 002, and everything is in good shape, so we create that new launch configuration and close. Then we go to our auto scaling group, Actions, edit as always, choose the 002 version, and save. The next thing we're going to do is terminate this instance, because we want the new security group to take effect; stopping it would also work, but we want to get rid of this server entirely. And while we're in the auto scaling group, we also want to run in at least three AZs to get that full availability, so I'm going to change our desired to three, our min to three and our max to three, and I'm going to add an additional subnet, we need C, so that it can actually do that. Now I'm expecting the ASG to launch across all three AZs; if I just go back and edit again to check... yes.
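The interesting part of that security group, allowing port 80 only from the load balancer's security group, looks like this from the CLI; group and VPC IDs are placeholders:

aws ec2 create-security-group \
  --group-name my-asg-sg \
  --description "my asg security group" \
  --vpc-id vpc-0123456789abcdef0

# port 80 is only allowed when the traffic originates from the ALB's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa111122223333a \
  --protocol tcp --port 80 \
  --source-group sg-0bbb444455556666b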
So now all three AZs appear there; if we didn't add that subnet, it wouldn't have been able to spread the instances out, it would have launched two in one AZ and one in another. Now we're just going to wait for these three servers to spin up; there's two, where's my third one... we set it to three, so the third one will appear shortly, and we'll resume once they all do. So our three EC2 instances have launched, I was a bit worried about the last one but it eventually made its way here, and look at the AZs: one is in A, one is in B and one is in C. So we have high availability; if two availability zones go out, we're always going to have a third server running, so we're in very good shape. The other thing we were doing was making sure that people couldn't directly access the servers and had to go through the load balancer, so let's go check that right now. If I pull up this instance's IP or public DNS and try it, it never loads, and that's great, that's exactly what we want. The question now is whether we still have access to our instances through the load balancer's DNS name, so we copy that, and we do. The good reason for doing this is that we always want to funnel traffic through one narrow pipeline, because if everything passes through the load balancer, we can get richer analytics and put protections in front of it. One thing we can do with a load balancer is attach a WAF, a web application firewall, which is a really good thing to do, and if people could access the instances directly they would bypass that WAF; so it's all about creating those nice choke points. Now that we have our ALB, the next step is really to serve this website up via a custom domain name, so let's do a bit with Route 53 and get a custom domain. So now it's time to learn how to use Route 53 to get a custom domain name, because we do have this URL for our website, but it's kind of ugly, and we want to go custom and learn how to integrate Route 53 with our load balancer. Let's go up to the top, type Route 53, and we're going to register a domain. Now, this does cost money, so I guess you could skip it or just watch, but to get the full effect I really do think you should go out and purchase a very inexpensive domain. So at the top I'm going to register a new domain, and I'm going to get a .com if I can; there are cheaper TLDs, but I don't have to be quite that frugal. I know what domain I want, and as always it's going to be a Star Trek reference, so I'm going to see if I can get the Ferengi Alliance. We'll type that in and check if it's available, and it is, so that's going to be our domain name. I'll add it to my cart, we'll have our subtotal, hit Continue, and now we have a bunch of contact information to fill in. I just want to point out down below that you can have privacy protection turned on; normally with other registrars like GoDaddy you'd have to pay an additional fee to have your information protected.
That means your information isn't displayed in WHOIS. If you're wondering what I'm talking about, you can go to a WHOIS lookup site, domaintools I believe, and generally look up any domain; if we typed in google.com, there's often additional information there, like a phone number and the company. Sometimes you want to keep that information private, and that's what this option does. So if you've ever had a random call from somebody and wondered how they got your phone number, maybe you registered a domain name that didn't have privacy turned on. Anyway, I'm going to fill this out and then show you the steps afterwards. So we're on the next page here, and I did substitute my private information; if you do call that number, you're looking forward to some tasty pizza in Toronto. This page asks if we want to automatically renew the domain, and I'm going to say yes, because I think I want to keep it, and I'll agree to the terms and conditions. It's going to cost $12 USD; unfortunately not Canadian dollars, so it's a little more expensive, but for me it's worth it, so we'll go ahead and complete our purchase. It says the domain has been successfully registered, so there we go, cool. I just want to show you that the domain is in a pending state, so we're waiting on some emails from AWS, and once we get those emails and confirm them, we should have our domain very shortly. They say it can take up to three days, but I've never had to wait that long; it's pretty quick, especially with .coms. As those emails come in, I'll switch over to our email and show you what they look like. So our first email has arrived, from Amazon Registrar, and it's just a confirmation, nothing for us to do; it's saying, hey, you are now using Amazon Registrar, which is the thing that actually registers the domains behind Route 53. This isn't really the email we care about, but I wanted to show it so you know what it's about. Our second email came in about 15 minutes later, and it says the domain has been successfully registered, so now we're ready to start using it. We'll go back to Route 53 and get out of my email. Our domain name is registered, and we can see it appearing under registered domains, no longer in the pending state. So let's go up to hosted zones, because Route 53 will have created one for us by default, and we can go ahead and start hooking this domain up to our load balancer. I'm going to click into the Ferengi Alliance zone, and right off the bat we have some NS records and an SOA record, so they've really set us up. We're going to create our first record set, we want to hook up the www subdomain, we're going to use an alias and choose the target, which is our ALB, leave it on simple routing, and create it.
And now our www record should start pointing at our load balancer; I'm pretty sure this takes effect almost immediately, but this is a brand new domain name, so it may have to propagate through DNS first. So if this doesn't work right away I'm not going to get too upset; it can't be reached just yet, so I'm going to give it a little bit of time, because everything is hooked up correctly, and we'll be back shortly. So I took a quick break there, had some coconut water, came back refreshed, and now our website's working; I didn't have to do anything, it just sometimes takes a little time for those changes to propagate across the internet. The next thing we need to resolve is the fact that this isn't secure. AWS has a service called AWS Certificate Manager, ACM, and it allows you to get free SSL certificates. Just be sure, when you go to the service, that you click provision certificates and not Private CA; private certificate authorities are very expensive, about $500 to begin with, and you really just want to provision a public certificate. I wish they'd say "free" or "public" over here so it's less confusing. You're only ever going to see that splash screen the first time you create a certificate in a region, and it does ask you again whether you want a public or private certificate; you definitely want the public one. So we're going to request a certificate and put our domain name in; just to cover all our bases, I'm going to put in the naked domain and also a wildcard, so all our subdomains are included. We'll hit next and use DNS validation; email validation is the older mechanism, everyone does DNS validation now. We hit review, then confirm and request, and now it's going to ask us to prove that we have ownership of the domain. Luckily, we can just drop this down and create the record in Route 53 with one click; what this does is put a CNAME in your DNS records, and that's how ACM knows we own the domain. We'll do that for both entries, and now we just have to wait for these to validate, which shouldn't take too long.
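Behind that console click, the alias record is just a Route 53 change set; a rough sketch with a placeholder domain, hosted zone ID and ALB details:

# contents of change-batch.json: alias the www record to the load balancer
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z35SXDOTRQ7X7K",
        "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch file://change-batch.json

Note that the AliasTarget's HostedZoneId is the load balancer's canonical hosted zone ID, not your own zone's ID; both values show up in the ALB's description.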
So that certificate has been issued; the console didn't take me directly to that screen, so I did have to go to the top, type ACM to get back there, and hit refresh, because it's a static page and you'll have to hit refresh a few times. But again, this usually only takes a few minutes when you're validating through Route 53, so just be aware that's how long it should take. Now that it's been issued, we can go ahead and attach it to our load balancer. So we go back to the EC2 console, go to Load Balancers on the left-hand side, make sure our load balancer is selected, go to Listeners, and add another listener on port 443, which is HTTPS. We're going to forward it to our production target group, and then this is where we get to attach the SSL certificate, so we drop that down, choose the Ferengi Alliance certificate, and hit save. So now we're able to listen on port 443, but we have this little caution symbol saying we can't actually receive inbound traffic on 443, so we're going to have to update our security group. Going down to the security group, we click it, and it's the ALB security group that we're looking for, so this one here; we go to inbound rules, edit, add a new rule, and set it for HTTPS. So there we go, now we can accept traffic on 443, and the certificate is attached.
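The equivalent CLI steps for the certificate and the HTTPS listener look roughly like this; the domain, ARNs and group ID are all placeholders:

# request a public certificate for the naked domain plus a wildcard
aws acm request-certificate \
  --domain-name example.com \
  --subject-alternative-names "*.example.com" \
  --validation-method DNS

# once issued, add an HTTPS listener on the ALB that uses it
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123 \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=arn:aws:acm:us-east-1:111122223333:certificate/aaaa-bbbb \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/tg-prod/def456

# and open 443 on the ALB's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0bbb444455556666b \
  --protocol tcp --port 443 --cidr 0.0.0.0/0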
So now we should have a secure URL when we access our domain name. I'll grab the name, making sure there are no spelling mistakes, paste it in, and there we go, it's secure. So we're all done this follow-along, and I just want to make sure we do some cleanup so we aren't being charged for things we don't need anymore. The first thing we'll do is delete our load balancer, which is not too difficult: go to Actions, delete, and wow, that was very fast. Now on to our auto scaling group, so we'll drop down Actions and delete it. When you delete the auto scaling group, it will automatically terminate its EC2 instances, but it's good to keep an eye on them, so we'll pop over to instances for a minute; you can see they're not terminating just yet, so we're going to wait on that auto scaling group, and this might take a little bit of time. That took an incredibly long time to delete, I don't know why, but if we go back to our instances, they've all been terminated; when the auto scaling group is deleted, it takes the EC2 instances down with it. We probably also want to go to Route 53 and remove the record pointing at the now-dead endpoint, because a dangling record like that can potentially be abused by somebody clever, so we'll delete it since it's pointing at nothing. And so there you go, that was our cleanup, we're all in good shape, and hopefully you found the section useful.

Hey, this is Andrew Brown from ExamPro, and we are going to look at Elastic File System, EFS, which is a scalable, elastic, cloud-native NFS file system. You can attach a single file system to multiple EC2 instances, and you don't have to worry about running out of, or managing, disk space. So EFS is a file storage service for EC2 instances; storage capacity grows up to petabytes and shrinks automatically based on the data stored, and that's why it has "elastic" in its name, the drive changes to meet whatever demand you have. The huge advantage here is that you can have multiple EC2 instances in the same VPC mount a single EFS volume, so it's like they're all sharing the same drive, and it's not just a couple of instances, it's any of them within a single VPC, which is amazing. In order for your EC2 instances to actually mount EFS, they do have to install an NFS version 4.1 client, which makes sense, because EFS uses the NFSv4 protocol. EFS will create mount targets in all your VPC subnets, and that's how it's able to let you mount from different subnets or different AZs; it creates a bunch of mount points, and those are what you mount.
And the way it bills is based on the space you're actually using, roughly 30 cents per gigabyte, month over month, recurring. So there you go. Hey, this is Andrew Brown from ExamPro, and we're going to do a quick EFS follow-along. We're going to launch two EC2 instances, connect them both to EFS, and see if they can share files between them through that one file system. So we need to make our way to the EFS console, and we're going to create a new file system. We're going to launch it in our default VPC, and you'll see it's going to create mount targets in every single availability zone for us, and it's also going to use the default security group. We'll go next and skip tags. We do have the ability to do lifecycle management, which is no different from S3, allowing you to reduce costs by moving files you're not accessing frequently into infrequent access, where you get a cheaper storage cost, so that's really nice. We're comfortable with bursting throughput, we'll stick with general purpose performance, and we're going to turn on encryption; it's good practice to use encryption, and especially for the certifications you want to know what you can encrypt. AWS has a default KMS key for us, so we can go to next, do a quick review, and create the file system. It's going to take a little bit of time to create, so while that's going, we can go prep our EC2 instances. Looking back at our EFS volume, the mount targets are still creating, so let's go create those EC2 instances. In the EC2 console, which you can get to just by typing EC2, we're going to launch a couple of instances. We'll choose Amazon Linux 2, because that's always a good choice, stick with t2.micro because that's part of the free tier, go next, and set the count to two instances. We have to make sure we're launching these in the same VPC as EFS, which is the default one, and we're not going to worry about which subnet, because we'll be able to mount from anywhere in the VPC. I also want you to create a new role here; we've created this multiple times throughout our follow-alongs, but just in case you've missed it: create a role, choose EC2, go next, choose the SSM policy for Systems Manager, hit next and then next, type in a name, we'll say "my efs ec2 role", and create the role. That's going to let us access these instances via Session Manager, which is the preferred way over SSH. So we'll hit refresh back in the launch wizard and select it. We're also going to attach a user data script, because we need to configure these instances to be able to use EFS; there's a little utility you do have to install, and the script will install it on both instances. Alright, and then we'll go ahead and go to storage.
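The transcript doesn't show the user data script itself, but a minimal sketch for Amazon Linux 2 that installs the EFS mount helper would be roughly:

#!/bin/bash
# install amazon-efs-utils so "mount -t efs" (and the TLS option) works later
yum install -y amazon-efs-utils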
Then we'll go through tags, and at security groups we're going to make a new one for this; I'm just going to call it "my ec2 efs sg". We don't really plan on doing much with it, so I'll go ahead to review and launch and select our key pair; not that the key pair matters much, because we're going to use Session Manager to get into these instances. Now, as these are spinning up, let's go take a look back at EFS and see if those mount targets are done creating. It says they're still creating, but you can never quite trust the console, so we'll do a refresh, and they are all available. EFS actually gives us instructions on how to set this up: if we click through, it tells us we need to install the mount helper onto our EC2 instances for this to work, which we have absolutely done via the user data, and when we scroll down, it shows how to mount the file system. Since we're using encryption, we're going to have to use the TLS mount command it gives us, so that's what we'll do: log in with Session Manager and mount using that command. Back at the EC2 instances, we'll wait until the status checks are complete, and while they're initializing, we should probably go set our security groups. So I'm going to go over to security groups and look for the default one, because that's the security group our EFS mount targets are using; I probably should have created a dedicated one, but that's fine. I'll match it up by the default VPC's ID, and it is this one here, the default group. We're going to add an inbound rule; there's an old rule here from a previous follow-along, so we'll just remove that, and then we'll look for NFS, which sets the port to 2049. For the source, we just need to allow the security group of our EC2 instances, the one we called "my ec2 efs sg", and now we shouldn't have any issues mounting, because we do need that access. Now we'll go back and see how our instances are doing, and they look ready to go, so we can go gain access to them. We'll go to Systems Manager, so type SSM and make a new tab, and on the left-hand side go to Session Manager and start a session. There are both of our instances; we should probably go name them, it'll make our lives a lot easier, so I'll call one "ec2 efs a" and the other "ec2 efs b". We'll go back, do a refresh, and there are our two instances, so we'll start a session for the first one, and then make another session and start it for B. And so now we have these two sessions, and we need to switch to the correct user, because we're in as root and we don't want that, we want the ec2-user.
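Pulled out of the session, the commands we're about to run on each instance boil down to roughly this; a minimal sketch assuming the amazon-efs-utils mount helper from the user data, with a placeholder file system ID and file name:

sudo su - ec2-user                             # work as ec2-user, not root
mkdir ~/efs                                    # create a mount point in the home directory
sudo mount -t efs -o tls fs-12345678:/ ~/efs   # TLS mount via the EFS mount helper

sudo touch ~/efs/hello.txt                     # create a file on instance A...
ls ~/efs                                       # ...and it shows up on instance B as well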
So we'll switch over to the ec2-user in both sessions; which one is A doesn't really matter at this point. Now we're set up, and we can go ahead and mount EFS, so it's just a matter of copying that entire mount command; we do need the sudo in there, so make sure you include it. I paste it in and try to mount, and it says the mount point doesn't exist; it fails because it has nothing to mount to, so we actually have to create a directory for it. So here in the home directory I'm just going to run mkdir efs, then go back up through my history and run the mount command again, and now it mounts to that new directory. We'll do the same thing over on the other instance; to save some time I'll just copy the commands over (oops, I dropped the "m" off the front of mkdir there), and then copy over the same mount command, so they should both be mounted now. I'll do an ls, go into that directory, and create a file with touch, some .txt file; that gets permission denied, so I'll rerun it with sudo, and an ls in the directory shows the file is there. Now if I go over to the other instance, do an ls, cd into efs, and ls again, there is that same file. So that's how you can share files across instances using EFS, and that's really all you need to know, so we're done. We'll terminate this instance, and we'll also terminate the other one, close Session Manager, and since we're all done, head over to EFS and tear the file system down too; it only costs money while you're storing data, but we don't need it, so we'll delete it by copying in the confirmation text, and we're all good to go. So there you are, that is EFS.

Alright, so we're on to the EFS cheat sheet. Elastic File System: EFS supports the NFS version 4 protocol. You pay per gigabyte of storage per month. Volumes can scale to petabyte size, and storage will shrink and grow to meet the current data stored, which is why it's called elastic. It can support thousands of concurrent connections over NFS. Your data is stored across multiple AZs within a region, so you have good durability. You can mount multiple EC2 instances to a single EFS file system, as long as they're all in the same VPC. It creates mount points in all your VPC subnets, so you can mount from anywhere within your VPC, and it provides read-after-write consistency. So there you go.

Hey, this is Andrew Brown from ExamPro, and we are looking at Elastic Block Store, also known as EBS, which is a virtual hard drive in the cloud: you create new volumes, attach them to EC2 instances, back them up via snapshots, and encrypt them easily. Before we jump into EBS, I want to lay some foundational knowledge that's going to help us understand why certain storage mediums are better than others based on their use case. So let's talk about IOPS. IOPS stands for input/output operations per second; it is the speed at which non-contiguous reads and writes can be performed on a storage medium.
So when someone says high IO, they're saying the medium can do lots of small, fast reads and writes. Then we have the concept of throughput, which is the data transfer rate to and from the storage medium, in megabytes per second. Then you have bandwidth, which sounds very similar but is different: bandwidth is the measurement of the total possible speed of data movement along the network. To really distinguish between throughput and bandwidth, we use the pipe and water example: think of bandwidth as the pipe and throughput as the water. So now let's jump into EBS. EBS is a highly available, durable solution for attaching persistent block storage volumes to EC2 instances, and volumes are automatically replicated within their AZ to protect against component failure. We have five types of EBS storage to choose from: General Purpose SSD, Provisioned IOPS SSD, Throughput Optimized HDD, Cold HDD, and EBS Magnetic. We have some short definitions here, but we're going to cover each one and try to understand its use case, starting with General Purpose. As the name implies, it's good for general usage without specific requirements, so you're going to use this for most workloads, like your web apps, and it has a good balance between price and performance. For its attributes, it can have a volume size between 1 GB and 16 TB and a max of 16,000 IOPS. Moving on to Provisioned IOPS SSD, it's really good when you need fast input and output, or the more verbose description: mission-critical, low-latency or high-throughput workloads; it's not just high IOPS, it's high throughput as well, so it's great for large databases, think RDS or Cassandra. The way you know when to start using Provisioned IOPS is if you exceed 16,000 IOPS, which is the limit for General Purpose, or if your throughput needs to be greater than 250 MB/s. The volume size can be between 4 GB and 16 TB, and it supports a max of 64,000 IOPS. Moving on to our hard disk drives, we have Throughput Optimized HDD, which is low cost and designed for frequently accessed, throughput-intensive workloads; it's great for data warehousing, big data and log processing, anywhere we have a lot of data. The volume size starts a lot larger, 500 GB up to 16 TB, and the max IOPS is 500. Cold HDD is the lowest-cost hard drive, for less frequently accessed workloads; if you want to put backups or file storage somewhere, this is a good drive for that. It has the same volume size range as Throughput Optimized, just with lower throughput, and a max of 250 IOPS. Finally, EBS Magnetic is very inexpensive storage for long-term archival, with volume sizes from 1 GB to 1 TB and a max of 40 to 200 IOPS; it generally uses previous-generation hard drives, which is why you get that low cost.
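As a quick illustration of the Provisioned IOPS case, creating and attaching an io1 volume from the CLI might look like this; size, IOPS, AZ and IDs are all placeholder values:

aws ec2 create-volume \
  --volume-type io1 \
  --size 500 \
  --iops 20000 \
  --availability-zone us-east-1a \
  --encrypted

# attach it to a hypothetical instance as a secondary device
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf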
So there's the full spectrum of EBS volume types for you. Now let's look at the underlying storage media, starting with hard disk drives. A hard disk drive is magnetic storage that uses rotating platters, an actuator arm and a magnetic head; a round platter, an arm and a little head on the end, which should remind you of a record player. HDDs are very good at writing a continuous amount of data: once the arm goes down, it's great at writing a lot of data in one pass. Where hard disk drives do not excel is when you have many small reads and writes, which is high IO, and the reason is physical: the arm has to lift up, move to where it needs to write, go down, write, then lift up again, and all that movement limits its ability to deliver high IO. So hard disk drives are really good for throughput, continuous amounts of data being written quickly, but the caveat is that they do have physical moving parts. Next up are solid state drives, SSDs, which are different because they have no physical moving parts; they use integrated circuits to move the data and store it on something like flash memory. SSDs are typically more resistant to physical shock, they're very quiet because there are no moving parts, and they have quicker access times and lower latency, so they're really good at frequent reads and writes, that's high IO. They can also have really good throughput, but generally when we think SSD, we think high IO. Our last storage medium is magnetic tape, and if you've ever seen an old computer you've seen magnetic tape, because it looks like film on big reels. We still use it today because it's highly durable, it lasts for decades, and it's extremely cheap to produce; if it isn't broken, why throw it out. We don't really use the big reels anymore; instead you have a tape drive, and you insert a cartridge into it, which contains the magnetic tape. Now I want to talk about how we can move our volumes around. If you want to move a volume from one AZ to another, you have to create a snapshot, then create an AMI from that snapshot, and then launch an EC2 instance from that AMI into the other availability zone. For regions there's a little more work involved: it's the same process to begin with, create a snapshot and then an AMI from it, but to get into another region you have to copy that AMI, using the copy AMI command, into region B, and then launch an EC2 instance there. So that's how we get our volumes from one region to another. Now let's take a look at how to encrypt the root volume. When you create an EC2 instance through the launch wizard, there is a storage step.
And here we can see our storage volume, which is going to be our root, and there's a dropdown right there where we can encrypt it with the key we want. This wasn't always possible; until fairly recently you weren't able to encrypt a root volume on creation, but you definitely can now. Now, what happens if we have a volume that was created unencrypted and we want to apply encryption to it? We have to go through a little more effort, and here are the steps. First, take a snapshot of the unencrypted volume. Then use the copy command on that snapshot, and when copying we get the option to encrypt it, giving us an encrypted snapshot. Then we create an AMI from that encrypted snapshot, and finally we launch a new EC2 instance from that AMI. That's how we get an encrypted root volume. Next, when you launch an EC2 instance, it can be backed by either an EBS volume or an instance store volume, and there are some cases when you'd want one over the other, but 99.9% of the time you're going to want an EBS volume. An EBS volume is a durable, block-level storage device that you can attach to a single EC2 instance, while an instance store volume is a temporary storage type located on disks that are physically attached to the host machine. The key words are that one is durable and one is temporary. Any time we talk about instance stores, the word "ephemeral" comes up, and it means lasting for a very short time, so it makes total sense why they're sometimes called that. For an EBS-backed instance, the root volume is created from an EBS snapshot; for an instance-store-backed instance, it's created from a template stored in S3. The way you use these volumes also affects the behaviour you'll see: you can stop and start an EBS-backed EC2 instance, and the data persists when it starts back up. With an instance store volume, you cannot stop the instance; if you terminate it, you lose all that data because it's a temporary storage type, and while you may be able to reboot it, reboot and terminate are your only two options. Also, when an EC2 instance launches it goes through status checks, one of which is essentially a host check; if the underlying host fails, the instance store data is lost as well. Generally when you're spinning up an instance you don't have data you're too worried about yet, but it's a consideration to list out. As for use cases: EBS volumes are ideal when you want data to persist, and in most cases you'll want an EBS-backed instance; instance store is ideal for temporary storage such as an application's cache, logs or scratch data, data that should go away when the server isn't running. So there you go, that's the difference between EBS volumes and instance store volumes.
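Circling back to the two snapshot workflows above (encrypting an unencrypted volume, and moving a volume to another region), a rough CLI sketch with placeholder IDs might look like this:

# snapshot the unencrypted volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0

# copy the snapshot with encryption turned on; the copy is an encrypted snapshot
aws ec2 copy-snapshot \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --encrypted

# for a cross-region move, copy an AMI built from the snapshot into region B
aws ec2 copy-image \
  --source-region us-east-1 \
  --source-image-id ami-0123456789abcdef0 \
  --name "my-server-copy" \
  --region us-west-2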
So we're done with EBS, and now we're looking at the EBS cheat sheet, so let's jump into it. Elastic Block Store, EBS, is a virtual hard disk, and snapshots are a point-in-time copy of that disk. Volumes exist on EBS; snapshots exist on S3. Snapshots are incremental: only the changes made since the last snapshot are moved to S3. The initial snapshot of an EC2 instance will take longer to create than subsequent snapshots. When taking a snapshot of a volume, the EC2 instance should ideally be stopped first, but you can take snapshots while the instance is still running. You can create AMIs from volumes or from snapshots. EBS volumes, just to define them again, are durable, block-level storage devices that you can attach to a single EC2 instance. EBS volumes can be modified on the fly, so you can change their storage type or increase their volume size. Volumes always exist in the same AZ as the EC2 instance. Then, looking at instance store volumes: they're a temporary storage type located on disks that are physically attached to the host machine. Instance store volumes are ephemeral, meaning they cannot be stopped; if the host fails, you lose your data. EBS-backed instances can be stopped, and you will not lose any data. By default, root volumes are deleted on termination, but EBS volumes can have termination protection turned on so the volume is not deleted on termination. Snapshots of encrypted volumes are also encrypted, and volumes restored from encrypted snapshots will be encrypted too. You cannot share a snapshot if it has been encrypted; unencrypted snapshots can be shared with other AWS accounts or made public. So there you go, that's your EBS cheat sheet. Hey, this is Andrew Brown from ExamPro, and we are looking at CloudFront, which is a CDN, a content distribution network. It creates cached copies of your website at various edge locations around the world. To understand what CloudFront is, we need to understand what a content delivery network is. A CDN is a distributed network of servers which delivers web pages and content to users based on their geographical location, the origin of the web page, and the content delivery server. Over here I have a graphical representation of a CDN, specifically for CloudFront. The idea is that you have your content hosted somewhere, so here the origin is S3, and CloudFront is going to distribute a copy of your website to multiple edge locations, which are just servers around the world that are nearby to the users. So when a user from Toronto tries to access our content, it's not going to go to the S3 bucket, it's going to go to CloudFront, and CloudFront is going to route it to the nearest edge location so that this user has the lowest latency. And that's the concept behind CloudFront. So it's time to look at the core components of CloudFront, and we'll start with the origin, which is where the original files are located. Generally this is going to be an S3 bucket, because the most common use case for CloudFront is static website hosting. However, you can also specify the origin to be an EC2 instance, an Elastic Load Balancer, or Route 53. The next thing is the distribution itself. A distribution is a collection of edge locations which defines how cached content should behave. The distribution is the thing that actually says, hey, I'm going to pull from the origin, I want the cache to refresh at whatever frequency, use HTTPS, and so on; those are the settings of the distribution. And then there are the edge locations.
An edge location is just a server, one that is nearby to the actual user and stores the cached content. So those are the three components of CloudFront. Now we need to look at the distribution component in a bit more detail, because there are a lot of things we can set in here, and I'm not even showing you all of them, but let's go through it so we have an idea of the kinds of things we can do. Again, a distribution is a collection of edge locations. The first thing you're going to do is specify the origin, and again that's going to be S3, EC2, an ELB, or Route 53. When you set up your distribution, what really determines the cost, and also how widely your content replicates, is the price class. Here you can see that if you choose all edge locations, you get the best performance because your website is accessible quickly from anywhere in the world; but if you're operating just in North America and the EU, you can limit the number of edge locations it replicates to. There are two types of distributions: web, which is for websites, and RTMP, which is for streaming media. You can actually serve up streaming video under web as well, but RTMP is a very specific protocol, so it is its own thing. When you set up behaviors, there are a lot of options. We could redirect all traffic to HTTPS, or restrict specific HTTP methods, so if we don't want to allow PUTs, we can leave those out. We can restrict viewer access, which we'll look at in more detail shortly. We can set the TTL, the time to live, which says, for example, that every two minutes the content should expire and then be refreshed, depending on how stale we're willing to let our content get. There is a thing called invalidations in CloudFront, which lets you expire files manually so you don't have to wait for the TTL to run out; you can just say, I want to expire these files. This is very useful when you're pushing changes to your S3 bucket, because you'll have to go manually create that invalidation so the changes appear immediately. You can also serve back custom error pages, so if you need a custom 404, you can do that through CloudFront. And then you can set geo restrictions: if for whatever reason you aren't operating in specific countries, and you don't want those countries consuming traffic that might cost you money, you can block those countries, or you could go the other way and whitelist countries so they're the only ones allowed to view things from CloudFront. There's one interesting feature I do want to highlight on CloudFront, which is Lambda@Edge. Lambda@Edge functions are Lambda functions that override the behavior of requests and responses flowing to and from CloudFront. There are four available to us: the viewer request, the origin request, the origin response, and the viewer response. On our CloudFront distribution, under behaviors, we can associate Lambda functions, and that allows us to intercept requests and do things with them. What would you possibly use Lambda@Edge for? A very common use case: let's say you have protected content and you want to authenticate against something like Cognito, so only users within your Cognito authentication system are allowed to access that content.
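To give you a feel for what one of these functions looks like, here is a minimal viewer-request handler in Python. It's a rough sketch of the idea rather than a real Cognito integration: it just checks that a hypothetical "auth-token" cookie is present and returns a 403 if it isn't; otherwise it hands the request back to CloudFront untouched.

# Lambda@Edge viewer-request handler (rough sketch).
# Real token validation, e.g. verifying a Cognito JWT, is omitted;
# here we only check that some "auth-token" cookie is present.

def handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request.get('headers', {})

    cookies = headers.get('cookie', [])
    has_token = any('auth-token=' in c.get('value', '') for c in cookies)

    if not has_token:
        # Returning a response object short-circuits the request
        # before it ever reaches the origin.
        return {
            'status': '403',
            'statusDescription': 'Forbidden',
            'headers': {
                'content-type': [{'key': 'Content-Type', 'value': 'text/plain'}]
            },
            'body': 'Not authorized'
        }

    # Returning the request object lets CloudFront carry on as normal.
    return request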
That kind of edge authentication is actually something we do on ExamPro for the video content, so that is one method for protecting stuff, but there are a lot of creative solutions you can build with Lambda@Edge. You could use it to serve up A/B testing websites: when the viewer request comes in, you roll the dice and change what gets served back, so it could serve up A or serve up B, and that's something we also do on the ExamPro marketing website. So there are a lot of opportunities with Lambda@Edge. I don't know if it'll show up in the exam, though I'm sure eventually it will, and it's just really interesting, so I thought it was worth mentioning. Now let's talk about CloudFront protection. CloudFront might be serving up your static website, but you might have protected content, such as video content like on ExamPro, or other content that you don't want to be easily accessible. When you're setting up your CloudFront distribution, you have this option to restrict viewer access, and that means that in order to view content, you're going to have to use signed URLs or signed cookies. When you check this on, it will actually create an origin access identity, an OAI, for you. An OAI is a virtual user identity that is used to give the CloudFront distribution permission to fetch private objects, and those private objects generally live in an S3 bucket that is private. That gets set up automatically, and then you can go ahead and use signed URLs or signed cookies. The idea behind a signed URL is that it's just a URL CloudFront provides that gives you temporary access to those private cached objects. Now, you might have heard of pre-signed URLs, which is an S3 feature of a similar nature, and it's very easy to get these two mixed up because signed URLs and pre-signed URLs sound very similar; just know that pre-signed URLs are for S3 and signed URLs are for CloudFront. Then you have signed cookies, which are similar to signed URLs; the difference is that you pass a cookie along with your request to allow access to multiple files, so you don't have to generate a signed URL every single time. You set the cookie once, and as long as it's valid and passed along, you can access as many files as you want. This is extremely useful for video streaming, and we use it on ExamPro; we could not protect video streaming with signed URLs alone, because the video streams are delivered in parts, so a cookie has to be set. So those are your options for protecting CloudFront content. Now it's time to get some hands-on experience with CloudFront and create our first distribution. But before we do that, we need something to serve up to the CDN. We had an S3 section earlier where I uploaded a bunch of images from Star Trek: The Next Generation, so you can do the same, or just make a bucket and put some images in it so we have something to serve up. Once you have your bucket of images prepared, make your way to the CloudFront console: just type in CloudFront, click through, and you'll get to the same place as me, and we can go ahead and create our first distribution. We're presented with two options, web and RTMP. RTMP is for the Adobe Flash media server protocol.
Since nobody really uses Flash anymore, we can just ignore the RTMP option, and we're going to go with web. Then we get a bunch of options, but don't get overwhelmed, because it's not too tricky. The first thing we want to do is set our origin: where is this distribution going to get the files it serves up? It's going to be S3, so we click into the field, get a drop-down, and choose our S3 bucket. Then we have path, which we'll leave alone, and origin ID, which we'll also leave alone. Then we have restrict bucket access, which is a cool option. The thing is, let's say you only want people to access your bucket resources through CloudFront. Right now, if we go to the S3 console, I think we made Data public, and if we were to look at this URL, it's publicly accessible. But let's say we wanted to force all traffic through CloudFront, because we want CloudFront to track things so we get some rich analytics, and we just don't want people directly accessing this ugly URL. That's where this option comes in: restrict bucket access will create an origin access identity for us. We're going to leave it set to No; I just wanted you to know about it. Then, down in the actual behavior settings, we have the ability to redirect HTTP to HTTPS, which seems like a very sane setting. We can choose the allowed HTTP methods; we're only ever going to be GETting things, never PUTting or POSTing. Scrolling down, we can set our TTLs, and the defaults are fine. Then down here we have restrict viewer access, so if we wanted to require signed URLs or signed cookies to protect access to our content, we'd choose Yes here, but again, we just want this to be publicly available, so we're going to set it to No. Down below we have the distribution settings, and this is what really affects the price we pay: the price class. We can either distribute copies of our files to every single edge location, or just US, Canada and Europe, or US, Canada, Europe, Asia, Middle East and Africa. I want to be cost conscious here; it's not really going to cost us a lot either way, but I think if we set it to the lowest-cost option it will take less time for the distribution to replicate, and this tutorial will go a lot faster. Then we have the ability to set an alternate domain name, which matters if we're using an SSL certificate and want a custom domain name; we'd do that in another follow-along, but not in this one. And if this were a website, we would set the default root object here to index.html. That's pretty much all we need to know, so we'll go ahead and create our distribution. The distribution is going to be in progress while it distributes those files to all the edge locations, and this will take a little bit of time; it usually takes, I don't know, three to five minutes, so we'll resume the video when it's done. Well, creating that distribution took a lot longer than I was hoping for, more like 15 minutes, but I think the initial one always takes a very long time.
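Incidentally, if you'd rather not sit there hitting refresh, that same waiting, and the invalidations we'll create by hand later, can be done from the SDK. Here's a rough boto3 sketch; the distribution ID and path are made-up placeholders.

import time
import boto3

cloudfront = boto3.client('cloudfront')
distribution_id = 'E1EXAMPLE12345'  # placeholder

# Block until the distribution status flips from "In Progress" to "Deployed".
cloudfront.get_waiter('distribution_deployed').wait(Id=distribution_id)

# Later on, expiring cached files (an invalidation) is a single call.
cloudfront.create_invalidation(
    DistributionId=distribution_id,
    InvalidationBatch={
        'Paths': {'Quantity': 1, 'Items': ['/enterprise-d/*']},
        'CallerReference': str(time.time())  # any unique string
    })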
After that, whenever you update things it still takes a bit of time, but it's not 15 minutes, more like five. Anyway, our distribution is created; we have an ID, we have a domain name, and we're just going to click into the distribution and see all the options we have. We have general, origins, behaviors, error pages, restrictions, invalidations, and tags. When we were creating the distribution, we configured general, origins and behaviors all in one go, so if we wanted to override the behaviors from before, we could do it here. We're not going to change anything; I just want to show you that the options we set earlier are broken up between these tabs, and if I go to Edit, some of the information is here and some is there. Now that we have our distribution working, we have this domain name. If we had used our own SSL certificate from AWS Certificate Manager, we could have added a custom domain, but we didn't, so we just have the domain provided to us, and this is how we're going to access our cached files. So what I want you to do is copy that domain; I'm just going to paste it into a text editor. The idea is that we then want to pull one of the images from the enterprise-d folder. So if we take Data, we'll just grab his object path and assemble a new URL. We'll try Data first, and Data should work without issue. And so now we are serving this up from CloudFront. That works, but Data is set to public access, so that isn't much of a trick. For these other objects, I just want to check whether they have public access; this one does. But let's look at one that doesn't have public access, such as Keiko: she does not have public access. So the question is, will CloudFront make files publicly accessible when they don't have public access set? That's what we're going to find out. We'll assemble another URL, this time with Keiko, and see if we can access her. Oops, I copied the wrong link; let me copy that one more time. And there you go: Keiko is not available, and that's because she is not publicly accessible. So just because you create a CloudFront distribution doesn't necessarily mean these files will be accessible. If we were to go to Keiko now and set her to public, would she be accessible through CloudFront? Yes, now she is. So just keep in mind that when you create a CloudFront distribution, you're going to get these URLs, and unless you explicitly set the objects to be publicly accessible, they're not going to be publicly accessible. But yeah, that's all there is to it; we created our CloudFront distribution. Now we need to touch on one more thing with CloudFront, and that is invalidations. Up here we have this Keiko image being served up by CloudFront, but let's say we want to replace it. In order to replace images on CloudFront, it's not as simple as just replacing them in S3. Here we have Keiko, and this is the current image; let's say we want to replace it. I have another version of Keiko here, so I'm just going to upload it.
And that's going to replace the existing one. I'm just going to confirm the new one is there, hit open, make sure it's set to public, and then click the link; and now it's the new image, so here we have the new one in S3. But if we go to the CloudFront URL and refresh, it's still the old image, because in order for these new changes to propagate, you have to invalidate the old cache, and that's where invalidations come into play. To invalidate the old cache, we can go in here and create an invalidation, and we can put a wildcard to expire everything, or we could just expire Keiko. Keiko is under the /enterprise-d path, so we paste that in, and we have now created an invalidation. This is going to take about five minutes; I'm not going to wait around to show you, because I know it's going to work, but I just want you to know that if you update something, in order for the change to show up, you have to create an invalidation. So it's time to look at the CloudFront cheat sheet, let's get to it. CloudFront is a CDN, a content distribution network; it makes websites load fast by serving cached content that is nearby. CloudFront distributes cached copies at edge locations. Edge locations aren't just read-only; you can actually write to them, so you can do PUTs to them. We didn't really cover that in the core content, but it's good to know. CloudFront has a feature called TTL, time to live, which defines how long until the cache expires; if you set it to expire every hour or every day, that's how fresh, or I guess you'd say how stale, your content is going to be. When you invalidate your cache, you're forcing it to immediately expire, so just understand that an invalidation means you're refreshing your cache. Refreshing the cache does cost money because of the transfer cost to update edge locations: if a file has expired, it then has to be sent out to 10, 20, however many servers, and there's always that outbound transfer cost. The origin is the address where the original copies of your files reside, and again that can be S3, EC2, an ELB, or Route 53. Then you have the distribution, which defines a collection of edge locations and the behavior for how it should handle your cached content. We have two types of distributions: the web distribution, which is for static website content, and RTMP, which is for streaming media; again, that's a very specific protocol, and you can serve streaming video via the web distribution too. Then we have the origin access identity, which is used to access private S3 buckets. If we want to access cached content that is protected, we need to use signed URLs or signed cookies; again, don't get signed URLs confused with pre-signed URLs, which are an S3 feature, but they're pretty much the same in terms of giving you access to something. Then you have Lambda@Edge, which allows you to pass each request through a Lambda function to change the behavior of the request or the response. So there you go, that is CloudFront in a nutshell. Hey, this is Andrew Brown from ExamPro, and we are looking at Relational Database Service, RDS, which is a managed relational database service that supports multiple SQL engines and is easy to scale, back up and secure. So jumping into RDS: RDS is a relational database service.
And it is the AWS solution for relational databases. There are six relational database options currently available to us: Amazon Aurora, which we have a whole section dedicated to, MySQL, MariaDB, Postgres, which is what we use at ExamPro, Oracle, and Microsoft SQL Server. So let's look at what we can do for encryption. You can turn on encryption at rest for all RDS engines; I've noticed you might not be able to turn it on for older versions of some engines, so sometimes the option isn't available, but generally it is. When you do turn on encryption, it also encrypts the automated backups, snapshots, and read replicas related to that database. Encryption is handled by AWS Key Management Service, KMS, because it always is, and you can see it's as simple as turning encryption on and either using the default key or providing another KMS key. Now we're taking a look at RDS backups. We have two solutions available to us, starting with automated backups. What you do is choose a retention period between 1 and 35 days; generally most people are going to set this to seven, and if you set it to zero, that's actually how you turn automated backups off. So when they say automated backups are enabled by default, they just mean the retention is filled in with something like seven by default, and you can turn it down to zero. Automated backups store transaction logs throughout the day, all the data is stored in S3, and there is no additional charge for that backup storage. You define when you want backups to occur through a backup window; here you can see it's around 06:00 UTC with a half-hour duration. Storage and IO may be suspended during a backup, so you might have some issues during that period of time, which means you really want to choose that window carefully. The other option is manual snapshots, and all you have to do is drop down actions and take a snapshot, so it's a manual process. If your primary RDS instance were deleted, you would still have the snapshot, so if you want to restore from a previous snapshot, you totally can; manual snapshots don't go away when you delete the RDS instance. So let's learn how to actually restore a backup, and it's as simple as dropping down actions and choosing restore to point in time. When recovering, AWS will take the most recent daily backup and apply the transaction log data relevant to that point, which allows point-in-time recovery down to a second inside the retention period. Backup data is never restored over top of an existing instance. When you restore an automated backup or a manual snapshot, a new instance is created for the restored database, and that new RDS instance will have a new DNS endpoint, so you're going to have to do a little bit of manual work: delete your old instance and point your applications at the new endpoint.
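As a quick aside, that point-in-time restore can also be kicked off through the API. Here's a minimal boto3 sketch with made-up instance identifiers; remember it always creates a brand new instance with a new endpoint.

import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Restore to the latest restorable time; this spins up a brand new instance.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier='database-1',           # existing instance
    TargetDBInstanceIdentifier='database-1-restored',   # new instance to create
    UseLatestRestorableTime=True)

# Afterwards you would point your application at the new endpoint
# and delete the old instance yourself -- RDS won't do that for you.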
Now we're going to look at multi-AZ deployments, which ensure your database remains available if its AZ becomes unavailable. What multi-AZ does is make an exact copy of the database in another availability zone, and it automatically synchronizes changes from the primary (master) database over to the standby database. The thing about this standby is that it's a slave database: it's not receiving any real-time traffic, it's just there as a backup, ready to take the place of the master database if the AZ goes down. So here we have automatic failover protection: if the AZ does go down, failover occurs. There's an address that points to the database, and it gets repointed to the slave, and the slave is promoted to master, so now it is your master database. All right, that's multi-AZ. Next we'll take a look at read replicas, which allow you to run multiple copies of your database. These copies only allow reads, so you can't write to them, and they're intended to alleviate the workload on your primary database, also known as your master database, to improve performance. In order to use read replicas, you must have automated backups enabled, and to create a replica you just drop down actions and hit create read replica, as easy as that. Read replicas use asynchronous replication between your master and your replica. You can have up to five replicas of a database, and each read replica has its own DNS endpoint. You can have multi-AZ replicas, cross-region replicas, and even replicas of replicas. Replicas can be promoted to their own database, but this breaks replication, which makes a lot of sense. And there is no automatic failover: if the primary copy fails, you must manually update your application to point at a copy. Now it's time to compare multi-AZ and read replicas, because it's very important to know the difference between the two. For replication, multi-AZ is synchronous and read replicas are asynchronous. For what is actually active: with multi-AZ it's just the primary instance; the standby doesn't do anything until the primary becomes unavailable and the standby takes over as the primary. With read replicas, the primary and all the replicas are being utilized. For backups: with multi-AZ, automated backups are taken from the standby, whereas with read replicas there are no backups configured by default. Multi-AZ, as the name implies, spans two AZs within a single region; replicas can be within a single AZ, cross-AZ, or cross-region. For upgrades: with multi-AZ, database engine upgrades happen on the primary; with read replicas, upgrades are independent from the source instance. And lastly, failover: with multi-AZ, automatic failover happens to the standby; with read replicas there is no automatic failover, you have to manually promote one of the replicas to become a standalone database.
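And just so you can see how little is involved, here's what creating and later promoting a read replica looks like via boto3. The identifiers are placeholders, and the promotion call is the manual failover step just mentioned.

import boto3

rds = boto3.client('rds')

# Create an asynchronous read replica of an existing instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier='database-1-replica',
    SourceDBInstanceIdentifier='database-1',
    DBInstanceClass='db.t3.micro',
    AvailabilityZone='us-east-1b')  # replicas can be in another AZ; cross-region works too

# Later, if the primary fails, you promote the replica yourself --
# there is no automatic failover for read replicas.
rds.promote_read_replica(DBInstanceIdentifier='database-1-replica')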
Hey, it's Andrew Brown from ExamPro, and we are looking at Amazon RDS hands-on. We're going to create our own RDS database as well as an Aurora database, look at how to migrate from RDS to Aurora, and maybe also look at some backup options and some of the other features that RDS supplies. If you're wondering how to get to the console, go to the top here, type RDS, click it, and you'll end up in the same place as I am. So let's get to it and create our first database, by going to databases on the left-hand side and clicking Create database. Here we are in the RDS creation interface, and the first thing we're presented with is the creation method, Standard create versus Easy create; I assume Easy create would eliminate some options for us, so we're going to stick with Standard, because we want full control over what we're doing. The next thing is to choose your engine. We'll do Aurora later; for now we're going to spin up a Postgres database, which is very popular among Ruby on Rails developers, and Rails is the web framework I like to use. Then under templates, these are preset configurations that let you get started very easily. If you leave it on Production, I want to show you the cost, because it's laughably expensive: it's $632, because it's doing a bunch of stuff, running in more than one AZ, using a very large instance class, and using provisioned IOPS. For our use case, I don't think we want to spend $632; if you were an enterprise, it makes sense why you would, but if you aren't paying attention, that's very expensive. There's obviously the free tier template, which is closer to what we want, but we'll configure things ourselves so we end up on the free tier, and we'll learn the options as we go. First, the instance identifier: we'll just keep it as database-1. Then we need to set a master password; we'll set it as postgres and see if we can get away with that. Obviously, when you make your real password, you'd use a password generator, or let it auto-generate one so it's very long, but I want to be able to work with this database easily for the purposes of this follow-along. Then we have our DB instance size. You can see it's set to the Standard classes, the m classes; there are also the Burstable classes, the t classes, which are what we use when we're saving money. If you are a very small startup, you'd probably start on a t2.micro and it would do totally fine for you, so we're going to change it to t2.micro. The next thing is the storage class. Here we could choose provisioned IOPS, which gives us faster IO, but we're going to go with General Purpose because we don't need that crazy amount of IOPS, and that keeps the storage cost down. There's also this ability to do storage autoscaling, which is kind of nice: it dynamically scales your storage to your needs. I'll leave that on; I don't see why we wouldn't, unless there's an additional cost, and I don't believe there is. Then there's multi-AZ, which would set up another database for us in another availability zone as a standby. I don't think we need that, so we're going to turn it off, but you can see how easy it is to turn on. Then we need to choose our VPC. It's very important that whatever web application you're deploying, your RDS database is in the same VPC, or you're going to have trouble connecting to it, so we'll just leave it on the default. There are some additional connectivity options here, including the subnet group.
We'll leave the subnet group on the default. Then it asks whether we want this to be publicly accessible. Generally you're not going to want a public IP address, but because we want to interact with this database easily, I'm going to set it to Yes for the sake of this follow-along; it would be nice to put some data into the database and interact with it in TablePlus, and then we'll go ahead and delete it. Down below we have the VPC security group; I'm thinking it will probably create one for us by default, so we'll leave it with the default one, which is totally fine. We can choose our preferred AZ; I don't care, so we'll leave that at the default. Then we have the port number, 5432, which is the standard Postgres port. You might want to change it for the sake of security, because if you change the port number, people have to guess what it is. Then there are some additional configuration options. We have the initial database name: if you don't specify one, RDS does not create a database, so we probably want to name our database here, and I'm just going to name it database for now. You can also authenticate using IAM database authentication; if that's how you want to authenticate to your database, it's a really nice way of doing it, so you might want to check that box. Then you have backups. Backups are enabled automatically and set to seven days; I want to turn backups off, so I'm going to set it to zero days. If we had left backups on and created our RDS instance, it would take forever to create, because immediately after it starts up it creates a backup, and that just takes a long time. You can set the backup window and select when you want it to run; there is a chance of interruptions during a backup window, so you definitely want to pick a window that isn't peak usage for your users. We can enable Performance Insights, which is advanced database performance monitoring; I'm pretty sure it used to be accessible only for certain instance classes, and it offers a free tier with seven days of rolling retention, so sure, we'll turn it on. Then we have the retention period for Performance Insights, seven days, which we'll leave. It appears to be encrypted by default, which seems like a good thing. There's an account ID shown; kindly ignore mine, this is a burner account, so it's not like you'll be able to do anything with it. We have enhanced monitoring, which I don't need and which seems kind of expensive, so I'm turning it off. We can export our logs, which is a good thing to have. We can have it automatically upgrade minor versions, which is a very good thing to set, and you can also set the maintenance window for that. We can turn on deletion protection; I'm not going to, because I want to be able to delete this lickety-split. And there we go: it says $15.44, but I know this will be free tier because it's using the t2.micro, so even though it doesn't say it's free, I know it is. What this gives you is the true cost: after your free tier ran out, this is what it would cost, about $15 per month at the lowest tier, the cheapest RDS instance you can get on AWS.
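For reference, pretty much every option we just clicked through maps to a parameter on a single API call. Here's a rough boto3 sketch of a similar free-tier-style setup; the identifier, database name, password and security group ID are placeholders, and you should use a real generated password.

import boto3

rds = boto3.client('rds', region_name='us-east-1')

rds.create_db_instance(
    DBInstanceIdentifier='database-1',
    Engine='postgres',
    DBInstanceClass='db.t2.micro',        # burstable, free-tier eligible
    AllocatedStorage=20,                  # GiB, general purpose SSD
    StorageType='gp2',
    MasterUsername='postgres',
    MasterUserPassword='use-a-long-generated-password',
    DBName='babylon5',                    # initial database to create
    MultiAZ=False,                        # no standby for this demo
    PubliclyAccessible=True,              # only for the sake of the follow-along
    BackupRetentionPeriod=0,              # 0 disables automated backups
    VpcSecurityGroupIds=['sg-0123456789abcdef0'],
    Port=5432)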
Okay, so let's go ahead and create that database. Well, I failed to create my database, because it turns out 'database' is a reserved word for this engine, and that's totally fine; we'll scroll up and change the database name. I'm just going to change it to babylon5, and we'll go ahead and create it again. This time we get our database, and now we're just waiting for it to be created. We just saw the region and AZ pop into existence here, and it's currently being created; you might have to hit refresh a few times, so we'll wait a little bit until the status changes. So our database is available, and now we can try to make a connection to it, put in some SQL and run a query. Before we can make a connection, though, we need to edit our security group, because we're going to need access on port 5432 to connect to that instance. We'll edit the inbound rules, drop down and look for PostgreSQL, there's 5432, and we're going to restrict the source to my IP, because we don't want to make this open to everyone, and then hit save. Now if we want to make a connection, we should have no trouble. I'll close that tab, and we're going to collect some information. You'll need a tool such as TablePlus, which is free to download and install if you're on Mac or Windows; if you're on Linux, you could use DBeaver, which is an open source SQL tool. So I'm going to make a new connection, choose Postgres, and fill in some information. I called this Babylon 5, and that was the name of the database as well, babylon5. The username was postgres, and the very insecure password is postgres as well; again, if you're doing this for production, or really any real case, you should generate a very long password. Then we need the host, which is going to be this endpoint here, and the port is 5432 by default, so I don't have to do anything special. I'll hit test and see if we connect; it went green, so that's great. I'm going to hit save to save myself some trouble, then just double-click to make the connection. There's a bit of latency when you're connecting to RDS and running things, so if you don't see things immediately, give it a little bit of time or hit refresh. I already have a SQL script prepared, and I'll just show it to you. Honestly, the database shouldn't have been called babylon5, because I'm mixing Star Trek with Babylon 5, which is ridiculous, but this script is a bunch of starship classes from Star Trek and a bunch of starships from Star Trek, and I'm going to run it to get us some data. In TablePlus we go to import from SQL dump; I have it on my desktop, called Starfleet ship registry, so I'll hit open and import, and that runs the script and imports our data into Postgres. Now, if you're using a different database engine, like MySQL or Oracle, I can't guarantee this script will work.
But it will definitely work for Postgres, because SQL does vary between engines. It says it completed successfully and even tells us to do a refresh; there's a nice refresh button up there you can click, and we'll wait for our tables to appear. There we are: we have our ship classes, and we also have our starships. I just want to run one query to make sure queries are working; I'm sure they will. We want to pull all the starships that are of ship class Defiant, so we'll make a new query, run it, and there you go, we're getting data. So that's how you connect to your RDS database: you just have to open up that port. If you were connecting this to your web application, what you'd probably want to do in your security group is allow 5432 from the security group of the web application. Here I gave access to my IP, but you'd put in whatever your security group is; so if you had one for your EC2 instances, for the auto scaling group that holds those instances, you'd just put that in there. Now that we have our database running, I figured it would be cool to go check out Performance Insights, which I'm really excited about, because this feature used to be available only at a more expensive tier; you had to be on something like a t2.large before you could use it, but now it looks like AWS has brought it all the way down to the t2.micro, and it gives you some rich performance insights into your database. Here you can see it actually captured the query I ran, and it shows the performance over time, which is really great to see. I bet if I performed another query it would appear as well, so I'll just run the same one with a different class; let's pick one at random and run that. I'm not sure how real-time this is, because I've never had a chance to use it until now, since I never wanted to upgrade just for this. It looks like it takes a little bit of time for queries to appear; it says past five minutes, so I'm going to assume it's at a five-minute interval, and if we waited five minutes, I'm sure that query would show up. It's just nice to know you can get this kind of rich analytics, because normally you'd have to pay for Datadog or some other third-party service, and now it comes free with AWS. I also want to quickly show you that you can reserve instances with RDS, just like EC2, and start saving money. Just go to the reserved instances tab and go to purchase reserved DB instances; we'll have to wait a little bit for this to load, probably because it's fetching the most up-to-date pricing information. What we're going to do is go through this and get an idea of the difference in cost. We're going to drop down and choose Postgres as our database; I always seem to have to select that twice.
Okay, now I have Postgres selected. We're using a t2.micro, we're not doing multi-AZ, a one-year term seems fine to me, we'll first look at no upfront, and we only want one DB instance. So here it's going to tell us what we'd pay: it says $0.014 per hour. To compare, I have the on-demand pricing up here: for a t2.micro it's $0.018 per hour, so that's your savings there. If you fiddle with the payment options, you'll see it drops to about $0.007, which is considerably cheaper, and with all upfront the hourly rate shows $0.00, which at first looks like it can't be right, but it makes sense because you've already paid for it, so there's no hourly charge, and you now have an idea of what the cost is for the year. So for around $111 upfront, your cost is covered for the year. If we want to compare against the full on-demand cost, I generally multiply by 730, because that's roughly how many hours there are in a month: 730 times the hourly rate is basically a $14 charge per month, and 14 times 12 is $168 for the year. So if you're paying upfront for one year, we're saving about 50 bucks, and if we go for three years we save even more; I'm not going to do the math on that, but you get the idea. So just be aware those options are available. At the t2.micro level it's not a huge impact, but when you get to the larger instances, you definitely want those savings. Next I'm going to show you how to create a snapshot for your database; it's pretty straightforward. We go into our database, go to maintenance and backups; if we had automated backups, they'd show up here. To take a snapshot, which is the manual backup process, we can name the snapshot whatever we want, let's say first-snapshot, and press take snapshot. It goes into a creating state, and we just wait for the snapshot to complete. So our snapshot is now available, and there are a few things we can do with it; it only took about seven minutes, so I didn't wait that long. If we go up to actions, we can restore the snapshot, so that's the first thing we'll look at. You're presented with a bunch of options to essentially spin up a new RDS instance. The reason you might want to do this is that you have a database and you've outgrown the size you're currently using; so if you're on that t2.micro, which is super small, we'll just show t3.micro here as an example, and you wanted to move up to the next size, you would do that here. You could also switch to multi-AZ, change your storage type, and so on, and then you restore, which spins up a new RDS instance; then you kill your old one and move your endpoint over to the new one. So that's one thing we can do with a snapshot; the other is migrating a snapshot, which we'll look at next. But just before we get to migrate, let's take a look at copy and share. Copy allows you to move your snapshot to another region.
So if you need to get your snapshot into another region, copy is how you do it. You can also enable encryption during the copy, so if you don't have encryption enabled, this is a good opportunity to encrypt your snapshot, and when you launch an RDS instance from it, it will be encrypted, just like with an EC2 instance. Then we have the ability to share. Let's say you wanted to make this snapshot available to other people via other AWS accounts: you'd add their account IDs here, and then they'd be able to reference the snapshot and use it. You can also set it to public so that anyone could access the snapshot. We're just going to leave that alone; I just want you to be aware of those two options. Now, the one that's most interesting is migrate, because this is one way to get an Aurora database. You can just directly create an Aurora database, but if you wanted to migrate from RDS Postgres to Aurora Postgres, this is how you'd go about it. We're going to choose Aurora Postgres, obviously, because we're dealing with a Postgres database. Then we have the engine version, which is an opportunity to upgrade our version, and we're going to change our instance class. Aurora instances are a lot larger than your normal RDS instances, so we're not going to have a t2.micro here. You might want to skip this step, because it is kind of expensive and you might forget about it; you don't want to leave this thing running. Down below I'm going to choose t2.medium, because that's the least expensive option I have, and I'm going to end up deleting it anyway, so it's not a big deal. Then we can choose our VPC; we'll leave it at the default, and we can make it publicly accessible, which I'll leave on because I don't care for this demo. We'll scroll down and hit migrate. You might get a complaint here; sometimes I do, and what I normally do is just hit migrate again. Let me drop down the version; maybe it won't let us do it on version 10.6, so we'll try 10.7 and hit migrate one more time. Funny enough, as soon as you choose 10.7, you have to re-choose your instance class, so I'll go back and pick t3.medium, and now hit migrate. So now it's going to go ahead and create that cluster. You can see we had two previous failed attempts there, and those will vanish; we're just going to wait a while for this to spin up. So our migration has completed, and our RDS instance is now running on Aurora. Let's take a quick peek inside; it did take a considerable amount of time, I think I was waiting about 20 minutes for this Aurora instance to come up. Right away you'll see that we have a cluster, and then we have the writer underneath it. We have two endpoints, one for writing and one for reading, and you can also create your own custom endpoints. But I just want to show you that you can connect to this database, so going back to TablePlus, we're going to create a new connection that inherits all the settings from our previous database; I'm just grabbing the reader endpoint and pasting it in as the host name.
The user was postgres, the password was postgres, not a very secure password by the way, and the database is called babylon5. We'll call this connection Aurora Babylon 5; I don't know why I'm having such a hard time spelling that today, I think I spelled it wrong there. Anyway, let's test the connection and see if it works; I'd definitely spelled something wrong, but there we go. So it's the same credentials, just the host has changed, and I can connect to it, and through the reader endpoint we'll have read-only access to our data. So yeah, it's the same process, and there you go. Just to peek around here: you can create additional readers, so you can have more read replicas, and we also have this option for an activity stream, which is for auditing all the activity; that might be an enterprise requirement for you. But we're pretty much done with this cluster, so I'm just going to go to databases and terminate it; to delete it, we go down here, choose delete, and type in 'delete me', and that takes out the whole thing. Once this is done we'll just hit refresh; it will take a considerably long time, since it's deleting both instances, and then this endpoint will be gone. So there you are: we created an RDS Postgres database, we connected to it, and we migrated it to Aurora. But I wanted to show you a little bit more of Aurora, because I don't feel like we got to see all the options, and we're only going to see them by creating a new cluster. So we'll stick with standard create, choose Amazon Aurora, and we have the option between MySQL and Postgres compatibility; we'll select Postgres, which is going to be on version 10.7. What I really want to show you here is the database features setting. The setup we had before had one writer and multiple readers, where you're continuously paying for Aurora, and it's very expensive. But there's this option called Serverless, and Serverless is a very inexpensive option. Let's say we were building a web application that was still in development, so only a few clients were using it, or it was only being used sporadically throughout the month, not a lot of usage; then Serverless is a very cost-effective way to use Aurora, and also a way for us to scale up to using Aurora full time when we need to. So what I'm going to do is set up a serverless Aurora database here: we'll call the database database-2, keep postgres as the user, and give it that very weak password. And here's the big thing, the capacity settings. This only shows up because we chose Serverless; I'm pretty sure if we unchecked it, it wouldn't appear and we'd just choose a DB instance size instead. So we'll go back up and select Serverless, and the idea is that we choose a minimum and maximum capacity; I believe the units are called ACUs, Aurora capacity units. At the minimum we'll take the two-gigabytes-of-RAM option, and we'll pick a maximum as well. There are also some scaling options here, which we're going to ignore, and we're going to launch this in our default VPC.
And they do have a clear warning here: once you create your database, you cannot change your VPC selection. That's also the case with EC2 instances, you always have to create a new one, but I guess some people aren't aware of that. We'll leave the other settings alone. There is this option here for the Data API, which allows you to access and run SQL via an HTTPS endpoint; it's an extremely convenient way of accessing your database, and it's only available here because we're using Serverless. The same goes for the query editor; with the Data API enabled we can use both. We can set a backup retention period; I'm going to set it to one day. I wish I could set it to zero, but with Aurora you have to have some backups, you can't turn them off. It's encrypted by default, so you can see Aurora is really making sure we make all the smart decisions. We have deletion protection, which I'm going to turn off because I definitely want to be able to delete this, and we'll hit create database. So there you go, we'll wait for that to create, and then we'll see how we can use it with the query editor, maybe load it up with some data, and so on. So our serverless cluster is now available, so let's go connect to it and play around with it. Connecting to Aurora Serverless is a little bit different, because you have to be within the same VPC, so we're not going to be able to use TablePlus. To connect to it we'd have to launch an EC2 instance, but to make things really easy we're going to use Cloud9, because Cloud9 is an IDE backed by an EC2 instance, and it already has the MySQL client installed, which makes it really easy for us. So go to services, type in Cloud9, make your way over to the Cloud9 console, and create a new environment. I'm just going to call it mysql-aurora-serverless, because that's all we're going to use it for, hit next step, create a new EC2 instance, leave it at t2.micro, the smallest instance, launch it with Amazon Linux, and note that it will shut down automatically after 30 minutes, which is great for us. We'll hit next step and create the environment, and now we just wait for the IDE to spin up; it shouldn't take too long, just a few minutes. So our Cloud9 environment is ready, and down below we have a terminal. I can type mysql and see that the client is installed, but we haven't given it any connection information, so there's no way it's going to connect to anything yet. Let's go back to RDS, because we need to prepare things so Cloud9 can actually make a connection. We'll go into the database and grab the endpoint; I've prepped a little snippet with the pieces we need, so we're going to prepare that connection command.
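One quick aside before we deal with that command: if you do enable the Data API option mentioned above, you don't strictly need a client inside the VPC at all, because you can run SQL over HTTPS through the rds-data service, provided the cluster's credentials are stored in Secrets Manager. Here's a rough boto3 sketch; the cluster ARN, secret ARN and database name are placeholders.

import boto3

rds_data = boto3.client('rds-data', region_name='us-east-1')

response = rds_data.execute_statement(
    resourceArn='arn:aws:rds:us-east-1:123456789012:cluster:database-2',    # placeholder
    secretArn='arn:aws:secretsmanager:us-east-1:123456789012:secret:db-2',  # placeholder
    database='mydb',
    sql='SELECT 1')

print(response['records'])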
But before we even run that command, we're going to need to update our security group, because we have to grant access to the security group of that Cloud9 environment's EC2 instance. So on the left-hand side we'll open up security groups again and look for the Cloud9 environment's group; here it is, and I just need the group ID of that security group. Then we'll go back to our serverless cluster's security group and edit it. It looks like it's using the default one, which is kind of a mess and not something we should really be using, but I'm going to go ahead, remove those rules, drop down and choose MySQL/Aurora, and paste in that Cloud9 security group ID. That's going to allow the Cloud9 environment to connect to the Aurora Serverless cluster. Going back to our environment, now we're ready to try that command line: we copy the whole thing in and paste it, it prompts for the password, which we made password123, and there we are, we're connected to our database. From here we could create whatever we want, just as we were doing with Postgres. So there you go: that's how you create an Aurora Serverless database, and that's how you go about connecting to it. Now it's just time to do a bit of cleanup so we aren't incurring any costs. Aurora Serverless doesn't cost any money while it's sitting idle, so it's not going to cost you much, and I did terminate those other instances earlier on; you just go to the top, hit delete, choose not to create a final snapshot, and delete the cluster. The other thing to consider deleting is the Cloud9 environment; it will automatically shut down after 30 minutes, so it's not going to cost you much long term, but just to keep your account tidy you can go ahead and delete it. You can see I was attempting an earlier one with Aurora Serverless Postgres that didn't work, and I messed up the CloudFormation template, so I can't get rid of that one, but this one will delete, and that's all the cleanup for this section. Now on to the RDS cheat sheet, and this one is a two-pager. RDS is the Relational Database Service, and it's the AWS solution for relational databases. RDS instances are managed by AWS, so you cannot SSH into the VM running the database. There are six relational database options currently available: Aurora, MySQL, MariaDB, Postgres, Oracle, and Microsoft SQL Server. Multi-AZ is an option you can turn on which makes an exact copy of your database in another AZ that is only a standby; with multi-AZ, changes to the database are automatically synchronized over to the standby copy. Multi-AZ has automatic failover protection, so if one AZ goes down, failover occurs and the standby slave is promoted to master. Then we have read replicas: replicas allow you to run multiple copies of your database, these copies only allow reads and no writes, and they're intended to alleviate the workload on your primary database to improve performance. Replicas use asynchronous replication, and you must have automated backups enabled to use read replicas.
You can have up to five read replicas. You can combine read replicas with Multi-AZ, and you can have read replicas in another region, so we have cross-region read replicas. Read replicas can be promoted to their own database, but this breaks replication. You can also have read replicas of read replicas. RDS has two backup solutions: automated backups and database snapshots, also called manual snapshots, which mean the same thing. With automated backups, you choose a retention period between 1 and 35 days, there is no additional cost for backup storage, and you define your backup window. Then you have manual snapshots, where you manually create the backups. If you delete your primary, the manual snapshots will still exist and can be restored. When you restore an instance, it will create a new database; you just need to delete your old database and point traffic to the newly restored database. And you can turn on encryption at rest for RDS via KMS. So there you go, that's it. This is Andrew Brown from ExamPro, and we are looking at Aurora, which is a fully managed Postgres or MySQL compatible database, designed by default to scale and fine tuned to be really, really fast. Looking more at Aurora, it combines the speed and availability of a high end database with the simplicity and cost effectiveness of an open source database. Aurora can run on either MySQL or Postgres compatible engines, and the advantage of using Aurora over a standard RDS Postgres or MySQL engine is that it's fine tuned for performance: the MySQL flavor is five times faster than traditional MySQL, and the Postgres flavor is three times more performant than traditional Postgres. The other big benefit is cost: it's 1/10th the cost of other solutions offering similar performance and availability. Let's talk about Aurora scaling, which is one of its managed features. It starts with 10 GB of storage initially and can scale in 10 GB increments all the way up to 64 TB, so you have a lot of room for growth, and storage auto scales, it just happens automatically. For computing power, resources can scale all the way up to 32 vCPUs and up to 244 GB of memory. Let's take a look at Aurora's availability: it's extremely available because it runs six copies of your data across three availability zones, with two copies in each AZ. If you were to lose two copies of your data, it would not affect write availability, and if you were to lose three copies, it would not affect read availability, so this thing is extremely resilient. Now looking at fault tolerance and durability for Aurora: backups and failover are handled automatically, and if you want to share your data with another AWS account, snapshots can be shared. It also comes with self healing storage, so data blocks and disks are continuously scanned for errors and repaired automatically. Looking at replication for Aurora, there are two types of replicas available: Aurora replicas and MySQL read replicas. For MySQL read replicas, we can only have up to five, the performance impact on the primary is high, and they do not have automatic failover; however, they do support a user defined replication delay and can have different data or a different schema versus the primary. So you have to decide for yourself which one makes more sense for you, but for the exam you might need to know that there are two different types.
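Since read replicas came up a few times in that cheat sheet, here is a rough sketch of what creating one programmatically might look like with boto3; the instance identifiers are hypothetical, not something we created in this follow along.

```python
# Hedged sketch: creating an RDS read replica with boto3. The identifiers below
# are placeholders; in the console this is the "Create read replica" action.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",       # name for the new replica (hypothetical)
    SourceDBInstanceIdentifier="mydb-primary",   # the primary instance to replicate from
    DBInstanceClass="db.t3.micro",
)

# Later, a replica can be promoted to a standalone database (this breaks replication):
# rds.promote_read_replica(DBInstanceIdentifier="mydb-replica-1")
```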
If for whatever reason you've looked into Aurora pricing, you'd find out it's really expensive if you aren't using it for high production applications. So if you're a hobbyist like me and you still want to use Aurora, there's Aurora Serverless, which is just another mode that it runs in. The advantage here is that it only runs when you need it to, and it can scale up and down based on your application's needs. When you select serverless in the database features, you get capacity settings, so you can set the minimum and maximum capacity in Aurora capacity units, abbreviated ACUs. Here it's between 2 and 384 ACUs, and that's what it's going to charge you on, only when capacity is consumed. So when would you want to use Aurora Serverless? It's really good for low volume blog sites, maybe a chatbot, maybe an MVP that you are demoing to clients, so it's not used very often but you plan on using Aurora down the road. That's the use case for Aurora Serverless. It works with both MySQL and Postgres; for over a year Postgres wasn't there, but now it is. There are some limitations on the versions of Postgres and MySQL you can use. It used to be only MySQL 5.6, but last time I checked I saw 5.6 and 5.7 for MySQL, and for Postgres I saw a lot of versions, so there is some flexibility there for you, but there are limitations around it. There are also other things Aurora can do that Aurora Serverless can't; it's a big long list and I'm not going to list it here, but I just want you to know the utility of Aurora Serverless. We've finished the Aurora section and now we're on to the Aurora cheat sheet, where we're going to summarize everything that we've learned. So when you need a fully managed Postgres or MySQL database that needs to scale, have automatic backups, high availability, and fault tolerance, think Aurora. Aurora can run on MySQL or Postgres database engines. Aurora MySQL is five times faster than regular MySQL, and Aurora Postgres is three times faster than regular Postgres. Aurora is 1/10th the cost of its competitors with similar performance and availability options. Aurora replicates six copies of your database across three AZs. Aurora allows up to 15 Aurora replicas. An Aurora database can span multiple regions via Aurora Global Database. Aurora Serverless allows you to stop and start Aurora and scale automatically while keeping costs low, and the ideal use case for serverless is new projects or projects with infrequent database usage. So there you go, that's everything you need to know about Aurora. Next, we are looking at Amazon Redshift, which is a fully managed, petabyte scale data warehouse. What would we use a data warehouse for? We would use it to analyze massive amounts of data via complex SQL queries. Amazon Redshift is a columnar store database. To really understand what Redshift is, we need to understand what a data warehouse is, and it's good to compare it against a database; to understand that, we need some foundational knowledge about what a database transaction is. So let's define a database transaction: a transaction symbolizes a unit of work performed within a database management system. An example of a transaction would be reads and writes, it's as simple as that. And databases and data warehouses treat transactions differently.
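Stepping back to the Aurora Serverless capacity settings for a moment, here is a rough sketch of what setting that minimum and maximum ACU range might look like with boto3. This reflects the serverless v1 style scaling configuration and all names and values are placeholders, so treat it as an illustration rather than the exact follow-along steps.

```python
# Hedged sketch: creating an Aurora Serverless (v1-style) cluster with boto3 and
# setting the min/max ACU capacity discussed above. Names and values are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-serverless",   # hypothetical cluster name
    Engine="aurora-mysql",                        # engine value depends on the MySQL version you pick
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="password123",             # follow-along style password; use Secrets Manager in real life
    ScalingConfiguration={
        "MinCapacity": 2,                          # ACUs
        "MaxCapacity": 16,                         # ACUs
        "AutoPause": True,                         # pause when idle so it isn't billed
        "SecondsUntilAutoPause": 300,
    },
)
```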
So for a database, we have an online transactional processing system, an OLTP, where the transactions are going to be short. Look at the bottom here, where we say short transactions; that means small and simple queries with an emphasis on writes. Why would we want short transactions for OLTP? Well, a database is built to store current transactions and enable fast access to specific transactions for ongoing business processes. Think: I have a web app, and it needs to be very responsive for the current user for reads and writes. That could be adding an item to your shopping list, that could be signing up, that could be doing any sort of thing in a web application. And generally these are backed by a single source; a single source could be, say, Postgres running on RDS. So that's the idea behind a database. If we go over to the data warehouse side, it runs on an online analytical processing system, an OLAP, and OLAPs are all about long transactions, so long and complex SQL queries with an emphasis on reads. A data warehouse is built to store large quantities of historical data and enable fast, complex queries across all of that data. The utility here is business intelligence tools generating reports. And a data warehouse isn't a single source; it takes data from multiple sources, so DynamoDB, EMR, S3, Postgres, data is coming from all over the place into one location so that we can run complex queries, and not too frequently. So now that we know what a data warehouse is, let's talk about the reasons why you'd want to use Redshift. Redshift pricing starts at 25 cents per hour with no upfront costs or commitments. It scales up to petabytes of data for $1,000 per terabyte per year. Redshift is priced at less than 1/10th the cost of most similar services. Redshift is used for business intelligence, Redshift uses OLAP, and Redshift is a columnar store database. That's the second time we've mentioned this, and we really need to understand what a columnar store database is to understand the power behind Redshift and data warehouses. Columnar storage for database tables is an important factor in optimizing analytic query performance, because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk. Columnar storage is the reason why Redshift is so darn fast, and we'll look at that in more detail. Let's really cement our knowledge of Redshift with a use case example. Here, I want to build my own business intelligence tool, and I have a bunch of different sources: data coming from EMR, data coming from S3, data coming from DynamoDB. There's a COPY command, so I'm going to copy that data into Redshift. Once the data is in there, you might ask, how do I interact with and access Redshift data? Normally most services use the AWS SDK, but in this case we're not using the SDK, because we just need to make a generic SQL connection to Redshift. So if we were using Java, and generally you probably will be using Java if you're using Redshift, you'd be using JDBC or ODBC, which are drivers you use to connect to and query Redshift data. Now, I said columnar storage is very important to Redshift's performance, so let's conceptually understand what that means.
With a database we would normally be reading via rows, whereas in an OLAP system we're reading via columns, because if we're going to be looking at a lot of data and crunching it, it's better to look at it by columns. If we're reading columns, that allows us to store data of the same data type together, which allows for easy compression, which means we're going to be able to load data a lot quicker. And because we're always looking at massive amounts of data at the same time, we can pull in only the columns that we need, in bulk, and that's going to give us much faster performance for our use case, which is things like business intelligence tools. On to Redshift configuration: you can set it up with two different cluster types. You have single node, which is a great way to get started with Redshift if you don't have a lot of money and just want to play around; you can launch a single node of 160 GB. Or you can launch a multi-node cluster, and when you do, you always have a leader node and then compute nodes, and you can add up to 128 compute nodes, so you have a lot of computing power behind you. I just want to point out that when you do spin up Redshift multi-node, you're going to see a maximum of 32, and I just said there's 128, so what's going on? Well, it's just one of those sane defaults where AWS wants to be really sure that you want more than 32, because if someone comes in on day one and spins up 128 nodes, AWS wants to make sure they have the money to pay for it. So if you need more than 32 nodes, you just have to request a service limit increase. Besides the different cluster types, there are also different node types, and we have two labeled here: DC, dense compute, and DS, dense storage. They are what they say they are: one is optimized for computing power and one is optimized for storage, so depending on your use case, you choose the type of node you want. Notice that there are no smalls or micros, we only start at large, because if you're doing Redshift you're working with large amounts of data, so that makes total sense. Compression is one of the most important things in terms of speed. Redshift uses multiple compression techniques to achieve significant compression relative to traditional relational data stores. Similar data is stored sequentially on disk, and it does not require indexes or materialized views, which saves a lot of space compared to traditional systems. When loading data into an empty table, the data is sampled and the most appropriate compression scheme is selected automatically. This is all great background, but for the associate exam it's not so important to remember these nitty-gritty details; you just need to know what Redshift is utilized for. Redshift processing: Redshift uses massively parallel processing, which they abbreviate as MPP. It automatically distributes data and query load across all nodes, and lets you easily add new nodes to your data warehouse while still maintaining fast query performance. So yeah, it's easy to add more compute power on demand. And then we have Redshift backups: backups are enabled by default with a one day retention period, and retention periods can be modified up to 35 days. All right.
Redshift always attempts to maintain at least three copies of your data: the original copy, a replica on the compute nodes, and a backup copy on S3. Redshift can also asynchronously replicate your snapshots to S3 in a different region, so if you need to move your data region to region, you have that option as well. For Redshift billing, you're billed for compute node hours, the total number of hours run across all compute nodes in the billing period, at one unit per node per hour, and you're not charged for the leader node. So if you spin up a cluster with one compute node and one leader node, you're just paying for the compute node. For backups, backups are stored on S3 and you're billed the S3 storage fees, same as usual. And for data transfer, you're billed only for transfers within a VPC, not outside of it. Redshift security: data in transit is encrypted using SSL, data at rest can be encrypted using AES-256 encryption, and database encryption can be applied using KMS or CloudHSM; you can see it's just as easy as applying it. Redshift availability: Redshift is single AZ. This is super important to remember, because a lot of services are multi-AZ but Redshift is not one of them; maybe in the future, but maybe not. To run in multi-AZ, you would have to run multiple Redshift clusters in different AZs with the same inputs, so you're basically just running a clone, all manual labor; there's no managed, automatic way of doing multi-AZ. Snapshots can be restored to a different AZ in the event an outage occurs. And to wrap everything up, we have a really good Redshift cheat sheet here; I definitely recommend you print this out for your exam, and we're going to go through everything again. Data can be loaded from S3, EMR, DynamoDB, or multiple data sources on remote hosts. Redshift is a columnar store database which gives you SQL-like queries and is an OLAP. Redshift can handle petabytes worth of data. Redshift is for data warehousing. Redshift's most common use case is business intelligence. Redshift can only run in one AZ, so it's single AZ, not multi-AZ. Redshift can run as a single node or a multi-node cluster. A single node is 160 GB in size. A multi-node cluster is comprised of a leader node and multiple compute nodes. You are billed per hour for each compute node, excluding the leader node in multi-node; you're not billed for the leader node, just repeating that again. You can have up to 128 compute nodes; I said earlier that the default maximum is 32, but they're not going to ask you what the default is. Redshift has two kinds of node types, dense compute and dense storage, and it should be pretty obvious when you should use one or the other. Redshift attempts to keep three copies of your data: the original and a replica on the compute nodes, and a backup on S3. Similar data is stored on disk sequentially for faster reads. The database can be encrypted via KMS or CloudHSM. Backup retention defaults to one day and can be increased to a maximum of 35 days. Redshift can asynchronously back up your snapshots to another region, delivered via S3. Redshift uses massively parallel processing to distribute queries and data across all nodes. And in the case of an empty table, when importing, Redshift will sample the data to select a compression scheme. So there you go.
That's Redshift in a nutshell, and that should help you for the exams. Hey, this is Andrew Brown from ExamPro, and we are looking at DynamoDB, which is a key-value and document database, a NoSQL database which can guarantee consistent reads and writes at any scale. Let's just double check a couple of things before we jump into DynamoDB. What is a NoSQL database? Well, it is not relational and does not use SQL to query the data for results, hence the NoSQL part. NoSQL databases differ in how they store data, and DynamoDB does both key-value store and document store. A key-value store is when you simply have a key and a value and nothing more, and a document store is where you have structured data, so this whole thing here would be a single value in the database. So again, DynamoDB is a NoSQL key-value and document database for internet-scale applications. It has a lot of functionality behind it: it's fully managed, multi-region, multi-master, and durable, with built-in security, backup and restore, and in-memory caching. The big takeaway for why you'd want to use DynamoDB is that you just say what you need: I need 100 reads per second, or 100 writes per second, and you're guaranteed to get that, it's just based on what you're willing to pay. Scaling is not an issue here; it's just, do you want to pay that amount for whatever capacity you need. When we're talking about durability, DynamoDB stores its data across three availability zones, and we definitely have fast reads and writes because it's using SSD drives. So that's the level of durability. The next thing we're going to look into is consistency, because since it replicates data across different facilities, you could be reading a copy of data and run into inconsistency, so we need to talk about those caveats. I just wanted to touch on table structure here, because DynamoDB uses different terminology than a relational database. Instead of a row, they call it an item; instead of a column or a cell or whatever you want to call it, they call it an attribute; and the other most important thing is the primary key, which is made up of a partition key and a sort key. That's all you need to know for the Solutions Architect Associate; for the other certifications you have to really know this stuff, but this is all we need here. Consistency is a very important concept when we're dealing with DynamoDB, because when data is written to the database, it then has to be copied to the other facilities, and if someone is reading from the third copy when an update is occurring, there's a chance you're reading it before it has had the chance to be written. DynamoDB gives us a couple of options depending on our use case, and we'll go through the two. The first one is eventually consistent reads, which is the default behaviour, and the idea here is that while copies are being updated, it is possible for you to read and be returned an inconsistent copy. The trade-off is that reads are fast, but there's no guarantee of consistency; all copies of data will eventually become consistent, generally within a second. So you could be reading data before it's updated, but generally it will be up to date, and you have to decide whether that's the trade-off you want. That's the default option.
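Both read modes, the eventually consistent default and the strongly consistent option described next, show up as a single flag in the SDK. Here is a minimal sketch of the difference with boto3; the table name and key are made up for illustration.

```python
# Hedged sketch: toggling between the two read modes with boto3. The table name
# and key below are placeholders. ConsistentRead defaults to False (eventually
# consistent); setting it to True gives the strongly consistent read described next.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Eventually consistent read (default): fast, but may briefly return stale data.
dynamodb.get_item(
    TableName="Starships",                       # hypothetical table
    Key={"ShipId": {"S": "enterprise-d"}},
)

# Strongly consistent read: waits until all copies agree, at the cost of latency.
dynamodb.get_item(
    TableName="Starships",
    Key={"ShipId": {"S": "enterprise-d"}},
    ConsistentRead=True,
)
```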
The other option is strongly consistent reads. This is where, when all copies are being updated and you attempt a read, it will not return a result until all copies are consistent; you have a guarantee of consistency, but the trade-off is higher latency, so slower reads. The reads will be at most about a second slower, because all copies of data will be consistent within a second. So if you can wait up to a second after a write, that's the option you'd use; if you can tolerate data being briefly inconsistent because it's not that important, then eventually consistent reads are fine. Those are your two options. We're on to the DynamoDB cheat sheet. If you were studying for the Developer Associate, this would be two pages long, but since this is for the Solutions Architect Associate, it's a lot shorter. DynamoDB is a fully managed NoSQL key-value and document database. Applications that contain large amounts of data but require predictable read and write performance while scaling are a good fit for DynamoDB. DynamoDB scales with whatever read and write capacity you specify per second. DynamoDB can be set to have eventually consistent reads, which is the default option, or strongly consistent reads. With eventually consistent reads, data is returned immediately but can be inconsistent; copies of data will generally be consistent within one second. Strongly consistent reads will wait until data is consistent; data will never be inconsistent, but latency will be higher, though only up to a second, since copies of data will be consistent within one second. DynamoDB stores three copies of data on SSD drives across three availability zones. And there you go, that's all you need. Hey, this is Andrew Brown, and we are looking at AWS CloudFormation, which is a templating language that defines AWS resources to be provisioned, used for automating the creation of resources via code. This concept is called infrastructure as code, which we will cover again in just a moment. To understand CloudFormation, we need to understand infrastructure as code, because that is what CloudFormation is. So let's reiterate what infrastructure as code is: it's the process of managing and provisioning computer data centers, in our case AWS, through machine-readable definition files, in this case CloudFormation template YAML or JSON files, rather than physical hardware configuration or interactive configuration tools. The idea is to stop doing things manually: if you launch resources in AWS, you're used to configuring all those resources in the console, but through a scripting language we can automate that process. Now let's think about a use case for CloudFormation. Here I have an example: let's pretend we have our own Minecraft server business, and people sign up on our website and pay a monthly subscription, and we will run that server for them. The first thing they're going to do is tell us where they want the server to run, so they have low latency, and what size of server, since the larger the server, the more performant it will be. So they give us those two inputs.
Then we somehow send that to a Lambda function, and that Lambda function triggers the launch of a new CloudFormation stack using our CloudFormation template, which defines how to launch that server: the EC2 instance running Minecraft, a security group, what region, and what size. When it's finished creating, we can monitor, maybe using CloudWatch Events, that it's done, and using the outputs from that CloudFormation stack, send the IP address of the new Minecraft server to the user so they can log in and start using their server. That's a way of automating our infrastructure. Now let's look at what a CloudFormation template looks like; this is actually the one we're going to use later on to show you how to launch a very simple Apache server. CloudFormation comes in two variations: JSON and YAML. So why are there two different formats? Well, JSON just came first, and YAML is an indentation-based language which is just more concise. It's literally the same thing, except it's indentation-based so we don't have to do all these curly braces, and you end up with something that is roughly half the size in length. Most people prefer to write YAML files, but there are edge cases where you might want to use JSON; just be aware of the two formats, and it doesn't matter which one you use, use what works best for you. Now we're looking at the anatomy of a CloudFormation template. Templates are made up of a bunch of different sections, and all the sections are listed out here; we'll work our way from top to bottom. The first one is metadata, which allows you to provide additional information about the template; I don't have one in the example here, and I rarely ever use metadata. Then you have the description, which just describes what you want this template to do, and you can write whatever you want here; I described this template as launching an EC2 instance running Apache, hard coded to work in us-east-1. Then you have parameters, and parameters are something you'll use a lot: they define what inputs are allowed to be passed into the template at runtime. One thing we want to ask the user is what instance type to use; it defaults to micro, but they can choose between micro and nano. We can have as many parameters as we want, and we reference them throughout the template. Then you have mappings, which is like a lookup table: it maps keys to values so you can swap a value for something else. A good example would be regions: for each region, the image ID string is different, so you'd have region keys mapped to different image IDs based on the region. That's a very common use for mappings. Then you have conditions, which are like the if/else statements of your template; I don't have an example here, but that's all you need to know. Transform is difficult to explain if you don't know what macros are, but the idea is that it's like applying a mod to the template, and it will actually change what you're allowed to use in the template. So if I define a transform, the rules here could be wildly different based on what kind of extra functionality that transform adds. We see that with SAM: the Serverless Application Model is a transform.
So if you ever take a look at that, you'll have a better understanding of what I'm talking about there. Then you have resources, which is the main show of the whole template. These are the actual resources you are defining that will be provisioned, so think any kind of resource: an IAM role, an EC2 instance, Lambda, RDS, anything. And then you have outputs, which are just what you want to see as the end result. For example, when I create the server, we don't know the IP address until it spins up, so I'm saying down here, get me the public IP address, and then in the console we can see that IP address so we don't have to go dig through the EC2 console to pull it out. The other advantage of outputs is that you can pass information on to other CloudFormation templates and create a chain of effects. But the number one thing you need to remember is what makes a valid template, and there's only one thing that is required, and that is specifying at least one resource. All the other sections are optional, but Resources is mandatory and you have to have at least one resource; there's a minimal sketch of that just below. If you're looking for CloudFormation templates to learn by example, AWS Quick Starts is a great place to do it, because they have a variety of different categories with templates pre-built by AWS partners in the APN, and they usually show the architectural diagram. The idea is you can launch the template, or you don't even have to run it, you can just press a button and actually see the raw template, and that's going to help you understand how to connect all this stuff together. If you go through the AWS documentation, you're going to spend a lot of time figuring that out, whereas this might speed things up if it interests you. It's not really important for the exam, it's not going to come up as an exam question, it's just a learning resource I want you to consider. We're on to the CloudFormation cheat sheet; please consider that this is specific to the Solutions Architect Associate, whereas for the SysOps Associate this would be a much longer cheat sheet because you have to know it in more detail. I did add a few additional things we did not cover in the core content, just in case they creep up on the exam; I don't think they will, but I threw them in there just in case. So let's get through this list. When being asked to automate the provisioning of resources, think CloudFormation. When infrastructure as code is mentioned, think CloudFormation. CloudFormation can be written in either JSON or YAML. When CloudFormation encounters an error, it will roll back with ROLLBACK_IN_PROGRESS; again, this might not show up in an exam question, but I'm putting it in there. CloudFormation templates larger than half a megabyte are too large to upload directly; in that case you'd have to upload from S3. So the important thing is that you can upload templates directly or provide a link to an object in an S3 bucket. Nested stacks help you break up CloudFormation templates into smaller, reusable templates that can be composed into larger templates. At least one resource under Resources must be defined for a CloudFormation template to be valid. And then we talk about all the sections.
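Before we run back through those sections, here is a minimal sketch of what a tiny valid template and a stack launch could look like, driven from boto3. This is an illustration, not the Apache template from the follow along; the stack name, AMI ID, and properties are placeholders.

```python
# Hedged sketch: launching a CloudFormation stack with boto3. The inline YAML
# body shows a near-minimal valid template: a Resources section with one resource,
# plus an optional Parameters and Outputs section. Values are placeholders.
import boto3

TEMPLATE_BODY = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal template - a single EC2 instance
Parameters:
  InstanceType:
    Type: String
    Default: t2.micro
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
Outputs:
  PublicIp:
    Value: !GetAtt WebServer.PublicIp
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="minimal-apache-demo",   # hypothetical
    TemplateBody=TEMPLATE_BODY,
    Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t2.micro"}],
)
```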
So we have metadata, which is for extra information about your template; description, which describes what your template should do; parameters, which are how you get user inputs into the template; transform, which applies macros; outputs, which are values you can export and import into other stacks, so basically output variables; mappings, which is like a lookup table mapping keys to values; resources, where you define the resources you want to provision, and again, I repeat, at least one resource is required and all the other sections are optional; and conditions, which are like the if/else statements within your CloudFormation templates. So there you go, we're all done with CloudFormation. Hey, this is Andrew Brown from ExamPro, and we are looking at CloudWatch, which is a collection of monitoring services for logging, reacting to, and visualizing log data. I just want you to know that CloudWatch is not just one service, it's multiple services under one name: we have CloudWatch Logs, CloudWatch Metrics, CloudWatch Events, CloudWatch Alarms, and CloudWatch Dashboards. I'm not going to go through the whole list here, because we're going to cover each one and then summarize them in the cheat sheet; the most important thing to know is that CloudWatch is not a single service, it's multiple services. So it's time to look at CloudWatch Logs, which is the core service of CloudWatch; all the other CloudWatch services are built on top of this one. It is used to monitor, store, and access your log files. Here we have a log file, and log files belong within a log group, they cannot exist outside of a log group. Here I have one called production.log, which is for a Ruby on Rails application, and it contains multiple log streams over a given period of time, and this is what's inside those log files. We have the ability to filter that information and do other things with it. Log files are stored indefinitely by default and never expire, so you don't have to worry about losing this data. Most AWS services are integrated with CloudWatch Logs by default, but there are multiple cases where you have to turn on CloudWatch Logs or add IAM permissions; for example, when you're creating a Lambda function, the default permissions allow it to write to logs, but you wouldn't normally realize that you're enabling it. So anyway, that's CloudWatch Logs. Now we're going to take a look at CloudWatch Metrics, which is built on top of Logs. The idea is that a metric represents a time-ordered set of data points, or you can think of it as a variable to monitor. So within the logs we have the data, CloudWatch extracts it out as data points, and then we can graph it. In this case, I'm showing you an EC2 instance: you have some network-in coming into that instance, and you can choose that specific metric and get a visual of it. That is CloudWatch Metrics. These metrics are predefined for you, so you don't have to do anything to leverage them; you just have to have logs enabled on specific services, and the metrics become available when data arrives. So now we're going to take a look at CloudWatch Events, which builds off of Metrics and Logs to allow you to react to your data and take an action on it. We can specify an event source based on an event pattern or a schedule, and that then triggers something on a target.
A very good use case for this would be to schedule something that you'd normally put in a crontab. Maybe you need to back up a server once a day, so you trigger that on a schedule, and the target would probably be an EBS snapshot action or a Lambda function. It has a lot of different target options, and it's not even worth going through them all, but EBS snapshots and Lambda are the most common. So we looked at CloudWatch Metrics and had a bunch of predefined ones that came for free, but let's say we wanted to make our own custom metric; well, we can do that. All we have to do, using the AWS CLI (the command line interface) or the SDK (software development kit), is programmatically send data for custom metrics. Here I have a custom metric for the Enterprise-D, namespaced under Starfleet, and we're collecting dimensions such as hull integrity, shields, and thrusters. So we can send any kind of data that we want and publish it to CloudWatch Metrics. Another cool feature of custom metrics is that they open up the opportunity for high resolution metrics, which can only be done through custom metrics. So if you want data at an even more granular level, below one minute, with high resolution metrics you can go down to one second, and there are set intervals: 1 second, 5 seconds, 10 seconds, 30 seconds. Generally, if you turn it on, you probably want to go as low as possible, but the higher the frequency, the more it's going to cost you, so take that into consideration. The only way to get high resolution metrics is through a custom metric. Now we're taking a look at CloudWatch Alarms, which trigger a notification when a metric breaches a defined threshold. A very common use case is billing alarms; it's one of the first things you want to set up in your AWS account. We have some options when we set an alarm: whether it's static or anomaly detection, what the condition is (does it trigger when the metric is greater than, greater than or equal to, lower than, etc.), and what the amount is. So for my account, I'm watching for $1,000: if it's under $1,000 I don't care, and if it goes over, please send me an email about it. That's the utility there; so there you go, CloudWatch Alarms. Now it's time to look at CloudWatch Dashboards, which, as the name implies, allow you to create dashboards, and this is based off of CloudWatch Metrics. Here we have a dashboard in front of us, and we add widgets; we have all sorts of kinds here, graphs, bar charts, etc. You drag them on, pick your options, and then just make sure you hit that save button. It's really not that complicated; when you need a visualization of your data, think about using CloudWatch Dashboards. I also wanted to quickly touch on availability of data, and how often CloudWatch updates the metrics that are available to you, because it varies by service, and the one we really need to know is EC2, because it does creep into a few exam questions. By default, EC2 is monitored at a five-minute interval, and if you want to get down to one minute, you have to turn on detailed monitoring, which costs money.
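Circling back to the custom metric example for a second: since the transcript mentions pushing data with the CLI or SDK, here is a minimal sketch of what that could look like with boto3. The namespace, dimensions, and values are made up for illustration.

```python
# Hedged sketch: publishing a custom, high-resolution metric with boto3, along
# the lines of the Starfleet example above. All names and values are made up.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_data(
    Namespace="Starfleet",
    MetricData=[
        {
            "MetricName": "HullIntegrity",
            "Dimensions": [{"Name": "Ship", "Value": "EnterpriseD"}],
            "Value": 97.5,
            "Unit": "Percent",
            "StorageResolution": 1,  # 1 = high resolution (per-second); 60 = standard resolution
        }
    ],
)
```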
For all other services, it's going to be a one-minute, three-minute, or five-minute interval. There might be a few other services that have detailed monitoring, I feel like ElastiCache might have it, but generally all you have to worry about is EC2. The majority of services default to one minute, and that's why I really have to emphasize this: EC2 does not default to one minute, it's five minutes, and to get one minute you have to turn on detailed monitoring. I also want to make you aware that CloudWatch doesn't track everything you'd normally think it would track for an EC2 instance. Specifically, if you wanted to know your memory utilization or how much disk space is left on your server, it does not track that by default, because those are host-level, more detailed metrics. In order to gather that information, you need to install the CloudWatch agent, which is a script that can be installed via the Systems Manager run command; it probably comes pre-installed on Amazon Linux 1 and Amazon Linux 2. So if you need those more detailed metrics, such as memory and disk space, you're going to have to install that. These ones you already have by default: disk usage, network usage, and CPU usage. The disk metrics here are limited; I can't remember exactly what they are off the top of my head, but it's not disk space, like do I have 40% of disk space left. So just be aware of those two things. Hey, this is Andrew Brown from ExamPro, and we are going to do a very short follow along here for CloudWatch. If you're taking the Solutions Architect Associate, you don't need to know a considerable amount of detail about CloudWatch, that's more for the SysOps Associate, but we do need to generally know what's going on here. Maybe the first thing we should learn how to do is create an alarm. Alarms trigger when metrics breach a certain threshold. I have a bunch here already because I was creating some DynamoDB tables, and whenever you create DynamoDB tables you always get a bunch of alarms. I'm going to go ahead and create a new alarm; the most common alarm to create is a billing alarm, so maybe we can go ahead and do that. Under billing, we're going to choose total estimated charge, choose USD, and select the metric. We can choose the period of time we want, we have static and anomaly detection, and we have to determine when it should get triggered. So we would say: when we go over $1,000 USD, we should get alerted. It's not letting me fill it in there, there we go. When we hit that threshold, we should get an email about it. So we're going to go ahead and hit next, and for this alarm to work, we need to have an SNS topic, so we're going to create a new topic here and I'm just going to use andrew@exampro.co. What we'll do here is hit the create topic button, and now that it's created that topic, we'll hit next and name this 'billing alarm', hit next again, and create the alarm. So now we have an alarm: any time billing goes over $1,000, it's going to send us an email. It's very unlikely this is going to happen in this account, because I'm not spending that much here.
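As an aside, if you wanted to script that same billing alarm rather than clicking through the console, a rough boto3 sketch might look like this. The SNS topic ARN is a placeholder, and note that billing metrics only live in us-east-1.

```python
# Hedged sketch: the same kind of billing alarm created above, but via boto3.
# Billing metrics are only published in us-east-1; the SNS topic ARN is a placeholder.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="billing-alarm",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                       # 6 hours; billing data only arrives a few times a day
    EvaluationPeriods=1,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical topic
)
```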
It does have to wait for some data to come in, so it will say insufficient data to begin with, and it's still showing pending confirmation; it's waiting for us to confirm that SNS topic. So I'm just going to hop over to my email and confirm that quickly. In a very short amount of time I've received a subscription email, so I'll hit confirm subscription, it shows that I've confirmed it, and I'll close that and give this page a refresh. The pending confirmation is gone, so that means this billing alarm is in an OK status and it is able to send me emails when that does occur. There are a few different ways to set alarms; sometimes you can do it directly from EC2, so I just want to show you that. We're going to go over to EC2; I don't think we have anything running here right now, so I'm just going to launch a new instance to show this to you. We'll go with Amazon Linux 2 and go to the configuration step. There is this option here for detailed monitoring, and it's going to provide monitoring every minute as opposed to every five minutes by default. This does cost additional money, but I'm going to turn it on for the sake of this follow along. I'll give it a key pair, and then go to view instances, and I just want to show you that under the monitoring tab, we do get a bunch of metrics about this EC2 instance. And if you want to create an alarm, it's very convenient, you can actually do it from here. So if you have an EC2 instance and you want an alarm for it, it could be for a variety of things: you can take an action like sending a notification, and you could also stop the instance. So we could say, when CPU utilization goes over 50%, shut down the server. That's one very easy way to create alarms. It's good to know that a lot of services are like that; I bet if we went over to DynamoDB it's the same thing. So if we go over to DynamoDB, go to tables (we were using this for another tutorial), and create an alarm, it's the same story. You'll want to take a peek at different services, because they do give you some basic configuration options that make it very easy to set up alarms. You can of course always do it through the CloudWatch console, but it's often easier to do it from the services themselves. So I think what we'll do next is look at events. We're going to take a look now at CloudWatch Events, which has been renamed to Amazon EventBridge. These are exactly the same service; AWS added some additional functionality, such as the ability to create additional event buses and to use partner event sources, and so they gave it a rebranding. The way it works is, and we'll do this through CloudWatch Events first and then also through the new interface, you generally create rules within CloudWatch, and you have the ability to trigger from an event pattern or from a schedule. We're going to do it from a schedule, because that's the easiest one to show here.
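Just so you can see roughly what the console is doing for us, here is a hedged boto3 sketch of a scheduled rule. The target here is a hypothetical Lambda function ARN, whereas the console walkthrough below uses the built-in create snapshot target instead.

```python
# Hedged sketch: a scheduled CloudWatch Events / EventBridge rule via boto3.
# The console follow-along uses the built-in "Create snapshot" target; here the
# target is a hypothetical Lambda function that would do the same job.
import boto3

events = boto3.client("events", region_name="us-east-1")

events.put_rule(
    Name="daily-ebs-backup",
    ScheduleExpression="rate(1 day)",   # could also be a cron expression
    State="ENABLED",
)

events.put_targets(
    Rule="daily-ebs-backup",
    Targets=[
        {
            "Id": "backup-lambda",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:snapshot-volume",  # hypothetical
        }
    ],
)
```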
So based on a schedule, we could say: once a day, I want to create a backup of an EBS volume. We have a bunch of target options here, and this is a very common one; I actually have a Minecraft server that I run and I like to back up the volume at least once a day, so this is the way I would go about doing that. Here I just have to supply, oops, we actually want the create snapshot target here, and then I just have to supply the volume. I have an EC2 instance running from earlier in this follow along, so I'm just going to provide its volume ID there. Once that's in, I can hit configure details and call it 'EBS snapshot', or just 'volume snapshot', doesn't matter, I'm being picky here, and create it. So now we have that rule, and once a day it's going to create that snapshot for us. Next we're going to do the same thing in EventBridge, and you're going to see it's the same process, just with the new UX or UI design. Whether it's an improvement over the old one is questionable, because that's the very thing people always argue about with AWS: the changes to the interface. We see the same event pattern and schedule options; I'm going to go to schedule, choose one day, and you can see now we choose our event bus. Whenever we're creating rules here, it's always using the default event bus, but we can definitely create other event buses and use partner events. We'll drop this down and choose create a snapshot; I don't know if the volume ID is still in my clipboard, it is, there we go, and we'll create that. So you can see both of them here, the EBS one and the snapshot one, and if we go back to our rules, we should be able to see both; yes, they're both there, so you can see they're the exact same service. Just to wrap up talking about Amazon EventBridge, I want to show you that you can create multiple event buses. If we go and create an event bus, you can actually create an event bus that is shared from another AWS account, so you could react within your system to an event from another account, which is kind of cool. And then you also have your partner event sources, so you could react to data from Datadog or something that has to do with logging. So there are some ways to react cross-account; that's just the point I wanted to make there. We're going to check out one more thing here, which is CloudWatch Dashboards. Dashboards allow you to put a bunch of metrics on a dashboard. I'm just going to make one here, 'my EC2 dashboard', because we do have an EC2 instance running, and what we can do is start adding things. I could add a line graph, and since we have a running EC2 instance, we should be able to get some information there. Let's say per-instance metrics here, and we'll see if we can find anything that's running; there should be something here. Actually, you know what, I think it's just this one, we didn't name that instance, and that's why I'm not seeing the name there. I'm just going to create that, and not a lot of activity is happening on that instance, so that's why we're not seeing much data there, but if there was, we would start to see some spikes.
All you need to know is that you can create dashboards, you can create widgets based on metric information, and just be sure to hit that save dashboard button; it's very non-intuitive, this interface, so maybe it will get a refresh one day, but there you go, that's dashboards. That wraps up the CloudWatch section, so now we just want to tear down whatever we created. Let's go to our dashboard; I believe we can delete it via delete dashboard. You get a few dashboards for free, but beyond that they do cost money in the long term. Then we're going to tear down our alarm, because alarms do cost money if you have a lot of them, so let's get rid of the ones we aren't using. Then we'll go to our rules and delete those rules; I'm not just disabling them, I actually want to delete them. And I believe I started an EC2 instance, so we're going to go over to our instances and terminate it. So there we go, that's a full cleanup. Of course, if you're doing the SysOps exam you have to know CloudWatch in greater detail, but this is generally what you need to know for the Solutions Architect Associate, and likely the Developer Associate too. You're on to the CloudWatch cheat sheet, so let's jump into it. CloudWatch is a collection of monitoring services: we have Dashboards, Events, Alarms, Logs, and Metrics. Starting with Logs first: it is the core service of CloudWatch, and it logs data from AWS services. A very common thing you might log would be CPU utilization. Then we go on to Metrics: Metrics builds off of Logs and represents a time-ordered set of data points, a variable to monitor. So going back to CPU utilization, visualizing it as a line graph is what Metrics does. Then we go on to CloudWatch Events, which triggers an event based on a condition; a very common use case is needing to take a snapshot of your server every hour. I like to think of Events as a serverless crontab, because that's how I use it. Then you have Alarms, which trigger notifications when a metric breaches a defined threshold; a very common use case is a billing alarm, so if we go over $1,000 I want an email about it. Then you have CloudWatch Dashboards; as the name implies, it's a dashboard, and it creates visualizations based on metrics. There are a couple of exceptions when we're dealing with EC2 and CloudWatch: EC2 monitors at an interval of five minutes, and if you want a one-minute interval you have to turn on detailed monitoring. Most services do monitor at one-minute intervals, and if they don't, it's going to be a one, three, or five-minute interval. Logs must belong to a log group. The CloudWatch agent needs to be installed on an EC2 host if you want to get memory usage or disk size, because that doesn't come by default. You can stream custom log files to CloudWatch Logs, so if you have a Ruby on Rails app with a production log and you want to get that into CloudWatch Logs, you can do that. And the last thing is custom metrics: custom metrics allow you to track high resolution metrics with sub-minute intervals, down to one second. If you need something more granular, you can only do that through custom metrics. So there you go.
That's CloudWatch. Hey, this is Andrew Brown from ExamPro, and we are looking at CloudTrail, which is used for logging API calls within AWS. The way I like to think about this service: it's for when you need to know who to blame. As I said, CloudTrail is used to monitor API calls and actions made on an AWS account, and whenever you see the keywords governance, compliance, operational auditing, or risk auditing, it's a good indicator they're probably talking about AWS CloudTrail. I have a record over here to give you an example of the kinds of things CloudTrail tracks to help you know who to blame when something has gone wrong. We have the where, when, who, and what. The where: we have the account ID, which account did it happen in, and the IP address of the person who made the request. The when: the time it actually happened. The who: we have the user agent, which can tell you the operating system, the language, and the method of making the API call, and the user itself, so here we can see Worf made this call. And the what: which region and which service, so the service here is IAM, and the action is creating a user. So there you go, that is CloudTrail in a nutshell. Within your AWS account, you actually already have CloudTrail logging things by default, and it will collect the last 90 days under the event history. We get a nice little interface here where we can filter those events. Now, if you need logging beyond 90 days, and that is a very common use case, you have to create your own custom trail. The only downside when you create a custom trail is that it doesn't have a GUI like event history, so there is some manual labor involved to visualize that information, and a very common method is to use Amazon Athena. So if you see CloudTrail and Amazon Athena being mentioned in unison, there's a reason for that. There are a bunch of trail options I want to highlight, and you need to know these, they're very important for CloudTrail. The first is that a trail can be set to log in all regions, so we have the ability to say yes here and then no region is missed. If you are using AWS Organizations, you'll have multiple accounts and you want coverage across all of them, so in a single trail you can check a box to apply it to your entire organization. You can encrypt your CloudTrail logs, which you definitely want to do, using server-side encryption via Key Management Service, abbreviated SSE-KMS. And you want to enable log file validation, because this is going to tell you whether someone has tampered with your logs; it's not going to prevent someone from tampering with your logs, but it will at least let you know how much you can trust them. I do want to emphasize that CloudTrail can deliver its events to CloudWatch: there's an option after you create the trail where you can configure it, and then it will send your events to CloudWatch Logs. I know CloudTrail and CloudWatch are confusing because they seem to have overlapping responsibilities, and there are a lot of AWS services like that, but just know that you can send CloudTrail events to CloudWatch Logs, not the other way around.
There are different types of events in CloudTrail: we have management events and data events. Generally you're always looking at management events, because that's what's turned on by default, and there are a lot of those events, so I can't list them all out for you here, but I can give you a general idea of what they are. Here are four categories: configuring security (for example AttachRolePolicy), registering devices, configuring rules for routing data, and setting up logging. Around 90% of events in CloudTrail are management events. Then you have data events, and data events currently only apply to two services, so when you create your trail you see two tabs; I assume as they add other services that can leverage data events, we'll see more tabs, but really it's just S3 and Lambda. They're turned off by default for good reason, because these events are high volume, they occur very frequently. Data events track the more detailed S3 operations such as GetObject, DeleteObject, and PutObject, and for Lambda it would be every time a function gets invoked, so those are much higher volume, and that's why they're turned off by default. So now it's time to take a quick tour of CloudTrail and create our very own trail, which is something you definitely want to do in your account. But before we jump into doing that, let's go over to event history and see what we have. AWS by default tracks events from the last 90 days, and this is a great safeguard if you have yet to create your own trail. We have some event history here, and if we expand any of them, doesn't matter which one, and click view event, we get to see what the raw data looks like for a specific event. We have this nice interface where we can search via time ranges and some additional fields, but if you need data beyond 90 days, you're going to have to create a trail, and to analyze it, since we won't have this interface, we'll have to use Athena to make sense of the CloudTrail information. Now that we've seen event history, let's move on to creating our own trail. Let's go ahead and create our first trail; I'm going to name mine 'exam-pro-trail', and I want you to notice that you can apply a trail to all regions, and you definitely want to do that. Then we have management events, where we can decide whether we want read-only or write-only events; we're going to want all of them. Then you have data events. Now, these can get expensive, because the S3 and Lambda events being tracked are high frequency; you can imagine how often someone might access something from an S3 bucket, such as a get or a put, so they are definitely not included by default, and you have to check them on here to include them. If you do want to track data events, you would either say all S3 buckets or specify them, and Lambdas are also high frequency because we'd be tracking invocations, which could be in the thousands or millions, so these are sanely not included by default. Down below we need to choose our storage location; we're going to let it create a new S3 bucket for us, which seems like a good choice, and we're going to drop down advanced here because it has some really good tidbits.
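Those advanced tidbits map pretty directly onto the API, so for reference, here is a hedged boto3 sketch of creating a trail with the same options (all regions, KMS encryption, log file validation). The bucket name and key alias are placeholders, and in practice the bucket also needs a policy that allows CloudTrail to write to it.

```python
# Hedged sketch: creating a trail with the options discussed here via boto3.
# The bucket and KMS key names are placeholders.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

cloudtrail.create_trail(
    Name="exam-pro-trail",
    S3BucketName="exam-pro-trails",        # hypothetical bucket with a CloudTrail bucket policy
    IsMultiRegionTrail=True,               # apply to all regions
    EnableLogFileValidation=True,          # detect tampering
    KmsKeyId="alias/exam-pro-trails",      # hypothetical CMK for SSE-KMS
)

# Trails don't record anything until logging is started.
cloudtrail.start_logging(Name="exam-pro-trail")
```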
So we can turn on encryption, which is definitely something we want to do, with KMS. I apparently have a key already here, so I'm just going to select that; I don't know if that's a default key, or whether you even get a default key with CloudTrail, but I'll select that one there. Then we have Enable log file validation, and we definitely want this set to Yes, because it's going to check whether someone has ever tampered with our logs and whether we should still trust them. Then we could send a notification about log file delivery; that gets noisy, so I don't want to do that. Then we should be able to create our trail as soon as we name our bucket, so we'll go ahead and name it, say, exampro-trails, assuming I don't have one in another account. Okay, it doesn't like that one, that's fine. I'm also just going to create a new KMS key here; KMS keys do cost about a dollar per month, so if you want to skip this step you totally can. I'll create one for this called exampro-trails. Great, so now it has created that trail. We'll leave it for now, and maybe we'll take a peek in that S3 bucket once we have some data. I do want to point out one more thing: I couldn't set the trail to track across the whole organization, I didn't see that option there, probably because I'm in a sub-account. If you have an AWS Organization and you're in the root (management) account, I bet you can turn it on to work across all accounts. We didn't have that option here, but just be aware that it exists, and you can set a trail to cover all accounts in an organization. So I just switched into my root organization account, because I definitely wanted to show you that this option does exist: when you create a trail, we have apply to all regions, but we can also apply it to the whole organization, which means all the accounts within that organization. So just be aware of that. Now that our trail is created, I want you to click into it and notice that there's an additional feature that wasn't available when we were creating the trail, and that is the ability to send our CloudTrail events to CloudWatch Logs. If you want to do that, you can configure it, create an IAM role, and send the events to a CloudWatch Logs log group. Additional fees apply here, and it's not that important to go through the motions of it, but just be aware it's a capability you have with a trail. So I said earlier that a trail will collect beyond 90 days, but you're not going to have the nice interface you have in Event History, so how would you go about analyzing those logs? I said you could use Amazon Athena, and luckily they have a link right here that saves you a bunch of setup. If you click it and choose the S3 bucket, which is this one here, it's going to create that table for you in Athena. We used to have to do this manually and it was quite the pain, so it's very nice that they've added this one-click option; I can just hit Create table, and that's going to create the table in Athena for us, and then we can jump over to Athena. It should be created here; I'll just give it a little refresh. I guess we'll just click Get Started; I'm not sure why it's not showing up yet.
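While that loads, here's roughly what querying the new CloudTrail table looks like if you drive Athena from the SDK instead of the console. The database, table and output bucket names below are placeholders for whatever the one-click setup generated in your account:

```python
import time
import boto3

athena = boto3.client("athena")

# Ask Athena which unique event names appear in the CloudTrail logs.
# The table name is whatever the CloudTrail "create table" link generated.
query = """
    SELECT DISTINCT eventname
    FROM cloudtrail_logs_exampro_trails
    LIMIT 50
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://exampro-athena-results/"},
)

# Athena runs queries asynchronously; poll until it finishes, then fetch rows.
query_id = execution["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
for row in rows[1:]:  # first row is the header
    print(row["Data"][0]["VarCharValue"])
```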
We're getting the splash screen, but we'll go in and our table is there. We get this little tutorial, which I don't want to go through, but the table has now been created and we have a bunch of columns here. There is a way of running a sample query: if you go here to Preview table, it will generate a query for us and run it, and we start getting data back. The cool advantage here is that if we want to query our data just like using SQL, we can do so in Athena. I'm not doing this on a day-to-day basis, so I can't say I'm the best at it, but if we give this a try and query something, maybe based on event type, we can use SELECT DISTINCT on the event type column. It doesn't like that little bit, so I'll take that out. Great, so there we go: that's a way to see all the unique event types, and if I take the limit off, the query just takes longer. Anyway, the point is that you have this way of using SQL to query your logs. Obviously we don't have much in our logs yet, but it's important for you to know that you can do this, and that it's one button press to create the table and then start querying your logs. So we're on to the CloudTrail cheat sheet, and let's get to it. CloudTrail logs API calls made in your AWS account. When you see keywords such as governance, compliance, operational auditing and risk auditing, there's a high chance they're talking about CloudTrail. When you need to know who to blame, think CloudTrail. CloudTrail by default logs event data for the past 90 days via Event History; to track beyond 90 days, you need to create a trail. To ensure logs have not been tampered with, you need to turn on the log file validation option. CloudTrail logs can be encrypted using KMS. CloudTrail can be set to log across all accounts in an organization and all regions in an account. CloudTrail logs can be streamed to CloudWatch Logs. Trails are output to S3 buckets that you specify. CloudTrail events come in two kinds: management events and data events. Management events log management operations, so think AttachRolePolicy. Data events log data operations for resources, and there are really only two candidates here, S3 and Lambda, so think GetObject, DeleteObject, PutObject. Data events are disabled by default when creating a trail. Trail logs stored in S3 can be analyzed using Athena. So that is your cheat sheet. Hey, this is Andrew Brown from ExamPro, and we are looking at AWS Lambda, which lets you run code without provisioning or managing servers; servers are automatically started and stopped when needed. You can think of Lambdas as serverless functions, because that's what they're called, and it's pay per invocation. So as we just said, Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda executes your code only when needed and scales automatically, from a few requests to a thousand Lambda functions running concurrently, in seconds. You pay only for the compute time you consume; there is no charge when your code is not running. So the main highlights: Lambda is cheap, Lambda is serverless, and Lambda scales automatically.
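To make that concrete, a Lambda function is really just a handler you upload plus something that invokes it. Here's a minimal Python sketch: the event shape depends entirely on the trigger, and the function name used for the SDK call is a placeholder:

```python
import json
import boto3

# lambda_function.py -- the code you'd upload; Lambda calls lambda_handler on every invocation.
def lambda_handler(event, context):
    name = event.get("name", "world")   # event contents depend on whatever triggered the function
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Elsewhere: invoking the deployed function yourself through the SDK.
if __name__ == "__main__":
    client = boto3.client("lambda")
    response = client.invoke(
        FunctionName="hello-world",                 # hypothetical function name
        Payload=json.dumps({"name": "Worf"}),
    )
    print(json.loads(response["Payload"].read()))   # the handler's return value
```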
Now, in order to use Lambda, you just upload your code, and you have seven runtimes officially supported by AWS: Ruby, Python, Java, Go, PowerShell, Node.js and C#. If you want to use something outside of this list, you can create your own custom runtime; it's not officially supported, so AWS Support isn't going to help you with it, but you can definitely run it on Lambda. When we're thinking about how to use AWS Lambda, there's a variety of use cases, because Lambda is like glue: it helps you connect different services together. I have two use cases in front of you here. The first is processing thumbnails. Imagine you run a web service and users are allowed to upload their profile photo. What you'd normally do is store that in an S3 bucket. You can set event triggers on S3 buckets so that an upload triggers a Lambda, the image gets pulled from the bucket, and using something like Sharp or ImageMagick you crop that profile photo into a thumbnail and store it back in the bucket. Another use case would be a contact email form. When you fill in the contact form, it sends the form data to an API Gateway endpoint, which triggers a Lambda function, and that Lambda evaluates whether the form data is valid or not. If it's not valid, it responds saying you need to make corrections; if it's good, it creates a record in our DynamoDB table (records there are called items), and it also sends an email notification to the company via SNS so we know you've contacted us. So, to invoke a Lambda, to make it execute, we can either use the AWS SDK or trigger it from another AWS service, and we have a big long list here, which is definitely not the full list. You can see API Gateway, which we just showed with the contact form; IoT devices can trigger a Lambda; your Echo Dot using an Alexa skill can trigger a Lambda; then ALBs, CloudFront, CloudWatch, DynamoDB, Kinesis, S3, SNS, SQS, and I can even think of others outside this list, like GuardDuty and Config. There's a bunch, so you can see that Lambda integrates with a lot of AWS. It can also integrate with partnered third parties, and that's powered through Amazon EventBridge, which is very much like CloudWatch Events but with some additional functionality; you can see we can integrate with Datadog, OneLogin, PagerDuty and so on. That just gives you a sense of the possible triggers available. I want to touch on Lambda pricing here quickly. The first million requests, meaning the first million function executions per month, are free. So if you're a startup and you're not doing over a million requests per month, and a lot aren't, you're basically not paying anything for compute. After that, it's 20 cents per additional million requests, so very, very inexpensive. The other cost, besides how often functions are requested, is how long they run for and with how much memory, measured in GB-seconds, and the first 400,000 GB-seconds per month are free.
Thereafter it's a very, very small amount per GB-second, and that value changes based on the amount of memory you allocate; I believe the figure shown is for the lowest setting, 128 megabytes. Most of the time you won't see yourself going beyond 512 megabytes, which is already quite high; I usually find myself between 128 and 256. Just to do a calculation to give you an idea of total pricing: say we had a Lambda function at 128 megabytes, the lowest setting, with 30 million executions per month and a duration of 200 milliseconds per execution. For those Lambda functions we're only paying about $5.83, so you can see that Lambda is extremely inexpensive; I'll sketch that math out below. Now I just want to give you a quick tour of the actual AWS Lambda interface so you can see how everything fits together. You choose your runtime, so here we're using Ruby, and then you upload your code. You have different ways to do that: you can edit it inline, since they have Cloud9 integrated here so you can just start writing code, or if it's too large you have to upload a zip or provide it via S3. There are some size limits, and the larger it gets, the more likely you'll end up importing the Lambda from S3. Then you have your triggers, and there are a lot of different triggers, but this Lambda function is using DynamoDB: when a record is inserted into DynamoDB, it goes to DynamoDB Streams, and that triggers this Lambda function. On the right-hand side you have the outputs; it's the Lambda function itself that has to call those services, but you create an IAM role, and whatever the function has permission to use is shown here on the right. So here you can see this Lambda is allowed to interact with CloudWatch Logs, DynamoDB and Kinesis Firehose. Next we're looking at the default limits for AWS Lambda. It's not all of them, but it's the ones I think are most important for you to know. By default you can only have 1,000 Lambdas running concurrently; if you want more, you have to go ask AWS Support for a limit increase. It's possible there could be an exam question where you want to run X amount of Lambdas and they're not running, and that could be because of this limit. You are able to store temporary files on a Lambda while it's running, with a limit of 512 megabytes in /tmp. When you create a Lambda, by default it runs in "no VPC", and some services, such as RDS, can only be reached if you're in the same VPC, so in some use cases you might actually have to change the VPC. When you do place a Lambda in a VPC, it loses its default internet access; that's not to say there's no way around it, but it is a consideration. You can set the timeout to a maximum of 15 minutes. If you have to go beyond 15 minutes, this is where you probably want to use Fargate, which is similar to Lambda but involves a lot more setup work, and you're charged per second as opposed to per 100 milliseconds. So just be aware: if you need anything beyond 15 minutes, you're going to want Fargate. And the last thing is memory, which we'll cover next.
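As promised, here's a quick sketch of how that $5.83 figure works out, assuming the published duration rate of roughly $0.0000166667 per GB-second (the slide doesn't state the exact rate, and request charges are billed separately on top of this):

```python
# Rough Lambda duration-cost estimate for the example on the slide.
# Assumptions: 128 MB memory, 200 ms average duration, 30M invocations/month,
# and a rate of ~$0.0000166667 per GB-second.
memory_gb = 128 / 1024            # 0.125 GB
duration_s = 0.2                  # 200 ms
invocations = 30_000_000

gb_seconds = invocations * duration_s * memory_gb      # 750,000 GB-s used
billable = max(gb_seconds - 400_000, 0)                # free tier: 400,000 GB-s per month
duration_cost = billable * 0.0000166667

print(f"{gb_seconds:,.0f} GB-s used, {billable:,.0f} billable -> ${duration_cost:.2f}")
# -> 750,000 GB-s used, 350,000 billable -> $5.83
# Request charges (about $0.20 per additional million requests) would be billed on top.
```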
So you can set memory: it starts at 128 megabytes and goes all the way up to 3,008 megabytes, and the more memory you use, the more expensive it's going to be, paired with how long the duration is. Memory goes up in 64 megabyte increments. So there you go, those are the most important limits to know. One of the most important concepts for AWS Lambda is cold starts, because this is one of the negative trade-offs of using serverless functions. AWS has servers pre-configured for your runtime environment just lying around in a turned-off state. When a Lambda is invoked, a server needs to be turned on and your code needs to be copied over, and during that time there's going to be a delay before the function initially runs. That's what we call a cold start. Over here you can see I have a Lambda function: it gets triggered, there is no server for it to run on, so a server has to start, the code has to be copied, and there's a period of delay. Now, if you invoke that function again soon after, the code is already there and the server is already running, so you won't have that delay; the cold start isn't there, and that's when your server is considered warm. So, serverless functions are cheap, but everything comes with a trade-off, and with serverless functions, cold starts can cause delays in the user experience. This was actually a factor for us at ExamPro: we didn't use a serverless architecture, because we wanted everything to be extremely fast, and with other providers we weren't happy with the delay in experience. Now, there are ways around cold starts, such as pre-warming. You can invoke the function ahead of time so that it starts prematurely and stays warm for when someone actually uses it, or you can give a single Lambda more responsibility so that more things pass through it and it stays warm more consistently. Cold starts are becoming less and less of an issue going forward, because cloud providers are trying to find ways to reduce those times or mitigate them, but they are still a problem, so just be very aware of this one caveat to serverless. We're on to the Lambda cheat sheet. Lambdas are serverless functions: you upload your code and it runs without you managing or provisioning any servers. Lambda is serverless; you don't need to worry about the underlying architecture. Lambda is a good fit for short-running tasks where you don't need to customize the OS environment. If you need long-running tasks greater than 15 minutes, or a custom OS environment, then consider using Fargate. There are seven runtime language environments officially supported by Lambda: Ruby, Python, Java, Node.js, C#, PowerShell and Go. You pay per invocation, meaning the duration and the amount of memory used, rounded up to the nearest 100 milliseconds, and you also pay based on the number of requests, where the first 1 million requests per month are free. You can adjust the duration timeout to be up to 15 minutes and the memory up to 3,008 megabytes, and you can trigger Lambdas from the SDK or from multiple AWS services such as S3, API Gateway and DynamoDB.
Lambdas by default run in no VPC; to interact with some services you need to have your Lambdas in the same VPC, so in the case of RDS, you'd have to have your Lambda in the same VPC as RDS. Lambdas can scale to 1,000 concurrent functions in seconds; 1,000 is the default, and if you want to increase it you have to make a service limit increase request with AWS Support. And Lambdas have cold starts: if a function has not been recently executed, there will be a delay. Hey, this is Andrew Brown from ExamPro, and we are looking at Simple Queue Service, also known as SQS, which is a fully managed queuing service that enables you to decouple and scale microservices, distributed systems and serverless applications. To fully understand SQS, we need to understand what a queueing system is. A queueing system is just a type of messaging system which provides asynchronous communication and decouples processes via messages, also known as events, between a sender and a receiver (in a streaming system those same roles are known as a producer and a consumer). Looking at a queueing system, when messages come in, they're usually deleted on the way out, as soon as they're consumed. It's for simple communication, not really for real time, and to interact with the queue and its messages, both the sender and the receiver have to poll to see what to do, so it's not reactive. Some examples of queueing systems are Sidekiq, SQS and RabbitMQ, which is debatable because it could also be considered a streaming service. Now let's look at the streaming side to see how it compares against a queueing system. A streaming system can react to events, and it supports multiple consumers: if multiple consumers want to do something with an event, they all can, because it doesn't get immediately deleted; it lives in the event stream for a long period of time. The advantage of having a message hang around in the event stream is that it allows you to apply complex operations. So that's the huge difference: one is reactive and one is not, one lets you do multiple things with a message and retains it in the stream, and the other deletes it and doesn't think too hard about what it's doing. So there's your comparison between queuing and streaming, and we're going to continue on with SQS, which is a queueing system. The number one thing I want you to think of when you think of SQS is application integration. It's for connecting isolated applications together, acting as a bridge of communication, and SQS happens to use messages and queues for that; you can see SQS appears in the AWS console under Application Integration, alongside the other services that do application integration. And as we said, it uses a queue, and a queue is a temporary repository for messages that are waiting to be processed. Just think of going to the bank: everyone waiting in that line is the queue. The way you interact with that queue is through the AWS SDK, so you have to write code that publishes messages to the queue, and when you want to read them, you use the AWS SDK to pull messages. SQS is pull-based, you have to poll for messages; it is not push-based. So to make this crystal clear, I have an SQS use case here: we have a mobile app and a web app, and they want to talk to each other.
And so, using the AWS SDK, the mobile app sends a message to the queue. Now the web app has to use the AWS SDK to poll the queue whenever it wants; it's up to that app to decide how frequently it checks. It's going to see if there's anything in the queue, and if there is a message, it pulls it down, does something with it, and reports back to the queue that it has consumed it, meaning it tells the queue to go ahead and delete that message. For the mobile app on the left-hand side to know whether the message has been consumed, it has to periodically poll, on its own schedule, to see if that message is still in the queue; if it no longer is, that's how it knows. So that is the process of using SQS between two applications. Now let's look at some SQS limits, starting with message size. A message can be between 1 byte and 256 kilobytes. If you want to go beyond that, you can use the Amazon SQS Extended Client Library, which is only for Java, to extend the message size up to 2 gigabytes. The way that works is that the message payload is stored in S3 and the library references that S3 object, so you're not actually pushing 2 gigabytes to SQS, it's just pointing at something in an S3 bucket. Next is message retention, which is how long SQS will hold a message before dropping it from the queue. Message retention is four days by default and can be adjusted from a minimum of 60 seconds to a maximum of 14 days. SQS is a queueing system, so let's talk about the two different types of queues. We have the standard queue, which allows a nearly unlimited number of transactions per second (transactions here are just messages), and it guarantees that a message will be delivered at least once. The trade-off is that more than one copy of a message could potentially be delivered, and messages can arrive out of order. So if ordering really matters to you, consider that caveat with standard queues; in exchange you get nearly unlimited throughput. It does make a best effort to keep messages generally in the order they were delivered, but there's no guarantee. If you need a guarantee of the ordering of messages, that's where you use FIFO, which stands for first in, first out: a message comes into the queue and leaves the queue in order. The trade-off is the number of transactions per second; instead of nearly unlimited, there's a cap of 300. So how do we prevent one app from reading a message while another one is busy with that message? The idea is that we want to avoid someone doing the same work that's already being done by somebody else, and that's where the visibility timeout comes into play. The visibility timeout is the period of time that a message is invisible in the SQS queue. When a reader picks up a message, the visibility timeout, which can be set between 0 seconds and 12 hours and defaults to 30 seconds, kicks in, and no one else can touch that message. Whoever picked up that message then works on it; see the sketch below for what that loop looks like in code.
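Here's a minimal boto3 sketch of that send / poll / process / delete loop. The queue URL is a placeholder; WaitTimeSeconds enables long polling, which we'll cover in a moment, and VisibilityTimeout is the window just discussed:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/exampro-queue"  # placeholder

# Producer side: the "mobile app" publishes a message.
sqs.send_message(QueueUrl=queue_url, MessageBody="profile photo uploaded")

# Consumer side: the "web app" polls the queue.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=20,        # long polling: wait up to 20s instead of returning empty immediately
    VisibilityTimeout=30,      # message stays hidden from other consumers while we work on it
)

for message in response.get("Messages", []):
    print("processing:", message["Body"])
    # ... do the work within the visibility timeout ...
    # Deleting the message is how we tell the queue it has been consumed.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```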
When they finish working on it, they report back to the queue and the message gets deleted. But what happens if they don't complete the work within the visibility timeout? That message becomes visible again, and anyone can pick up that job. So there is one consideration when you build out your web apps: bake in the timing so that if the visibility timeout, say 30 seconds, has expired, you should probably kill that job, because otherwise you might end up with the same message being processed twice, and that could be an issue. So that's a consideration for the visibility timeout. For SQS we have two different ways of polling, short versus long. Polling is the method by which we retrieve messages from the queue, and by default SQS uses short polling. Short polling returns messages immediately, even if the queue being polled is empty. Short polling can be a bit wasteful, because if there's nothing to pull, you're just making calls for no particular reason. There could be a use case where you need a message right away, in which case short polling is what you want, but for the majority of use cases you should be using long polling, which is a bit strange given that it's not the default, but that's how it is. Long polling waits until a message arrives in the queue, or the long poll timeout expires. Long polling makes it inexpensive to retrieve messages as soon as they're available, and using long polling will reduce cost because you reduce the number of empty receives. To enable long polling you have to do it within the SDK, by setting a wait time on the receive message request, as we saw in the sketch above. Let's take a look at our Simple Queue Service cheat sheet, which is going to help you pass your exam. First, SQS is a queuing service using messages within a queue, so think Sidekiq or RabbitMQ if knowing those services helps. SQS is used for application integration; it lets you decouple services and apps so they can talk to each other. To read from SQS you need to poll the queue using the AWS SDK; SQS is not push-based, it's not reactive. SQS supports both standard and first-in-first-out (FIFO) queues. Standard queues allow a nearly unlimited number of messages per second, do not guarantee the order of delivery, always deliver at least once, and you must protect against duplicate messages being processed. FIFO, first in first out, maintains the order of messages, with a limit of 300 transactions per second; that's the trade-off there. There are two kinds of polling: short, which is the default, and long. Short polling returns messages immediately, even if the queue being polled is empty; long polling waits until messages arrive in the queue or the long poll timeout expires. In the majority of cases, long polling is preferred over short polling. The visibility timeout is the period of time that messages are invisible in the SQS queue; messages are deleted from the queue after a job has been processed, before the visibility timeout expires.
If the visibility timeout expires, the job becomes visible in the queue again. The default visibility timeout is 30 seconds, and the timeout can be set between 0 seconds and a maximum of 12 hours. I highlight the 0 seconds because that shows up as a trick question sometimes on the exams; people don't realize you can set it to 0 seconds. SQS can retain messages from 60 seconds up to 14 days, and the default is four days. 14 days is two weeks, which is an easy way to remember it. Message sizes can be between 1 byte and 256 kilobytes, and using the Extended Client Library for Java that can be extended to 2 gigabytes. So there you go, we're done with SQS. Hey, this is Andrew Brown from ExamPro, and we are looking at Simple Notification Service, also known as SNS, which lets you subscribe to and send notifications via text message, email, webhooks, Lambdas, SQS and mobile push notifications. Alright, so to fully understand SNS, we need to understand the concept of pub/sub. Pub/sub is a publish-subscribe pattern commonly implemented in messaging systems. In a pub/sub system, the sender of messages, known here as the publisher, doesn't send the message directly to the receiver. Instead, they send the messages to an event bus, and the event bus categorizes the messages into groups. The receiver of messages, known here as the subscriber, subscribes to those groups, and whenever a new message appears within their subscription, the message is immediately delivered to them. It's not unlike registering for a magazine. Down below we have that kind of representation: we have the publishers publishing to the event bus, which has groups in it, and the bus then sends the messages off to the subscribers, pushing them all along the way. Publishers have no knowledge of who their subscribers are. Subscribers do not poll for messages; messages are automatically and immediately pushed to them. And messages and events are interchangeable terms in pub/sub, so if you see me saying messages and events, it's the same darn thing. So now looking at SNS: SNS is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems and serverless applications. Whenever we're talking about decoupling, we're talking about application integration, which is a family of AWS services that connect one service to another; another such service is SQS, and SNS is also application integration. Down below we can see our pub/sub system, with our publishers on the left side and our subscribers on the right side, and the event bus in the middle is SNS. For the publisher, we have a few options: it's basically anything that can programmatically use the AWS API, and the SDK and CLI both use the AWS API underneath, so that's the way publishers are going to publish their messages or events onto an SNS topic. There are also other AWS services that can publish to SNS topics; CloudWatch definitely can, because you'd be using SNS when building alarms. Then on the right-hand side you have your subscribers, and we have a bunch of different outputs which we're going to go through, but here you can see we have Lambda, SQS, email and the HTTP(S) protocol. So publishers push events to an SNS topic, and that's how they get into the topic.
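Here's a rough boto3 sketch of that flow: creating a topic, adding an email subscription, and publishing a message. The topic name and email address are just placeholders:

```python
import boto3

sns = boto3.client("sns")

# Create (or look up) a topic -- the logical access point publishers write to.
topic = sns.create_topic(Name="exampro-notifications")
topic_arn = topic["TopicArn"]

# Subscribe an endpoint; the email owner has to confirm before delivery starts.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",                      # could also be https, sqs, lambda, sms, ...
    Endpoint="ops@example.com",
)

# Publish once; SNS fans the message out to every confirmed subscription.
sns.publish(
    TopicArn=topic_arn,
    Subject="New sign-up",
    Message="Someone just signed up on the platform.",
)
```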
And then subscribers subscribe to the SNS topic to have events pushed to them. Down below you can see I have a fairly dry description of an SNS topic: it's a logical access point and communication channel. So let's take a deeper look at topics. Topics allow you to group multiple subscriptions together, and a topic is able to deliver to multiple protocols at once, so it could be sending out email, text message and HTTPS, all the protocols we saw earlier. Publishers don't care about the subscribers' protocols: the publisher sends the message or event to the topic and essentially says, you figure it out, this is the message I want to send. The topic knows what subscribers it has, and when it delivers messages it will automatically format them according to each subscriber's chosen protocol. The last thing I want you to know here is that you can encrypt your topics via KMS, Key Management Service, and it's as easy as turning it on and picking your key. So now let's look at subscriptions. Subscriptions are something you create on a topic. Here I have a subscription that is an email subscription, and the endpoint is obviously going to be an email address, so I provided my email there; if you want to say hello, send me an email. It's just as simple as clicking that button and filling in those options. You do have to choose your protocol, and we have the full list here on the right-hand side, so let's go through it. We have HTTP and HTTPS, which you're going to want to use for webhooks; the idea is that this is usually an API endpoint on your web application that listens for incoming messages from SNS. Then you can send out emails. Now, there's another service called SES which specializes in sending emails; SNS is really good for internal email notifications, because you don't get a custom domain name, the emails have to be plain text only, and there are some other limitations around that. So SNS email is really good for internal notifications, maybe billing alarms, or maybe someone signed up on your platform and you want to know about it. Then there's email-JSON, which sends you JSON via email. Then you have SQS, so you can send an SNS message into an SQS queue; that's an option you have there. You can also have SNS trigger Lambda functions, which is a very useful feature as well. You can send text messages using the SMS protocol. And the last one here is platform application endpoints, and that's for mobile push. A bunch of different devices, and even laptops, have notification systems built into them, and this will integrate with those; we're going to talk about that a bit more here. So I wanted to talk a bit more about the platform application endpoint, which is for doing mobile push. We have a bunch of different mobile devices, and even laptops, that have notification systems in them, and here you can see a big list: we have ADM, which is Amazon Device Messaging, we have Apple, Baidu, Firebase, which is Google, and then we have two for Microsoft, Microsoft push and Windows push. So with this protocol you can push out to all of that.
The advantage here is that when you push notification messages to these mobile endpoints, they can appear in the mobile app as message alerts, badge updates or even sound alerts, so that's pretty cool; I just want you to be aware of that. Alright, so on to the SNS cheat sheet. Simple Notification Service, also known as SNS, is a fully managed pub/sub messaging service. SNS is for application integration; it allows decoupled services and apps to communicate with each other. A topic is a logical access point and communication channel, and a topic is able to deliver to multiple protocols. You can encrypt topics via KMS. Then you have your publishers, which use the AWS API, via the CLI or the SDK, to push messages to a topic. Many AWS services integrate with SNS and act as publishers, so think CloudWatch and others. Then you have subscriptions, which subscribe to topics; when a topic receives a message, it automatically and immediately pushes the message to subscribers. All messages published to SNS are stored redundantly across multiple AZs, which isn't something we covered in the core content, but it's good to know. And then we have the following protocols we can use: HTTP and HTTPS, great for webhooks into your web application; email, good for internal email notifications (remember it's plain text only; if you need rich text and custom domains, you'll be using SES for that); email-JSON, very similar to email but sending JSON along the way; you can also send your SNS messages into an SQS queue; you can trigger Lambdas; you can send text messages; and the last one is platform application endpoints, which is mobile push, for systems like Apple, Google, Microsoft and Baidu. Alright. Hey, this is Andrew Brown from ExamPro, and we are looking at ElastiCache, which is used for deploying and managing caching services that run on either Redis or Memcached. To fully understand ElastiCache, we need to answer a couple of questions: what is caching, and what is an in-memory data store? Let's start with caching. Caching is the process of storing data in a cache, and a cache is a temporary storage area. Caches are optimized for fast retrieval, with the trade-off that the data is not durable, and we'll explain what we mean when we say it's not durable. Now let's talk about in-memory data stores, because that is what ElastiCache is. It's when data is stored in memory, literally RAM, because that's where it's going, and the trade-off is high volatility. When I say it's very volatile, that means low durability; it just means there's a risk of the data being lost, because again, this is a temporary storage area, and the trade-off in return is fast access to that data. So that's generally what a cache and an in-memory data store are. With ElastiCache we can deploy, run and scale popular open-source-compatible in-memory data stores. One cool feature is that your most frequently used identical queries can be stored in the cache, so you get an additional performance boost. One caveat I found out when using this in production for my own use cases is that ElastiCache is only accessible to resources running in the same VPC.
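As a quick illustration, connecting from an EC2 instance in the same VPC is just a normal Redis (or Memcached) client pointed at the cluster endpoint. Here's a sketch using the third-party redis-py library with a made-up endpoint, showing the classic cache-aside pattern:

```python
import redis  # third-party client: pip install redis

# The primary endpoint comes from the ElastiCache console; this one is made up.
# It only resolves/connects from inside the same VPC as the cluster.
cache = redis.Redis(
    host="exampro-cache.abc123.0001.use1.cache.amazonaws.com",
    port=6379,
)

def get_leaderboard():
    """Cache-aside: check the cache first, fall back to the slow source, then cache it."""
    cached = cache.get("leaderboard:top10")
    if cached is not None:
        return cached                                  # cache hit: fast path
    result = b"...expensive query result..."           # stand-in for a real database call
    cache.setex("leaderboard:top10", 60, result)       # keep it cached for 60 seconds
    return result
```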
So here I have an EC2 instance, and as long as it's in the same VPC it can connect to ElastiCache. If you're trying to connect something outside of AWS, such as DigitalOcean, it's not possible to connect that to ElastiCache, and if it's outside of this VPC you're not going to be able to make that connection either; possibly through peering or some other effort you could do it, but generally you want ElastiCache and the servers that use it to be in the same VPC. And as we said, it runs open-source-compatible in-memory data stores, and the two options we have are Memcached and Redis; we'll talk about the difference between those two in the next slide. For ElastiCache we have two different engines we can launch, Memcached and Redis, and there is a difference between these two engines. We don't really need to know all the differences in great detail, but we do have this nice big chart showing that Redis ticks more boxes than Memcached: you can see that Redis can do snapshots, replication, transactions, pub/sub and geospatial support. So you might think Redis is the clear winner here, but it really comes down to your use case. Memcached is generally preferred for caching HTML fragments; it's a simple key-value store, and the trade-off is that even though it's simpler and has fewer features, it's extremely fast. Then you have Redis on the other side, which has more kinds of operations you can perform on your data and different data structures available to you. It's really good for leaderboards or tracking unread notifications, any kind of real-time cached information that has some logic to it; Redis is going to be your choice there, and it's very fast too. We could argue about which is faster, because on the internet some people say Redis is overtaking Memcached even for the most basic workloads, but generally, for the exam, Memcached is considered faster for HTML fragments. It doesn't matter too much, because the exam isn't really going to ask you to choose between Memcached and Redis, but you do need to know the difference. So we're on to the ElastiCache cheat sheet. It's a very short cheat sheet, but we've got to get through it. ElastiCache is a managed in-memory caching service. ElastiCache can launch either Memcached or Redis. Memcached is a simple key-value store, preferred for caching HTML fragments, and is arguably faster than Redis. Redis has richer data types and operations, great for leaderboards, geospatial data, or keeping track of unread notifications. A cache is a temporary storage area. The most frequently used identical queries are stored in the cache. And resources only within the same VPC may connect to ElastiCache, to ensure low latency. So there you go, that's ElastiCache. So now we're taking a look at high availability architecture, also known as HA. This is the ability for a system to remain available, and what we need to do is think about what could cause a service to become unavailable, and the solutions we need to implement in order to ensure high availability. Starting with number one, we're dealing with the scenario where an availability zone becomes unavailable. Remember, an AZ is just a data center, so you can imagine a data center becoming flooded for some reason, and now all the servers there are not operational.
So what would you need to do? Well, you'd need to have EC2 instances in another data center. And how would you route traffic from one AZ to another? That's where we'd use an Elastic Load Balancer, so that we can be multi-AZ. Now, what would happen if two AZs went out? Well, then you'd need a third one, and a lot of enterprises have this as a minimum requirement: you have to be running in at least three AZs. Moving on to our next scenario: what happens when a region becomes unavailable? Let's say there is a meteor strike. It's a very unlikely scenario, but we need something that would take out an entire region, all the data centers in that geographical location. What you're going to need is instances running in another region, and how would you facilitate the routing of traffic from one region to another? That's going to be Route 53; that's the solution there. Now, what happens when you have a web application that becomes unresponsive because of too much traffic? If you have too much traffic coming to your platform, you're probably going to need more EC2 instances to handle the demand, and that's where we use auto scaling groups, which have the ability to scale based on the amount of traffic coming in. And now, what happens if an instance becomes unavailable because of an instance failure, so something with the hardware or the virtualization software is failing and it's no longer healthy? Again, that's where auto scaling groups help, because we can set a minimum number of instances; let's say we always have three running to handle the minimum load, and if one fails, the auto scaling group spins up another one. And of course, the ELB would route traffic to the other instances in the other AZs, so we maintain high availability. Now for our last scenario: what happens when our web application becomes unresponsive due to distance and geographical location? Let's say someone is accessing our web application from Asia and our servers are in North America, and the distance is causing unavailability. We have a couple of options here. We could use CloudFront, which could cache our static content, and even our dynamic content to some degree, so that there's content near that user, which gives them back availability. Or we could run our servers in another region that's nearby and use a Route 53 geolocation routing policy, so that if we have servers in Asia, traffic from Asia is routed to those servers. So there you go, that's the rundown for high availability. Next we're looking at scale up versus scale out. When utilization increases and we're reaching capacity, we can either scale up, known as vertical scaling, or scale out, known as horizontal scaling. In the case of scaling up, all we're doing is increasing the instance size to meet that capacity. The trade-off is that this is simpler to manage, because we're just increasing the instance size, but we get lower availability: if that single instance fails, the service becomes unavailable. Scaling out, known as horizontal scaling, means we add more instances.
The advantage here is that we get higher availability: if a single instance fails, it doesn't matter. But we also get more complexity to manage, because more servers means more of a headache. What I'd suggest is that you generally want to scale out first to get more availability, and then scale up so that you keep some simplicity. You'll want to use both of these methods; it just depends on the specific scenario in front of you. Hey, this is Andrew Brown from ExamPro, and we are looking at Elastic Beanstalk, which allows you to quickly deploy and manage web apps on AWS without worrying about infrastructure. The easiest way to think of Elastic Beanstalk is to compare it to Heroku; I always say it's the Heroku of AWS. You choose a platform, you upload your code, and it runs with little worry for developers about the actual underlying infrastructure. AWS does not recommend it for production applications, but when AWS says that, they're really talking about enterprises or large companies; for startups, I know ones that are still using it three years in, so it's totally fine for those use cases. If you do see exam questions talking about a workload that's just for developers who don't want to have to think about infrastructure, Elastic Beanstalk is going to be the choice. So what kind of things does Elastic Beanstalk set up for you? Well, it's going to set up a load balancer, auto scaling groups, maybe a database, and EC2 instances preconfigured with the platform you're running on. So if you're running a Rails application, you choose Ruby; if you're running Laravel, you choose PHP. You can also create your own custom platforms and run them on Elastic Beanstalk. Another thing that's really important to know is that Elastic Beanstalk can run Dockerized environments, including multi-container Docker. It has some nice security features: if you have RDS connected, it can rotate those passwords for you. It has a couple of deployment methodologies built in; by default it's in-place, but it can also do blue/green deployments, and it can do monitoring for you. Down below you just see these little boxes; that's just me showing you that when you go into Elastic Beanstalk you'd have all these settings for fine-tuning, but more or less you just choose whether you want high availability or you want it to be cheap, and it will choose all of those options for you. That is most of what you need to know about Elastic Beanstalk for the Solutions Architect. Hey, this is Andrew Brown from ExamPro, and we are going to learn how to utilize Elastic Beanstalk so we can quickly and easily deploy, monitor and scale our applications. So we're going to go ahead here and hit Get Started, and we're going to create a new application. We'll name it Express JS, because that's what we're going to be utilizing here; Express is just the example app. We'll choose the platform to be Node.js. And now we're at the option where we can use a sample application, which is totally something you can do, or we can upload our own code. For this one, I really want you to learn a few of the caveats of Elastic Beanstalk, and you're only going to learn those if you upload your own code; you can definitely just do the sample application and watch the videos to follow along.
The next thing we're going to do is prep an application. I have a sample repo here, and we're going to talk about some of the things we need to configure and upload in the next video. Alright, so I prepared this Express.js application so we can learn the caveats of Elastic Beanstalk, and I even have instructions here if you don't want to use this premade one and want to go to the extra effort of making your own; I do omit how to install Node.js, you'll have to figure that out for yourself. Using this application, you can either download the zip or clone it; I'm going to clone it, because that's the way I like to do it. We'll go over to the terminal, and I'm just going to clone it to my desktop. It won't take long, and there we go, we have it cloned. We'll go inside of it, and I'm just going to open up the folder so you can get an idea of the contents of this Express.js application. As with most applications, we just want to make sure that it runs before we upload it, so I'm going to do an npm install to install all the dependencies; you just saw a node_modules directory get created. Then I'm going to run the application; I have a nice little script to do that, it starts on localhost, and here's our application. It's a very simple application that references a very popular episode of Star Trek: The Next Generation. We'll go back and kill the application, and now we're going to start preparing it for Elastic Beanstalk. When you have an Elastic Beanstalk application, it needs to know how to actually run your application, and the way it does that is through a hidden directory with a couple of hidden files. If you scroll down in my example here, you'll see that you need to create a hidden folder called .ebextensions, and inside that folder go different configuration files that tell Elastic Beanstalk how to run the application. For Node.js, we want it to execute the npm start command, which starts the server that runs this application, and we also need it to serve static files, so we have another configuration file in there for that. Now, this .ebextensions folder is actually already part of the repository, so you don't have to create it, but a very common mistake people make with Elastic Beanstalk is that they fail to upload that hidden folder, because they simply don't see it. If you are on a Mac, you can hit Command+Shift+Period, which will show hidden folders; for Windows and Linux you'll have to figure that out for yourself. Just be aware that you need to include this folder when packaging. So now that we know our application runs and we can see all the files we need, we're going to go ahead and package it. I'm going to grab what we need: we don't need the docs folder, that's just something I added to get that nice graphic in there for you, and we could exclude the readme. We're going to exclude the .git directory, because sometimes it contains sensitive credentials, but the most important thing is that .ebextensions folder. I'm going to go ahead and zip this into an archive; there's also a scripted way to push a new version, sketched below, but for this walkthrough we'll use the console.
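As an aside, the upload-and-deploy step we're about to do in the console can also be scripted with the SDK. Here's a rough sketch; the bucket, application and environment names are placeholders, and the zip is assumed to already be built:

```python
import boto3

s3 = boto3.client("s3")
eb = boto3.client("elasticbeanstalk")

bucket = "exampro-eb-artifacts"        # placeholder bucket for source bundles
key = "expressjs/v0.0.1.zip"

# 1. Upload the source bundle (the zip we just built) to S3.
s3.upload_file("expressjs.zip", bucket, key)

# 2. Register it as a new application version.
eb.create_application_version(
    ApplicationName="express-js",       # placeholder application name
    VersionLabel="v0.0.1",
    SourceBundle={"S3Bucket": bucket, "S3Key": key},
)

# 3. Point the environment at the new version to trigger a deployment.
eb.update_environment(
    EnvironmentName="express-js-env",   # placeholder environment name
    VersionLabel="v0.0.1",
)
```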
Okay, so now I have that ready to upload. Just one more caveat here: you saw me do an npm install to install the dependencies for this application, so how is Elastic Beanstalk going to do that? It actually does it automatically, and this is true for most environments: for a Ruby application it's going to run bundle install, if you have a requirements file for your Django application it will install that, and for Node.js on Elastic Beanstalk it's going to automatically run npm install for you. So you don't have to worry about that. Anyway, we've prepared the archive, and now we can proceed to actually uploading it into Elastic Beanstalk. I left this screen open because we had to make a detour to package our code into an archive, and now we're ready to upload it, so just make sure you have Upload your code selected here and we'll click Upload. Just before we upload, I want you to notice that you can either upload a local archive or provide it via an S3 URL, and you will have to use the S3 URL once you exceed 512 megabytes, which isn't a very hard thing to do, because applications definitely get larger. But we still have a very small application here, it's only five megabytes, so we can definitely upload it directly. I do want to point out that we did zip the node_modules directory, which is usually very large; I bet if I had excluded it, this would have been less than a megabyte, but for convenience I just included it, and we saw previously that Elastic Beanstalk does an npm install automatically for us, so if we had omitted it, the dependencies would be installed on the server. I'm just going to upload this archive; it is five megabytes in size, so it will take a little bit of time when we hit the upload button, but just before we do, we need to set our version label. I'm going to name this version 0.0.1, and it's a good idea to try to match the versioning here with your Git tags, because in Git you can tag specific commits with specific versions. So we'll go ahead and upload this, and it will just take a little bit of time; my internet's not the fastest, so five megabytes is going to take a minute or so. Okay, great, so we've uploaded that code and we have version 0.0.1. Now we can talk about more advanced configuration. We could go ahead and create this application, but I want to show you all the little things you can configure in Elastic Beanstalk, and also make sure that we aren't getting overbilled because we spun up resources we didn't realize were going to cost us money. The preset configuration here is set to low cost, so it's going to be essentially free if we launch it, which is great for learning. But let's talk about what happens if we set it to high availability. If we set it to high availability, we're going to get a load balancer, and a load balancer generally costs at least $15 USD per month, so by sticking with low cost we're saving that money. When you set it to high availability, it also puts the instances in an auto scaling group, between one and four instances; with low cost it will only run a single server. You can see that it's set to a t2.micro, which is in the free tier, and we could adjust that if we wanted. And then we have updates.
So the deployment method right now is all at once, and if we were to deploy our application again, let's say it's already been uploaded and we deploy it again using all at once, we're going to have downtime, because it's going to take that server offline and then put up a new server with the new code. We can use a different deployment policy to mitigate that, and I'll just pop in here to show you: all at once means it's going to shut down and start up a new server in place, and immutable means it's going to create a new server in isolation, so just be aware of those options. We can also create a database and attach it here as well. Sometimes that is a great idea: you can create an RDS database, so here I could select MySQL or Postgres, and you provide the username and password. The advantage of creating your RDS database with Elastic Beanstalk is that it's going to automatically rotate your RDS passwords for you, for security purposes, so that's a very good thing to have. I generally do not like creating my RDS instances with Elastic Beanstalk; I create them separately and hook them up to my application, but just be aware that you can do it this way. I think those are the most important options. We're just going to make sure that we are set to the low-cost free tier here with the t2.micro, and we'll go ahead and create our app. And here we go: now we're going to see some information as it creates our application, and this does take a few minutes any time you launch, because it has to launch an EC2 instance, and it always takes about three to five minutes to spin up a fresh instance, so I'll probably clip this video so it proceeds a lot quicker. So that deploy finished, and it redirected me to this dashboard. If you are still on that old screen and you need to get to the same place as me, just go up to the Express JS sample app, click into your environment, and we'll be in the same place. So did this work? It created a URL here, and if we view it, there you go, our application is running on Elastic Beanstalk. Now, if you're looking up here and wondering how you'd get your custom domain on this, that's where Route 53 comes into play. In Route 53, you would point your record at the elastic IP, which is the case here because we created a single instance to save cost and it attached an elastic IP for us. If we had chosen the high availability option, which creates a load balancer, we would point Route 53 at that load balancer instead, and that's how we get a custom domain on Elastic Beanstalk. Let's just quickly look at what it was doing as it was creating the environment. If we go to Events, we get all the same information as we saw on that earlier progress screen: it created an environment for us, uploaded the environment data to S3, created a security group, created an elastic IP, and then spun up that EC2 instance, and it took three minutes.
And as I said, it takes between three to five minutes to spin up an EC2 instance. If we had chosen to create an RDS instance in our configuration, creating that initial RDS instance always takes about 10 to 15 minutes, because it has to create that initial backup; from then on, other deploys would only take the three to five minutes. So there you go, that's really all we need to know about Elastic Beanstalk for the solutions architect. Now, so we're not wasting our free tier credits, we should tear down this Elastic Beanstalk environment. I'm going to go up to Actions and terminate this environment; we have to provide its name, which is up here, so I'll copy it, paste it in, and hit Terminate. This is a bit slow, but we'll let it go, and it should destroy the environment; sometimes it does fail and you'll have to give it another try. Once that's done, you might also want to delete the application. It's not strictly necessary to delete the application, only to destroy the environment, because the environment is what's actually running instances. So we'll just wait here; it shouldn't take too long, just a couple of minutes. All right, we finished terminating, it redirected me here, and we can see the previously terminated environment. Now, just to fully clean everything up, we can delete the application. There's no cost to keeping the application around, it's really the environments that contain the running resources, but just to be tidy we'll go ahead and delete it, provide its name, and it should be relatively quick. Great, so the environment and the application are destroyed, and we're fully cleaned up. So, on to the Elastic Beanstalk cheat sheet. This is very minimal for the solutions architect associate; for other exams where Elastic Beanstalk is more important it would be more like two pages, so keep that in mind. Let's get through the cheat sheet. Elastic Beanstalk handles the deployment, from capacity provisioning, load balancing and auto scaling to application health monitoring. When you want to run a web app but you don't want to have to think about the underlying infrastructure, think Elastic Beanstalk. It costs nothing to use Elastic Beanstalk, only the resources it provisions, such as RDS, ELB and EC2. It's recommended for test or development apps and not recommended for production use. You can choose from the following preconfigured platforms: Java, .NET, PHP, Node.js, Python, Ruby, Go and Docker, and you can run Dockerized environments on Elastic Beanstalk. So there you go. Hey, this is Andrew Brown from ExamPro, and we are looking at API Gateway, which is a fully managed service to create, publish, maintain, monitor and secure APIs at any scale. So API Gateway is a solution for creating secure APIs in your cloud environment at any scale. Down below I have a representation of how API Gateway works. On the left hand side you have your usual suspects: your mobile app, your web app, or even an IoT device, and they make HTTP requests to your API, which is generally a URL. API Gateway provides you a generated URL so they can do that, and then in API Gateway you create your endpoints.
So here I have endpoints for tasks with different methods, and the idea is that you create these virtual endpoints and then point them at AWS services; the most commonly used service is Lambda. The easiest way to think about API Gateway, and this is really AWS's own definition, is that the API acts as the front door for applications to access data, business logic and functionality from your back-end services. So it's just a bunch of virtual endpoints that connect to AWS. Let's talk about some of the key features of API Gateway. API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and monitoring. It lets you track and control usage of your API, you can throttle requests to help prevent attacks, you can expose HTTPS endpoints to define a RESTful API, it's highly scalable (everything happens automatically) and cost effective, and you can send each API endpoint to a different target and maintain multiple versions of your API. All right, let's look at how we actually configure an API and the components involved. The first and most important thing is resources. The resource over here is /projects, so resources literally are just URLs. In an API Gateway project you're going to create multiple resources, because you're not going to have just one endpoint. So here we have /projects, and underneath you can see another resource which is a child of that parent resource, which gives you this full URL. You also see this {id} syntax: that is actually a variable, so it would be replaced with something like 3 or 4. What you need to know is that resources are URLs, and they can have children. Next you're going to apply methods to your resources. Methods are your HTTP methods, so the usual GET, DELETE, PATCH, POST, PUT, OPTIONS and HEAD, and you can define multiple methods on a resource. So if you wanted to do a GET on /projects/{id} you could, and you could also do a POST, and those are now unique endpoints: GET and POST will have different functionality. So a resource can have multiple methods, and resources can have children. Once we've defined our API using resources and methods, the next thing is to actually get it published, and in order to do that you need stages set up. Stages are just a way of versioning your published API, and you'll normally base them on your environments, so you'd have production, QA for quality assurance, staging, and maybe one for developers. Once you create a stage, you get a unique URL that AWS generates automatically. Here I have one, and it's called the invoke URL; this is the endpoint you're actually going to hit. So you take the /prod URL and append whatever your endpoints are. We saw in the previous example we had /tasks and /projects, so you append those and use the appropriate method, GET or POST or whatever it is, and that's how you interact with API Gateway.
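To make that concrete, here is a sketch of what a Lambda function behind one of these endpoints receives when you use the Lambda proxy integration. The resource is GET /projects/{id}, and the handler below is a made-up example, not part of the course code:

// Sketch: a Lambda proxy-integration handler behind GET /projects/{id}.
// API Gateway passes the HTTP method, path and resource variables in on the event.
exports.handler = async (event) => {
  const method = event.httpMethod;           // e.g. "GET"
  const projectId = event.pathParameters.id; // the {id} resource variable, e.g. "3"

  if (method === 'GET') {
    return {
      statusCode: 200,
      body: JSON.stringify({ id: projectId, name: 'Example project' }),
    };
  }

  // Any method we did not wire up explicitly.
  return { statusCode: 405, body: 'Method not allowed' };
};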
Now, you might look at this and say, I don't really like the look of this URL, I wish I could use a custom one. You can definitely do that in API Gateway, so you could make it something like api.exampro.co instead of this big ugly URL. Again, each stage has its own invoke URL, so here it's prod, and there would be one for QA and one for staging as well. In order to deploy these versions to a stage, you go to your API and use the Deploy API action. Every time you make a change, you have to run Deploy API; it does not happen automatically. That's something that confused me for some time, because you think, I made this endpoint, why isn't it working? It's generally because you haven't deployed it. So we've looked at how to define the API and how to deploy it; the last thing is how to configure those endpoints. When you select the method for your resource, you choose your integration type, and there are a bunch of different ones: Lambda, HTTP, Mock, another AWS service (an option that's a bit confusing, but it's there), and VPC Link, which would go to your on-premise environment or local data center over a private network. Once you choose the integration type the options vary, but the most common one is a Lambda function, and generally what you'll see is this: you get to configure the request coming in and the response going out. So you can apply authorization (here it says auth: none), meaning you can require callers to authenticate or be authorized, then you have some configuration for the Lambda itself, and you can apply some manipulation to the response going out. Now, to get a bit of cost savings and take some of the burden off your API, you can turn on the API Gateway cache, so you can cache the results of common endpoints. When it's enabled on a stage, API Gateway caches responses from your endpoint for a specific TTL, a time to live, which is just a period before the cached entry expires; during that time API Gateway responds to requests by looking up the response in the cache instead of making a request to the endpoint. The reason you'd want to do this is that it reduces the number of calls to your endpoint, which saves you money, and it improves latency for requests made to your API, which leads to a better experience, so it's definitely something you might want to turn on. Next, let's look at CORS, which stands for cross-origin resource sharing, and it addresses an issue with the same-origin policy. The same-origin policy protects us against XSS attacks, but there are times when we actually do want to access things from another domain that isn't our own, and that's what CORS allows us to do. CORS is essentially a set of headers that say: this domain is okay to run scripts from. In API Gateway this is something we're commonly going to turn on, because by default CORS is not enabled, so you have to enable it for the entire API or for particular endpoints, along with the headers you want passed for those endpoints. So here you'd say POST is allowed, OPTIONS is allowed, and where it says Access-Control-Allow-Origin there's a wildcard, saying everything is allowed.
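If the endpoint is backed by a Lambda proxy integration, enabling CORS in the console mostly takes care of the OPTIONS preflight, but the actual responses from your function also need to carry the headers. A minimal sketch, using a wildcard origin like the slide:

// Sketch: returning CORS headers from a Lambda proxy integration.
exports.handler = async (event) => {
  const corsHeaders = {
    'Access-Control-Allow-Origin': '*',             // or a specific domain instead of the wildcard
    'Access-Control-Allow-Methods': 'GET,POST,OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type',
  };

  return {
    statusCode: 200,
    headers: corsHeaders,                            // the browser checks these on the response
    body: JSON.stringify({ ok: true }),
  };
};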
So that's how you would set it, but just understand what CORS is: it's these headers that say this domain is allowed to run these things from this location. And CORS is always enforced by the client, the client being the browser, so the browser is going to look for the CORS headers and act on them. Now, there's a common vulnerability called cross-site scripting, XSS attacks, and this is when a script from another website is executed on your website for malicious reasons, because when someone is trying to do something malicious, they're generally not going to do it from your own site; it's going to come from somewhere else. To prevent that, web browsers by default restrict the ability to execute scripts that come from another site. The same-origin policy is the browser's concept of only allowing scripts to execute when they come from the same origin, and it's CORS that lets you open that up. Again, web browsers enforce this by default, but tools such as Postman and curl ignore the same-origin policy, so if you're ever wondering why something isn't working cross-site in the browser but works in those tools, this is likely the reason. Right, so we're on to the API Gateway cheat sheet. API Gateway is a solution for creating secure APIs in your cloud environment at any scale. You create APIs that act as a front door for applications to access data, business logic or functionality from back-end services. API Gateway throttles API endpoints at 10,000 requests per second. We didn't mention that in the core content, but it's definitely an exam question that might come up, where they say you're going beyond 10,000 requests and it's not working; well, that's the reason why, there's a limit of 10,000 requests per second, and you have to ask for a service limit increase through support. Stages allow you to have multiple published versions of your API, so prod, staging, QA. Each stage has an invoke URL, which is the endpoint you use to interact with your API. You can use a custom domain for your invoke URL, so it could be api.example.co to be a bit prettier. You need to publish your API via the Deploy API action, you choose which stage you want to publish to, and you have to do this every single time you make a change; it's annoying, but you have to do it. Resources are URLs, so think /projects. Resources can have child resources, the child here being {id}; the curly-brace syntax says this is a variable that could be replaced by, say, 3 or 4. It's probably not an exam question, but it's good for you to know. You define multiple methods on your resources, so you'll have your GET, POST, DELETE, whatever you want. CORS issues are common with API Gateway, and CORS can be enabled on all or individual endpoints. Caching improves latency and reduces the number of calls made to your endpoint. The same-origin policy helps prevent XSS attacks. Tools like Postman and curl ignore the same-origin policy, so it simply doesn't apply when you're working with those tools. CORS is enforced by the client, the client being the browser.
So with CORS, the browser is definitely going to look for the CORS headers and interpret them. You can also require authorization to your API via AWS Cognito or a custom Lambda authorizer, so just so you know, you can protect the calls to your API. Hey, this is Andrew Brown from ExamPro, and we are looking at Amazon Kinesis, which is a scalable and durable real-time data streaming service used to ingest and analyze data in real time from multiple sources. So Amazon Kinesis is AWS's fully managed solution for collecting, processing and analyzing streaming data in the cloud; when you need real time, think Kinesis. Some examples where Kinesis would be of use: stock prices, game data, social media data, geospatial data, clickstream data. Kinesis has four types of streams: Kinesis Data Streams, Kinesis Firehose delivery streams, Kinesis Data Analytics, and Kinesis Video Streams, and we're going to go through all four of them. First let's take a look at Kinesis Data Streams. The way it works is that you have producers on the left hand side, which produce data and send it to the Kinesis data stream, and the data stream ingests that data. It has shards, so it takes the data and distributes it amongst its shards. Then it has consumers, and with data streams you have to configure those yourself using some code; the idea is you have these EC2 instances that are specialized to consume the data and send it somewhere in particular. So we have a consumer specialized for sending data to Redshift, then one for DynamoDB, then S3, then EMR; whatever you want a consumer to send data to, it can send it wherever you like. The great thing about data streams is that when data enters the stream, it persists for quite a while: it's there for 24 hours by default, and you can extend that up to 168 hours. So if you need to do more with that data, run it through multiple consumers, or do something else with it, you definitely can. The way you pay for Kinesis Data Streams is like spinning up an EC2 instance, except you're spinning up shards: as long as a shard is running, you pay a set cost per shard. And that is Kinesis Data Streams. On to Kinesis Firehose delivery streams, which are similar to data streams but a lot simpler. It also has producers, and those producers send data into Kinesis Firehose. The difference is that as soon as the data is ingested and consumed, it immediately disappears from the queue, so data is not persisted. The other trade-off is that you can only choose one consumer, from a few options: S3, Redshift, Elasticsearch or Splunk, and generally people output to S3. So there's a lot more simplicity here, but also limitations. The nice thing is you don't have to write any code to consume the data; the trade-off is you don't have much flexibility in how it's consumed, it's very limited.
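On the producer side, both kinds of stream can be written to with the AWS SDK. This is just a sketch to show the shape of the calls; the stream names are made up for illustration:

// Sketch: one producer record written to a data stream and to a Firehose delivery stream
// using the AWS SDK for JavaScript (v2). Stream names here are assumptions.
const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis();
const firehose = new AWS.Firehose();

const payload = JSON.stringify({ symbol: 'AMZN', price: 1780.75, at: Date.now() });

// Kinesis Data Streams: the partition key decides which shard the record lands on.
kinesis.putRecord({
  StreamName: 'stock-prices',
  PartitionKey: 'AMZN',
  Data: payload,
}).promise();

// Kinesis Data Firehose: no shards or partition key, just the delivery stream name.
firehose.putRecord({
  DeliveryStreamName: 'stock-prices-to-s3',
  Record: { Data: payload },
}).promise();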
Firehose can also do some manipulation to the data flowing through it. It can transform the data, so if you have something in JSON and want to convert it to Parquet, there are limited options for that, but the idea is that it can put the data into the right format: if it's being inserted into S3 and maybe Athena will be consuming it, it's now in a Parquet file, which is optimized for Athena. It can also compress the files, simply zipping them, with different compression methods available, and it can also encrypt them, so there's that advantage. The big advantage is that Firehose is very inexpensive, because you only pay for what flows through it: only the data that's ingested is what you pay for, a bit like Lambda or Fargate, so you're not paying for running shards, and it's just simpler to use. If you don't need data retention, it's a very good option. On to Kinesis Video Streams, and as the name implies, it's for ingesting video data. You have producers sending either video or audio encoded data, which could be security cameras, web cameras, or maybe even a mobile phone, and that data goes into Kinesis Video Streams, which secures and retains the encoded data so you can consume it from services used for analyzing video and audio. So that's SageMaker or Rekognition, or maybe you need to use TensorFlow, or you have custom video processing, or something that does HLS-based video playback. That's all there is to it: it lets you analyze and process video streams by applying ML or a video processing service. Now let's take a look at Kinesis Data Analytics. The way it works is that it takes an input stream and has an output stream, and these can either be Firehose or data streams, and the idea is that you pass data through Data Analytics. What this service lets you do is run custom SQL queries so you can analyze your data in real time; if you have to do real-time reporting, this is the service you want to use. The only downside is that you need two streams, so it can get a little bit expensive, but for real-time analytics it's really great. So that's all there is. Now it's time for the Kinesis cheat sheet. Amazon Kinesis is the AWS solution for collecting, processing and analyzing streaming data in the cloud; when you need real time, think Kinesis. There are four types of streams. The first is Kinesis Data Streams, where you pay per shard that's running, so think of an EC2 instance, where you're always paying for the time it's running; Kinesis Data Streams is just like that. Data can persist within the stream, data is ordered, and every consumer keeps its own position. Consumers have to be manually added, meaning they have to be coded to consume, which gives you a lot of custom flexibility. Data persists for 24 hours by default, up to 168 hours. With Kinesis Firehose you only pay for the data that is ingested, so think of it like Lambda or Fargate: you're not paying for a server that's running all the time. Data immediately disappears once it's processed, and for the consumer you only have a choice from a predefined set of services, S3, Redshift, Elasticsearch or Splunk; they're not custom, so you're stuck with what you've got. Kinesis Data Analytics allows you to perform queries in real time.
It needs Kinesis Data Streams or Firehose as the input and the output, so you have to have two additional streams to use the service, which makes it a little more expensive. Then you have Kinesis Video Streams, which is for securely ingesting and storing video and audio encoded data and handing it to consumers such as SageMaker, Rekognition or other services, to apply machine learning or video processing. To actually send data to the streams, you either use the KPL, the Kinesis Producer Library, which is a Java library for writing to a stream, or you write data to a stream using the AWS SDK. The KPL is more efficient, but you have to choose what fits your situation. So there is the Kinesis cheat sheet. Hey, this is Andrew Brown, and we are looking at AWS Storage Gateway, which is used for extending and backing up on-premise storage to AWS. Storage Gateway provides seamless and secure integration between your organization's on-premise IT environment and AWS's storage infrastructure, so we can securely store our data in the AWS cloud in a scalable and cost-effective way. It uses virtual machine images to facilitate this on your on-premise systems, and it supports both VMware ESXi and Microsoft Hyper-V. Once it's installed and activated, you use the AWS console to create your gateway, so there's an on-premise component and a cloud component, and the gateway connects the two. There are three different types of gateways, which we're going to get into now. The three types are: file gateway, which uses NFS or SMB and is used for storing your files in S3; volume gateway, which uses iSCSI and is intended as a backup solution, with two different methods of storing those volumes; and tape gateway, which is for backing up your virtual tape library. Here we're looking at file gateway. What it does is allow you to use either the NFS or SMB protocol to create a mount point, so you can treat S3 just like a local hard drive or local file system. I always think of file gateway as extending your local storage onto S3. Some details worth knowing: ownership, permissions and timestamps are all stored in the S3 metadata of the objects associated with the files. Once a file is transferred to S3, it can be managed as a native S3 object, and bucket policies, versioning, lifecycle management and cross-region replication apply directly to objects stored in your bucket. So not only do you get to use S3 like a normal file system or hard drive, you also get all the benefits of S3. Now we're going to look at the second type of storage gateway, volume gateway. Volume gateway presents your applications with disk volumes using the Internet Small Computer Systems Interface, the iSCSI block protocol. The idea is that you have your local storage volumes, and through Storage Gateway we're able to back them up to S3 and store those backups as EBS snapshots. How this works depends on the type of volume gateway, because there are two different types, and we'll get into that in a moment. But let's get through what's in front of us: data written to the volumes can be asynchronously backed up as point-in-time snapshots of the volumes and stored in the cloud as EBS snapshots.
Snapshots are incremental backups that capture only the changed blocks in the volume, and all snapshot storage is compressed to help minimize your storage charges. I like to think of this as giving you the power of EBS locally: it treats your local drives like EBS volumes, and it does all of this via S3. So let's look at the two different types. The first is volume gateway with stored volumes, and the key thing is where the primary data is being stored: the primary data is stored locally, while the data is asynchronously backed up to AWS. So all of your data is on premise, and you get your backup on AWS. It provides on-premise applications with low-latency access to the entire data set while still providing durable offsite backups. It creates storage volumes and mounts them as iSCSI devices from your on-premise servers, and as we saw in the last illustration, any data written to the stored volumes is kept on your on-premise storage hardware; that's what it means for the primary data to be local. EBS snapshots are backed up to S3, and stored volumes can be between 1 GB and 16 TB in size. Now let's look at cached volumes. The difference between stored volumes and cached volumes is that with cached volumes the primary data is stored on AWS, and the most frequently accessed data is cached on premise. The key thing to remember between stored and cached volumes is where the primary data is. So why would we want to do this? Well, it minimizes the need to scale your on-premise storage infrastructure while still providing your applications with low-latency access to data. You create storage volumes up to 32 TB in size and attach them as iSCSI devices from your on-premise servers; your gateway stores the data you write to these volumes in S3 and retains recently read data in your on-premise storage, in the gateway's cache and upload buffer, so it's caching those most frequently accessed files. Cached volumes can be between 1 GB and 32 TB in size. So there you go, that is volume gateway. Now we're looking at the third type of storage gateway, tape gateway, and as the name implies, it's for backing up virtual tape libraries to AWS. It's a durable, cost-effective solution to archive your data in AWS, and you can leverage your existing tape-based backup application infrastructure: it stores data on virtual tape cartridges that you create on your tape gateway. Each tape gateway is pre-configured with a media changer and tape drives, which I'm not showing here because I think the simpler visualization is better, and those are available to your existing client backup applications as iSCSI devices; you add tape cartridges as you need to archive your data. It supports the common tape backup applications, so you've got Veeam, Backup Exec and NetBackup; Backup Exec used to be Symantec's and has since changed hands, so I may have the vendor slightly wrong, but it's not a big deal. The point is that you have virtual tape libraries and you want to store them on S3, and it uses S3 Glacier, because of course that is for long-term archival storage. So there you go, we're at the end of Storage Gateway.
Here I have a Storage Gateway cheat sheet which summarizes everything we've learned, so let's start at the top. Storage Gateway connects on-premise storage to cloud storage, so it's a hybrid storage solution. There are three types of gateways: file gateway, volume gateway and tape gateway. File gateway lets S3 act as a local file system using NFS or SMB; the easy way to think about this is a local hard drive being extended into S3. Volume gateway is used for backups and has two types, stored and cached. Stored volume gateway continuously backs up local storage to S3 as EBS snapshots, and it's important to remember that the primary data is on premise; that's what helps you remember the difference between stored and cached. Stored volumes are between 1 GB and 16 TB in size. Cached volume gateway caches the most frequently used files on premise, and the primary data is stored on S3; again, remember where the primary data is being stored. Cached volumes are between 1 GB and 32 TB in size. Tape gateway backs up virtual tapes to S3 Glacier for long-term archival storage. So there you go, we're all done with storage. Hey, this is Andrew Brown, and we are going to do another follow-along, and this one touches multiple services. The core of it is Lambda, but we're going to do static website hosting and use DynamoDB, SNS and API Gateway, and we're going to glue it all together, because I have built a contact form here and we're going to get it hosted and make it serverless. So let's get to it. First we're going to get this website hosted on S3, so make your way to the S3 console; you can go up to Services, type S3, and click through to arrive at the same place. We're going to need two buckets. I've already registered a domain, frankiealliance.com, here in Route 53, and we have to copy that name exactly and create two buckets, because these buckets have to have the exact same names as the domain. So we'll first create the bucket for the naked domain, which is just frankiealliance.com, and then, once that creates (it's taking its sweet time), we'll create the second one for the www subdomain. And now we have both our buckets. Next we're going to click into each one and turn on static website hosting. Going to Properties, there's a box called static website hosting, and for the naked-domain bucket we're going to have it redirect to the www subdomain; I'm not even going to try to spell it, so I'll just copy and paste it in there and hit Save. So we have a static website hosting redirect set up for that bucket. Then we go back to Amazon S3 and turn on static website hosting for the www bucket: go to Properties, choose to use this bucket to host a website, and set index.html and error.html. So now we have that turned on. This bucket needs to be public facing, so we'll go over to Permissions, edit Block Public Access, hit Save, and type "confirm". And now we should be able to upload our content.
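For reference, what we just clicked through in the console boils down to roughly these two calls with the AWS SDK for JavaScript; the bucket names follow this follow-along's domain, so treat them as placeholders for your own:

// Sketch: the static website hosting settings we just applied, expressed as SDK (v2) calls.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// The www bucket actually serves the site.
s3.putBucketWebsite({
  Bucket: 'www.frankiealliance.com',
  WebsiteConfiguration: {
    IndexDocument: { Suffix: 'index.html' },
    ErrorDocument: { Key: 'error.html' },
  },
}).promise();

// The naked-domain bucket just redirects everything to the www bucket.
s3.putBucketWebsite({
  Bucket: 'frankiealliance.com',
  WebsiteConfiguration: {
    RedirectAllRequestsTo: { HostName: 'www.frankiealliance.com' },
  },
}).promise();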
So let's go ahead and upload our website. I have it on my desktop under a folder called web, which is everything we need to run it (probably not the package.json stuff), so I'm going to grab those files, click and drag them in, and upload; that won't take too long. Now, if we want to preview the static website hosting, we go to Properties and copy that website endpoint, and give it a look. And we're getting a 403 Forbidden, which is a bit confusing, because we did turn off Block Public Access. I think what it is, is that I need to update the bucket policy. So I'm going to go off screen and grab the bucket policy; it's in the AWS documentation and I can't remember it off the top of my head, so I did a quick Google for the static website hosting bucket policy and arrived at the AWS docs. What we need is this policy here, so I'll copy it, go back to S3, and paste it in as the bucket policy. I do need to change the resource to match the bucket name, so I'll copy that from the top here. What this policy is doing is allowing read access to all the files within this bucket. The statement ID is optional, we can name it whatever we want, so I'm just going to remove it to clean things up, and we should be able to save this. And we can, and now the bucket has public access. So if we go back to that 403 page and refresh, our website is now up. There are a few other things we need to do: when this form is submitted, I want it to send off an email via SNS, and I also want the submission stored in DynamoDB so we have a record of it. So let's go ahead and set up an SNS topic, and then we'll proceed from there. Let's make our way over to SNS: go back up to the top, click Services, type SNS, and open it in a new tab, because it's great to have all these things open (we'll just clear out the old ones). Since it's my first time here I get this big landing page, but a lot of the time you can just click the hamburger menu to get where you want. I'm going to go to Topics on the left hand side, because that's what we need to create, and create a topic. I'm going to name it after the domain, so something like FrankieAllianceTopic. The display name is optional; well, it depends, because sometimes it's used in the actual email that gets sent, so I'm just going to put Frankie Alliance in there (I think we can have uppercase). We have a few options here: we can encrypt the topic, which I'm not going to bother with; we can set an access policy, which we'll leave at the default; there's the ability to configure retries, but that's for HTTP and we're using email, so it doesn't matter; and the rest aren't important. So I'm going to hit Create topic.
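Just so you can see where this is heading: later on, the Lambda's send-email step will publish to this topic, and the call will look roughly like the sketch below. The environment variable name is an assumption; we'll wire the real ARN in when we configure the function:

// Sketch: publishing a contact-form notification to the SNS topic with the SDK (v2).
const AWS = require('aws-sdk');
const sns = new AWS.SNS();

sns.publish({
  TopicArn: process.env.TOPIC_ARN, // the topic ARN we are about to copy from the console
  Subject: 'New contact form submission',
  Message: 'The name, email and message from the form would go here.',
}).promise();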
Okay, and what that's going to do is create an ARN for us, which we'll come back to later. But if we want to receive emails from this topic, we have to subscribe to it. So down below we'll hit Create subscription, choose the protocol, which I want to be email, and enter andrew@exampro.co. I'll check the drop-downs to see if there's anything else important (no, nothing), and hit Create subscription. What that does is send me a confirmation email asking, do you really want to subscribe to this topic and receive emails? And I'm going to say yes, so I'll flip over to my email and do that. The email arrived nearly instantaneously, so I'll hit the confirmation link, and that confirms my subscription, which means I'll now receive an email whenever something is published to that topic. If we go back to SNS, you can see the subscription was in a pending state, and if we refresh, it's now confirmed. So there you go. Now we can move on to creating our DynamoDB table. So now that we have SNS, go to Services at the top, type DynamoDB, and open it in a new tab, because we'll have to come back to all these other tabs. We'll wait for it to load and create ourselves a DynamoDB table, so hit Create. I'm going to name it after the domain, so FrankieAlliance, and we need to set a partition key. A good partition key is something extremely unique, like a user ID, or in this case an email, so we'll use email. For the sort key we're going to use a created date. There's no date-time data type in DynamoDB, so we'll just go with a string, and that's totally fine. There are some defaults here (no secondary indexes, provisioned capacity of five and five, and so on), so we're going to untick that and override it ourselves. We'll leave it on provisioned capacity for the time being, but I'm going to override the values to one and one. The reason is that I don't imagine we're going to have a lot of traffic here, so one read and one write per second should be plenty for us; this should be no issue. Then we'll scroll down, and this is all fine. We could also encrypt the table at rest, but I'm going to leave that alone. That all looks good to me, so I'm going to hit Create. The table is going to create, and what we're looking for is the ARN; once we have the table details, we can hook it up in our Lambda code. So this all looks great. I guess the next thing is to actually get the Lambda function working, or we could go ahead and put the site behind CloudFront and hook up the domain. I think we'll do that first, so next up is CloudFront.
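Before we jump over to CloudFront, here is roughly what the Lambda's insert-record step will end up doing against the table we just created. This is a sketch, not the course's exact code, and the non-key attribute names are assumptions; the keys match what we just chose, email as the partition key and a created timestamp string as the sort key:

// Sketch: writing one contact-form submission into the DynamoDB table.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();

dynamodb.put({
  TableName: process.env.TABLE_NAME,        // "FrankieAlliance" in this follow-along
  Item: {
    email: 'visitor@example.com',           // partition key
    created: new Date().toISOString(),      // sort key, stored as a string
    name: 'Jane Visitor',                   // assumed form field
    message: 'Hello from the contact form', // assumed form field
  },
}).promise();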
So the next thing is to get our proper domain in place, so we're not using the AWS-generated URL. I've actually already registered a domain, and I may be splicing in those registration steps from another tutorial, so if anything feels like a jump, that's why; the point is we already have the domain name. Once you have your domain name, you can go ahead and set up CloudFront and ACM, and we want to do ACM first. So type in ACM, which is AWS Certificate Manager, and that's how we're going to get our SSL certificate. Make sure you click the option on the left hand side to provision certificates, because the private certificate authority on the other side starts at around $500, so it's very expensive; again, make sure it's a public certificate and not a private one. We'll hit Request and put the domain name in. I'm always bad at spelling it, so I'll just copy it over; we don't need spelling mistakes. We're going to add the naked domain, and we're also going to add a wildcard entry, so between the two we cover the naked domain and all subdomains. I strongly recommend you do this when you create your certificates. We hit Next, and we're going to use DNS validation; email validation is just a very old way of doing it, and nobody really does it that way anymore. We hit Review, then confirm the request. What happens now is that the certificate goes into pending validation, and we need to prove we own this domain by adding a record to it. Since our domain is hosted in Route 53, adding those records is very easy, just one click of a button, so I'll go ahead and hit Create on each record, then hit Continue. Now we just wait for it to go from pending to issued; it usually only takes a few minutes, so I'm going to go grab a coconut water and I'll be back shortly. All right, I'm back, it only took a few minutes, and the status is now Issued, meaning our SSL certificate is ready for use. That means we can now create our CloudFront distribution, so go up to the top, type in CloudFront, and make your way over there. Here we are in CloudFront, and we're going to create a new distribution. We have Web and RTMP; we're not going to use RTMP, which is for Adobe Flash Media Server, and very rarely does anyone use that anymore, so it's going to be the web distribution. Now we go through the steps. The first thing is to select our origin, so we're going to select the www bucket here, and we're going to restrict bucket access, because we don't want people accessing the website directly via the S3 URL; we want everything to go through the domain name, and that's what this option lets us do. It's going to create a new origin access identity, and we can let it do as it pleases; we also need to grant permissions, so I'll say yes, update my bucket policy.
That should save us a little bit of time. Now on to the behavior settings. We're going to redirect HTTP to HTTPS, because really no one should be using plain HTTP. For allowed methods, I was debating whether we need POST and PATCH, but that's only if we were uploading files to S3 through CloudFront, so I think we can leave it as GET and HEAD; we're fine there. We'll keep scrolling down. We're not going to restrict viewer access, since this is a public website. For the price class we'll drop down and choose US, Canada and Europe; you can choose best performance, but I feel this will save me some time, because the distribution takes a long time to create, and the fewer the edge locations, the less time I think it takes. We need to put our alternate domain name in, which is www.frankiealliance.com, and again I don't want to spell it wrong, so I'll just copy it over. Then we choose our custom SSL certificate, dropping down and selecting the frankiealliance.com certificate. We also need to set our default root object, which is index.html; that's how it knows to serve your index.html page right off the bat. And that's everything, so we'll hit Create distribution, and luckily there are no errors, so we're in good shape. I'm not sure why it took me to this screen, so I'll just click here to confirm it actually created the distribution; it usually takes you to the distributions page, but yes, it is creating. So now we wait while it's in progress. This takes a considerable amount of time, so go take a shower, go for a walk, go watch an episode of Star Trek, and we'll be back shortly. So our distribution is created; it took about 20 minutes. I did forget to tell you that we have to create two distributions, so sorry about that, but we're going to have to make another one. We have one for the www domain, and now we need one for the naked domain. So go to Create Distribution and choose Web, and the origin domain name is going to be slightly different this time. Instead of selecting the bucket from the drop-down, go back to S3, open the bucket with the naked domain, go to Properties, and under static website hosting copy the endpoint, everything except the http:// prefix. We paste that in as the origin domain name; don't let it autocomplete anything, just hit Tab so it fills in the origin ID, and then we can proceed. We'll redirect HTTP to HTTPS and scroll down; this is all good. The only other things are to change the price class to the first option again, put the naked domain name in as the alternate domain name (copy it from S3 and paste it in), choose our custom SSL certificate from ACM, leave everything else at the defaults, and create the distribution. So now it's going to be another long wait, and I'll talk to you in a bit.
So after waiting another 20 minutes, our second distribution is complete. I want you to note that for the naked domain we pointed at the static S3 website hosting endpoint, and for the www distribution we pointed at the actual S3 bucket. This does matter, otherwise the redirect won't work, so make sure the naked-domain distribution is not set to the bucket itself. Now that we have our two distributions deployed, we can start hooking them up to our custom domain. Make your way over to Route 53, go to Hosted Zones on the left hand side, and click into the domain; we're going to add two record sets, one for the naked domain and one for www. For the naked domain we leave the name blank, choose Alias, drop down, and choose the CloudFront distribution. There are two distributions, but there's no chance of selecting the wrong one, because it only shows the one whose alternate domain name matches. We hit Create for the naked domain, then add another record for www, choose Alias, and choose the www CloudFront distribution. Now that both are created, we can grab the domain name and give it a test. And it's working; we're on our custom domain name. Now, you definitely want to check all four cases: with and without the www, and with and without the s in https, and the combinations of those. That one case works; we'll try without the www, and it redirects as expected; we'll try the naked domain without SSL, and it works as expected; and we'll try the www domain without the s, and so all four cases work, and we're in great shape. If anyone ever reports that your website isn't working even though it works for you, check all four of those cases, because the issue may be in one of them. So now that we have Route 53 pointing to our distributions and the custom domain hooked up, we need to do a little bit of work on our www bucket. When we first created its policy, we added the statement that allows public read access to the bucket, and then when we created our CloudFront distribution and told it to restrict bucket access, it added this second statement for the origin access identity. The first statement was there out of convenience, because we weren't using a custom domain yet, but we only want this website to be accessible through CloudFront, and that statement still allows access from the original S3 endpoint. Just to show you what I mean: if we go to Properties and open that endpoint, the site is still accessible at that URL, so it isn't going through CloudFront, and we don't want people accessing the bucket directly; we want everything to go through CloudFront so we get statistics and finer-grained control over access to our website. So go back to Permissions, open the bucket policy, remove that first statement, and hit Save. Now if we go back to that S3 endpoint and refresh, we should get a 403. If you don't get it immediately, sometimes Chrome caches things, so just try another browser or keep refreshing until it works.
Because once you've removed that statement from the bucket policy, it should return a 403. So now the only way to access the site (and we'll go back to the original URL here) is through the domain name. So it's all hooked up, and we can now proceed to actually working with Lambda. So now it's time to work with AWS Lambda, and I've prepared a function for you in a folder called function. The idea is that we have this form, and when we submit it, the request goes to API Gateway and triggers this Lambda function, and the form data gets passed into the function through the event. The function parses event.body, which gives us back the JSON for these fields, and then it uses validate, a third-party library, to do validations. If I open up the constraints file, that's what actually validates all these fields, so depending on whether the input is valid or not, it's either going to return errors to the form saying, hey, you have some mistakes, or, if it's successful, it calls this success function, which calls insert record and send email. Those two things live in two separate files, one for DynamoDB and one for SNS: insert record inserts a record into the DynamoDB table we created, and send email publishes to SNS so the email goes out. So this is a slightly more complex Lambda function, and the reason I didn't just make it one single file, which would have been very easy, is that I want you to learn at least the bare minimum of dealing with more complex Lambda functions, such as having to upload via zip and dealing with dependencies.
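To tie that walkthrough together, here is a rough skeleton of how a handler like this hangs together. It's a sketch of the structure just described, not the course's actual code, and the file and function names are assumptions:

// Sketch: the overall shape of the contact-form handler (names assumed for illustration).
const validate = require('validate.js');        // third-party validation library
const constraints = require('./constraints');   // per-field validation rules
const { insertRecord } = require('./dynamodb'); // writes the submission to DynamoDB
const { sendEmail } = require('./sns');         // publishes a notification to SNS

exports.handler = async (event) => {
  // API Gateway hands us the form as a JSON string in event.body.
  const form = JSON.parse(event.body);

  // validate.js returns undefined when everything passes, or an errors object otherwise.
  const errors = validate(form, constraints);
  if (errors) {
    return { statusCode: 422, body: JSON.stringify({ errors }) };
  }

  await insertRecord(form);
  await sendEmail(form);
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};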
Now that we've had that walkthrough, let's actually get this Lambda function onto AWS. Before we can upload it, we have to make sure we compile our dependencies. I could easily do this locally, but just in case you don't have the exact same environment as me, I'm going to show you how to do it via Cloud9. So close the extra tabs, leave one open, and make your way over to Cloud9. Just before we create the environment, double-check that you're in the correct region. I seem to sit on the border of us-east-1 (North Virginia) and Ohio, and sometimes it flips me to the wrong region, so I'm switching to North Virginia. This is super important, because if the Cloud9 environment and the Lambda function aren't in the same region, we're going to have a bunch of problems. So make sure it's North Virginia, and go ahead and create your environment. I'm going to name it after the domain, so I'll open Route 53 in another tab, copy the domain name, and call the environment FrankieAlliance. We go to the next step and choose t2.micro, because it's the smallest instance, and Amazon Linux, because it comes with a bunch of languages pre-installed. Cloud9 environments shut off after 30 minutes of inactivity, so you won't be wasting your free credits; since it's a t2.micro it's free tier eligible, and the cost of using Cloud9 is just the cost of the EC2 instance running underneath the environment. We hit next step, create the environment, and it only takes a few minutes, so I'll see you back here shortly. So now we're in Cloud9. I'm going to change the theme to a darker one because it's easier on my eyes, and I'm also going to change the keyboard mode to vim; you can leave it as default, since vim is a complex keyboard setup and you may not want that. Now we can upload our code into Cloud9, so that we can install the dependencies against the specific Node.js version we're going to need. Go to File, then Upload Local Files. On my desktop I have the contact form project, and it's a good idea to grab both the web and function folders; web contains the static website, and we will have to make adjustments to that code, so we're prepping ourselves for a future step. Now that the code is uploaded, what we want to do is install the dependencies, because we need to bundle whatever dependencies the function uses along with it for it to work, and in order to know what to install against, we need to know which version of Node.js we're going to be using. The only way to know that is by creating our Lambda function. So use the other tab you have open, go to the Lambda console, and we're going to create our first function. Again, make sure you're in the same region, North Virginia, because it needs to be the same one as the Cloud9 environment. We're going to create the function and name it; I'm going to name it after the domain plus its purpose, so FrankieAllianceContactForm, using the domain up here as a reference. Then we choose a runtime. There are a bunch of languages, Ruby, Python, Java and so on, but we're using Node.js. You can use 10.x or 8.10; it's generally recommended to use 10, though there are use cases where you might need 8, and 6 is no longer supported, so it's off the table now, so to speak. We also need to set up permissions. We don't have a role, so we'll let Lambda create one for us, then hit Create Function and wait a few moments while it creates. Not too long, and here's our function. On the left hand side we have our triggers, which in our case will be API Gateway, and on the right hand side we have our permissions, and you can see that by default it gives us access to CloudWatch Logs. We're going to need DynamoDB and SNS in here as well, so we'll have to update those permissions. Giving a quick scroll down, there's actually a little Cloud9-style editor embedded in the function page, and we could have done all the coding in there, but I'm trying to set you up to be able to edit the web page as well.
Also, if your Lambda function gets too large, you actually can't use this inline editor, so you'd have to work this way anyway; I figured we might as well learn how to do it the Cloud9 way. To be clear, you can edit inline or upload a zip file, and as long as it's under 10 megabytes you can upload it and more or less edit all the files in the console, but if it gets too big you have to supply it via S3. All right, so we need those additional permissions, which means editing the role that was created for us by default. Let's make our way over to IAM, so I'll type in IAM, and once it loads, go to Roles on the left hand side and start typing Frankie; there's the role, and we're going to attach a couple of policies. We said we need SNS, so we'll give it SNS full access, and we'll also give it DynamoDB full access. At the associate level it's totally appropriate to use full access, but when you move toward the professional level you'll learn that you want to pare these down to only the exact actions you need; I don't want to burden you with that much IAM knowledge at this time. You'll see why in a moment, because when we go back to the Lambda function and hit the manual refresh, we can see what the function now has access to, and it's a lot. We don't just have access to DynamoDB, we also have DynamoDB Accelerator, which we're not going to use; we have access to EC2, which we don't need; auto scaling, which we probably don't need; Data Pipeline, and so on. That's the only problem with using the full access policies: you get a bit more than what you want. But for our purposes it's totally fine. So now that we have our role, we want to get our code uploaded. Go back to Cloud9, and down in the terminal make sure you're in the same place as me: type cd ~ (the tilde is your home directory), then /environment, then /function. One little thing I discovered is that the top directory that shows as FrankieAlliance in the file tree is actually the environment directory; for some reason they name it differently for display purposes, so just know that environment is this folder here. Now, we know our Node version is going to be 10, so type nvm list (nvm is the Node Version Manager) to see which versions of Node are installed and which one is being used, and by default it's using 10, so we're already in great shape to install our dependencies against the right version. So run npm install, or just npm i for short, and it installs all the dependencies we need; you can see they landed in the node_modules directory. So we have everything we want, and now we just need to get this down and into our Lambda function: we're going to zip it up and then upload it through the Lambda console.
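One quick aside before we package things up: I mentioned that at the professional level you'd pare those FullAccess policies down to just the actions this function actually needs. A scoped-down policy document for this function could look roughly like the sketch below; the account ID and resource ARNs are placeholders:

// Sketch: a least-privilege alternative to the two FullAccess managed policies.
// ARNs are placeholders; they would point at the real table and topic.
const leastPrivilegePolicy = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['dynamodb:PutItem'],
      Resource: 'arn:aws:dynamodb:us-east-1:123456789012:table/FrankieAlliance',
    },
    {
      Effect: 'Allow',
      Action: ['sns:Publish'],
      Resource: 'arn:aws:sns:us-east-1:123456789012:FrankieAllianceTopic',
    },
  ],
};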
So what I want you to do is right-click the function folder, click Download, and it will download to your Downloads folder. Then unzip it, because we actually just want the contents, not the enclosing folder. The whole point is that we're now including the node_modules directory, which we didn't have earlier. I'll compress just those contents, which gives us an archive, and then make your way back to the Lambda interface, use the drop-down to choose upload a .zip file, and upload that archive. Then hit Save so that it actually uploads; it doesn't take long, because you can see it's less than a megabyte, and so we can still access and edit the files in here. Again, if this was too large it wouldn't let us edit in here at all and we'd still have to do everything through Cloud9. Now that our code is uploaded we could go and try to run it, but better yet, let's learn how to sync it back to Cloud9 so we can keep editing it there. In Cloud9, if you go to the right-hand side to AWS Resources, there's a Lambda panel, and again, if we were in the wrong region, say US East 2, we wouldn't be able to see our function here. But here's the remote function, and if we want to continuously edit it we can pull it into Cloud9, edit it there, and then push it back up. That saves us the trouble of continuously zipping the folder. You could automate this process with CloudFormation or other serverless frameworks, but I find this is very easy, and it's a good opportunity to learn what Cloud9 can do. So now that we can see the remote function, press the import button to pull the lambda function into our local environment; it asks whether we'd like to import it, and yes, absolutely. This local copy is now the one we're going to work with, so we can ignore the other one. Whenever we make a change we can push it back up. We might not end up needing to do that here, but I want to show you that you have the ability, so let's actually try syncing it back. We'll make a superficial change, something that doesn't matter, just a short comment, and save the file; you can see the little marker next to the file change, which tells us it's been modified. Then go up to the Lambda panel and hit deploy, which sends the changes back to the remote function. I'll hit refresh there, then go back to the Lambda console, refresh it, and see if our comment shows up, and there it is. So that's how we sync changes between Cloud9 and the Lambda console. And again, if the file was too large we wouldn't even be able to see it in the console.
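If you'd rather not click through the console or the Cloud9 Lambda panel every time, the same push can be scripted from the Cloud9 terminal with the AWS SDK. This is just an optional sketch of that idea; the zip path and function name below are assumptions based on what we created above, not something the course prescribes.

```
// Sketch: pushing a local zip to the remote Lambda, roughly what the Cloud9
// deploy arrow does for us. The path and function name are assumptions.
const fs = require('fs');
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.updateFunctionCode({
  FunctionName: 'frankie-alliance-contact-form',     // the function we created earlier
  ZipFile: fs.readFileSync('/home/ec2-user/environment/function.zip')
}).promise()
  .then(res => console.log('updated', res.FunctionArn, res.LastModified))
  .catch(console.error);
```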
So Cloud9 would be our only convenient way of doing it. Now that we know how to do that, let's actually learn how to test our function, so let's proceed to that. In the Lambda console we can create a new test event, and I've prepared one for us: it's a JSON object with a body property, and the body is itself a stringified piece of JSON, because that's how it would actually come in from API Gateway. I'll name it something like "contact form test" and hit Create, and now we have something we can test with. I'm going to go ahead and hit Test, and we're going to get a failure, and that's totally fine, because if we scroll down it actually tells us that we have an undefined table name. The reason the table name is undefined is that we haven't configured the SNS topic ARN or the DynamoDB table name for this function; we need to supply that information, and I believe they're specified as environment variables. If I go back to Cloud9 and look at where the code configures the DynamoDB table name, there it is: it reads the table name from process.env, which means it's expecting it down here under the function's environment variables, and for SNS it's expecting the topic ARN the same way. So we need to go grab those two things, and we'll have better luck this time around. I'm going to open DynamoDB and SNS and swap those values in. For the table, it's just called Frankie Alliance, so that's not too complicated, and we'll apply it there. For SNS, we might have an additional topic in here since the last time we were here, and we need the ARN, so we'll go in, grab the ARN, paste it in, and hit Save. Now let's give this another try: hit that Test button and see what we get this time around, fingers crossed. And look, we got a success. If we want to take a look at the results, go down to Monitoring and then View CloudWatch Logs, where we can see our errors and our successes. Here we have a couple of log streams; I'll open this one up, and we can see that the body was passed along, that it was inserted into the DynamoDB table, and that we got the SNS response. These are all just the console.log statements I have within the code, so if you're wondering where these lines come from, it's those logs; I set them up so I'd have some visibility. Let's go take a look at DynamoDB and see if our record exists, and there it is, it's been added to the table. So now the real question is: did we get an email notification? I'm going to hop over to my email and take a quick look. Here I am in my email, and we literally got this email in less than a minute, with the information that was submitted.
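To tie together what we just saw, here's a sketch of roughly what the contact-form function is doing with those two environment variables: parse the body, put the item into DynamoDB, and publish to SNS so the email goes out. This mirrors the behaviour described above but is not the exact course code; the property names and environment variable names are assumptions, the real ones come from the code you uploaded.

```
// Sketch of the contact-form flow: parse body, write to DynamoDB, publish to SNS.
// Not the course's exact code; names of env vars and fields are assumptions.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();
const sns = new AWS.SNS();

exports.handler = async (event) => {
  const data = JSON.parse(event.body || '{}');
  console.log('body', data);

  // Write the submission to the DynamoDB table named in the environment
  await docClient.put({
    TableName: process.env.tableName,               // e.g. the Frankie Alliance table
    Item: { id: Date.now().toString(), ...data }
  }).promise();
  console.log('inserted into DynamoDB');

  // Publish a notification to the SNS topic so the subscribed email gets a message
  await sns.publish({
    TopicArn: process.env.topicArn,                 // the ARN we copied from SNS
    Subject: 'New contact form submission',
    Message: JSON.stringify(data, null, 2)
  }).promise();
  console.log('published to SNS');

  return { statusCode: 200, body: JSON.stringify({ message: 'success' }) };
};
```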
So there you go, our lambda function is working as expected. Now that we have that working, the next big thing is to hook our actual form up to this lambda function, and to do that we're going to need to set up API Gateway so the form has somewhere to send its data. Let's head over to API Gateway; if you need to find it, just type API up in the services search and you'll land on this console. We'll hit Get Started, close the welcome message since I don't care what it's saying, choose New API, and make sure the type is REST. We'll name it the way we've been naming everything, Frankie Alliance, leave the endpoint type as Regional, and go ahead and create the API. Now our API has been created and we have the default root resource. We could add multiple resources or methods, and we could work with the root directly, but I'm going to add a resource and call it transmit, naming it the same in both fields. We also want to enable API Gateway CORS, because we do not want to be dealing with CORS issues from the browser, so check that box and create the resource. Now that we have the resource, it gets an OPTIONS method by default, which we don't care about; what we want is a new method, and it's going to be a POST. The integration type is going to be a Lambda function. It asks whether we want to use Lambda proxy integration, where requests will be proxied to Lambda with the request details available in the event, and yes we do, that sounds good to me. Then we specify the Lambda region, which is us-east-1, and the lambda function itself. We need the function name, so flip back to the Lambda console, grab the name of that function, come back to API Gateway, supply it there, and save, confirming that we're fine with API Gateway being given permission to invoke it. So now our lambda function is more or less hooked up; we might have to fiddle with it a little more, but if we go back to the Lambda console and hit refresh, on the left-hand side we should now see API Gateway listed as a trigger. Back in API Gateway, to test this out and see if it's working, I'm going to go to Test, where we can fill in things like query strings, headers, and the request body. Flipping back to my prepared test event, you were probably wondering why I had it up; well, it was to paste into this box. I'm just going to change it slightly so we can tell the records apart, changing the name to Riker and the message to "bye", paste that into the request body, and hit Test. It looks like it worked, and we can double-check by going to DynamoDB and doing a refresh, and the second record is there.
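For reference, this is roughly the shape of what API Gateway hands the function when you use proxy integration: the whole request arrives as one event object, with the form fields JSON-stringified inside body. The field values below are made up for illustration, reusing the Riker example from the test we just ran.

```
// Sketch of a proxy-integration event as the Lambda receives it.
// Values are illustrative only.
const sampleProxyEvent = {
  resource: '/transmit',
  path: '/transmit',
  httpMethod: 'POST',
  headers: { 'Content-Type': 'application/json' },
  queryStringParameters: null,
  body: JSON.stringify({ name: 'Riker', email: 'riker@example.com', message: 'bye' })
};

// Which is why the handler starts with JSON.parse(event.body)
// rather than reading event.name directly.
console.log(JSON.parse(sampleProxyEvent.body).name); // "Riker"
```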
Obviously in CloudWatch, if we go back and refresh, we'd also have updated logs in here, and there's Riker, so he's in there. So our API Gateway endpoint is now working as expected. What we need to do next is publish this API Gateway endpoint, so let's go ahead and do that. In order to publish our API we need to deploy it; any time you make changes, you always have to deploy the API again. So I'll hit Deploy API, and we need to choose a stage. We haven't created any stages yet, so I'll type prod, which is short for production; it's very common for people to use that, you could call it something else if you like, but prod is generally what's used. Go ahead and deploy, and now it's deployed and we have this nice invoke URL. This URL is what we're going to use to invoke the lambda: all we have to do is copy it and send a POST request to /transmit on the end of it. So copy that URL, go back to the Cloud9 environment, and open the web folder. In there we have our HTML code and a function called formSubmit, and if we look at the JavaScript, there's a place to supply that URL. Where did I put it? Right here: formSubmit takes a second parameter, which is the URL, and I made it really easy for myself, so I'm just going to supply it there. It needs to be wrapped in double quotation marks, otherwise it's going to give us trouble. Now this formSubmit is going to send the form data to that endpoint; the path has to be /transmit, of course, so I'm double-checking that it says transmit and that it's using a POST, and it is, so that's all we need to do here. Now that we've changed index.html, we need to update it in S3 and invalidate it in CloudFront, so let's make our way over to S3 and upload this new file. First, download the updated index.html from Cloud9. Then use one of the other tabs, we'll take over the CloudWatch one since we don't need to keep all of these open, and make our way to S3. Once we're in S3, go into the www bucket for the domain and upload that new file; I believe it's in my Downloads, so I'll use Show in Finder, and there it is, I'll just drag it over and upload it. Now that file has been changed in S3, but that doesn't mean CloudFront has been updated, so we have to go over to our friendly service CloudFront and invalidate that individual file. Go to the www distribution, then Invalidations, then Create. We could put an asterisk, but we know exactly what we're changing, so we'll enter /index.html and create the invalidation.
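Before we test the live form, for context, here's roughly what that formSubmit call in the web code boils down to on the browser side: a POST of the form fields to the prod invoke URL with /transmit on the end. This is only an approximation of the course's helper, and the invoke URL below is made up; use the one API Gateway gave you.

```
// Sketch of what a formSubmit(fields, url) helper could boil down to in the browser.
// ENDPOINT is a made-up example; substitute your own prod invoke URL + /transmit.
const ENDPOINT = 'https://abc123xyz.execute-api.us-east-1.amazonaws.com/prod/transmit';

async function formSubmit(fields, url = ENDPOINT) {
  const response = await fetch(url, {
    method: 'POST',                                 // matches the POST method we created
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(fields)                    // becomes event.body in the Lambda
  });
  if (!response.ok) throw new Error('transmit failed: ' + response.status);
  return response.json();
}

// Usage: formSubmit({ name: 'Andrew Brown', message: 'Can I buy a spaceship?' });
```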
We'll just wait for that invalidation to finish and then go test out our form. After waiting a few minutes, the invalidation is complete, so let's see if our new form is hooked up. We need to go to the domain name, and since I always have a hard time typing it, I'm just going to go to Route 53, grab it, and paste it directly into the browser. Here's our form. I want to be 100% sure we're looking at the new version, because when you're working with Chrome things can be cached aggressively; see, it's still showing the old one even though we've definitely updated it. So I'll give it a hard refresh, and now the new version is there. Make sure you check that before you test, so you save yourself some frustration. Now it's the moment of truth: we're going to fill out this form and see if it works. I'll put in my name, Andrew Brown, my exampro.co email address, leave the phone number blank, and for the message put something like "Federation: I want to buy something. Can I buy a spaceship?" We hit Transmit, it's transmitting, and it says success. Go over to DynamoDB, do a refresh, and we can see that it's been inserted, so there you go, we're done all the way through. If we wanted to test the validation we could do that as well: if I just hit Submit with the form empty, it throws an error, so that works too. So we went through everything: we created a DynamoDB table, an SNS topic, and a lambda function, we used Cloud9, we hooked up API Gateway, we set up static website hosting, and we backed it all with CloudFront using Route 53. We did a considerable amount to get this form working, which is quite impressive. So now that we're done, let's go ahead and tear all this stuff down. It's time to tear down and do some cleanup. The resources we're using here pretty much don't cost any money or shut themselves down, so we're not going to be in trouble if we skip this, but we should learn how to delete things. First I'm going to go to DynamoDB and delete that table, and I don't want to create a backup, so we'll go ahead and delete it. Then we'll make our way over to SNS and delete that topic; there it is, we'll delete it and confirm by typing "delete me". Then we'll make our way over to Lambda and delete the lambda function. Then we'll go to IAM; IAM roles aren't that big a deal, but you might not want to keep them around, so we'll search for Frankie and delete that role. And then, since Cloud9 was running an EC2 instance for us, I'll close that tab, type Cloud9 into this one, and delete the environment, which terminates the instance; you can see I have a few others that didn't terminate, but we'll go ahead and delete this one, type "delete" to confirm, and it will clean itself up on its own. You will want to double-check that it's actually gone.
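As an optional aside, the same cleanup can be scripted with the AWS SDK for JavaScript if you'd rather not click through every console, including the S3 bucket emptying we'll do in the console in a moment. This is only a sketch: every name and ARN below is a placeholder for whatever you actually created, and it assumes the bucket has versioning turned off.

```
// Sketch of a scripted teardown. All names and ARNs are placeholders,
// and the bucket is assumed to have versioning disabled.
const AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });

const dynamodb = new AWS.DynamoDB();
const sns = new AWS.SNS();
const lambda = new AWS.Lambda();
const s3 = new AWS.S3();

async function emptyAndDeleteBucket(bucket) {
  let token;
  do {
    const listed = await s3.listObjectsV2({ Bucket: bucket, ContinuationToken: token }).promise();
    if (listed.Contents.length > 0) {
      await s3.deleteObjects({
        Bucket: bucket,
        Delete: { Objects: listed.Contents.map(obj => ({ Key: obj.Key })) }
      }).promise();
    }
    token = listed.NextContinuationToken;
  } while (token);
  await s3.deleteBucket({ Bucket: bucket }).promise();
}

async function teardown() {
  await dynamodb.deleteTable({ TableName: 'frankie-alliance' }).promise();
  await sns.deleteTopic({ TopicArn: 'arn:aws:sns:us-east-1:123456789012:frankie-alliance' }).promise();
  await lambda.deleteFunction({ FunctionName: 'frankie-alliance-contact-form' }).promise();
  await emptyAndDeleteBucket('www.example-frankie-alliance.com');
  console.log('table, topic, function, and bucket deleted');
}

teardown().catch(console.error);
```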
But I want to get through this all with you, so we're not going to sit and watch that. Next we want to delete our API Gateway. I rarely ever delete APIs, so let's see: we go up to the top, choose Delete API, and enter the name of the API to confirm; it can't just be "delete", sometimes it's "delete me", sometimes it's the resource name, they give you all these different confirmation prompts. We'll hit Delete API, and now that's gone. Then we need to delete our CloudFront distributions. We have two of them, and in order to delete them you have to disable them first, which takes forever, something like 20 to 30 minutes. Once they're disabled they'll show as off and then you can just delete them; I'm not going to stick around to show you that, as long as they're disabled that's good enough for now, but you should take the extra time to go back and delete them once they finish. We also want to delete our ACM certificate if we can, so we'll make our way over there; if I hit delete right now it won't let me until those distributions are gone, so wait until they're fully deleted and then go ahead and delete the certificate. I'm not going to come back and show you that, it's just not worth the time, some of these things take too long. Then we'll go into the Frankie Alliance hosted zone in Route 53 and remove those records. You don't want to keep records around that are pointing to nothing, because if those CloudFront distributions are gone, there are ways people can compromise that and have those records point somewhere they control, so don't keep them around. Then we'll go to S3 and delete our buckets. Generally you have to empty them before you can delete them; I don't know if AWS has made this a little easier now, but traditionally you always had to empty them first. So I'll hit Delete there, and we'll try the www bucket as well, and look at that, that's nice, you don't have to hit empty anymore. So I think we have everything cleaned up with the exception of CloudFront and ACM; again, once the distributions are disabled you delete them, and then you delete the certificate. So we have fully cleaned up, and hopefully you really enjoyed this. Alright, now it's time to book our exam, and it's always a bit of a trick to find where this page is. If you search for AWS certification and go there, then maybe go to the training overview and click Get Started, it's going to take you to aws.training, and this is where you register to take the exam. In the top right corner we have to sign in; I already have an account, so I'm just going to log in with it, provide my credentials, and I'll see you on the other side to show you the rest. So now we're in the training and certification portal. Across the top we have one-stop access to training, and to get to booking our exam we go to Certification, then to our account, and we'll be using CertMetrics, the third-party service that actually manages the certifications, so we go to our CertMetrics account.
And now we can go ahead and schedule our exam, so we'll choose Schedule New Exam, and down below we get a full list of exams. It used to be just PSI, but now everything lists both PSI and Pearson VUE; these are networks of test centers where you can go and sit the exam. For the Cloud Practitioner you can actually take it from home now, it's the only certification you can take from home and it's a monitored exam, but the rest have to be done at a test center. So I'm going to show you how to book with either PSI or Pearson VUE, and they have different test centers, so if you don't find a center in your area with one, give the other a look so you can actually book that exam. Let's take a look at an exam; maybe we'll book the professional here, so I'll open the PSI option in one tab and the Pearson VUE option in another and review how to book through both portals. Let's start with PSI, the one I'm most familiar with, because Pearson VUE wasn't available the last time I checked. Here you can see the duration and the confirmation number, and you definitely want to make sure you're taking the right exam; sometimes there are similar exams, like the old versions, that will show up here, so just be 100% sure before you go ahead and schedule. It even tells you that there's more than one version available here, and that's fine. We'll hit Continue, then select our language, and then we get to choose our test center. The idea is to find a test center near you, so I'll enter a city, Toronto (I don't know why it thinks I'm somewhere else), and search for exam centers. We get a bunch of them, and the closest one in Toronto is at the top, so I'll click it and it shows me the available times I can book. There aren't a lot of times this week; generally it has to be two or three days ahead, and every time I've booked an exam it's never been the next day, but it's going to vary based on the test center. This one only lets you book Wednesdays and Thursdays, so if we took the Thursday at 5 pm, we'd choose that, hit Continue, and Continue again, and the booking is created. To finalize it you just have to pay, and the price is in USD, so fill that out, and once you've paid you're ready to go sit that exam. That's how it's done with PSI, so now let's take a look at Pearson VUE; I'm going to clear this out, because I'm not actually serious about booking an exam right now. In the Pearson VUE flow you first need to choose your preferred language; I'll choose English because that's what I'm most comfortable with, and hit Next. It then shows the price, we say Schedule This Exam, and we can proceed to scheduling, although it gives me a lot of superfluous options along the way.
Here we can see locations in Toronto, so these are the test centres, and there's a bit of variation; you can see some different offerings, and you might also see the same test center listed for both providers. I can choose this one, and it lets you select up to three centers to compare their availability, so sure, we'll select three and hit Next. After waiting a moment, we just choose when we want to take the exam; we have the three options to compare, so we'd pick, say, the 11 o'clock time, review that information, and proceed to checkout.
Info
Channel: freeCodeCamp.org
Views: 1,416,121
Rating: 4.9723167 out of 5
Keywords: aws tutorial for beginners, aws certified solutions architect, aws full course tutorial, aws full crash course, aws solution architect certification, aws interview questions, aws ec2 tutorial, aws ec2 tutorial for beginners, aws lambda, aws lambda tutorial, aws s3, aws s3 tutorial for beginners, aws iam, aws iam tutorial, aws cloudformation, aws cloud computing
Id: Ia-UEYYR44s
Length: 626min 18sec (37578 seconds)
Published: Mon Dec 23 2019