CloudWatch - Dashboards, Alarms, Events

Video Statistics and Information

Captions
So today we are going to see how to set up an SNS topic and how to subscribe to it. We have already seen that SNS can deliver notifications in email, JSON, or SMS format; now I will show you how to create a topic and subscribe to it. Choose the SNS service from the Services menu, and under Topics click Create topic (the same option is also shown on the dashboard). On my screen I already have one topic; if you remember, we used it in one of the earlier Lambda discussions for the S3 event notification demo. We need a new topic for monitoring, so click Create new topic and give it a friendly name. I am going to call it server-monitor; the console reminds me the display name can only be ten characters, so let me shorten it. Click Create topic, and you can see server-monitor has been created and it has an ARN.

Under a topic you can have any number of subscriptions. A subscription works just like the email subscriptions you sign up for to get newsletters, deals, or offers. You can go to Subscriptions on the left-hand side, which lists every subscription in the account for every topic, create a subscription there, add an email address, and attach it to a topic; or you can open the topic itself and choose Actions, Subscribe to topic. It asks which protocol I want to use: HTTP, HTTPS, Email, Email-JSON, a Lambda function for some downstream action, SQS, or an application endpoint identified by an application ID. In this demo we are going to set up an email subscription, so a simple email address will suffice; I am typing one of the dummy addresses I use for quite a lot of things. Amazon then sends a confirmation email to that address, and you need to click the link in it to complete the subscription. I kept my mailbox open, and the new mail has already arrived: an AWS Notification Subscription Confirmation. Open it and click Confirm subscription. Once you do that, Amazon has verified that the user really wants to subscribe, so it can confidently publish messages to that address. That is how simple it is to set up a notification topic, and under the topic you can add as many subscriptions as you want. If you have a mobile device or application that should receive these messages, you can do the same thing; your endpoint just needs some mechanism to process the email or JSON messages, the way Gmail knows how to receive an email and put it into a mailbox.
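For readers who prefer to script this instead of clicking through the console, here is a minimal boto3 sketch of the same two steps; the topic name and email address are placeholders, and the subscription still has to be confirmed from the recipient's inbox exactly as in the demo.

```python
import boto3

sns = boto3.client("sns")

# Create the topic; the call is idempotent and returns the topic ARN.
topic_arn = sns.create_topic(Name="server-monitor")["TopicArn"]

# Add an email subscription; it stays "PendingConfirmation" until the
# recipient clicks the link in the confirmation mail, as in the demo.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",
    Endpoint="user@example.com",   # placeholder address
)
```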
Now that we have a notification topic and a confirmed subscription, let us head over to CloudWatch and set up monitoring. We are going to monitor the web server I have here for a CPU metric, in other words whether the server is under load and whether it is up and running. If you go to the instance's Monitoring tab you can see a lot of metrics: CPU utilization with a small spike, how many disk writes, and so on. We are going to set a monitoring alarm for this server. Since it is up and running we do not have to change anything here; just copy the instance ID so we can quickly configure the CloudWatch alarm for it.

Go to CloudWatch (search for it under Services and it takes you to the dashboard), click Alarms, and then Create Alarm. A lot of services are supported here; the dashboard shows that all the CloudWatch metrics have been loaded and there are 788 metrics in total, and for EC2 alone there are about 378 metrics you can pick from to monitor the service you are interested in. Click EC2 just to see an example of what is available: it lists all the previous instances and the different metrics you can choose for each of them. Paste in the instance ID we copied to filter the list. It takes a while for all the metrics to come online; for example, network packets in and out might not be visible immediately, because CloudWatch needs some time to collect and collate them. By default you will see at least a couple of metrics straight away, and here we can already see fourteen of them: CPU credit balance, credit usage, CPU utilization, and the status check failures, both at the system (hypervisor) level and at the instance level, which tells you whether the virtual machine is receiving traffic and performing the function it is supposed to perform. Those are the metrics available for this particular instance. If you look at Auto Scaling you get a different set of metrics; this instance is not in an Auto Scaling group, so nothing shows up there with the instance ID filter, but once I remove the filter all the relevant Auto Scaling metrics appear. Let me remove that filter and put the instance ID back.
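If you want to see the same metric list programmatically rather than browsing the console, a small boto3 sketch like the following would do it; the instance ID is a placeholder.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# List the EC2 metrics reported for one instance.
response = cloudwatch.list_metrics(
    Namespace="AWS/EC2",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
)

for metric in response["Metrics"]:
    print(metric["MetricName"])   # e.g. CPUUtilization, StatusCheckFailed, ...
```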
For our demo we are going to choose CPUUtilization, so that we can stress the server and find out whether the alarm is triggered. I will name the alarm CPU-Monitor. On the right-hand side you specify the period you want to evaluate; I am going to set it to one minute, so the metric is evaluated every minute. The condition is: if the average CPU is greater than or equal to, let us say, 40 percent for, let me make it three consecutive periods, that is three minutes in a row, then I should get an alarm.

To get the alarm, come down to Actions. Here you choose which state should trigger the notification: ALARM, OK, or INSUFFICIENT_DATA. I want to be notified when the state is ALARM, and the place to send the notification is where the SNS topic comes into the picture. I want to select the topic we just created, but it is not visible; sometimes the CloudWatch console does not pick up the latest SNS topics, so let me refresh the screen, go back to the instance ID and CPUUtilization, and click Next. There we go: server-monitor appears now, so I select it, with the condition of at least 40 percent for three consecutive one-minute periods. The email address list attached to the topic is filled in automatically; you do not have to do anything. If Auto Scaling is configured you can also add an Auto Scaling action here and choose which scaling group should be affected and whether you want a scale-in or scale-out action; right now there are no groups. And remember the EC2 recovery action we spoke about earlier: it is possible to recover, stop, or reboot a server when an alarm is triggered, but that works only on certain C-type or M-type instances, not on a t2.micro. For a simple, quick demo let us keep this configuration and click Create Alarm.

Initially the alarm collects data for some time. If CloudWatch already has three to five minutes of data you will immediately see OK; otherwise you might see INSUFFICIENT_DATA, which simply means CloudWatch does not yet have enough data points to decide whether the alarm should be triggered. In this case I am asking it to look at a three-minute window and it has enough data to make that decision. Let me widen the panel so the graph is visible; my screen is a little messed up, so I will just move it this way. Going back to Alarms, you can see only a very small spike on the graph, which means my CPU is nowhere near 40 percent and everything is fine. So that is one way to create an alarm.
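The alarm we just clicked together can also be expressed with a single API call. A hedged boto3 sketch, where the instance ID and the server-monitor topic ARN are placeholders for your own values:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="CPU-Monitor",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=60,                      # one-minute periods
    EvaluationPeriods=3,            # three consecutive periods
    Threshold=40.0,                 # average CPU >= 40 %
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:server-monitor"],
)
```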
There is something else you can do as well: CloudWatch Events. Based on certain rules, an event can be triggered; for example, when the status of the server changes to stopped or terminated, you may want to get notified, and you can do that with this option. It is quite powerful, similar to S3 notifications, and it also covers scheduling: say every morning you want to start a server at 6 o'clock and shut it down at 10 o'clock at night, a cron job or scheduled event; this is where you come and do that. Click Create rule and you see two options, Event Pattern or Schedule. If I choose Schedule it asks for a cron expression; if you are not sure how to write one, go ahead and read the Learn More link, which explains the options properly. For our demo we are going to choose Event Pattern, with the service name EC2, because we have a server running on EC2, and for the event type I want all events, whether the EC2 state has changed or anything else is happening in EC2. Then choose where to send the notification: click Add target, and once again you can pick an SNS topic, SQS, a command to run, or a Lambda function to invoke; quite often people use Lambda or SQS here, or call other services directly. In our example we will use SNS, choose server-monitor again, and the input is matched automatically. That is all we have to do; click Configure details at the bottom, give the rule a name, ec2-event-notifier is what I am going to call it, and leave the state checked as Enabled so the rule goes live immediately. As of now I am not doing anything in EC2, so there will not be any emails in my inbox; as soon as something happens, we will get the notification there.
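The same rule can be created with the CloudWatch Events API. A rough boto3 sketch, with the rule name, account ID, and topic ARN as placeholders; note that when you go through the console the permission for CloudWatch Events to publish to the SNS topic is added for you, whereas with the API you would also have to allow events.amazonaws.com in the topic's access policy.

```python
import json
import boto3

events = boto3.client("events")

# Match every event emitted by EC2, like the "all events" choice in the console.
events.put_rule(
    Name="ec2-event-notifier",
    EventPattern=json.dumps({"source": ["aws.ec2"]}),
    State="ENABLED",
)

# Point the rule at the server-monitor SNS topic.
events.put_targets(
    Rule="ec2-event-notifier",
    Targets=[{
        "Id": "sns-server-monitor",
        "Arn": "arn:aws:sns:us-east-1:123456789012:server-monitor",
    }],
)
```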
Now let me go back to CloudWatch. The alarm keeps evaluating itself, and if it finds that something has gone wrong the state changes here; right now CloudWatch is still working out whether it has enough data to trigger an alert. To actually trigger it, I am going to go to the server's console and stress it so the CPU spikes. I am copying the stress command over to the server; 400 seconds is a little more than five minutes, so we should definitely trigger at least one alarm. Let me start it, and if I go to the top command in my performance monitor you can see the CPU has immediately spiked to nearly 100 percent. It will take at least three minutes for the spike to show up on the CloudWatch dashboard and for the alarm email to reach us. And there it is: an alert notification in my email, the alarm has been triggered by CPU-Monitor, and it says you are receiving this because CPUUtilization was greater than or equal to 40 percent and the state changed from INSUFFICIENT_DATA to ALARM. If I go back now, while we were speaking the alarm returned to the INSUFFICIENT_DATA state, because the CPU spike has subsided and the metric no longer meets the alarm condition; it is still collecting data to confirm the OK state, so it will wait another three to five minutes and then put the alarm back into OK. From the EC2 dashboard's Monitoring tab I should also be able to see the spike in the CPU utilization graph; the one I was looking at is burst balance, not CPU utilization, so let me go back to the CloudWatch alarm itself. If I click on it and break the graph down to one-minute granularity, you can see a spike spanning roughly 15:15 to 15:20 reaching about 60 percent CPU. That is why my alarm was triggered, and I got an email as well. Typically, you can then forward such a notification to a downstream system like LogicMonitor, ServiceNow, IBM Tivoli, or anything you are comfortable with, and build an auto-ticketing system on top of it. So that is how a CloudWatch monitoring setup works.

Next I am going to demonstrate how the rule we configured in CloudWatch Events for event-based monitoring works. This fires only when there is a state change; as of now my server is running, so nothing is happening and no alert has been triggered. What I am going to do is turn the server off, in other words stop the instance via Instance State, Stop. Before that, one more look at the CPU utilization graph: I cannot narrow it down further, but I can zoom in, and you can see the spike reached about 60 percent; that is the spike we created a little while ago. Now let me change the state to stopped; it is currently running, so click Stop. And if you remember, we skipped this earlier when we were looking at EC2: you can also create an alarm for an instance directly from the EC2 dashboard screen, and the Create Alarm option there takes you to the same CloudWatch dashboard we used a moment ago. It is showing that there is one alarm and that it has been triggered. If I go to my emails now, there is the notification from CPU-Monitor at 8:58, which says the CPU was greater than or equal to 40 percent, and there is one more notification, this time in JSON format, which says the state is "stopping" and was received at around 9 p.m.
At 9 p.m. again the state has changed to stopped, so for any change of state I get an email. If I now start the server again, stopped will change to pending and pending to running, so I should get two more emails. Let us try that as well: since the server is stopped, I click Start. As the state changes, SNS picks up the pending state and sends an email; you can see one has already arrived, and if I open it the state is pending. If we wait a short while, pending changes to running; it is running now, SNS should pick that up at any time, and the mail may already have arrived by the time I refresh. There we go, the mail is here, and it says the state has changed to running. So that is how event-based notification and triggering for EC2 can be done in AWS.

Now we are in the SES service, and we are in the Virginia region, since SES is not available in all regions. The first thing you need to do is verify the email address to which you want to send emails; in other words, just as we subscribed and confirmed for the SNS topic, you need to verify the recipient address. It is very simple: say abc@example.com (example.com, by the way, is the globally agreed sample domain that you can use for demo or testing purposes). Click Verify This Email Address; the third party gets an email, and once they click the link that address is approved. You can see the message that a verification email was successfully sent and that it may take up to an hour to arrive in the user's inbox; the address shows as pending verification, and you can resend the mail if needed. Only when an address is verified will you be able to send email with it using SES. In my case I already have one of my dummy addresses set up, and we will use that to send an email; let me remove the example entry, since I do not want spammers sending mail to it.

You can send a test email from the console, but what typically happens is that people use SES programmatically; they do not sit on the dashboard sending millions of emails, they access the service from their applications or mobile apps through the SES endpoints. On the left-hand side you also have email templates: you can configure a template like "Hello <user>", where the username is filled in by the application, and the rest of the content is more or less the same, which is especially true for promotional messages where everything except the username is identical. You can create a template and send with it. Go ahead and play around with these settings; they are simple and straightforward.
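Since the video stresses that SES is normally driven from application code rather than the console, here is a hedged boto3 sketch of the two steps shown: verifying an address and sending a test mail. The region and addresses are placeholders, and while the account is in the SES sandbox both sender and recipient must be verified.

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")   # SES is not in every region

# Sends the verification mail to the recipient, as in the demo.
ses.verify_email_identity(EmailAddress="abc@example.com")

# Once the address is verified, a simple test message can be sent.
ses.send_email(
    Source="sender@example.com",                      # must also be verified
    Destination={"ToAddresses": ["abc@example.com"]},
    Message={
        "Subject": {"Data": "SES test email"},
        "Body": {"Text": {"Data": "Hello from Amazon SES."}},
    },
)
```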
If you have a domain name from which you want the emails to appear to come, say mywebsite.com, you verify the domain as well: you type in your domain name and Amazon will generate DKIM keys, or as it says, DomainKeys Identified Mail. In other words, they verify that you own the domain, and then you are allowed to send signed email from it. They do this because if you send a lot of spam, that is, if the users receiving your mail keep marking it as spam, Amazon will write to you and simply block your domain from sending any more emails; they are very particular about it. There are also sending limits: you can send only a certain number of emails per second, and there is a per-day limit as well, so that you do not misuse the service and start firing emails at everybody.

Having said that, let me quickly send a test email. I will use my verified address as both the From and the To address; if anybody in the chat wants to share their email address and confirm whether they receive it, I can add it with a comma, but most likely nobody will, so let me just click Send Test Email. I do not think the counter on this particular topic will change, but meanwhile let us also check the notification for the terminated state from the earlier demo; opening it, the state shows terminated. Now let me open my inbox and see whether the mail from SES itself has arrived: you can see there is a test email from SES in the inbox, and if I expand it, the sending domain shows up as something like amazonses.com. If you do not want it to display amazonses.com, verify your own domain here so that the emails appear to come from your domain name. That is how Amazon SES is typically used by people to send their emails. You can also set up receipt rules and filters, for example rejecting mail coming from certain addresses, and rules for bounces: whether you want to resend, whether you want to keep track of bounce messages, and so on; all of that is possible here. There is also a reputation dashboard, which is simply Amazon's view of whether your account is in good standing and whether you are spamming people or not; it takes the feedback on all the messages that have been sent and derives this status. If it goes unhealthy you will not be allowed to send more emails, so make sure you maintain a healthy status if you want to use this service for professional purposes.

Next we are going to see what Amazon RDS is, what it means, how you can use it in your own architectures when you are talking to clients, what the pieces are that make it up, and finally a demo of RDS itself. RDS is part of the database-as-a-service portfolio provided by Amazon, and there are multiple database options available; in this session, over the next few minutes, we are going to spend our time on RDS, the Relational Database Service as it is fully known.
The other most popular managed options provided by Amazon are DynamoDB and ElastiCache. RDS itself is a relational, SQL-style database in the cloud, and it is a managed database. What I mean by that is that almost all the administrative tasks are handled by Amazon: installing the engine, setting up a cluster, updating and upgrading it, taking snapshots, making sure encryption is available and performance is maintained. You do not have to spend time configuring and managing it; that is what a managed database means. You can think of it as Amazon sitting behind you and doing all the disk migration and partitioning, and ensuring that the availability and resilience parts are taken care of.

There are several features worth knowing. It is very simple to use: it takes only about five to ten minutes to set up an RDS instance (it may take longer to come online), and it is one of the easiest database systems I have encountered to set up. It is comparatively cost-effective when you compare it with an on-premises database system providing the same functionality. In terms of security you have encryption, you run it inside a VPC, and you can use IAM roles so that only certain people can access RDS itself. You get predictable performance: just like an EBS volume, you can specify the IOPS you want on SSD storage and ensure that performance stays consistent throughout. It scales to demand: as the number of connections grows, you can scale your RDS instance up as well, and if you are struggling for more IOPS, all you have to do is go to RDS and increase them so that more connections and read/write activity can be handled. RDS snapshots are stored in S3 whenever a snapshot is taken, so they are very durable. You connect to RDS through an endpoint, which is a routable DNS name, just like CloudFront, so it is compatible with a lot of other applications. And finally, as I said, the administrative overhead is comparatively low compared with an on-premises database.

The number of engines supported is quite broad. You can have the MySQL engine; MariaDB, which is another variant of MySQL (MySQL is owned by Oracle, while MariaDB is maintained as an independent open-source project); PostgreSQL; and Microsoft SQL Server. Amazon has taken MySQL and MariaDB, improved them, and calls the result Amazon Aurora; this engine is fully compatible with MySQL, so you can take a MySQL database and migrate it to Aurora without any problems. As I said, they have improved it into what is really a different database, but it remains fully compatible with anything that works against MySQL. And most importantly, you can also run the giant, Oracle, inside RDS, so running Oracle in the cloud without management headaches is possible as well.
When you talk about RDS instances, the instances are the building blocks. They are basically like your EC2 boxes, only instead of t2 or M series instance types you have DB instance classes such as db.t2.micro and the db.m family in large, xlarge, 4xlarge, and so on. You can create multiple databases inside a single DB instance; it is not a one-to-one mapping, so one DB instance can hold several databases. Each of those instances is isolated, and they all live in the cloud; there is no on-premises component. And if you set up a high-availability configuration, which we will see in a short while, the standby copy is created automatically for you.

On security: you can run your RDS instance inside your VPC. When you launch it, it asks which VPC you want, which security group, whether you want encryption, whether you want SSL connections, or whether you want to use the database engine's own security features; that is also possible. Oracle, for example, comes with its own security options, and Amazon has a feature called parameter groups through which you can apply them, so there are multiple ways to secure your database in the cloud.

Moving on to how backups work: just as on premises, there are two options. Automated backups work like the cron or scheduled jobs we saw earlier with CloudWatch; you can schedule a snapshot backup for your RDS instance, and it gives you the option of briefly pausing the database so a consistent backup can be taken, in other words a minor downtime during a backup window you choose. The retention period can be set up to a maximum of 35 days; if you want to keep data longer, take a snapshot, replicate it across regions, and keep it in S3 for as long as you want. You can also take a manual snapshot; nothing stops you from doing that. Instead of waiting for the daily 8 o'clock backup, if you have made a lot of changes at 2 o'clock and want a copy, go ahead and create one manually. The interesting thing is that manual snapshots are not deleted automatically, whereas snapshots from automated backups are removed after the set retention period. Manual snapshots stay as long as you need them, so you have to clean them up yourself; typically, if you took a snapshot in production you probably need it and will only delete it after making the necessary checks, which is why Amazon does not clean them up automatically.

There is also something called cross-region snapshots, which means that for disaster recovery scenarios or for compliance reasons you can move the data across regions. You initiate the copy in your account: as shown in the image, the instance is running in the Virginia region and you can copy the snapshot to, say, the Tokyo or São Paulo region. This provides a very good disaster recovery option, but remember that it is asynchronous, not synchronous, so there is a small time lag and the copied data will be slightly behind the source. It is good for disaster recovery but it is not a real-time recovery solution; there is going to be a small amount of data loss if you propose this as your only backup and recovery solution.
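A hedged boto3 sketch of the manual plus cross-region snapshot idea described above; all identifiers are placeholders, and in practice you would wait for the snapshot to reach the available state before copying it.

```python
import boto3

# Take a manual snapshot in the source region (e.g. Virginia).
rds_source = boto3.client("rds", region_name="us-east-1")
rds_source.create_db_snapshot(
    DBInstanceIdentifier="my-database",
    DBSnapshotIdentifier="my-database-manual-snap",
)

# Copy it into another region (e.g. Tokyo) for disaster recovery.
rds_target = boto3.client("rds", region_name="ap-northeast-1")
rds_target.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:snapshot:my-database-manual-snap"
    ),
    TargetDBSnapshotIdentifier="my-database-dr-copy",
    SourceRegion="us-east-1",
)
```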
So ideally you combine this with continuously shipping the commit logs, redo logs, or binary logs, snapshotting them regularly (for example on an EBS volume), and pushing them to the other region as well, to get a full disaster recovery solution.

There are two more things to know about RDS: parameter groups and option groups. Every database engine, whether MySQL or Oracle, comes with a lot of parameters to fine-tune or customize it to suit your requirements. One of the best examples is time zones: in India the whole country uses IST, but in Europe or North America there are multiple time zones, so to ensure transactions are recorded consistently across time zones you may want to customize the database for a particular region or set it to UTC, whichever way you want. Those kinds of customization parameters go in your DB parameter group. Amazon provides a default parameter group, and if you start a database engine without specifying one, the default is applied; if you want to customize, create your own parameter group, set your parameters in it, and use that. Likewise, Amazon has option groups, which let you customize optional features and management behaviour of the engine, such as how backups and minor upgrades are handled; those things can be customized using DB option groups.

If I show you a simple RDS architecture that is not highly available, it looks like this: two EC2 servers in the middle, a load balancer at the top, and both servers talking to a single master instance. In this case there is no Multi-AZ standby (the slide says Multi-AZ, but there is no standby here), meaning that if the master goes offline the data is not available; it is not a fault-tolerant or highly available configuration. If you want a highly available configuration you need to use the option called Multi-AZ deployments. What it means is that when you create the RDS instance, the master copy is placed in one Availability Zone and a standby replica is created in another Availability Zone, with the data copied synchronously. That is the difference from cross-region snapshots: in Multi-AZ everything is synchronized, so if for some reason the master goes offline, the standby replica is promoted to master and traffic is shifted to it automatically. Your applications will not see any downtime and you will not lose any data if one Availability Zone goes offline. That is how a Multi-AZ deployment for RDS works; there is an additional cost associated with Multi-AZ, but production deployments typically go with it. Architecture-wise it looks like this: a load balancer talking to four EC2 instances, and those instances talk only to the master database. You cannot talk to the standby directly; only Amazon can. Only if something goes wrong with the master is the standby promoted, and then once again you are talking to your new master server. That is how you design a resilient database tier for your applications, and that is how a relational database looks in the cloud.
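For reference, a Multi-AZ instance with automated backups, as described above, could be provisioned with a call roughly like the following; every identifier, size, and credential here is a placeholder, not a recommendation.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="my-database",
    Engine="mysql",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=20,                    # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder credential
    MultiAZ=True,                           # synchronous standby in another AZ
    BackupRetentionPeriod=7,                # automated backups, 35 days maximum
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
)
```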
When we compare relational databases with the other database systems available in the market, there is the category of NoSQL databases. DynamoDB is Amazon's variant of a NoSQL database; there are others like MongoDB, Cassandra, HBase, CouchDB, and many more. A good use case to understand them is the images we all take: a person like me will take a certain number of photos, someone like you might take many more, and every day each of us is uploading hundreds or thousands of images. Somebody has to store all of them; let us say we are all uploading to Facebook or Instagram, Facebook pushes them out as feed items to other people, and the likes and comments for each of those images have to be stored separately. The complexity keeps multiplying: for each person there are hundreds of images, and for each image I have to track some number of comments, likes, shares, and feed items. Writing all of this into a relational database is not going to be very efficient, and that is where systems like NoSQL databases come in, where things are stored as collections rather than as relationships between the first record and the second record. DynamoDB is a very good example for this use case.

I will show you some representational views of how the data is stored so you can better relate to what a DynamoDB or NoSQL database looks like, but first, a couple of words about DynamoDB itself. It is, once again, a managed database, and the dial for performance is simple: it lets you provision read and write capacity units, where a read unit covers roughly 4 KB and a write unit roughly 1 KB. You tell Amazon how many read units and how many write units you need, and Amazon ensures that level of performance is always met for you. Remember also that the data is stored on SSD disks, so performance is very high; the cost is also on the higher side, so I would not call it the most cost-effective option, but it shines when you want a very high-performance, very low-latency database. Think of the image upload example we just discussed: people are not going to wait for images to appear on the screen before they comment or hit like; they want to click the like button and move on to the next image, so you want a database with really high performance. And these are all tiny transactions: when you click the like button you are sending a few bytes, not even kilobytes, to the server, so you do not need a transactional SQL database for that. DynamoDB, or any NoSQL database, works fantastically well here, which is one reason Facebook created Cassandra to store column-oriented, NoSQL-style data.

For a quick comparison, this is how SQL and NoSQL look side by side. In SQL the data is stored as rows and columns, whereas in NoSQL it is stored as keys and values. To see what I mean by key and value, look at the example at the bottom of the screen: in SQL there is a table for books with multiple columns, such as the identifier (ISBN) of the book, the title, the author, and the format, whether it is a hardcover or a softcover.
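To make the provisioned read/write capacity units concrete, here is a hedged boto3 sketch that creates a small table for the books example; the table name, key, and capacity numbers are illustrative only.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Books",
    KeySchema=[{"AttributeName": "ISBN", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "ISBN", "AttributeType": "S"}],
    ProvisionedThroughput={
        "ReadCapacityUnits": 5,    # a read unit covers an item of up to 4 KB
        "WriteCapacityUnits": 5,   # a write unit covers an item of up to 1 KB
    },
)
```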
If I want to store the same data in a NoSQL format, I create one document, or collection entry, like this for each book; the whole entry could be a JSON document, or each entry could be a separate file. My identifier is the key and the ISBN number shown is its value; similarly I have a key called title and a value that is the title of the book. That is how NoSQL stores your data: key-value pairs. In SQL the structure is fixed, with standard columns like ISBN, title, and format; if you want to change it you add another column, and either the values for all existing rows have to be updated or you declare the column as not mandatory so it can hold nulls, but either way the impact is across all rows. With NoSQL there is no need to fix the schema; in other words, the schema can be dynamic: for one book I may have ten key-value pairs, for another book only four. That is the flexibility the DynamoDB or NoSQL format gives you.

Finally, when it comes to querying, that is where the other big difference shows up. SQL uses SQL syntax, something like SELECT * FROM table, or in this case SELECT ISBN FROM books WHERE title LIKE 'cloud%', which picks up the matching ISBN values. Here, though, querying is based on collections: you say "find me the item where the ISBN value equals this", which may have to search through a lot of items before finding the exact one. You can create an index, but scanning is not always very efficient. And if you have a combination of conditions, like a join, say I want ISBN and title and author together, SQL is more performant than NoSQL. I sometimes get questions on this, or clients asking: what if I have join queries, what if I have very complex queries, should I go with NoSQL or SQL? If you have very complex queries, evaluate whether SQL benefits that client; but if the application itself is changing and the application's data can suit NoSQL, then go ahead and recommend NoSQL.
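The "find me the item whose key equals this" style of access described above is a single key lookup in code; a small sketch against the illustrative Books table from earlier:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Fetch one book document directly by its key; no SQL, no joins.
response = dynamodb.get_item(
    TableName="Books",
    Key={"ISBN": {"S": "978-1111111111"}},
)
print(response.get("Item"))   # the full set of key-value pairs for that book
```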
On scalability: because there is a relationship between the first record, the second record, and the third, a SQL database can essentially only scale vertically, meaning you add CPU and memory to the same server and keep adding; you cannot just duplicate the database. Horizontal scaling is different: in NoSQL everything is stored as separate collections or objects with no relationship between one object and the next, so you can immediately start another server, begin storing your collections there, and index them or not as you wish. So scalability is very high with NoSQL, whereas it is a big constraint with SQL-based databases. The last thing you need to know is the use cases, where people actually use each of these. This is a quick snapshot, and we have touched on it before, but to summarize: if you have existing database applications or business-critical, process-centric workloads where there is a flow, a sequence of actions, then a relational database makes a lot of sense. But if you do not know how many writes or how many reads are going to happen at any point in time, and your application is distributed across the web, then NoSQL is a fantastic fit. As I said, small reads and fast writes typically favour NoSQL, while transactional or complex queries favour a SQL database. If you do not have transactional queries but you do have range queries, like "what are the photos I uploaded over the last one year", a simple query on my user ID where all the images have to be picked up, and you are not going to update many items at the same time, then this kind of database makes sense for you.

On scaling, we spoke about this: if you want to scale a relational database you do clustering or partitioning; these are database concepts. Partitioning means, for example, splitting horizontally so that records from the last year are stored in one database and new records for next year in another; you can call that horizontal scaling, but there is still a link between last year and this year, and that is what partitioning means. Sharding is more of a fully horizontal concept; I would highly recommend you go and read up on partitioning and sharding if you are not familiar with them. In NoSQL, by contrast, it is seamless: you just start another server and begin storing your collections on the new server, because there is no linkage between the first piece of data and the second, or between the collections of objects, so it just keeps scaling as much as you need. Performance in a relational database depends on how you architect it, the quality of service in other words; with DynamoDB it is automatically optimized, meaning performance is a simple parameter of how many reads and how many writes you want, and you get that level directly. In RDS, the type of query you are running, the kind of table you have created, how many columns there are, and many other things come into the picture to achieve the performance your applications require. That is another way to compare RDS against DynamoDB. In this space the only thing you need to remember is that a read unit is about 4 KB and a write unit about 1 KB; that is how DynamoDB performance is specified in Amazon, and your cost is also tied to how many reads and writes you need, so propose them as required by the client.

Quickly, let me show you a data model, how the data is stored. Here is a collection of songs; think of every row, each of the four blocks you see, as an individual item or collection. They are not linked at all, but they are shown as a table called Music: item number one, item number two, item number three, item number four, and so on. Item number one has the attributes artist, song title, album, and year, whereas the third item has an additional field called genre, and item number two has only two or three attributes. Storing data this way is perfectly fine; that is the flexibility of a NoSQL database, and each item can have any number of attributes, key-value pairs, arranged however you want.
Sometimes, for better performance, you create an index key, or rather a partition key; in this case the artist name. Say you want to pick up all the albums by Mr. A. R. Rahman: you can easily select by the artist name and pull all of his items. It is also sometimes efficient to add a sort key, because Mr. Rahman must have done ten thousand songs and you may want to sort them by title, so you sort by the song or album title. If that is still not efficient enough, you can also create a secondary index, for example a local secondary index on the album title or the year, whichever you want; it is not mandatory to have one, and these are all just ways to improve the performance of your DynamoDB data model. At the architecture level it looks like this: you have multiple endpoints, whether mobile, tablet, or desktop, connecting to your load balancer, which in turn talks to your application, which in the background reads from and writes to DynamoDB itself. That is how you set up DynamoDB for your customer applications.
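To tie the Music data model together, here is a hedged boto3 sketch assuming a hypothetical Music table with Artist as the partition key and SongTitle as the sort key; it writes two items with different attribute sets and then queries everything by one artist.

```python
import boto3
from boto3.dynamodb.conditions import Key

music = boto3.resource("dynamodb").Table("Music")

# Items in the same table can carry different attribute sets.
music.put_item(Item={
    "Artist": "A. R. Rahman",
    "SongTitle": "Jai Ho",
    "Album": "Slumdog Millionaire",
    "Year": 2008,
})
music.put_item(Item={
    "Artist": "A. R. Rahman",
    "SongTitle": "Vande Mataram",
    "Genre": "Patriotic",          # an extra attribute only this item has
})

# All songs by one artist, returned in sort-key (title) order.
response = music.query(KeyConditionExpression=Key("Artist").eq("A. R. Rahman"))
for item in response["Items"]:
    print(item["SongTitle"])
```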
Info
Channel: Valaxy Technologies
Views: 10,582
Rating: 4.6666665 out of 5
Keywords: aws, amazon web services, demo, introduction, register, enrol, sign-up, sign-in, console, hands-on, solution, contact, classroom, training, fast-track, online, trainers, certification, leader, assistance, instructor-led, vilt, virtual instructor led, cloud computing, cloudwatch, cloudwatch alarams, cloudwatch events, aws cloudwatch, create cloudwatch alarams, delete cloudwatch, modify cloudwatch, aws monitoring services, monitoring ec2 instances, ec2 memory matrix, ec2 cpu matrix
Id: UFczk5RkGoU
Length: 50min 0sec (3000 seconds)
Published: Wed Nov 22 2017