ADD THIS TO YOUR RESUME | PYTHON REAL-TIME PROJECT FOR DEVOPS ON AWS | LAMBDA FUNCTIONS | #devops #aws

Captions
Hello everyone, my name is Abhishek and welcome back to my channel. Today's video is a Python project for DevOps engineers. You can use this project in your resume, and you can use it to explain to an interviewer, when they ask about practical knowledge of Python, how you have used Python for your cloud infrastructure or AWS environment.

Now, what are we waiting for? Let's quickly jump into the video and understand what this project is and what the project description looks like. The week before last I shared the project idea; I called it AWS Ninja. I gave that name to the project, but you can change it as per your requirement; it does not have to be AWS Ninja.

What is the project description? I will give you a very basic example before I jump into the description. As a cloud engineer or a DevOps engineer, one of your job responsibilities is to keep the infrastructure in compliance with your organization's policies. That might sound complicated, so here is a simple, layman example. In EBS you can create volumes of type gp2 or gp3. Let's say someone has created an EBS volume of type gp2, but your organization wants all EBS volumes to be of type gp3, because gp3 has better performance compared to gp2; it is quite a bit faster, among other things. So one of the organizational rules is that any EBS volume created in the AWS accounts should be of type gp3.

But how do you enforce that? You can write documents, share the knowledge, and explain to everyone that EBS volumes should be of type gp3, but someone who has just joined, or anyone by accident, might still create an EBS volume of type gp2. How do you handle that scenario? You are the cloud engineers; you are responsible for it. What you can do is use the project I am going to explain and automatically convert that EBS volume from type gp2 to type gp3.

This is just one example; the options are endless. You could apply the same idea to EC2 instances. Say someone creates an EC2 instance with 100 GB of RAM and 100 CPUs. As a cloud engineer you should restrict such actions, right? What if ten engineers keep creating EC2 instances with large resources? Then your organization is affected by the people creating that infrastructure. To block such things, or to ensure they do not happen, you can use this project. The project is basically about keeping your AWS or cloud infrastructure in compliance with your organization's policies.

That is what I mentioned in the description: as a cloud engineering team, we take care of the AWS environment and make sure it is in compliance with the organizational policies. For example, we trigger a Lambda function when an AWS Elastic Block Store (EBS) volume is created. We verify whether it is of type gp3; if it is not, we automatically convert the gp2 volume to gp3. How do we do that? We use AWS CloudWatch in combination with an AWS Lambda function, and inside the Lambda function we use Python as the scripting language. That is the big picture. Now let's jump in and understand how to do this and what is required to implement it.
We will also walk through the same example I just described. I will try to keep this video as informative as possible while also keeping it short, because a video that is short, sweet, and informative is easier to engage with.

First, we will create an EBS volume of type gp2 and see how it gets converted automatically to gp3. Before that we need a few prerequisites: we need to create a Lambda function, set up AWS CloudWatch, and integrate the two so that the Lambda function gets triggered by CloudWatch when an EBS volume is created. So I hope you understood the project, what I am going to demonstrate, and what your role as a cloud engineer is. You can also think of other options along the same lines: I gave the examples of EBS and EC2, but you can think of S3 buckets, RDS, or Kubernetes, that is, EKS. There are any number of possibilities, and the scope of this project is endless, because of the number of resources on AWS and the number of policies and compliance rules your organization can build around those resources. As cloud engineers, you have to ensure every resource stays in compliance.

Now, without wasting any time, this is my AWS console; I hope you are seeing the right screen. Perfect, you are seeing the right screen. I will start with a basic Lambda function. I will not write the function logic immediately; I will just create a basic Lambda function first. Open a new tab and search for Lambda, then create a new function. We will call this Lambda function ebs-volume-check, because we are verifying EBS volumes, and we will use the Python 3 runtime.
I will not change any of the other settings; let's use the default execution role. If you want, I could also create a new AWS IAM role, but to keep the video simple let's use the basic role that is created here, and I will update its permissions later. There are many beginners watching this video and I don't want to confuse them by creating a lot of resources. If you are already well versed with AWS, instead of using the default role that AWS creates for the Lambda function, you can create an IAM role yourself; I will tell you what permissions are required.

So this is the very basic Lambda function that AWS has given us. There is not a single line of Python code here related to our use case yet, because this is the default template. You can check that the function executes: click on Test, provide an event name such as test-event, click Save, and run it; you will notice the Lambda function runs. This is a basic check to ensure your Lambda function is working.

Now let's go to CloudWatch: search for CloudWatch. You have to be very clear about what you are doing; only then will you understand this entire process, which is why I explained the project up front. In AWS CloudWatch I will configure a rule that triggers the Lambda function. Go to the Events section, where you will see Rules; click on that, then click Create rule, because we need a new rule. Select the name of the service: in my case, let's see if EBS is listed; if not, we can go with EC2. Perfect. What is the event type? The event type is volume creation, so choose EBS Volume Notification. Do you want to specify a particular event, for example the creation of a volume? Yes, because the scope of my project is only volume creation. Always remember that whenever you create any resource in AWS, you should keep least privilege (or zero privilege) in mind: restrict the actions and permissions to the extent of your project, because going beyond that creates a security risk. Any volume ARN or a specific volume ARN? Right now I do not know the ARN of the EBS volume that is going to be created, and this project should apply to any volume that is created, because CloudWatch and the Lambda function should monitor every volume; that is why I keep it as any volume.

Now add a target. The target is ebs-volume-check, the Lambda function we just created. Do you want to configure a specific version of the Lambda function, or anything else? No, I am fine with the defaults, and I do not want to add any additional target; there is only one target for my rule. Click Configure details. For the rule name, let's give the same name, ebs-volume-check, and for the description we can reuse the project description. In your organization it is very important to provide good descriptions; for now I am providing the same project description, but you should modify it accordingly. Now click Create rule; for reference, a boto3 equivalent of this rule is sketched below.
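For anyone who prefers code or the CLI over the console, the rule and target we just configured are roughly equivalent to the boto3 sketch below. This is a minimal sketch, not what I ran in the video; the region, account ID, and Lambda ARN are placeholders you would replace with your own.

    import json
    import boto3

    events = boto3.client("events")

    # Match EBS "createVolume" notifications, the same pattern the console rule uses
    event_pattern = {
        "source": ["aws.ec2"],
        "detail-type": ["EBS Volume Notification"],
        "detail": {"event": ["createVolume"]},
    }

    events.put_rule(
        Name="ebs-volume-check",
        EventPattern=json.dumps(event_pattern),
        State="ENABLED",
        Description="Convert newly created gp2 EBS volumes to gp3",
    )

    # Point the rule at the Lambda function (the ARN here is a placeholder)
    events.put_targets(
        Rule="ebs-volume-check",
        Targets=[{
            "Id": "ebs-volume-check-lambda",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:ebs-volume-check",
        }],
    )

Note that if you create the rule this way instead of through the console, you also have to grant CloudWatch Events permission to invoke the function (lambda add-permission); the console wires that up for you automatically.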
I hope it is clear what we have done so far: all we did is configure AWS CloudWatch to trigger the Lambda function when there is a volume creation event.

Before I write the code that verifies gp2 and converts it to gp3, I want to check that CloudWatch is actually able to invoke the Lambda function, so let's do a dummy run with a dummy EBS volume. Go to your EC2 dashboard; volumes are under the EC2 dashboard, in the section called Elastic Block Store. Move to Volumes and create a volume. It looks like I already have a volume, so let me delete that first; make sure you delete the volumes after you are done with them. I have deleted the volume, so now let's create a new one, and the expectation is that the Lambda function should get triggered. See here: by default the type is gp2, and right now the Lambda function does not have the logic to convert it to gp3 because we have not written any Python code yet.

Now let's give it a minute, because CloudWatch has to trigger the Lambda function, and then go to the log groups to see whether the Lambda function was triggered and executed. There was some delay, but if you go to the CloudWatch log groups now, you can see there is a log group for the Lambda function with a log stream inside it. If I open the latest one, I can see that the Lambda function got triggered. It is still the very basic default function and it does not do anything related to EBS volumes, but this is just verification that CloudWatch triggers the Lambda function when an EBS volume is created, nothing more than that. So the basic verification has passed.

Now I will go and write the Python code for converting an EBS volume of type gp2 to type gp3. The very first thing you need to understand is how a Lambda function works. In a Lambda function there is a default function called lambda_handler; that is the one that gets executed as soon as your Lambda function is invoked. You can change the handler in the configuration and provide any custom name. And what are event and context? They are provided by the invoking resource. Here, CloudWatch is invoking this Lambda function, so the event is the CloudWatch event: a JSON payload provided by CloudWatch. What do we have inside this event? You can find out by simply printing the event, as shown in the sketch below, so that you understand what the event is.
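Here is a minimal sketch of that diagnostic version of the function, assuming the default Python handler signature; it just logs the incoming CloudWatch event so we can inspect it in the log group.

    import json

    def lambda_handler(event, context):
        # Log the raw CloudWatch event so its structure shows up in CloudWatch Logs
        print(json.dumps(event))
        return {"statusCode": 200}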
So what is this JSON that CloudWatch provides to the Lambda function when an EBS volume is created? Click Deploy; every time you make a change it is recommended to deploy the new version, or use Ctrl+S, and then test it. Perfect, the Lambda function is in good shape. Now let's test it from CloudWatch so that you understand what this event is. For that, remove the EBS volume you created and create a new one, because CloudWatch triggers the Lambda function only when an EBS volume is created. So delete the volume and let me create a new one; always make sure you are deleting unused volumes. A new volume is created; wait a minute and your log group will be updated, but this time the log will contain the full details of the EBS volume that was created. These details are sent to the Lambda function as the event, and as a cloud engineer you will use these event details and write the logic in your Python code to handle the event and update the EBS volume to type gp3. We are just a few minutes away from completing this project, so stay tuned.

This is the log group, and you will notice there is a new log stream. If I open the latest one, you can see the complete details of the EBS volume that was created; you can verify that the volume shown in the logs is the same one we just created. So this is the JSON data. What is in it? Let us investigate. Take any online JSON formatter (just search for "JSON formatter"), paste the details, process it, and you will get the entire structure. This is the same data that CloudWatch provides to the Lambda function. If you are a beginner to Python, you can imagine the event parameter being assigned this JSON data, as if someone had written event = <that JSON>. This is just for your understanding; you do not have to write that yourself, because the event is already populated by AWS CloudWatch. Now, what is inside it that we can use? The only thing I need is the ID of the volume, so that I can convert the volume with that ID to type gp3. How do I extract this information? Most of you know that in Python the most popular package for AWS is boto3.
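Before we write the conversion code, here is roughly what that createVolume event looks like, written out as a Python dict. The IDs, account number, and region are placeholders and several fields are omitted; the part we care about is the resources list, which carries the volume ARN.

    # Abbreviated shape of the CloudWatch "EBS Volume Notification" event (placeholder values)
    event = {
        "version": "0",
        "source": "aws.ec2",
        "detail-type": "EBS Volume Notification",
        "region": "us-east-1",
        "resources": [
            "arn:aws:ec2:us-east-1:111122223333:volume/vol-0123456789abcdef0"
        ],
        "detail": {
            "event": "createVolume",
            "result": "available",
        },
    }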
Now I will use boto3, the Python module, to convert this volume to type gp3, but before that I have to extract the volume ID. For the extraction I will write a new Python function. You can give it any name; I initially called it extract_volume_id, and since I already have it handy (so that we don't waste a lot of time), I just renamed it to get_volume_id_from_arn. What we are doing here is taking the entire ARN and extracting the volume ID from it.

By the way, I have pasted the event JSON into the editor purely for your understanding; we don't actually need it in the code, and once we complete the function I will remove that block, because it is difficult to keep going back to the log group to show you this information.

get_volume_id_from_arn takes the volume ARN as input, which means we have to pass the ARN to the function. To get the ARN, I first read it from the event: volume_arn = event['resources'][0]. Why index 0? Because I just want the first entry, nothing more than that. I am taking the first entry and calling it volume_arn; you could also call it resource_arn or whatever you like. Now I pass this volume ARN to the function and retrieve the volume ID: volume_id = get_volume_id_from_arn(volume_arn). The extraction logic is sketched after this paragraph.

Once you have the volume ID, the next step is to use boto3 to convert the volume from type gp2 to type gp3. For that, go to your browser and search for "boto3 modify volume", because you are trying to modify the volume. The boto3 documentation has very clear instructions; just copy the modify_volume snippet, paste it into the function, and format it. Do I need DryRun? No, I don't want anything related to a dry run. Do I need the volume ID? Absolutely yes, so pass the volume ID here. Do I need the size? I am not bothered about it right now. What volume type do I need? gp3. So what you are doing here is using boto3's modify_volume function to convert the volume with this ID to type gp3. We haven't created the client yet; don't worry, I will show you how to create the client. Remove the parameters you don't need, and then the only thing left is to create a client. What is a client? boto3 can handle all, or most, AWS resources, but you use a client for the specific resource you are working with.
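A minimal sketch of the helper described above, assuming the ARN has the usual form arn:aws:ec2:region:account-id:volume/vol-xxxxxxxx, so everything after the last slash is the volume ID; the ARN in the example is a placeholder. In the handler itself, the ARN comes from event['resources'][0].

    def get_volume_id_from_arn(volume_arn):
        # e.g. "arn:aws:ec2:us-east-1:111122223333:volume/vol-0123456789abcdef0"
        # Everything after the last "/" is the volume ID
        return volume_arn.split("/")[-1]

    # Quick check with a placeholder ARN
    print(get_volume_id_from_arn(
        "arn:aws:ec2:us-east-1:111122223333:volume/vol-0123456789abcdef0"
    ))  # -> vol-0123456789abcdef0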
To create the client, you can again go to the boto3 documentation; you have to understand which service the resource belongs to. EBS basically falls under EC2, so I am going to create an EC2 client: ec2_client = boto3.client('ec2'). If you are dealing with some other resource, say S3, then it will be an S3 client; if you are dealing with something else, you create a client for that. And that's it, we are technically done. Now let me remove the pasted event block; I want to keep this Python function as minimal as possible. If you want to add comments, that is very good; you can add any number of comments so your function is more readable.

Let's walk through what I have written. The import json line is not required because we are not doing any JSON-related formatting. I have imported the boto3 module, and get_volume_id_from_arn is the function that retrieves the volume ID from the volume ARN. How am I doing it? I call get_volume_id_from_arn and pass the volume ARN. And how did I know where to find the ARN? I read the event from the log group, ran it through the JSON formatter, understood where the volume ID lives, and used that. After that, you use the boto3 EC2 client and call modify_volume.

As soon as I deploy this, creating a test volume should work, except for one remaining thing: you have to grant permissions to this IAM role, because the default role does not have permission to modify EBS resources. So let's quickly duplicate this tab and go to IAM (every time I open a tab it gets a bit slow, but that's fine). Under Roles you will find the role created for this function; it is named ebs-volume-check-role- followed by a random suffix. Go to Add permissions, then Attach policy or Create inline policy; you should go with Create inline policy. Choose the service, which should be EC2, as you already know. Within EC2 there are several actions, and you should find the correct ones. As I always tell you, do not grant permissions that are not required; just for the demo you can select all EC2 actions, but when you are doing this in your organization, do not over-provision. Select the EC2 service, expand the actions, and pick only the permissions that are required: in your case you will mostly need permissions related to listing and describing volumes, and then modifying the volume; probably you do not even need describe, you really just have to modify the volume. Those are the permissions you will require; a rough code equivalent of this minimal policy is sketched below.
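For reference, the minimal policy this boils down to can also be attached in code. The sketch below uses boto3's put_role_policy; the role name is the one Lambda generated for the function, so the suffix shown here is a placeholder.

    import json
    import boto3

    iam = boto3.client("iam")

    # Minimal permissions for this demo: describe and modify EBS volumes
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["ec2:DescribeVolumes", "ec2:ModifyVolume"],
                "Resource": "*",
            }
        ],
    }

    iam.put_role_policy(
        RoleName="ebs-volume-check-role-xxxxxxxx",  # placeholder: Lambda-generated role name
        PolicyName="ebs-volume-check",
        PolicyDocument=json.dumps(policy_document),
    )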
Now let me review the policy. There is some error: the policy contains an error. Let's see what it is. I think it is better to restrict the permissions to the volume resource, so I choose volume here and provide the permissions that are required: describe volume (I don't think even describe is required, but let's allow it) and modify volume, because modifying the volume is what we are basically doing. Do you want to restrict it to a particular resource ARN? No, I want it to apply to all resources, because my function has to deal with all the volumes. Finally, review the policy and create it, giving it the same name, ebs-volume-check. Create policy.

Now let's get ready for the final step, where I delete this volume, create a new one, and we should see the volume converted to type gp3. Delete the volume, create a new volume without changing anything, and click Create volume. If the Lambda function executes successfully we will notice the type converted to gp3; if it is not executed, let's see what the error is and try to fix it. Final check, I am hoping it will pass. Let's see the volume type: it is still gp2. Let's go to the CloudWatch logs and see what happened. Go to the log groups one more time and open the latest log stream; if there is any issue we should be able to fix it. There is an error in the traceback: it says resources is not defined, and it clearly says the problem is on line 13 of my Lambda function. No problem, let's go to the Lambda function and fix it. What is on line 13? It says event[resources]. Ah, I see: resources should be inside quotes, event['resources']. See, we all make these kinds of silly mistakes. Let's deploy it one more time, delete the existing volume, and recreate a new one. We are on our second attempt, and hopefully this time it will pass. Again, if it doesn't pass you don't have to worry; just like I showed you, go to the log group and try to identify the error. It is actually good if the demo fails, because people watching the video can see where exactly the implementation goes wrong and how to fix it.

I refreshed, and the volume is still gp2, so we probably made one more mistake. The latest log says name 'client' is not defined. Ah, because I was doing this live I made another very simple mistake: it should be ec2_client, because that is what we named the client. This time I am very sure it will pass. Let's delete this volume and try another attempt; I hope the subscribers and viewers watching this are learning from the mistakes I am making. I have created the volume, and this time I am pretty sure that if I refresh, it will be of type gp3. For reference, the corrected handler is sketched below.
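After fixing the quoting on event['resources'] and the client name, the function looks roughly like the sketch below. This is a reconstruction from the walkthrough rather than a verbatim copy of the file on screen.

    import boto3

    # EBS operations live under the EC2 service, so we use an EC2 client
    ec2_client = boto3.client("ec2")

    def get_volume_id_from_arn(volume_arn):
        # arn:aws:ec2:<region>:<account-id>:volume/vol-xxxxxxxx -> vol-xxxxxxxx
        return volume_arn.split("/")[-1]

    def lambda_handler(event, context):
        # CloudWatch puts the EBS volume ARN in the event's "resources" list
        volume_arn = event["resources"][0]
        volume_id = get_volume_id_from_arn(volume_arn)

        # Convert the newly created volume to gp3
        ec2_client.modify_volume(VolumeId=volume_id, VolumeType="gp3")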
Final check: gp3? You can play this video at 2x speed so that you don't get bored while watching, but we are doing a practical implementation and it is always good to learn from the mistakes. Okay, it is still gp2, which is very strange; did it just not refresh? Looking at the error again, it still complains about the client name on line 18, even though the code already says ec2_client, so I think the latest change was not picked up. Let's redeploy the function with ec2_client. I think it was a caching or deployment issue rather than a problem on our side. Now I hope it finally passes. I don't want to edit this part out, because you can learn from the mistakes, so I will post it as is without making any changes. If it is still not converted to type gp3, there is probably some other issue with... yes, perfect, gp3!

So this is it, and I hope you liked the demonstration, where we have seen how to use this project I call AWS Ninja. Like I told you, I will put everything in the description so that you can follow the video. Thank you so much for watching; I hope you found this useful. Learn from the part of the demo that failed initially and try to implement this with multiple examples. I'll see you in the next video. If you haven't subscribed to the channel, please subscribe. Take care everyone, bye.
Info
Channel: Abhishek.Veeramalla
Views: 8,819
Keywords: devops mock interview, devops, devops round table, azure, vmware, openshift, gcp, openstack, azure git, bitbucket, stash, gitlab, azure devops interview questions, Real Time Python Project, DevOps Python Project, python videos, what is devops, why devops, how to learn devops, SRE, What is SRE ?, Devops vs SRE, Devops to SRE, SRE roadmap, learn SRE, Platform Engineering ?, What is Platform Engineering ?, Difference Between Platform Engineering and DevOps ?, Future of Devops
Id: DgavixR_w5Y
Length: 38min 17sec (2297 seconds)
Published: Wed May 17 2023