Day 17 | Everything about Terraform | Write Your First Project | Remote Backend | Modules | Interview Q&A

Captions
hello everyone, my name is Abhishek and welcome back to my channel. Today we are at Day 17 and we'll be talking about Terraform: how to install it, how to configure it, and how to write your first Terraform project. I'll also show you how to put the state file into a remote backend instead of keeping it local, we'll talk about Terraform modules, about the problems with Terraform and with the Terraform state file, and finally we'll look at some Terraform interview questions. All of this is put together from my experience and it took a lot of time. For anybody who is not able to follow along — say you watch the video but for some reason it isn't clear to you — you can also go to the Git repository I created. I have put everything we are discussing today into that repository, including the configuration files: the Terraform files for the projects we are going to create, both with a local state file and with a remote state file. You can clone the repository and execute the actions yourself. The repository holds everything — the diagrams, the examples — so go through it to get a gist of what we are doing.

Going back to yesterday's discussion: we talked about Infrastructure as Code, the concept of API as code, and how Terraform actually executes these things and automates your infrastructure. Today I'll try to put those ideas into practice so you see a realistic example of Terraform, and we'll also look at what a good Terraform setup looks like. I'll explain using AWS, but the same thing applies to Azure as well. Many people have been asking me about Azure; the cloud providers are all the same in this respect. Today we'll use S3 as the remote backend bucket — in Azure you would use an Azure storage account and an Azure storage container. Just replace S3 with the Azure storage container and the concept stays the same. Don't worry about the cloud provider; worry about the concept and the execution.

So here is a brief diagram that explains what Terraform actually is. If you still have questions from yesterday's session — why not use AWS CloudFormation templates, why use Terraform at all, or why not just the AWS CLI — look at it this way. As a user you configure a Terraform provider, and Terraform talks to the target API. What is the target API? Say you set your provider to AWS: Terraform takes whatever the user has written in the configuration files and converts those configuration files into API calls that AWS will understand. Tomorrow, if you want to move your Terraform scripts to Azure, you modify the configuration files — of course you have to provide the variables and all the required information — but the templating language stays the same, and Terraform does the rest for you.
Compare that with AWS CloudFormation templates versus Azure Resource Manager templates: if you want to move from one to the other, you have to learn two different tools. And the options for cloud providers are vast — today you might use AWS, tomorrow your organization might move to a hybrid cloud with a few things in AWS and a few things in Azure. Instead, if you just learn Terraform, then once you provide the provider details, Terraform understands: "this is the provider, let me convert the configuration files into the APIs that AWS understands, because AWS is the target," and the execution runs against those target AWS APIs. What do you achieve with this? First, you can manage any kind of infrastructure. If a new cloud comes into play tomorrow — Alibaba, or some XYZ cloud — the provider developers, whether HashiCorp or open source contributors, take care of writing the API integration; all you need to do, once they publish the provider, is write the Terraform configuration files. So whether it's an existing cloud or a cloud that appears in the future, Terraform remains the one tool you have to learn.

Second, you can track your infrastructure. With Terraform you don't have to log into your cloud provider to see what infrastructure has been created. Say your whole organization's infrastructure is created by Terraform: you can simply look at the state file — and if the state file is stored in an S3 bucket or some other place, which is called a remote backend and which we'll learn about shortly, you can go to that S3 bucket, look at the state file, and it describes your entire AWS setup and all the resources that were created. That way you keep track of your infrastructure.

Third, you can automate changes. With Terraform you don't have to log into your infrastructure and make even a simple change by hand; you automate it. And you can collaborate: you put your Terraform files — except the state file — into a Git repository or any version control system, and you work with your peers through it. If you want to increase the resources of your EC2 instance, instead of changing it manually you go to Git, update the Terraform file, and ask a peer to review it; they look at the code and say "this looks fine." That way you automate changes with collaboration.

And finally you get standardized configuration.
What is standardized configuration? It means there is a standard you maintain with the .tf files. When you do things manually there is no standard — with one cloud provider you do things one way and with another provider you do things a different way. With Terraform you standardize the way the configuration is written. So these are the advantages of Terraform, and whenever somebody asks why you want to move to Terraform, these are the points to communicate — not necessarily in these exact words, an overview is more than enough.

Next, what is the Terraform lifecycle? The reason I'm explaining this step by step is that when we go ahead and install Terraform and actually write the configuration, it will be useful if you understand what exactly we are learning. In general, you start by writing your Terraform configuration files. I agree you might be very new to Terraform and wonder how to start writing a project, because it follows its own syntax and indentation. Don't worry about it — just search for "hashicorp terraform" in your browser, or better, if you already know your provider is AWS, search for "hashicorp terraform aws provider" and you land directly on the AWS provider documentation instead of digging through the general docs. Terraform gives you a wonderful explanation there: if your Terraform is 0.13 or later (and you should be on a recent version anyway), it shows you how to declare the AWS provider and configure it with all the details — I'll explain the files themselves shortly, I'm just showing you an overview so you see that you can create any resource from the examples given there. Say you want to create Lambda functions and, like me, you don't know the syntax off the top of your head: go to the docs, search for the aws_lambda_function resource, and there is a very good example that shows how to create it. I just copy that example and make the changes I want — maybe modify the environment variables, maybe change the Lambda function name. It is very simple, and don't worry, I'll also explain the syntax and what is written in each block. For now the point is: the first step is to write the Terraform configuration files.
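Just to make that concrete, here is a rough sketch of what such a copied-and-adapted resource might look like, based on the aws_lambda_function example in the provider docs. The function name, role, handler, runtime and environment values below are placeholders, not something taken from this video's repository:

  resource "aws_lambda_function" "example" {
    # Zip archive containing the function code (placeholder path)
    filename      = "lambda_function_payload.zip"
    function_name = "my-example-function"          # hypothetical name you would change
    role          = aws_iam_role.lambda_role.arn   # assumes an IAM role defined elsewhere
    handler       = "index.handler"
    runtime       = "nodejs18.x"

    environment {
      variables = {
        ENVIRONMENT = "dev"                        # the values you would edit after copying
      }
    }
  }

The whole point of the workflow described here is that you only adjust values like these after copying the documented example, rather than memorizing the schema.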
Writing those configuration files is not difficult at all: just go to the HashiCorp Terraform documentation — you don't have to go anywhere else or follow any videos to write the Terraform files. Search for the provider you want; even if you move to Azure tomorrow, you don't go hunting through the Azure documentation, you go to the Terraform documentation again and search for the Azure provider. Every resource you want to create has an example there. And if, say, an AWS service was introduced just yesterday and you don't find documentation for it, that simply means Terraform does not support that resource yet.

What is the next step once you write your configuration files, before you actually execute them? Terraform takes the configuration files, turns them into API calls, and submits them to the target cloud provider — but what if you wrote something wrong? Most tools don't help you here, but Terraform supports a dry run: a way to see what is going to happen before you execute. Say that while writing your configuration you copied an example exactly as it is and forgot to change the environment variables. If you run terraform apply, your Lambda function will be created with those same environment variables. Terraform gives you terraform plan — a dry run — so you can see "when I run terraform apply, these are the resources that are going to be created" and correct yourself first. So if there are mistakes, or you just want to see the list of things that will be created, use terraform plan and then go ahead with terraform apply. That is the whole lifecycle: write the configuration files, run terraform plan, then terraform apply — and some people also consider terraform destroy, which deletes the resources you created, as part of the lifecycle.

Now that we understand what Terraform does, what's involved, and what its advantages are, let's move to the practical implementation and see how to install Terraform so we can start playing with it. I've considered two cases here: people with a Mac, and people with Linux — whether installed natively, in an Oracle VirtualBox VM, or on an EC2 instance created from their Windows machine. As I always suggest, either create a virtual machine on your Windows box or go with an EC2 instance, so you can play with Linux; in real-world organizations it is very unlikely you would work on a Windows machine or Windows VM. If you have Linux, go with the command on the slide — this one is for Ubuntu, and I'll show the commands for other operating systems as well. If you have a Mac it is very simple: just use brew and install Terraform from the HashiCorp tap.
What I did on my Mac was brew tap hashicorp/tap and then brew install hashicorp/tap/terraform, and Terraform got installed. If I just type terraform you can see it is available on my machine, and you can also check with terraform --version. In my case I realized I already had Terraform installed but it was an old version, so if any of you already have Terraform but are on an older version, go with brew upgrade hashicorp/tap/terraform so that you have the latest version — some cloud provider modules might not work with an old version of Terraform, so always keep it updated. If you are on a different machine, or on Windows, again go to the HashiCorp Terraform documentation and look for the install instructions; for Windows they point you to an article that explains exactly how to install Terraform and how to put it on your PATH, and for Mac I've already given you the steps. What if somebody is on CentOS? There are steps for that too — under Linux you can pick Ubuntu/Debian, which is the option I showed, or CentOS, and just follow that guide; I'll put the link in the description. My advice: do not go with manual installation, do not install with curl, because you might run into issues with the PATH — many people are not experienced with installing software, configuring the PATH, and dealing with symbolic links, soft links and hard links. Instead use the native package managers: brew on Mac, yum on CentOS, apt on Ubuntu. The documentation is very clear about this.

Once Terraform is installed, run terraform -help to confirm the installation, and terraform --version to check the version — as you saw, I am on the latest version, v1.3.7, so I can proceed. Now, what are your next steps? Understand that Terraform basically runs on four commands: terraform init, terraform plan (which we discussed), terraform apply, and terraform destroy. We already talked about plan, apply and destroy — so what is terraform init? terraform init initializes your Terraform project. Let me show you — I'm already inside a project where I cloned the Git repository that I created, so I can demonstrate it right here.
Wherever you have your Terraform files — here I have one main.tf file, the kind of file you will be writing — you simply type terraform init and see what happens: Terraform initializes the AWS provider for you. In your case it might be Azure or GCP or anything else; the command is the same. Once you run terraform init, whatever provider details you wrote in your main.tf are downloaded and configured locally, and you can then communicate with that provider. So those are the four commands you will use: terraform init, plan, apply and destroy — init to initialize your provider, plan for the dry run, apply to actually execute things, and destroy to tear them down.

But what exactly do you write in the Terraform file before you do all of this? For that very reason I wrote a very simple Terraform configuration to start with, so let's look at it in the Git repository to get a better picture. When you start with my repository, go to the aws folder and look for local-state — that's where you actually start; do not go to remote-state the very first time. In local-state I wrote a very simple Terraform file that creates an AWS EC2 instance. You will find a similar example (maybe not the exact one) in the Terraform documentation as well — as I said, you don't even have to follow my repository, you can go straight to the docs to write your first file — but I created it in the repository because I'm going to explain block by block what I wrote and what we are going to create.

The first thing you start with is the terraform block. If you look at the hierarchy, the required providers and the required version all sit inside the terraform block — that's the actual starting point. Inside it you declare the required providers: it can be a single provider or multiple providers — in a hybrid cloud you might use several, with a single cloud you just declare your AWS provider. Where does this reference information come from? It is static — it stays the same all the time, except for the version number.
It is very unlikely you'll ever go down a version — if you start with 4.16 in your project you'll move to 4.17 or 4.18 unless you hit compatibility issues with your existing files. You can get this information from the AWS provider documentation itself: if you jump onto the AWS provider page, the very first thing you see is this block — there they use version 4.0, and in my example I am using 4.16, a slightly newer version of the AWS provider. So you always start with the provider details; without them your Terraform is of no use, whatever resources you are describing. When we ran terraform init, what it was actually doing was looking at this required_providers block; it understood that I declared AWS, so it installed hashicorp/aws at that specific version and then printed "Terraform has been successfully initialized! You may now begin working with Terraform. Try running terraform plan to see any changes required for your infrastructure," after which you can apply your changes. That is what terraform init and the required_providers block are about. If tomorrow you add one more provider here — say you add Azure alongside AWS and provide its source and version — you have to re-initialize, which just means running terraform init one more time. (Sometimes you also make changes to your Terraform state configuration — I'll explain the state file shortly — and in such cases too you have to re-initialize.)

Then comes required_version: in this example the Terraform CLI version has to be greater than or equal to 1.2.0, and you should definitely write this block as well. Otherwise people will come to you and say "Abhishek, I cloned your repository exactly but it doesn't work for me" — usually because they are running an older Terraform CLI. If you mention the required version, then whenever Terraform runs your files — terraform plan or terraform apply — it looks at this block, sees that the owner of these files requires a Terraform CLI newer than 1.2.0, and refuses to run on anything older. So the first things you write are the provider details, followed by the required version.
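Putting those two pieces together, a minimal terraform block along the lines of what's being described might look like this — the version numbers are just the ones mentioned in the video, so treat them as examples rather than requirements:

  terraform {
    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = "~> 4.16"      # provider version, roughly as discussed
      }
    }

    required_version = ">= 1.2.0" # minimum Terraform CLI version allowed to run this config
  }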
After that comes the provider configuration block, where you can set things like the region. Even if you don't use this block it's still fine — if you remove it entirely there is no problem, because by default Terraform will use the default region configured for your AWS setup. But say Terraform has initialized the provider for you and you have a requirement — you only want your resources created in us-east-1, or only in us-west-1 — then you can configure the provider accordingly. It is not a mandatory block; you can add fields to it or leave it empty.

Once you have all of that, you move on to the resources: what are the resources you are going to create? This is the part that differs in every Terraform file you write — everything above it is essentially constant, and in your own project you can copy those lines from my repository as they are. What varies is the resource type, the resource name and the parameters of the resource, and those you get from the HashiCorp Terraform documentation. Nobody remembers them, and even if somebody does, it's not a good practice because it doesn't really help — what do you gain by memorizing the resource block for an AWS VPC when you can get it from the documentation? Even in an interview you can say "I follow the Terraform documentation; I vaguely remember the syntax, but I can try writing it for you," and even if you make mistakes in the syntax, they will understand that you know how to write Terraform files but don't remember the exact syntax for a VPC. AWS has around 200 services and Azure has 150–200 as well — practically nobody can remember the syntax for all of them, so everybody follows the documentation.

So in practice you can add a bunch of resources. Here I am creating one resource; say tomorrow, along with the EC2 instance, you also want to add a load balancer. Suppose I don't remember the syntax — I go to the docs, search for "load balancer", find the aws_lb resource and look at its example. Say your team gives you a task to add AWS load balancer creation to the organization's existing Terraform repository: you go to the Terraform scripts, go to the end, and add the resource block for it — copy the syntax from the documentation (don't copy it exactly as is) and make the modifications that are required.
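For reference, the single resource in this local-state example — the EC2 instance — is built from a block roughly like the following; the AMI ID and tag name here are placeholders, so treat this as a sketch rather than the exact file from the repository:

  resource "aws_instance" "app_server" {
    ami           = "ami-0123456789abcdef0"  # placeholder AMI ID; pick one valid in your region
    instance_type = "t2.micro"

    tags = {
      Name = "terraform-demo"                # the name shown on the instance in the console
    }
  }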
If changes are required, and your organization maintains a variables file, that's where you make them — and any serious setup will have input and output variable files; I'm not explaining them in depth right now, but once you understand this whole flow I'll also cover input and output variables. It is always a good practice: things like the name, the load balancer configuration, the security group name, the subnet, the CIDR block — these should never be hard-coded inside the main Terraform files or the resources .tf files. They should go into an input variables file (usually called input.tf or variables.tf), so that tomorrow, when somebody needs to modify these details, they don't have to touch the actual resource files at all — they only go to the variables file and update the values there. Say you want to scale up the memory or the CPU of a resource: instead of editing main.tf or provider.tf or resources.tf, they just update the variables file. Even when you are reviewing the code, you see that the person is only modifying input.tf, which means they are only changing some variable values — that is much less risky, you look at the variable change and you know there is no problem with it. So it is always good practice to maintain input.tf and output.tf files.

There's no rocket science to them: you just segregate your variables into separate files — input is for inputs and output is for outputs. For example, say the EC2 instance is created and you want Terraform to show in the terminal certain parameters of the instance it created — say the private IP address, which depends on AWS, since AWS allocates private and public IP addresses dynamically. Instead of logging into the EC2 console and looking it up, you put that into output.tf, and Terraform understands "you are expecting me to share the private IP address — no problem, I'll print it once the resource is created." This is how you write better Terraform files, using input.tf and output.tf — it's not a big deal and I'll show you how to do it.
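As a tiny sketch of that separation — the variable name and default here are made up for illustration, not taken from the repository — moving a hard-coded value into a variable looks something like this:

  # variables.tf (or input.tf)
  variable "instance_type" {
    description = "EC2 instance type for the app server"
    type        = string
    default     = "t2.micro"
  }

  # main.tf — the resource now reads the value from the variable instead of hard-coding it
  resource "aws_instance" "app_server" {
    ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
    instance_type = var.instance_type
  }

With this in place, scaling the instance up is a one-line change in the variables file, which is exactly the review-friendly workflow described above.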
We talked a lot about the repository and main.tf; now let's actually execute it. I have the same repository on my local machine — if you look at the path, I'm in the "write your first terraform project" folder, under aws/local-state (as I mentioned, do not start with remote-state, because remote state needs some additional configuration). Just to show that things are the same, let me cat the main.tf — it is exactly what we discussed. terraform init is already done, so let me run terraform plan and see what happens. It takes a moment, but then it shows you in detail what Terraform is going to create from this main.tf: for the resource aws_instance.app_server it will use the AMI I provided; the ARN (the Amazon Resource Name) will only be known once you apply the configuration; the instance type is t2.micro; and it lists the tags and everything else that will happen when Terraform creates this instance.

Now, before you actually run terraform plan and terraform apply, a doubt should strike you: how is Terraform able to authenticate to my AWS account? Terraform is just a CLI installed on your machine, and AWS is a cloud provider; in main.tf you only mentioned the required providers, the provider name and the region — no credentials. The prerequisite is that whenever you use Terraform with a cloud provider, you should already have authenticated that cloud provider on your machine. In my case I already have the AWS CLI configured — I showed how to do that in previous classes. If you are new here and starting directly with Terraform: to authenticate Terraform with any cloud provider you have to set that up first — for example, with Azure you have to configure an Azure service principal, and only then will Terraform work; with AWS you have to configure the AWS CLI first. To verify, I'll run aws s3 ls, a very simple AWS CLI command — if that works, your Terraform will also work. How do you set this up on AWS? Just run aws configure: it asks for your AWS Access Key ID, then the AWS Secret Access Key, the default region and the default output format, and you're done. If you're on AWS it really is that simple.
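In other words, the provider block itself stays credential-free. A sketch of how that typically looks — the region is the one used in this demo, and the comment just notes where the AWS provider normally picks credentials up from:

  provider "aws" {
    region = "us-west-2"
    # No keys are written here. The AWS provider reads credentials from the
    # standard places set up by `aws configure` (~/.aws/credentials) or from
    # environment variables such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
  }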
If you are on Azure, I've already made a video on how to create a service principal, configure it and use it — it's a bit lengthy so I won't repeat it here, but if you're interested you can watch the Azure service principal video on my channel. And if you're still confused about where the AWS access keys come from, you can watch my previous video, but in one simple line: go to the AWS console, click your icon at the top right, open Security credentials, scroll down to the Access keys section, create your access keys there, and configure them in your CLI with aws configure.

So we've understood terraform init and terraform plan; now let's look at terraform apply, which actually creates the resources. Let me first show you whether I have any EC2 instances — we are creating an EC2 instance using Terraform, and remember we are using us-west-2, so let me check in us-west-2 (my bad, I should have shown that region first). Ah, there is actually one instance left over — let me terminate it via Actions, then Instance state, then Terminate; good that I noticed, otherwise you'd have said I already had an instance, just kidding. AWS doesn't take long to shut it down, and it will move to the terminated state in a few seconds. Note that terraform apply doesn't start executing until I confirm with "yes". Let me hit enter, type yes, and see whether it creates a new instance. It says aws_instance.app_server is being created — app_server is the name we gave — still creating... Terraform is right now converting your configuration files into API calls that AWS understands, executing them, and getting the required result. And see, a new instance is already being created — that one is the old instance we just terminated, and this is the new one, with the same tag name, terraform-demo. You can also find the instance ID in the CLI output: "creation complete after 47 seconds", along with the instance ID. That is how Terraform actually works: using terraform init, plan and apply you just created an EC2 instance in your AWS account. Isn't that simple? We explained all of this with the slides, and you also have the GitHub repository — everything for this part is in the aws/local-state folder, so you can clone or fork the repository and try it yourself; it's all inside the main.tf file. Now we will jump into some advanced topics.
That covers writing and running your first Terraform project — if you can do this on day one, you've already learned a lot of Terraform. If it feels like too much for a first class, you can stop the video here and come back once you're comfortable with everything so far; for those who've understood it and want to go further in the same class, let's continue step by step.

When you ran terraform apply, all you got back was the ID of the EC2 instance Terraform created, and you could go to the EC2 console and see that instance. But imagine a user who runs your Terraform scripts and doesn't have access to the AWS console at all — they're on a shared machine, or they trigger it through Jenkins or some pipeline. Just giving them the instance ID is not enough: they need to know the private IP address, the public IP address, maybe the key pair so they can log in, because they are the person who actually requested this EC2 instance and they can't look it up in the cloud provider themselves. For that reason, in this same repository you should write a file called outputs.tf, where you declare all the details Terraform should return — not just the instance ID but the private IP, the public IP, the key pair and so on, whatever is useful to the consumer.

This is a general good practice — look at almost any repository: go to github.com and search for "terraform examples" (this one came up randomly, I didn't stage anything), open one of the AWS examples and you'll see it has an outputs.tf file. In that example they output the API Gateway invoke URL, because the project creates an API Gateway: a DevOps engineer in some organization wrote those Terraform scripts and told everyone "any user who wants an API Gateway can just go to Jenkins, or to this machine, and execute the scripts." Those users execute the scripts, but without the Gateway invoke URL — or, in our case, without the EC2 instance's private and public IP addresses — the result is of no use to them. That's why you provide these things in the outputs.tf file.
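A sketch of what such an outputs.tf could look like for the EC2 example — the output names are hypothetical, while id, private_ip and public_ip are standard attributes of the aws_instance resource:

  output "instance_id" {
    description = "ID of the EC2 instance"
    value       = aws_instance.app_server.id
  }

  output "instance_private_ip" {
    description = "Private IP address allocated by AWS"
    value       = aws_instance.app_server.private_ip
  }

  output "instance_public_ip" {
    description = "Public IP address, if one was assigned"
    value       = aws_instance.app_server.public_ip
  }

Terraform prints these values at the end of terraform apply, which is exactly what the Jenkins-pipeline user in this scenario would rely on.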
Similarly, if you have any variables, you configure them in variables.tf (or input.tf). In the previous example, if you look at the main.tf, even though it's a very simple file we hard-coded a lot of things — the AMI, the instance type — and all of these can easily go into the variables file. In the example on screen there's a variable with a default value of "index.handler"; you handle these cases by putting the values in variables.tf and then, in main.tf, pulling them from the variables. The syntax is simple and feels like shell scripting: inside strings you can interpolate with a dollar sign and braces, and in general you just reference var.<name> — var.ec2_instance_name, var.memory_size, var.ami_image_name and so on. So apart from the resource definitions themselves, always write variables.tf (or input.tf) and outputs.tf, so that whoever executes the project can feed in the inputs and see the information about the resources that were created. That's one level of improving your Terraform scripts.

Now, if you watched the execution very carefully: previously I only created the main.tf file and applied the configuration, and then this file appeared automatically — I didn't create it, and if you look at the Git repository it doesn't exist there; the repository only has main.tf. The file is terraform.tfstate. What is it? It is the most important file in all of Terraform — ask anybody about Terraform and the first thing they'll talk about is the state file, because Terraform relies on it entirely. If I delete this state file today, Terraform will no longer know that it created an EC2 instance for me. I told you Terraform is also about tracking the infrastructure, and this is exactly where it does that: open the file and you'll see Terraform writes down everything it created, for its own remembrance and so it can modify or change that infrastructure in the future. It records that it created one resource, with this instance ID, in us-west-2, in availability zone us-west-2a, with one CPU — everything about that instance. So tomorrow, if I come to my main.tf and say that instead of t2.micro, or instead of one CPU, I want two CPUs, and I re-run terraform plan and terraform apply, Terraform will make that change to the infrastructure and upgrade the CPUs — and it will tell you exactly what it is about to change. Let me show you.
For example, here is my main.tf — this is just for demonstration; let me change the instance type from t2.micro to t1.micro and run terraform plan. Terraform now explains the change it is going to make, with a plan summary of what will be added, changed and destroyed. (Honestly, t2.micro versus t1.micro isn't the best example — with a richer resource the diff would be clearer — but don't worry about that.) The point is: for any change you make in the configuration, Terraform has to understand that you are changing something in your environment with your AWS provider, and it tracks all of those changes using the state file. The state file has everything — sensitive information and non-sensitive information, details of your VPCs and EC2 instances — and if an organization manages its entire AWS account with Terraform, the state will hold everything from CloudWatch to EC2 instances to VPCs to KMS keys, because the organization manages AWS through Terraform.

And this is where the main problem comes in. The state file is a very good thing — it is what keeps track of your entire infrastructure — but look at what just happened: I executed Terraform from my laptop, and it created the state file on my laptop, with open permissions; it is readable by every user, so anyone can come and read it. You might say that's not a big deal — you know Linux, you can change the permissions — and yes, you can. But the bigger problem is: how will you share this state? Say you cloned the Git repository I created, and we both work in the same organization — say we're both at Red Hat — and we use this repository to create infrastructure on AWS. You run it and a state file is created on your machine; I run it and a state file is created on mine. Now there are two problems. One: the state file cannot be uploaded to the Git repository, because it contains a lot of sensitive information — secrets, details of your VPCs, your KMS keys. Two: we cannot merge our two separate state files and keep that in the Git repository either; it is practically not possible. That's why — as I showed in the diagram — the state file has to live in one centralized location: you should never keep your state file only on your local machine.
So that is the first point about the state file: you understand what it is — Terraform uses the state file to track everything it created — and the main things you should never do are keeping the state file on your local machine or putting it in a Git repository. Always store the state file remotely, in a remote backend (we'll talk about remote backends in a moment), but first make sure you understand these two points. Let me say it one more time, because it's important. Terraform uses the state file to track everything it created: tomorrow, when you want to modify something, Terraform only knows "I created this yesterday, now I am updating it" because of the state file. The problem is, first, that the state contains a lot of sensitive information, so it cannot go into a Git repository; and second, multiple people may work against the same configuration — I clone the repository, person XYZ clones it, person ABC clones it — and sharing or merging state files is a mess. If somebody forgets to share the updated state after making changes, then as far as Terraform is concerned your record of the infrastructure is gone — literally gone. Say I apply some changes and forget to update the state anywhere shared; Terraform does not automatically push the state file to Git, and nobody else even knows I ran Terraform — that is one of the biggest challenges with Terraform. So: never rely on a Terraform state file on your local machine, and never put it in source control — whether Git, Bitbucket or anything else. Always store your state file in a remote backend.

The next point: do not manipulate the state file locally. Some people try to edit the state file on their machines. Usually other users won't have write access to it, but the owner does — you can see here I have read and write permission — and if I edit the state file by hand, Terraform effectively gets corrupted: it won't know that the file was changed behind its back, it will assume it is reading the correct state, and it may decide "something doesn't match, let me fix the AWS infrastructure" while somebody else is running it — and now you are making things worse. So do not manipulate or update the state file yourself; it should effectively be read-only for humans — even when you store it in a remote backend, which I'm about to explain, give people read permission only; only Terraform should update the state file. There's one more practice — isolate and organize your state files to reduce the blast radius — but let me first explain remote backends and then come back to that point, because it only makes sense once you know what a remote backend is. In an ideal world, here is how your Terraform setup should look — just give me a second to bring up the diagram.
This is how your Terraform setup should look. DevOps engineers write the Terraform scripts and store them in a Git repository — say I'm the DevOps engineer, I wrote the Terraform files and pushed them to my Git repository. There are users in my organization — say users at Red Hat — who want to use these scripts, but as the DevOps engineer I did not grant them access to AWS; let's say nobody outside the DevOps team can create resources on AWS directly. If they want resources created, they use the Jenkins pipeline I set up, which watches for the Terraform files in GitHub, pulls the .tf files and executes them against AWS, and through output.tf the user gets the information about the resources that were created. That's the flow — and the piece I've added to it is that the state file is stored in Amazon S3. That arrow in the diagram means: the Git repository holds the entire Terraform configuration — as you can see, I only have configuration files there — but not the state file. The good practice is: store the state file not in Jenkins, not in GitHub, but in Amazon S3 if you're on AWS (on Azure, in an Azure storage container). Why? Because storing the state locally runs into all the issues I just explained, and putting it in GitHub has its own problems; but if you put it in a centralized location like S3, then whoever executes the Terraform scripts, Terraform automatically updates the state file in that central S3 bucket.

There is one more challenge this setup addresses: what if two people try to run the Terraform scripts in parallel? Maybe a hundred people use these scripts, and two users run them at the same time with conflicting instructions — one says "update the EC2 instance to two CPUs" and the other says "update it to four CPUs", and both run in parallel; which instruction should Terraform take? Terraform should not allow that. To avoid the problem, you integrate your remote backend with DynamoDB. Here DynamoDB is used for locking the Terraform state file: once the state is locked, Terraform tells the second user "the state file is locked, another user is currently performing infrastructure changes on your AWS resources, wait until that configuration is done" — and only once the state file in S3 is unlocked does it allow the next person to execute the scripts.
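For reference, the locking table an S3 backend uses is just a small DynamoDB table whose hash key must be named LockID. A sketch of creating it with Terraform might look like this — the table name and billing mode are placeholders:

  resource "aws_dynamodb_table" "terraform_lock" {
    name         = "terraform-state-lock"   # hypothetical table name
    billing_mode = "PAY_PER_REQUEST"        # on-demand capacity, nothing to provision
    hash_key     = "LockID"                 # the S3 backend expects exactly this key name

    attribute {
      name = "LockID"
      type = "S"
    }
  }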
So what should the ideal Terraform configuration look like? Always put your Terraform configuration files, the .tf files, in GitHub or some other version control system, but your Terraform state file should go into a remote backend. Remote backends are simply remote storage services: an Amazon S3 bucket, an Azure storage container, and so on. Store the state there and integrate it with a proper locking solution; in the case of AWS, that locking solution is DynamoDB. (Azure's storage backend handles locking on the storage container itself, so you do not need a separate DynamoDB-style table there.) Once this locking mechanism is in place, Terraform will not allow parallel executions of your Terraform scripts. This is the ideal Terraform setup that anyone should configure in their organization, and there is no excuse for skipping any part of it. You can replace Jenkins with any other CI solution such as GitHub Actions, and you can replace GitHub with any other version control system such as Bitbucket, Stash or GitLab, but you cannot change the flow itself; if you change this flow, you are using Terraform the wrong way. That is why I clearly call this the ideal Terraform setup. That is the concept of remote backends: in the earlier example we used Terraform with a local state file, but in real-time or production scenarios that is not how you work; you always configure a remote backend, and I will show you how in a moment.

Before that, let me explain the point I skipped on the previous slide: isolate and organize your state files to reduce the blast radius. In any organization you usually have multiple environments: UAT, staging and production, or dev, stage and prod. It is good practice to segregate your Terraform scripts and state per environment: these are my Terraform scripts for UAT, these for staging, these for prod, so that each environment has its own isolated state file. Then even if somebody accidentally messes up one state file, you reduce the blast radius and can recover or run disaster recovery for just that environment. So always keep your state files isolated; a small sketch of what that can look like follows below. These are some of the good practices with state files, this is very important, and these are exactly the kinds of questions people ask in Terraform interviews, which I will come back to in the interview question section.
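Here is the sketch of per-environment isolation I mentioned; the folder names and values are hypothetical, just to show the idea of one state file per environment:

```hcl
# Hypothetical layout: one folder (and therefore one state file) per environment
#   environments/dev/main.tf      -> key = "dev/terraform.tfstate"
#   environments/staging/main.tf  -> key = "staging/terraform.tfstate"
#   environments/prod/main.tf     -> key = "prod/terraform.tfstate"
#
# For example, in environments/prod/main.tf:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder bucket name
    key            = "prod/terraform.tfstate"    # separate state object per environment
    region         = "us-east-1"                 # placeholder region
    dynamodb_table = "terraform-lock"            # the same lock table can serve every environment
  }
}
```

That way a mistake in the dev state can never touch the prod state.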
Now let me show you how to configure a remote backend, because we have talked a lot about it. It is not rocket science; it is a very simple change to your existing setup. In the repository I have a local-state example and a remote-state example. You already saw the local state; if you go to the remote state, there is only one simple change. In this case I have also implemented outputs.tf for anybody who wants to take a look at it. You can run both examples as they are and they should work; if they do not, let me know in the comment section, but I have already tried them, so they should definitely work. If you open main.tf of the remote-state example, the only change is that we provide the backend details. But first you have to create the backend itself: in the previous slide I said I want to use an Amazon S3 bucket and DynamoDB, so we need to create the S3 bucket and the DynamoDB table. In the local-state main.tf I only created an EC2 instance; here I am additionally creating an S3 bucket and a DynamoDB table. Those are the two extra things compared with the local-state main.tf I showed you earlier; a rough sketch of those two resources follows below.

Once they exist, you simply add the backend details to your existing configuration. Remember the terraform block I showed you at the beginning, the one with required_providers? Right next to it you add the details of your backend. In the GitHub repository I already show how to create the S3 bucket and the DynamoDB table; once you have created both by running those Terraform scripts yourself, all you need to do is append the backend block. I actually have that change in my local copy and thought I had pushed it, but it looks like the repository currently only contains the S3 bucket and DynamoDB creation; once I push the commit you will also see how to wire up the backend. It really is a very simple change: add a backend block saying that your backend is S3, and provide your bucket name and your DynamoDB table name, the same bucket and table that these scripts create. The region, encryption and the remaining settings are less important. Once you provide these, Terraform is configured with S3 as its remote storage, and you will be able to implement this whole setup just by following my GitHub repository.
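Here is a rough sketch of those two extra resources; the names are placeholders rather than the exact ones in the repository, S3 bucket names must be globally unique, and the separate versioning resource assumes AWS provider v4 or newer:

```hcl
# S3 bucket that will hold the remote state file
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-terraform-state-bucket" # placeholder: must be globally unique
}

# Keep old versions of the state so you can recover if it ever gets corrupted
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# DynamoDB table that the S3 backend uses for state locking
resource "aws_dynamodb_table" "terraform_lock" {
  name         = "terraform-lock" # placeholder table name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"         # the S3 backend expects exactly this attribute name

  attribute {
    name = "LockID"
    type = "S"
  }
}
```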
Right now the repository only contains the code to create the S3 bucket and the DynamoDB table; I will make that remaining commit as soon as this session is done so that you can follow the end-to-end workflow for an S3 remote backend with Terraform. So that covers remote backends, the local state file, the issues with the state file and the good practices around it.

There is one more topic people usually ask about in Terraform, and one more thing you need to learn: Terraform modules. The concept of modules is quite similar across many DevOps tools, whether it is Jenkins, Terraform or something else: whenever you have reusable code, you package it as a module. Let's say your organization has code that stands up an EC2 instance plus an ALB, an Application Load Balancer, and that code is used very often. Or take the example I am showing here: in this main.tf I am writing Terraform code that creates an S3 bucket and a DynamoDB table. In your organization there might be multiple DevOps teams, or you might be doing the same thing for multiple environments, dev, staging and production. That makes this reusable code, because it can be used across teams and across environments. In such cases you put the example in a git repository, as I did, and other Terraform scripts then reference that repository as a module. Once they reference it as a module, Terraform pulls in and applies the module's resources as part of their run, so the users only have to worry about writing their own Terraform files; by consuming your example as a module they get S3 remote state and DynamoDB locking configured by default, and a lot of duplicated effort is avoided. So modules are the way to write reusable components, whether in Terraform, Jenkins or any other tool; a small sketch of what referencing a module looks like follows below. And what do I mean by existing modules? Beyond what we wrote ourselves, Terraform and its registry already provide modules for common building blocks, like the S3 bucket and DynamoDB table we used, so you can reuse those instead of writing everything from scratch.
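As a rough sketch of what consuming such a module could look like — the repository URL, module name and variable names here are hypothetical, purely for illustration:

```hcl
module "remote_backend" {
  # Hypothetical source: point this at the git repository (or a local path) that holds the reusable code
  source = "git::https://github.com/your-org/terraform-aws-remote-backend.git?ref=v1.0.0"

  # Hypothetical input variables such a module might expose
  bucket_name     = "my-terraform-state-bucket"
  lock_table_name = "terraform-lock"
}
```

After adding a module block you run `terraform init` again so Terraform can download the module before planning.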
Now that we have discussed all of this, it is essential to understand the problems with Terraform as well, because in the last hour we talked about Terraform modules, state files, remote backends, remote state configuration and the advantages of Terraform; let us also talk about some disadvantages, because when interviewers ask scenario-based questions they are most interested in the challenges and problems you actually faced with Terraform.

So what are some of the problems? To start with, the state file is a single source of truth. Whether you store it in a remote backend, on your local machine or in a git repository, it is always the single source of truth, so if the state file is corrupted, misconfigured, misused or accidentally deleted, you are in trouble. Accidental deletion is rare, and with S3 you have versioning so you can usually go back and fetch an older version of the state, but if you misconfigure the state file or do not know how to handle it properly, you have compromised it, and compromising the Terraform state file means you have effectively compromised Terraform and your infrastructure, because the state file is the heart of Terraform.

The second problem is that manual changes made directly on the cloud provider cannot be identified and auto-corrected. This is one of the major challenges with Terraform, and it happens very frequently in production support; ask anybody who uses Terraform. Somebody who has access to AWS, say someone on production support or someone who wants to change the AWS infrastructure, manually modifies a resource; in our case we created an EC2 instance with Terraform, and let's say somebody updated that EC2 instance directly in AWS, knowingly or unknowingly. Terraform is not bidirectional; it is not a two-way communication. Terraform only looks at the state file, so when you update the configuration locally it knows it is about to change the EC2 instance on the cloud provider and updates the state accordingly, but when somebody modifies the resource directly on the cloud provider, Terraform does not know about it. The next time you try to update that resource, Terraform will notice that what it has in the state file is not the same as what actually exists in the cloud, assume something was modified behind its back, and refuse to simply carry on; you then have to reconcile the state manually using the terraform refresh workflow. This is a real disadvantage, because what if many changes were made directly on the cloud provider, or what if you simply do not want anybody changing things outside Terraform at all? You would want Terraform to be bidirectional, so that if somebody changes an EC2 instance, a key pair or anything else on the cloud provider, Terraform automatically detects and corrects it; that is how you would expect such a tool to behave, but Terraform does not do that. The same limitation makes Terraform a poor fit for GitOps, which I will come to next, but first a short example of the reconciliation commands.
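A minimal sketch of that reconciliation workflow, assuming a recent Terraform version that supports the -refresh-only flag (older versions use the standalone terraform refresh command instead); the drift scenario itself is hypothetical:

```
# Hypothetical drift: someone resized the EC2 instance in the AWS console.
terraform plan -refresh-only    # show how the real infrastructure has drifted from the state file
terraform apply -refresh-only   # accept those real-world values into the state file
terraform plan                  # now see what Terraform would change to make AWS match your code again
```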
That is why Terraform is not a GitOps-friendly tool. If you follow GitOps principles, git is your single source of truth. Take the same example: I keep the Terraform files that create the EC2 instance in a git repository, and then somebody who has access modifies that EC2 instance directly on AWS. Now the information is inconsistent: what you have in git is no longer the single source of truth, there is a difference between the configuration files in the git repository and the actual EC2 instance on AWS, and Terraform does not automatically correct it. You might say you can integrate it with GitOps tools like Argo CD and Flux, but Terraform does not play well with them: Flux has been developing a Terraform controller, and I have tried it, but it is not that user friendly at this point, and Argo CD does not have native support for Terraform even today.

Then, Terraform can become very complex and difficult to manage. Ask anybody who manages Terraform across two or three accounts, or for a large AWS estate; at that scale there are a lot of things that can go wrong.

And finally, Terraform is trying to position itself as a configuration management tool. These days Ansible and Terraform keep crossing into each other's territory: people create infrastructure with Ansible and configure Kubernetes with Terraform. That crossover is not a good thing, because each tool should stick to what it was natively built for: Terraform is built for infrastructure automation, infrastructure as code, and Ansible is built for configuration management. Whatever the vision of the tools or their maintainers, at least in my opinion mixing them like this is not a good practice. So if somebody asks you about the problems with Terraform, you can definitely talk about these points; if you remember them and present them concretely to your interviewer, I am pretty sure they will be impressed, because these are practical, real-world problems with Terraform.

Finally, Terraform interview questions. I have already done a full-fledged video on Terraform interview questions and another on problems with Terraform, so you can follow those, but even from this session alone I have given you at least 10 to 15 interview questions along the way. As a rough overview: people can ask you to explain a scenario where you faced a problem with Terraform, and you can talk about the issues above. They can ask what modules are in Terraform, which I already explained. One of the most common questions is "tell me about your Terraform setup": are you keeping the state file locally, or in a git repository? You should be able to explain that no, we use Terraform with an S3 bucket as remote storage and DynamoDB for locking. People also ask what the purpose of DynamoDB is and why it has to be integrated with the S3 bucket; you tell them that during parallel executions the state must be used by only one person at a time, and DynamoDB introduces the locking mechanism that enforces that. Then there are questions like "what are some good practices with Terraform" and "what are some good practices for Terraform state". So we covered a lot of interview questions, and that brings me to one assignment I would give anybody watching this video.
It is a very interesting assignment. First, follow the GitHub repository and start with the basic example: write your own Terraform project, build everything using local state with the main.tf example I provided, then move to the remote state by creating the S3 bucket and the DynamoDB table. Second, prepare your own interview questions: write notes on the questions covered in this session together with your own answers. If you want, I can even review them; just mention in the comment section that you have prepared the interview questions as per the assignment and stored them in a git repository, and ask me to review them. I will definitely review them, and I always find it rewarding when somebody benefits from my videos. You can also post them on LinkedIn: say that you watched this video, got a clear understanding of Terraform, prepared the interview questions and wrote a blog about them, so that anybody who wants to learn Terraform interview questions can follow your blog, and I can be a reviewer for your questions and answers.

So this was a very detailed video on Terraform. It took me a lot of time to prepare the slides, the demo and the GitHub repository, so if you liked the video, click the like button, share any feedback, good or bad, in the comment section, and don't forget to share these videos with friends who are interested in learning DevOps. If possible, post about them on LinkedIn as well, so that the channel reaches a wider audience. Thank you so much, I'll see you in the next video, take care everyone, bye.
Info
Channel: Abhishek.Veeramalla
Views: 24,106
Keywords: devops mock interview, devops discussion, DevOps RoadMap, aws, azure, k8s, tanzu, vmware, openshift, redhat, gcp, openstack, aws complete devops project, devops proxy interviews, #bts, BTS, bts, Ansible, Ansible FAQ, Ansible Q&A, Devops interview questions with answers, terraform, terraform interview questions, terraform Q&A, git, github, azure git, gitlab, azure devops interview questions, devops interview questions for experienced, DevOPs Roadmap, Free Python, Python course
Id: CzdfdKWRDB8
Length: 77min 52sec (4672 seconds)
Published: Sun Jan 29 2023