8 Terraform Best Practices that will improve your TF workflow immediately

Video Statistics and Information

Captions
Terraform is one of the most popular infrastructure-as-code tools out there, and if you have just started working with Terraform, you may be asking yourself whether you are doing things the right way. So in this video you will learn eight Terraform best practices that will improve your Terraform workflows immediately and make you feel more confident when using Terraform in your projects.

Many of the best practices are around Terraform state and the state file, so let's quickly understand what they are first. Terraform is a tool that automates creating infrastructure and then making changes to and maintaining that infrastructure. To keep track of the current infrastructure state and the changes you want to make, Terraform uses state. When you change the configuration in a Terraform script, Terraform compares your desired configuration with the current infrastructure state and figures out a plan to make those desired changes. State in Terraform is a simple JSON file that holds a list of all the infrastructure resources Terraform manages for you. Because it's a simple JSON file, you could technically adjust the state file directly by manually changing its contents. However, the first best practice is: only change the state file contents through Terraform command execution. Do not manipulate it manually, otherwise you may get unexpected results.

Now, where does this state file actually come from? When you first execute the terraform apply command, Terraform automatically creates the state file locally. But what if you're working in a team? Other team members also need to execute Terraform commands, and they will need the state file for that. In fact, every team member will need the latest state file before making their own updates. So the second best practice is to configure a shared remote storage for the state file. Every team member can then execute Terraform commands using this shared state file. In practice, a remote storage backend for the state file
can be an Amazon S3 bucket, Terraform Cloud, Azure, Google Cloud, etc., and you can configure Terraform to use that remote storage as the state file location.

But what if two team members execute Terraform commands at the same time? What happens to the state file when you have concurrent changes? You might get a conflict or corrupt your state file. To avoid changing the Terraform state at the same time, we have the next best practice, which is locking the state file until an update is fully completed, and then unlocking it for the next command. This way you prevent concurrent edits to your state file. In practice, you will have this configured in your storage backend. With an S3 bucket, for example, the DynamoDB service is used for state file locking. Note that not all storage backends support locking, so be aware of that when choosing a remote storage for the state file. If locking is supported, Terraform will lock your state file automatically.

Now, what happens if you lose your state file? Something may happen to your remote storage location, someone may accidentally overwrite the data, or it may get corrupted. To avoid this, the next best practice is to back up your state file. In practice, you can do this by enabling versioning for it, and many storage backends have such a feature; in an S3 bucket, for example, you can simply turn on the versioning feature. This also means you have a nice history of state changes, and you can revert to any previous Terraform state if you want to.

Great, so now you have your state file in a shared remote location, with locking enabled and file versioning for backup. So you have one state file for your infrastructure. But usually you will have multiple environments, like development, testing, and production. Which environment does this state file belong to? Can you manage all the environments with one state file? This is the next best practice: use one dedicated state file per environment, where each state file has its own storage backend with locking and
versioning configured.

These were the best practices related to Terraform state. The next three best practices are about how to manage the Terraform code itself and how to apply infrastructure changes. These practices can be grouped into a relatively new trend that emerged in the infrastructure-as-code space, called GitOps. If you want to know what GitOps is, I have a separate dedicated video on that which you can also check out.

So let's see the next best practices. When you're working on Terraform scripts in a team, it's important to share the code in order to collaborate effectively. So as the next best practice, you should host Terraform code in its own Git repository, just like your application code. This is not only beneficial for effective collaboration in a team; you also get versioning for your infrastructure code changes, so you can have a history of changes in your Terraform code.

Before moving on to the next best practice, I want to give a shout-out to env0, who made this video possible. env0 automates and simplifies Terraform, Terragrunt, and GitOps workflows for provisioning cloud deployments. For example, it gives you visibility into infrastructure changes when creating a pull request and automatically deploys your changes after you merge them into your Git repository. With its self-service capabilities, env0 allows developers to spin up and destroy an environment with one click, but it also integrates policy-as-code guardrails to limit direct cloud access. Check out env0.com for all its use cases and capabilities.

Now let's continue with best practice number seven. Who is allowed to make changes to Terraform code? Can anyone just directly commit changes to the Git repository? The best practice is to treat your Terraform code just like your application code. This means you should have the same process of reviewing and testing the changes in your infrastructure code as you have for your application code, with a continuous integration pipeline
using merge requests to integrate code changes. This will allow your team to collaborate and produce quality infrastructure code that is tested and reviewed.

Great, so you have tested and reviewed code changes in your Git repository. Now, how do you apply them to the actual infrastructure? Because eventually you want to update your infrastructure with those changes, right? The final best practice is to execute Terraform commands to apply changes in a continuous deployment pipeline. So instead of team members manually updating the infrastructure by executing Terraform commands from their own computers, it should happen only from an automated build. This way you have a single location from which all infrastructure changes happen, and you have a more streamlined process for updating your infrastructure.

These are the 8 Terraform best practices you can apply today to make your Terraform projects more robust and easier to work on as a team. Do you know some other best practices that I forgot to mention in my video? Please share them in the comments. And finally, if you want to learn more about Terraform, check out my other Terraform resources, which I will link in the video description. With that, thank you for watching and see you in the next video.
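For reference, the state file mentioned in the video really is plain JSON. A heavily trimmed sketch of what one might look like (the resource and bucket names here are made up for illustration):

```json
{
  "version": 4,
  "terraform_version": "1.5.0",
  "serial": 7,
  "resources": [
    {
      "mode": "managed",
      "type": "aws_s3_bucket",
      "name": "example",
      "instances": [
        { "attributes": { "bucket": "my-example-bucket", "region": "us-east-1" } }
      ]
    }
  ]
}
```

The `serial` counter increments with every state change, which is one reason manual edits are risky: Terraform relies on this file being internally consistent.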
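The shared remote storage and state-locking setup described in the video can be sketched with an S3 backend block like the following (the bucket, key, and table names are placeholders, not from the video):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"        # shared remote storage for the state file
    key            = "project/terraform.tfstate" # path of the state file inside the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"      # DynamoDB table used for automatic state locking
    encrypt        = true
  }
}
```

With `dynamodb_table` configured, Terraform acquires a lock in that table before every state-modifying operation and releases it afterwards, so concurrent runs fail fast instead of corrupting the state.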
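Turning on S3 versioning for the state bucket, as suggested for backups, might look like this (assuming the AWS provider; the bucket name is a placeholder, and in practice the state bucket itself is usually created outside the configuration whose state it stores):

```hcl
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state"
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"   # every overwrite of the state file keeps the previous version
  }
}
```

With versioning enabled, each apply that rewrites the state file leaves the previous versions retrievable, giving you both a backup and a history of state changes.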
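One common way to realize "one dedicated state file per environment" is to give each environment its own backend configuration, differing at least in the state file key. A sketch (directory layout and names are assumptions, not from the video):

```hcl
# environments/dev/backend.tf — dev has its own state file
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "dev/terraform.tfstate"    # separate state per environment
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
  }
}

# environments/prod/backend.tf — prod state lives at a different key
# (or in a different bucket entirely, for stronger isolation)
```

Terraform workspaces are an alternative, but separate backend configurations make the isolation between environments explicit and let each environment use different credentials or even different accounts.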
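The last two practices, CI on merge requests and applying changes only from an automated build, can be sketched as a hypothetical GitHub Actions workflow (the workflow name, action versions, and branch name are assumptions):

```yaml
name: terraform
on:
  pull_request:           # CI: validate and plan on merge requests
  push:
    branches: [main]      # CD: apply only after changes are merged

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform fmt -check        # reject unformatted code
      - run: terraform validate          # catch configuration errors early
      - run: terraform plan -input=false # reviewers see the planned changes
      - if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve -input=false   # apply only from the automated build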
Info
Channel: TechWorld with Nana
Views: 154,353
Keywords: terraform, terraform tutorial, terraform best practices, terraform modules best practices, terraform state, terraform state file, hashicorp terraform, hashicorp terraform tutorial, techworld with nana, infrastructure as code, infrastructure as code terraform, gitops, terraform state locking, iac, devops, terraform tutorial for beginners, terraform advanced, terraform for beginners, terraform state file locking, terraform ci cd pipeline aws
Id: gxPykhPxRW0
Length: 8min 57sec (537 seconds)
Published: Mon Aug 30 2021