How to Build Reusable, Composable, Battle-tested Terraform Modules

Captions
I'm gonna start with just a little bit of a story, the context for what I'm talking about today. Imagine you just completed your latest application and you're ready to deploy it. You finished your Rails app, your Node app, whatever you're building, and it's ready to go to prod. So you think to yourself (the microphone is gonna make noise), you think to yourself, OK, I'm gonna use AWS. It's popular, everyone seems to be doing it. Great. So you log in to the AWS console for the first time, and you see a screen that looks something like that, which is a little terrifying, and you probably feel a little bit like that. It's OK. You spend a few hours reading documentation, browsing Stack Overflow, and eventually you get things more or less sorted out and you've got a single server up and running.

Now what? Well, you keep reading, and you realize that one server is probably not a great way to run a production site, so you probably need multiple servers. Now you start learning about Auto Scaling Groups and Availability Zones. And of course, if you have multiple servers, you've got to put a load balancer in front of them, so now you're learning about ALBs and ELBs and NLBs. And somewhere you've got to store your data, so now you're deploying databases and failovers and hard drives. And then, I guess, files: we'll put those in S3. I don't really know what S3 is, so I'm gonna go spend some time learning about that, and IAM policies. I guess I need monitoring, so now you have CloudWatch up there. And then you realize everything you've been doing is exposed to the public internet, because you deployed it into the default VPCs. So now you're kind of terrified, and you spend a lovely week of your life reading about VPCs and subnets and route tables and NAT gateways, and redeploying absolutely everything from scratch. Then you figure out you need a DNS entry, and maybe you need a TLS cert, and maybe you need to encrypt and decrypt secrets. And of course you need more than one of these: you have two environments, you might have five environments, you might have ten environments. So now you've got to manage all of this craziness, so you're looking into tools. You've got Terraform, you grab Docker, you're learning all these new technologies. And I've got to test everything, so now you're bringing in your CI servers. You've got to have alerts when things go wrong, so you hook those up, and your life looks something like that. Does that sound right? Is this what DevOps and infrastructure feels like?

So this is your life, but it gets worse, because now you have to maintain all of it forever, and that's actually really hard. Last year alone AWS had a thousand new releases, and if you look at this lovely chart, that's not slowing down, so you have 2,000 new things you have to think about. That web page you log into is getting bigger, much bigger. Terraform has a release every couple of weeks, faster now since they broke out all the providers. And maybe worst of all, there are security vulnerabilities every day. If you want to feel really sad about the state of our industry, sign up for a bunch of security advisories. You could almost turn it into a drinking game: take a shot every time there's a Linux vulnerability, a WordPress vulnerability. It's pretty bad, it's pretty terrifying. So I think this is an even better representation of what a lot of us feel like on a daily basis. There's a better way to do this. There's a better way to manage your infrastructure, to deploy infrastructure, that makes this a little bit less painful, and that's to use
modules. So that's the topic of today, and specifically I'm gonna talk about how to build reusable, composable, battle-tested infrastructure code. That's a bit of a mouthful, so I'll break it down throughout the talk. The goal is to walk you through how Terraform modules work, and to show you how you can turn all of that into a couple of pretty simple commands. That's our goal. I'm Yevgeniy Brikman; I go by the slightly easier to pronounce nickname Jim, so feel free to call me Jim. I'm the author of a couple of books. Who's read Terraform: Up and Running? That's amazing, now I feel good. I'm also co-founder of a company called Gruntwork, and at Gruntwork we've been building Terraform modules for a couple of years, and we've been using them to help our customers get up and running with all of their infrastructure as code. At this point, using modules, we can do it in about a day.

Outline of the talk: we're gonna talk about what modules are and where they fit in the world, we'll talk about how to use a module, and we'll talk about how modules work. That third section is probably the meat of the talk; that's where we'll go through all the nitty-gritty details of what's happening under the hood, and why modules are going to be useful for just about all use cases. And then we'll talk a little bit about what modules are going to look like in the future.

So let's start with: what's a module? The questions I'm going to answer here are where do modules fit, why should you use them, and what's new about this. Right now, if you're deploying infrastructure, you're really dealing with two types of providers. This is a simplification, but I think it's a reasonably accurate picture of the world. One, you can use Infrastructure as a Service: things like AWS, Azure, Google Cloud. These are providers that give you essentially a bunch of small standalone primitives, and it's up to you to figure out how to put all of those together, and as you can see in that diagram, there are a lot of things to put together. The second option, which tends to be a layer on top of Infrastructure as a Service, is Platform as a Service: things like Heroku, Docker Cloud, Engine Yard. These tend to hide all of those low-level details from you and give you a nice high-level API. Here's how you deploy an app, here's how you deploy a database, instead of focusing on wiring together VPCs and subnets.

The advantage of Platform as a Service is that it makes it very, very easy to get started. Hopefully you can read that: you basically create your app in whatever language, you run heroku create, you do a git push, and you have it live. All that stuff in that giant diagram is more or less done for you. But there are downsides. There are a lot of limitations to most Platform as a Service offerings. I'm picking on Heroku here; I don't mean to be mean to Heroku, I actually really like Heroku. But with a Platform as a Service, by design, because they hide those underlying details, there are going to be a lot of limitations. You can only deploy certain types of apps, in certain supported languages, that work over certain protocols, namely HTTP and HTTPS. You have a bunch of limits built in, it's hard to debug, it's hard to customize, it's hard to scale. So what tends to happen is that most software companies might use a Platform as a Service to launch, but once they grow beyond a certain size, they tend to fall back to Infrastructure as a Service, which means everyone is living this life. We're developers; we know how
to fix these things, and we know that it can be done better. Any time you see a pain point, you should think: there's an opportunity here. And the solution is to use code. Programmers know you can always just add another layer of abstraction, and specifically we're gonna use Terraform to improve the situation a little bit. Now, I'm guessing most of you know Terraform, but there's always a small percentage that have never used it, and they're gonna be very lost in this talk, so I'm gonna do a very, very quick primer on Terraform. Bear with me if you already know all this stuff. I'll also be doing live coding today, so you are all gonna be highly entertained when things go horribly wrong.

OK, so in this quick section, let me talk about what Terraform is. Terraform is a tool for provisioning infrastructure. The idea is that you write code that specifies how to deploy servers, databases, load balancers, your network topology; all of these pieces you can capture as code. Terraform works with a number of different providers (AWS, Google Cloud, Azure, DigitalOcean), so pretty much anything you would want is supported. Within each provider there are a number of resources you can create; that's over here in this left column. And this is essentially the hello-world Terraform example. Can you guys read that? OK, great. At the top we specify which of the providers we want to use. For this example I'm using AWS, and I'm gonna tell it to deploy into us-east-1. Then we create a bunch of resources for that provider. Here I'm creating an aws_instance, basically a virtual server, and within the body of this thing I specify parameters for it: the ami says basically what virtual machine image to run (I'm just running a plain Linux AMI), and the instance_type says what type of server to run, so this is a tiny little server with a gig of RAM and one CPU.

What Terraform is gonna do is read your code and translate it into API calls to whatever providers you're using, so in this case it's gonna make a bunch of API calls to AWS. Here's how that works; we'll go into the terminal. I'm in the folder that has that main.tf file, the hello-world example. The first thing you do is run terraform init, which downloads any plugins that you need, and then you run terraform plan. How's the terminal looking from back there, is that OK? All right, I'll have to scroll a little bit. The plan command shows you what Terraform is gonna do before it actually does it; it's a really great way to prevent shooting yourself in the foot on a regular basis. The plan output looks a little bit like diff output, and it's saying it's gonna create a single server for me. That's exactly what I want; the plan looks good. So to actually deploy it into your account, you run terraform apply, and now Terraform is going to read that code, find that I want an EC2 instance, and make the appropriate API calls to AWS to create that thing. You can actually see that happening in the background: if I go to my AWS account, there we go, we have a server launching right now. The server, if you notice, doesn't have a name. It's pretty easy to add one; we can do that by adding some tags, setting the Name to "example". OK, so that thing finished deploying. If I run plan one more time, my plan is gonna look a little different: Terraform is telling me that the server already exists, so it's not going to create it again.
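For reference, the hello-world configuration described here looks roughly like this; a minimal sketch, where the AMI ID is a placeholder rather than the exact one from the demo:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "example" {
      # The AMI ID below is a placeholder; any plain Linux AMI in us-east-1 would do.
      ami           = "ami-0123456789abcdef0"
      instance_type = "t2.micro"   # roughly 1 vCPU and 1 GB of RAM

      tags = {
        Name = "example"
      }
    }

Running terraform init, terraform plan, and terraform apply in the folder containing this file reproduces the flow from the demo.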
Instead, it's gonna modify it by adding a tag. That looks pretty good; let's run apply. All right, and if I refresh this, OK, my server is now called "example". Believe it or not, for those of you that haven't used Terraform, that crash course is 60 or 70 percent of what you need to know to really use the tool; the rest is just learning what the resources are.

There is one other thing in Terraform, though, which is quite relevant for this talk, and that's the idea of modules. You can think of a module as a blueprint: it captures, in code, how to deploy a certain type of infrastructure. In my case that might be a single server; in your case it could be how your company manages, let's say, a microservice, so it might be a cluster of servers with a load balancer, a database, and so on. All of that can be packaged as a single module. The way modules work (and this is the last primer I'll do, so the rest of you can wake up in just a minute) is that any Terraform code in a folder is actually a module. So this code that I created here is technically a module, and all I need to do is use it as a module. I'm gonna go into this other folder called my-service, and in here I'm gonna use the module keyword, call the module "foo", and in the source parameter I specify the path to where that module code lives, which in this case is that folder. That is gonna reuse all the code I have in this folder, which in this case creates a server. So if I go into that folder, run terraform init, and run plan, it should tell me that it's gonna create one server. OK, there we go. Modules are pretty easy to use.

What's cool is that, of course, I can reuse this code. I can create a second module called "bar", and now if I run terraform plan it should tell me that it's going to create two servers. There we go, "Plan: 2 to add", there's one server and there's the other. The other great thing about modules is that I can make them configurable. For example, in the module itself I can declare a variable called instance_name, and instead of hard-coding the name I can set it to that variable, and now when using the module I can set that variable to different values. This service is called foo, and this one is called bar, so if I run plan one more time, you'll see the first server has the tag set to foo and the second one has it set to bar. Everybody follow along with modules? Anybody find that confusing? OK, cool. So that is hopefully all the primer you're gonna need to understand the rest of the talk.

OK, so why do we care about these modules? What's the point? What's the advantage? The advantage is that if you package your code correctly (and you can obviously do it wrong, but if you do a good job of packaging your code as a module), then you're going to be able to use Infrastructure as a Service, so you have all the control and all the power that you need for your company, but you'll have a layer on top of it which is almost as easy to use as a Platform as a Service. You'll be able to speak in a higher-level language: instead of talking about VPCs and servers and CPUs and databases, you can talk about your app, your microservice; your entire infrastructure could actually be a module. And that makes it much, much easier to manage all of those thousand and one things that you have to deal with. What's really important is that with modules you have the code, so not only do you have the higher-level API, but you can still see what's happening under the hood, you can modify it, you can customize it, and we'll see a lot more of that a little later in the talk.
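Put together, the configurable module from this part of the demo looks roughly like this; the folder name hello-world is an assumption for illustration, but the instance_name variable follows the talk:

    # hello-world/variables.tf: the module's input
    variable "instance_name" {
      description = "The Name tag to apply to the EC2 instance"
    }

    # hello-world/main.tf: the hard-coded tag becomes a variable reference
    resource "aws_instance" "example" {
      ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
      instance_type = "t2.micro"

      tags = {
        Name = "${var.instance_name}"
      }
    }

    # my-service/main.tf: reuse the same module twice with different names
    module "foo" {
      source        = "../hello-world"
      instance_name = "foo"
    }

    module "bar" {
      source        = "../hello-world"
      instance_name = "bar"
    }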
The other great thing, as you may have heard this morning, is that you can share modules. One of the things that was announced this morning is the Terraform Module Registry. This is a public collection of modules that you can grab and use in your code, so that you don't have to build the code at all; you can let somebody else build it and maintain it for you. I have heard rumors that the registry is having issues; it made it to the front page of Hacker News, so that's always a good and a bad thing. OK, it seems to be up. So this is the Module Registry. It's still certainly early days, but you can find some useful modules in here. You can search, as it suggests. Cool, it's working. Search for Consul, and there's a module for Consul on Azure, on Google Cloud, on AWS. If you click on one of these, you can see the code that's in there, the documentation, whatever diagrams are in there; it'll tell you which input variables the module requires and which ones are optional, what outputs it has, what resources it creates, who created it. All that information is here. There are a bunch of these, and obviously this thing is going to grow quite a bit in the future. Here's Vault, for example, which was mentioned quite a bit in the keynote; there's now a module to deploy Vault very quickly and easily, and I'll show you that a bit later on. OK, I always have screenshots in the background in case the live demo goes bad, so I have to skip through a couple of screenshots.

All right, so that's what a module is and where it fits. How do you use the modules in the registry? That's the next question. As an example, let's talk about Vault. You want to deploy Vault because you want to secure your secrets, you want encryption as a service; it's a pretty nice tool. The old way of doing it looked something like this, and raise your hand if this is familiar. You open up the Vault documentation and spend a few hours reading it. You then deploy a few servers, you install Vault on them, you install some sort of process supervisor, you generate some self-signed certs because you want everything to go over TLS. Then you start learning about the Vault configuration file, you create the S3 bucket for it, you try to fire up the server, you get a crazy error, you spend two days of your life figuring out what the hell this error is, you find out that Vault is very picky about the self-signed cert and only accepts certain encryption algorithms, you flip a couple more tables, you get that working, and then you find out that for high availability you need Consul. So then you open up the Consul documentation, you start reading that, you fire up a couple of servers... OK, that's the old way.

The new way is gonna look a little different. The new way, you run terraform init and you run terraform apply, and when you're done you're gonna have this: a Vault cluster that uses S3 as a storage backend, a Consul cluster that Vault uses as a high-availability backend, and all the IAM roles and everything else you need for this thing to run. I'll show you a quick demo of that. The one thing I'll caution is that this terraform init syntax I'm using did not make it in time for the release, so for now you're just gonna have to git clone the repo, which does essentially the same thing. So what we'll do is find the module we want to use, open its GitHub page, and grab the clone URL.
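Once the registry syntax is available, referencing a module like this from your own code looks roughly as follows; a hedged sketch, where the registry address and the commented-out input names are assumptions for illustration, not the module's exact interface:

    module "vault" {
      source = "hashicorp/vault/aws"

      # Typical inputs you would fill in (names here are hypothetical):
      # ami_id       = "ami-0123456789abcdef0"   # an AMI with Vault installed
      # ssh_key_name = "my-key"
    }

    # Then: terraform init && terraform apply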
Essentially, on your computer, you would run git clone, paste that in, and hit enter. That does essentially the same thing the init command would; it just takes an extra step. OK, I've already done that, so what we have here is the result of what was cloned to my computer, which is a whole bunch of files. That's probably a little too big of a font... oops, that's too small of a font... all right, how about that? Folks in the back can read it? Awesome. So this is the code you have after you run git clone. Let's run terraform init in here to make sure all the modules are there. Before we run apply, we should probably run plan; it's a good way to make sure it's going to do what you expect it to do. Let's see what the plan shows us for Vault. OK, so this thing is going to create 33 resources, and you're welcome to browse through and see what it's doing: there are all sorts of security group rules in here, IAM policies, Auto Scaling Groups, and so on. That looks good. We'll just pretend I read all of that, and I'm gonna hit apply. It's actually starting to deploy; it's going to take a couple of minutes to spin up the clusters, so we'll let that run in the background.

In the meantime, what I want to show you is what actually got checked out. What got checked out is not this, but this: a bunch of code. At the top you may recognize the same sort of provider configuration, what version of Terraform we want to use, the AMIs; here's our Vault cluster, which is going to be an Auto Scaling Group with the Vault nodes in it; IAM policies, the user data script, the load balancer; here's the Consul cluster that's being deployed; and so on. That's what got checked out onto my computer. What's cool is that if I want to, I can ignore all of that. I can just run, like I just did, terraform plan and terraform apply, and I'm good to go. It'll deploy that whole thing, and I can start playing around with it and learning; you get to start with working code rather than a pile of documentation. But if I want to customize it, I still can, because I have all of the code. Some of you probably already have Consul running and don't want to deploy a new cluster. Cool, no problem: you go into this code, here's the Consul cluster, I don't want it, so delete it. Congratulations, you have full control over all of this code. You can run terraform apply now and it'll deploy without the Consul cluster, and you can change the configuration to use your own cluster. That's the basic idea: you have something that is essentially as easy to use as a Platform as a Service (it's not hosted for you, so a Platform as a Service still does more), but you have this nice high-level API that just says "Vault", and you don't have to worry too much unless you want to. And if you want to, you open up the code, you edit it, and you make the changes that you want.

OK, let's see if that thing deployed in the background. So that finished deploying, and it provides a bunch of useful outputs. There's also a handy little script in here that will essentially wait for the servers to actually come up, for Consul to automatically bootstrap itself, and for everything to connect. There we go, I guess everything actually booted up, and now it gives you some useful output: my Vault servers are running at these IP addresses, and I need to initialize and unseal my server. We can actually do that: copy that, SSH in... OK, there's my server. Let's make sure Consul is all right; yes, it's able to talk to Consul, no problem. I don't know how readable that output is at this font size, but
hopefully you get the idea that the Consul server came up. Now I can run vault status to see what's going on. It says it's not initialized, so we can initialize it. Basically, you have Vault and you can use it now. And you can all see my secret keys, which is the best security possible. But you have a fully working cluster, and you have fully working code that you can edit. That's really what the goal is here.

OK, so what's happening under the hood is actually very, very important, because a lot of people are probably thinking to themselves: well, that's a neat demo, you deployed the Vault cluster the way you wanted it, but I have special needs that are different, so there's no way this is going to work for me. And you're probably wrong. Here's why. To understand the design philosophy behind modules, and how you really want to build them, it's good to look at other programming languages. In other programming languages (this is vaguely Python) you have functions. If you have a piece of code that you want to use in a bunch of places, you put it in a function. The function has a name, it takes input parameters that have names, it returns outputs, and the cool thing is you can use it all over the place: you define the function once and use it again and again throughout your code. You can also test the function. It's hard to test a gigantic application, but a single little function you can actually test and make sure it behaves the way you want, which allows you to build on top of these nice, tested building blocks. Another really powerful idea with functions is composition: you can have multiple functions and use them together, passing the outputs of one as the inputs into another. And finally, abstraction: you could have a function that does something really, really complicated, where the body of the function has a large volume, so to speak, it's doing a lot, but the API that the function exposes, the surface area, is very small, so it abstracts away all of that complexity. That's why we use functions in normal programming languages, and for exactly the same reasons we're gonna try to build our modules in Terraform so that they have all the same properties: so that they're reusable, so that they're testable, so that they abstract away complexity, and so that they're composable. Those are the goals.

So what does a simple module look like? You've already seen a simple module: it's basically a few files in a folder that create some resources. The typical naming convention you should be using is that any inputs to your module, kind of like the inputs to a function, go into a variables.tf file, and you can put a description on each of them so humans can actually know what each variable is for. The outputs of the module go into outputs.tf, so people know what the module "returns", so to speak. And then everything else, the actual resources you create, goes into main.tf. It's usually a good idea to have some documentation for your code in a README file as well. The Terraform Registry will pick all of this up automatically, parse it, and show it correctly in the UI, especially if you're following these conventions. So that's a simple module, and some of the modules in the registry are just that: a few Terraform files in a folder.
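A minimal sketch of that file layout, using example names rather than names from any particular module:

    # variables.tf: the module's inputs, each with a description
    variable "instance_name" {
      description = "The Name tag to apply to the EC2 instance"
    }

    # outputs.tf: the module's "return values"
    output "instance_id" {
      description = "The ID of the EC2 instance created in main.tf"
      value       = "${aws_instance.example.id}"
    }

    # main.tf: everything else, i.e. the resources themselves
    # (plus a README.md next to these files describing how to use the module)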
But there are more complicated modules, and how you build these is very important. A more complicated module is gonna have the same basic files (you still have the Terraform code in the root, you still have documentation), but we're gonna add three new folders: modules, examples, and tests. The modules folder holds what we call submodules. There's a little bit of terminology confusion here, because these are also, technically, modules. Each of these submodules solves one orthogonal problem within whatever the overarching thing is. To make that a little more concrete, let's look at the actual Vault code. Here's the modules folder for Vault, and what you'll find is that the Vault implementation actually consists of a large number of these submodules. There's one, for example, that defines just how to run the Auto Scaling Group for Vault. There's a completely separate module that deploys the load balancer, because not everyone is going to need a load balancer, and by putting it in a separate submodule you now have the option to include it or not. There's a separate module for security group rules. There's a separate one for generating that self-signed cert that actually works correctly. There are even submodules that are not Terraform at all, because one of the things you need to do with Vault is install it on your OS and run it, which Terraform doesn't really do for you, so there's actually Bash code: a nice, well-documented Bash script that will install Vault. If you want to use it, you're welcome to; if you want to use something else (Ansible, Chef, whatever you prefer), you're welcome to do that, because this code is in a separate submodule. They're all standalone; that's the key.

The power of something like this, if you design it correctly, is that these submodules each handle one use case. One of them, for example, we use to build the Amazon Machine Image that has Vault installed and has a TLS certificate. We have a separate submodule that runs the cluster, a separate one for handling the S3 bucket, another one for security group rules, another one for the load balancer. You are welcome to use all of these together; if this is exactly what you wanted, you're done, you don't have to change anything. But if you have custom needs, and everyone has a little bit of custom, you'll probably be able to use 80% of this, and the 20% you want to customize you can swap out for your own code. Maybe you use some other load balancer instead of the one we deploy, maybe you want to use Chef to configure your servers; you can do that and still use the other submodules.

The examples folder is essentially executable documentation: it shows you how to use all those different submodules in different permutations. For example, if we look at Vault again, you'll find in the examples folder a root example, which is kind of the de facto one; a private-cluster example, where you can read through how to deploy a completely private cluster that's not accessible from the public internet, which is, by the way, the recommended deployment model for most use cases; and an example of how to build an AMI, an Amazon Machine Image, for Vault, which uses those scripts to install and configure Vault. So there's a bunch of example code in here. What's worth mentioning is that the root example, the one I deployed right at the beginning, the one that sits in the root folder, really is an example: the code at the root of these complicated modules is an opinionated, canonical example. It's the "you just want to get up and running with Vault really quickly, here it is" one. If you want something custom, you can look at the other examples, you can look at this one, and you can shuffle things around and make them work the way you want. And by the way, some of the examples combine not only submodules from Vault but also submodules from Consul, or even completely other systems. Vault tends to be used with Consul, so we're using the Consul Terraform module in the example code as well. If you think about it, that's function composition: we have one function that deploys Vault, we have one function that deploys Consul, and we can cleanly put them together and take the outputs from one and send them as the inputs to another.
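In Terraform terms, that composition might look roughly like this; a hedged sketch where the module sources, input names, and output names are illustrative, not the real interface of the Vault and Consul modules:

    # One "function" deploys Consul (the source path and inputs are illustrative)
    module "consul_cluster" {
      source       = "github.com/hashicorp/terraform-aws-consul//modules/consul-cluster"
      cluster_name = "example-consul"
    }

    # Another "function" deploys Vault, and the Consul module's outputs feed its inputs,
    # just like passing one function's return value into another function.
    module "vault_cluster" {
      source       = "github.com/hashicorp/terraform-aws-vault//modules/vault-cluster"
      cluster_name = "example-vault"

      consul_cluster_tag_key   = "${module.consul_cluster.cluster_tag_key}"
      consul_cluster_tag_value = "${module.consul_cluster.cluster_tag_value}"
    }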
Finally, we have the test folder. Not every module is gonna have tests; we try to test absolutely everything we build. I wish I could tell you that testing is easy and that I have some magical sauce I'm just gonna offer you so you can all snap your fingers and have well-tested Terraform modules. The reality is it's not, and the reason is the type of language we're dealing with. If you're using Ruby or Python or some general-purpose programming language, you're able to do unit testing: you can isolate some part of your code from the rest of the outside world and test just that code, and those tests are very predictable, they run really quickly, and they allow you to build on top of nice, well-tested building blocks. With Terraform, and really any infrastructure-as-code tool, you don't have that, because the whole purpose of Terraform and infrastructure as code is to talk to the outside world. It makes API calls to AWS and Azure and Google Cloud, so you can't really have a "unit", because if you remove the outside world there's nothing left. Pretty much all of your tests for Terraform are inherently going to be integration tests. That does mean they're gonna be a bit slower, it does mean they're gonna be a little more flaky (because things in the outside world tend to change and break), and it does mean it's gonna take you a little longer to write them, but they're also extremely valuable. We're actually able to maintain a pretty large library of modules precisely because each one of them has thorough automated tests.

So how do you test them? There's no real magic; I'll show you the code and you'll see very quickly that this is not magic at all. If we open up the test folder, we actually have a test case for each of the examples, which is good, because then when you try the examples they hopefully actually work. We write our tests in Go. We have a little DSL library we wrote that's essentially a wrapper for running terraform apply, for running packer build, and for running SSH commands; it's just a Go wrapper for all of that. If we dive into the code, the test looks something like this (I'm gonna skip over some of the fine details). We pick a random AWS region to deploy into, to make sure the code doesn't have a bug specific to us-east-1 or some region like that. We run that submodule I showed you earlier to generate a self-signed cert, and under the hood this is just running terraform apply. We run packer build to build our AMI, and you're welcome to dig into the code and see what's going on here: it's literally running a shell command, packer build. We then take that AMI ID, plug it into our Terraform code, and run "deploy", which is really just running terraform apply. The way the tests work is that you run terraform apply to deploy the code into a real account, you then validate that the thing works the way you expect it to, and then you run terraform destroy to clean up.
It's not magic; it's just what you would have done manually, but you can automate it with a little bit of work. So we run terraform apply, and then we make sure the Vault cluster works the way we expect it to, and you can read the code: we establish a connection to the cluster, we wait for Vault to boot up, we run that init command I showed you earlier (we literally SSH into the box and run vault init), we grab the keys it returns, we use them to unseal the nodes, and so on. We go through that whole flow after every single commit to make sure this thing works the way we expect it to. And we can do that because we build the module just once: instead of building it for every single company individually, we build it once, so we can take some extra time to really test this thing and make sure it works. That's the beauty of reuse; that's the leverage you get from modules. We do all of that, we make sure it works as expected, and then, as you may have seen here, we have a defer, which will run terraform destroy at the very end, whether the test succeeded or threw an exception or anything else; that's why you put it in a defer. So that is testing in a nutshell. It's a messy business, it's very, very valuable, and hopefully we'll get slightly better tooling for it over time.

OK, I showed you that stuff. Using a module: we've talked about using simple modules, and complicated modules are essentially the same. You can point the source parameter at the root and get that nice canonical example, or you can point it at any of the submodules. You're welcome to glue the submodules together in a variety of ways; again, function composition. You can call this little function and this little function and this little function, glue all of their values together to fit your use case, and still be using this nice, tested code under the hood. You get abstraction, because you get a simple API in front of something that's pretty complicated, like Vault. You get reuse, because you can use that module many, many times. And you get one other thing, which is a pretty powerful idea, and again, if you do this right, it's going to change how you build and manage your infrastructure: versioning. So far the source parameter up there was set to a file path, just a local folder on my computer, but with the newest version of Terraform you can actually set it to a registry URL, and it'll download the module from the registry. You can also set it to a Git URL, to have it download from any of your Git repositories. And what's really cool is that for both registry URLs and Git URLs you can specify a specific version; usually you're going to point it at a Git tag, so now you're using a very specific, fixed, immutable version of your module. That's a powerful idea. Most modules are gonna use semantic versioning, so you know whether the changes are backwards compatible or not, and you get better infrastructure just by bumping a version number. As Vault has new releases, the Terraform module for Vault will have new releases, and you'll be able to upgrade by bumping a version number. But what's really neat is when you start doing this for your own infrastructure.
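Concretely, pinning a module to a version looks roughly like this; the module addresses, repository URL, and version numbers here are illustrative:

    # Pulling a module from the registry, pinned to a specific version
    module "consul" {
      source  = "hashicorp/consul/aws"
      version = "0.1.0"
    }

    # Pulling a module straight from a Git repository, pinned to a Git tag
    module "vault" {
      source = "git::https://github.com/hashicorp/terraform-aws-vault.git?ref=v0.1.0"
    }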
If you build a module inside your own codebase for how you deploy your microservices, or how you deploy your databases, or maybe your entire infrastructure as a module, you can version it. You can create this immutable artifact that represents your infrastructure, and now you can take that artifact and promote it from environment to environment to environment. This is the ultimate version of immutable infrastructure: it's not just an app that we're moving from one environment to the next, we're moving the entire infrastructure definition, immutable, from one environment to the next. And you can be reasonably confident that if this thing worked in the QA environment, then, because the code doesn't change at all when you move it to stage, it'll probably work in stage, and if it works there, it'll probably work in prod. This is beautiful, and this is a very powerful concept. If you start using it, the way you manage your infrastructure will just feel different.

OK, final thing before I let you all go eat (I'm actually doing pretty well on time): the future of modules. The summary of the talk is something like this, courtesy of Tyler Durden: your infrastructure isn't that unique. I don't want to burst your bubble or anything, but the reality is you all recognize this diagram, you all want this, you all need this. Everyone has the same general underlying needs. Your applications are very different, how you use that infrastructure is very different, but the underlying infrastructure is more or less the same. And what makes me a little sad as a developer is that we have thousands and thousands and thousands of developers building the same infrastructure, running into the same bugs, and having the same security holes, in a thousand different companies. If you drive down any road in Austin, in New York, in Silicon Valley, you're gonna go by somebody trying to deploy Vault, trying to deploy Kafka, trying to deploy MongoDB. Why? It's the same. Why are we wasting time doing the same thing over and over again? We need to stop reinventing the wheel. It's not healthy, it's not secure, and it slows everyone down.

So the takeaway from this talk is basically a few things. One, you should be building on top of battle-tested code: not something you threw together today after reading the documentation for an hour, but something that's been tested in production, something that's been used by many, many different people, and ideally something that's commercially supported. Some of the modules in the registry are backed by companies who will provide support in case things go wrong or if you need help with the module. But most importantly, build on top of code, and I think this is very much in line with HashiCorp's philosophy as well: the key abstraction, the key tool, is code. You don't want to manage your infrastructure by hand, you don't want to be clicking around a user interface to do it (that doesn't work), and you don't really want to be using a Platform as a Service for really large things either, because again, you don't have the power or control that you need. You want to use code; code is where all of this power comes from. And the advantages of code are pretty amazing. If a sysadmin in one company spent six months figuring out how to run MongoDB, then a sysadmin in another company will have to spend six more months doing it again. If you capture that in code, the work from the first person is immediately reusable by the second. You can compose modules: if you build your
modules correctly, they can be extremely reusable for a very wide variety of use cases. You can configure them by exposing parameters. You can customize them, because you have the code and you're welcome to modify it. You can debug things, because again, you can see the code and see what's happening. You can write automated tests; it's not easy, but it's doable, and it allows you to build your infrastructure on top of well-tested pieces. You can version the code, so that you can promote that infrastructure from environment to environment. And you can document it. I'm sure some of you have been at a company where there's one person who knows how the infrastructure works; they clicked around for a while to deploy everything, and if that person leaves, your company shuts down for a while. With code, you can read the code and figure out what they did. It's captured for you; it is documentation. At Gruntwork we've been building these modules for a couple of years, and we've gotten to the place where we have a bunch of companies all using the exact same infrastructure, so that when one of them finds a bug, we can fix it for everybody, and when we build a feature for one, we can push it out to everybody. The result is that we can take this horrific, terrifying, familiar diagram and basically turn it into just a couple of commands, and now that's gotten a whole lot easier with the release of the Terraform Module Registry. That's it. Thank you very much. [Applause]
Info
Channel: HashiCorp
Views: 128,769
Id: LVgP63BkhKQ
Length: 38min 59sec (2339 seconds)
Published: Thu Oct 12 2017