Evolving Your Infrastructure with Terraform

Reddit Comments

We've just started considering Terraform to replace our hodge-podge of provisioning tools, and I found this video extremely informative for the new user.

👍︎︎ 3 👤︎︎ u/esquimaux73 📅︎︎ Dec 18 2017 🗫︎ replies

Terraform is a great tool. I've used it on AWS and Azure. A breath of fresh air if you're suffering with CloudFormation.

👍︎︎ 1 👤︎︎ u/neilhwatson 📅︎︎ Dec 18 2017 🗫︎ replies
Captions
I'm very excited to welcome Nicki Watt to the stage here to talk about evolving your infrastructure with Terraform. Please join me in welcoming Nicki to the stage.

OK, welcome everybody. So we are talking about evolving your infrastructure with Terraform. Just a little bit about me and the company: I'm CTO at a company called OpenCredo, and we're a hands-on consultancy that specialises in helping organisations adapt and adopt emerging technologies to solve their business problems. Quite often this involves building end-to-end, large-scale applications and systems, and a large part of making that a success is implementing continuous DevOps practices, tooling and approaches. Obviously this is where much of the HashiCorp tooling comes in quite handy, so as a premier HashiCorp partner we've dealt with quite a lot of different clients along their Terraform journeys and helped them along the way. This talk is really going to pull together some of the insights from the various clients that we've worked with, and some of the journeys they've had in terms of evolving Terraform as they've moved along.

In terms of the agenda, we're going to focus primarily on how you can evolve Terraform to progressively adapt and manage your infrastructure as your organisation and your infrastructure change. We're also going to look briefly at the related topic of orchestrating Terraform and some of the challenges in that area, and then we'll conclude.

To start off, we're going to follow the journey of a representative client — in this case a combination of representative clients — as they embark on a Terraform journey, starting out using Terraform to create their infrastructure. We're going to highlight some of the common pain points that people typically encounter as they go
along this journey, and then have a look at how we can actually evolve Terraform as we go through this process. Hopefully as a result we'll emerge with a better understanding of how you can use Terraform to evolve infrastructure, and in doing so we'll also identify some common patterns and approaches that people find themselves in. I do want to stress there's no absolute right or wrong way of doing things. Different clients have very different sets of requirements, and although I'm going to give you a linear progression of how this representative set of clients evolved, yours may not look exactly like that. The aim is really to highlight the main areas that you might encounter as you go along your own Terraform journey.

So to make it a little less abstract and a little more concrete: we're going to say we have our representative client, and they're trying to deliver an e-commerce system. This is delivered as a set of microservices on Amazon infrastructure, and in this case they're choosing to use Terraform to create the underlying environment itself, using Kubernetes as the mechanism for actually deploying the microservices. There is newer functionality in Terraform that could be used to deploy the microservices through Kubernetes itself, but it's not going to be used for that here; we're literally going to use Terraform to create the underlying infrastructure.

The abstractions are relatively simple to begin with. We start off with an Amazon VPC; we have a public subnet where we're going to have things like a NAT gateway and a bastion box; and then we've also got a single private subnet where we're going to house the Kubernetes cluster, with maybe a single master node and three worker nodes to begin with. For any database needs, we're going to use Amazon RDS to make that possible.
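As a rough sketch of that starting point (resource names and CIDR ranges here are my own illustrative assumptions, not from the talk), the bones of such an environment in 0.9-era Terraform might look something like:

```hcl
# Illustrative only: a VPC with one public and one private subnet
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.1.0/24"   # NAT gateway and bastion live here
}

resource "aws_subnet" "private" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.2.0/24"   # Kubernetes master and worker nodes live here
}
```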
So we have Terri. She's starting out as a DevOps engineer, and she's just discovered programmable infrastructure and Terraform. She's really excited: this is going to make a big difference in helping her manage the infrastructure differently. To start off with, she creates a small proof of concept for getting up to speed with Terraform, and quite often it'll start looking something like this: a single Terraform file which defines the resources she wants to create, some hard-coded values, maybe a few variables as well, and a local .tfstate file. She likes what she sees. She's quite happy with this proof of concept, so she even starts creating the test infrastructure off the back of it.

Now time rolls on, pressure builds, and the need to deliver more formal environments becomes more urgent. The business says: actually, I really needed this production infrastructure yesterday — can you create it? So she decides: OK, my best course of action is to take the proof-of-concept setup that I originally created and turn it into my test and production infrastructure artefacts. She basically starts by taking a copy of the test resources that she originally had and duplicating them for the production setup. But she thinks, maybe I can do a little bit better: at least I can create two separate files, one for the Terraform production setup and one for the Terraform test setup — but we still maintain things with a single shared .tfstate file. She runs terraform apply; the test infrastructure comes up, the production infrastructure comes up, and all is well.

Now time passes again. The test team come along and say: actually, I need a change to the test infrastructure. We want to look at expanding the Kubernetes cluster; first, though, we need to increase the CIDR range of the VPC — can you please make the change for us in test? She says, this is easy: I'm
just going to go to my test file and change that particular setting to make it a little bit bigger. But I also want to make sure I don't impact production, so I'm just going to rename my production terraform .tf file to a .tf.backup file, to make sure that Terraform doesn't actually change any of the production infrastructure. Terraform apply, off she goes — and as you can imagine, things didn't really go too well for Terri. Although she renamed the file, as you're all aware Terraform treats the state file as the single source of truth, and Terraform thought that she had actually removed the production resources, so it ended up deleting everything in production.

This initial setup is what I would call a classic Terralith. That's my name for a relatively monolithic configuration, and one of the typical reasons why you see this pattern emerging with clients is that they take a proof-of-concept setup and evolve it quite quickly into production without necessarily thinking about splitting things up — and it happens more often than you'd think. The characteristics of a Terralith are that you have a single state file which rules everything, both your test and your production infrastructure; you typically also have a single definition file where all of the resources are defined, some hard-coded config, and management in local state.

The primary issue with the Terralith is that you can't manage the individual environments differently, so it's very risky from an operational perspective: you go to make a change to the test system and you inadvertently change production. Terri now knows this is problematic, but she's found a few other problems with this particular setup too. The configuration is not that intuitive — I've got this big file of stuff and I'm not really sure exactly what's going on here — it's quite a lot of maintenance for me, and there's a lot of duplication in the definitions.
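To make the Terralith shape concrete, here's a minimal sketch of what that single configuration looks like (names and CIDRs are illustrative assumptions):

```hcl
# Terralith: test and production resources side by side in ONE configuration,
# all tracked in a single local terraform.tfstate
resource "aws_vpc" "test" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_vpc" "prod" {
  cidr_block = "10.1.0.0/16"
}

# ... every other test and production resource follows in the same file(s),
# so any refactor (like renaming a file away) looks to Terraform like a delete
```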
Maybe there's a way we can try to sort this out. So she says, let's get some help in and see if we can evolve this.

So we end up evolving the infrastructure for the first time, and we move to what I would call the Multi-Terralith. This is basically the second phase. The biggest change you can make to improve this setup is to have separate state management per environment, and this is a massive bonus in terms of reducing the risk from an operational perspective — at least you're not destroying your production infrastructure as you go along. To deal with some of the maintenance and readability issues, we're also going to move to multiple Terraform definition files and start using variables a little bit better.

In terms of actually managing the environments separately: in the single repository that we had before, we just create two different directories, a test and a production one. We copy all the resources over — it's duplicated — and we make sure that we have a separate .tfstate file managing each of our test and production setups. To help make things a little more reasonable, we've also broken that single file down into multiple files. Different clients do this differently: sometimes they'll break it down at a technical level — in this case she decided to go for networks and VMs — but other people will break it up into logical components as well; whatever makes sense is fine. Also, to make things a little easier to read and manage, you've now got variables, so at least we can define which aspects I want to make configurable in my environment, compared to the stuff that I want to keep hard-coded.

So we've at least evolved our infrastructure to a point where it's a little more manageable now. Our original pain points with the Terralith were that we couldn't manage our environments separately.
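The Multi-Terralith layout described here might look roughly like this (the file names are illustrative, matching the networks/VMs split mentioned above):

```
repo/
├── test/
│   ├── networks.tf
│   ├── vms.tf
│   ├── variables.tf
│   └── terraform.tfstate   # test state, separate from prod
└── prod/
    ├── networks.tf
    ├── vms.tf
    ├── variables.tf
    └── terraform.tfstate   # prod state, so test changes can't touch it
```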
It was quite hard to understand, and there was a lot of maintenance because of the duplication. With our Multi-Terralith we've definitely ticked the first box: we've at least managed to get to a setup where we can manage our environments separately, and we've done some work on making the configuration a bit more intuitive — though you could argue there's still a bit more to do there. In order to move forward, and also address some of the duplication, we need to evolve our infrastructure again.

So we move on to the third evolution of our Terraform setup, and this is one that I would call the Terramod setup. As the name implies, it's a version of your Terraform configuration that really makes use of modules, in order to create reusable components that you can start composing your infrastructure out of. Terraform has built-in support for modules, and we're going to use this as the basic building block for changing our setup. The characteristics of the Terramod setup are that, as I said before, we're going for reusable modules; we're going to change our environment definitions to start composing themselves out of these module definitions; and we're also going to have to change the repository structure a little bit as we go along.

Terri has decided how she wants to logically break up her modules — she's got to find some way of splitting them — and she's decided to go for three main areas: a core area, a Kubernetes cluster area, and a database area. The core area she sees as the fundamental part of the Amazon structure: things like the VPC, all of the subnets, plus the creation of things like the bastion host. Then there's a Kubernetes cluster module which is going to hold all of the Kubernetes setup, and a separate area for the database. So she's going to have her modules split up that way in
terms of restructuring: in the single repository that we had, we now have an environments directory, and we create a test and a prod area as we had before. We also have a separate modules area, and we then start to define the different logical components — in this case we've split it up by the three areas, database, core and kubernetes — and we define those underneath the modules area.

Now, for each module — if we have a look at the core module here — we want to define all of the resources that make up just that one particular component. So for the core setup we create things like the Amazon VPC and the public subnet and the private subnets that underpin the core area. We also want to make sure that we have a very clear contract that defines what the inputs and outputs are that constitute this particular module. The convention that I tend to use is to have an input.tf file, which very much specifies what I expect to be able to configure my module with, and an output.tf to define the outputs. In this particular example, you can pass in things like the CIDR range — how big your VPC is going to be — and likewise how big you want the DMZ and private subnet CIDRs to be. We also have outputs, and these are required because our modules are going to have to start being composed together, which means we have to decide which outputs I want to make available from my module so that the other modules can actually consume them — and this has to be done explicitly, by exposing outputs. As I said before, we want to make sure that the modules have a clear contract as to what we expect the inputs and outputs to be.
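A minimal sketch of that contract convention (the variable and output names here are my own assumptions, in 0.9-era syntax):

```hcl
# modules/core/input.tf -- the configurable surface of the module
variable "vpc_cidr"            {}
variable "public_subnet_cidr"  {}
variable "private_subnet_cidr" {}

# modules/core/output.tf -- explicitly exposed so other modules can compose with it
output "private_subnet_id" {
  value = "${aws_subnet.private.id}"
}
```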
So for each environment, the Terraform file now becomes more of a gluing file: rather than having all of the resources defined together, we now specify that my environment consists of a kubernetes-cluster module, a core module and a database module, each referring to the modules that we've just created. But we'll have to start weaving the outputs of one into the inputs of another. Because we're using modules, we can take an output of a module — one that we explicitly exposed in an output.tf file — and weave it straight into one of the others. The example here is that our core module creates our private subnet, and we need that private subnet ID to be passed as an input into our kubernetes-cluster module, so that we can make sure the cluster gets created in the right subnet; the example is just standard Terraform code for how you do that.

Crucially, because all of the modules are configurable and there's a very clear contract, we can start configuring things differently for the different environments. Maybe you want to say: in my test environment I only need three nodes for my Kubernetes cluster, but in production I want five. Now that you've got separate areas for your test and production, you can have different variables that configure things differently. Some clients will take this even further and have quite different test and production setups — the test one not quite as complicated as production — and you can compose things differently depending on what you're trying to do.

So if we go back to the Multi-Terralith, which was the previous setup: we'd at least managed to evolve our environments separately, and we had more intuitive configuration. With the Terramod setup we've taken the intuitive configuration a lot further — when you look at an environment now you can say, oh, my environment is composed of a kubernetes cluster, a database and a core module. We've also gone some way to reducing the duplicated definitions.
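The weaving described above might look roughly like this in an environment's gluing file (paths, names and node counts are illustrative assumptions):

```hcl
# environments/test/main.tf -- composing the environment out of modules
module "core" {
  source   = "../../modules/core"
  vpc_cidr = "10.0.0.0/16"
}

module "kubernetes_cluster" {
  source = "../../modules/kubernetes-cluster"

  # weave core's exposed output straight into the cluster module's input
  private_subnet_id = "${module.core.private_subnet_id}"
  node_count        = 3   # the production environment's file would say 5
}
```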
Previously we were duplicating everything in the test and the production setup; now we're composing with modules and just passing in different values, and that's made the setup a lot more DRY — don't repeat yourself, the programmer's acronym. So this is great, but we now move on to the next setup, which will allow us to further reduce the duplication; we need to evolve our infrastructure again, to what I would call the nested Terramod setup. This one really builds on the Terramod setup: it takes the use of modules to a new level, and you end up having nested modules — modules within modules. The characteristic of this setup is that the nested modules typically come in two flavours: there's a set of base modules, which are more low-level, infrastructure-type setups, and then you have the logical, or system-specific, modules, which are the ones we've already seen. Sometimes people will end up creating their own separate module repository; for the moment we're just going to stick with one, but that's also something people end up doing.

So, where we left off, we had our structure with the environments and our module definitions, and we had already structured things into logically composed modules. Now we simply add these base modules as well. An example here: maybe you want very low-level modules — this is exactly how I create a VPC in Amazon, or this is how I create a public or private subnet in Amazon — those are the base, infrastructure-specific setups. Previously, in our core module, we had all of the raw resources defined directly in there — the VPC and the subnets — and this now changes to be composed of modules itself: we have our core module being composed of our base modules.
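A sketch of what the core module's internals might become under the nested approach (module paths and output names are assumptions):

```hcl
# modules/core/main.tf -- the core module now composed of low-level base modules
module "vpc" {
  source     = "../base/vpc"        # base module: "exactly how I create a VPC"
  cidr_block = "${var.vpc_cidr}"
}

module "private_subnet" {
  source = "../base/subnet"         # base module: "how I create a subnet"
  vpc_id = "${module.vpc.vpc_id}"   # assumes base/vpc exposes a vpc_id output
  cidr   = "${var.private_subnet_cidr}"
}
```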
It doesn't have to only be this way: you can compose system modules from system modules, and base modules from base modules — it really depends how far you want to go. But there is a current issue in Terraform which prevents you from fully taking advantage of this, and that is the lack of support for a count parameter on modules. As some of you are aware, on resources you can typically say: on an instance, I want five of these, and Terraform will take care of creating them for you. Unfortunately you can't do that for a module — you can't say I want five of this module and have them created on the fly. It's a little bit of a pain, because what you end up doing in your environment Terraform file is duplicating the definitions: if you wanted, say, three private subnets, and that's how you've defined your modules, you literally have to define the module three times in the environment configuration file. It is a pain, but you can work around it.

So, to recap: our Terramod setup had addressed most of the duplicated definitions, and our nested Terramod takes that even further with nested modules — we've got to the point where we've managed to reduce duplication as far as we can, given the current restrictions.

So this is great. Terri is really chuffed; things are working out very well for her, and she hasn't accidentally destroyed production recently, which is a good thing. She understands her code. In fact she's building a team now, with some new team members she wants to teach the ropes. Recently she got a bit of a heads-up from the finance guys, though. They said: we've been getting some information from analytics about the environments, and your bastion box is really costing a lot of money — we think it's over-provisioned, and you need to reduce the size. She reckons this is
not a problem: it's a really simple change, I can give it to one of my new team members, Frankie, and he's going to make the change for me. So Frankie goes along, he downloads the repository, he locates the correct environment production file, and there's the variable — there it is, the bastion flavour, an r4.large. A little bit big; let's make it an m4.large. This should be fine — I've double-checked, yes, it is the variable that's going into my core module, and that's where the bastion box is defined, so everything is good. He didn't get the memo about doing a careful plan first; he reckons all is well — terraform apply. Unfortunately for him, things also didn't work out all that well: he now seems to have unexpectedly triggered a rebuild of his Kubernetes nodes.

So what happened there? In this particular case, all he wanted to do was change the bastion box flavour. Unfortunately there was a little typo in the configuration, and the same variable that was being used to configure the bastion box was also being passed into the Kubernetes node cluster. As a result, Terraform thought, well, the Kubernetes nodes are changing, so I'm going to rebuild the Kubernetes cluster. You might say, if you'd done a plan you would have seen this — but, you know, he was quite confident — and that also happens more often than you'd think.

Now, although it's not quite as bad as taking out the whole of production, we have hit the next pain point that a lot of people tend to hit in these circumstances, and that is that they're unable to change one part of the system without seemingly affecting an unrelated part of their infrastructure. In order to deal with that, we again need to evolve our Terraform setup to the next phase of its evolution. So we go to part five, and this takes us to what I would call the Terraservices setup, and this really looks at taking
the logical components that we had before and treating them as isolated units, managed independently. This definitely isolates the risk and the management burden people have with the infrastructure. Where before all I wanted to do was change the bastion box and somehow I affected my Kubernetes cluster, if we can manage the core infrastructure separately from the Kubernetes cluster, that allows us to get around some of these big risk areas. The name is akin to microservices, because I do actually think there's some similarity in the evolution of how we've got here.

The characteristics of the Terraservices setup are that we break our components up into logical modules and we manage them separately, so we now move to having one state file per component, rather than just one per environment. And typically, if you haven't already done so, you will start moving to a distributed, or remote, state setup — this is definitely required, and helpful, when you start moving to teams as well. But it comes with additional complexity: as with microservices, when you make this move you've suddenly got to glue these things together, and as we'll see, this kind of setup introduces additional operational complexity, as Armon was saying in his analogy earlier.

So in our nested Terramod setup we saw that we had these three different areas, and we had created them as modules, but they were still ruled by a single state file per environment. With Terraservices we're now going to have one state file ruling each of them, so we'll go from one state file per environment to one for each of the main components per environment — in this case we'll end up having six. In terms of the implications for connecting things together, obviously
that needs to change now. Previously, in the Terramod setup, we were weaving the module inputs and outputs into each other, and that now needs to change so that we can deal with these completely separate state files. There's not a massive change needed to make this work, but the setup is this: previously we still had our reference to our core module — so here we have the core Terraform file itself, and it still incorporates the core module — but now it explicitly has to also re-export the outputs of the module as outputs of its own, so that other services that want to reuse the core outputs are able to do so. The example here is the private subnet ID: the core service needs to output that, so that when the Kubernetes service needs to import it, it can get hold of it. And, although it's redundant here, we also start getting the definition of the Terraform backend — a new feature from 0.9 onwards. You don't need to specify the local setup, but I've put it here because I want to show how you move to remote state later on.

In terms of how you configure a component that wants to consume another component, it starts looking something like this: you need to import the component that you actually want to connect to. In this case our Kubernetes cluster says, there's some stuff that the core component outputs, and I need that. So we use the terraform_remote_state data source — in this case just pointing at the local backend where our core .tfstate file is — and then we import that and pass it through to our Kubernetes setup.

Moving forward, Frankie and Terri are much happier again. They've further isolated changes to the system, and they've at least reduced the chance of completely messing up one part of the infrastructure that's potentially unrelated to another.
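The consuming side described here might be sketched as follows, in 0.9-era syntax (paths and names are illustrative assumptions):

```hcl
# kubernetes-cluster service config: consuming outputs from the core service's state
data "terraform_remote_state" "core" {
  backend = "local"
  config {
    path = "../core/terraform.tfstate"   # where the core service keeps its state
  }
}

module "kubernetes_cluster" {
  source = "../modules/kubernetes-cluster"

  # pre-0.12 style: remote state outputs are referenced directly off the data source
  private_subnet_id = "${data.terraform_remote_state.core.private_subnet_id}"
}
```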
But Terri has noted that there are a few other problems now. She's dealing with this local state file, which is proving more problematic than it was in the past, because now there's more than one of her, and it seems to be tripping the team up — not everybody pulls from git religiously, and although there are warnings when they run Terraform, it's still a little bit painful. Security are also not that happy, because they've said: actually, there are some secrets exposed in the state file, and you're just committing everything to git — this is not a good thing. And, as she's noted, it's not a simple case of just running terraform apply any more: she now needs to actually think about what she's doing, because if she hasn't run the core component first, the VPC and everything else won't exist. There needs to be an order to how she does things: core first, then the Kubernetes cluster, then the database, or whatever the particular setup is.

In terms of moving to a remote state setup, this is really simple. Previously we had the local reference for the Terraform state file; now all we do is change it and say, actually, I want to use a remote backend — in this case Amazon S3 — and we can then also get that state file out of git, which helps with some of the security issues we had before, where we were committing clear-text secrets exposed in our state file into git. The services actually needing to make use of these environments then just change their terraform_remote_state references to refer to the S3 backend instead of the local backend.

Now, from a team perspective we start getting more things. Specifically, with the S3 backend you have the concept of locking. This is only a very recent thing, introduced from 0.9 onwards, but it's really handy from a team perspective.
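The move to S3 might look roughly like this — the bucket, key and region are illustrative assumptions, in 0.9-era syntax:

```hcl
# In the core service: replace the local backend with S3
terraform {
  backend "s3" {
    bucket  = "acme-terraform-state"        # illustrative bucket name
    key     = "test/core/terraform.tfstate"
    region  = "eu-west-1"
    encrypt = true   # state is no longer at rest in clear text
  }
}

# In consuming services: point the remote state data source at S3 instead
data "terraform_remote_state" "core" {
  backend = "s3"
  config {
    bucket = "acme-terraform-state"
    key    = "test/core/terraform.tfstate"
    region = "eu-west-1"
  }
}
```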
It helps when you want to prevent some of your teammates from potentially clobbering your work. From a security perspective, at least with the S3 backend we can encrypt the state, which means that we don't have our Terraform state file at rest with the secrets exposed — it's a move in the right direction for teams.

From a git repository perspective, we can keep absolutely everything in the same repository, but what we've also seen in some organisations and clients is that they'll end up having different teams responsible for different parts of the infrastructure. You may have a core team that's responsible for setting up the fundamental parts of the infrastructure — the VPCs, because maybe there are Direct Connects or something else that's a little more complicated to set up — and then other teams responsible for creating other sections. Once you've structured your code in a way like this, it's a little easier to start migrating these into their own repositories and dealing with them as independent entities: you can literally take the core module and create a completely separate repo for it. If you were using the common nested modules as well, what typically happens is that people have to create a common module repository itself, and then use git references in their individual modules in order to incorporate it — which also brings in versioning and other considerations that I won't get into at the moment.

So our Terraservices setup has allowed us to evolve and manage our infrastructure in a much better way: we've isolated and reduced our risk, and we've moved towards a setup where teams can start working a little better together. We also have the remote state setup, which has made things
better. But there's no such thing as a free lunch, and moving to such a setup requires quite a lot more orchestration and management than we ever needed before.

The last part of the talk is about orchestrating Terraform, and some of the concerns and challenges that people have in this particular area — for the Terraservices setup, but in general as well. This was our target infrastructure, and we always have to have some kind of system, or processes, or tooling that we use to run, orchestrate and actually manage our Terraform. Just as there was a progression in the structure of our Terraform code, I'd argue there's similar thinking in how we evolve the processes that go around it as we evolve as a team.

To begin with we had Terri, and all she had was a single developer laptop. Not a problem: it was a relatively simple setup. There was a single git repository, we had a local state file which was committed into git, and with one state file per environment it's relatively simple. As a human process you just run terraform plan and apply as you see fit, and generate everything that way. When there's one of you, you can typically get away with it. When you start having more developers trying to do things concurrently, things become more problematic: you've suddenly got to coordinate amongst each other, make sure you don't overwrite everybody's work, make sure you're working with the latest code, and so on. From a human perspective, and at least for not stepping on everybody's toes, moving to a proper remote state setup — something like S3 — is a massive win. This is not restricted to the Terraservices setup; many people get to remote state before then. But from a team perspective it's quite important: it also gives you things like locking, if you're on the later versions of Terraform, and a central place to manage
your state. But it's also not perfect: you can still have people pull the wrong version of the code, and even though the state is in a central place, you can run it against the wrong version of the code and still get yourself into trouble. And the setup itself starts getting more complicated. With Terraservices we now suddenly have multiple state files, and this needs coordination and orchestration: how do I know that I need to run my core module first, and then the Kubernetes one, and so on? You can't just rely on Terraform to do that; you've now got to make this work yourself. And if I'm honest, I think the main mechanism people use to do this is just manually talking to each other, and README files: "run this one first, then this one, then that one", and so on. That is the primary mechanism that a lot of people actually use.

Additionally, we didn't quite go into detail on this, but with the Terraservices setup, sometimes where people land up is that they don't just create the infrastructure, they also invoke some kind of provisioning tool, something like Ansible or Puppet, to actually install software on the boxes afterwards. So if we think about the Kubernetes cluster, maybe you use Ansible or Puppet to actually install Kubernetes in the setup itself. And when you start having to share variables between Terraform and these provisioners, it also starts getting really messy. You can output things from Terraform and have scripts which scrape those and then try and somehow get them into Puppet or Ansible or whatever, but as a mechanism for dealing with this, some people will move to a shared-services type setup, running something like Consul to actually store the values, so that you can start sharing them between the different components that need them. But this starts getting a little complicated, because now you've introduced another
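One common way to wire the Terraservices components together is the `terraform_remote_state` data source, which lets one component read another's outputs (and makes the run ordering explicit: the core component must be applied first). This is a hedged sketch using era-appropriate (pre-0.12) syntax; the bucket, keys, AMI, and output name are all hypothetical:

```hcl
# In the "kubernetes" component: read the outputs published by the
# "core" component's state file, which must be applied before this one.
data "terraform_remote_state" "core" {
  backend = "s3"
  config {
    bucket = "acme-terraform-state"
    key    = "prod/core/terraform.tfstate"
    region = "eu-west-1"
  }
}

resource "aws_instance" "k8s_node" {
  ami           = "ami-12345678"
  instance_type = "t2.medium"

  # Consume a subnet ID that the core component exposes as an output
  subnet_id = "${data.terraform_remote_state.core.private_subnet_id}"
}
```

This removes the copy-and-paste of IDs between components, but Terraform still won't sequence the components for you; something outside Terraform has to run core before kubernetes.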
whole system that builds the system, or builds the infrastructure. So who builds the infrastructure that builds the infrastructure? Somebody's got to create the S3 buckets, somebody's got to create the Vault cluster and the Consul cluster, and so on. And what typically happens, I'd say, is that many clients deal with this as a completely separate area: if they get to the point where this is the type of setup they have, they'll have a whole team dedicated to actually managing the infrastructure that builds the infrastructure.

But as an initial progression, what a lot of people will do is at least try and start moving towards some kind of centralized way of dealing with things, and the first step on that journey, I would argue, is quite often to reach for something like Jenkins as a single place where you can run Terraform. So you might define your Terraform configuration, and all the developers, or all the people involved in the infrastructure, just go to Jenkins and say "run the creation of whatever particular environment you want". It's a single place where people can see what's going on. It's not perfect, because stuff goes wrong, and then inevitably you have to download it onto your laptop anyway and plan and apply and fix things, but it's the first step that most people go for. Quite often many clients will actually end up writing their own custom systems and tooling, and you see this quite a lot: there are a lot of Bash scripts out there which bring things together, quite complex systems as well, and we've actually been involved in helping a few people set up some of these things. You'll see things like Terragrunt and Terrahelp and various combinations of systems coming together to create the tooling that ultimately is used to build your infrastructure. And there are even
offerings like the HashiCorp Enterprise products, which are also there to try and help with some of this kind of setup. So what's my point? My point is that it's not just about the structure of your code: you also need to think about how you're going to evolve the processes and the orchestration systems that manage it. There's no silver bullet here, I wish there was, but quite often a lot of manual intervention and coordination is required for many people to get this right, along with custom systems gluing things together. The key thing is to actually think about it, because if you completely ignore this, when you start having multiple people trying to create your infrastructure at the same time, you really will end up in a lot of trouble.

So, to conclude: we've had a look at how you can evolve your Terraform setup, and we did this by taking a journey through a representative set of clients, looking at the pain points they had along the way and how they evolved things. These are the typical kinds of setups that we see in clients. Not everybody lands up in exactly one of these setups, and there are probably various other combinations as well, but the one you definitely don't want to be in is the Terralith, where you are managing your test and your production infrastructure in the same state file. If you are there, I'd say you'd at least want to get to the Multi-Terralith, where you're managing your test and your production infrastructure separately. In terms of moving towards readability and maintainability, the Terramod setup and its use of modules was a way to deal with that complexity and make things a little more comprehensible and maintainable, so that people who come into your organisation can also start understanding how it is that you've created your
infrastructure and how you're managing it. With the Terraservices setup, we saw that this was how we could get to the point where we don't accidentally destroy parts of the infrastructure we weren't expecting to. A further benefit is that it can help in moving towards a multi-team setup, where different teams with different roles are responsible for creating different parts of the infrastructure. The nice thing there as well is that some infrastructure moves at very different paces: if you think about the core module, that is not necessarily going to change often, compared to maybe the way you configure your Kubernetes cluster or something like that. And don't forget about the people. For simple, single-person setups, it's quite easy to just have a very simple kind of setup, and there's nothing wrong with that, there's nothing wrong with having a simple setup. But as you evolve, as you have more teams and more complicated setups, you need to think about these things. So with that, yes, thank you very much, and I hope that was helpful. [Applause]
Info
Channel: HashiCorp
Views: 52,159
Id: wgzgVm7Sqlk
Length: 36min 24sec (2184 seconds)
Published: Mon Jun 26 2017