Terraform and The Extensible Provider Architecture

Captions
So today I'm talking about the extensible provider architecture. The parts here: Terraform; the provider architecture; version 0.10, which gets its own special little note because it has some implications; and then the demos, which are going to be a great time. Parts one, two, and three will probably go kind of quick (I'll try to talk slow); the demo is where we're going to roll with it. It will probably work, it'll probably demonstrate what I wanted; just keep an open mind. When I talk about Terraform and extensibility, it's kind of a loaded term. Oh, I have a clicker.

Part one: Terraform. Looking at HashiCorp's mission, "provision, secure, connect, and run any infrastructure for any application," Terraform's goal is to be the provisioning part of that mission. Terraform is a tool for infrastructure as code; the goal is to write, plan, and create infrastructure as code. I don't have notes... okay, we're going to figure it out.

So, to create infrastructure as code, we have a lot of providers; these are just a very small sample of them: Amazon, Azure, GitHub. We cover all sorts of things: infrastructure as a service, platform as a service, software as a service, bare metal, and probably some other things too. I've heard people say they think Terraform is a tool for working with Amazon, and it's much, much more than that.

With Terraform you get a unified view of all of your infrastructure, not just your Amazon resources; we're talking DNS records, your Heroku applications, and it makes it really easy to compose these into a unified view of everything. With interpolation we can define relationships, say between a DNS entry and an AWS instance, and we allow operators to safely iterate on infrastructure over time. You can start very small and begin to add things, and with Terraform's workflow you're able to see changes incrementally, make them safely, and verify that that's actually what happens.
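A relationship like that, a DNS entry interpolating an instance's address, might look roughly like this in configuration (a sketch using the AWS provider's aws_instance and aws_route53_record resource types; the AMI, zone ID, and domain values are placeholders):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0a1b2c3d"   # placeholder AMI
  instance_type = "t2.micro"
}

# The record interpolates the instance's public IP, so Terraform
# knows it must create the instance before the DNS record.
resource "aws_route53_record" "www" {
  zone_id = "Z0000000000"          # placeholder hosted zone
  name    = "www.example.com"
  type    = "A"
  ttl     = 300
  records = ["${aws_instance.web.public_ip}"]
}
```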
With Terraform you get one workflow, many clouds. We want to abstract working with cloud providers into a unified way of doing it, so you don't have to be intimately aware of each platform's web console; you just have a single interface, which is Terraform.

So, key features. Terraform is open source. Who here has used Terraform? Right, most of you. Has anyone contributed code to Terraform? Thank you, that's awesome. And even if it's not code, if it's a bug report or anything: thank you very much, that's awesome. Terraform has a huge community and we are so grateful for that.

Terraform uses HCL, which stands for HashiCorp Configuration Language. It's meant to be a human-readable, machine-editable configuration language, so it's easy for operators to use.

Terraform uses a dependency graph: it reads your configuration and constructs a structure capturing which parts of your configuration need to come first, which can come later, and which are unrelated, so that when you're creating your infrastructure it can create the unrelated things in parallel, which speeds up the process. Instead of just going sequentially top-down, it understands that it can fan out and create the unrelated things at the same time.

We have kind of a two-phase way of doing things: terraform plan and terraform apply. You can skip straight to apply if you're feeling lucky, but terraform plan helps you iterate slowly and safely. You can add a change to your Terraform configuration, run plan, and it'll tell you exactly the changes it's going to make. That gives operators an opportunity to go slowly, but also to catch what we call drift. You can run terraform plan on your configuration without adding anything, just to make sure everything is still the way you declared it, and you can find those little security group rules that so-and-so added from the console.
And they went around you and didn't use Terraform. So it allows you to detect changes in your infrastructure and go back if... I saw you shaking your head; no one does that, right? It allows you to find those things and correct them if need be. Then in Terraform Enterprise you get collaboration, history, an audit trail. Terraform does have built-in support for collaboration in the form of remote state: instead of having the state on your machine, you can share it with your colleagues. But over time, as team size grows, you need better collaboration features, which is where Enterprise comes in.

Okay, so Terraform over the years. Version 0.4 is right about when I started, and this is a graph of resources and providers. You can see providers have been slowly on the uptick; at 0.9 we had about 65 providers, I think. Resources, on the other hand, are ballooning out of control (in a good way, of course); at 0.9 I want to say we had over 600 supported resources, and I don't think that includes data sources at all. This speaks really well to Terraform's extensibility, in that it's very easy to add these things: by having Terraform core focus on what it needs to, and Terraform providers having a flexible, well-defined architecture, it's really easy to add more resources and more providers.

Terraform itself, if you're not familiar, is a single binary written in Go. We chose Go because we really like the language and it lends itself very well to a tool like this: you can compile Go for several different platforms (Unix platforms like Darwin, Linux, and the BSDs, and also Windows) from a single source tree, and it will run on all of those. We use a provider/provisioner plugin architecture: Terraform core is a binary, and it communicates with providers and provisioners, the plugins, over RPC. It launches subprocesses and communicates with them, and the
important part there is that it's easy to add your own plugins. We've now split Terraform core and Terraform providers: they are separate, physically separate, and I'll cover that in the part on 0.10. Terraform, as I mentioned, has a graph, specifically a directed acyclic graph, which means you can't have a loop; that would be bad, because then Terraform doesn't know where to start. But Terraform will detect it if you accidentally add a loop in there.

So, big picture: you can think of Terraform as this kind of structure. We've got core, which is in charge of configuration and state and the graph, and it's talking to providers, and providers are conveniently on the outside there; but to a user it's all kind of one thing. Core's responsibilities: reading configuration (did you want a picture of that? Okay, the slides will be posted later); managing the state that you have; reading the interpolation in your configuration (I'll show you what interpolation means if you're not familiar) and figuring out the dependencies there, by understanding that this resource has to come before that resource; constructing the graph; discovering and communicating with plugins; and offering plan and apply.

Here's kind of another view: Terraform core talks to the plugins, in this case the providers, and the providers talk to upstream resources and upstream APIs. Terraform core itself is concerned with graph-type ideas: what's the difference between the desired state and the current state, applying those changes, and also refreshing our state. Those are the concerns core has; we'll cover provider concerns in a little bit. I guess we'll just go to a slide that doesn't have that... let's see... I was hitting back, that would explain what happened there, sorry. All right, this is actually kind of a duplicate: Terraform reads the config,
makes the DAG, and what we end up with are structures like this. It actually reads top-down: the root is just core, the very base of what's going on, and in this example we've got a DNS record that points to an Amazon instance. Terraform will read the configuration, understand it needs to instantiate the DNS and Amazon providers, and then first it needs to create the DNS record in order for the Amazon instance to then use that record. I think I said that right... that's actually backwards, maybe, but you get the point.

So, moving on to the provider architecture. Terraform's goal is to provide infrastructure as code, to help you write and manage it; the providers' goal, then, is to handle the actual provisioning on any infrastructure service or cloud. The great thing about this architecture is that core itself doesn't have to worry about these provider-level things. Core doesn't understand that with Amazon you have five different ways to authenticate, and how to manage that; core just sees that as a block. Core doesn't understand that an instance can have several different settings, or that there are caveats to destroying instances, where you need to disassociate these other things first; it just sees it as a block in the graph. Core doesn't care about that stuff; it offloads all of that to the providers. And again, here are some of our providers.

So, provider responsibilities: detailed knowledge of the specific provider, which means authentication, endpoints, and other various configuration. Providers then define the resources: a provider will explain, in detail, "I have support for all these resources, here's where they are, these are their schemas." And a resource defines an abstraction around the lifecycle management of one unit of a cloud: an Amazon instance, a DNS record, a GitHub team, a GitHub user, a Heroku app. The resources themselves contain that information. So, as I said earlier, Terraform core is concerned with diff, apply, and refresh.
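The ordering described here, providers and prerequisite resources first, is a topological sort of that DAG. As a rough, self-contained sketch of the idea (this is simplified stand-in code, not Terraform's actual graph package):

```go
package main

import "fmt"

// topoSort returns nodes in dependency order: a node appears only after
// everything it depends on. deps maps node -> prerequisites. The second
// return value is false if a cycle is found, analogous to Terraform's
// cycle error.
func topoSort(deps map[string][]string) ([]string, bool) {
	const (
		unvisited = iota
		visiting
		done
	)
	state := map[string]int{}
	order := []string{}
	var visit func(n string) bool
	visit = func(n string) bool {
		switch state[n] {
		case visiting:
			return false // back edge: the graph is not acyclic
		case done:
			return true
		}
		state[n] = visiting
		for _, d := range deps[n] {
			if !visit(d) {
				return false
			}
		}
		state[n] = done
		order = append(order, n)
		return true
	}
	for n := range deps {
		if !visit(n) {
			return nil, false
		}
	}
	return order, true
}

func main() {
	// The DNS record depends on the instance, which depends on the
	// provider being configured, mirroring the slide's example.
	deps := map[string][]string{
		"provider.aws":           {},
		"aws_instance.web":       {"provider.aws"},
		"aws_route53_record.www": {"aws_instance.web"},
	}
	order, ok := topoSort(deps)
	fmt.Println(ok, order)
	// Prints: true [provider.aws aws_instance.web aws_route53_record.www]
}
```

Unrelated nodes (no path between them) can be created in parallel, which is exactly the fan-out behavior described above.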
Terraform providers are defined in terms of create, read, update, and delete. If you've ever written a controller in an MVC application, those are all going to be very familiar to you. The plugin architecture gives you a package called helper/schema, which helps us define resources in just these terms: we give the resource a name, we define these methods, and core handles the state transitions. Core will know that for this resource, "I'm creating it, so I'm going to execute the Create method"; it doesn't actually care what that method does, it just uses RPC to invoke it, and the providers handle the actual specifics.

Providers themselves are actually binaries. You can compile a provider all on its own and execute it; it'll just say "I don't make sense outside of Terraform," but it is its own binary. Terraform core will automatically discover and grab these binaries, and as I mentioned earlier, we use helper/schema to define the lifecycles there.

What's next? Okay, so here we're looking at the folder structure of what a provider looks like. At minimum we recommend about three files: you need a main.go, because that's how Go defines a binary; provider.go defines provider specifics, usually authentication and connecting with an SDK (we'll see that in a minute); and then you have your resources, or data sources. If you don't know what a data source is, it's like a resource, except it only has a Read method. That actually ties into the extensibility of Terraform, in that it allows you to access infrastructure that you can't directly manage: either permission-based, where your team does not have permission to create or destroy but can consume the information, or perhaps the provider's API doesn't really lend itself to that. So a concrete example of a good data source: suppose one team is in charge of creating a base AMI using Packer.
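That base-image scenario might be consumed along these lines, assuming the AWS provider's aws_ami data source (the owner account ID and name filter are placeholders):

```hcl
# Look up the AMI another team publishes with Packer; we can read it
# even though we aren't allowed to create or modify it.
data "aws_ami" "base" {
  most_recent = true
  owners      = ["123456789012"]   # placeholder account ID

  filter {
    name   = "name"
    values = ["base-image-*"]
  }
}

resource "aws_instance" "app" {
  ami           = "${data.aws_ami.base.id}"
  instance_type = "t2.micro"
}
```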
But you yourself aren't allowed to actually modify it. You can read that AMI with a data source and still inject it into your infrastructure, as long as the team that makes it gives you the permissions to do so.

All right, so looking at a provider. Here we go: we define the structure of what the provider itself needs to set itself up. In this very simple example, all we need is an access key; we'll just say it's an API key. Some providers are a lot more complicated; they allow things like a configuration file and authentication options. But they can be as simple as just an access key. We also define our resources (and, not shown, would be a data source map): it's a mapping of "this is a resource I support," where the name is the name that would appear in configuration, and it maps to the resource definition that implements it. So looking back, that supercloud_instance maps to this resource. This is an actual definition of a resource: we use helper/schema and we just return a Resource with these defined methods. The methods themselves aren't shown, but that's all a resource is: you define how to read it, how to create it, how to update it, and how to delete it. And optionally you don't even need an Update: if all the attributes defined in the schema (which I don't show) are what's called ForceNew, which means changing them forces a recreation of the resource, then every change will just hit Delete and then Create. Here's an example of what I mentioned earlier as a data source, where I just define the Read; and there's an example of defining the schema, where you say it has an ID, it has a name, all that stuff. And then back to main.go, where we demonstrate creating the binary. This is really simple: the Terraform plugin package just gives you tools to hook in.
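Pulling those pieces together, a minimal provider in the style described might look like the sketch below, written against the 0.10-era helper/schema and plugin packages. The supercloud names, the access_key field, and the stubbed CRUD bodies are all hypothetical, and compiling it requires the Terraform source tree as a dependency:

```go
package main

import (
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/plugin"
	"github.com/hashicorp/terraform/terraform"
)

// Provider describes what the (hypothetical) supercloud provider needs
// to set itself up: one access_key argument and one resource.
func Provider() *schema.Provider {
	return &schema.Provider{
		Schema: map[string]*schema.Schema{
			"access_key": {
				Type:        schema.TypeString,
				Required:    true,
				DefaultFunc: schema.EnvDefaultFunc("SUPERCLOUD_ACCESS_KEY", nil),
			},
		},
		ResourcesMap: map[string]*schema.Resource{
			// configuration name -> resource definition
			"supercloud_instance": resourceInstance(),
		},
	}
}

// resourceInstance wires up the CRUD methods; core drives the state
// transitions and invokes these over RPC.
func resourceInstance() *schema.Resource {
	return &schema.Resource{
		Create: resourceInstanceCreate,
		Read:   resourceInstanceRead,
		Update: resourceInstanceUpdate,
		Delete: resourceInstanceDelete,
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
			// ForceNew: changing this deletes and recreates the resource.
			"region": {Type: schema.TypeString, Required: true, ForceNew: true},
		},
	}
}

func resourceInstanceCreate(d *schema.ResourceData, meta interface{}) error {
	// Call the cloud's API here, then record the new ID in state.
	d.SetId("instance-123")
	return resourceInstanceRead(d, meta)
}

func resourceInstanceRead(d *schema.ResourceData, meta interface{}) error   { return nil }
func resourceInstanceUpdate(d *schema.ResourceData, meta interface{}) error { return nil }
func resourceInstanceDelete(d *schema.ResourceData, meta interface{}) error {
	d.SetId("") // clearing the ID tells core the resource is gone
	return nil
}

// main hands the provider to Terraform's plugin machinery, which
// handles discovery and the RPC back channel.
func main() {
	plugin.Serve(&plugin.ServeOpts{
		ProviderFunc: func() terraform.ResourceProvider { return Provider() },
	})
}
```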
All you have to do is say "use my Provider method" (which, going back, is the function right there that returns the structure saying I need an access key), and that call runs it. When Terraform discovers this plugin, it just invokes this method, and the back channel of communicating over RPC is handled all for you.

All right, so, version 0.10. In the beginning, Terraform was all one big repository: all of the binaries, all of the plugins were kept in a single repository, and in the initial releases, when you downloaded a new version of Terraform, you would get a zip file, you would unpack it, and it would have a terraform binary and then a binary for every provider and provisioner we had. That was how we packaged things at the time. So we had this model: we had Terraform and we had these plugins, but they weren't actually inside the terraform binary, they were just alongside it, and the discovery mechanism would find them there; but you had to copy them all into your path. As we grew, that presented a big, big problem, because you can see that right around version 0.7 we had 400 resources packed into, I think, 50 providers. Imagine downloading a zip file, unpacking it, and having 50 provider binaries in there, 40 of which you might not use yet. That packaging was kind of hard. So with version 0.7 we re-architected it: they were still separate plugins logically, but they were compiled into the terraform binary. The benefit was that the terraform binary did not actually expand the way you would think from adding all those sizes together, and it made it really easy for users, because they just had to download a new version of Terraform and it had everything in there. It was still the plugin system, it still had RPC, but it was all in a single binary. And that worked really well, except we ended up
with what we would call version sprawl. Starting with version 0.7, our version numbers started to get really out there: I think 0.7 had about 14 releases, 0.8 probably a similar number, and 0.9 went up to 0.9.11. What we found was that as resources were being added, demand for having those new resources was high, and demand for getting fixes and additional features for those resources out was really high, so we ended up releasing about every two weeks. I mean, the Amazon provider alone would have 50-plus entries in the changelog just for that provider, because there was so much movement; we were getting tons of new providers and new resources in all of them, so we just had to keep releasing to minimize how much we were throwing out there at once. But as an operator, I would imagine you probably weren't very happy upgrading the core tool every two weeks just to keep up with some of your fixes. So yeah, everything was in a single repository and tightly coupled, and we were having releases every two weeks: versions like 0.6.16, 0.7.13, 0.9.11.

So with version 0.10 we changed that. We actually split it: we took all the providers that were built in and we put them in their own GitHub repositories; we ripped them out and shoved them over there. And we changed Terraform so that it dynamically looks for those providers. When you download version 0.10 of Terraform, you have no providers (well, one, the terraform provider, and maybe an old one, but basically none), and you now have a command called terraform init, which will go and dynamically pull the providers you need for you. I don't know what my next slide is... okay. So what we've done here, by splitting them, is that all these providers now have their own separate release versions and release cadence, which is great, because Terraform core... okay, somebody's talking; yeah, apparently it's that entertaining to that
guy. All right. So we've separated core and we've separated providers; I lost all my momentum. Okay, so this does a couple of things. One: core features take a lot longer to implement. They usually take weeks and weeks to plan, maybe even longer to actually write, because they deal with the core graphing, which is the pinnacle of what Terraform does. Providers move a lot faster: adding a resource, or a new provider itself, can be done in a period of hours. When Google or Amazon comes out with a new press release about a feature or a new resource they've added, we could have a Terraform resource for that as soon as the SDKs are updated; we have community members and employees who are just on it, trying to get those things in. So providers need to be able to release separately, and with this split we've enabled that: Terraform providers can now move at their own pace and be released independently, and Terraform users can now dynamically pull down the providers they need per project. That gives you dependency management, meaning one of your projects can be on a newer or different version of a provider than another, based on your needs; you could be locked into one for some reason, so you have version locking for those things. And we wanted to do this to give minimum change to operators: instead of upgrading Terraform core all the time, you can just upgrade the plugins as they're released, or each project can have new ones.

Let's see... nope, that's a bad title, I wanted to skip this slide. All right, so that went kind of fast, but I think we're going to be okay. So, talking about extensibility in Terraform, I meant a couple of things there. One: Terraform itself is extendable, because writing providers and writing resources is really, really simple. It's really easy to do once you get the hang of it; I'd say that, I've been doing it for three years or so. You don't actually have to know much about core at all; as I
showed earlier, all you need to do is tie into the Terraform plugin package and the helper/schema framework. That gives you the plugin architecture, and you can write your resource. So what are we going to do here? Demo: provision, secure, connect, and run infrastructure. Okay, so in my demo I'm going to set up a Heroku app, I'm going to set up a Lambda function, and I'm going to get them to talk; and how I'm going to get them to talk is kind of more interesting than the other two. Again, on extensibility: I mentioned earlier that data sources let you consume providers that you might not necessarily have a lot of direct control over; some providers just really don't give you the API to do otherwise. I think I'm ready to exit now... I don't know what my next slide is, so we're just going to do that. Oh yeah, I wanted to talk about this one: disclaimers. I already wrote the Heroku app ahead of time, I wrote the Lambda function ahead of time; a lot of this was actually already written ahead of time, so I'm not going to do a lot of live coding necessarily, but I am going to touch live code and compile things. The code you're going to see is proof of concept; this is not the quality that I would normally ship, but I wanted to get this demonstration working, and live demos are always kind of crazy. And as Kate mentioned, I have a button (thanks, Kate), and I've had a very painful experience with the button today, because Wi-Fi connectivity has not been so great, so I can't tell you for sure if the button is going to work.

All right, on to demos. Okay, everyone see that okay? All right, so here we go... nope, that's not what I wanted. What do we have? Okay, so, terraform plan... well, I missed something; such is demos. This is Terraform version 0.10, so a plan with just a Heroku file says "I don't know what you want to do, because you haven't run terraform init." terraform init is a command that you will run often; it's safe to do so, and you need to start your project with it. So terraform init: it will look
at my configuration file, which right now is just this Heroku app here. The provider block is empty; most of the provider blocks are going to be empty, because they're configured to pull the authentication information from my environment (that's where I've kept it, so I don't show it on screen). And we've defined just a very simple Heroku app; the Heroku provider does not have a means of uploading the code. So, terraform apply; this is just kind of basic setup stuff right now. terraform show... all right, so I created an app, but there's nothing there. I can do heroku open and... oh yeah, okay, so here's the actual repository. git remote set-url; I guess it's the same as when I did it earlier, I've done this demo a couple of times by now. So, git push heroku master; I'm going to show you the code, it's a very, very simple Go application, you're not going to be impressed. Apparently I'm inside a field... all right, heroku open: nothing to see here. Excellent.

So we did that. The next thing I'm going to do is set up a Lambda function. Okay, so this is my Lambda function, or rather the Terraform definition of it. Right here, this is the important part: the handler name is index, its filename is index.zip, it's a Node.js application, and it's already zipped up for us; then environment variables; and then this other stuff is roles, they're just part of Lambda. Again, terraform plan is going to fail, because it says I don't have the Amazon provider. I've changed my providers, I've added a new one, so I need to do terraform init again, and it discovers that I need the Amazon provider now.

A quick little detour: providers, as I mentioned earlier, are scoped by version. I'm not specifying a version, so it's grabbing the latest one. And it's kept per project: they're actually stored in a folder right there called .terraform. You can see that under plugins, under darwin_amd64, the lock file gets the hash, and then it has the actual binaries in there, which we could go and execute if we wanted to, but we're not going to do that.
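Since plugins are versioned per project, you can pin them instead of grabbing the latest; in 0.10 the provider block accepts a version constraint, something like this (the constraint values are examples):

```hcl
provider "aws" {
  # "~> 0.1" allows 0.1.x patch releases but not 0.2.0.
  version = "~> 0.1"
  region  = "us-east-1"
}

provider "heroku" {
  version = ">= 0.1.0, < 1.0.0"
}
```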
Okay, so those were the two kind of boring parts of the presentation, hopefully. I mentioned earlier extensibility, and working with providers that you can't necessarily create for or control completely; I wanted to use some data sources and try to get these two things to talk together. So the next thing I'm going to do is write a provider, and by "write" I mean compile, because I've already written it. This provider is for a streaming service called Twitch. Twitch is a service where you can go and stream your laptop, whether you're giving a talk or, probably, playing video games. So I wanted to create a provider there, but the Twitch API doesn't really let you create a lot; it's mostly consuming. So what I'm going to do is: I've got a channel set up, and I want to create a bot that writes to it.

So here we go, switching directories really quickly. I'm now in a directory called terraform-provider-twitch, and looking at the file structure we see our main.go. Everyone see that okay? Still okay? So there we go: main.go, and the provider function is the Twitch one. Going into the directory, we see that provider. In the provider schema we define a schema: it needs an API key, by default it will read it from the environment, and it's got a description. And then I define my map here. So, why two data sources here? The API's channel endpoints require an ID: last I looked, you can't look up a channel by name, you need an actual internal ID. So the twitch_user data source uses the token I have, and without any other input it will find the channel for the person that matches the API key. The plan, if we go and look here, is to use interpolation: we say data "twitch_user" "me", and I don't give it any attributes, because I'm not looking for a specific person; I want it to find the person based on the token, and that's an implementation detail I'll show you in the actual code. And then for the channel, I want to
actually use the result of "me": I want to use that ID and filter the channels by that ID. This is an example of the interpolation: when Terraform reads this, it knows it has to do the twitch_user first, before it can do the twitch_channel, because it needs that value. And these are just examples of outputs; those will just show the output, though I'm not going to run that right now. So, back to this: if we look at user, it's very simple. We just say "get user," and when we get it, we set the ID and the name. It's a data source, so it only has the single Read method. And then we have channel, which is also not very impressive: channel ID, display name, URL (we're going to use the URL here). I read the channel ID that's given to me from Terraform, I fetch the channel, and then I set those attributes. So now they're useful data sources.

All right, so now, because this is a binary, all I have to do is go install. What that will do is compile it and put the resulting binary in my path, in my Go bin. All right, so there's the original terraform, and now you see one side by side with it, which is terraform-provider-twitch. So now we can actually use Twitch. Okay, I'll just leave those there for now. So now I've got provider "twitch", twitch_user, twitch_channel, and I've got those outputs. terraform init... "not found"; well, spell it correctly and then it'll work. All right, so it doesn't pull that one down, because it's already in the path, but I can do terraform plan and it'll read that thing. It actually won't show anything here because... oh, did I not spin up the... I never spun up the Lambda function, my fault. What normally happened when I practiced the demo earlier was that it wouldn't do anything, because data sources themselves are basically idempotent if you don't feed them into something else; terraform plan won't show anything, because there's nothing to do. All it's going to do is read.
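The chained lookup just walked through would read roughly like this in configuration (the twitch provider is the talk's own proof-of-concept, so the exact attribute names here are illustrative):

```hcl
provider "twitch" {}   # API key comes from the environment

# No arguments: the provider resolves the user from the token.
data "twitch_user" "me" {}

# Terraform sees this interpolation and runs the user lookup first.
data "twitch_channel" "mine" {
  id = "${data.twitch_user.me.id}"
}

output "channel_url" {
  value = "${data.twitch_channel.mine.url}"
}
```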
So right now we're uploading a binary, and here we see our outputs for Twitch. So let's do something with that: instead of this, we want the twitch_channel data... data.twitch_channel... what did I call it, "me"? "mine"? So I'm going to do that. Okay, so now what I've done, incrementally: I have a Heroku app, I have a Lambda function, and I'm updating environment variables so they actually know where to go and how to talk to each other. Our Heroku app has nothing to see there yet, so when I do terraform plan, we're going to see that I have now taken the Twitch data source, the channel, and I'm using its value and putting it into the Lambda function and the Heroku app. So, terraform apply. Here I have extended Terraform to kind of adopt a provider that I don't have a lot of control or access to, but which is still actually an important part of my infrastructure. Just because I can't create and completely manage it doesn't mean I can't use it and integrate my other existing infrastructure with it. So if we go here, it's now going to load that... super, okay.

So this is the part that gets less than super functional. See, Lambda: we're going to test this just to make sure everything works. This is my Lambda function, you can see my environment variables there, and then I call "test," and it's going to work... it worked, hurray! So what happened there was: the Lambda function connects via IRC, using the channel name that I gave it in the environment variable, which we got from the data source, and then it actually connects and sends a message over the wire.

So the next thing I wanted to do was try to cross over a boundary and use one of these cool little IoT buttons. If you don't know what these are, they're a little button from Amazon that you can go and configure to trigger all sorts of things: SNS, or, here, I'm going to use it to trigger a Lambda function, in theory. Now I have to give more caveats here: in order to set this up you need a certificate, and I think you need to get the serial number out of it,
so I had to do some manual setup. Those are physical things that had to be done: I had to actually connect to this button on its own wireless network and upload a certificate to it. Those are things that Terraform really can't do yet; you know, we're still adding new features. So we want to... what did I call it? Yeah, there we go. Okay, so without covering too much about how the button architecture works, you need to set up rules for it. What I'm doing here with IoT is: when a click happens, I'm basically going to subscribe to this SQL statement (and that's the actual serial number of my button; don't go stealing my button), and I'm going to tell it that when the click comes through, I want it to trigger this Lambda function. So we do terraform apply... I was told once that you're not supposed to show things that fail, so you do terraform apply and it says: "this doesn't work, I don't know what an IoT rule is." So of course we go to our checkout of the Terraform provider and we add it. This is a new resource for Terraform: the IoT topic rule. I give it a rule name; the rule ARN will be calculated for me, I'll pull that down; I tell it what Lambda function I want it to talk to, and the SQL that it needs to look up. And this is an implementation of a resource; it is really, really simple. In Create, I just grab a couple of values from the configuration, send it off, check the error, and then I set the ID. Same with Read: I just read it, get the values I expect, and set the state. Delete just tells it to delete itself. So, go install. When that finishes, a custom build of the Amazon provider is going to have support for this, but not this one, right? I'm still on Heroku, and I'm still on Amazon provider version 0.1. So what I'm going to do is delete the Amazon binary under .terraform/plugins/darwin_amd64 (bye-bye), which leaves just the Heroku one. Then I'm going to do terraform init, and you'll see I still just have the Heroku one there locally.
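The proof-of-concept resource described might be declared along these lines (a guess at the demo's shape, not necessarily the schema this resource later took upstream; the serial number and function reference are placeholders):

```hcl
resource "aws_iot_topic_rule" "button_click" {
  name = "button_click"

  # Subscribe to clicks from one specific button serial number.
  sql = "SELECT * FROM 'iotbutton/G030XXXXXXXXXXXX'"

  # Forward matching messages to the Lambda function.
  lambda_function_arn = "${aws_lambda_function.bot.arn}"
}
```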
But because the custom Amazon one is in my path, it's now going to use that Amazon provider; I'll show you that. Yeah, so now I've got the custom Amazon one there. So, terraform apply... or not: plan, always plan. And we see: all right, I'm going to add this rule for your button, and I'm now going to trigger a Lambda function. I have a minute and 30 seconds for this to work. All right, terraform apply, go.

Now, the sad, sad truth: I'm going to click this button and it's not going to work. I'm going to click the button and it's not going to work because Terraform is at the mercy of the provider APIs it's given, and despite my GitHub issue that I filed, I think, yesterday, you cannot actually invoke functions like this without manually adding a trigger. I made a rule that says whenever the button is clicked, I want the action forwarded to this Lambda function; but I need to make a matching rule on the other side that says this function can be triggered by an IoT button. So I click IoT... sorry, that went really fast. So I'm going to add a trigger, say custom IoT rule, and we see my rule, the one I made just a moment ago, and that's the query I used, so I'm going to say submit. Now again, the internet has not been the most generous to me, so we're going to try this; it's actually set up to tether through my phone, so we're going to see if this works. Plug it in, get an IP address... it blinks (it has to reconnect to the Wi-Fi every time), it's blinking... and it went green. So in theory I just pushed a button, which should in theory send this, and it should happen in the next, like, ten seconds; it unfortunately takes some time... hey, it happened! Twice! I am resisting comments about JavaScript, which, by saying that, I'm not. All right, that's my demo. We had a lot cooler plans for the button, but then that whole trigger thing really threw me off, and I tried desperately to work around it, but it didn't work.

So yeah, what did we do? We set up a Heroku app, which I wrote before; we set up a Lambda function,
which I wrote before, and learned about JavaScript. We wrote an entire new provider for Twitch and exposed two new data sources, so now you can leverage Twitch: you can pull information out of there. You can't really create things, but you can still integrate with it, because it has an API and Terraform can do something with it. And we extended an existing provider: we took a provider that comes off the shelf, that's maintained by HashiCorp, and we added things to it, and we didn't have to wait for it to come upstream. We added it ourselves, compiled our own version; you can cross-compile that, you can distribute it yourself, use it however you want. Oh, the timer's counting up; I'm over, that's why it's counting up. Okay. So yeah: we extended an existing provider, added a new resource, did it locally, and I clicked a button and it actually worked; it worked way better than earlier, when I demonstrated it and was waiting like 30 seconds and sweating the whole time. And now I'm done. Thank you. [Applause]
Info
Channel: HashiCorp
Views: 4,633
Id: TMmovxyo5sY
Length: 41min 10sec (2470 seconds)
Published: Thu Oct 12 2017