KubeCon NA 2021 Technical Demo: Multi-Cloud Kubernetes with HashiCorp Terraform

Captions
oh i am ready for my end cool and um i'll hit the go live so we can enjoy this with other people because honestly that's way more fun than just being here talking with myself so hello everyone thank you for joining us for our first kubecon deep dive we're going to be talking about multi-cloud multi-cluster kubernetes with terraform today my name is kerim satirli i'm a senior developer advocate at hashicorp and before we dive into our technical content we're going to have a quick look at our community guidelines some really simple stuff this is a public event we're streaming it on youtube if you're watching this live and you have questions drop them in the chat but make sure you're welcoming inclusive and friendly be considerate and be respectful there's real people working on our side there's real people in the chat on the other side let's be nice to one another and have a good time because that's the best way to learn in public if you need the full guidelines we have them on our site at hashicorp.com/community-guidelines we'll have a link for that in the chat as well and with that let's get started let me switch over to my code editor we're not doing slides here and hello everyone who's joining us hi tobias nice to have people from germany as well and from everywhere and let's have some fun so if the magic of streaming worked out then you should be seeing my code editor and i'm gonna need some help from you for me the font size looks big enough but does it look big enough for you just let me know in the chat and if it's not big enough we'll bump it up you should be seeing two parts the upper part is the editor interface the actual files we'll be editing or looking at and the lower part is where we run commands and welcome to all the folks that are joining us still it's going to be fun we're going to be setting up a couple of clusters we couldn't figure out which cluster to deploy in which cloud so we ended up deploying in azure in gcp with
digitalocean and amazon and what you're in for today is basically a single repository that will teach you how to do all of this with terraform so let's start out with some basics this is part one of a three-part stream and what we're doing today is essentially mimicking a platform team that platform team right now is represented by me my job is to deploy a kubernetes cluster on four clouds so my team can do something tomorrow and the day after so tomorrow part two my colleague rosemary wang will show you how to deploy consul service mesh on kubernetes building on top of the stuff we're doing today and then on thursday rob barnes and nic jackson will show you how to use vault in terms of encryption as a service for kubernetes so if you're joining us for all three you're going to have a pretty awesome deep dive experience i think and if you're just joining us for one of the parts then clearly you made the best choice by being here today because this is going to be fun all right so i increased the font size a little bit let me know if this works and let me make the zoom window a little bit bigger as well and let's see where this is going and i just noticed that despite having everything on you haven't seen me so hello still kerim very nice to see y'all here so let's talk about terraform terraform is pretty much my favorite tool for this kind of job because it makes it easy to deploy things but oftentimes you find yourself in a situation where you need to deploy stuff in multiple clouds and things get a little bit complicated so what we're going to be doing here today is building our building block and making that available to our friends from other teams our application developers in tomorrow's case our expenses team you'll get that way better tomorrow when you actually join that and so what we'll do is we'll start out by defining our terraform stanza we're using terraform cloud for this for one simple reason all of our team is working on this we
want to share our state and rather than passing around state files or committing them we use terraform cloud our team has access to this the nice thing about this is that i can share secrets without exposing them i can make secrets available without sharing the actual contents of those secrets with my team and that makes life for me a lot easier so we'll start out by defining a couple of workspaces and if you've ever worked with terraform cloud or terraform enterprise same idea workspaces are essentially a way of limiting the blast radius of your infrastructure and defining specific tasks so we started out by defining one very simple thing to limit our blast radius which is we require a specific version of terraform you'll find this one in the repository which we'll share with you later defined everywhere because we want to make sure you're running the right version this is not just for us for you to upgrade it is very much to ensure that your code keeps working even when you view this in the future and not just in the future as in the other side of the planet that already is in tomorrow but let's say you're looking at this code three months from now we want to make sure it still works so the way you do this you define the required versions in your terraform file next up we've got a couple of workspaces and let me actually show you what that looks like by going to our terraform cloud interface code is nice right but at the end of the day sometimes it's easier to look at it in something bigger also if anyone is working on a taco provider please let us know this is important to our team we are aware of the pizza provider but we need a taco provider for next week so definitely help us out there switching back to terraform cloud what you see in front of you is a handful of workspaces for all the clouds that we have we've pre-deployed a few of those we've killed a few of those again and then redeployed them for the fun of it and the first one we start out with is the
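the terraform stanza he describes looks roughly like this a minimal sketch assuming a terraform cloud organization and workspace name that are placeholders not confirmed in the stream

```hcl
terraform {
  # pin the CLI version so the code keeps working months from now
  required_version = "~> 1.0.8"

  # store state in Terraform Cloud so the whole team can share it;
  # the organization and workspace names below are placeholders
  backend "remote" {
    organization = "hashicorp-demo"

    workspaces {
      name = "multi-cloud-kubernetes"
    }
  }
}
```

note that on terraform 1.0.x (current at kubecon na 2021) the remote backend block is the way to wire up terraform cloud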
workspace and the workspace repository will actually set up all these other workspaces for you and before we dive too deep into that we essentially define a little bit of configuration for all of this per cluster making sure that we're in the right repository we're choosing the right execution mode in our case we want to do it remotely because that makes it easy to work on this from the comfort of your tablet now is that the standard workflow of an infrastructure engineer probably not is it nice to be able to correct mistakes while you're on your tablet definitely helpful so we created a few of these workspaces we put some validation in there and we're enforcing the 1.0.8 version also remotely and with that we're gonna do our first kickoff and apply this remotely just to show you that it actually works in our case because the workspaces already exist the whole process is super quick we've got the workspaces defined with for_each which sometimes is one of those things people struggle with so we have an example in there that you can also use and we posted the link for this in the chat now workspaces are nice but the question really is what do workspaces do right so let's have a look at our actual code and i'm curious for all the viewers are you thinking about deploying on aks on azure or aws eks are you thinking about digital ocean or your own raspberry pi cluster what are you in the mood for today and i'll give you a few seconds to answer while i grab something to drink and i know this is scary sometimes talking in public but just let us know we're really curious what kind of cluster you prefer to deploy what kind of cluster you prefer to manage and probably the right answer is obviously you prefer to manage no cluster but a pi cluster can be one of the fun ones it happens to be the only one we're not doing today but that doesn't mean you can't use half of the code that we have in here so let's go with okay so we see two eks gke let's do a
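the for_each workspace setup he mentions can be sketched with the tfe provider a minimal illustration where the organization name working directories and naming scheme are assumptions

```hcl
# one Terraform Cloud workspace per cloud, created with for_each;
# requires the hashicorp/tfe provider to be configured
variable "clouds" {
  type    = set(string)
  default = ["aks", "digitalocean", "eks", "gke"]
}

resource "tfe_workspace" "cluster" {
  for_each = var.clouds

  name              = "multi-cloud-kubernetes-${each.key}"
  organization      = "hashicorp-demo"   # placeholder
  execution_mode    = "remote"           # runs happen in Terraform Cloud
  terraform_version = "1.0.8"            # enforced remotely as well
  working_directory = "clusters/${each.key}"
}
```

because for_each takes a set or map each workspace gets a stable address like tfe_workspace.cluster["eks"] so adding or removing one cloud never touches the others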
digital ocean because digital ocean is actually releasing a new upgraded version of its kubernetes clusters i saw earlier today so we'll hype that up a little bit when you look at the terraform code we're going to be going through at least one example today we might be going through multiple clusters but all of them are structured in the same way so if you understand how in our case digital ocean works you'll understand how the others work so we'll start out by defining our terraform stanza with a remote backend for the simple purpose of being able to share state across different teams and this is important because it allows my teammates to consume resources i'm creating without me needing to communicate the actual words to them the actual resource names or resource ids so we'll start out by defining a digitalocean provider we're grabbing 2.14 which is the latest version if you're building terraform code like this one thing i like to point out is version pin your stuff always there's no reason not to do it because it pays off in spades if this provider gets upgraded and a feature gets replaced or a resource gets deleted in general terraform will deal with that gracefully but there are edge cases where it doesn't by version pinning you prevent those problems because what worked today will work in the future and to answer sergio's question yes we're structuring it by environment in our case we consider the different clouds environments but you could make a sub structure sorry for those that didn't see the question sergio asked will you structure it based on environments using workspaces with that logic yes we are we're structuring it with terraform cloud workspaces and actually have a large amount of them here but we're structuring it with a single environment in mind and that environment is that specific cloud so for eks we have one cluster deployed right now for the purpose of this demo but you could use the code that we have and deploy multiple clusters so let's
go back to our terraform instructions when you build your code always version pin i like to add comments in there to make sure people can find that version of the provider so if you click here if your editor supports it you'll get the 2.14 documentation this is useful for our major providers as well there are weekly releases and something that you edited three months ago is essentially 10 to 12 versions behind you wanna always look at the right kind of documentation for the thing you're working on and next up our required version for terraform so far super simple if we're going too fast let us know in the chat we'll slow down a little bit and then go from there now next up we want to define a couple of providers and in the case of digitalocean this is a very simple configuration all we need is a token which we have available in terraform cloud this is super nice for me because i can put it in there i can use terraform to actually put that token into terraform cloud there's always a bad joke there that we're using terraform to configure terraform cloud so terraform can run but bad jokes aside even putting your secrets in like that can be useful because it means more of this process is automated and that means less chance for humans to actually make a mistake and the other thing we're defining here is an api endpoint generally i'm going to say this is an extra that you shouldn't be concerned about but if you happen to be talking against a different digital ocean api then you can change the api endpoint here now so far we've defined our providers we've configured our provider with the right token hopefully and the next part that we're going to do is we're going to define a cluster like before we're adding some commentary on the resources generally just a good habit to help people who read your code to understand what your code is doing and how it's doing that and the best way to do that is to think three months down the road are you going to remember the
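the provider pinning and token configuration he walks through can be sketched like this the variable name do_token is an assumption the provider source and the api_endpoint argument are real parts of the digitalocean provider

```hcl
terraform {
  required_providers {
    # https://registry.terraform.io/providers/digitalocean/digitalocean/2.14.0/docs
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.14.0"   # version pin: what works today keeps working
    }
  }
}

# the token lives in Terraform Cloud as a sensitive workspace variable,
# so teammates can use it without ever seeing its value
variable "do_token" {
  type      = string
  sensitive = true
}

provider "digitalocean" {
  token = var.do_token

  # only needed when talking to a non-default DigitalOcean API
  # api_endpoint = "https://api.digitalocean.com"
}
```

the comment with the registry url is the habit he describes clicking it takes a reader to the docs for exactly the pinned version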
exact page is your browser going to remember the exact page or would it just be easier to have that link in there our team generally just puts the link in there it works it makes life simple so next up we're configuring a few things on our cluster we've disabled auto upgrade mainly for the purpose of the demo but that's also an organizational choice your company or your team might say you know what i'm totally fine getting all the latest versions all the time because i trust my provider to do this it's generally what we do as well it's just for the purpose of this demo we didn't want it to upgrade mid demo because sometimes that doesn't work out and if you know anything about live demos then you know that the demo gods always choose the worst possible moment for you to do things we do have a maintenance policy in mind though ours is 3 am on monday which basically means you wake up everything should be done and if not then at least you know how your week is starting so either way sounds like a great start of the week no questions no need to do other stuff next up we're defining a node pool in our case we've got eight gigabyte instances three of them because that's how much we need for the rest of this demo but as with all the questions whenever people ask like what's the right size the thing you're looking for here is a book called the art of capacity planning that will help you learn how to make that choice none of us is qualified to give you that information for your environment because there's a lot of interesting stuff that's happening on your side that we're not going to be aware of for our use case the 4 vcpu setup with 8 gigs of ram works just fine we're defining a region i think for all these demos we're trying to stick to california so if you deploy this and run your own workloads you will get blazing fast speed and then finally of course if we're deploying a kubernetes cluster or any kind of cluster we need to define a software version and
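putting those pieces together the cluster resource he describes looks roughly like this the resource name cluster name region slug and node size slug are assumptions the arguments themselves come straight from the walkthrough

```hcl
# auto-upgrade is off for the demo; maintenance window is Monday 3am;
# the node pool is three 4 vCPU / 8 GB instances
resource "digitalocean_kubernetes_cluster" "demo" {
  name         = "multi-cloud-k8s"   # placeholder name
  region       = "sfo3"              # California, per the demo
  version      = data.digitalocean_kubernetes_versions.cluster.latest_version
  auto_upgrade = false

  maintenance_policy {
    day        = "monday"
    start_time = "03:00"
  }

  node_pool {
    name       = "default"
    size       = "s-4vcpu-8gb"
    node_count = 3
  }
}
```

the version argument references the data source discussed next so the cluster always picks up the newest patch release within the pinned minor version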
there's two ways to go about this in terraform we could define this as something like 1.19 or 1.20 or all the other versions that digital ocean has available right now i think 1.21 not actually sure if that one is in there let's have a quick look yeah 1.21.3 is available so we could set that but in our case what we do is we want to make this as hands-off as possible so we are using a data source and specifically digitalocean has a data source for kubernetes versions which we have here and we're essentially saying give us the versions that have the 1.19 prefix which is two versions behind but because our server is not set to auto-upgrade we're not going to get any problems here and i think our version is 1.19.13 and the way we can test this is by having a look in our terraform console so let me start that up and so this is one of the features that are highly underrated but very cool to try out terraform console is essentially a way to peek at the resources you're using you could do this by looking at the state file you could do this by using tons of outputs the terraform console is just a more interactive way of doing the same exact thing and so in our case we have a data source here which would be called data.digitalocean_kubernetes_versions.cluster and when i run this i can see what my name here is we're doing a version prefix of 1.19 as you saw before and the currently valid version for this is 1.19.13 so if you're developing terraform stuff trying these things out this is a super nice way to learn more about the resources you have terraform console works for pretty much everything you can do operations in there you can test out how your for_each works you can do additions subtractions every terraform function that is available also works in the console and for us right now we don't need anything from the console so we'll go back to our code we've got our data source which gives us the latest version for the 1.19 prefix which ends up being 1.19.13
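the data source he inspects in terraform console is a real one in the digitalocean provider a minimal sketch where only the data source name cluster is an assumption

```hcl
# fetch the DigitalOcean Kubernetes versions matching a 1.19 prefix;
# latest_version resolves to the newest patch release, e.g. 1.19.13
data "digitalocean_kubernetes_versions" "cluster" {
  version_prefix = "1.19."
}
```

in terraform console evaluating data.digitalocean_kubernetes_versions.cluster.latest_version prints the resolved version string which is exactly the interactive peek at resources he demonstrates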
that's perfect and then the next thing we want to do is run our code and see if we can actually get a cluster here and let me quickly check in another window if my colleagues are using that cluster because if not we might just tempt the demo gods and live deploy a new cluster so there's a couple of ways of doing this i think the first one the easiest one in our case we're going to comment this out which basically makes it invisible to terraform and then we're going to rerun our plan and of course because we still have an output which we're going to talk about in a second this is the beauty of live demos you can script everything in your head but sometimes you do things and they don't work out the exact same way that you have them in mind so in our case we've disabled the outputs we've removed the cluster and terraform agrees with that so we're gonna kill our cluster and let me switch to digitalocean real quick of course this is still running there we go apologies for the quick window switching but the easiest way often to show you things is sometimes the absence of it if you're watching this on repeat this is going to be much easier because you can just go back 10 seconds and see the window at that point which showed a cluster and that same window now doesn't and shows us the getting started screen because we just killed our cluster so far this is going well we're able to destroy infrastructure yay us now the question really is are we able to bring back infrastructure and we're just going to do an apply with auto-approve small caveat there auto-approve means that terraform will just charge forward with whatever instruction you've given it as you know stunt men like to say i'm a professional don't do this at home only do this at home if you're actually aware of what you're doing or in the home office or a remote office wherever you are because sometimes this can get a little difficult so while this is running let's see if it's already registering here there we go
our server is being created with 1.19 again and terraform is running this is always very exciting now you test these workflows and you make sure they work but there's a lot of things that can go wrong and especially when you're doing everything very remotely it's different it's definitely different so while we wait for the countdown to our new cluster i'm curious where are you all deploying from are you live at kubecon are you remotely watching just hanging out in your hotel because getting back to in-person conferences is really weird right now let us know in the comments we probably have another 30 seconds left for this and you know like filling up the time here i got a suggestion to sing and i was like we can't do this we have a community code of conduct we need to be friendly and respectful to people so that's why we're not singing here just looking at our infrastructure make waves positive waves which is not a digital ocean pun i promise so while this is working one of the things that you noticed beforehand when i disabled the resource to destroy it we had terraform bark at us for essentially having a couple of references that didn't work out and those were our outputs so if yes there we go you should be seeing my outputs.tf file right now which contains a couple of items that we need for further consumption so we're outputting our cluster id cluster name cluster region and just for convenience sake our console url and our terraform workspace url and those things we've defined dynamically so that means if you run this and you go into your variables file or our variables file and change it up from san francisco to any other zone new york or you change the prefix then this will reflect in the outputs and this is important because what we're doing with this repository is giving a promise to our application developers that they will be able to get the data that we have here through terraform outputs and that removes a little bit of glue and i'll show you what that
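the outputs.tf he shows can be sketched like this the output names match what he lists the console url pattern and the resource address are assumptions built dynamically from the resource so region or prefix changes flow through automatically

```hcl
# the contract with downstream teams: everything they need is an output
output "cluster_id" {
  value = digitalocean_kubernetes_cluster.demo.id
}

output "cluster_name" {
  value = digitalocean_kubernetes_cluster.demo.name
}

output "cluster_region" {
  value = digitalocean_kubernetes_cluster.demo.region
}

# convenience URL for humans; the path shape is an assumption
output "console_url" {
  value = "https://cloud.digitalocean.com/kubernetes/clusters/${digitalocean_kubernetes_cluster.demo.id}"
}
```

because every value is derived from the resource rather than hard-coded the outputs stay correct no matter what region or name prefix a consumer of the repository chooses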
means in a second but before we do that we're going to wait until our server until our cluster gets deployed the downside of picking a somewhat beefy setup is that it always ends up taking longer see ahmed is joining us from home very nice the best office chair is there because it's the one that you choose no matter if that's an actual office chair or just a huge sofa and so far you can see this is not a pre-recorded demo because otherwise we would have definitely not made this last three minutes and fifty seconds we would have just made this appear in the blink of an eye but as it is what terraform is doing right now is terraform is an api client that wraps providers like digital ocean aws many others not just cloud providers but also tools like datadog through an api and exposes that api to you through a language called hcl now if you're here if you're thinking about deploying infrastructure with terraform you very likely have used hcl before but if not what you're seeing on screen right now is hcl it's a language that we use for all of our tools so if you're thinking about picking up a new language and if you're into infrastructure i can highly recommend this one it's a configuration language so it's not burdened by the possibilities of higher level languages where you have many more options that sometimes only add abstractions only add confusion we're typing something out here that anyone who spends about an hour with this language should be able to understand and that's important because when you have a language that you understand that means you can have empathy for the people that built this you can understand that hey you know what these three lines my node pool they convey a story they convey a story that my 4 vcpu setup maybe is the best possible choice right now for our startup's budget but maybe that's really way too low powered obviously we're dramatizing this a little bit but if you think about it the infrastructure you deploy
that's not your job right the infrastructure you deploy provides value for something it provides a way to run software to run a database an analytics package maybe it runs your mobile app so all of these lines they really convey a story that helps fit into the narrative of your company so by understanding this by making it accessible in terms of readable human readable you're creating equity for everyone to just see what's going on and to remove confusion as most infrastructure engineers i guess we're all a little bit lazy in the sense that we don't want to do boring stuff we want our technology to be boring in the sense that we don't want it to cost us brain cycles to understand what's going on but we don't want to bore ourselves with doing boring stuff so in this case this doesn't bore me because i understand it in a few seconds hoping for you it's the same if that's not the case or if you just want to expand your knowledge we've got a handful of really good tutorials available on our learn site we'll have a link for that up in the chat and in the show notes as well if you're joining us later easy way for you to get that information and so while we were talking it took us five minutes and two seconds to deploy a new cluster i'm gonna quickly switch to digitalocean again and it looks like our cluster here is ready i don't know if there's a good way well we can see that this was created seven minutes ago so apparently we talked two minutes over the time limit but we're here and this is working so far our platform team is doing amazing right we started 35 minutes ago we talked a little bit about how we can use infrastructure as code to do all these kinds of things and how to deploy clusters in this case digital ocean but now the next part is we want to make this accessible to other teams further down the road so let's have a look again at our outputs and you can see here we've got our stuff predefined as we had it
before and if we just print out the outputs then we should be good i think my screen froze there for a second but let me see ah i think my console froze that's the problem while we fix that one second you have to think about these outputs that you should still be seeing on the top of my screen as ways to communicate information from terraform to either other terraform projects or to tools outside of terraform so let me see if that works sometimes what happens with editors like this is that the console just gets stuck a little bit which can be frustrating but we'll work through that and we'll just grab a different setup and let's see if we can get it to work here if not we'll work with this one so i switched to a different editor i quickly restarted it you should be seeing it still at the same font size if not let us know in the chat but i think we should be good here and you'll see one of the things that we're doing in the workspace so we've got our outputs defined and let's get those deployed and switch back to terraform cloud and so the reason we're running this entirely from terraform cloud is that if i run into any problems i can ask my colleagues to help i can run this pretty much from anywhere as long as i have access to an editor which makes this powerful for me because that means you know one of the things i do with terraform is maintain twitter lists there's a provider for that the tool was not designed with that in mind but it is one way of using the tool the way i edit this list is essentially go to github and use codespaces edit the file terraform cloud picks it up automatically makes the changes and we're good this is important because it lowers my response time to mere minutes i don't need to have the full stack installed i don't need to have all the tooling available locally if i'm confident that my code works and through the power of github actions in this case i can do a lot of good stuff i can make sure that all my stuff
is properly linted before even going there i can essentially easily say you know what this is exactly what i want and this is how i want it and so zoom is acting up a little bit today but we'll see if we can make this work still sorry not zoom my editor is acting up a little bit zoom is doing just fine but let me switch back to my browser because that gives you a better story so i've got a change that i triggered via the cli my editor decided to not collaborate with that but the important part is you can see these five outputs not four and we're gonna get to the question in the chat in a few seconds quickly i want to show the outputs we have here our team agreed to have all these outputs with similarly named interfaces so we've got a cluster id name and region and the reason that's important is that we can use a terraform data source to consume these i think we already pasted the link for the code in the chat but to give you an idea if you're using this code we have a little bit of sample code in there for how to use this for downstream consumption and the way you do this is you use the digitalocean kubernetes cluster data source and this is if you think about terraform in terms of an api client this is a read resource that essentially polls the remote api and queries it for all the clusters with the name multi-cloud-k8s if you're changing the name obviously this example will need to change as well but if not you're golden this will give you the information you need and this is really important because when you want to deploy workloads on kubernetes you'll need a cluster certificate you need a client certificate you basically need a whole kubeconfig now i could store that in terraform outputs but that seems like not the best approach it means that for you to consume the latest version i need to run my workspace again it means that you need to trust that everything in there is secure that it hasn't been
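the downstream consumption he describes can be sketched with the digitalocean kubernetes cluster data source a minimal illustration where the data source name upstream and the published cluster name are assumptions the endpoint and kube_config attributes are real parts of the data source

```hcl
# the application team only needs the published cluster name;
# the data source returns everything a kubeconfig would contain
data "digitalocean_kubernetes_cluster" "upstream" {
  name = "multi-cloud-k8s"   # the name the platform team published
}

# wire the kubernetes provider directly from the data source,
# so no credentials ever live in state outputs
provider "kubernetes" {
  host  = data.digitalocean_kubernetes_cluster.upstream.endpoint
  token = data.digitalocean_kubernetes_cluster.upstream.kube_config[0].token

  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.upstream.kube_config[0].cluster_ca_certificate
  )
}
```

because the credentials are fetched fresh from the api on every run nothing stale or leaked can sit in an output which is exactly the fragility he warns about next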
modified this is all very fragile it gets too fragile for what we want to do so the way we're doing this in our case i published a name for you which is what's really important here and you can then consume that name through that data source and this is magical on a couple of different levels the team downstream that wants to deploy their workload on kubernetes doesn't care if i deployed this to eks or aks or what version they deployed all they want to do let me switch tabs here they want to know how it was deployed what the name is and how they can consume it so in our case we talked about this a little bit we've deployed a couple of different clusters and what you're seeing here are the outputs for all those clusters and you'll notice we're using the same name just to make it easy for our consumers down the road so our platform team is giving you an output that contains a dictionary or an object with all the different cluster names and some extra information just for convenience i like having urls in my terminal because whenever there is a question how things are working or if i want to share the state of something not the terraform state but the progress i can just paste them the url share that on slack discord whatever tool you use and somebody else can just jump in there and see all right so let's grab our workspaces these ones and we can see we're actually retrieving four different clusters here you can see some of them were updated today others were updated at other points and here we have our output and so if we quickly refresh this and go back to our outputs you can see digitalocean has some of the stuff in here it doesn't render pretty on the screen that's okay because this is a complex object it's not meant to be rendered on screen it's meant to be consumed through terraform and this is where this gets a lot of fun we've deployed four clusters and now i only showed you digitalocean but you can see on these links that we have others available
so let me switch to google cloud actually not the compute engine but the kubernetes workloads you can see our cluster here is deployed we've got nine nodes and 144 gigs of memory and our monitoring is already telling us that we're low on resource requests because we haven't scheduled any workloads so let's have a quick look a little bit of a preview into what we're going to be doing tomorrow but before we do that biserub has a good question is coupling managed kubernetes with deploying workloads in the same terraform module a good practice and these are the kinds of questions where if you ask three people you get three different answers i would say no for the simple reason that you want to limit your blast radius the team that deploys the application on kubernetes the workload might not be the same team that operates the cluster that's totally okay maybe it is the same team and maybe you just want to limit the amount of things that can go wrong put those into different states and life gets a lot easier because rather than have to worry about the cluster all i have to worry about is a cluster id and so that's what we're doing here we're separating i don't know how well visible the sidebar is but we're separating our clusters for aks digital ocean eks and gke from our workloads so if you're in the clusters directory apologies if you're in the clusters directory all you have is a bare cluster that runs nothing other than what the cloud providers put on there by default those are the things we don't touch necessarily but even if we do i don't want to make that choice here remember today's part of the story is that we're the platform team we're representing a team that provides a way for other teams to run their software we're enabling we're not making the choice for them we make certain choices in terms of how big a cluster is where it's located hopefully for the sake of team culture you're making that choice together with the other teams but if not
Sometimes an application team might not know these things, or they might not care about these things. So however you make those choices, that is what we've done here today. And if we jump from our clusters to our workloads, let's jump to our Consul workload, sorry, Vault workload, and let me close a few things here to make it a little bit more visible. Remember how we said we've deployed this on DigitalOcean? I've got a pretty nice setup here on how to do this with DigitalOcean, and the way this works is: imagine that the vault subdirectory is essentially our application, a team that's tasked with deploying Vault. The same could be true for any other application. Maybe it's NGINX, maybe you're running some completely different workload. That team needs to get the kubeconfig from somewhere, so let's look at the process of getting there. We'll start by knowing that we've deployed on DigitalOcean, so we'll define our provider, and you should be able to see that on screen now. I've highlighted four lines. Same as before, we provide a token, which we have put as a sensitive variable into our workspace, so the application team can have access to the API but doesn't necessarily get the actual token. We don't have to worry about leaking it; it's secure and it works. Next, we need to configure our Kubernetes and Helm providers. To do that, we're going to switch to a data source, and in this case we're using the terraform_remote_state data source to retrieve the outputs from the outputs workspace. That sounds a little bit confusing, of course, but if we switch back to our Terraform Cloud setup for the outputs, naming is sometimes really hard, you'll see what I'm talking about: this object in outputs, called clusters, contains all the information we want to have. So you'll see the same workspace name here, and that enables us to retrieve some information from our output dynamically. So again, as before, we're in the outputs workspace.
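Pulling another workspace's outputs through the terraform_remote_state data source, as described, might look roughly like this. The organization name and workspace name below are placeholders, not the real ones from the demo:

```hcl
# Read the platform team's published outputs from Terraform Cloud.
# "example-org" and the workspace name are illustrative placeholders.
data "terraform_remote_state" "outputs" {
  backend = "remote"

  config = {
    organization = "example-org"
    workspaces = {
      name = "outputs"
    }
  }
}

# The "clusters" output is an object keyed by cloud; grab the
# DigitalOcean entry's cluster name (attribute names assumed).
locals {
  doks_cluster_name = data.terraform_remote_state.outputs.outputs.clusters["doks"].cluster_name
}
```

Note the doubled `outputs.outputs`: the first is the data source's name, the second is its attribute holding the remote workspace's root-level outputs.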
What I'm going to do is start terraform console and make sure that output actually contains what we want it to contain. As before, for some reason my editor decided not to work with me today, so let's see if we can fix this real quick. If not, that's also cool, we'll just work through it in words. Yeah, looks like we're definitely working through this in words, so we'll just do it in Terraform, much easier. So we've got our remote state data source, which matches this here. We're grabbing the outputs, specifically clusters, and if you remember, before, in the doks entry, one of the outputs that we stored was the cluster name. This is really all that we need, because the DigitalOcean data source for Kubernetes clusters, and these resource names get really long, allows us to retrieve everything we need to know about our cluster. So if we go to our outputs window, and let me open the console here, and then type in, what is it, the console, there we go, let's open that one up. Of course this path would not work the second we want to do it, because that would be too simple. What we're essentially doing here retrieves the cluster name, and that gives us access to all the important information from our Kubernetes configuration, our actual kubectl config. What we're doing here, then, is we grab the CA certificate, base64 decoded, because it is stored as a base64-encoded string in the API response to the data source. We're grabbing the endpoint, so which server we're actually talking to, and in the case of DigitalOcean, we're grabbing the token from the kubeconfig. Now if you're doing this on something like GKE or AKS, you might not be using the token; you might be using the client certificate and the client CA certificate, but the process is still the same. With this information, we're configuring our Kubernetes provider, which is nice and very simple. We're giving it an alias of doks, and then we're doing the same for the Helm provider.
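Put together, the flow just described, look up the cluster by name, then wire the base64-decoded CA certificate, the endpoint, and the token into the providers, could be sketched like this. The data source and its attributes come from the DigitalOcean Terraform provider; the local value name is an assumption carried over from the earlier sketch:

```hcl
# Look up the cluster by the name published in the shared outputs.
data "digitalocean_kubernetes_cluster" "this" {
  name = local.doks_cluster_name
}

provider "kubernetes" {
  alias = "doks"
  host  = data.digitalocean_kubernetes_cluster.this.endpoint
  token = data.digitalocean_kubernetes_cluster.this.kube_config[0].token
  # Stored base64-encoded in the API response, so decode it here.
  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.this.kube_config[0].cluster_ca_certificate
  )
}

provider "helm" {
  alias = "doks"
  kubernetes {
    host  = data.digitalocean_kubernetes_cluster.this.endpoint
    token = data.digitalocean_kubernetes_cluster.this.kube_config[0].token
    cluster_ca_certificate = base64decode(
      data.digitalocean_kubernetes_cluster.this.kube_config[0].cluster_ca_certificate
    )
  }
}
```

On GKE or AKS you would swap the token for `client_certificate` and `client_key` credentials, but the shape of the provider block stays the same.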
DOKS, DigitalOcean Kubernetes Service, makes it easy for us to see what we need to know, and it's also the same abbreviation we used throughout the rest of the code. Now here's where it gets really interesting, I think. We've deployed one cluster, and I don't know if I can deploy a second one, or redeploy a second one, because the others take a little bit longer, but let's have a look at our module. If you look at the middle of my screen, you're seeing a Terraform module. This is a local module that we have defined here, and pretty much all we're doing, other than passing in a root token for the purpose of this demo, is defining providers. Specifically, we're saying this module needs a Helm provider and a Kubernetes provider, and we want to have the reference for doks in there, because that makes sure that the Kubernetes interface for this module reflects DigitalOcean. So let's say you wanted to do this for GKE. Note, I've highlighted those nine lines, copy pasted them, and what we're going to do next is call it gke. We're still using the same module, call this one gke as well, and as long as we have a provider defined for those aliases, this is all you need to switch your workload from DO to GKE. Define another provider for AWS, switch it there. Now we're doing this on a cloud by cloud basis, but there's nothing stopping you from doing this on an environment basis. Hopefully you're using the same exact module in production as you used in testing. Maybe testing is a version ahead, but either way, you want to make sure that the stuff you're using is the same across all environments, because that way you're ensuring that your infrastructure as code actually contributes to a better experience. If you're doing stuff in testing manually but production is code based, as in infrastructure as code based, then that's okay, because at least production is codified, but you're not necessarily going to be able to replicate the same exact environment, because humans clicking in consoles will sometimes lead to mistakes. It's also a lot of work.
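The module call with its provider mapping, as described, could be sketched like this. The module path and variable name are illustrative, not the demo repository's exact ones:

```hcl
# Deploy the workload to DigitalOcean by mapping the module's
# providers to the doks-aliased provider configurations.
module "vault_doks" {
  source = "./modules/vault" # local module path, illustrative

  vault_root_token = var.vault_root_token

  providers = {
    helm       = helm.doks
    kubernetes = kubernetes.doks
  }
}

# Switching the same workload to GKE is just a second call to the
# same module with different aliases (assuming helm.gke and
# kubernetes.gke provider configurations exist).
module "vault_gke" {
  source = "./modules/vault"

  vault_root_token = var.vault_root_token

  providers = {
    helm       = helm.gke
    kubernetes = kubernetes.gke
  }
}
```

The same pattern works per environment instead of per cloud: one module, different provider aliases for testing and production.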
I prefer not to have that much work, so that's why codification is much easier. So let's take away the GKE module, because I haven't configured that right now, and let's see if we can deploy this while we have a look at DigitalOcean, and maybe, maybe not, my editor will decide to work with me on this. While we do this, let's see if we can get Vault deployed real quick in the last couple of minutes. We have Terraform starting up, and you'll see that we're actually pulling in our module. Rather than wait for it, let me show you what we're doing here. We're using a helm_release resource in Terraform, for the Helm provider that we configured against DigitalOcean. We're specifying the repository to be one of the HashiCorp releases. If you have your own Helm chart for Vault, or for your own application, feel free to change that. We're specifying a chart version. Please note that that's not the Vault version you're deploying; that's configured separately. We put that note into the code, but it's good to call it out, because we all know the two hard things in IT: cache invalidation, naming things, and off-by-one errors. So we want to make sure that this here is the chart version. When in doubt, over document your code. You're working on this right now, but you might be working on this six months from now, or maybe two years down the road somebody looks at this application and goes, wow, this is a work of art, but I have no idea what they did here. Put in as many comments as you need. Code and version control don't care about the extra lines, but the humans reading your code will. This line here lets me quickly click and see, when I'm signed in, which of course is going to take longer because something expired right now, so we're not going to do a sign in here, but you would be able to see all the releases of the Helm chart. That's useful, because then you can also look at the various changelogs. You can see what we need here, maybe we're deploying version, what is it, 0.16.1, let me scroll to that real quick.
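Inside the module, the Helm release being described is roughly this. The repository URL is HashiCorp's public Helm repo; the release name and comment wording are illustrative:

```hcl
resource "helm_release" "vault" {
  name       = "vault"
  repository = "https://helm.releases.hashicorp.com"
  chart      = "vault"

  # NOTE: this is the *chart* version, not the Vault version.
  # The Vault version itself is set separately through chart values.
  version = "0.16.1"
}
```

Pointing `repository` at your own chart repo, or swapping `chart` for a different application, is all it takes to reuse the same pattern for other workloads.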
And you can see it's still being deployed, we have a Vault setup here, so sometimes that takes a little while longer. And that's all we're doing here: very simple code to get all your software released, nicely encapsulated in a module that we can reuse across four different clouds, or across a hundred different iterations on the same cloud. If you use infrastructure as code, you know this concept, and it's a good concept. If you are new to infrastructure as code, this modularization and pre-packaging of the resources we need is a game changer. It makes it so much easier to do all this kind of fun stuff, to do it reliably, to work on it with your team, and to see what's happening. In our case, what's happening is that we deployed Vault on DO. It took us 2 minutes 35 seconds, and we've got one resource currently deployed. So let's switch back to DigitalOcean and see if there's anything happening in our metrics. For me it's 18:00, and you'll see that there's a little bit of load average change over the past five minutes. We started about three minutes ago, so this checks out with us actually deploying a workload. We can see our memory usage went up, that makes sense, we deployed an application. Our disk usage also went up, again, we deployed an application. Bandwidth usage is not very heavy, because we're not doing a lot other than quickly downloading a chart. And that's pretty much it. This was a very simple way of showing you how to deploy various applications on multi-cloud Kubernetes with Terraform, all from the comfort of one code base. We had the link for this in the chat, and we'll put it in the show notes. Feel free to grab the repository, we've got issues enabled, and we'll be using that repository for tomorrow as well, and for Thursday, so you'll be able to just follow along. If you have any questions, feel free to drop them in the chat now, or ask them in the issues. And with that, thank you for joining today's deep dive. Please come back tomorrow and check out
what my colleague Rosemary is going to be doing, and have a great rest of your KubeCon.
Info
Channel: HashiCorp
Views: 2,641
Keywords: HashiCorp, Consul, Service Mesh, Vault, mTLS, SecretsManagement, CertificateAuthority, CertificateGeneration, AutoConfig
Id: EQasvKfQLy4
Length: 59min 15sec (3555 seconds)
Published: Tue Oct 12 2021