HashiCorp Live Codes Vault & CircleCI, Part 3: Configure Pipeline to Deploy to Kubernetes

Captions
All right, we're live. I think we're... oh, there we go. I'm getting echo, let me mute the Twitch tab. Hi everyone. I have to speak a little quietly today because there are two people in this one-bedroom apartment right now, so I'm trying to be respectful of the other occupant. Angel right now is trying to install kubectl, is that right? Yes. Can you hear me? Yeah, I can hear you, but just to make sure everybody on the stream can hear us, let us know in the chat if you can't hear either myself or Angel. We're battling kubectl on Angel's laptop, but anyway, should we recap what's been happening the past two parts? Sure. We started off in part one of an intended series configuring a pipeline to grab secrets from a secrets management tool, HashiCorp Vault. We were successful in part one in creating secrets in Vault and having the pipeline grab credentials, the old-school username/password type credentials for Docker Hub; the pipeline built the Docker image, grabbed the credentials from Vault, and pushed the newly created image up to Docker Hub. That was a lot of fun. The last time we got together was part two, where the next phase was to build a job in the pipeline to grab the Google Cloud Platform (GCP) credentials from Vault, again using AppRole, which I was very new to and was kindly walked through, and we were successful. The goal was always to show the credentials being retrieved from Vault, with the end result being to deploy a new application release from the pipeline, using those credentials, into an active Kubernetes cluster. That sounds correct. Part one and part two are linked in the chat, so if anybody wants to go back and watch us attempt to deploy for four hours on a Friday, feel free. There's some fun introductory stuff there: we recapped the CircleCI configuration and talked about timeouts, Vault Agent, and AppRole. Today we aren't going to recap AppRole, the agent, or the CircleCI config too much; our hope is that we can deploy to Kubernetes with the setup we have. Vault should have injected the Kubernetes credentials for us, so we should be able to apply our application to Kubernetes, and the other great part is that we're going to use Terraform to deploy the application to Kubernetes, which gives us a way to parameterize the things we want and need. The only struggle right now is that we want Kubernetes locally. I could share from my laptop, but our video thumbnails will get really small because I changed my screen to 4K. I generally run Linux and do my work on an Intel box, but I use the Mac for its media capabilities, so I usually shell into a server rather than run things locally; my box isn't quite set up, and Rosemary was showing me that Docker Desktop on the Mac can stand up a local Kubernetes cluster, and that's kind of the bottleneck right now.
So should we go with sharing your screen? Yeah, let's share my screen. I will warn everyone that our faces will be very, very small, so apologies to everybody involved: you may find yourself staring a little bit at our tiny faces. Okay, everybody should see my Chrome, and you can see that we are very small over here; if anybody knows how to increase the thumbnail size in Zoom, I would appreciate it. It looks pretty good to me — that's because I'm sharing to your laptop. All right, what I'm going to briefly show is that we have a public Vault instance. I've talked before about how we set it up and configured it, but just to recap quickly: we have a key-value secrets store complete with all of our pipeline credentials, including Docker Hub, which allows us to push our application image to Docker Hub. We also have a pipeline; right now it's a failed pipeline because we just added the Docker Hub credentials, so I'll re-run it and we'll talk about what it's supposed to do. Previous episodes showed we had some issues with the time-to-live, so I've increased the TTL so I don't have to keep re-running things. For those curious how I increased it, I did it through Terraform configuration — a Terraform Vault configuration, so I use Terraform to configure Vault. There's a lot of Terraform configuring things today. Someone joked that this was like the end of the Star Wars saga; my co-workers were asking, "part three, what do you do?" All good movie series end this way, I guess. Here we go: the first part of the pipeline is vault auth. In summary, it has a Vault AppRole; it authenticates to Vault, receives a token, and the token allows it to retrieve some secrets — Vault, GCP, and a few other things. You can see it finished, which means everything's been retrieved, and at this point it should be pushing to Docker Hub; that's the big docker login command here, and then it pushes everything up to Docker Hub. Someone asked what our prequels would be — we did jump right into CircleCI and Vault pretty quickly. The prequels would be us showing how to stand up a legit physical server, like the one sitting in the corner here, pre-Docker and all that. All right, you'll notice our pipeline has passed at last. We also have this second part, the deploy-to-GKE job, but we didn't actually deploy to Kubernetes yet; this just sets up the credentials to do so. It retrieves information from the Vault GCP secrets engine and authenticates to GCP with a service account created through the Vault GCP secrets engine — this was in part two.
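The TTL fix and the AppRole the pipeline authenticates with aren't shown on screen in this part. As a rough illustration only, a Terraform Vault configuration along these lines is one plausible place for such a TTL change; the backend path, role name, policy list, and values here are assumptions, not the stream's actual configuration.

```hcl
# Minimal sketch (names and values are illustrative, not the stream's real config):
# the AppRole the CircleCI pipeline logs in with, given a token TTL long enough
# to cover a full pipeline run so the token does not expire mid-build.
resource "vault_approle_auth_backend_role" "circleci" {
  backend        = "approle"
  role_name      = "circleci"
  token_policies = ["circleci-pipeline"]

  token_ttl     = 1800 # seconds; 30 minutes
  token_max_ttl = 3600
}
```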
It actually gives it a new service account every time, and that service account has limited access — developer access to our Kubernetes cluster. So what we have here is that we've retrieved those credentials; you can see we ran kubectl version, and it's authenticated to the GKE cluster, the Google Kubernetes Engine cluster that we have. That's pretty much as far as we got, but it didn't do what we were intending, which was to deploy this nodejs-circleci app to the Kubernetes cluster. So let's see if we can do that. I know you had some pretty cool Terraform in the project for a deployment. Let me see if I can Live Share to you this time, because every time I've tried it the other direction it didn't work; maybe if I share to you it's fine. Okay: suggested contacts, invite by email, start collaboration. It's not going to see my GitHub — that's fine. "Open in VS Code" — that's a nice feature in VS Code Live Share; I knew it was there, and the first time I tried it, it didn't work, but I did use it successfully with someone shortly after. I think I have about fifty things running on my machine right now, which probably isn't helping the current circumstances. Did you get an invite? I don't think so. I'll send it to you by Slack and you should be able to join; I would put it in the Twitch chat, but the last time I did that it didn't work either. While Angel is trying to join: what we have in this repository is a lot of Terraform. There's the Vault Terraform configuration — configuring Vault with Terraform — and if you want the full explanation of that, definitely check out part two. The thing we're going to focus on today is this Kubernetes deployment Terraform over here. As you can tell, these values are not something we should have right now: nothing has been set, we didn't do anything with them, and I put in some boilerplate that isn't specific to the application. So I need Angel's help to fill in the blanks, because I don't know what port this application runs on or what image we should be using — it's a whole situation. The port, if you go into app.js, is 5000. I usually forward 80, just because it's a demo, to 5000, or you can change it to whatever you like. Okay, so this is 5000 here. The first thing you'll see is that these Terraform files use the Kubernetes provider configuration. There's some backend remote state configuration in here that I'm going to take out for now since we don't need it yet, and you'll see we're using Terraform 0.12.24 along with the Kubernetes provider.
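For reference, the top-level wiring being described — Terraform 0.12 plus a remote backend block that gets commented out for local testing — might look roughly like the sketch below; the version constraint and the empty backend block are assumptions rather than the repository's exact contents.

```hcl
terraform {
  required_version = "~> 0.12"

  # The remote state backend lives here in the repo; it is commented out for
  # the local experiments in this stream and revisited later via Terraform Cloud.
  # backend "remote" {}
}
```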
I don't actually know if this is the latest Kubernetes provider or not, so I'm going to comment out the version for the provider, just because I'm curious whether there's a newer update; generally I do put a version there so it's pinned. Angel, have you used the Kubernetes provider for Terraform before? Yes — for the first time this past week, actually. It's funny: I thought it was going to be more involved, but it's a lot easier than I expected. It has different pieces, right? The kubernetes provider is what defines which system you want to attach to, like a cluster, and then you have other components — the deployment resource, the service resource. And Kubernetes is a big ecosystem, so some people might just choose to use Kubernetes YAML, in which case you'll see the more traditional YAML files with the pods and everything else. You can also use Helm, which basically renders parameters out into the YAML files — if you check out the Terraform Helm provider, for example the Helm chart for the Terraform operator for Kubernetes, you'll see what I mean by rendering parameters into the YAML. In this case we're using Terraform because sometimes you do have things you want to declare in Terraform, or parameters from other cloud providers that you want to pass in — things that matter for DNS, for ingress, or for other administrative or monitoring tooling you're putting into a Kubernetes cluster — and you may have no other way of retrieving that information, so you use Terraform to interpolate it in. Are there any other options you can think of for deploying things to Kubernetes? For me it's either Terraform or the YAML manifests. I like the Terraform option, and the reason is that YAML is a data structure, so there's not much you can do with it as far as injecting logic, whereas with Terraform you have the capability to query other bits or other Terraform variables and inject them, making it more dynamic. It brings a lot of flexibility to creating the required bits for a deployment, instead of having hard-coded YAML or a script to generate it, which is why I found it very useful when I started playing with the deployment resource for Kubernetes — I had some variables I wanted to pull out of CI/CD, and it makes that a lot easier. What parameters do you pull out of CircleCI that you'd inject into Kubernetes? So, if I wanted a label tied to a deployment, tied to a commit for a release,
I could get the SHA and have that as a label and inject it; otherwise I'd have to write something myself. You can tackle that problem a couple of different ways, but then you also have Terraform tracking that release to a point. For me it's mostly metadata that I like to keep dynamic and fluid rather than keep re-injecting by hand, and the deployment resource with its metadata pieces definitely helps with injecting information like that. I think we're going to need that pattern today: we need a way to inject the image, because the image tag has the commit SHA in it, correct? It does on Docker Hub — well, the image name itself is pretty standard, but we could do that with environment variables in CircleCI; that's what I use. Am I able to control this? You should be able to click through — hold on, let me just grant you access, this is the dangerous part. I feel like every time I update something breaks... okay, I think it's okay now, give it a try. Yeah, you have access, it's working. What I wanted to bring up is the config file for CircleCI — can you open that? Sure. This is very slow today, goodness gracious; I think I'm running too many things, but we're almost there. So if you go down to where we build the image — I forget the exact naming pattern — there's some tagging going on: I'm using the CircleCI build number here; in another project I use the SHA. And I use the project name, so if you look at the docker build command at the end, there's the image name: it starts with my Docker login, which is part of the secret — the username — which we pulled from Vault, and then we can leverage all these environment variables. So this is what we would pass and inject into Kubernetes: the tag, probably the image name, basically anything parameterized here. And if we go to Docker Hub, you can probably just pull it up and find it. It's public — the interwebs are slow today, there's something running on my machine causing the heartache. What do I type? ariv3ra — Rivera with a three instead of the e — slash, I think it's the nodejs one, yes, the second one. There are the tags — built fifteen minutes ago. Okay, so for now we'll hard-code-ish this, but let's fix up this Kubernetes deployment first. For those who are more or less familiar with Kubernetes:
Kubernetes has a construct called the Deployment. The Deployment basically says how many instances of which containers and volumes you require for this particular application. If you're writing YAML, it looks something like this: here's nginx, I require three instances of nginx, and this is the container image I plan on using. Overall it's a declaration of what you're expecting from the system. What we're going to do is mimic this in Terraform. As you see, the Terraform has one replica; it looks fairly similar, just not in YAML. We have the same image, maybe a liveness probe, a couple of other parameters, but for the most part it's effectively the same. And that's because you're deploying an nginx image — that's the example. That's why I get annoyed with some of these examples, and not just from HashiCorp: a lot of folks just cut and paste, and nobody's really deploying plain nginx to Kubernetes in that manner. I'm waiting for someone in chat to tell me they are. Maybe I should stop being lazy and do a pull request to some of these docs with a good reason why, because nobody does this. Well, it's a simple example, and we're going to borrow it and build on it; a lot of times we grab examples from other places and never update them to be realistic. It's a docs thing, a pet peeve of mine — but example apps are hard. All right, so the first thing we talked about was updating the image. This is not the image we're looking for, and you mentioned you prefer to pass the image name in dynamically through the pipeline, so maybe we make it a variable. We could do that; the thing is the tag will be different, so let's just use latest for the example today. So we do var.image, and we'll pass in whatever variable we desire. I don't think we need this liveness probe unless you have a probe on your app — no, and it defaults anyway, so you don't really need it. So we've got a deployment, we've got the image, and as usual with Terraform you want to declare the variable. Right now we've got a variable "app"; let's add variable "image" of type string. This is Terraform 0.12, so you get a pretty neat way to describe the different types you can use beyond string, and you should put a description, because if you come back to this you won't remember: "container image repository and tag." Should we set the default to your image? Yeah, do that — in fact, let me try typing that for you, because I think I have access now. Look at that. By the way, for those just tuning in, this is the first time Live Share has worked for us; usually it's fine, but every time we've streamed the past couple of weeks it just has not worked.
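Pulling together the deployment being edited above, a pared-down version might look like the following sketch; it is adapted from the provider's documented kubernetes_deployment example, with the resource names, single replica, and default image treated as assumptions rather than the project's exact files.

```hcl
variable "app" {
  type        = string
  description = "Application name used for Kubernetes metadata and labels"
  default     = "nodejs-circleci"
}

variable "image" {
  type        = string
  description = "Container image repository and tag"
  # Illustrative default; the stream sets this to the image pushed to Docker Hub.
  default     = "ariv3ra/nodejs-circleci:latest"
}

resource "kubernetes_deployment" "app" {
  metadata {
    name   = var.app
    labels = { app = var.app }
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = var.app }
    }

    template {
      metadata {
        labels = { app = var.app }
      }

      spec {
        container {
          name  = var.app
          image = var.image
        }
      }
    }
  }
}
```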
You know, I think I can tell you why mine wasn't working just now: I had your Zoom screen up and I was trying to click on the Zoom window, which is why I couldn't open anything. All right, now I know why. So we have a default set. And there's a command that, for me, is the most magnificent Terraform command — everybody can debate this, but it's my personal favorite: terraform fmt. It automatically fixes the whitespace and aligns everything. Wait a minute — is that new? No, it's always been there. I also know the VS Code extension still needs some improvements for 0.12 syntax, so it's not aligning things perfectly; if you can't get it to work within VS Code, go ahead and run terraform fmt and it will whitespace everything for you. You might want to run it again — I made a change — but it does nothing, I think because you only added to the end of a string. I know everybody's saying that can't possibly be my favorite command, but it actually is: it neatly organizes everything for you, which is nice. That's why this whole live stream exists — you teach me about CircleCI, I teach you Terraform. To be fair, it's nice on the eyes, even if I don't worry about it when I'm doing real work — but now I know how to do it quickly. And you could run it in a pipeline, I guess, though you'd have to check the changes back in. I'm still getting a "starting up" spinner on my local Kubernetes, by the way. I'm not even going to try to deploy the service yet, because we don't know if this is correct; let's make sure it's correct before we start doing anything else to it. And just to be thorough, what do you want to name your app, Angel, so we can default that? Just use nodejs-circleci, keep it simple. There we go — and terraform fmt fixes the alignment again. All right, so I ran terraform init, which initializes and grabs the Kubernetes plugin, and as you can tell it's on version 1.1, so I'm going to make sure I pin that. It's better to pin version numbers — please do, I beg everybody to do it. Why would you recommend that? I'll be honest, I don't do it, and it does cause me pain, so I'm assuming you're saying it because you don't want me to have that pain. Yes: providers themselves go through updates, and they'll change schemas, so if you don't pin them you won't remember which version of the provider your configuration last worked with, and you won't be able to tell what schema changes were made or what was affected. Going forward it will be very important to pin your provider versions, especially as support for 0.11 is slowly deprecated.
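In the 0.12-era syntax used on the stream, the pinning advice is just a version argument on the provider block; the constraint shown below is illustrative — use whichever version terraform init actually reported for your configuration.

```hcl
provider "kubernetes" {
  # Pin to the provider version this configuration was last tested with
  # (the constraint below is illustrative; match what `terraform init` reports).
  version = "~> 1.11"
}
```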
Some providers will have newer versions that won't support 0.11, so definitely pin your provider versions if you haven't already. Can I ask a related question? I remember going through the 0.12 upgrade, and it's always super hard when you're writing software — I've been through this so many times in my career — where, if you want to get to the next level of a product, you have to make breaking changes to get there. Nobody wants to do it, but it's just part of life in software; it's hard to justify, but in order to be modern and adopt new technologies you have to break the old stuff. As the developer advocate for CircleCI I hear it all the time, and I'm sure you do too: "why did you do this to us, I loved the old system." But it's about moving on, basically leveling up the product, and sometimes you have to break things to take it to a new place. Sometimes those decisions aren't the best, but I think they really are required. What was the most disruptive version change CircleCI went through? Going from version 1.0 to 2.0, because when CircleCI did that platform change it literally had to be rewritten from the ground up, and that's where we introduced the containerization capabilities. A version-one product is never future-proofed, so to speak; version ones are the worst, just because when you designed it — which in CircleCI's case was probably seven years earlier, I'm guessing — you couldn't know the landscape would change dramatically and all these new technologies would become industry standard. So you have to adjust. There are still a very few customers running 1.0, but they will soon be fully migrated over, because especially as a SaaS product the economies of scale tip the other way if you keep these old things around. I actually found a person who was running Terraform 0.7 and hadn't updated at all for a long time; it was rough — there was no way they were going to upgrade that easily. There are industries where companies have such stringent change policies that everything has to go through so many reviews; I'm hoping this practice has changed, but I know for a fact there are still heavily regulated industries and companies out there where any little change has to go through a massive review and compliance process, get approved, and then implementation can take a few months. I feel for those folks. All right — your Kubernetes is working? Yes, my Kubernetes is working; I'm running the Docker Desktop local cluster here.
For those who are curious about the capability, you can go to the Kubernetes tab in Docker Desktop, and basically all you get is an empty Kubernetes cluster with nothing in it except the control plane. Now, what we could do is commit this change and let CircleCI run it, but something we were talking about before we started the live stream is that we don't even know if this Terraform is correct. It looks correct, but we don't know, and it would be a waste of resources to push it to CircleCI — a waste of build minutes and of our time — to try it against the pipeline and have it fail against the live Kubernetes cluster. Kubernetes is neat in that you can test it locally, so we've created a local Kubernetes cluster and we're going to try this Terraform configuration against the local cluster before we push it up to our "production" cluster. We also have a production Vault, so in that case we wouldn't be going in and changing things manually, but we are able to test this locally — so let's test locally. It's kind of like a unit test for Kubernetes. If I were running this on my Ubuntu setup it would be MicroK8s: Canonical, the company that sponsors Ubuntu, has come up with a really nice developer Kubernetes platform called MicroK8s that's super easy to stand up and install and runs really nicely. It's very similar to what Docker Desktop gives you on the Mac — although your mileage may vary, like mine today; I don't know what's wrong with it. Have you tried kind? Someone in the chat asked about kind. I like kind; it does take some time to create a cluster, especially locally, and the project was new for a while, but it works out really neatly. I know a number of people on the team, like Nick or Eric and many others, who use k3s, which is a much smaller Kubernetes, but I do enjoy kind quite a bit. I use it for testing higher versions of Kubernetes. A good example: Kubernetes on Docker Desktop is very version-specific — if you look, even though my client is 1.18, my server version is 1.15, and most of the advanced features, like server-side apply, were released as part of 1.16 or later, with even higher versions out now, 1.18 at the moment. Kind is at 1.17 by default, maybe 1.18, and you can change the Kubernetes version, whereas the problem I have with Docker Desktop is that it's really hard to test against different versions of Kubernetes. There are so many of these dev tools — but whichever one you use, definitely run things locally.
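As a hedged example of the version-testing point above, kind lets you pick the Kubernetes version per cluster via the node image; the cluster name and image tag below are just examples, not anything used on the stream.

```sh
# Spin up a throwaway cluster at a specific Kubernetes version (tag is an example),
# point kubectl at its context, then tear it down when the test is done.
kind create cluster --name tf-test --image kindest/node:v1.18.2
kubectl cluster-info --context kind-tf-test
kind delete cluster --name tf-test
```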
I agree with you on that. Linus Torvalds, the creator of Linux and Git, is pretty adamant about when you should commit — it's a conversation people have and argue about all the time — and I pretty much agree with Linus: you can commit locally all day long, but before pushing that code into a shared repository you should have run it through the wringer locally. Keep your scratch work in your scratch pad, which is your laptop, and once it's as close to polished as possible, send it up. So I totally agree that a local Kubernetes cluster makes total sense. To be honest, in practice I haven't been doing that lately with my demos; I've just been running them. Just pushing straight to production? No, no — not production, I'm just developing. I've been around this business too long; I still hold a little to "don't deploy on Fridays," even though I know that's changing. Well, we don't have to worry too much here — this is a little cluster. For the most part, you can tell from the terraform apply that it's pretty much taking the defaults of what it detects from Kubernetes and from the provider itself, so you'll see some values marked as computed. You'll also see the nodejs-circleci image we specified, and the labels we put in there so the app can be tracked — app = nodejs-circleci and everything here. For the most part the Kubernetes provider will work against a number of Kubernetes versions as long as the APIs are backwards compatible. It does not do server-side apply, for those who are curious: the diffs you see right here are Terraform itself checking for differences, not Terraform asking Kubernetes for them. With server-side apply, the idea is that you would ask Kubernetes what the differences are — it's basically the dry-run capability in Kubernetes. So Terraform is going to create this app; let me open another tab and see if it comes up. I missed the part where you set the context on your laptop to point to the local Kubernetes. By default, when you enable the local cluster on Docker Desktop it sets the context to docker-desktop; otherwise you can set it from here, you can use kubectl's context commands — there are a couple of different ways, everybody has their own preference — or, if you're using Docker Desktop, you can click the drop-down and default it. It is a good point, though: the Kubernetes provider will pick up whatever the default is in your current context, but you can point it at an explicit kubeconfig, or set an explicit certificate, cluster name, and host. For example, say you created a Kubernetes cluster with Terraform and you want to extract the kubeconfig outputs and pass them directly in here: the main difference is that you would set the load-from-config-file parameter to false and then explicitly define things like the host and credentials. By default, though, it picks everything up from your kubeconfig.
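The two ways of pointing the provider at a cluster described above look roughly like the sketch below; the paths, context name, and the variables in the second form are placeholders, not the stream's actual values.

```hcl
# Option 1: rely on a kubeconfig file and name the context explicitly.
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "docker-desktop"
}

# Option 2: skip kubeconfig and pass connection details directly, e.g. from the
# outputs of the Terraform that created the cluster (placeholder variables).
# provider "kubernetes" {
#   load_config_file       = false
#   host                   = var.cluster_endpoint
#   token                  = var.cluster_token
#   cluster_ca_certificate = base64decode(var.cluster_ca)
# }
```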
We actually have another question in the chat: do you prefer to develop applications running directly in containers, or using the tools provided by the framework — dotnet run versus docker run? I've got to go with docker run. Really? Why is that? I'm not a big fan of using the proprietary things; I haven't used dotnet run, I'm just used to using the tool locally. I do think it depends on the framework. With dotnet run, I don't tend to use it that often because it takes time to build the dependencies and it's about equivalent to doing docker run — .NET Core has made it super easy to run things in containers, so in some cases building the container and then running the application to see what happens is a little bit faster. The problem with running locally is that sometimes my dependencies get all messed up, so for .NET Core specifically I do like building and using the Docker container. For Java that's not always the case: if I'm using Spring I'll usually just jar it up and run the jar. I think it's the old sysadmin in me — I have one bash script with a bunch of arguments, the same one for all of my builds, especially when I'm working locally. It's all in how you do your work. What's interesting to me is that some things have much improved tooling: I run .NET in a container because .NET sometimes has nuances to running in a container, and I want to know right off the bat if there's anything I should change in my .NET configuration to make it run better there; and Spring, for example, has a ton of nice frameworks, established patterns, and tooling and automation for it. Your Node.js app — did you write it and just run node, or run it in a container? I usually run it locally, then I package it, run it in a container, and test it — do all the local things. I automate those; once I've done something more than once I'm going to automate it, though for this simple demo app I didn't put that much effort in. If I'm writing a serious application, I do a lot of automation so I don't have to repeat myself, and I make that automation as dynamic as possible, accepting arguments and parameters, so the idea is write it once, use it many times. Sometimes I'll even build libraries to do all of that — if I'm doing it in Python I'll create a whole Python module and class it up nicely.
I want to work smarter, not harder. Well, right now this is not working: it appears that latest was not a tag that existed, so we probably want to take that away. You know what, you're right — just take the tag off; actually, let me look at the config file and I can tell you why. For those who are curious: we attempted to deploy, and Terraform was kind of stuck — it just keeps trying, "I need to deploy this" — and unfortunately Kubernetes gave an ImagePullBackOff error, which generally means it can't access the image. That's because we don't actually have a latest tag, which is why you see "manifest unknown": if it couldn't reach the registry it would say it can't reach it, but in this case it can't find the tag. We don't have a latest for this image; we had a specific build. If you jump over to the CircleCI config file I can explain: there on line 36 there needs to be another -t docker tag, the same image name but without the tag, because with only one tag you never push a default latest. I thought I had added that, and I didn't, which is why it couldn't find it. All right, we'll add that for later, but for now let's just test this and make sure it works. I've destroyed everything and it was fine, so let's terraform apply once again — we've updated it with the specific image tag — I'll say yes and we'll see what happens. Ah, this is funny; this is why I think the Kubernetes provider still needs some construction: it didn't gracefully delete the previous deployment. I've seen that before — not fun. I can say that it's definitely something that's going to get some improvement, but for now we'll delete it ourselves. So basically the answer is you have to hard-delete it with kubectl, which isn't fun because you have to remember what you called the application — you can still do a kubectl describe or a get to check it out, but it's kind of irritating. So this is deploying once again, and at least it's not erroring this time. Under kubectl get pods you'll see the container creating — maybe it's going. There are a couple of ways you can watch it come up, like the -w flag, which waits and shows updates as the pod changes state. Let me pop this window up higher so you all can see it.
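The missing :latest tag diagnosed above comes down to the build step only tagging the unique build number; the fix being described — a second -t with no tag — looks roughly like the sketch below, where the image name, tag scheme, and environment variables are stand-ins for whatever the actual config uses.

```sh
# Tag the same build twice so both the unique tag and :latest end up on Docker Hub
# (an untagged name defaults to :latest). Names and variables here are illustrative.
docker build -t "$DOCKER_LOGIN/nodejs-circleci:0.1.$CIRCLE_BUILD_NUM" \
             -t "$DOCKER_LOGIN/nodejs-circleci" .

docker push "$DOCKER_LOGIN/nodejs-circleci:0.1.$CIRCLE_BUILD_NUM"
docker push "$DOCKER_LOGIN/nodejs-circleci"
```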
It's thinking — it's creating the container, though I don't know if it's creating it correctly, and it should have come up by now. Are we even sure this container works? Oh, it works. We're just going to copy and paste; I could get fancy with the selectors, but I don't really want to do that today. Let's see what's going on... okay, I think it's just slow because of my internet — it's still pulling the image. I wonder what I did to this machine; the image isn't that big. Well, you're streaming from that machine, sharing over Zoom, doing a lot of things at once — it's not having much fun. I could pull the image manually, although that's not going to help anything. Do I have to resort to the second cluster, the stunt cluster that's external? I think we do; let's abandon the local one. For those who are curious, because there have been internet concerns everywhere, I did set up a stunt cluster that is external — it's actually a DigitalOcean cluster. Nice. There we go: kubectl version — and this is different, because previously we were at 1.15 and this one is newer. We could also run kubectl cluster-info, but then you'd see all my endpoints, so I'm not going to do that. Okay, the local apply is still going; we'll just ungracefully shut that plan down. Let me make sure the context is actually set... no, it is not set for this one, so we're going to do this. "No configuration files" — what happened now? Are you in the right folder? I am now. Oh, it needs the kubeconfig — that's not helpful. Here, for those who are curious, is the documentation: the provider has load_config_file, and a config_path that can be sourced from the KUBE_CONFIG or KUBECONFIG environment variables and defaults to ~/.kube/config. So it needs to be changed here — well, if you had connected to the cluster and set the current context it would just work, but I didn't source it or set the context, though I should have. That's what Terraform is for, right? To let you put that in a variable. I didn't actually create this cluster through Terraform, though, and that's the trouble right now — I should have done it through Terraform, but I did not. Let me rename this so it's a little easier. I don't know if it's just because the computer is slow today, but it's making it a lot harder to type. You can probably turn off your Docker client. Yeah, I should just turn that off; it's clearly causing problems — that thing sucks up so many resources. Oh, and terraform init again once you change the config path — right, that must actually have been why it was a problem in the first place. Friday! For those who are just joining us, this is what happens on a Friday. Okay, we'll apply it, and then I am going to quit Docker, because it's just not working
if I let it run like that. In retrospect I should just have passed this over to you — but do you have the kubeconfig? No, it sits on my machine. Okay, on this one then. Let me see what I have... kubectl... oh, there we go, it's running now, thank goodness. So you know it's running; you can also get the logs and take a look at what's in there, just because we're nerds — "node server is running," that's right. Okay, so that was the first bit: we actually got the deployment ready. But you can't resolve to it yet — right now this is just an app running. What if we want to access it? That's the big question: a service. Okay, so are there any ports in here in the deployment? I think we're going from 80 to 5000, right? You need to tell the deployment which port. Can you bring up that doc? Sure, let me bring up the docs — and I think I'm going to turn my resolution back to 1080, because it was a lot easier to deal with. So if we go back to — is it the deployment we have to look at? This looks like just the provider page. Down in the docs there's the kubernetes_deployment resource; I believe that's where we'd set it — well, we didn't do a liveness probe, but let's see if there's a deployment example somewhere where we can at least set the ports. In the docs there's an example — the HashiCorp docs, the kubernetes_deployment page. Scroll down... no, I don't think the port is there; it's on the service one, right? Well, we do have to say which port the app is available on — I think not in the deployment but in the service. Scroll up and go to the service page... there we go, here's the service. Okay — sorry, go to the spec, the container spec, at the bottom. Yes, this one. Okay, so this is right, then — you've got it, let's give it a try. We're going to take this service and uncomment it, because I copy-pasted it in earlier. A couple of things we did differently in this service: we've got the name of the app, which is fine, but we also set the selector, and the selector interpolates the label from the kubernetes_deployment. Rather than saying var.app — because we don't necessarily know if the deployment is up — we say kubernetes_deployment.app.metadata[0].labels.app, so we pull it from the deployment itself. Now, port and target_port: what do we do? Okay, let me pull up an example to jog my memory. In our deployment.tf, because you used the template block, inside the spec's
container block you can actually specify the ports, so let me do that for you. Okay — wait, am I in the right project? No, I'm not, I'm in my own code... there you are. All right: inside the container we would need to add this. I just threw a name in there, but this is okay. That looks like it will work, based on my experience. Then the service piece: there's a port and the app selector, which I think you covered, and the type is LoadBalancer, right? Yep. And then this one needs to be 5000: the load balancer will have port 80, but the target port is the container port, which is 5000. Right — so when we build the service, we're basically telling Kubernetes we want to expose this application, and that's why it's type LoadBalancer: that lets Kubernetes open up a public port, which is designated here. Normally you would do this with 443 or some secure TLS-type port, but since we're just doing a demo we're insecure on 80 — and you'd put an ingress in front of it. Right. So that load balancer type needs to be commented out, since we're deploying locally: if you're deploying locally there's no load balancer, and it will default to whatever the cluster says the default is. That's the nuance between local testing and testing on the live cluster — sometimes you have to recognize which parameters are locally available and which aren't. Yeah, I forgot — I was already thinking GCP. It's fine; just remember, before you push, to clean up the things that aren't valid locally anymore, and there are ways to separate them to make that easier. What you'll see here is that there are some neat port details — a cluster IP, some other things — so let's just make sure it's able to deploy. We also changed some of the other settings, so those will be in place: as you can see, we did change the port, and you can see it get changed over here. Some of these resources are being modified, others will be created. That was pretty fast — it should have been super fast. If I look at the pods, you'll see it's still running and everything is fine, but it's new — you can check by the age — and then I'll do a get service, and now we have this nodejs-circleci service with port 80. For the most part this looks fine. Do we want to run a test pod — the dreaded nginx pod? No, just to show that it actually works, curl to it. Does anybody use busybox anymore? Is that still a thing? Yeah, I believe that's Alpine-based; still used. Okay: curl to — what did we call this — nodejs-circleci... there we go. Does this index.html look good?
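Assembled from the discussion above, the port wiring could look like the sketch below: the container exposes 5000, and a service maps 80 onto it, reading the selector label back off the deployment resource rather than repeating the variable. Resource names carry over the same assumptions as the earlier deployment sketch.

```hcl
# Added inside the deployment's container block:
#   port {
#     name           = "http"
#     container_port = 5000
#   }

resource "kubernetes_service" "app" {
  metadata {
    name = var.app
  }

  spec {
    # Interpolate the label from the deployment instead of repeating var.app.
    selector = {
      app = kubernetes_deployment.app.metadata[0].labels.app
    }

    port {
      port        = 80   # exposed by the cloud load balancer
      target_port = 5000 # the container port above
    }

    # LoadBalancer on the real cluster; left to the cluster default when testing
    # locally, and later swapped for a variable so it can be overridden.
    type = "LoadBalancer"
  }
}
```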
Yes. So basically, if you really want to verify that things are fine, what I just did was create a busybox pod, kubectl exec into it, and then curl against the service endpoint. And it resolves because it's local to the cluster — Kubernetes knows about it. Exactly. The trick, though, just to be vigilant: you do have to delete these, so definitely delete test pods after you're done. I'm going to delete it because there's no point keeping it around, and after that's deleted, this is pretty much good to go. I think that means we need to uncomment this type before we push it. That's where I would use a variable, like a flag. Good idea — something like var.service_type? Yeah, you could do something like that, default it to LoadBalancer, and override it. Exactly. These are, for me, the benefits of using Terraform over a static YAML file: you can create all this infrastructure, template it up nicely, and then reuse it. At this point we would probably make a Terraform module and package it so you get encapsulation, but for this example let's go with this. All right, there's one other thing we need to be conscientious of: as always with Terraform, when you deploy things and you don't save the state, it has no awareness of what it should be managing. That's the trick with using Terraform to deploy to Kubernetes — you do have to store state somewhere. Somewhere could be pretty much anywhere; in this case I don't really want to set up a backend or S3 buckets myself, so I'm just going to use Terraform Cloud to do it for me. I'm going to create a new workspace — you'll notice I did have a workspace previously, and that was for my Vault config. I keep them separate, because the state of my Vault configuration doesn't need to be messing with my application state. That reminds me of one of the interesting things I found out about the Kubernetes provider and Terraform: they recommend that you keep the cluster provisioning and the deployments in separate states as well. Exactly — two different states. I was putting everything together and I was stepping over things, and then I read the docs and they recommend separation of concerns there, and I thought, that makes sense. I stumbled on it the hard way, but yes: keep the cluster deployment separate from the application deployments, or whatever you're deploying on top of Kubernetes — don't put them together, because there are all sorts of dependencies that are really difficult to map. I think part of what happens is that it's asynchronous as well — it's separation of concerns and race conditions: you could unintentionally step over a cluster-wide configuration.
Yeah, I found that out after the fact; again, I didn't read the manual. But that separation is something you want, and if you really do want the two tied together and you're using Terraform Cloud, you can use something called run triggers: if you make a change to the Kubernetes cluster workspace, you can configure Terraform Cloud to trigger the next workspace's run. So you can orchestrate Terraform that way; it makes it a little easier, and there's a recorded webinar on how you would do it. It's a really nice way to handle Kubernetes deployments while still keeping the state separate.

What I briefly did, in case anybody was curious: I'm pre-creating the workspace. If I just left it, Terraform would create the workspace for me, but the problem, and this is actually how we started debugging together, is that a workspace created that way is set to Remote execution by default. That little button cost me about thirty hours of pain. If you only want Terraform Cloud to store state, which is what we're doing here, change the execution mode to Local: create the workspace first, then switch it to Local. There might be an opportunity in the future to set that as a default for your organization in Terraform Cloud, but for now you do have to go in and click Local.

Can I tell them what happens when you don't change it? I wasn't aware of this; I was very new to Terraform Cloud, and I wanted to write a blog post about not storing state in AWS or some other place. I met Rosemary at re:Invent, went to the booth, and said, hey, I'm having a terrible time. What was happening was that with Remote execution, your runs execute in Terraform Cloud's own containers, which means any environment variables you set in your pipeline aren't available there. I was pulling environment variables, and the remote run couldn't see them. We went back and forth, and after re:Invent I was still asking about it, and finally, when I went through the whole Terraform Cloud UI you just saw, I found that little button. As soon as I clicked Local, problem solved. So, you know: update your docs. The issue was that I had to dig for it; it may even be near the top of the FAQ now, but unless you were really looking for it, you wouldn't know what to search for. It was frustrating, but I don't consider it a waste of time.
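For context, pointing Terraform at a Terraform Cloud workspace for state storage looks roughly like the block below (organization and workspace names are placeholders); the Remote-versus-Local choice itself is a setting on the workspace in the Terraform Cloud UI, not something in this file:

    # Illustrative: store state in Terraform Cloud while still running terraform in CI or locally
    terraform {
      backend "remote" {
        hostname     = "app.terraform.io"
        organization = "example-org"        # placeholder

        workspaces {
          name = "app-deployment"           # placeholder; pre-create it and set Execution Mode to Local
        }
      }
    }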
In the same token, it reminds you that you need to dig deep and have a slightly deeper knowledge of the system you're using. And imagine if you're using, I won't name names, one of those platforms with tons of switches and flags. I can sense there's a story behind that one too. Big-name, complicated systems. Okay, let's see.

Now we're actually going to configure the pipeline. Briefly, what I'm doing, just so we get a sense of whether this works at all, is adding a deploy-to-kubernetes job that runs terraform init and then terraform plan. Should we manually set up an apply? Let's just do a plan for now and see if this even works; if it doesn't, we go back, fix it, and keep iterating. If you're going to do this, I would do a terraform plan for sure, then an apply if the plan succeeds, and then maybe put a sleep in there and run the destroy. You could do it in the same job. Are you looking to destroy it as well, or run a smoke test? Ideally we should be able to destroy it, but let's plan first, see that it works, then add the apply. Are the other jobs already running in parallel? Yes, I was looking at the workflow. Cool, we're good.

Before I do anything I need to change this so it doesn't use the local setting explicitly; remember to clean up after local testing. Git commit: we'll call this "try to deploy to kubernetes." This is GCP, the real thing. The nice thing is our credentials are already coming down out of Vault. I don't actually think there's a Terraform Cloud token in there yet, but we can try it. I already have the console open, and by the way, that was my cluster, we can ignore that. This is getting pushed up now; it does need to push up the latest image, but it's a small image so it won't take much. A couple of weeks ago someone asked what's for dinner, so I'll ask now: what is dinner going to be? I think it's going to be sushi today. Someone last week said pizza.
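The job being sketched here ends up looking roughly like the snippet below in CircleCI terms; the image and job names are illustrative, and the checkout step shown is the one added a few minutes later after the first failure:

    # Illustrative CircleCI job: plan (and later apply) the Kubernetes deployment with Terraform
    deploy-to-k8s:
      docker:
        - image: example/terraform-gcloud:latest   # placeholder for the custom all-purpose image used on stream
      steps:
        - checkout                                 # required so the kubernetes/ Terraform code is present
        - run:
            name: Plan Kubernetes deployment
            command: |
              cd kubernetes
              terraform init
              terraform plan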
Speaking of pizza, last time there was a big debate about whether pineapple is okay on pizza. I'm in the no-pineapple camp; I love pineapple, but not on my pizza, it's the texture. Don't come after me, pineapple people, or you'll be on my list.

Okay, so the build didn't like me: it did not like the file, no file was found. You know what, we probably didn't do a checkout. Do you see a checkout step? Right, not yet. Did we check out the repository? No, and in CircleCI you have to check out the repository in order to use anything inside it; I knew there was a step missing. What about setup_remote_docker? No, you don't need remote Docker here. Okay, we'll add the checkout. Git commit: "check out repo." Here we go again: cd kubernetes, should be good. Well, at least it says it failed.

Out of curiosity: right now the tests run in parallel with vault-auth, but I want to ensure the tests pass before I ever deploy to Kubernetes. For sure, so the deploy job would have to require run-tests as well, and then there would be two connectors coming off vault-auth and run-tests into the deploy. Normally you chain those together just to speed things up, because the jobs run concurrently. The reason I run tests outside of building the Docker image is that if I were doing this professionally, for real as they say, I like to separate all these components into their own jobs. If your tests don't pass and the deployment job depends on them, the deploy won't run, while everything else can still proceed; it's designed to make things faster. A colleague of mine ran what we call an open-circle session at CircleCI, where we bring in the company's own engineers, and yesterday's talk was about optimizing your pipelines. Glenn was talking about how his baseline benchmark is one minute: he wants the whole pipeline to run within one minute, and to get there you need parallelism and concurrency so you can run multiple things at once, as long as a job doesn't depend on something it hasn't received yet.

Speaking of which, we need the Terraform Cloud token available. My question is whether we stored it in Vault. Yes, the Terraform Cloud token is stored in Vault, but I don't think we ever pulled it out anywhere.
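The fan-in they describe, where the deploy waits on both the Vault authentication job and the tests, is just a requires list in the workflow; job names here are illustrative:

    # Illustrative workflow wiring: deploy only after both vault-auth and run-tests succeed
    workflows:
      build-and-deploy:
        jobs:
          - run-tests
          - vault-auth
          - deploy-to-k8s:
              requires:
                - run-tests
                - vault-auth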
I don't remember doing that either. It is in there, and I think it gets rendered, but I don't think we passed it anywhere. This is why we used the Vault agent: it can template the token into a file for us. If I scroll up and look at this, there is a .terraformrc that's been generated at /root, so that's good. Which image are we using? The giant custom one, okay. I don't think we attached the workspace either. It's rendered at /root, but we could change it to /tmp, and then, as part of this step, we can export TF_CLI_CONFIG_FILE, which points Terraform at the CLI config file we want. Usually it looks for .terraformrc in your home directory, but we don't want it there, so we'll tell it to use /tmp/.terraformrc. You could probably set that as a default project environment variable in the dashboard, or through the CircleCI API, but this works. Git add, git commit: "set terraform credentials to temp." So when Terraform tries to authenticate to Terraform Cloud, it's directed to this file and parses the credential information from it; we're just telling it where the file is so it can authenticate. Let's hope the Terraform Cloud token is still valid, otherwise we're going to have to reissue it.

Wait, is Vault calling back to Terraform Cloud for that? No, it does not. I wish it did; I wish there were a Terraform Cloud secrets engine. The thought has crossed my mind to develop one. That's the thing about software: it has to evolve, and maybe it's somewhere on the roadmap, but the product hasn't gotten there yet. I need one because I have to issue my credentials constantly. You could do some middleware, I guess. If we were going even further with this: Vault has a concept similar to Terraform providers, which is what we talked about last time. The GCP secrets engine, for example, is a plugin that lets Vault authenticate and rotate tokens, and it would be nice to have a secrets engine for Terraform Cloud so I could get my token through Vault and have Vault issue it for me, rather than statically issuing the API token myself. These things get exposed over time; Terraform Cloud is fairly new, and general availability was only a few months ago.
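Concretely, the Vault agent template renders a Terraform CLI configuration along these lines (the token value is a placeholder), and the job then points the CLI at it; TF_CLI_CONFIG_FILE is the standard override for the default ~/.terraformrc location:

    # /tmp/.terraformrc as rendered by the Vault agent template (token value is a placeholder)
    credentials "app.terraform.io" {
      token = "RENDERED-BY-VAULT-AGENT"
    }

A run step (or a project-level environment variable) then sets export TF_CLI_CONFIG_FILE=/tmp/.terraformrc so every subsequent terraform command authenticates against Terraform Cloud with that token.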
There are still some things being learned in terms of working with a SaaS, but it's been interesting. Before that it was Terraform Enterprise, so it was only available to folks on the enterprise tier. So, we'll see if we can do a plan, and then we'll put the apply with auto-approve in there. You could instead wait for a manual approval and then do the terraform apply.

Well, I have this concept of smoke testing. We give a talk on increasing app confidence, and one of the ideas is that in your CI/CD pipeline there should be a stage where... hold on, I'm going to reissue my API token first; hide your screen so nothing sensitive shows. Anyway, smoke testing: the idea is that when I'm ready as a developer, when things are polished up and ready for review, the pipeline packages the app, and then there can be a step, using tags and CircleCI's filters, where you say: if I tag a branch as, say, smoke-test or test-deployment, go ahead and build the Docker image, use Terraform, infrastructure as code exactly like we're doing now, to create and provision a new cluster, deploy the application into that cluster, and then run a battery of really quick smoke tests: check for 200s, check for 404s, all kinds of quick checks, just to put a bit more confidence on that release and to know that this thing actually ran in the target environment, a GCP GKE cluster, in a Docker container, in a pod. For that I would not use a manual step, because then you can set a destroy command to tear down all of that infrastructure afterward. It's another confidence-building test inside your pipeline.

That's fair, so basically you're saying just continuously deploy. Well, it depends on your business and the regulations and compliance requirements you're operating under. Sometimes you need four sets of eyes on something before it goes out. It could be a medical application; in healthcare, if you're releasing software your customers depend on, it can be a life-threatening situation, very risky, and there may be a whole board of reviewers who need to run through it. So it depends on the industry. If you're shipping shoes, maybe it's not that bad, although maybe it matters more to some people than I think; I have a couple of friends who are serious sneakerheads.
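A minimal sketch of the kind of quick smoke test described here, assuming an earlier step has exported the service's external address into SERVICE_EXTERNAL_IP (that variable name is ours):

    #!/usr/bin/env sh
    # Fail the pipeline unless the freshly deployed app answers with HTTP 200.
    APP_URL="http://${SERVICE_EXTERNAL_IP}"

    STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$APP_URL/")
    if [ "$STATUS" -ne 200 ]; then
      echo "Smoke test failed: expected 200, got $STATUS"
      exit 1
    fi
    echo "Smoke test passed"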
They would probably be more upset about that than about a bad health report. Okay, for anybody wondering why these builds take so long: it's because I have a giant image with everything in it. It's all-purpose, and it's not optimized. There are also tweaks you can make here: this is running on the Docker medium resource class, which is what we give away in the free tier. If I were paying for this, I could bump up the resource class and get a bigger machine and bigger pipes, and things would run faster.

It worked, it planned, so let's add the apply with auto-approve. If we were not auto-approving, you would put a manual step in there. While we're at it, let's also add a destroy. So, do you want a run step at the end, something with a name like "destroy infra," a multi-line command that sleeps for three minutes and then destroys? Or is there a way we could persist just the kubeconfig and the Terraform token and then fan out to a completely separate job that destroys it after the deploy? Ah, I've got you: a completely separate job, not a step. Yes, that's what I meant. Want to call it destroy-k8s or something like that? We're using underscores or dashes to match the rest of the config. Let me just paste this in; I'm not going to type out that very long image name. We can leave gcloud in because I persisted the root kubeconfig, and we need to persist /tmp as well. So, building the new job: the Docker image, then a checkout step, then the vault-auth piece, my typing is all off today, and then we need to attach the workspace and persist to the workspace. Maybe you do want one specific place where you put these things, but we're just going to keep it simple. Is there a way for me to persist the /tmp .terraformrc too?
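Sharing those rendered files between the deploy job and the new destroy job is what CircleCI's workspace steps are for; a rough sketch, with paths chosen to match the /tmp move described above:

    # Illustrative: in the deploy job, save the rendered credentials for later jobs
        - persist_to_workspace:
            root: /tmp
            paths:
              - .terraformrc

    # ...and in the destroy job, reattach them at the same location
        - attach_workspace:
            at: /tmp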
Wait, so you're persisting the kubeconfig to root, and the .terraformrc has already been persisted. Does a later persist propagate on top of the earlier one, or does the new persist overwrite the old one? Good question; let me take a look at the docs. The reason we're going through this is: if I've already persisted the /tmp .terraformrc, is there a way to just add the root kubeconfig on top of that rather than reattaching everything? I don't use persist_to_workspace that much. The docs say the paths are globs identifying files, or a non-glob path to a directory to add to the workspace, interpreted relative to the workspace root.

Honestly, it's not that bad to just re-authenticate to GCP, and I think we need to anyway, because GKE works a little differently: it uses an OAuth token, and if we waited long enough that token's lifetime could expire. So let's do that. I'm going to pop the steps: we'll attach the existing persisted workspace, re-authenticate against GCP again, and then run terraform destroy, and I'll re-run terraform init. You could keep some of the cached Terraform bits too, but the state is saved remotely, so even running locally you would see the same state, which is actually better. So we'll re-authenticate to Vault to retrieve the service account, sorry, re-authenticate to Google to retrieve the Kubernetes config. You could store the kubeconfig in Vault, and there are a couple of different ways to set this up, but we're doing it this way for now.

Is nobody chatting? I feel like I'm logged in but I'm not. I think it's just a quiet day; someone send a message and let us know you're out there. Oh, there it is, hi Melissa. Okay, how do we set this up so we can actually access the load balancer? And a manual approval: I'll call it approve-destroy. I'm going to put that there because I don't want it to destroy quite yet; I want to actually see that it's working. So you want to click a button instead of a sleep. Exactly: I want to destroy it when I'm ready to destroy it. I think it's done with a hold; let me find that for you.
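The re-authentication they settle on boils down to regenerating the kubeconfig from GCP so kubectl and the Terraform Kubernetes provider get a fresh OAuth token; roughly, with the key path, cluster, zone, and project as placeholders:

    # Illustrative: re-authenticate and refresh cluster credentials inside the destroy job
    gcloud auth activate-service-account --key-file=/tmp/gcp-creds.json
    gcloud container clusters get-credentials example-cluster \
      --zone us-central1-a --project example-project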
It's a hold. I'm all about continuous deployment, but continuously destroying, not so much. In the context I use it in, it's just for testing, so I don't really mind; I've been spending a lot of time in the testing headspace. I'm still trying to find the hold syntax, and I want to make sure I get it right for you.

I think it's important, for those looking at this and asking whether it's the canonical pattern, to say: it is definitely one pattern and approach, and depending on the approach you take, some of these steps could differ. We're still securely passing, authenticating, and managing the secrets with Vault. The trick with the destroy piece is that "destroy" could mean you never do it, or it could mean this is a dev environment you tear down every single day, in which case you have to figure out the time-to-live for your authentication so you're able to destroy on the cadence you're looking for. This pipeline could have no destroy step at all; you could always push to production and only certain people can destroy, and that could be sufficient and representative of a production pattern. But if you're doing this in dev and expect to destroy the application multiple times a day, you can set the time-to-live on these secrets to a very small window; they'll expire, be revoked, and simply be reissued each time, rather than you waiting around until the TTL expires before you can destroy.

Did we figure it out? I've got it: we have the jobs list, and we need to add a hold, and it will obviously require the creation job. Let me put it in between these two for clarity. Wait a minute, I think that's the wrong place. Let me double-check so I get this right. So, to build a hold job... no, that doesn't smell right; I'm glad I'm looking at it now. I can't use type: approval there. I had an example a couple of weeks ago and it wasn't working properly, and I only vaguely remember it. Right, there you go: it goes in the workflow, my bad. And what's really neat is I can actually test whether this is right with circleci config validate, and it tells me: "jobs: deploy-to-k8s: expected type string, found mapping... extraneous key."
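For reference, the shape they eventually land on is an approval job declared inside the workflow, gating the destroy; job names are illustrative:

    # Illustrative: manual gate between deploying and destroying
    workflows:
      build-and-deploy:
        jobs:
          # ...vault-auth, run-tests, deploy-to-k8s as before...
          - approve-destroy:
              type: approval
              requires:
                - deploy-to-k8s
          - destroy-k8s:
              requires:
                - approve-destroy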
Wait, is the hierarchy wrong? Yes, the hierarchy is wrong; this one needs to require the deploy, because we only want to destroy after we approve the destroy, otherwise it will just destroy immediately. I think it passed now; we had accidentally indented destroy-k8s two levels in. It's good that you brought that up: there is a CLI tool that will at least validate the formatting of the config for you. And for completeness, we did say the destroy has to depend on the approval. I never use approvals myself because I've been doing a lot of smoke-test demos. You should put a smoke test in here at some point; it would go at the end of one of these jobs, where I'd otherwise use that sleep. Do you have a smoke test written for this? Not for this, but I have one somewhere I could dig out.

For anybody who is curious, the repository we've been using is mine; I can post it in the chat. There's a branch called vault you can take a look at: there's a bunch of Terraform in there and some Vault config, the Vault agent and the AppRole, so you can see it end to end. After this third part, probably the last one with Vault, we're going to do the prequels, and the prequel would honestly be actually deploying Vault; we're not going to leave the secrets exposed in a file. For those laughing about this: we have deployed this Vault instance three times, it's new every time, and no, we did not manually go in and set it all up, because it's automated. I can attest to that, because every time I go to it and it's down, Rosemary lets me know, hey, this is dead. Okay, git commit; let me make sure this is the right branch; "add destroy." It's now deploying to Kubernetes, and we'll see if it actually deploys. There's the repo link.

We had a question, and thank you for watching, we're glad you joined us midstream: is the Vault cluster running in GCP, and how did you set it all up? It's a Kubernetes cluster, and it's Vault on Kubernetes. If you're curious what I did to set it up, I have a repository that uses the Vault Helm chart; it sets up the GCP storage backend and HA mode, and it also does the auto-unseal. It's just a quick way for me to create a Kubernetes cluster with Vault on top of it; there's nothing complicated in the setup or the dependencies, it's mostly about making sure you have the right config. I wrote a vault-helm module to make it easier and to expedite it: it deploys Vault, and there's also a script that initializes it, unseals it, and sets up auto-unseal.
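This isn't the stream's module, but as a rough sketch, a Helm-based Vault install driven from Terraform can look like the following; the values beyond ha.enabled (GCS storage backend, GCP KMS auto-unseal) are passed through the chart's server configuration and are omitted here because they depend on your environment:

    # Illustrative: install the official Vault Helm chart via the Terraform helm provider
    resource "helm_release" "vault" {
      name       = "vault"
      repository = "https://helm.releases.hashicorp.com"
      chart      = "vault"

      set {
        name  = "server.ha.enabled"
        value = "true"
      }

      # The storage backend and auto-unseal stanzas would normally be supplied
      # through the chart's server HA config value; see the vault-helm chart docs.
    }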
The module itself, let me see if I can pull it up. Meanwhile: this is deploying to Kubernetes, at least that's what the standard output says. It's going to do it. Are you sure it's going to do it? It's not messing around. I think I have a co-worker watching, hello. Well, it's either spinning up or it's not; we'll see. I've got a Kubernetes cluster here and we can take a look at the workloads. These things take a few minutes... oh, ImagePullBackOff. Is it because it's using latest? "Does not have minimum availability, one updated, one unavailable." I usually go with three nodes, and I think this cluster only has one node plus the control plane. So what is this thing doing? It deployed, but maybe it couldn't retrieve latest. Let me go to Docker Hub and check; there should be a latest tag, and this is another reason why you should pin your image versions. I want to end this on a good note, so if you can stay on, let's see. Six minutes ago, ImagePullBackOff, and I don't see a latest tag: "failed to pull image: latest not found." So latest was never pushed. I think we need to push that second tag separately; it can't just be built, we have to docker tag it and push it. Let me see how I have it, because I do this all the time and I never hit this; it might be that the tag has to come first. Rosemary, do you want to make the change? Sure, let me do that real quick, unless I have a different Docker version on this machine or in this image.

Sorry, can you undo... no, you don't have to undo anything; there we go, it just wasn't showing up, the screen share is a little laggy. Take the one without the colon-tag and use the variable; that will create the latest one, and then the colon-tag build on top of it. Let me make sure it's right, because this deployment will keep running, and this is what I was talking about with the whole sleep-and-destroy: if we cancel, it's not going to stop gracefully, so we'll have to go and remove it manually anyway. If you're doing this, just be conscientious that you have to clean it up. The other option, and I think it's partly the provider, with improvements to the provider coming, is that there are some timeouts you could set as well. Okay, that should be clean; this should be working now.
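The fix being described is simply tagging and pushing the image twice, once with the build-specific tag and once as latest; a rough sketch, with the image name and tag variables as placeholders:

    # Illustrative: publish both the commit-specific tag and latest so the cluster can pull either
    docker build -t "$DOCKER_IMAGE:$TAG" .
    docker tag "$DOCKER_IMAGE:$TAG" "$DOCKER_IMAGE:latest"
    docker push "$DOCKER_IMAGE:$TAG"
    docker push "$DOCKER_IMAGE:latest"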
Let me commit this: "build latest." What you could also do is pass the Docker image tag through as a Terraform variable. If you parameterize your variables file for your Terraform, which is what we did before, you'll see that in most cases you wouldn't use latest at all.

What's for dinner for you, by the way? I was going to grill, I do a lot of grilling, but not today, it's too late. Anyway, in all the other examples I use the image name and I always build like that, so it could just be my Docker setup... wait, you know what, I know what happened with our previous problem. It's the last line: the thing we're pushing is only the tagged image, we're not pushing latest. Somebody stop that, cancel the workflow, don't do anything. Okay, it has been cancelled, which is good. Get rid of that tag piece and we're good to go, because now it will push up both: the build was right, the problem was the push. This is why we push on Fridays.

All right, as we come to the conclusion of this trilogy: if you watch the four hours prior to this and then these two hours, you get the end-to-end idea of how to securely inject secrets into your pipeline and use them to deploy to your orchestrator of choice, in this case Kubernetes. Is that fair to say? Absolutely. This all came about because I was curious how I could leverage HashiCorp Vault, or a secrets management tool generally, to protect keys and also dole them out. One of the other nice benefits of a tool like this is that you can stop saving passwords across your organization and teams in spreadsheets and other ad-hoc places, and it's also great to use these tools to rotate and generate secrets automatically. Obviously it takes a little bit of work to get going, but once you have this pattern and the boilerplate, you can reuse it for many other pipelines and standardize on it.

I'm watching the builds, we're close... no, sorry, it's locked; hold on, let me unlock it. It's just locked because it thinks there's another execution in flight. This is what Terraform does: it locks the state. You're unlocking it in Terraform Cloud, right? Yep, unlocking it in the cloud, doing it live: force unlock, there we go. The good news is I can just rerun this from failed. Oh no, it won't let me; maybe I need to refresh.
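Parameterizing the tag, as suggested above, is just another Terraform variable; the names here are ours, and the CircleCI build could feed the commit SHA into it:

    # Illustrative: deploy a specific, pinned image tag instead of latest
    variable "image_tag" {
      description = "Docker image tag to deploy"
      type        = string
    }

    # ...used in the Deployment's container block:
    #   image = "example/nodejs-circleci:${var.image_tag}"

The pipeline could then run something like terraform apply -var="image_tag=${CIRCLE_SHA1}" so every deploy is traceable to a commit rather than to a moving latest tag.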
Hold on, let's refresh it... rerun from failed, there we go. So, if multiple people are manipulating state, there's this idea of state locking in Terraform. Terraform Cloud does that for you, and you can tell it to unlock: go to force unlock and unlock it. This run is just getting rerun now, and we don't have to build the image again. In terms of making it faster, you could selectively authenticate to Vault, or use the Vault CLI to grab only the secrets you need for each stage; right now we're doing it all in one batch because we weren't sure what we needed at which stage. Maybe that's part four. Just kidding.

By the way, for those who want to see another interesting exercise, and it happens very early in the morning, around 3 a.m. Pacific, on this Twitch channel there's "HashiCorp does Minecraft." Nick and Eric, who are developer advocates at HashiCorp, have been streaming how to use the HashiCorp tools to create Minecraft constructs and such. If you're awake and feel like watching something, tune in around 3 a.m. Pacific, or watch the replays that are available on the channel, and you can watch them frolic in Minecraft.

Look, it's been added, we did it. Refresh, and we'll see if this works, because my internet is slow today. You'll see that it is up and running, which is nice, and we can also see the services. Let's try that endpoint. It's going to work. I need to copy it like this; usually I use a Terraform output to give me that. There we go: we got it. Amazing. I guess we can say that's a wrap. It's been fun learning all the nuances of CircleCI and debugging... wait, one more step, we still have to destroy it. I'm destroying it. All right, watch this thing go down. Thanks to everyone who tuned in, hopped into the chat, and encouraged us with shout-outs; we appreciate the support as we pull our hair out. You did great, Rosemary; you drove pretty much the whole time, and I appreciate your knowledge, especially around the approval process, since I hadn't used that in a while. And it's gone: destroyed in 33 seconds, all of our six collective hours of work. This was awesome, Rosemary, I really learned a lot from you, and I look forward to doing other things with you; we'll find another stream and another thing to partner on, and we'll definitely hack through it. Thanks to those of you who have been joining us diligently.
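The unlock they click through in the Terraform Cloud UI also has a CLI equivalent; the lock ID comes from the "state locked" error message and is left as a placeholder here:

    # Illustrative: release a stale state lock from the command line
    terraform force-unlock <LOCK_ID>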
For the next stream, we will potentially have someone who is very security-minded who's going to teach me JavaScript, because I am very, very terrible at it. Why would you want to do that? Just kidding. So we'll probably do a little bit of security and a little bit of JavaScript, and for those who want an introduction, it will be sort of an intro. I'm going to tune in for sure; that's going to be awesome. All right, have a great weekend, and thanks for tuning in. Thanks.
Info
Channel: HashiCorp
Views: 848
Keywords: HashiCorp
Id: 5VBcJbBl7Uw
Length: 135min 33sec (8133 seconds)
Published: Tue May 05 2020