12-Factor Apps and the HashiStack

Captions
You saw that we just renamed the talk, so the talk is going to be "12-Factor Apps." The reason I made this adjustment is that I was going to talk about gRPC, and I think it's a little early for that kind of optimization. What I want to do instead is talk about apps, and how you need to build them in order to take advantage of all the things you've been learning about over the last day and a half. I'm going to be doing it on what I call the HashiStack: I'll be using Consul, Vault, and Nomad, and the goal is to put all three of these together and see what it takes to actually write an app that can use them directly. So I won't be using consul-template, I won't be using anything that makes it easier to adapt an existing app. What I want to show is: if you buy all the way in, what does your app need to look like?

The other thing to know is that I'm running all the new versions that were announced yesterday, so if things don't work, it's probably a bug, and not my fault. There aren't going to be very many slides; I promised someone yesterday that I would have about five slides, so I'm going to stick to that.

It's funny that we're at a conference about infrastructure and scale and all those things, but most people actually have this: a single server. If you have this, raise your hand. See, that's real life. People come to me wanting to talk about scalability, and I ask what their infrastructure looks like, and it's like this; you could scale that on your iPhone. But we'll just work with it. So what do you need to do to take that and run it on top of the HashiStack?

To me, 12-factor was kind of the kickoff point for how you write your app. Some of the principles outlined in 12-factor resonate well with things like Nomad, which wants to schedule your applications across various machines, and with this idea of having your configuration be dynamic: instead of putting it in a file and pushing a static config around, your application launches and goes and grabs its configuration from the environment. Now, in 12-factor that means getting everything from environment variables, and that's not really realistic in my opinion. I think environment variables can be used to pass around the necessary tokens, say to talk to Vault, and from there grab the rest of your config. We've seen in some of the sessions around Vault that you can do things like get dynamic credentials for your database, and we're going to show some of that today.

So, 12-factor apps. Let me show you a little bit of the application we'll be using. I won't do a deep code walkthrough, but I will show you the application. It's called hashiapp; it's on GitHub if you want to look at the details, but we'll step through a little of the code so you can see some of the logic. It's written in Go, as all applications should be; if you're not writing anything in Go, that's a problem. I'm not biased.

One thing that is most important to me is log messages. For some reason people don't like to write log messages, and one of the most important log messages is that your app is actually starting. Have you ever seen a sysadmin deploy an app that prints nothing? They have no idea what's happening: is it starting, is it crashing? That first log statement does wonders to make people feel confident: "I must have done something, because your app is starting." So this is important.
The next thing I'm doing here is using Vault for my secrets. I've talked to the people who work on Vault, and there is no better way yet: until we have native integration between Vault and Nomad, you're going to have to pass in your secret in an environment variable. The ramification is that if you have a Nomad job spec, put your Vault token in there, and push it to GitHub, you will be on the news, in the bad way. So don't do that. We'll do it here for the demo, and we'll talk about how to manage it going forward. Then I have a client that goes out and connects to Vault for me so I can interact with it. My application generates JWT tokens for people that authenticate, so we can use them in subsequent requests. And just so you know, this is a monolithic application. It's okay to have monolithic applications; they've worked just fine. Adopting these tools does not require you to move to microservices first; you can still get a lot of benefit out of this particular stack.

Here I'm grabbing my secret from Vault, for the JWT, to be able to sign the tokens. I'm also going to use dynamic credentials: I go out to Vault, read from mysql/creds/hashiapp, and expect that to return a dynamic credential for me. Once I grab it, I initialize my database pool using the credentials I get back from Vault. So at startup, every application instance will have its own username and password for the database, and Vault is going to manage that for us. Then I ping the database (this is just an example app) to make sure it's up. And instead of binding to an HTTP port of my own choosing, I'm going to assume Nomad will inject the right port as I start up, so I pull my port from the environment and bind to it.

The rest is just me exposing various endpoints for my app. One of the most important is my health endpoint. I would like a way to health-check my application, so instead of writing a bunch of scripts and Nagios checks, I expose one health endpoint, and I'm responsible, inside the app, for verifying that everything is healthy, so no one has to build anything around me. This plays nicely with the Consul integration. The other thing I do is renew my credentials in the background: once my application is running, I spawn off a separate goroutine that renews my database credentials on an interval.

I also make sure my app shuts down cleanly. When Nomad is running and you ask to stop a job, your responsibility is to shut down cleanly. Most people do not do this; I look at many people's codebases and they just crash-stop, done, in the middle of database calls. Handle those things cleanly. So here I'm watching for errors, and I'm watching for signals on a channel, and I'm going to exit cleanly and finish up my last HTTP request.

So this is the app we'll be working with. If you're interested in all the details of how it's wired up, the application is on GitHub. In this talk I'll be using binaries for my workloads, so I won't be wrapping them in any other packaging format; I'm just going to use the binary I get from my build tool. That's the 12-factor app we'll be working with: self-contained, with all of its dependencies, no extra files needed, deployable as a single binary, and we're going to use Nomad to manage it for us.
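For reference, the shape of that startup flow in Go looks roughly like the following. This is a minimal sketch, not the actual hashiapp source: the secret paths (secret/hashiapp, mysql/creds/hashiapp) follow the talk, while the HASHIAPP_DB_HOST variable, the JSON field names, and the "http" port label are assumptions for illustration.

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	_ "github.com/go-sql-driver/mysql"
	vault "github.com/hashicorp/vault/api"
)

func main() {
	log.Println("starting hashiapp...") // the log line that "does wonders"

	// Vault client: DefaultConfig honors VAULT_ADDR; the token arrives via
	// the environment, since there is no native Nomad/Vault integration yet.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	client.SetToken(os.Getenv("VAULT_TOKEN"))

	// Shared JWT signing secret from the generic secret backend.
	jwtSecret, err := client.Logical().Read("secret/hashiapp")
	if err != nil {
		log.Fatal(err)
	}
	_ = jwtSecret // used by the auth handlers in the real app

	// Dynamic, per-instance database credentials.
	creds, err := client.Logical().Read("mysql/creds/hashiapp")
	if err != nil {
		log.Fatal(err)
	}
	user := creds.Data["username"].(string)
	pass := creds.Data["password"].(string)

	dsn := fmt.Sprintf("%s:%s@tcp(%s)/hashiapp", user, pass, os.Getenv("HASHIAPP_DB_HOST"))
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatal(err)
	}
	if err := db.Ping(); err != nil { // make sure the database is actually up
		log.Fatal(err)
	}

	// Renew the credential lease in the background, well inside its TTL.
	go func() {
		for {
			time.Sleep(time.Duration(creds.LeaseDuration) * time.Second / 2)
			if _, err := client.Sys().Renew(creds.LeaseID, 0); err != nil {
				log.Println("lease renewal failed:", err)
			}
		}
	}()

	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if err := db.Ping(); err != nil { // the app owns its own health logic
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	// Bind to whatever port Nomad hands us (NOMAD_PORT_<label> for a dynamic
	// port labeled "http" in the job file).
	server := &http.Server{Addr: ":" + os.Getenv("NOMAD_PORT_http"), Handler: mux}
	go func() {
		if err := server.ListenAndServe(); err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Shut down cleanly on SIGINT/SIGTERM: stop accepting new requests and
	// drain the in-flight ones before exiting.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)
	<-stop
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := server.Shutdown(ctx); err != nil {
		log.Println("shutdown error:", err)
	}
	log.Println("hashiapp stopped cleanly")
}
```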
This is what the stack looks like. For most people, you go to a conference like this, you go back to work, and you start changing things: "we're not running enough of the HashiStack in my organization," and you try to find a use case for these tools. To me, for what most people are after, which is workload management for their applications, the three things at the top (a Vault cluster, a Consul cluster, and a Nomad cluster) are good enough, whether you run microservices or a monolith. You can get a lot of value out of just those three pieces. Down here on the worker nodes, Nomad is really flexible, which is one thing I like about it: we don't have to run a particular workload format. In this case we get to run a plain static binary, which I find unique to this platform, and which is kind of nice. Our worker nodes are pretty simple; you don't need to get too complex here. The other thing I have is DNS from Consul, so I run dnsmasq on all of our servers, and we'll look at how those are set up in a moment.

This is what most people think they want to do. When you look at this stack, people ask what it's optimized for: why introduce all this complexity? Well, a lot of people do have situations that look like this. I won't use any buzzwords in this talk, I'm refraining from buzzwords: you just have applications that happen to be smaller than your monoliths, and that's as far as I'll go in describing them, plus batch workloads that you want to run on demand. When you have that kind of setup, these tools really shine and soon become a necessity. You can brute-force the other case, but this case is really hard to brute-force. You ever heard of the meat cloud? The meat cloud is where you have a bunch of humans, and you hire more of them so you can run and manage more applications. When people have automation, they just use a cloud platform; when you don't have any automation, you hire more people, and then you have the meat cloud. So now you've learned the new thing: if you have the meat cloud, don't be ashamed, but now you know what to call it, and at your next stand-up you can ask "why are we doing meat cloud?" and change it.

What we want to do is get back to reality. This is what most people have: one app. Can we get value out of this? Let's see what it takes to manage and deploy this application. Remember, it's self-contained, it gets its secrets from Vault, and it gets its configuration and service discovery from Consul. Then there's this other issue: now that we're using Nomad, and we scale out or run multiple copies of our web application, how do you find them all? How do you actually get traffic to them? Most people skip over this part, so we're going to see what it takes to actually get traffic to your workloads once they're running. That's the last slide.

All right, if you work on Nomad, raise your hand. Good, so I know where to look if I get into trouble running the Nomad RC release; I'll yell out in your direction for help. You asked me to run the RC; I'm running it.
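The dnsmasq piece mentioned above is typically a one-line forwarding rule pointing *.consul lookups at the local Consul agent's DNS endpoint; a sketch (the file path is just a common convention):

```
# /etc/dnsmasq.d/10-consul
# Forward queries for the .consul domain to the local Consul agent (DNS on 8600)
server=/consul/127.0.0.1#8600
```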
The first thing we need to do is make sure some things are healthy. We do have part of the cluster set up already, so Nomad is already bootstrapped. If you want to follow along with the setup, I have everything on GitHub; everything I did to provision my stack is there, so feel free to use it after the talk. And here's my cheat sheet in case I forget something, but I'm going to try to remember what I need to do.

We have Vault running, we have our initial Consul cluster, and Vault is unsealed. You can tell it's unsealed because, in the new version of Vault that was announced yesterday, Vault will automatically come up and register itself with Consul. This is actually nice, because now I no longer need to pass the URL of my Vault server to my app. And once you unseal Vault, it also registers the fact that it's unsealed, meaning it's eligible to be the active node, which is kind of nice; if you don't unseal a couple of them, their status will show as failing. Here we have our three-node Nomad cluster, and again Consul. So this is our initial state.

Now it's time to get our application going. What do we need to do first? I'm going to log into my control stack, one of the nodes in my three-node cluster running Vault, Consul, and Nomad, and make sure we don't have any jobs running, make sure I cleaned up after myself. So we don't have any jobs running in Nomad currently. Is this font big enough for everyone to see? Good.
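In command form, those initial health checks are roughly the stock status commands, sketched here with current CLI spellings (the 2016-era Nomad CLI hyphenated some of these, e.g. `nomad server-members`):

```sh
vault status            # sealed/unsealed, HA mode, active vs. standby
consul members          # every agent that has joined the Consul cluster
nomad server members    # the three Nomad servers and their raft state
nomad node status       # worker nodes eligible for scheduling
nomad status            # should list no jobs yet
```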
So we have a couple of job files, and the first one we'll look at is our application, hashiapp. This is the big job file, if you've never seen one; it's our starting point, because we need to describe our job in order to run it. I'm going to run it in the default datacenter, dc1. The type of this job is going to be service, so I just want to control where it runs based on the count below. I'm also going to be doing live updates in this demo, so I want to show you the update stanza: whenever I change something that causes the job to be destroyed and rolled out again, say rolling versions, this is the policy we'll use. In this case we'll kill one at a time, every ten seconds, because we don't want to drop traffic if clients are coming in while we're in the middle of an upgrade. I'm going to start with three instances online.

Here I'm using the exec driver, so this assumes I'm going to pull a binary from somewhere, or it could be on the host already, and I'm going to call this command; the command is going to be hashiapp. Next I need a few environment variables; you can see they're empty here. I need my Vault token; until there's native integration with Nomad, this is the best you're going to be able to do. Just don't check it into GitHub. The next one is the address of Vault itself. Notice what I'm doing here: I'm using vault.service.consul, because I'll have Consul running on every worker node, so I can rely on the fact that Vault auto-registers itself and I don't have to go around looking for it. The other thing I need is my external database. Now, I was a little surprised when I went to do this: I was assuming I could register any service in Consul, even one not running in my cluster on any node, but that wasn't actually easy to do. What I really wanted was a headless service, so my internal applications could just ask Consul for my external database, and that wasn't quite easy either, so I'm going to have to put the database URL in there myself.

Then I specify the artifact. I'm going to be running version 1 of my application, and I put in this checksum to make sure we grab the right binary before we run it. The next thing is our resources. A couple of notes here: this is what helps the scheduler decide where your workload will fit, and gives you some kind of SLA about what resources you can assume you can use. One big gotcha is the network bandwidth requirement, in megabits. Most people are setting this to 10 or 100, and if you do that in the 0.4 release, you're in for a classic surprise: most nodes, especially in the cloud, fingerprint themselves as having only a hundred megabits of bandwidth. If all of your apps ask for 10 or 100, how many apps are you going to get per node? One. This will surprise you in 0.4, because 0.4 is when it started to be enforced; if things were working before and you upgrade, I guarantee you will not be able to schedule more than one per machine. So think about that going forward. Here's my hack: I'm saying my app requires one megabit of bandwidth, and since it's not necessarily enforced on the server side, this is okay. What I'd like to see is for this to be optional: unless I really need to specify my bandwidth, it should just be a default value, and let the operating system deal with it.

I'm asking for a dynamic port, and then here's where I integrate with Consul: I'm saying the name of my service is going to be hashiapp, and notice the tag. This tag is going to be important, because I'm going to use a load balancer that has Consul integration, and based on this tag it will automatically create a reverse proxy with this host entry, hashiapp.com. And then here's my check: we looked at the code earlier, and I had that healthz endpoint, so this is how our health checks get called in the app. So that's our job file.
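Pulling those pieces together, a job file along these lines matches the walkthrough. This is a reconstruction rather than the exact file from the repo: the URLs, checksum, resource numbers, and port label are illustrative, and the `urlprefix-` tag follows the convention of Fabio, the Consul-aware load balancer used later in the talk.

```hcl
job "hashiapp" {
  datacenters = ["dc1"]
  type        = "service"

  update {
    stagger      = "10s"   # roll one allocation every ten seconds
    max_parallel = 1
  }

  group "web" {
    count = 3

    task "hashiapp" {
      driver = "exec"

      config {
        command = "hashiapp"
      }

      env {
        VAULT_TOKEN      = ""   # injected before submitting; never commit it
        VAULT_ADDR       = "https://vault.service.consul:8200"
        HASHIAPP_DB_HOST = ""   # external database, filled in by hand
      }

      artifact {
        source = "https://example.com/hashiapp/v1.0.0/hashiapp"
        options {
          checksum = "sha256:deadbeef..."   # verify the binary before running it
        }
      }

      resources {
        cpu    = 500
        memory = 128
        network {
          mbits = 1        # the "hack": nodes fingerprint ~100 Mbit total
          port "http" {}   # dynamic port, surfaced as NOMAD_PORT_http
        }
      }

      service {
        name = "hashiapp"
        port = "http"
        tags = ["urlprefix-hashiapp.com/"]   # consumed by the LB to build a reverse proxy
        check {
          type     = "http"
          path     = "/healthz"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```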
Before I can run it, I need my token, and before I get my token, I need to set a policy describing what that token can do. So here's Vault, and here's our application policy: app instances in the hashiapp category should be able to read their secret (this is where my JWT signing secret is stored, under secret/hashiapp), and they should be able to read a dynamic username and password from mysql/creds/hashiapp. And since it's your responsibility in the app to renew your dynamic credentials, you need to make sure you have the ability to hit the renew endpoint under sys. That's a gotcha: if you don't put that in your policy, you won't be able to refresh your lease. Now we submit this policy to Vault. I'll run vault status first just to make sure it's up and running; looks good, so we submit. Bash history for the win; you thought I was going to remember that?

With the policy in place, I can generate a token for the application, and it will have the ability to do everything specified in that policy. So I do a token-create, ask the token to use the policy I just pushed, and give it the display name hashiapp. I grab this token; I think this is the only time you see it, so if you forget it you'll be generating a new one. I take the token and stuff it into my job file. Again, don't check your tokens into GitHub; I should do a scan on GitHub to see how many people have done this. Maybe you store jobs like this, or maybe you generate them on the fly with some kind of wrapper that injects the token that's needed. Either way, we stick the token in here, and the app now has the ability to talk to Vault and grab its other secrets.

Next, my external database. Here are all my nodes in GCE, like the best cloud platform ever; plug, gotta do that. And here's my hosted database. Most people get really excited about running a database inside a cluster manager like Nomad. This is going to make you lose your job, guaranteed. It's not only because Nomad currently doesn't have support for things like volumes: network-attached storage and volume management is a really, really hard problem. It's easy to bootstrap; you run a little script, it's all up, you show your friends. But what happens when the volume gets detached? Who puts it back? Or what happens if the node dies and you attach the volume somewhere else, without making sure that when that node comes back it doesn't re-attach the volume? You will lose all your data. If you're starting out with something like Nomad and you're using MySQL or Postgres, those particular systems aren't really great at being moved around; even with replication set up, they were designed for a world where the machines don't get touched. Put them in a cluster manager that starts moving things around and you're going to lose your data. If you remember nothing else from this talk, remember this part: do not get excited and spend a bunch of time trying to run databases inside these cluster managers. Maybe for dev and QA, where you just need something that isn't fault-tolerant, you bring up a stack and tear it down; that makes sense. For production: data loss. So I'm going to run it outside of my cluster, and I want to grab this IP address. You ask, why spend so much time on that? Because I see you people in IRC going "I lost my data, Nomad sucks." Why'd you do that? Don't do that. So I drop the address in here, with port 3306, and now I know where my database is.

Our app is almost ready to go: we have our token, we have our policy. One thing I did before this talk was mount the MySQL secret backend. If you look at what I had to do here (and I'm hiding the credentials; erase those before you all hose my database), I gave Vault the ability to create users in my database. The next thing is to tell it how to create users: this command says, hey, whenever someone needs credentials for this role, here's the SQL template to use when you log into the database to create that user and grant privileges to them. I hit enter and make sure everything is good. Then, just to verify I can actually get credentials, I read from mysql/creds/hashiapp, and we see it works. That command forces Vault to go out to my database, connect to it, and create some credentials. With this username and password, if you're fast enough, you can connect to my database. But ha, you're not that fast.
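End to end, the Vault side of this looks roughly like the following sketch, using the 0.6-era CLI spellings (`policy-write`, `token-create`, `mount`; newer Vault uses `vault policy write`, `vault token create`, and `vault secrets enable`). The policy paths, connection string, and SQL are illustrative, including the `sys/renew` grant for lease renewal.

```sh
# hashiapp-policy.hcl contains roughly:
#   path "secret/hashiapp"      { policy = "read" }
#   path "mysql/creds/hashiapp" { policy = "read" }
#   path "sys/renew/*"          { policy = "write" }   # allow lease renewal
vault policy-write hashiapp hashiapp-policy.hcl

# Mint a token bound to that policy; this is what goes in the job's env.
vault token-create -policy="hashiapp" -display-name="hashiapp"

# One-time setup: mount the MySQL backend and point it at the database.
vault mount mysql
vault write mysql/config/connection \
  connection_url="vaultadmin:REDACTED@tcp(10.0.0.5:3306)/"

# Role template: how Vault creates a user when someone reads creds.
vault write mysql/roles/hashiapp \
  sql="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT ALL ON hashiapp.* TO '{{name}}'@'%';"

# Smoke test: forces Vault to create and return a fresh user/password.
vault read mysql/creds/hashiapp
```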
We're good to go; our dynamic environment is set up. The next thing I need is Consul on all the worker nodes, and we'll do that with a system job. Let's look at the jobs really quick. This is my Consul job, and it basically says: run Consul on every machine in the cluster (I'll sketch the file after this). It's a really handy way of doing it. The type here is system, and what I want is the Consul agent on every machine in the cluster, so if I add new machines, I don't have to worry about adding the agent; this will automatically fan out. I'm just going to pull the Consul binary, check its SHA, put it on disk, and run it, and it will connect out to and join my existing Consul cluster. So we'll run this on every node.

One thing we can do now is nomad plan; let's show that off. Here's the plan for the Consul job. That's fantastic, really impressive; you show your boss, "we have plan." You can see it's going to create five instances, because I currently have five worker nodes in my Nomad cluster; we can confirm with node status. I remember the commands; I look like I know what I'm doing, don't I? Feels good. Now that we have the plan, and the dry run says this will totally work, let's run it: nomad run on the consul job. We get an allocation for each instance of the job, and at this point, if the Internet on my cloud platform is fast enough, we should have it running. nomad status consul: we see the job is now running on every machine. At this point I can join all of those agents to my cluster; I know the names of my workers ahead of time, so let's do that. Now if I do consul members, we see all of those machines attached. So we can do service discovery locally on every Nomad worker in the cluster, and Nomad knows how to use the local Consul instance to automatically register our services when we specify a service in our job definition. We're making great progress.
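A sketch of what such a system job can look like; again a reconstruction, with the download URL, checksum, join target, and resource numbers as placeholders:

```hcl
job "consul" {
  datacenters = ["dc1"]
  type        = "system"   # one instance on every eligible node

  group "consul" {
    task "agent" {
      driver = "exec"

      config {
        command = "consul"
        args    = ["agent", "-data-dir=/tmp/consul", "-retry-join=consul0.example.internal"]
      }

      artifact {
        source = "https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip"
        options {
          checksum = "sha256:abad1dea..."   # verify before running
        }
      }

      resources {
        cpu    = 100
        memory = 64
      }
    }
  }
}
```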
Now the next thing is to deploy the app itself; everything is in place for this to actually work. We've already seen the job, so let's plan it: nomad plan on the hashiapp job. It tells me it's going to make three of these, none of them existing, so it will create them all. Let's go ahead and run it, starting with three instances: nomad run jobs/hashiapp. At this point, if everything is working, these should start running on some of the nodes in my cluster; there are only three of them, so the scheduler will place them where they fit best. nomad status, and a little more detail on this job: we see we have three of them running.

Now, my app could be running and still have issues, say with its database connections. So how do you find out if you have issues in a Nomad cluster? You look at the logs. In the new release of Nomad they made it a lot easier to get at the logs, and you can get logs for a random instance inside a particular job; since we have three of them, I don't necessarily need to look up one of the alloc IDs, I can just use any of them. I can use the fs command, and it will dynamically grab, in this case, one of the alloc IDs from the hashiapp job and go look at the alloc directory. Every time you create a job in Nomad, you end up with an alloc directory where your binaries are put, and also your logs. If I run this command, I see what my stdout logs look like: the health checks are flowing through, so I think everything is healthy. But I can also look at the stderr logs; most logging libraries actually log to stderr, not stdout, which is why you need both streams. And there you see my app starting: I was able to grab my shared JWT secret from Vault, I got the dynamic database credentials, I initialized my database pool, and I bound to the port Nomad provided me. And now I have that background thread renewing my credentials at some interval shorter than the lease duration. So my app is running, and I'm feeling a little confident here.
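The whole deploy-and-inspect loop is a handful of commands; roughly, in today's CLI spellings (the 0.4-era equivalent browsed the alloc directory with `nomad fs`):

```sh
nomad plan jobs/hashiapp.hcl    # dry run: "3 to create"
nomad run  jobs/hashiapp.hcl    # submit; returns an evaluation ID
nomad status hashiapp           # allocation summary for the job

# Tail logs from a random allocation of the job: stdout, then stderr.
nomad alloc logs -job hashiapp
nomad alloc logs -stderr -job hashiapp
```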
Now you're ready to get traffic. But wait, these are private IP addresses. You could hit this IP from one of the instances inside the cluster, but it does nothing for anyone outside. If I curl from inside, we get the hello message, so that works; but who knows what the random port is for all the other instances? We can't publish these addresses when things keep getting moved around. So how do we fix this? A couple of days ago I learned about a load balancer called Fabio; how many people have ever used it? This particular load balancer has native integration with Consul, and based on tags it will create a reverse proxy with all the settings that you need. Zero config. I tried it and it actually works; there are few things that work the first time you try them, and this is one of them. What I want to do is run this job on every instance in the cluster. The reason to run it on every node, instead of on a separate few machines, is that I get a little bit of high availability, and I can put all of the nodes behind a load balancer and use my cloud provider, or, if you run your own datacenter, your top-level load balancer, to just send traffic to the port Fabio is listening on. So I roll this out to every machine in the cluster, and now we can do load balancing this way. nomad run; forget the plan thing, I'm sure it's going to work.

At this point we should have Fabio running on every machine in the cluster; a quick status check says it's running. Then let's check our UI. I have the Consul UI here; don't think about hitting it, I'm using an SSH tunnel because I did this before and people started removing services, so it's proxied on my localhost. Put your laptops away. You can see all of my services are healthy now: Fabio is healthy, hashiapp is healthy, all the instances are running. So now we should be able to use our reverse proxy. We come over to the Fabio dashboard and see our three instances in place, and I have a top-level IP address from my upstream load balancer. Again, if you look at the infrastructure: we have a load balancer running on every node in the cluster, we have our app running on a subset of those nodes with dynamic ports, registering themselves in Consul, and Fabio is reading that. Based on the tag, I get this host entry, so if I come in with curl and set the Host header to hashiapp.com, I get routed to this backend; you can route based on paths, or based on the host attribute. Now if I go over to my networking stack and look at my load balancer, you'll see I have this load balancer called hashistack in place, and what I've done is put every one of my Nomad workers in as backends, sending traffic to port 9999, the port my load balancer listens on. It just passes the traffic through, and I can also use this port to interact with the machines. Let's see if this works: all we need is to set the Host header (caps lock off) to hashiapp.com. That works. So what we'll do now is put this in a for loop in the background, and instead of just "hello," let's pull the version endpoint so we can see which version is serving; the commands are sketched below.

So this is great; we have the full stack in place. The next thing I want to show is how to scale the application, and that's pretty easy, because the whole system is declarative. I go back to my job spec, under hashiapp, and increase the count. How many should we run? Someone say a number. Three hundred? No. Three hundred divided by ten, minus fifteen, plus some other stuff gets us to eight; okay, we're running eight of these. We do nomad run, and it says a few of them need to be modified and five more need to be created. Actually, let's do this again and see what plan has to say about it: we'll put this back from eight to three and run it to get back to three; these things happen pretty fast. Then bump it to eight again (am I typing faster than the Internet wants to go right now?) and look at what plan says. nomad plan on the hashiapp job tells us: you have three currently running, you increased the count from three to eight, so I need to create five. So we know it will actually work; let's run it. These will spin up, and then we watch whether they come up and get registered, because our load balancer only picks up nodes that report healthy in Consul; this is why you need a health check in place. It tells me two of them are failing; they're probably still initializing and haven't satisfied the health check yet, so they don't get added to my load balancer. Once they start to succeed, and there, they're all passing now, we see eight in our backend, and we didn't have to change any of our upstream clients. And let's look at what's happening to our network traffic while we scale up and down: we're not dropping any packets on the floor. That's kind of important; dropping packets is bad.
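Assuming a worker's address in `$WORKER`, the traffic test and the scale-up look something like this (Fabio's default HTTP listener is port 9999):

```sh
# Route through Fabio by Host header; any worker node will do.
curl -H "Host: hashiapp.com" "http://$WORKER:9999/"

# Keep traffic flowing in the background while we scale and upgrade.
while true; do
  curl -s -H "Host: hashiapp.com" "http://$WORKER:9999/version"
  sleep 1
done &

# Scaling is declarative: edit `count = 3` to `count = 8` in the job
# file, then review and apply.
nomad plan jobs/hashiapp.hcl   # reports "+5 to create"
nomad run  jobs/hashiapp.hcl
```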
Now the last thing: scaling out is easy, but what about upgrades? We need to roll the cluster to a new version, so we're going to modify our job again. What I need to do is manipulate the artifact. I know I want to go to v2, so we point the source at the v2 binary, and I need the SHA for that particular binary. So we change the binary; watch these vim skills, like "whoo," you ever see someone do that for the first time? Let me grab the new SHA really quick. If you're going to use the exec driver, it's probably a good idea to store the SHA alongside the binary so you can actually find it; ask me why later. Ideally, in the storage where I keep all of my binaries, I'm storing the SHA along with each one, so whenever I have to update a job I can pull the SHA file down. Take your time, Internet. All right, let's put the new checksum in place.
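The only edits for the rolling upgrade are in the artifact stanza; with the update policy from earlier left unchanged, a sketch of the v2 change (values illustrative):

```hcl
# update { stagger = "10s"  max_parallel = 1 }   # unchanged from before

artifact {
  source = "https://example.com/hashiapp/v2.0.0/hashiapp"   # was v1.0.0
  options {
    checksum = "sha256:cafef00d..."   # the new binary's SHA
  }
}
```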
Given those two changes, we run our plan again, and Nomad says: okay, you've changed a bunch of things, and some of the things you changed require instances to be destroyed and updated; the artifact falls in that category. So based on our update policy, we're going to roll through this, however many you want in parallel; in our case, one every ten seconds, until the entire update is rolled out to every instance in the cluster. That's what plan is reporting: you have a new SHA, a new binary, and we're going to have to do a rolling update. Are you sure? Yes. So we run it, and you can see from the output that it does these one at a time: something gets destroyed, and it tells you when the next evaluation will occur for this particular job; you can see the next evaluation happens in ten seconds, which meets our policy. While that's rolling, only healthy instances get added to the backend. You see 2.0 popping up there? There's one instance of 2.0 already. It will continue to destroy and recreate, and this is why you need clean shutdowns in your application: Nomad can't be responsible for that. You've asked it to destroy instances and add new ones, so your job is to shut down cleanly so we don't drop any packets along the way. Since we're doing all of those things in our application, Nomad can safely roll out our updates without us being paranoid about users seeing 500s or other errors. We'll continue rolling until we're complete; we have a couple more allocations and evaluations to go. Look at the backends: you'll notice in the load balancer that for a moment you may have more than eight, or fewer, just because of where you are in the loop, but eventually you get back to full capacity. Once we're back to eight, we can be pretty sure we're close to complete, and when we pop over here, you can see we have 2.0 across the whole cluster. And that's how you use the HashiStack to manage 12-factor applications. Thank you.

Questions? First question: Nomad versus Kubernetes. Whoa, those are two different things, seriously. I'm going to be hacking with one of the Nomad engineers later on, possibly hooking Nomad up to Kubernetes. So what does that mean; what is Nomad in this case? Nomad is a really awesome scheduler; it's already proven it can schedule all kinds of workloads. Kubernetes is more of a full platform: it's not only a scheduler, it has all these other things, and it's like a foundation for building a platform; this is why you see things like OpenShift, Deis, and others built on top of it. The HashiStack is more like a collection of Lego bricks that work really well together. Matter of fact, Vault would be a great backend for Kubernetes secrets, and Nomad would be a great scheduler for the Kubernetes platform, or for any other platform; you could imagine a cloud provider in the future taking Nomad and using it to schedule the VMs that you click on in your cloud console. So I think they're just two different things, and this is what the HashiCorp team is really good at: they build these pieces that you can use; Consul falls into that category. Those are things you can't really compare. When you saw me put this together, I had to compose the stack that I actually wanted, and for a lot of people Nomad itself is the missing piece; they already have something for the other parts. So I think it's actually a great place to be, and we need something in that category. Hopefully that answers your question.

Any other questions? There's one here: "Are you failing the health check before you exit, to minimize the window, so that Consul deregisters the instance first? As you take one instance out of rotation, if you scroll down to the bottom where you've got the blocking close at line 103, that's obviously going to close existing HTTP requests, but what stops the load balancer from sending new HTTP requests to that instance while it's in a shutdown state?" So what happens here is that the library I'm using will reject new HTTP requests once shutdown starts: if I have one in flight and I'm waiting to respond, I respond 200; the next request that comes in, if I'm in the middle of shutdown, gets rejected. Also, when Nomad is stopping me, it knows what the service is, and through its integration it can pull me out of Consul as well, so I don't necessarily need to do that here myself. One thing I should do here, though: if I don't need that secret anymore on shutdown, tell Vault, hey, go ahead and get rid of those credentials. And I could also, if I wanted, fail my health check on purpose and say I'm no longer healthy. We don't necessarily need to handle every one of those race conditions, but I do like the idea of throwing away your token and telling Vault it's okay to get rid of the credentials. Cool, thanks; good question.
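That cleanup can be a couple of extra calls with the Vault API; a sketch, reusing the `client` and `creds` values from the earlier Go sketch:

```go
// On shutdown, after draining HTTP requests: give the dynamic database
// credentials back, then revoke the app's own Vault token.
if err := client.Sys().Revoke(creds.LeaseID); err != nil {
	log.Println("lease revoke failed:", err)
}
if err := client.Auth().Token().RevokeSelf(""); err != nil {
	log.Println("token revoke failed:", err)
}
```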
I think I have time for one more question. "You put Fabio and Consul in as Nomad jobs, and those meta-services take up network resources, where you said earlier you have to provide a value; wouldn't that prevent the jobs you actually want from taking those resources? Is that a sensible thing for Nomad to be doing?" So the question was about using Nomad to schedule some of the cluster dependencies, like the Consul agent itself, and I think this is an important thing to do. You want Nomad to be aware of all the jobs running in the cluster; the ideal state is that anything using any resources, the cluster manager should know about. One thing I didn't show is that there are new stats you can get: in Nomad itself you can see a bunch of stats about what a node is actually doing. Say nomad status, grab the Consul job really quick, and then, since I have to do it on the allocation ID, run alloc-status on one of these IDs, and you can actually see what's being used: all the IPs, ports, CPU utilization, memory utilization. Ideally this helps Nomad make better decisions when it's time to kick things off of a server, and you don't want it to have any blind spots. Having this stats implementation lets it account for other services that are running that it didn't schedule, but ideally I like the idea of having it schedule everything, so it knows whether things will fit; relying on something installed through the back door might be a bit fragile. Also, using a system job for the Consul agent makes it easy to grow the cluster, because if a new node shows up, Nomad will make sure it's running all the system jobs that need to be there. Great question. I think I'm out of time; I'll be hanging around in the back, so if you have any more questions or want to see this demo again, I'll be happy to show you. Thank you so much.
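For reference, the inspection sequence from that last answer, in today's CLI spellings (the 2016 release hyphenated these as `nomad node-status` and `nomad alloc-status`):

```sh
nomad status consul             # find the job and its allocation IDs
nomad alloc status <alloc-id>   # per-allocation view: addresses, ports,
                                # CPU and memory utilization
```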
Info
Channel: HashiCorp
Views: 20,200
Rating: 4.972414 out of 5
Id: gf43TcWjBrE
Length: 41min 59sec (2519 seconds)
Published: Mon Jun 27 2016