From Zero to WOW! with Nomad

Video Statistics and Information

Captions
Hello everyone, and thank you for joining us today for a HashiCorp solutions engineering hangout. I'm going to go ahead and record it so that you can all get the recording afterwards. Today we're going to talk about HashiCorp Nomad. Nomad is an easy-to-use and flexible cluster scheduler that enables organizations to automate the deployment of any application, on any infrastructure, at any scale, handling anything from containers to VMs. Russ will show you how to get started with Nomad right from the very beginning, from what it is and how it works through downloading it and getting it running on your computer within minutes. I also want to note that this hangout is being recorded; the recording will be made available after post-processing, usually within a day or two, and I'll email it out to all of you. The demo will be about 20 minutes, and then we'll allow up to 30 minutes afterward for questions, so please go ahead and submit your questions through the portal and we'll get to them at the end. With that, let's go ahead and get started. Take it away, Russ.

Awesome, thank you very much, Amanda. Hi everybody, greetings from a dark and gloomy London. Unfortunately it's been raining quite badly today, but that's just UK weather. Today I'm going to go from zero to wow with Nomad. One thing I will guarantee is that we're going to start at zero; hopefully we'll get you close to wow, or at least to "that's interesting". We've got a lot to get through, so let's get started.

Here's the agenda. I'll do a brief introduction, and then I'm going to talk about what Nomad is. Hopefully there are lots of people on the call who have at least heard of Nomad, or maybe know a little bit about it already; perhaps you're using some of the other HashiCorp tools and you want to know what Nomad is and what it can bring you in addition to Terraform, Vault and Consul. I'm going to go through some important nomenclature: there are terms I'll be using throughout this presentation, so I thought I'd spell those out early so we all know what we're talking about. I'm going to do a very simple architecture overview; I'm not going to spend a huge amount of time on the full architecture and all the various ports and pieces you would need, because I really want to get to step five as soon as possible, which is starting simple. If you've never used Nomad before and you want to understand how to literally get started from nothing, this hopefully is the talk for you. Then I'm going to show how you can run your very first job, then how to scale things up, then we'll talk a little about things you can try next, and then we'll have time at the end for Q&A, as Amanda said.

Very quickly, who am I? I'm Russ Parsloe, one of the solutions engineers at HashiCorp. I've been at HashiCorp for just coming up to two years; the start of next year will mark my two-year anniversary. Even though I've been here for two years, I still feel fairly new with Nomad. There are people on my team here in London who really are the experts, my go-to people, so I figured it's about time I learned a little bit about Nomad as well, and I thought what better way of doing that than doing it all together with everybody on a webinar. So we're going to go through some of the basics together.
I'll go through the steps that I first went through recently, to get you started on your own Nomad journey.

So the first question is: what is Nomad? Nomad is a modern, lightweight workload scheduler. The question then is, what is a workload scheduler? Back when I was a server administrator, I was the workload scheduler. Application teams would raise tickets, they would throw their code over the fence to my team, and we would arrange compute, storage and networking. The obvious problem was that it wasn't a particularly fast or scalable solution. Also, we weren't fortunate enough to be able to buy additional equipment every time we had a new application to run, so this usually meant we needed to find a suitable home for the application on the existing infrastructure. Thankfully, skipping forward to today, we have Nomad, which does all of that hard work for us: we create a giant pool of compute, network and storage, and we let Nomad place workloads where it's most efficient.

Nomad supports containers, traditional VMs and static binaries, amongst other things, because in practice it's fairly rare to find an organization that has moved all of its workloads to containers. We still live in a world with different types of technology to manage, so Nomad supports things like Docker, obviously, but also things like Java and binaries running natively on the host operating system. It uses simple job config files (we'll see some examples in the demonstration in a little while) that allow people to understand and deploy workloads onto Nomad. I've already touched on the idea of automatic, efficient workload placement: instead of me having to find a server with capacity to run a particular application, which used to be quite difficult, Nomad performs an evaluation, a calculation, and works out where that workload should run.

It also supports a number of update strategies. When we've deployed a workload, at some point we'll have to perform an upgrade, and there are different upgrade strategies supported inside Nomad. There are rolling upgrades, where we upgrade a set number of instances at a time: maybe we update two instances of an application, wait for them to become healthy, then do two more, and so on until the entire application has been rolled out. It supports canary deployments, where we leave the existing application alone, bring up one or two instances of the new version, do some testing, and if all looks good we perform a promotion and it updates the rest of the application. And some people use Nomad to do blue/green deployments, where you leave your existing application where it is, bring up an entire fresh copy of the application with all its components, and if everything is OK you flip over to the new one and take the old one down. There's fully automated failure recovery as well: if the workloads I'm running are running perfectly, that's good news, but if there's any kind of issue, if they stop or crash for whatever reason, Nomad will restart them for you.

So why Nomad, I guess, is the next question.
We know what Nomad is, but why would you use it? If you've got a mixed environment, which pretty much everybody has, a mixture of technologies you need to support, Nomad is a good fit. If you need to move your legacy applications to the cloud, maybe you're not quite ready to put everything into containers right now but you like the flexibility and elasticity the cloud provides, Nomad really is the bridge between those legacy applications and modern ways of working. You get maximum efficiency from your hardware: typically servers operate at around two percent utilization (obviously this can vary, I appreciate that), but maybe with Nomad we can get to twenty percent utilization, which probably doesn't sound like a massive jump, but if you do the maths it means that if you previously needed 100 servers to run your workloads, with Nomad you'd only need ten. Nomad automatically works out where best to place your workload, and you can define additional parameters or constraints: maybe it needs particular hardware, certain amounts of CPU and memory, or a certain operating system. Maybe you've got a Windows application that doesn't run on Linux, so you put a constraint in to say, hey, when you schedule this workload, make sure you put it onto one of our Windows machines, because that's where it needs to run. Self-service provisioning is really important as well. None of us wants to live in a world where we raise a ticket, then wait, then get to the front of the queue, then there's some kind of issue and it gets thrown back to us and we go through the whole process again. The idea is that if we can enable self-service provisioning with Nomad, we put control back into the hands of the people consuming the resources, and we abstract away the complexity of operating the underlying systems.

But seriously, why Nomad? I've talked a lot about scheduling, and maybe you're convinced that schedulers are the way forward, but why would Nomad be your scheduler of choice? For me, one of the massive benefits of Nomad is that it's simpler. It really is easy to use; hopefully we'll get to that wow stage later on and you'll see how easy it is to get started. It's a single binary, and the mental model for Nomad is very simple: it doesn't have half a dozen or eight different components you have to manage, update and maintain, it's literally just one binary, so upgrading and operating it is very simple. Then there's the flexibility we've already talked about: a mixed environment of containers and VMs, service jobs (the long-lived jobs) or batch jobs where you need short bursts of processing, maybe payroll runs or some number crunching; Nomad is really good for those things. It also runs across Linux and Windows, which is where most people run it; it does actually work on macOS as well, which is good for me today because I'm going to be running it on my MacBook, although I don't know too many people who run macOS in their server environment. Scalability and performance is a real key differentiator of Nomad as well. You might not need the very highest levels of performance, but we wanted to push Nomad to see what it could do.
A couple of years ago we ran the million container challenge, where we spun up one million containers in less than five minutes. Here's a graph of the containers being scheduled: it took two hundred and sixty-seven seconds to schedule a million containers, and there's a link where you can find out more about the challenge. I did hear one really cool story about this. We have one customer that does high-performance computing, and they looked at this and said: wow, you can spin up a million containers in five minutes, that's really cute, but we want to use Nomad to do forty million containers, can we do that? Thankfully, we can. Nomad has exceptional performance and it scales very well, and we operate on the principle that if we can scale up that high, we can scale down too, so wherever you are on that sliding scale, Nomad has you covered.

Cool, let's get a little more into things now. I wanted to talk about some of the key terms we use inside Nomad. The first is a job. A job is a specification, a file you provide that declares the workload you want to run on Nomad. All of the job files I'll show you in the demonstration have "job" right at the top; that's the highest level of configuration, and within it there are various components that we specify. Inside a job file there is only one job. Inside that job description there are a number of task groups. A task group contains one or more tasks that all need to run together on the same client node; they need to be co-located, running next to each other. Perhaps you're running a web application and there's a logging component that needs to run alongside it on the same system: we put those things inside one group. We can have multiple groups; everything inside a group runs on the same node, while different groups could potentially run on different nodes. In my case I'm going to run everything on my MacBook, so everything will be on a single node anyway. Inside the task group we have one or more tasks, which are the smallest unit of work in Nomad. Inside a task we specify the driver we want to use: this is where we say we want Docker because we're running containers, or the Java driver because we have a Java application, or QEMU (however you want to pronounce it) because we have a virtual machine. There are parameters we provide inside the task to say which container image or virtual machine image we actually want to run.

Then there are allocations. An allocation is the mapping between a task group (inside our job we have a task group with a number of tasks in it) and a client node. If I have a task group that contains ten different tasks, for example, they need to run somewhere; Nomad works out where it's going to put those ten tasks, decides that the machine over there is a perfect fit, and drops that allocation onto that machine. It does that by performing an evaluation.
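To make that hierarchy concrete before we go on, here is a minimal sketch of a job with one group containing two co-located tasks, a web application and a logging sidecar, along the lines of the example I just described. All the names and images here are made up purely for illustration:

```hcl
job "example" {                 # one job per job file
  datacenters = ["dc1"]

  group "web" {                 # everything in a group lands on the same client node
    count = 1

    task "app" {                # a task is the smallest unit of work
      driver = "docker"         # the driver: docker, java, qemu, raw_exec, ...
      config {
        image = "example/web-app:1.0"
      }
    }

    task "log-shipper" {        # co-located with "app" because it is in the same group
      driver = "docker"
      config {
        image = "example/log-shipper:1.0"
      }
    }
  }
}
```

When Nomad places this group onto a client node, that placement is the allocation.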
You don't need to worry too much about the evaluation side of things unless you really want to dig deep into Nomad's internals, but an evaluation is basically Nomad deciding where it's going to place the allocation based on a number of different conditions. Then there's bin packing. Nomad works on the principle that it wants to be as efficient as possible with your resources; think of it like Tetris. You've got all kinds of tasks that need different amounts of CPU and memory, different hardware and drivers and so on, and Nomad will try to pack as many of those tasks onto a worker as possible.

Cool, so let's dive into the architecture just a little bit. There are basically two main types of agent inside Nomad, even though it's actually the same agent, the same binary, running in one of two different modes. The easier one is the client: that's where your Docker containers or virtual machines actually run, so it runs tasks. The clients are managed by the servers. A number of servers form a cluster together so it's highly available; one of them operates as the leader, and it replicates certain information to its followers. Inside a cluster (meaning a number of servers plus a number of clients) the number of servers is always odd. Nomad uses the Raft consensus protocol, so you always need an odd number, typically three or five. It doesn't make sense to go to seven or higher because that actually reduces performance, and obviously a single server doesn't make sense either, so what you'll see me do in the demo today is not a highly available setup and I don't recommend it for real use: it should be three or five servers. In terms of the number of clients, anywhere between 1 and 10,000, because we need somewhere to actually run our workloads; it scales into the thousands.

Awesome. Now this is the fun bit: starting simple, really from the very beginning, from zero. It's unlikely that we need to schedule one million containers today, and my MacBook would certainly struggle to do it, so we're going to start simply and run Nomad locally. I don't want to have to find machines to try this on and set up networking and all that kind of stuff; if I want to try this right now, today, as quickly as possible, I'll run it locally on my machine. The "well, it worked on my machine" caveat is that you may be running a different operating system from me, or even the latest version of macOS, and things might behave slightly differently from the way my machine works, so by all means check the documentation for the latest information about any differences for your operating system. I have cheated ever so slightly in that I already have Docker installed on my MacBook, simply because we don't have time to download what I think is a several-gigabyte installation these days. So Docker is already running on my Mac, but that's the only thing I have pre-installed.
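For contrast with the single-node dev setup I'm about to run, a full cluster runs that same binary against one of two configs, roughly along these lines. This is a sketch with made-up paths and addresses; check the agent configuration docs for your Nomad version:

```hcl
# server.hcl - run on 3 or 5 machines to form a highly available Raft cluster
data_dir = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 3   # how many servers to expect before electing a leader
}
```

```hcl
# client.hcl - run on anywhere from 1 to ~10,000 machines that execute the workloads
data_dir = "/opt/nomad/data"

client {
  enabled = true
  servers = ["10.0.0.10:4647"]   # address of one of the servers (made up here); 4647 is Nomad's RPC port
}
```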
The download and installation process is really simple and straightforward. You visit nomadproject.io, there's a download button, you download it for Windows, Mac or Linux, you unzip it, and you're basically done. It's a single file, a single binary. I recommend putting it somewhere on your system that's part of your PATH, so that instead of having to type the full path to Nomad to start it, you can just type nomad and then a command.

With that said, let me show you how easy it is to download and install. If I jump over to the screen here and go to nomadproject.io, and bump the size up ever so slightly, this is the Nomad website. There's lots of information on here: an intro and getting started, a guides section (once you start to become more familiar with Nomad and want to try the various deployment strategies and so on, the guides section is great for that), and the documentation, which covers how everything works, including the API reference; everything you can do inside Nomad you can also drive through the API if you wish. But the thing we're most interested in here is the download. You hit download, find your operating system and architecture (Windows 32 or 64-bit; for macOS everything is 64-bit these days), click it, and it downloads a zip. I'm not going to do that because, as I said, I cheated a little and already did it, but literally you download the zip, unzip it, and there's one file inside.

So let's take a look at that. I've got Nomad installed on the PATH of my system, and literally this is it: one file, 93 megabytes for this particular version of Nomad, and that's it, there are no other components to install. Download the binary, put it in your path, done. If I run nomad --help, for example, it lists all of the commands that are available. If I want more information about one of them, say nomad agent --help, it gives me loads more detail on that particular command; as you can see, there are lots of different parameters you can pass in. So that's how you download and install it: download the zip, unzip it.

The next thing you want to do is actually run Nomad, and we've made this simple as well. There's something called dev mode, so we're going to start Nomad in developer mode. Dev mode is really good for testing: it runs as both a server and a client, so the thing that's managing the workloads is also the thing that runs the workloads. That's not something I recommend in production; you should separate those roles, and ideally you'd have a highly available cluster. The other really key thing is that data is not persisted: everything is in memory when I'm using dev mode, so as soon as I stop the process I lose all of my data. Don't try this, stop the process, restart it, and then wonder where all your data went. It will be gone. It's really easy to start: you just run nomad agent -dev. So I go back to the command line, clear it, and run nomad agent -dev.
It starts the Nomad agent; give it a second to get going. Basically what it does is fingerprint your system: it finds out all the capabilities of your system, how much memory and CPU you have, and it tries to work out which drivers are available. We've got a little annoyance here on macOS: when Nomad tries to do its detection for Java, macOS helpfully pops up a dialog box. You click OK and it goes away, but Nomad is constantly re-fingerprinting the system, so the dialog keeps coming back, which is really annoying; it also steals window focus, which is lovely when you're in the middle of a webinar. So let me show you how to fix that. If you hit something like this on a system you're running, you can disable specific drivers; I'm going to disable the Java driver. There's a config snippet for this in the documentation; this piece of config is what I'm going to add to the Nomad agent I'm running, because, as you can see, that message keeps popping up and it gets really annoying. The way you do it: I stop the agent, and normally you would take that config from the documentation, put it into a file on disk, and then pass -config followed by the path to that file. I'm being super lazy and using a bit of bash magic that puts it into a temporary file and passes it to Nomad, so it's pretty similar to how I started it before. You may not actually need to do this at all; I do it just because I find the macOS prompt really annoying, and now it won't pop up. So that's gone.

Cool. So, Nomad has a user interface. It's available on port 4646, and you can view all of the clients and servers you have, and run and view jobs. Let me show you the interface real quick. I'm running this on my local machine, so I can just hit localhost on port 4646, and this is the UI; let me bump the size up a touch. At the minute I'm not running any jobs, which is pretty obvious because I've literally just started this. I can see all of the clients I have; I only have one, because it's my Mac. I can see its capabilities: I can use the Docker driver, I can use the raw_exec driver (we'll talk more about that a little later), and the others I'm not using: I don't have QEMU installed, and I don't have Java installed, obviously, as we just saw. There's lots of information about my machine, the version of macOS I'm running, and its capabilities. It's similar with the servers: again I'm running one server, and it's running Nomad 0.10.x. So that's the UI; we'll come back to it quite a bit during the demo.

Now, running our first job. We define our job in a job file. In this case I'm going to run a Docker container with an application called http-echo. All it does is render an HTML page: you provide an argument to the container, such as "hello world" or something similar, and it renders that page for us. It also listens on a particular port, so we provide it with the port number and it listens on that port.
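The job file I'm describing looks roughly like this. It's a sketch reconstructed from the description, so treat the datacenter name, the echo text, and the exact image tag as assumptions; it also uses the pre-1.0 syntax, with the network block inside resources, to match the Nomad 0.10.x shown here:

```hcl
job "http-echo" {
  datacenters = ["dc1"]

  group "echo" {
    count = 1

    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo"
        args  = ["-text", "Hello from Nomad!", "-listen", ":8080"]
      }

      resources {
        network {
          mbits = 10              # not a particularly high-performance website
          port "http" {           # the label is arbitrary; "http" just makes sense here
            static = 8080
          }
        }
      }
    }
  }
}
```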
So let's run that and see what it looks like. Here's the code I prepared earlier; it's really simple. This is the job file we talked about: it starts with job, then the name of the job, then certain things inside it such as a task group. I've got one task group, with one instance running, and I'm using the Docker driver and providing some configuration to it: this is the image I'm going to pull from Docker Hub. I also need some resources: I need some network connectivity out of this box so we can actually see something, otherwise it would be a bit boring. In this case I'm saying I need a 10-megabit network; it's not a particularly high-performance website. And I've opened up this port, which I've labelled "http", on port 8080. The label could be anything: I went with http because it makes sense, but I could have called it echo_port or something of that nature.

If I copy this and go back to the UI, I go to Run Job, and all I have to do is paste it in and hit Plan. This is one of the cool things about Nomad: before I actually deploy, I can do a plan, and it will either say "yes, that's going to succeed" or "no, there's going to be an issue, some kind of failure". In my case there's nothing there, so it's going to create this job. I hit Run, and it goes away and performs an evaluation to work out where it can run this workload. It's a pretty easy calculation today, because there's only one place it can run. I get information showing one instance running, plus information about the allocation down at the bottom. The allocation, remember, answers "I need to run this task somewhere; where? On this particular client." If I go into the allocation, I can see the address I can browse to, and if I open it: OK, this is my application actually running. That's what we just built from the job file, really simple. This is a Docker container running on my machine, and I can prove it: if I go to the terminal and run docker container ls, I can see that container running on my machine. Real simple, right? We're already running our first job within the first 20 minutes or so.

The next thing we want to do is scale up our application. If we want more than one instance running, which we almost certainly do for high availability, we can increase the count. In this case we change the count from one to five and resubmit the job. So let's scale up the application: back to Jobs, Run Job, paste it in; it's the same code as before, but I change the one to a five. We plan this, and we get an error. The error is basically Nomad saying "I can't deploy this: you've asked me to deploy five and I can't", and the reason is that there aren't enough port 8080s available across your entire estate. My MacBook has only got one port 8080, and there are five things that all want to use it; the port can't be shared, so it's saying "sorry, there's a port collision because they're all trying to use 8080". So we need another way to deal with this.
I'm actually going to stop this job, because I'm not going to run it anymore and it's not particularly useful to me, so I hit Stop, confirm that yes, I want to stop it, and it kills it off.

Now we need a way to scale up our application without port collisions, and the way we do that is by making the job dynamic. As we've seen, multiple tasks can't all share the same port; because we're using a static port, it causes a collision. What we can do instead is use dynamic port assignment, so I can run many instances on my MacBook that each use a random free port, such that they don't step on each other's toes and there are no collisions. There are also a number of runtime environment variables you can use inside your job files to make the job file itself dynamic: instead of hard-coding a port, I change my job file to use these variables wherever I need to specify the port the application listens on.

Let's take a look at that real quick. In this job file I've bumped the count up to five, but I've started using those environment variables: Nomad will substitute NOMAD_PORT_http with the port labelled "http", which is what I've called it down here. Also notice that the braces are now empty; before, it said static = 8080. I'm not using a static port anymore, so Nomad will dynamically assign a port to each of our instances. I copy this, go back to the user interface, and submit the new job. Plan it: that looks better, that looks good. Run it, give it a few seconds while it starts all of these instances, and in a moment I should start to see some of them running, which I do. If I go into one of the allocations, I can see my job: it's running on 127.0.0.1 on port 31220. OK, that's good, it's dynamic. Let me pick another one: this one is running on port 21066. Open another one up: again, a different port.

I think you can probably already see the problem. I can go into each of these and keep clicking through to find out what the port is, but that's annoying: the ports are dynamic, I don't know them in advance, they're different from when I practised this, and they're different every single time it does a deployment. So we need a way to fix that, and we do it with something like service discovery. When we scale out workloads, as you saw, the ports change, and the hosts can change as well. Right now it's all running on my MacBook, but when I start having tens of thousands of workers it could be anywhere inside my environment, and my applications still need to be able to talk to other resources and components, so they need to know where those are: what IP address, what host, what port. A service catalog is the thing that provides a way to find other services. When something starts up, it registers itself in the catalog, and then everything can query that catalog and say "oh, that's where the web server is" or "those are the instances of the web server". There's a well-known service discovery product called Consul, which is another HashiCorp product; maybe you're already using it.
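Before we get to Consul, here is the dynamic-port change from a moment ago gathered in one place. It's a sketch of the edits inside the group, with the same caveats as before; NOMAD_PORT_http follows Nomad's NOMAD_PORT_<label> convention for the port labelled "http":

```hcl
    count = 5

    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo"
        # the interpolated variable is filled in with whatever port Nomad assigns at runtime
        args  = ["-text", "Hello from Nomad!", "-listen", ":${NOMAD_PORT_http}"]
      }

      resources {
        network {
          mbits = 10
          port "http" {}          # empty braces: no static port, assign one dynamically
        }
      }
    }
```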
With service discovery we need to solve two challenges: first, we need to run Consul somewhere, and second, we need to register our application inside the registry. The good news is we can solve both of those really easily with Nomad.

First, running Consul on Nomad. There are different ways you could do this, but the way I'm going to demonstrate today uses the raw_exec task driver, which basically means that instead of running inside a Docker container, it runs natively on my machine, on macOS. The Nomad job specification has something called an artifact stanza: within it I can specify where I want to download Consul from, and Nomad will download it, unzip it, and run it for me. I didn't even need to install Consul on my machine; I can put that description inside the job file and it does it all for me. Nomad also supports Consul natively via the service stanza, which we'll see in a moment.

So let's see that: we need to run Consul and register our application. Here's the job file I'm running for Consul. It's very similar to what we saw before, but now I'm using the raw_exec driver, so it runs natively on my Mac. It runs the binary called consul and passes a couple of parameters to it, agent -dev: like all of our products, Consul has a dev mode, which is great for our purposes. The artifact stanza here says where to download it from; this is a link to our website, so it's going to pull it down (fingers crossed my bandwidth is doing OK) and run it inside my cluster, natively on my Mac.

Let's see what that looks like. Actually, first I'm going to kill off the previous job, since I don't need those containers anymore. Then I run a new job, paste this definition in, hit Plan, all looks good, and Run. It downloads Consul to my machine and unzips it (the artifact stanza does the unzipping for you as well), then starts it, and already it's running. Let's take a quick look at Consul. This isn't a Consul workshop, unfortunately, but here's a brief look: this is the Consul UI, and it's running. Nomad itself has health checks and other information that it puts into the Consul catalog, which is what you can see registered here, so I can check the health of Nomad.

Now I need to register my application in here as well. Again, it's very similar to what we saw before: if I flip between these two job files, you'll notice the only difference is the service stanza down here. It basically says: hey, when this service starts up, register it in Consul, tell Consul that this thing exists and what port it's running on, so that it ends up in the catalog. So I run this code: back to Jobs, Run Job, paste it in. Everything we've seen before is exactly the same, it runs five containers in Docker with dynamic port assignment, but it also registers the services. I plan it, it looks good, I run it, and away it goes. Again it goes through the same process of starting up all of these containers, and I can still go into each one if I want to and see the port.
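The service stanza that makes that registration happen is small; added inside the task, it looks roughly like this. The urlprefix- tag is my assumption about how Fabio, which we'll meet in a minute, picks up the /http-echo route shown later, and the health check is an optional extra not spelled out in the demo:

```hcl
      service {
        name = "http-echo"
        port = "http"                       # register whichever port the "http" label was assigned
        tags = ["urlprefix-/http-echo"]     # Fabio's routing convention (assumed here)

        check {
          type     = "tcp"                  # a simple liveness check for Consul
          interval = "10s"
          timeout  = "2s"
        }
      }
```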
But if we start looking inside Consul, we can see that they're actually registering in here. Now I can query this catalog and see all of the different services that are running and the ports they're running on. That's how simple it is to register a service in Consul.

So dynamic ports can be queried from the Consul catalog; they're all available in there, and that's really useful for machine-to-machine communication. If I were a machine and I wanted to find the http-echo application, I could use DNS or Consul's API to query where the instances are and which ports they're running on. Sadly, I'm running a browser, and browsers aren't particularly good at service discovery: DNS is fine, but a browser won't discover the port a service is running on. So we need to introduce some kind of load balancing strategy, and I'm going to introduce you to Fabio. No, not that Fabio, a different Fabio: Fabio the HTTP and TCP reverse proxy, which will operate pretty much as a load balancer for us. It's a really great product; it was originally developed at eBay and is now maintained by the community, so it isn't a HashiCorp tool, but it is a popular one, and it's by no means the only load balancing option that exists. It's just one I like using, because it configures itself using information from Consul, so I don't have to configure it, I just start it. It's a single binary again (I love things that are single binaries, big fan), so it's really easy: download a binary, run it, and away you go. Take a look at the Fabio website if you're interested in more information.

Now, running Fabio. Hopefully by now you're starting to get a feel for how we run things. This is the Fabio job specification, very similar to what we've seen before: I'm using raw_exec again, so it runs natively on my MacBook, and I pass the command fabio, because that's how you start Fabio once you've downloaded the binary. I also pass a parameter to it: by default Fabio picks a random host, and I don't want that, I want a true round robin, going in order one, two, three, four, five, one, two, three, four, five as it rotates between them. The artifact stanza is very similar to before, but in this case the file I'd be downloading is called fabio-1.5.13-go-something-or-other, which is a bit of a mouthful, so I tell it to download the file but just call it fabio and put it in this local directory. That's what that part does.
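Put together, the Fabio job looks roughly like this. It's a sketch: the release file name is abbreviated because it was only read out loud, the -proxy.strategy flag is Fabio's spelling of round robin, and the artifact rename options are worth checking against your Nomad version's docs:

```hcl
job "fabio" {
  datacenters = ["dc1"]

  group "fabio" {
    count = 1

    task "fabio" {
      driver = "raw_exec"                     # run the binary natively on the host (my Mac)

      config {
        command = "local/fabio"
        args    = ["-proxy.strategy", "rr"]   # round robin instead of Fabio's default random pick
      }

      artifact {
        # the real release file name is fabio-1.5.13-go<...>-darwin_amd64; elided here
        source      = "https://github.com/fabiolb/fabio/releases/download/v1.5.13/fabio-1.5.13-go..."
        destination = "local/fabio"
        mode        = "file"                  # save the download as a single file named "fabio"
      }
    }
  }
}
```

The Consul job from earlier follows the same raw_exec-plus-artifact pattern, roughly with command = "consul", args = ["agent", "-dev"], and a zip pulled from releases.hashicorp.com.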
So let's schedule this and see what happens. Run the job, paste this in, plan it, run it, and give it a few seconds for Fabio to start up. That's it. Fabio detects that Consul is running on my system (and yes, macOS, I do want to allow it to talk). In fact, if I go back to Consul I should see Fabio pop up; Fabio has its own health checks in here. So Fabio knows Consul exists, and it automatically uses these tags (a topic for another day) to configure routes for all of the http-echo instances that are running and put them behind Fabio.

One of the cool things we can do is look inside the application. In this case I can see this particular task (remember, an allocation is one or more tasks running on a particular node), and for this task I can look at the logs as well. This Fabio process has been downloaded and started, so let's look at the logs: all the information I need about Fabio is in here. I can see that it automatically connected to the Consul running in my local datacenter, and that there's an admin interface available on port 9998. This is running on my machine, so if I go to port 9998, you can see it has detected everything; I literally just started it and did no configuration. If I instead hit port 9999, which is the non-admin interface, the actual load balancing interface, the route to my application is /http-echo. Let's see what that looks like: 127.0.0.1, port 9999, /http-echo, and there we go. From the browser, if I just hit refresh: this is port 29163, then instance two, instance three, instance four, instance five, and then back to the first instance on 29163, two, three, four, five, back to one. So now I've got load balancing across all of my instances that I can use from a browser, and it automatically round-robins me between them. That was Fabio working with Consul, which is being fed by Nomad putting the service discovery information into Consul, and it all just works. It's magic.

The last thing we're going to do is update our application. There are different update strategies available inside Nomad, and the one I'm going to cover today is a canary update: leave those five containers as they are, bring up a new container with the new version of my application, do some testing against it (it will actually be in the load balancer, so we'll see it), and if it looks good, do a promotion, and it will update the other five instances.

So let's update the application. If I compare this job file to the previous one, you'll see there's an additional stanza in here, the update strategy, which says what to do when I perform an update. In my case I've set canary to 1, so it's going to bring up one instance of the new version of the application.
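As described, that update stanza is roughly this (a sketch; it sits alongside the group definition in the job file):

```hcl
  update {
    canary       = 1   # bring up one instance of the new version alongside the existing ones
    max_parallel = 5   # on promotion, replace this many old instances at a time
  }
```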
I could set canary to two if I wanted a couple of instances to test against, and if I set it to five, it would essentially be a blue/green deployment: in the real world I've got five instances, I bring up five of the new version, and if that's good and I promote, the new set stays and the old set goes away, just like blue/green. The max_parallel setting says: when I hit promote, how quickly should it upgrade the remaining instances? I've just gone for it and set it to five, so when I promote, it updates all the others at the same time. The other thing I've changed is that the application now renders a piece of text it didn't have before, which says "update successful"; everything else is the same.

Let's go back to Nomad and run this job: back to Jobs, Run Job, paste it in, and hit Plan. This is an update of my existing job, which is why it isn't a brand new create: it's going to ignore five of them, because I'm leaving my existing application where it is, but it's going to bring up one canary to test against, and it's going to update my application code. So I run this, and eventually it tells me I've got a new version of the application running alongside version one, so I can do some testing against it to make sure it looks OK. If I go back to the Fabio load balancer and hit refresh a couple of times, eventually it takes me to... there we go. The existing five, which I haven't touched, are still working, and here's the canary: is it good? Yes, it looks good. If it wasn't, I could kill the canary off, say "that didn't work", and try again, but this is good, I'm happy, the new version of the application looks really good. So I go back to Nomad and hit "promote canary", which says: replace my application with the new version of the code. Because I've set max_parallel to five, it's going to kill off all five of the old instances at once. If I go back to the load balancer and hit refresh, it's a bit stuck at the minute, because only one instance, the canary, is running while it replaces the rest of my five instances, but once my MacBook catches up (it's starting to croak a little) we'll see all the new versions of the app running, five instances of it. We'll come back and check the status in a second; I think my Mac is just on a go-slow.

Awesome. So what did we cover in the last forty minutes or so? I showed you how to download, install and run Nomad: really simple, you go to the website, click download, download the zip, unzip it, done. We ran multiple instances of a web application: we started with one really simple web application and then scaled it up to multiple instances. I showed you how to do service discovery and health checking: we used Consul, and as I said, there's loads of information about Consul out there so you can learn more, but you saw from the Nomad point of view how easy it was; we just added that service stanza to our job code and it registered the service with Consul automatically.
I showed you one of the load balancing strategies you could use: Fabio, as I mentioned, is one option, not the only one; you might use NGINX or F5 or some other system instead. And then I showed you updating the application, and I'll check in a moment that it has finished updating. So hopefully we've now gone from zero to somewhere close to wow, or at least you can see how to literally get started with Nomad from nothing and start scheduling some work on it.

In terms of next steps, here's what I'd recommend, and what I'm certainly going to go away and do as well. First, try this for yourself: when you get the recording you'll be able to watch the video back and follow these steps exactly; it's not particularly difficult to do. Second, the Learn portal and the guides; let me show you those real quick. If I open a new tab (I certainly cannot do a million containers on my MacBook, particularly with everything else running), the Learn portal is at learn.hashicorp.com/nomad, and when it eventually loads there's a whole series of guides you can run through, from installing Nomad and running Nomad jobs through to building a cluster, and so on. Then there's the guides section on the Nomad website as well, which again walks you through a series of steps; we're in the process of migrating those guides over to the Learn platform, so some of those links will take you to Learn soon if they don't already. Next, check out the documentation: you saw where it was on the Nomad website. Everything I've shown you, the whole job file specification, is all in there, along with many things we didn't have time to cover today. The step after that would be to build a multi-cloud, multi-region cluster. Nomad supports federation by default, so if you want to build this out across multiple datacenters, here in the EU, in the US, in Asia-Pacific, absolutely you can do that. You can even submit a job to your local cluster and say in the job description that it needs to run in the US, for example, and it will go and run it inside the US cluster.
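In the job file that's just the region field; a minimal sketch, with the region and datacenter names made up:

```hcl
job "payments" {
  region      = "us"                       # send this job to the US region of a federated cluster
  datacenters = ["us-east-1", "us-west-1"]
  # ... groups and tasks as usual ...
}
```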
And then the last step is to go and schedule a million containers, really simple. Let's take a quick look at how that update job is going: hit refresh a couple of times, and there we go; things are running a little slowly on my Mac because I've got so many different things going, but as you can see they're all running. I'm going to stop those now so my MacBook's processor becomes happy with me again. But that's it. So, any questions?

I'm just checking; it looks like we've got a few questions coming through. One asks whether there will be a blog post to accompany this. Yes, absolutely, that's on my list of things to do: I'm going to turn this talk into a blog post as well, so you'll be able to go through these steps without having to refer back to the video.

Does Nomad have a way to run serverless apps like AWS Lambda or GCP Cloud Functions? Really good question. As things stand today, we don't have that capability; a lot of people lean on Terraform to manage those kinds of things. In the future, who knows, maybe there will be some optionality like that. Nomad is a scheduler with a pluggable driver interface, so all the drivers I talked about today are basically plugins, and as people develop new task drivers we can import those and add them to Nomad.

If you wanted to use Terraform with Nomad: certainly you'd use Terraform to spin up your Nomad clusters. When you start spinning up thousands of machines it becomes rather laborious to do all of that manually, so you can use Terraform for that. There's also a Terraform provider for Nomad, so you can use Terraform to submit jobs to Nomad as well if you want to.

Is there any way, apart from Consul, to handle the dynamic port question, basically another service catalog? I'm not a hundred percent sure about that one; I don't know whether other catalogs integrate natively with Nomad, and that's one of the reasons people use Nomad, because it integrates so well with the other HashiCorp tools. Certainly all of that information is available through the API, though, so if you've got a system that can do API queries, you can query the allocations, say "tell me all the allocations for a particular job", and those allocations contain information about which ports are being used.

Any way to use it with OpenStack? Good question; I'm not entirely sure how Nomad integrates with OpenStack. You could certainly run Nomad on OpenStack. If you're talking about scheduling things onto OpenStack, again, good question, I don't know off the top of my head; it might be another case where it's possible through an additional driver.

Is there any equivalent to Linkerd or Istio for Nomad, or is it not needed? Really good question; the answer is Consul again. Consul is a service mesh, which I obviously didn't have time to show here, and my Mac really is struggling right now with this kind of workload, but if I go to the documentation, in the job specification there's a connect stanza: we have direct integration between Nomad and Consul Connect for service mesh. So does Nomad support a service mesh? Yes, it supports Consul Connect.
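For reference, that integration is also driven from the job file; here is a minimal sketch of the Connect side, with made-up names. The group needs bridge networking, so check the connect stanza docs for the full details:

```hcl
  group "api" {
    network {
      mode = "bridge"
    }

    service {
      name = "api"
      port = "8080"

      connect {
        sidecar_service {}   # ask Nomad to run an Envoy sidecar registered with Consul Connect
      }
    }
  }
```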
How does Nomad handle state, specifically with long-running services which may run on one node and then get rescheduled? There's a guide to this which I recommend you take a look at, on running stateful workloads; there are a couple of ways you can run stateful workloads with Nomad. There's also a talk by one of my colleagues where she walks through how you can do this, and the video is available in the resources section. (I see that question was put in both places, great.)

How would you map a dynamic Nomad port to a static port inside a Docker container? Good question; I'd have to check that one. I believe there's configuration as part of the job specification where you basically say the outside port is X and the inside port is Y, but I'd have to double-check.

Can we define dependencies between groups or tasks? Another good question: if you needed one thing to start up and become healthy before you start another. I know you can specify priorities, but I'd have to double-check whether you can specify the order in which jobs get scheduled; I'm pretty sure you can, but I'll have to confirm.

Does Nomad handle provisioning VMs, or is that something auto scaling groups take care of? As of today, no, Nomad doesn't have the ability to spin up additional VMs; you'd do that with something like Terraform or, as you say, an auto scaling group. A lot of people run Nomad that way: we've got people who run Nomad on-prem and burst up into the cloud when they need that kind of elasticity, because Nomad works across all of these environments, but Nomad itself doesn't have that VM-provisioning capability; that's best left to Terraform.

I'm just checking for any other questions. I've already talked about how Nomad handles state, so I won't answer that one again. (And about the rain in London: sorry.) Yes, you'd use Terraform with Nomad, but again, you probably wouldn't drive Terraform from Nomad; the way I'd see that integration working is that you use Terraform to spin up additional Nomad clients. I think that's all the questions answered; I think I've checked them all.

Any other questions? I don't think so. Thanks for getting through all of those, Russ. If there are any other questions, go ahead and type them; otherwise we'll wrap up. All right then, Russ, thank you. Oh, wait, one more... it's just someone saying thank you. OK. So thank you, Russ, for getting through all of those for us. As I mentioned at the beginning of this hangout, it was recorded and we will make the recording available on our website after processing; I'll send an email to everyone who registered with the recording link. Also, if you liked what you heard today and want to start exploring Nomad, I encourage you to go to our Learn site, which you can find at learn.hashicorp.com. I hope you all enjoyed today's hangout and have a better understanding of how to get Nomad up and running and how to scale your Nomad clusters.
Thanks for hanging out with us today, and a big thank you to Russ for his time. That wraps things up for us. Thanks, everyone. Goodbye.
Info
Channel: HashiCorp
Views: 15,953
Keywords: HashiCorp, HashiCorp Nomad, Nomad
Id: xl58mjMJjrg
Length: 54min 23sec (3263 seconds)
Published: Tue Dec 17 2019