Learn to Load Balance Your High Traffic Sites

Captions
[Music] Good morning, everyone, and welcome to another DigitalOcean Tech Talk. For those of you that don't know me, my name is Mason Egger and I'm a Senior Developer Advocate here at DigitalOcean. Good morning and hello to everyone in chat. We're going to sit here for a second while I say hi to all of you, so if you're in chat, say hello and tell me where you're coming in from; we'll get started in, let's say, two to three-ish minutes to give everyone time to drop in.

What is today's date? It's Wednesday, December 8th. It's a little chilly in Texas today — supposed to be a high of 77 and sunny. These are the beautiful times of year in Texas; I absolutely love it.

Hello, everyone: Effendi from Indonesia, Kenny coming in from Winnipeg, Stefan, good to see you, Texture from Hong Kong this morning. I should start preparing jokes for this opening segment — whenever we start these Tech Talks I don't just jump right into it, because people get on YouTube, they get an ad, and I don't want to start without them, so I usually give everyone a couple of minutes to get here. I really should prepare this banter a little better, but as you can see, I'm just naturally good at it.

Morning, Tech. Morning, James from Canada — I wonder how cold it is in Canada right now. My little brother lives in Seattle (he's visiting me now), and apparently it gets cold up there in the winter; in Texas it's 70 and fine. It was 40-something this morning, a little chilly, but anything much below that I can't deal with. Tarek coming in from Lebanon, welcome. Passport coming in from Florida, nice to see you. Sharth — I hope I said that right — saying hello. John, one of our DO Sharks, telling me to have a great stream today; that's awesome, good to see you, John. I've really enjoyed working with John on docs stuff — I don't know where I was going with that; my brain had a moment, it does that sometimes. Johnny the Pirate from Poland, a LinkedIn user from Indonesia, and Rohan saying good night from India — yes, it's always time zones, and the fact that people live in different parts of the world: good day, good morning, good afternoon, good evening, good night, good whatever.

James says it's 23 and snowing. That's a big old no from me — I wouldn't turn my chair around on The Voice for that. Someone mentions they're a developer who also does some CTO work — awesome. Stefan says negative five in Canada; I'm hoping that's Celsius and not Fahrenheit, which roughly translates, because isn't zero Celsius 32 Fahrenheit? Yeah, we're American; we use that imperial system no one else uses. Greetings from Bangladesh and from India. Middle Minnesota — I bet it's cold there. It's about dinner time in India. Hello from Egypt, hello India. Gray and rainy South Carolina — that matches everything I've ever heard about it.

Okay, I think we're going to go ahead and get started. Today we're going to talk about load balancers, specifically how to load balance your high-traffic sites. Again, for those of you just coming in who don't know me, my name is Mason Egger and I'm a Senior Developer Advocate here at DigitalOcean.
You can follow me on Twitter at @masonegger. If you have any questions, feel free to drop them in the chat — I like going through questions live — but I guarantee I won't get through everything today, so feel free to just tweet at me; if you like spicy hot takes, sometimes I do those on Twitter too. You can also visit my website and read the deranged writings I have there. All sorts of fun stuff.

So, today's goals. We're going to do an overview of load balancers: what a load balancer is, how they benefit you, and why we need them. We're going to talk about DigitalOcean load balancers in general — this Tech Talk is partially happening because we just shipped a load balancer release with a whole bunch of new features, so it's also a bit of a "hey, look at all the new stuff we're doing"; there's a lot of cool stuff going on at DO right now. We're going to do a demo of load balancing Droplets: going from zero to one hundred — deploying Droplets, getting everything set up, load balancing them on DO, and setting up SSL termination — load balanced across three Droplets (yeah, I just said that). We're going to deploy an app to App Platform and discuss how App Platform does load balancing — for those of you who are unaware, App Platform is our platform-as-a-service-style product that lets you just deploy your code instead of doing traditional server work. And we're going to talk about how you can use load balancers on DigitalOcean Kubernetes. We won't have time for a full Kubernetes demo — partly timing, partly that it just takes a while, and partly that I'm not that great at Kubernetes; maybe next year I'll do a more in-depth load-balancing-on-Kubernetes talk with my coworkers — but we'll talk about how you'd do it, and I'll show you the product docs and point you to some documentation pages.

Okay, here we go — we already have a question in the chat, and it's a great one: can you put Droplets from different regions behind one load balancer? Currently, I believe the answer is no — not with DigitalOcean load balancers. If you used a third-party service such as Cloudflare and one of their load balancers, you could probably do it, but with DO load balancers the Droplet has to be in the same region as the load balancer. Great question.

So let's talk about what a load balancer is. A load balancer is a piece of hardware or software — there are software-defined load balancers and hardware-defined load balancers — that distributes traffic across a group of resources. This lets you decouple the health of your service from a single server, and it helps ensure your service stays online. Say I deploy a blog on a Droplet, a simple VM. If I have just that one server and it goes down, my entire site is down. But I can deploy multiple copies — I have a static-site blog that I generate with Hugo (you can do this with dynamic apps as well, but for simplicity, say a static blog) — I can deploy the exact same thing on several servers, no one would know the difference, and the load balancer sends traffic to them in what's basically a round-robin style.
Server one gets a request, then server two, then server three, then back to the top. That ensures no single server gets all the traffic, and if one of those Droplets goes down, the load balancer figures it out very quickly and stops sending traffic to it, so your site doesn't really go down.

A load balancer really comes into play when you move out of the "this is my side project, my pet project" phase and into "I need reliability." Personally, when I used to run my blog on a Droplet, it was never behind a load balancer, because it was my personal blog, and if it went down, the three people who read it would tweet at me and I'd fix it. But if I'm running a business — an e-commerce site, or maybe my static site is a little more important, like the home page for my tire shop — I want to make sure people can always reach it, and servers get sick, things happen. We definitely don't want people unable to visit the site if it's part of our revenue or otherwise highly important. That's where load balancers really come in.

What benefit do you get from load balancers? We kind of covered this, but to drive the point home: load balancers help ensure your service stays online when your servers are overloaded or sick. It's all about uptime. They also help distribute the load. With a static site there's not much work per request, but if there's a dynamic site underneath doing a lot of computation and it's getting all of the traffic, it can get bogged down. One fix is to add more servers and load balance across them — this is what's known as scaling horizontally, for anyone who's heard the terms horizontal and vertical scaling.

Horizontal scaling is when you add more workers to something. For example, say I run a moving company and I have a really big couch to move, and I have a fifth grader — not a lot of upper body strength; they're in fifth grade, you can't really blame them (maybe make it a 12-, 13-, or 14-year-old; I don't know what the age limit is for that). One of them probably can't move a heavy couch, but if we get ten of them together, they can probably pick it up and move it. That's horizontal scaling: adding more similarly sized resources. They could be different sizes, but most of the time they're similar. Vertical scaling is the same couch, but instead of asking the fifth grader, we go get a bodybuilder or a strongman — someone who lifts very heavy things for a living — and they can probably move it by themselves. That's when you take a server and add more resources to it: if we had a one-gigabyte, one-CPU Droplet, we'd resize it into, say, an eight-gigabyte, eight-CPU Droplet — a lot more resources, so it can probably handle the load a lot better. So if you've ever been curious about the difference between the types of scaling, that's it. Load balancers specifically tackle the horizontal-scaling problem.
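To make the round-robin idea concrete, here's a minimal sketch of the same pattern in plain nginx configuration — this is only meant to illustrate the concept, not how DigitalOcean's load balancer is implemented, and the backend addresses are hypothetical private IPs:

```nginx
# Round-robin (nginx's default) across three identical backends.
upstream blog_backends {
    server 10.124.0.2;
    server 10.124.0.3;
    server 10.124.0.4;
}

server {
    listen 80;

    location / {
        # Each incoming request is handed to the next backend in the list.
        proxy_pass http://blog_backends;
    }
}
```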
Looking over at chat, we have another question from Rohan: can you see cross-region load balancing coming in the future? I don't know of anything on the roadmap for this — and honestly, if I did, I wouldn't be able to tell you — but I can 100% say I'll take it to the product manager for load balancers; he and I are pretty good friends (he may even be lurking in the chat somewhere, he does that sometimes), and I'll let him know this is a request we're hearing, and advocate for you to get it. So the official answer is no, and in reality I haven't heard anything about it. Sorry about that.

Okay, great question: why does one decide to scale horizontally versus vertically? There are a lot of reasons. In the cloud, one is cost: adding two more five-dollar Droplets might be cheaper than swapping in one larger Droplet. Another is redundancy: scaling horizontally means instead of three servers you might be running a hundred, which is a lot more redundant. If you took three servers and scaled them all up to 64 cores, you'd still be at the same level of redundancy — you still only have three, so it only takes three failures before everything is down, and that kind of failure happens all the time.

Another reason is that not all software is architected to take advantage of high-memory, high-CPU machines. Some programs just won't benefit; it depends on how you write them. If you're not writing multi-threaded code and running it in a multi-threaded way, adding more cores doesn't benefit you in any way, shape, or form. Say you're running a single instance of a Python app and you're not using Gunicorn or some other WSGI or ASGI server to create multiple processes: no matter how many cores you throw at it, Python's global interpreter lock keeps it stuck behind one thread, so you'd have one thread working really hard and seven doing nothing. Writing code that can actually take advantage of full multiprocessing is pretty tough. We do have tools around this — your web server usually tries to take advantage of it, and both Apache and nginx are very good at spreading requests out — but the application behind them may simply not work that way, so adding more servers can be the more beneficial option.
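As a concrete illustration of the WSGI-server point: Gunicorn sidesteps the GIL by running several worker processes, so a command along these lines (assuming a Flask app exposed as `app` in `app.py`) lets a single Droplet actually use more than one core:

```bash
# Four independent worker processes, each with its own interpreter and GIL,
# all serving the same app behind one port.
gunicorn --workers 4 --bind 0.0.0.0:8080 app:app
```

A common rule of thumb from the Gunicorn docs is roughly (2 × CPU cores) + 1 workers, but the right number is workload-dependent.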
Sometimes it does make sense to scale up, though. Take a Java app: the JVM heap fills up pretty quickly and can definitely be problematic. If you're seeing performance issues because too many people are hitting the server — say a thousand requests — and out-of-memory errors are popping up in your logs, horizontal scaling could fix that, but it may also be wise to scale up and just give the thing more space. Different programming languages deal with resources differently; Java and Ruby are more memory- and resource-intensive than some (well, not *very*, but the JVM can take up some space), and it depends on how you've implemented your code. We had this problem at my previous job all the time: every now and then the JVM would run out of memory, and one of the options was just to kick the server and give it more memory.

So personally: horizontal is when you want more redundancy and you know the setup already works — it runs fine on a six-dollar Droplet, so just add more of them. Vertical is typically when you're seeing out-of-memory errors or complete resource degradation — your memory and CPUs are pegged — and you might want to consider scaling up. Both work, both can solve the problem; you kind of have to play it by ear and figure it out.

Okay, another great question: do load balancers provide DDoS protection? That depends on how big the attack is, but I'd say probably not. Say your web service can handle a thousand requests per second and someone is hitting you with a thousand and one requests per second. You'd actually have to scale up the load balancer itself, because the attack might hit the load balancer's own limits; if you can scale the load balancer and then scale the services underneath it, then yes, in theory you could absorb it. The problem is that real DDoS attacks are usually huge, and they have a sneaky way of lingering. You could attempt to outrun a small attack with load balancers, but at that point your best bet is something like Cloudflare, or — if you notice a lot of traffic from the same IP range — banning that range for a period of time. Honestly, if it's small enough to outrun, I wouldn't even call it a DDoS; DDoS implies distributed, and usually huge. If it's really a small DoS attack, you could spread it out and maybe mitigate it, but if I were being DDoSed, my first reaction would not be to try to load balance it away — it would be to reach for something like Cloudflare's DDoS protection, or to start banning the IP ranges that are hitting me hardest. I could see scaling horizontally as an intermediate measure to keep the service up — that's the moment when you're panicking and bailing water out of the boat as fast as you can; another bucket helps, but the bigger problem is the leak in your boat. (The metaphors are flowing very well today, and I love that.) Yes, many CDNs have built-in protection for this; there's a whole ecosystem around it. I would not rank load balancers as a top tool for mitigating DDoS attacks — there are much better ones. You can try to outrun an attack, but honestly that's going to cost you some money.
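On the "ban an IP range" idea: on a single Droplet, one simple way to do that is with ufw (the subnet below is just a documentation-range placeholder), though at any real scale you'd want this handled upstream by something like Cloudflare rather than on each box:

```bash
# Drop all traffic from a misbehaving range (placeholder subnet).
sudo ufw deny from 203.0.113.0/24

# Review the active rules (and later remove the ban by its number).
sudo ufw status numbered
```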
Okay — DigitalOcean load balancers. These have been great questions, and here's another: does DigitalOcean have a CDN offering? Yes, it's called Spaces. Spaces is our S3 equivalent — object storage with an S3-compatible API — and there's an option you can enable when setting up a Space to distribute it globally as a CDN. So yes, DigitalOcean's CDN offering is built on object storage, and it's known as Spaces. I am loving the questions today — these are great.

Alright, DigitalOcean load balancers. I'm going to stop answering questions for a second and come back to them later, because we have to get these Droplets deployed. DigitalOcean offers load balancers for your compute needs, and they're currently compatible with Droplets and Kubernetes. App Platform apps are load balanced too, but not with the load balancer product — that's done a different way — so if you go to digitalocean.com, open the cloud console, and click on Load Balancers, you won't see an option for App Platform; don't worry about that.

Load balancers on DigitalOcean are scalable nodes that give you more fine-grained control. In general a load balancer can be hardware — a box whose entire job is load balancing — or software-defined, and I believe ours are kind of a mixture of both; either way, DigitalOcean lets you provision load balancer nodes. We've actually changed this recently. If you've followed the product through DigitalOcean's history: we first started with just "a load balancer" — it balanced your load, that was it. Then, maybe six to ten months ago, we released small, medium, and large load balancers, which let you pick a tier — roughly ten thousand, thirty thousand, or fifty thousand requests per second — and gave you a bit more flexibility. The latest iteration is node-based: every load balancer is now a set of load-balancing nodes. Each node handles 10,000 requests per second, 10,000 simultaneous connections, and 250 new SSL connections per second. When we first announced this, some people said, "that's exactly how it's always been" — and yes, a small load balancer was exactly one node's worth; maybe that just wasn't broadcast as clearly. What's new is that you can now add up to a hundred nodes, so you can load balance up to a million requests per second and, what is it, 25,000 new SSL connections per second — add two zeros, yes. You start with one node and can scale all the way to 100, and each node adds more capacity. I think it's really cool — when the PM originally told me about these plans, I said people are going to love this — because it gives you much more fine-grained control.
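For the API-minded, the node count shows up as a single knob. As a rough sketch — field names as I recall them from the v2 load balancer API, with hypothetical names matching this demo — creating a two-node balancer in front of tagged Droplets looks something like this:

```bash
curl -X POST "https://api.digitalocean.com/v2/load_balancers" \
  -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "lb-tech-talk",
    "region": "sfo3",
    "size_unit": 2,
    "tag": "tech-talk",
    "forwarding_rules": [
      {"entry_protocol": "http", "entry_port": 80,
       "target_protocol": "http", "target_port": 80}
    ]
  }'
```

Bumping `size_unit` later is how you'd scale the balancer itself up or down.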
So if you just need 10,000 requests per second and 250 new SSL connections per second — and again, that "new" matters, because establishing an SSL connection is an expensive operation, and by expensive I mean compute cycles, not dollar signs; it takes a bit of work, and most of that overhead happens up front at the handshake — you can start with one node, expand as you need to, and scale back down as well. You can scale up or down at any time, with a limit of once per hour right now (I'm talking with the team about getting that looked at, but for the moment it's once an hour to a different number of load balancer nodes). The load balancer also supports managing your Let's Encrypt certificates for you, which is especially handy if you're doing SSL termination, and it supports SSL pass-through — a whole bunch of really cool stuff.

So let's get into today's demo. We're going to set up three Droplets running nginx that display their hostnames, so we can watch the round robin happen; create a load balancer and point it at the Droplets; add a DNS name to the load balancer; and use Let's Encrypt to do SSL termination. These slides will be available afterward on the Tech Talk page, which we'll post.

First, we're going to quickly create three Droplets. We'll pick the six-dollar Droplet on the Premium Intel/AMD plans with NVMe SSDs — I love these personally; that extra dollar is so worth it, these things are smoking fast — and put them in SFO3, just because that's where I feel like putting them today. Add SSH keys for access. But what we're really going to use is user data, so here's a quick detour to teach you about it. If you've never used user data for provisioning, you're kind of missing out: user data is a way to hand cloud-init data, or just raw Bash, to your Droplet, and it gets executed at the time the Droplet is created. We have a doc, "How to Provide User Data," and the script we'll use basically installs nginx and then writes the Droplet's name and public IP address into the default nginx page, so when we visit it we'll just see the name and the IP. The code in that doc is a little out of date — I just submitted a PR to fix it; you have to change the target path from /usr/share/nginx to /var/www/html/index.html. I was tempted to tweak it further live and make the text a little bigger, but I didn't test that, so we're not going to risk it — this version works.

Then I'll name these lb-tech-talk-01 and so on, set the count to three Droplets, and add the tech-talk tag to them. To go over it once more: we're creating three Ubuntu 20.04 Droplets; the user data installs nginx, then calls the internal metadata API that's available to every Droplet to get the hostname and the public IPv4 address, and echoes that straight into /var/www/html/index.html. Pick the sammy-load-balancer SSH key, and click Create.
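For reference, here's roughly what that user-data script does — a minimal sketch rather than the exact script from the docs, but the metadata endpoints are the ones available from inside every Droplet:

```bash
#!/bin/bash
# Runs once at Droplet creation time (user data / cloud-init).
apt-get update
apt-get install -y nginx

# The Droplet metadata service answers on a link-local address from inside the Droplet.
HOSTNAME=$(curl -s http://169.254.169.254/metadata/v1/hostname)
PUBLIC_IP=$(curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address)

# Overwrite the default nginx page so each backend identifies itself.
echo "Droplet: ${HOSTNAME} (${PUBLIC_IP})" > /var/www/html/index.html
```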
This will create the three Droplets for us, and it takes a little time — not only do they have to provision, they then have to run the user data and install nginx. There are going to be two or three points today where we're waiting on a deployment, so if you have questions, drop them in the chat and I'll answer them while we wait.

Hello to Sally, who's saying hello from Denver — I have a lot of coworkers in Colorado; everyone's in Colorado, I don't know why. Do I recommend any load-generation tools? I'm a little unclear on what you mean — something to generate fake load for testing purposes? The only things that come to mind right now are Chaos Monkey and Toxiproxy, but I've never actually used them; I've just been adjacent to teams that were.

Another question: can we load balance via IPv6 or IPv4, or will it use the private IPv4 for the load balancing? We can currently only load balance over IPv4, and yes, you're right about the private side: every Droplet created now lives inside a VPC, and so does the load balancer — your default VPC or wherever you put them — and the traffic between the load balancer and the Droplets happens on the private interface inside the VPC. Incoming traffic to the load balancer is IPv4 only, I believe; I'll ask the product manager whether IPv6 is on the roadmap.

Can a DigitalOcean load balancer operate in front of DigitalOcean Managed Databases? Ooh, that's a good question, and I'm honestly not sure — I don't know. I'll look it up, and when we send out the follow-up email we'll answer it. The managed databases have a lot built in already, and I've never tried putting a load balancer in front of one.

It seems like people in chat are suggesting Apache Bench or Apache JMeter for generating that kind of fake user load — cool, I've never actually tried doing that, and I'll have to look into it as well.
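For anyone who wants to try the Apache Bench suggestion themselves, a typical invocation looks something like this (the target address is just a placeholder for whatever endpoint you want to exercise — I'm sketching the tool chat mentioned, not something from the demo itself):

```bash
# Send 1,000 requests, 50 at a time, and report latency/throughput stats.
ab -n 1000 -c 50 http://203.0.113.10/
```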
Awesome — our Droplets are up. If we copy one Droplet's IP and open it, we see exactly that: the hostname and the IP. Refreshing doesn't change anything yet, because we're hitting a single host directly.

So now let's create a load balancer. We go over and click Load Balancers — and by the way, all of this is available via the API and via Terraform; if you've watched any of my past Terraform talks, I did this exact setup in Terraform. We choose the San Francisco 3 region, because that's where our three Droplets are, and one node — we don't really need more, but as you can see, the price goes up as I click the node count up. I forgot to mention: each load balancer node is ten dollars a month, so you could spend a lot if you went to the full hundred — a thousand dollars a month — but one node is ten dollars a month. We could connect the load balancer to individual Droplets, but we can also connect it to anything carrying a tag, so we'll just point it at the tech-talk tag, forward port 80 to port 80 — simple as that — skip the advanced settings, and click Create Load Balancer. This one genuinely takes a little while, so again, questions are welcome.

While we wait, let me talk through what's next: we're going to add DNS to the load balancer and then do SSL termination. For those who aren't aware, there are two different ways your load balancer can handle SSL: termination and pass-through. SSL pass-through is as simple as it sounds — the load balancer takes the encrypted request and forwards it straight to the server. That can get a little complicated, because you then have to manage the same SSL certificate across multiple Droplets: if those three Droplets were all serving masonegger.com, I'd have to manage that cert on all of them. Not terribly difficult — I could set it up on one and copy it to the others — but it also puts the burden of decrypting SSL on the Droplet, and remember what I mentioned earlier: SSL connections are not a "cheap" operation. Anything cryptographic takes compute cycles. We usually don't investigate how much SSL costs us in performance, because it's necessary and we just live with it, but with pass-through, every Droplet pays that cost itself. Personally, I don't use pass-through much; I use termination, which means the load balancer decrypts the traffic and then talks plain HTTP on the back end — your Droplets don't even know SSL is involved. All of that complexity lives on the load balancer, and you only need one certificate, which sits on the load balancer and gets renewed there; the load balancer is in control of everything. That's one way to do it, I think it's a pretty good way, and it's what we're going to demo today.
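In the load balancer's forwarding-rule terms (field names as I understand them from the v2 API — treat this as a sketch), termination is an HTTPS entry rule whose target is plain HTTP, with the certificate attached to the balancer:

```json
{"entry_protocol": "https", "entry_port": 443,
 "target_protocol": "http",  "target_port": 80,
 "certificate_id": "<your-certificate-id>"}
```

whereas pass-through keeps the traffic encrypted all the way to the Droplet, which then needs its own copy of the certificate:

```json
{"entry_protocol": "https", "entry_port": 443,
 "target_protocol": "https", "target_port": 443,
 "tls_passthrough": true}
```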
Let's look at the chat. Do you have to define which ports to load balance? Yes, you do. For now I'm demoing HTTP load balancing, and we'll switch to HTTPS at the end; but yes, you define the ports — it was part of the creation flow, and we'll go over it again.

Question: for managed databases, does DO offer read replicas and failover? Yes, yes, and yes. Go to the managed databases product documentation and you can read about it there — here, I'll pull it up real quick. This page happens to be the PostgreSQL one — single-node clusters, high availability, read-only nodes — you'd click into the individual engine you care about; I'll drop the link in the chat, but yes, DigitalOcean offers all of that.

And our load balancer is up — there it is; I was on the wrong tab. We now have a public IP address for the load balancer, and if we open it, we see our page, and as we refresh, the hostname and IP change, because we're actively load balancing across the Droplets. Sometimes a hard refresh doesn't seem to rotate — probably some caching — but you can see it working.

Now let's add a DNS name to it. One thing to know: if you want DO to manage your Let's Encrypt certificates, you have to use DigitalOcean DNS. You buy your domain from a registrar — we manage DNS, we're not a registrar, so you can't purchase domains from us — point your name servers at us, and boom, you're ready to go. I already have a domain in DO, so we'll create techtalk.sammy.cloud and point it at our San Francisco 3 load balancer (which I could have given a better name, but didn't). Create the record — DNS usually propagates pretty quickly, though it might make me wait a second... nope, we got it. So now DNS is working.

With that in place, let's go back to the control panel and finish this demo off with SSL certificates. Go to the load balancer, then Settings — and unfortunately, in my opinion, this flow requires you to set up the DNS name first (or at least it used to), which makes for a slightly confusing UI experience; personally I do this in Terraform, where it makes brilliant sense and is super easy, and I'll pull up the Terraform code in a second. We change our forwarding rule to HTTPS, and as soon as we do, it says, hey, you need a certificate. You could also do a straight pass-through here — that "test" entry was a certificate of mine from earlier — but we'll choose New Certificate and search for the domain we have registered in DigitalOcean. DigitalOcean does support wildcard domains, so you could issue a wildcard certificate, but I don't really want to do that right now; we'll select the specific subdomain, techtalk.sammy.cloud, call the certificate techtalk, and click Generate Certificate. That takes a little bit of time. First we have to click Save, and there's also a checkbox, "create DNS records for all new Let's Encrypt certificates," which I'm unchecking because I already created the record — that option didn't use to be there, which may mean we now create the record for you, which would be pretty cool; I need to test that.
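Since I keep saying "I do this in Terraform" — here's a hedged sketch of this piece, assuming the DigitalOcean Terraform provider's certificate and record resources; `digitalocean_loadbalancer.web` refers to a load balancer resource like the one sketched a bit further down, and the names are just the demo's:

```hcl
# Let's Encrypt certificate managed by DigitalOcean for the subdomain.
resource "digitalocean_certificate" "techtalk" {
  name    = "techtalk"
  type    = "lets_encrypt"
  domains = ["techtalk.sammy.cloud"]
}

# A record pointing the subdomain at the load balancer's public IP.
resource "digitalocean_record" "techtalk" {
  domain = "sammy.cloud"
  type   = "A"
  name   = "techtalk"
  value  = digitalocean_loadbalancer.web.ip
}
```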
I very rarely use the UI for this — I'm usually doing it through Terraform, where it's super simple. The other thing we want is the SSL option: redirect all HTTP to HTTPS, then click Save. This should be done everywhere — it's 2021, almost 2022; there is no need for unencrypted internet traffic, please put your stuff behind SSL.

A couple of other options while we're here. If we click Resize, we can't do it right now because we're within the hourly limit, but otherwise we can resize to whatever node count we want — if I try to update it now, it refuses and shows when the next resize is allowed. We have our forwarding rules set up. We have health checks: this is how often the load balancer checks whether a Droplet is still available — we're going to deliberately mess one up in a minute, because the load balancer is supposed to detect when things are down and stop distributing traffic to them — and you can run the check on whatever port you want and set the health-check timeout however you like. Sticky sessions are pretty cool: they add a cookie that makes sure you get back to the same Droplet you established a connection with; it's stripped off at the load balancer level, so you don't really see it on the user side — it's more for us, but it's a cool thing to use; I've never really messed with it. There's also proxy protocol and a back-end keep-alive setting.

That should have been enough time for the certificate to be created, so let's refresh. It tries plain HTTP again — sometimes the redirect takes a minute to kick in — but now you can see we have a secure connection over HTTPS, and once my browser cache settles down, plain HTTP should redirect automatically.

So that's Droplets behind a load balancer. Let me demo one more thing — I literally saw a question about what happens when a backend goes away — so let's take one out. First I destroyed a Droplet and — drat, I did it the wrong way — the load balancer is really smart: it immediately knew there was nothing there anymore. So let's do this a sneakier way: SSH into one of the remaining Droplets as root and run systemctl stop nginx. Now one of them has a dead web service, and you might wonder what happens. For a little while you'll get a 503 Service Unavailable, because it takes the load balancer a bit to figure it out — it has to run through the health checks we defined. We set a check interval of 10 seconds, a response timeout of 5 seconds, and an unhealthy threshold of 3, so it's roughly 30 seconds before the Droplet is marked unhealthy. After that, I can refresh as many times as I want: the load balancer has determined that backend is down and no longer routes traffic to it. The control panel UI can lag a little behind on this — I've noticed that personally — but the load balancer itself already knows and has stopped sending us there, because it knows that Droplet is down.
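If you'd rather watch both behaviors from a terminal instead of a browser, a quick loop like this (using the demo hostname) makes the rotation and the post-health-check behavior easy to see — each response should name a different Droplet, and the stopped one should drop out of the rotation after roughly that 30-second window:

```bash
# Hit the balanced endpoint a handful of times and print each backend's page.
for i in $(seq 1 6); do
  curl -s https://techtalk.sammy.cloud/
  echo
done
```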
Awesome — that is load balancing Droplets. What I'm going to do now is keep going and talk about load balancing App Platform apps. We're going to deploy an App Platform app, which usually takes a good amount of time on a first deploy, and that's when I'll go back to your questions — I've seen a lot of good ones coming in, so please keep dropping them in the chat, and I'll answer them during that wait.

So that's how you do it at the Droplet level, but not everyone wants to use Droplets; other people want something like App Platform. We have some code on GitHub for this — it's a fork of the sample Flask app — a simple Python app that's basically one idea: import the socket library, get the hostname, and return it. Again, this is just so we can demonstrate that we're actually landing on different hosts; the sketch below shows roughly what it looks like.
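This is a minimal sketch of what that show-hostname app amounts to — the real repository may differ in details, but the core is just this:

```python
import socket

from flask import Flask

app = Flask(__name__)


@app.route("/")
def index():
    # Return the hostname of whichever container/Droplet served this request,
    # so refreshing the page shows the load balancing in action.
    return socket.gethostname()
```

Served with Gunicorn (for example, `gunicorn --bind 0.0.0.0:8080 app:app`), which lines up with what App Platform detects for a Flask app.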
If you want to play with the code, it's at masonegger-demos/show-hostname-app — because, again, I'm great at naming things. Back in the control panel we create a new app, point it at GitHub, wait for the sandy shark to swim (sometimes it takes longer than others), and pick show-hostname-app on the main branch with "autodeploy code changes" enabled. Click Next, and it detects the web service: it knows this is a Flask app and basically says, "we think we know what you want" — and it's correct: Gunicorn as the WSGI server, port 8080, pretty standard, straightforward Python. This can be done with any language; I'm just using Python as the demo language. Give it a name and a region, and then — the only way to get horizontal scaling in App Platform is a Pro plan with a certain number of containers, so this is for when you really want high availability in your App Platform app. We'll add three containers; at twelve bucks a month each it's a little pricey, but again, this is for production or near-production workloads where you really want that horizontal scaling. Click Launch Pro App, and while it chews through its progress bar we'll wait, demo it when it's done — and in the meantime, your questions.

First question, from Varun: what happens if three Droplets are not sufficient to balance the load? Basically, you'll see it in the numbers. Let's actually look at the load balancer's control panel, which I've honestly never spent much time in — you can see it has detected that one Droplet is down, and the Graphs tab shows things like the number of responses and the load balancer's CPU utilization. If three backends aren't enough, you'll see degradation on your website: requests not getting served, things timing out, probably high CPU percentages and probably high RAM usage on the Droplets. The answer is: add more. Deploy more Droplets, provisioned the right way — the same setup as the others — and they join the pool; that's what you'd do to get things healthy again.

Next question: are there tools available to replicate changes across the Droplets underneath the load balancer once a change has been made to the "master" Droplet? Ooh, that's a great question. Are there DO tools for it? No. Are there established DevOps tools that do this? Yes. If you go back to my previous Tech Talk, I talked about Packer, a HashiCorp tool — P-A-C-K-E-R — for provisioning Droplet images; last month's talk on our YouTube channel was exactly this topic, using Packer and Ansible to provision a Droplet. You use those automation tools to create what we call a golden image — the image that works — and upload it to DigitalOcean as a custom Droplet image. Usually you start from a base like the Ubuntu image, layer on all of your base install steps, upload the result to DO, and then deploying "the same thing again" is just clicking deploy on that image, making sure it lands under the right tag, in the right VPC and region, and off it goes. Now, if you're asking whether there's a tool where I change one Droplet and it immediately replicates everywhere — that's called rsync, and beyond rsync I don't know of any advanced tooling for live replication; it's not a simple thing to do, it's genuinely complicated. You're better off either baking a golden image or using a configuration management tool — Ansible, Salt, Chef, Puppet — where you make one programmatic change and it applies to all of them. So: configuration management works, baking a golden image works, and if you're literally just changing files, you can technically rsync them across. Those are three good options, and there are probably more, but I'd personally start with Packer. I used to be really big into config management tools, and they're amazing, but here's how I'd do it: say I have version one of my software on three Droplets and I want to get to version two. I could use something like Ansible or Salt to push version two, or I could bake a new golden image, test it, make sure version two works, and deploy three new Droplets alongside the old ones — six total — which effectively gives you an A/B or blue-green deployment, with half the traffic going to the new ones and half to the old. Once I was positive the new ones were fine, I'd literally delete the old ones, and you're left with the new. That's roughly how we used to roll things out at my last job: slowly rolling out the new changes and slowly retiring the old ones. That's an option.
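For the golden-image route, a minimal sketch of a Packer template — assuming the DigitalOcean builder plugin is installed, with hypothetical image and snapshot names, and the API token supplied via the environment — looks something like this:

```hcl
source "digitalocean" "web" {
  image         = "ubuntu-20-04-x64"
  region        = "sfo3"
  size          = "s-1vcpu-1gb"
  ssh_username  = "root"
  snapshot_name = "web-golden-v2"
}

build {
  sources = ["source.digitalocean.web"]

  # Bake the web server into the image so new Droplets come up ready to serve.
  provisioner "shell" {
    inline = [
      "apt-get update",
      "apt-get install -y nginx",
    ]
  }
}
```

Running `packer build .` would then produce a snapshot you can launch (with the right tag) as many times as you need behind the load balancer.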
When is DigitalOcean DNS going to support DNSSEC? I will ask. DNSSEC is a really cool tool, but it's also: one, extremely complicated; two, usually the cause of most of your outages; and three, not something a lot of people use — or rather, plenty of people use it, but it's a very complicated tool. If you read about the Slack outage that happened in September, their postmortem said it was DNSSEC, and whenever you mess up DNSSEC you get to sit there and let it propagate, and it takes forever to fix. A lot of great questions today.

Let me check on our app — still waiting on it — so I'll keep answering questions. Can you set up auto-scaling, or add and remove Droplets in flight? DigitalOcean does not currently offer auto-scaling. It's a highly requested feature, and it's something I bug product about all the time, so you can be sure I'm advocating for you to get it. You could build your own: if you have some monitoring and alerting tool in place, everything here can be done via the API, so when a threshold is crossed you could fire an API call and add capacity yourself. So you can roll your own, but DigitalOcean doesn't have it built in yet. When I came to DO there were three features I wanted: one is done, one is in progress, and this is the third — if I can get all three done I'll be happy, and I can't tell you any more than that.

Can I share my Terraform code? Yes — it's on GitHub under, hmm, I don't remember where I put it — there it is, the do-community sample Terraform architectures repository. It's a repo I worked on a bit and haven't had much time for since; this one is a minimal web-plus-database stack. One of my goals is to build a lot more of these fully fleshed-out sample Terraform architectures. I probably need to update this one, because I haven't tested it with the new load balancers, but if we look in the web-servers portion you'll see the digitalocean_certificate that sets up our certificate and the digitalocean_loadbalancer itself, plus forwarding rules and DigitalOcean firewalls so people can't reach the web servers directly. I comment everything, so you can use it as an educational resource, and you're also 100% welcome to just deploy it as you want. The architecture it builds is this: three web servers and a database, all inside a VPC; HTTP only enters through the load balancer; you have to SSH into a bastion server to reach the web servers; and the web servers don't allow external access — you can't reach them via their public Droplet IPs, only via the internal IPs. That's all done with firewall rules, since DigitalOcean doesn't have private-only-IP Droplets at this time. But yes, I dropped the code in the chat.
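To tie the demo pieces together in that same Terraform style — this is not the code from the sample-architectures repo, just a hedged sketch using the DigitalOcean provider's load balancer resource as I understand it (the `size_unit` argument assumes a provider version that supports node-based sizing), reusing the certificate resource sketched earlier and the tech-talk tag:

```hcl
resource "digitalocean_loadbalancer" "web" {
  name        = "lb-tech-talk"
  region      = "sfo3"
  size_unit   = 1            # number of load balancer nodes
  droplet_tag = "tech-talk"  # pick up every Droplet carrying this tag

  redirect_http_to_https = true

  # SSL termination: HTTPS in, plain HTTP to the Droplets.
  forwarding_rule {
    entry_protocol   = "https"
    entry_port       = 443
    target_protocol  = "http"
    target_port      = 80
    certificate_name = digitalocean_certificate.techtalk.name
  }

  healthcheck {
    protocol = "http"
    port     = 80
    path     = "/"
  }
}
```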
Next question: Terraform versus Ansible? Both. They're two completely different tools: Terraform is a provisioning tool, Ansible is a configuration tool. Terraform stands up my servers, gets me going, gets Ubuntu up, maybe does some very basic bootstrap at the very beginning — it's there for managing my infrastructure. Managing which packages are installed, which versions of things are running, which users exist — I would never do that in Terraform; that's an Ansible job. It's the classic infrastructure-as-code versus configuration-management split. They're not enemies; they're the best of friends and work together hand in hand. The answer is both.

I'm going to pause questions for a second, finish demoing the App Platform app, talk a little about Kubernetes, and then come back to questions — we're having a lot of great ones today and I'm loving it; this is probably the most questions I've ever answered on a Tech Talk, and I think that's great.

So, back to App Platform: how does it do load balancing? It just does it for you. That's the magic of App Platform — it just does things for you. The hostname it shows is ugly because it's a Docker container, but as I refresh, you can see it changing. That's all you have to do: go to a Pro plan, enable horizontal scaling, add more than one container, and there's nothing else to configure. Remember, App Platform is just much more fully managed. I know it seems like there should be more to it, but no — it's that simple: add more containers, boom, you're done, we've taken care of the rest. It really does add some peace of mind.

Okay, next: Kubernetes. DigitalOcean does have a Kubernetes offering, and as you can see we have about ten minutes left, so there's no time for a full walkthrough — and a disclaimer: I'm about to say some things, and if I'm wrong about the Kubernetes details, I'm sorry, I'm not a Kubernetes expert. DigitalOcean Kubernetes (DOKS) does use our load balancers. You write a configuration file — I believe it ties into how you expose your services for ingress, but at its core it's just a Service of type LoadBalancer — you specify where traffic should go, apply it to your cluster, and the load balancers appear in your cloud console, associated with your Kubernetes cluster. In other words, the way you create a load balancer in DOKS is the same way you do it in Kubernetes generally: the service knows you want a DigitalOcean load balancer and creates it for you.
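A minimal sketch of that manifest — standard Kubernetes, with hypothetical names; the annotation shown is one example of the DigitalOcean-specific settings the cloud controller understands, and you can omit it entirely:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # Optional DigitalOcean-specific tuning lives in annotations like this one.
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
spec:
  type: LoadBalancer   # DOKS provisions a DigitalOcean load balancer for this Service
  selector:
    app: web
  ports:
    - name: http
      port: 80         # port exposed on the load balancer
      targetPort: 8080 # port the pods listen on
```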
So there's not a lot more to go over on the Kubernetes side — back to the questions. And yes, exactly: DevOps Labs got it right in chat. Can it notify you via email if your Droplets are down? Yes, it can, and it can also notify you via Slack. You'd think I wouldn't lose the tabs I open — did I close it? Let me make sure — there: DigitalOcean has Monitoring and Alerting. I can create a resource alert — say CPU is above some threshold — scoped by Droplet name or by Droplet tag like tech-talk, and have it email me; I can also link it to Slack (this account is already linked — it's the one that deploys our Hacktoberfest stuff). You pick the metric — CPU, bandwidth, disk utilization — the threshold, and how long it has to stay above it. So yes, email or Slack. Great question. Let's go back to the main project view.

Can one load balancer balance multiple applications, or differentiate between domains and forward to different tags? It's one load balancer per application, per domain. If you're doing a subdomain like techtalk.sammy.cloud, that's all it can handle — these load balancers don't have the ability to serve multiple applications; you'd get that in a much more complex setup or a hardware load balancer. So no: you'd need separate load balancers for different subdomains. Great question.

If a Droplet goes down, will the load balancer bring up a replacement node automatically? No — we don't have anything like that implemented, so it's something you'd have to build yourself. What I could see myself doing, given the alerting we just talked about: build a Slack bot (or work off the email) that ingests the alert message, understands what happened, and takes the action for you. I actually have another Tech Talk from a couple of months back on building Slack bots in Python on DigitalOcean, so you could totally do that — but no, this falls under auto-scaling-style functionality, and we don't currently have it. (And yes, whenever I see that little loading screen I always think "baby shark" — it's hilarious.)

I think I already answered that one... "I can access your first load balancer on HTTPS but not HTTP — did you set it up that way?" Let's take a look: curl http://techtalk.sammy.cloud... oh, it's doing that — I know what I did wrong, I do this every time, and I don't like it. If you don't have a forwarding rule for HTTP, the load balancer closes port 80 and never gets the chance to redirect. I make this mistake every time; in my mind that's not intuitive — if I tell you I want the redirect, you should open that port for me by default — and I'm going to go complain about it. Now if I try curling it again... that looks like a 301, so it didn't fail this time, and http://techtalk.sammy.cloud — there we go, it redirected. You have to have that HTTP port open on your load balancer; I forget that every single time, so thank you for bringing it up, I would have forgotten.

Is there any way to set up auto-scaling for when there's a lot of traffic? You'd have to do it on your own, with your own monitoring and metrics and tooling. Again, I'm advocating to get you this feature — it's my Olympics, the hill I'm going to die on.

Would it make sense to load balance at the Cloudflare level and then also at the DigitalOcean level? Yes — some people 100% use both. I know people who use Cloudflare load balancers in front of DO; I think the App Platform side uses Cloudflare load balancing, but I'm not positive. So yes, that totally can be a thing; it happens a lot.

Okay, we're running down the questions — great. Does it have to load balance a subdomain, or can you do a base domain? We have wildcard domains, so you can do a base domain if you want; we just happened to use a subdomain, but wildcards and top-level domains work, so it doesn't have to be a subdomain.
Can you show some metrics of what's going on? Unfortunately, I don't have time for that today — I've only got four minutes — but that would have been a good one. Oh, cool: the curl is returning a 307 temporary redirect now, so I'm glad I figured that out.

We're at the end of the questions and four minutes from when I have to run away and do other things, so if anyone has more, feel free to drop them — this is pretty much the end of the talk, and I hope you enjoyed it. There is one more Tech Talk next week — let me pull up my calendar over here where you can't see it; I've gotten better about that, I had a bad habit at the beginning of always showing my calendar on streams, and people were alarmed at how often I'm in meetings. Next week my wonderful colleague Kim is doing a Tech Talk on automating GitOps and continuous delivery with DigitalOcean Kubernetes, which sounds fantastic, so come visit that one — I'm sure the person backstage has a link we can drop in the chat. (There's a magic voice in the sky — I haven't called her that in a while — who tells me when it's time to leave and handles moderation on the back end.) So yes, there will be another great Tech Talk next week, and I highly recommend attending; Kim is one of the other developer advocates here at DigitalOcean, and so is my coworker Chris — both are amazing speakers, and any time you get the opportunity to hear them, take it.

If you enjoyed this and want to know when I'm doing things, follow me on Twitter; I usually post about all of it. And if you're a fan of our weekly show Cloud Chats, which we do on Thursdays — it's kind of our little news talk show where we play some games and just chat like developers do — that's on tomorrow at 11 a.m. Eastern, and it's our season finale, the last one for a while until we return for season two.

We have one more question: if you're load balancing in NA and in Europe, is there a way to have a managed DB replicated in each region to reduce...? Ooh — you're asking great questions. I don't know that off the top of my head; I will ask. My Twitter handle, which I'll put in the chat, is @masonegger — it's also down in the bottom-left corner of the stream whenever this overlay goes away... ah, I'm big on screen now... anyway, @masonegger is where you can find me. Two minutes left. This was great — honestly, probably the most engaged Tech Talk I've ever done; the questions were great, I loved every one of them, they made me think, and now I want to go play with load balancers and do some load testing. I love networking — it's one of my favorite things — and unfortunately, if you want to play with a lot of networking, you need either a cloud account or a really big home lab.

Okay, one last one: "When I was using Gunicorn, I had an issue with multiple workers — it seemed like it was forgetting initialized variables. I set workers to one and that fixed it. Any advice?" Ooh. I feel like I've seen this error before, but I don't remember what it was — they're using Gunicorn with Flask — I will look into this.
I feel like I've seen this before and that I have an answer for it somewhere — go look at the DigitalOcean Q&A on our community site, because I think that was a question I answered around the launch of App Platform. DigitalOcean has a Q&A section on the community site — I think it's community, then Questions — so search for something like that and you should be able to find it; it does look very familiar.

Unfortunately, I am out of time for today. It has been great — this is my last Tech Talk of the year, so this is me signing off for now. I'll be back tomorrow for our last Cloud Chats, and then I'll be gone for a while, taking a nice little break. Thank you, everyone, for being here and for attending this Tech Talk — as always, it's great seeing you. I'll see you either tomorrow at Cloud Chats, or, if not, next year with a whole new set of Tech Talks and interesting things to talk about. Have a good day, everyone. [Music]
Info
Channel: DigitalOcean
Views: 937
Id: gFs2B2LPzzI
Length: 60min 55sec (3655 seconds)
Published: Wed Dec 08 2021