How Google designed their Highly Available Load Balancers using Anycast

Captions
The paper's title is "Anycast as a Load Balancing Feature," and it is from Google; I had no idea about it before. You can see the paper on the screen, and I've also linked it in the description down below. I checked ResearchGate and found that it was published in November 2010, so it's pretty old. This is not something Google uses today, but it touches upon some very interesting design decisions taken back then, and remember, in 2010 the scale was nowhere near what it is today.

Let's start going through the paper. The abstract sets the context: their IT organization is made up of subsystems, each providing a service such as DNS, LDAP, HTTP proxy, and so on, each deployed globally using its own replication mechanism. One team provides load balancing and failover services, and they recently added anycast as a service offered to other teams. So they are offering anycast as a service; we'll go deeper into what that means. The key reason is that they needed to be able to fail over between load balancers. That's the interesting part: they already have a bunch of load balancers, so how do you do failover across them? If one load balancer is down, how do you route traffic to another? That is what this entire problem statement is about.

Then comes a line I enjoyed: "while anycast is complex and mysterious to many system administrators" (no matter how smart you are, it's always a mystery), their architecture provides the service in a way that other teams do not need to worry about the details. Teams simply provide their service behind the load balancers they currently use, with additional virtual IP addresses. The paper describes how anycast works, its benefits, and the architecture used to provide anycast failover as a service. That's the context. Now let's dive into Google's architecture in 2010, the decisions they took, and what they got out of it.

It starts with the most natural way to think about load balancing: put as many service replicas as required in your server room and have a load balancer distribute the load among them. "Service replicas" means replicas of your services, multiple API servers for the auth service, payment service, profile service, and so on, not DB replicas. And note the phrase "in your server room": Google ran on-prem infrastructure, and the paper is not assuming you are cloud-native, because back then cloud wasn't even a thing. The load balancer is in-house too.

To increase reliability, load balancers are usually deployed in high-availability pairs. This is a very common pattern: whenever we need a standby, this is the classic high-availability pair. You would never run just one load balancer. And to be clear, this is not an infinitely scalable load balancer; each box is literally a single instance, and in the diagram you can see two of them forming the pair. The paper assumes this throughout. A regular load-balancing scenario looks like drawing 1: the user's request comes over the internet or a private network to the load balancer, behind which multiple service replicas are running, say two instances of an auth service. A very primitive setup.

This is already an improvement in reliability: with two load-balancer instances running, if one fails, the other continues to function. But it can go further. What problem remains? Imagine a disaster scenario where users are still active and requesting the service, and so is the load balancer, but all the backends of that specific service are down. Say it's Friday evening, you deployed buggy code, and your services started crashing: misconfiguration, wrong code, a null-pointer exception, whatever. The load balancer is fine, the network is fine, the infrastructure is fine, but your backend services are not. The high availability of the load balancer does not solve this, because what you chased was load-balancer reliability by running two of them; if the service itself is down, having multiple load-balancer instances does not matter.

A better design would automatically redirect all those clients to another location or server room. Think of it like a region, if you're a cloud-native person: if the service deployed in one region fails to come up after a bad deployment or configuration mishap, you want to redirect traffic to another region. Or think of it as one rack failing and traffic moving to another rack, or, in the common phrasing, one site going down and traffic moving to another site.
You would want to make this failover as transparent as possible. A way to accomplish this is to identify the nearest secondary location and configure the load balancer to proxy or redirect all user requests there until the local service is reestablished. In other words, while the failed site remains down, redirect requests to another site that is active and healthy; when it comes back up, switch back. Most load-balancing products offer automatic redirection, as depicted in drawing 3. This is remote failover: a user reaches a site whose backends are down because bad code was pushed, and the request is transparently, seamlessly proxied from site 1 to site 2.

But what if the load balancers are also unavailable, as in drawing 4? Now it's not just the backend servers that are down; the whole site is off. Imagine someone pulling the power supply of that site, or chopping its network cable so the entire site becomes isolated, or a flood submerging the whole data center. There are hundreds of reasons for this to happen. In that case you want the redirection to the other site to happen automatically. The paper asks: how can this be accomplished in the least intrusive way? "Intrusive" here means human intervention; you want the failover to happen without a human opening a console and changing something by hand.

There are multiple ways to solve this, depending on the specifics of each implementation. The first possibility is to update DNS records for the services: DNS-based load balancing. Each site has its own front-facing IP address, and you have DNS records, or one CNAME record, resolving to those IP addresses, so users can reach the service in different locations. This is still very common today; if you're on AWS, check out how Route 53 lets you configure a record that resolves to multiple IP addresses.

Let me be a little more verbose here. Say you configure lb.payments.google.com in your DNS service against two IP addresses, one per load balancer. If one goes down, you change the DNS entry and remove its IP, so the next request for lb.payments.google.com resolves only to the healthy one. Or imagine per-site endpoints: asia.lb.payments.google.com pointing to IP address 1 and america.lb.payments.google.com pointing to IP address 2. The Asia site goes down, and you know it, so you change asia.lb.payments.google.com to America's IP address. That is DNS-based routing: you change nothing in the network, you operate purely at the DNS level, and requests for Asia now also land in America.

It's easy to say, but in the real world this DNS update has to be automated: when a failure is detected, something changes the DNS configuration and reloads it. That means there has to be a mechanism to check service status in the other locations and keep track of their state, so the system knows where to send users in case of failure. Remember, no human is in the loop; something needs to know that Asia is down and America is fine before the Asia endpoint's IP can be switched. And considering that services are sometimes deployed in hundreds of locations ("sites" is the better word, since a single region can contain several sites), it would not be effective to have one central place collecting the availability of every site in a globally distributed system. The DNS update mechanism would therefore need to be distributed to as many locations as the service is deployed in. The paper considers this non-optimal, since there is the possibility of integrating the monitoring and automatic failover into the existing load-balancing infrastructure instead.
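To make the DNS-based approach concrete, here is a minimal sketch of what such per-site records might look like in a BIND-style zone file. The names follow the lb.payments.google.com example above, and the addresses are hypothetical, not from the paper:

```
; Hypothetical per-site endpoints, one A record each.
; Failover = rewrite the failed site's record to the healthy site's IP
; and wait for caches to expire the old answer (the TTL).
asia.lb.payments.google.com.     60 IN A 192.0.2.10     ; Asia site LB
america.lb.payments.google.com.  60 IN A 198.51.100.10  ; America site LB

; After Asia fails, the automation would push something like:
; asia.lb.payments.google.com.   60 IN A 198.51.100.10  ; now points at America
```

Note the 60-second TTL: until cached answers expire, clients keep hitting the dead site, which is exactly the trade-off discussed next.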
DNS TTLs can also be a burden. Sometimes it is not possible to use a very small TTL, and the time it takes DNS changes to propagate is still downtime from the user's perspective. This is a very interesting point: even after your system detects that Asia is down and repoints the Asia endpoint to the USA, requests keep flowing to the dead site while the change propagates. You cannot make the TTL very small, because then cached entries expire constantly and a flood of resolution requests hits your resolvers; but you cannot make it too long either. And once the failed site is back, you need to update the DNS entry again to point back to it, which again takes time to propagate, so for a while users still land in the wrong place; at minimum, there is always configuration to change.

This is where anycast comes in. If you build a habit of reading papers, you'll notice that typically half of a paper just sets up the problem and the candidate solutions, and the other half describes the actual solution. That's deliberate: before working on anything, understand the problem statement and its constraints really well, because devising a solution becomes easy once you have the full context.

So, the basics of anycast, the paper's second solution. Anycast is a network routing technique where many hosts have the exact same IP address; here, think of the hosts as the load-balancer servers. Multiple machines, think multiple EC2 instances if that helps, all advertise the same IP: this machine announces "I am 10.0.0.1," and that machine also announces "I am 10.0.0.1." Which IP to advertise is up to you (normally you keep addresses unique, but here the duplication is the point), and the advertisement lands in the routing tables of the routers and switches along the way. Clients trying to reach that IP are then routed to the nearest host. This is how the internet works in general: routing tries to find the best path with the information it has, weighing distance, network congestion, reliability, and many other factors. (Shortest-path routing itself is tractable, by the way; it's the travelling-salesman problem that is NP-hard. The point is simply that the network picks a good path from what it knows.)

If these duplicate hosts all provide the same service, the client simply receives the service from the topologically nearest host. Note the wording: topologically nearest, never geographically nearest. All machines advertise the same IP, a request comes in, and the route taken is whatever the network considers closest. That's the whole game; that's how the internet functions.

Now the limitation of anycast, and read this carefully: anycast per se does not have information on service-specific health status. Obviously not; who has that information? The machine running the auth or payment or profile service knows whether it is up, and the load balancer needs to know whether its backends are up. Anycast is a routing technique; it has no clue whether the America site is working or the Asia site is down. Anycast is not a health-check solution, not at all. So requests may be sent to a location that has no healthy instance of the service: routing believes it is sending you to a working America, but what if America is also down? It is therefore necessary to think about service-specific health checks. And if a given service has about 200 different instances, managing health checks and BGP (Border Gateway Protocol 4) configuration for each of those instances becomes very complicated: checking 200 servers, storing that state, detecting failures, all of it.

Which brings us to Google's implementation, from 2010. The idea is not to learn what they do today but to absorb the thought process: the failure scenarios and how they solved them back then. "We use anycast for failover between load-balancing clusters, providing the benefits of anycast to any service behind load balancers." It is anycast as a service: any team that wants it gets it without building anything, and it removes a lot of network-environment complexity. Their solution uses BGP, because it allows creating a hierarchy for the route advertising, but other protocols would work as well. "Using anycast there is no longer a need for remote failover using proxies, providing a cleaner solution, since the client connects directly to the failover location." This is worth absorbing: the key advantage is that you are not hauling every request to a proxy at the dead site and redirecting it elsewhere; the network itself switches traffic from a non-functional site to a functional one. "Whilst you lose client-identifying information, it also saves the proxy overhead between servers and users." Because no proxy has to be configured, the switch from site A to site B is automatic and seamless: a very clean solution.
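To ground the BGP part: each load-balancer site advertises the shared anycast /32 to its upstream router, and the nearest advertiser wins. Here is a minimal sketch of what that could look like in Quagga's bgpd (the routing daemon named later in the paper's stack); the AS numbers and addresses are hypothetical, not from the paper:

```
! bgpd.conf: hypothetical anycast advertisement from one LB site
router bgp 64512
 bgp router-id 10.8.8.46
 ! The anycast VIP that every site advertises.
 network 10.200.0.5/32
 ! Peering session with this site's upstream router.
 neighbor 10.8.8.1 remote-as 64512
```

Withdrawing this advertisement (for example, by taking the VIP interface down so the route disappears) is what removes a site from rotation; that is exactly the knob the fallback mechanism described below ends up pulling.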
"Our team provides load balancing as a service, making it completely separate from the specific service setup," so multiple services can profit from it. Then a key technical detail: the load balancers themselves are the anycast peers, meaning all the load balancers advertise the same IP address. Another advantage of this arrangement is a reduced number of routing changes, because "the load balancer combines multiple instances of a service into one virtual IP"; the machines running a given service sit behind a single virtual IP address. That had been one of the concerns regarding anycast deployments. And: "having the load balancer deal with service-specific state and health checks makes it possible to deploy anycast not only for UDP-based services but also TCP-based services."

Read that once more, because this paragraph is really about a decoupled system. Hypothetically, you could expose all the servers and have each one advertise the same IP, with no load balancers at all; but anycast cannot possibly do health-aware balancing, so why do that? Instead, let anycast take care of routing, the seamless switch between sites along the shortest path, while the load balancer takes care of health checks and of routing requests to the servers within its site, behind a virtual IP. Anycast does not need to care which backend servers come and go; only the load balancers advertise the shared IP. They split the problem into these two halves.

On the network configuration: "We configure the network environment so there is one subnet reserved for anycast virtual IPs. The routers are configured to accept /32 route advertisements in that subnet from the load balancers. This allows the implementation of protection against misconfiguration by using ACLs which only allow routes from a specific subnet, preventing accidental takeover of IP space." This is a bit of networking, and I'll be brutally honest: I'm not a pro in networking, I have some interest but not much depth, and I'm still exploring a few of these parts. But the intent is clear: you don't want anyone accidentally taking over IP space, so the routers accept only exact /32s inside the reserved subnet.
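A hedged sketch of what such a router-side filter could look like, in Quagga/Cisco-style prefix-list syntax; the subnet and peer address are hypothetical:

```
! Accept only exact /32 routes inside the reserved anycast subnet.
ip prefix-list ANYCAST-VIPS seq 10 permit 10.200.0.0/24 ge 32
!
router bgp 64512
 ! Apply the filter to routes learned from a load-balancer peer.
 neighbor 10.8.8.46 prefix-list ANYCAST-VIPS in
```

Anything outside 10.200.0.0/24, or anything coarser than a /32, is rejected, so a misconfigured box cannot hijack unrelated address space through this peering.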
Now let's go into the software they used; this is where it gets practical. All the load balancers are deployed in high-availability pairs, as we saw, to protect against single-machine failure: two load-balancer servers, so that if one goes down the other is there to serve the incoming traffic. Nowhere do they mention a standby node, so I'm assuming the pair is active-active. For this they used Heartbeat from the Linux-HA project. The project's website doesn't even open anymore; I tried. But the Wikipedia page on Linux-HA says: Linux-HA (High-Availability Linux) provides a high-availability clustering solution for Linux and other Unix-like operating systems, promoting reliability, availability, and serviceability, and the project's main software product is Heartbeat. It does what the name says: it maintains a heartbeat between nodes, and when one goes down, another takes its place. From a minimum number of nodes it can be used to build large clusters; it monitors resources, automatically restarts them or moves them to another node on failure, supports fencing mechanisms, and so on. In short, it is cluster resource management software: attached to the load balancers, it brings network interfaces and backend-management software up and down and keeps the heartbeat running within the pair, so that when one load balancer dies, the other takes its place.

For backend monitoring and failover, meaning the health of the service code itself, they used ldirectord. ldirectord runs health checks against the backends for each of the VIPs (virtual IPs), adding service instances to the load-balancing pool or removing them as their health state changes. It can also redirect connections to a different location in case all backends fail, using a fallback option.

By the way, I found a very useful website, Lisenet, with a step-by-step article on configuring a load-balancing cluster with exactly this pair, Heartbeat and ldirectord. I stumbled upon it some time back, and the setup looks pretty simple. Go through it if you're interested in what to configure, what exactly changes, and how traffic gets routed; there's a lot of interesting detail. If you want to play with the configuration, just spin up two or three EC2 instances and try it. Search for "setting up a load-balancing cluster with heartbeat and ldirectord."

Back to the paper: "We added a feature to ldirectord implementing a fallback command when the last of the local backends goes down." So ldirectord detects backends dying, and when the last local backend is gone, meaning the site is now effectively dead for that service, it triggers the fallback command, which signals that this site is down and traffic should go elsewhere. "We use that to bring anycast IP addresses up and down based on the backend status."
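Here is a minimal sketch of what an ldirectord configuration in this style could look like. The VIP, backend addresses, check page, and fallback script are hypothetical, and `fallbackcommand` stands in for the feature the paper says Google added (a directive of that name exists in later ldirectord releases):

```
# /etc/ha.d/ldirectord.cf : hypothetical config for one service VIP
checktimeout=3          # seconds before a health check counts as failed
checkinterval=5         # seconds between health checks

virtual=10.200.0.5:80             # the anycast virtual IP for this service
        real=10.8.8.48:80 gate    # backend 1, direct-routing ("gate") mode
        real=10.8.8.49:80 gate    # backend 2
        service=http
        request="lb-check.html"   # page fetched as the health check
        receive="OK"              # expected response body
        scheduler=rr              # round-robin across healthy backends
        protocol=tcp
        # Invoked when the pool transitions to or from empty; the script
        # would bring the anycast VIP interface, and with it the BGP
        # advertisement, down or back up.
        fallbackcommand="/usr/local/sbin/anycast-vip-toggle"
```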
So who is advertising the anycast IP? The load balancer; the load balancers are the anycast peers. You need to detect whether all of a site's backends for a service are down, and when they are, the load balancer stops advertising (not broadcasting, advertising) that IP address. That is what the folks at Google implemented.

"ldirectord communicates directly with ifconfig to bring IPs up and down, and with IPVS to add and remove backends." What is IPVS? IPVS (IP Virtual Server) is transport-layer load balancing, an L4 load balancer, usually called layer-4 LAN switching, built into the Linux kernel. Configure IPVS on a server and that server itself balances across the configured instances. You can see it in the Lisenet setup too: the virtual service declares "this is my IP address," and the `real=` entries list the backend servers behind it, 10.8.8.48 and 10.8.8.49, with lb-check.html as the page fetched to check whether each backend is alive.

As its documentation puts it, IPVS is incorporated into the Linux Virtual Server, where it runs on a host and acts as a load balancer: "IPVS can direct requests for TCP- and UDP-based services to the real servers, and make services of the real servers appear as virtual services on a single IP address," that is, your load balancer's IP address. You administer it with ipvsadm, running on the load balancer itself: give it the virtual IP and the real servers behind it, and it routes requests round-robin or with whatever algorithm you choose. There may be better tooling now, but as of the paper: "IPVS is a Linux kernel module for load balancing. It currently supports tunneling, network address translation, and direct-routing modes. In our setup, all VIPs are configured using DR." Honestly, I know what NAT is, but I have no real idea yet what direct-routing (DR) mode does. Then there is Quagga, "network routing software allowing Linux systems to participate in network routing protocols"; this is the piece that actually speaks BGP.

"We allocate an IP for each service," and this is where it becomes relatable again, "and create the standard configuration for Linux Virtual Server to bring the VIPs up, and use the feature we added to ldirectord to bring the anycast IP network interface up and down," in other words, to advertise the IP or not, with the virtual IPs behind it depending on the state of the backends.

Let me zoom into the diagram, because this is the whole flow: a high-availability pair of load balancers in front of multiple backend ("real") servers, and each load balancer in the pair running this stack. Heartbeat, for high availability within the pair, so that when one goes down the other knows and takes its place. ldirectord, to track which of the backend servers are alive. IPVS, doing the actual layer-4 forwarding based on those health checks. ifconfig, for local interface and IP configuration. And Quagga, for the route advertisement. Do go through that tutorial site to see how this is configured in case you want to get really practical; I found essentially this exact setup there, and I'll try it myself if I find time.
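For flavor, here is roughly what configuring that layer-4 forwarding by hand with ipvsadm looks like; a hedged sketch reusing the hypothetical addresses above, not commands from the paper:

```
# Create a virtual service on the VIP, TCP port 80, round-robin scheduling.
ipvsadm -A -t 10.200.0.5:80 -s rr

# Add the two real servers in direct-routing ("gatewaying") mode,
# matching the paper's note that all VIPs use DR.
ipvsadm -a -t 10.200.0.5:80 -r 10.8.8.48:80 -g
ipvsadm -a -t 10.200.0.5:80 -r 10.8.8.49:80 -g

# Inspect the forwarding table.
ipvsadm -L -n
```

In the real setup you wouldn't run these by hand: ldirectord issues the equivalent of the add and remove operations as its health checks pass and fail.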
Next, adding new services to the setup. Remember, they provide anycast as a service, so what happens when a team at Google spins up a new service? (Again: 2010, not today.) "Services can be added to anycast by simply configuring their backends in a virtual IP on an anycast-enabled load balancer"; basically, editing a configuration file like the one we just saw. "The network configuration will already be in place, significantly reducing the barrier of entry for new services." Because the network is already configured, a new service just gets a virtual IP attached: the payment service gets one virtual IP, the auth service another, and so on. That's my understanding from reading it; I might be slightly off here or there. "Expanding a service to a new location follows exactly the same process; anycast routing will take care of sending user traffic to the new, nearer load-balancer setup." Services using the setup at the time: DNS, HTTP proxy, RADIUS, web SSO, NTP, and LDAP (then in test). As an aside, Google was founded in 1998, so in 2010 this was a roughly twelve-year-old company running this.

Now, failure modes and recovery times. "All of the mentioned recovery times take into consideration our specific anycast deployment; environments with different timeouts and configuration parameters can have different response and recovery times. The route propagation time is less than 1 second, and we have a 30-second dead timer for routers to consider a BGP peer dead." The dead timer is where you set how long routers wait before considering your peer, effectively your load balancer and hence your site, dead. "Cleanly stopping the BGP peering service creates an outage of less than 1 second for the service as the routes update. In the case of all service backends becoming unavailable, it will take the service-specific health-check time plus the less-than-1-second route propagation delay for recovery": the time to discover the backends are down, plus route propagation, so a couple of seconds overall. And in a sudden network or power failure at a location, the entire site going dark at once, it takes the full 30-second dead timer to expire plus the small route propagation delay, so call it 31 or 32 seconds for the switch on a total site outage. Pretty solid.
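Summarizing those figures (my own tabulation of the numbers quoted above, not a table from the paper):

```
Failure mode                         Detection             Total recovery
-----------------------------------  --------------------  -------------------------
Clean BGP peering shutdown           immediate             < 1 s (route update)
All local backends down              health-check time     health-check time + < 1 s
Sudden site network/power failure    30 s BGP dead timer   ~30 s + < 1 s  (~31 s)
```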
In conclusion, quoting the paper: moving anycast routing configuration into a managed load-balancing service minimizes the work and complexity required for service configuration while providing fast, automatic, distance-aware failover. It abstracts away the complexity of regionalized, site-based failover; pretty neat. It also helps reduce the load and complexity of the network infrastructure, "by aggregating service advertisements into one peering point per site and reducing the rate of routing changes to complete site failures." Individual teams don't have to build any of this; they pick a virtual IP for their service, get a few files configured, and they're pretty much sorted. And again: not every backend server advertises the anycast IP; only the load balancers are anycast/BGP peers, and behind them it's ordinary load balancing. The paper closes with its references, though the linked websites no longer open.

A few closing pointers. Do go through that Lisenet page, "Setting up a load-balancing cluster with Heartbeat and ldirectord," if you're interested in knowing how to actually set this up; it's through that article that my own still-hazy picture became concrete. In its diagram you can see the real servers, your actual auth or payment services, at 10.8.8.48 and 10.8.8.49, and the active and backup routers, the load-balancer pair, with different physical IP addresses, 10.8.8.46 and 10.8.8.47, but the same virtual IP, 10.8.8.45. That shared virtual IP is what Heartbeat and ldirectord maintain: the heartbeat runs between the pair, the router sends requests to whichever node holds the VIP, and if one goes down the other takes its place. That is the difference between the virtual IP and the physical IPs, and responses are directed back the same way. This diagram is what helped me understand where Heartbeat sits and where ldirectord sits, so do go through it whenever you find time. The article then walks through the Heartbeat (Linux-HA) configuration, dead timer and all, and then the iptables rules.
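For completeness, a hedged sketch of what the Heartbeat side of such a pair could look like; node names, interface, and timings are hypothetical, in the spirit of that tutorial rather than copied from it:

```
# /etc/ha.d/ha.cf : hypothetical Heartbeat config for the LB pair
keepalive 2          # seconds between heartbeats
deadtime 30          # declare the peer dead after 30 s of silence
bcast eth0           # interface the heartbeat runs over
auto_failback on     # primary reclaims its resources when it returns
node lb1 lb2         # the two load-balancer hostnames

# /etc/ha.d/haresources : lb1 normally owns the VIP and ldirectord
lb1 IPaddr::10.8.8.45/24/eth0 ldirectord
```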
Everything is laid out in proper detail, step by step; this person has written it up really well, so go through it to understand how to actually configure things. I'm not sure those packages are still supported by the latest versions, but if they are, try it out.

That's it; this was a pretty interesting paper. I did not expect a paper written in November 2010 to abstract so many things out so cleanly. To reiterate, the paper is "Anycast as a Load Balancing Feature," and I've attached the link in the description down below. In the description you'll also find a link to a Google form about what topics you want me to cover. I've been doing this for four or five weeks now, and it's just fun: click record and start reading papers and blogs. Last week we did the CAP theorem; this time I picked this paper from that same list of topics, where someone wrote, "Can we talk about anycast and how that setup would work, which you covered on a podcast?" I thought, why not, went looking for something a little more credible, and stumbled upon this paper.

I hope you found it amusing. I love reading papers, and I try to be as hands-on as possible by building prototypes; I didn't get time to create one for this, but it's pretty fascinating, and the site I mentioned will let you set things up and try it out. I also saw an interesting question I don't know the answer to, and again, I'm very naive with networking. The question, from Suran, was: how can we prevent someone else from advertising another person's IP? I have no idea yet. Try asking ChatGPT or Bard for some pointers on what to explore; if I find something interesting about it, I'll definitely make a video.

Thank you so much, folks, for tuning in on a Saturday evening. I'll edit this video a bit and upload the recording to the channel. Have a great evening and a great Sunday. Bye-bye!
Info
Channel: Arpit Bhayani
Views: 15,551
Keywords: Arpit Bhayani, Computer Science, Software Engineering, System Design, Interview Preparation, Handling Scale, Asli Engineering, Architecture, Real-world System Design, Load Balancer System Design, How to design a load balancer, highly available load balancer, how load balancers scale, how anycast works, anycast in load balancer, how google works, failover using anycast
Id: WjT253DBlXk
Length: 45min 20sec (2720 seconds)
Published: Sat Oct 21 2023