Geographically distributed multi-master replication with PostgreSQL and BDR

Video Statistics and Information

Captions
It's my pleasure today to introduce Craig. Craig does PostgreSQL, and he has a special interest in meeting the needs of the real-world user. So thank you, Craig.

Thank you. It's a bit odd, actually, because I got into PostgreSQL looking at usability issues and the really common use cases that everyone faces, and now here I am writing distributed multi-master systems for niche areas and spending half my time arguing with people about why they shouldn't use them. So it's been a very strange journey. I'm Craig Ringer, I work for 2ndQuadrant; we do pretty much everything PostgreSQL at the company, and I will try not to talk about the commercial side as much as I can. I stay involved on Stack Overflow, you can find me there, I'm a reluctant Twitter user, and so on. But please, folks, if you see usability issues in Postgres, I'm really interested. I don't think the core community pays enough attention to it, and I'd really like to hear from you. So that's my little personal interest aside.

Now, the talk proper. The company I work for: I said I wouldn't talk about them too much, and here they are, but that's life; they paid me to fly here. Lots of PostgreSQL sponsorship, lots of development work. I've been involved in all of the replication features and so on, and a lot of that part of the core project has been built with and through the team I work with. Supporting that outfit helps pay for parts of it; I certainly can't claim all of it.

So, the talk: multi-master. Why do we hear so much about multi-master? Now, I know the talk title is partly about BDR, the specific multi-master flavour I work on, but I'm actually mostly here to talk to you about multi-master and distributed systems in general, and about the fact that there isn't just one. There is not one pure multi-master; there are actually lots of different ones. People say "we need multi-master", management says "we need multi-master", this guy over here told us we need multi-master, it will solve all our availability problems, everything will be magic, we'll scale better, we need it, get us multi-master. It's just not that simple, because it's like apples and oranges, except even more absurd: there are really discrete sets of different multi-master systems with different trade-offs and different compromises, and it's not just implementation detail, it's fundamental, and as I'll get to later, it's physics.

So, a quick review of traditional high availability in PostgreSQL and other block-based database systems. Here's a typical Postgres deployment: you have one master and multiple standbys. They're read-only standbys, probably streaming at the block level with Postgres's physical streaming replication protocol, but this is just an example; you could be using DRBD or whatever. On a failure, the usual thing happens: one node fails, so you fence it, you promote a replica, life goes on, you repoint the clients, there's no drama. This is probably old hat to most of the audience here. There are other ways: many SAN vendors will tell you that replication models like this are very silly and a waste, and that what you should do is buy their large, expensive SAN, which will never, ever fail, and use a shared-storage model. I don't have to tell you why you shouldn't use that, because Postgres doesn't support it; but needless to say there are problems. That said, it has its uses, and all of these models have their uses.
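A minimal sketch of what monitoring that traditional setup looks like, assuming PostgreSQL 10 or later with streaming replication (illustrative, not taken from the talk's slides):

-- On the master: which standbys are connected, and how far behind their replay is.
SELECT application_name, state, sync_state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
  FROM pg_stat_replication;

-- On a standby: true means it is still in recovery, i.e. it has not been promoted.
SELECT pg_is_in_recovery();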
You've been running a traditional replication setup, and it's all going pretty well. You've had a bit of trouble with the tools, but you've got them working. Then someone says "we're starting a Singapore office". This is fun. So you create a clone of your replica over there, everything gets up and running, you're happy, and then the Singapore clients start saying "this is really slow, all my apps are really slow, everything I do is slow, it's painful, and when I save changes, sometimes when I load them again they've disappeared, and then they reappear a little while later; please make this stop."

What's happening (oops, I'm jumping ahead of myself), what's happening here is that the clients in Singapore can read from the local node, if you're doing some kind of read load balancing, but they must write via the master in Sydney, and you've probably got several hundred milliseconds of network latency alone between those two. Lots of database systems will be doing many round trips for a simple write, so you could have seconds between those two sites. In terms of human perception, I'm sure you're familiar with the idea that above about 200 milliseconds people start noticing, at about 500 milliseconds to a second they start getting frustrated, and after about five seconds they go and get coffee. So you're facing frustrated users, productivity losses, and so on, and someone says "why aren't we doing multi-master?", especially once this happens: one day your master, or your main office, goes down, and you go "wait, we planned for this, this is great, we'll fail the Sydney clients over to the Singapore office, no dramas; sure, Sydney's slow, but we're up, we've retained our connectivity", at least assuming the WAN isn't what failed, which it does.

So, the traditional model again: multi-site failover, disaster recovery, you stay up; we talked about the slow reads and writes, I've got ahead of myself. If your WAN fails, then your second site is completely out of luck: they can read, they can't write, they're probably not very happy, but hopefully you've done your business continuity planning and you're ready for this. You know that this will fail, you know that you can go read-only, you can cope. But management will inevitably say "that's not good enough, we must stay up and available from all of the nodes". Well, you can do that, but this isn't how. This happens quite routinely; one of the fun little things that I get to do periodically is clean up after it: someone promoted the replica. It's usually a script; someone wrote a script that auto-promotes when things break so they don't have to deal with a pager call at night. Now, I hate automatic promotion. Please don't use automatic promotion unless you test it, in production, okay? Because what will happen is your system will auto-promote when it's never done it before, and it will break, and then they'll page you, and it's worse, like this: you'll get told "well, Sydney's saving stuff and it's appearing in Sydney but not Singapore, and vice versa, what do we do now?" There are pretty much no really good automated tools to clean up after this data divergence, so you get to keep the pieces and clean them up; be prepared for a pretty sad couple of weeks. Most of us want to avoid that.
In the end, what it comes down to is that any automated promotion system is inherently limited, because to safely promote after a failure you must ensure that the old replica, sorry, the old master, is gone. It is absolutely non-functional; it's fenced off. The term "fencing" just means isolated: it's probably still there, but you've pulled the network, you've removed its VLAN and access, you've done something that makes it inaccessible. The other term that's often used is STONITH, or Shoot The Other Node In The Head: pull the plug, physically destroy the server with a chainsaw, you know, make sure it never comes back. But you cannot do these things remotely without a sideband, because your communication channel is gone. Lots of solutions exist. You might use a sideband manually: telephone them and say "make sure the plug is pulled", and hope the phones still work. You might have a slower sideband that you can use for automation, such as an expensive satellite link that you can't run your database over but can run a control channel over, that sort of thing. Some people use DNS-based systems; they're a terrible idea for relational databases. They're great for some stuff, but because relational databases have strong ideas about data integrity, you tend to create a split-brain situation another way, where some changes go to one server and some to another, and things get exciting, so I don't recommend that for relational databases. Again, the simplest solution is to accept that you have a point of failure: use a proxy, and if things go down there, they go down. And a surprisingly sensible option, in my opinion, is to accept and plan for the divergence: have a cleanup process, and make allowing divergence to happen and fixing it later a part of your business continuity plan, part of the disaster recovery that you test. You've got tools, you've got plans.

Part of the reason for that is that you cannot prevent divergence completely, even in a simple single-master system. People may not realise this, but what we're showing here is that two transactions have been committed on the master, and the second one, tx2, commits after the network failure. So where does it go? It was on the old master; it never ran on the new one; it's gone into the ether. It's a form of divergence where the old timeline ends and the new one begins. There's no duplication, but committed changes still got lost, and if other systems are aware of those commits then you've got inconsistency with those other systems, and so on. So you always have to allow for some data loss or divergence.
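A hedged sketch of how you might quantify that risk before promoting, assuming PostgreSQL 10 or later and that the old master is still reachable somehow (the LSN values shown are purely illustrative):

-- On the old master: the latest write-ahead log position it has generated.
SELECT pg_current_wal_lsn();

-- On the standby you plan to promote: the latest position it has replayed.
SELECT pg_last_wal_replay_lsn();

-- The difference, in bytes, is roughly the committed work that exists only on the
-- old master's timeline and will be stranded there once you promote.
SELECT pg_wal_lsn_diff('0/3000148'::pg_lsn, '0/30000D0'::pg_lsn);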
So someone comes along (they've probably been nagging you since we started this talk) about why you should use multi-master: "look, we'll just route transactions to the second master; when the old one comes back up we'll replicate the transactions from the promoted master back to it; they can both be masters, they can play together, we'll merge everything, it's magic, it's wonderful." And it really is, for some things, don't get me wrong. A multi-master system can be really wonderful: you've got low local write latencies for your clients, everything's fast, everyone stays up, it's just perfectly available, there's none of this failover and fencing mess to deal with, you're not worrying about pulling plugs and all of that. It's absolutely brilliant, and there are situations for which that's true; there are simple data collection applications, some things where that is exactly what you want. But the vendors got into it, and the vendors said this is perfect for everything, you should use it; really, they probably went to your management and said you should use it. And it's not that easy. Not with relational databases, not with most things that don't have a very simple insert-only log of changes. Because while you're isolated, you might make changes that could not occur in a single-node database system, and there is no way to prevent this while we're isolated, because we cannot talk to each other: we can't exchange locks, we can't do row locks, we can't do any of this. So if we're remaining available, if we are continuing to accept and commit transactions, both of these nodes must commit their changes. So what do we do when the network comes back? What's the answer? There are two correct answers to a single query that could only have one answer; I didn't make it clear, but the assumption is that there is a unique constraint here.
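To make the "two correct answers" case concrete, here is a hypothetical sketch in plain SQL (table and values are made up; this is not BDR-specific syntax):

CREATE TABLE customer (
    email text PRIMARY KEY,     -- the unique constraint in question
    name  text NOT NULL
);

-- While the link between the nodes is down:
--   on node A:
INSERT INTO customer (email, name) VALUES ('jo@example.com', 'Jo Smith');   -- commits locally
--   on node B:
INSERT INTO customer (email, name) VALUES ('jo@example.com', 'Jo Bloggs');  -- also commits locally

-- Both applications were told COMMIT succeeded. When the nodes reconnect, each
-- receives a row that violates its primary key, and something has to decide,
-- after the fact, which of the two committed answers survives.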
There are lots of other issues related to that. Lots of people use synthetic keys, and if you're using synthetic keys you're usually using a counter, and if you can't increment the counter because your link's gone, or it's slow, or whatever, you're in trouble. You have issues where writes become visible on one node and not another, and then you make calculations based on them; it gets exciting. But at a more fundamental level you're facing the problem that your application was written, possibly some time ago, because, as a number of these talks have mentioned, nobody has green fields; especially if you are using an RDBMS, you are working with existing applications. Those applications' authors may even have read the manual, they may have read some of the spec, and they have come to rely on things like row locking working; they have come to rely on data that's committed being visible after they committed it, and not vanishing. If you are doing simple load balancing between multiple nodes, those assumptions may cease to be true; certainly they inherently have to cease to be true if those nodes can't talk to each other any more. So you're breaking fundamental assumptions your application's authors have made. You're pointing them at a new system that has the same interfaces, the same queries, and so on, that the app is used to, so you won't get any errors, not up front, but you're changing the rules underneath them: you're changing what the words they're saying mean, even though each word individually looks the same. So great caution is required. You can leap in, and it all seems to work really wonderfully at first, and then you have your first network outage or fault or whatever, and things get really, really messy.

So, the fundamental point, if you go home with one thing today... well, okay, the key points I want you to go home with today: there is more than one kind of multi-master, and all of them involve trade-offs, as do single-master systems. Multi-master is not always the right answer. It can be brilliant, but you need to do requirements analysis for your applications, your tools and your needs, including your legacy concerns, and determine which tool is right for you. And a really useful trick I've learned with this is making salespeople say no: turn questions backwards, make them say no. If you ask "it absolutely will lose all my data when it crashes, right?" and they say "yes, it will lose all your data when you crash", well, wait, what? You're filtering out the yes-men. My point here is that there's a lot of really dodgy sales material out there on a lot of products that glosses over a lot of the details; you need to dig into the specifics, and you need to test.

So, I've said there's more than one kind of multi-master, and I'd like to get into that. These categories are not my invention, but I find them really useful: I describe multi-master systems to people in two broad categories based on strength of coupling, loosely coupled and tightly coupled. Now, the loosely coupled model tends to... sorry, my apologies, I got that backwards: the tightly coupled model tends to have shared storage, or replication with lots of coordination. You've got a lot of inter-node chatter, you generally have conflict prevention, and you have a highly consistent model where the application gets to pretend that it's talking to one database, one instance, one node, and all of the multi-node, multi-master machinery is largely concealed. Now, this is a spectrum, not two discrete things, so you've got a lot of systems at various positions along it, but in general tightly coupled systems try to look like a single node; certain big vendors, big names, you may know them. Loosely coupled systems are quite the opposite: we go for more of an optimistic, resolve-conflicts-after-they-occur setup. They tend to be exclusively replication-based, because you can't rely on being able to access the same storage, and if you can, it's slow. You'll know the term "eventual consistency"; it was really big in the 2000s, and while the name may have faded, that's pretty much what a lot of loosely coupled systems are doing: eventual consistency, lazy conflict resolution, replication, and so on. It's all great, but as a lot of people who have worked with tools like Cassandra and the other ones that became popular early on have learned, app changes are pretty much unavoidable.

So on the left we have a common major vendor's application cluster model, approximated: lots of chatter, nodes talk to each other and agree beforehand: "I'm going to commit this", "yes, you can commit this", "I'm acquiring this transaction ID", "I want this block of IDs". Lots and lots of chatter. They perform well on low-latency networks that are reliable. They have facilities in place for adding nodes, removing nodes, handling failed nodes and so on, but they like to have two well-defined states: this node is alive, or it is dead, not "I don't know". You can also do tightly coupled systems with replication. The one on the right there is actually somewhat similar to how Postgres-XL works, but there are numerous other tools. The point is that it's a replicated system, but you have manager nodes that provide single points of truth, to say "this transaction is committed, this one is not". They provide a globally consistent view of what's committed, what's locked and what's visible, to allow the app to pretend it's talking to a single system. By and large it's really nice, but it doesn't work so well when you start distributing it geographically to meet wider availability needs, to serve users who are separated by latency and unreliable networks. It's generally limited to data-centre-level availability, because these systems just don't work over high-latency networks; you can't exchange 400 round trips per transaction.

That's why we have the loosely coupled systems, of which BDR is one of many examples. Many of you will know Galera; that is yet another one, with different trade-offs, but it's also largely a loosely coupled system. Peer nodes: there's generally no single master or coordinator; rather, the nodes are generally equal peers. They replicate changes to each other. They don't do a lot of work saying "I'm about to change this", "yes, you may", "no, you may not". What they do is say "I ran this transaction; apply the changes". It tends to be replication rather than clustering, so there's a lot less overhead and a lot less chatter. But there's a reason all of that chatter and overhead exists: when we get rid of it, we are throwing away a lot of the guarantees that are made by those tightly coupled and single-node systems about how the system behaves.
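As a rough illustration of that "replicate committed changes, no permission round trips" style, here is what one direction of such a mesh can look like using stock PostgreSQL 10 logical replication rather than BDR itself (the host and object names are made up):

-- On the origin node: publish committed row changes for a table.
CREATE PUBLICATION app_pub FOR TABLE orders;

-- On a peer node: subscribe and apply those changes asynchronously.
CREATE SUBSCRIPTION app_sub
    CONNECTION 'host=node-a.example.com dbname=app user=replicator'
    PUBLICATION app_pub;

-- Changes flow only after they commit on the origin; nothing asks "may I commit this?"
-- beforehand, which is exactly why conflicts have to be detected and resolved afterwards.
-- Tools like BDR effectively run this kind of flow in both directions between all peers.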
Perhaps I've got myself slightly out of order. You know the term ACID? Okay, hands up, is anyone not familiar with the term ACID in databases? It's fine, I wasn't a while ago. No? Okay. It's a common set of guarantees that's promised by most or all relational database products, and physics, as I'll get to shortly, says you can't have all of that in a distributed multi-master system that is designed to be fault-tolerant. Everyone wants both; most marketing departments promise both: of course you can have perfect real-time consistency across your 500-millisecond link to the moon, that would be very nice, that's no problem, yes, yes, we can do that, just sign here. Physics says no. If they're promising that, they're not usually lying; what they're usually doing is weaselling a little bit, because many products have two or more different operational modes, and what they're saying is that in mode one it can do this, and in mode two it can do that, therefore the product as a whole can absolutely do both of those things. It just can't do them at the same time.

So: time, light speed. We can't go faster than light, and life happens: real-world networks are messy, they get cut, things break. So if your vendor promises you an immediately consistent, globally distributed, multi-master, real-time system that is application-transparent, they have one of these. And there's a really well-established body of theory to justify that. Many of you will have heard of the CAP theorem. It's actually a bit oversimplified, and it's commonly misused to talk about databases, but I'll mention it because it's a good entry point and it's familiar. The idea is you can have something that's consistent, something that tolerates network outages and partitions, and something that's highly available and deals with nodes breaking, but you can't have all three; you can only have two of them at a time, pick. The truth is that the definitions of partition tolerance, availability and consistency used in CAP are not the same as the ones you think they are, because it's a really simplified abstract model, which is why we have the PACELC model, blah blah blah; I'm not going to read that to you, you can look it up, it's on the slides. The point is: when it's up, it's fast or consistent, and you can't have both. When it's online it can either be really responsive or guarantee consistency, because light speed says you can't chatter back and forth between distant nodes. And when it's down, when the networks are partitioned, it can promise consistency, or it can limit availability by bringing down all but one of the partitions, all but one of the groups of nodes. You can't have both, because you can't agree on what you're changing.

BDR, which is, you know, what I'm here to talk about in a sense, is a loosely coupled system, so we chose partition tolerance and latency tolerance, and we sacrificed consistency. We're talking more Cassandra than RAC here; it's not a tightly coupled system. What this means is that I spend a lot of my time arguing with people about why they shouldn't use it. Now, that said, I think it's great, I think loosely coupled systems are great, but you must understand them. You can't point your 1985 application, written for one user per terminal and one-to-one user-to-database connections, at this stuff; you can't point it at a load balancer and just figure it'll work. You have to accept that changes are needed. You need to deal with uniqueness of keys. You need to deal with what happens when conflicting changes are made in isolation, and you need to make decisions about how your application and your system respond to that. Those are application-specific things; they are not something the system can just magically do for you. Okay, a simple example: if node one sets a row to the value 4, and node two sets it to 7, we don't know whether one of those nodes added one, or whether it set it to 4. We don't know the old value. We don't know whether the new merged row should be the sum of two additions, or the newer value that was set. We can't know, because it's application-specific: we don't know if it's a counter or a flags field or what; the information just isn't there. There is some theory around doing that in select cases, but it's out of scope.
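One flavour of that theory, as a hedged sketch in plain SQL (not a BDR feature; the names are invented): if the application knows a column is a counter, it can store increments as insert-only rows and derive the value, so isolated concurrent updates merge by summation instead of overwriting each other.

-- Instead of UPDATE counters SET value = value + 1 WHERE counter_id = 42;
CREATE TABLE counter_deltas (
    origin     text NOT NULL,      -- the local node's name; assumed unique per node
    delta_id   bigserial,          -- local sequence, only unique together with origin
    counter_id bigint NOT NULL,
    delta      bigint NOT NULL,
    PRIMARY KEY (origin, delta_id)
);

-- Each node only ever inserts its own deltas:
INSERT INTO counter_deltas (origin, counter_id, delta) VALUES ('node-a', 42, 1);

-- The current value is derived, and comes out the same regardless of the order in
-- which the different nodes' rows eventually arrive:
SELECT counter_id, sum(delta) AS value
  FROM counter_deltas
 WHERE counter_id = 42
 GROUP BY counter_id;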
So, I've basically told you that I think it's great but it's really hard, and that I argue with people not to use it. So how do you use it, if you actually want to? If the benefits, the advantages of the geographic distribution and so on, are worth it for you, if your application needs to stay up in remote locations, if you can't tolerate being read-only, if you need this stuff and you're willing to do the work? Okay, first, my favourite phrase: "but we can't change the application". If this is you, if this is your people, if this is your team, then I'm sorry, I just can't help you. I have actually dealt with people who promise that they can make the application changes, who promise that they can switch to a different key generation model as we discussed, and when we actually come to deploy it, it's "no, we can't change the application, ask the BDR vendor". It's like they weren't listening to anything. So: changes. Because it doesn't follow the full ACID model, you cannot just point the existing app at it. Can't change the app? Can't do it.

If you can change the app, then you can come up with solutions for things like the key generation issues. Nodes generate keys and replicate them, and everyone's happy, as long as the keys are unique. But we need to make sure the nodes all generate unique keys without talking to each other, because they can't promise to be able to talk to each other. There are lots of different methods for doing that. We can use some side channel that is hopefully very highly available and outside the normal database replication channel; cheque numbers, things that are really business-critical, you might do with that, and you might accept that it means you lose some availability there. A common solution is instead to use discrete counters, step and offset, where each node simply gets a different partition of the key space. You can use a natural key; please don't, but you can. Names are not keys. Just repeat that to yourself over and over and over again: names are not keys, they have none of the classic characteristics of keys. If you have not read the article about falsehoods programmers believe about names, please read it. And government IDs are keys right up until the government changes the ID scheme. So yeah, natural keys: they seem great, people love them, they've been done. The system we used in BDR 1 was to communicate between the nodes in a fairly fault-tolerant way to agree on ID block allocations, and it works pretty well so long as, you know, the network isn't out for too long. And this one is pretty fashionable at the moment: use really, really big random numbers. It works pretty well, but it's actually really unpleasant for B-tree indexes, because your inserts go all over the place, you end up with more page splits, your scans aren't time-ordered, and so on. So there are trade-offs. It's not magic, but it's a choice, and like all of this, you have to do your requirements analysis and make these choices yourself; the vendor and the tool cannot make them for you, because they depend on the application.
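Minimal sketches of two of those approaches, step-and-offset counters and big random keys, using stock PostgreSQL features rather than anything BDR-specific (the node count, offsets and names are illustrative):

-- Step and offset: allow for, say, up to 10 nodes; node 3 then hands out 3, 13, 23, ...
CREATE SEQUENCE order_id_seq INCREMENT BY 10 START WITH 3;   -- START differs on each node
CREATE TABLE orders (
    order_id  bigint PRIMARY KEY DEFAULT nextval('order_id_seq'),
    placed_at timestamptz NOT NULL DEFAULT now()
);

-- Big random keys: collisions are effectively impossible, but inserts land all over
-- the B-tree, so expect more page splits and no time-ordered scans.
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE TABLE orders_uuid (
    order_id  uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    placed_at timestamptz NOT NULL DEFAULT now()
);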
We know this one too: timestamps. We can use bit ranges in our IDs: part of it is a timestamp, part of it is the node ID, part of it is a local counter. It's semi-time-ordered, so it's kind of friendly, but you tend to run into the problem that 64 bits just isn't enough, and you will either have a deadline where you simply run out of IDs because your timestamps hit the limit, or you don't have enough node numbers, or whatever. But that's actually one of the models that BDR offers and prefers, and we just accept the deadline as a fact of life; we'll go to 128 bits if it comes to that.
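A hedged sketch of that kind of composite key (the exact bit split here is arbitrary and is not BDR's actual layout): 41 bits of millisecond timestamp, 10 bits of node number and 12 bits of local counter packed into one bigint, so values are roughly time-ordered until the timestamp bits run out.

CREATE SEQUENCE ts_id_counter;   -- per-node local counter

CREATE FUNCTION next_ts_id(node_id int) RETURNS bigint AS $$
    SELECT ((floor(extract(epoch FROM clock_timestamp()) * 1000)::bigint
             & ((1::bigint << 41) - 1)) << 22)           -- 41 bits: milliseconds since epoch
         | ((node_id::bigint & 1023) << 12)              -- 10 bits: which node generated it
         | (nextval('ts_id_counter') % 4096)             -- 12 bits: local counter
$$ LANGUAGE sql;

-- Usage: CREATE TABLE events (event_id bigint PRIMARY KEY DEFAULT next_ts_id(3), ...);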
I've talked about conflicts, but what are conflicts? I've talked about conflicting inserts, and BDR's answer to those is last-update-wins. Most of the time we can't know what the app really wants, so we guess: we pick the newest, and if the app doesn't like that, well, in BDR we offer some limited user-defined conflict handlers, but they are extremely limited, and the truth is that most of the time what we should be doing is designing and tweaking the app so that the last-update-wins model is the correct outcome. And it will really surprise your application developers; this isn't just BDR, most such systems suffer from this sort of thing, and the conflicts can catch them out. The conflicts aren't just "we inserted two things in two different places": a conflict can be "I inserted this row, some other node has deleted it, but some third node hasn't replayed the delete yet", so an insert races a delete that not every node has seen. There are non-trivial forms of conflict, and there are forms of conflict that only appear with three or more nodes. I can't go into all of it, because there's a hell of a lot, and that's why we have documentation, but the one that I do want to highlight is foreign keys. If we delete an entity graph, and at the same time another node adds a child to it, there is no correct solution once both of those are committed. If you resurrect the parent, you've violated one node's promises about its committed transaction; if you remove the parent, you've violated the other node's promises. There is no single correct answer. Foreign keys are not compatible with completely distributed systems if those systems are both changing the same set of related objects. So again we come to application design: you can accept it and drop the foreign key (sorry, I know I sound like old MySQL 3.x telling you foreign keys don't matter, but it's physics, it's not me), so sometimes you just don't have it; or you tweak your app so that it makes consistent changes to entire object graphs, and if necessary deletes and recreates the graph. It's a bit awkward; I want to add some better tooling for that, and some better conflict handlers for that.
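Spelled out as a sketch in plain SQL (made-up tables; the point is the semantics, not any particular product's syntax):

CREATE TABLE parent (
    parent_id bigint PRIMARY KEY
);
CREATE TABLE child (
    child_id  bigint PRIMARY KEY,
    parent_id bigint NOT NULL REFERENCES parent (parent_id)
);

-- Both nodes start with parent 1 present. While they are partitioned:
--   node A:  DELETE FROM parent WHERE parent_id = 1;                    -- commits
--   node B:  INSERT INTO child (child_id, parent_id) VALUES (100, 1);   -- commits
--
-- When replication resumes, no outcome honours both commits: keeping the child
-- means resurrecting the parent that node A was told was gone, while applying
-- the delete orphans or discards the child that node B was told was saved.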
But fundamentally, what it comes down to is that we will violate your assumptions, so check your apps, right? Check your apps. We have tools: you can simulate latency, you can run VMs that get killed randomly, you can start and stop things, you can introduce delays, packet loss and so on. You should do all of this and simulate the real-world conditions. But you can also tweak how your application works: you can try to improve data locality, so that all of your accesses to a given set of data are strictly confined to one node, and if you fail over and suddenly it's not all on one node, you just deal with the conflicts; you plan for that, perhaps with a manual resolution process. But yeah: app tweaks, design tweaks, and testing; there's never enough testing.

I've kind of meandered a little bit and I'm short on time, so I'll just quickly get to the point that beyond write conflicts there are other anomalies that aren't as obvious. Like here: one node makes some changes, another node makes some changes, both of them compute a sum, and the answer to that sum is not an outcome that could occur if both transactions were run on a single node, because of the delay in replicating the changes. This isn't too bad normally, but if you then use that sum to write to another row, that stale data can propagate through the system. If any of you are familiar with Postgres's snapshot isolation, it tries to prevent this sort of thing on a single node, but what it comes down to is that you can't have the same semantics. I'm repeating that for a reason: you cannot have single-node semantics.

Another example: we don't replicate lock state. You lock a row, and it's locked on the node you ran the locking on, not on the others. So if you're doing things like "lock the counter, take the next value", counter generation used for cheque numbers and so on, you'll get duplicates, and it's silent. You'll get conflict reports in the logs in BDR, but that depends on the system you're using; others will behave differently. Again, you might think you can fix this with inter-node chatter and such, but if the network goes down and you want to stay up: pick one.
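What that looks like in practice, as a hypothetical sketch: the classic "lock the row, take the next cheque number" pattern is safe on one node and silently unsafe across two asynchronously replicating nodes.

CREATE TABLE cheque_counter (
    account_id bigint PRIMARY KEY,
    next_no    bigint NOT NULL
);

-- The usual single-node pattern: lock the row, read it, bump it.
BEGIN;
SELECT next_no FROM cheque_counter WHERE account_id = 7 FOR UPDATE;  -- blocks other local sessions
UPDATE cheque_counter SET next_no = next_no + 1 WHERE account_id = 7;
COMMIT;

-- On one node this serialises everybody. Across two nodes the FOR UPDATE lock only
-- exists on the node that took it, so both nodes can hand out the same cheque number
-- and you only find out afterwards, for example from a conflict report in the logs.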
A BDR-specific tip, though it applies to any asynchronously replicating relational system: making a schema change may create a situation where rows that were committed on remote nodes no longer make sense on the nodes that have made the schema change. You might add a new column that must be non-null, while other nodes have committed, but not yet replicated, rows where that value is missing. There are some tweaks we can make around that, but fundamentally what it comes down to is that you have to flush all the replication queues, stop the world, synchronise everything, make the schema change on all the nodes, and then get back to normal. So it's a special case of having to go temporarily synchronous.

I think that's most of my application-specific tweaks, so I'd like to get on to the testing side. That is: you've tested your app. Sure, you tested it, it's brilliant; look, you just pointed it at three BDR nodes in your lab, it's running great, you didn't have to change anything, your users are happy and you're planning on going into production next week. Don't. Because your lab's network is fast, it's reliable, it doesn't drop packets, the nodes don't die, and the load probably doesn't reflect the real-world load; you won't get the concurrency, which is how you actually see the problems that come from distributed, asynchronous, multi-writer systems. Real-world testing is really important. Try to get as close to the actual conditions your production system will run in before you go live. You can't just go straight to production; you can't ask a hundred million users, or however many you have, "hey, test my beta". So what you do is compromise: you have a set of graded test environments. At minimum you should have something that does latency simulation, randomly kills nodes, drops packets, duplicates packets, a bit of craziness. It's not perfect; "someone will always come up with a better idiot" is the phrase I've heard, and you can't test for everything, the world is crazy, but what you can do is test for as much as you can, and add new tests and new conditions when you discover new crazy. So I love Chaos Monkey, or at least that principle, not necessarily that tool. This is why I was less than thrilled with automatic failover at the start: people use it, but they don't test it. Automatic failover is fine, single-master or multi-master, if you test it, in production, all the time; if you don't, it will make an existing outage worse. This is massively more true with a distributed asynchronous system like this, because latency is all over the place, or a network fails in a way where it passes packets in one direction but not the other because they've been routed through Russia. There is a lot of craziness out there, and the more of that craziness you can creatively simulate, the fewer pages you'll get.

So: loosely coupled systems are not effort-free. It's not magic. No multi-master is magic, no multi-master is compromise-free, and for a lot of users the right solution is a single-master deployment with failover and STONITH and fencing, or maybe a tightly coupled system, or a hybrid; you might have tightly coupled systems in different regions with replication between them. You have lots of choices. The only choice you don't have is the magic teleporting wormhole whiz-bang that your vendor is trying to sell you, where it's all consistent and highly available all of the time. When your management comes to you and tells you that you need that, push back. Push for testing. Don't go into production this week; yes, we know you promised, yes, you'll pay SLA penalties, and they'll be worse if we fail. Okay? Test. Push back. And accept the benefits, work with the benefits: you've got happy users, fast databases, everything's close, the latency is low, the network goes down and they can all still work, and so on. It's wonderful. I work with a film studio that does this, and it's just brilliant, because they lose millions an hour if the users in one studio can't work because the database in the other studio is down, and they can't just run it all synchronously; that's impossible for their workload. But they've planned for the problems: they have a recovery plan, they have a plan for short-term outages and for what happens if it's out for too long, they're ready. You need to be ready if you're going to use it, and push back against the hype. So yeah, at the risk of overdoing the nines: you can add more nines of availability if you do it right, but accept the effort, accept the planning, and don't just throw your apps at it, please; I don't want more work in urgent support.

So, questions, anyone? Have I repeated myself and confused you too much, or are we relatively okay?

Okay, the short version of the question, to summarise: if the masters aren't geographically distributed and you assume the network between them is reliable, can you use something like keepalived to ping between them, choose which one is the right master at any given time, and keep the other as a passive master candidate you can fail over to at any time? Lots of people want that. Yes, you can, but you are not free of those conflict problems, because at the time it fails over there can be changes committed on the old master but not replicated to the new one. You start pointing writes at the new one; will the old changes replicate from the old master? That depends on the way you failed over. If the old master was just deemed too slow and you failed over, then they might replicate soon; or they might replicate in two weeks, when the power comes back on in the generator room; we don't know. But at some point, if there are committed-but-not-replicated changes, you'll face a conflict. What happens in a single-master setup instead is that you've changed to a new timeline and those changes are gone, absolutely forever. So it's probably better to have conflicts than to throw changes away, but you have to plan for it, and it means that your app must be able to cope. So yes, you can do it, but it doesn't make the problems go away; it's just a different set of compromises.

Anyone else? Okay: can triggers suffer from similar problems regarding visibility and consistency? Yes, they can. Take the example I gave of the sum: you might be trying to assert that there can be only four child rows per parent. Your triggers on both nodes check this when they do independent inserts, and both of them lock the parent row, as they should, to ensure that concurrency is handled correctly locally. But those locks don't propagate, so when the two nodes reconcile their changes, when they flush their committed work to each other, you have five children, because both nodes added a child and together they overran the limit. So yes, triggers used for consistency and integrity suffer from exactly the same sorts of issues, because we don't replicate locking and we don't replicate changes until commit. That's also true, by the way, of related systems like Galera, which is a slightly different compromise but still largely a loosely coupled asynchronous system. What they do is replicate a change at commit time and look for conflicts, and they can bounce that change back and say "don't commit it on the origin node, because it conflicts with another node". I don't know if that's the default; apparently it is, okay, great; I was afraid of speaking beyond what I know, because I don't use it much. But that doesn't encode knowledge about which rows you read; it can't know what that trigger checked, so it protects against some sorts of anomalies and not others. It's a different point in the latency and partition-tolerance trade-off space, in PACELC terms; a different set of compromises. One more question, anyone? I've successfully confused the audience. Thank you.
Info
Channel: LinuxConfAu 2018 - Sydney, Australia
Views: 7,019
Rating: 4.92 out of 5
Keywords: lca, lca2018, #linux.conf.au, #linux, #foss, #opensource, CraigRinger
Id: ExASIbBIDhM
Length: 44min 34sec (2674 seconds)
Published: Thu Jan 25 2018