Bi-Directional Replication for Postgres: A Multi-Master Solution

Captions
...and with no further ado, I'd like to introduce Mark, one second, to talk about bi-directional replication. [Applause]

[Brief banter with the audience about the SCaLE conference in Los Angeles.]

So I'm here to talk a little bit about bi-directional replication, the 2ndQuadrant project. I'll give you a little bit about me: my name is Mark, I work at 2ndQuadrant, and I've been recognized as a PostgreSQL contributor for a little while now, originally for doing a bit of performance testing on various systems. Do people use pg_top anymore these days? A couple? Yeah, it could probably use a little bit of a refresh. I've also been a director at the United States PostgreSQL Association, trying to do some advocacy for Postgres from the US. And I'm from Portland, involved in the user group up there, if anyone has ever made it up to Portland.

So I came down to talk a little bit about multi-master replication: what the BDR solution is, and what to be aware of if you're interested in using it. How many folks are looking forward to using multi-master replication? How many are using it now?

I just wanted to do a quick review. I think a lot of people are familiar with the general idea of having a single master. One of the particular scenarios that makes people interested in multi-master is that you have two data centers, say one up in Canada and one in Argentina. You've got this dark blue node as your primary system, and anyone close to the Canada site is going to have fairly good response times. You're servicing people closer to Argentina as well, but they have to go up to that primary node in Canada, so their response times are not so good. Whatever changes are made in Canada get replicated all the way down to the Argentina node, which people can still use as a read-only system: as long as you have queries that only need to read data, you can access the Argentina site directly; otherwise you've got to go up to Canada. I made up some numbers; I'm not sure how realistic 160 milliseconds from Argentina all the way up to Canada is, but it conveys the idea that access is probably going to be a little slower than you'd like.

Has everyone done a hot standby with Postgres? How many have had these issues? Who has had to deal with split brain, because a standby was promoted by accident, or incorrectly, or on purpose? Basically, something goes wrong with a failover: a standby gets promoted by accident, or maybe it took a little longer to get that standby started than you were hoping. Depending on your situation, if you got into a state where you had promoted that standby, how do you get the other system back? For various reasons something catastrophic could happen; or maybe it wasn't so bad and you could deal with getting the previous system back online. And how many folks wish they could write to all the other systems? That's where we hope multi-master replication helps us, with all of those particular things.

Not a lot of text here, and no reason to read through all of it; it's the Wikipedia definition of a multi-master system.
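As an aside not from the talk: in the single-primary picture above, an application deciding whether it may use the nearby read-only replica for a query can simply ask the server which role it has. A minimal sketch using the stock function pg_is_in_recovery():

    -- Returns true on a streaming-replication standby (the Argentina
    -- node in the example), false on the writable primary in Canada.
    SELECT pg_is_in_recovery();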
So, a multi-master solution isn't something you can always just drop in. One of the important things to understand before diving in blindly, so to speak, is that there are some things it's good for and some things it's not so great at.

The main things a multi-master solution is good for: the first one, unfortunately, is having an application that is designed for a multi-master solution, and what we mean by that is an application that is aware of potential conflicts in the data being written or updated. One of the other use cases it's good for is geographically distributed data. An example of that: let's say you have customer data on the east and west coasts. That doesn't necessarily mean the data is physically separated, but if people on the East Coast are going to be modifying data, there's going to be a subset of the data that is specific to that region; similarly, users on the west coast would hopefully be modifying only the data of customers on the west coast. All that data could live in the same table, or it could be partitioned, but the idea is that there is a hopefully distinct set of data that each region is going to be working on, whether or not that data is physically separated (there's a small sketch of such a layout just below).

It also works well for insert-only workloads. You may or may not have a workload that is insert-only, but if you do, that's one thing that works well with a multi-master solution. And one of the other things people want out of this is being able to do a switchover or failover, though not in the same sense as with a hot standby: if your application works directly against one node because it's the closest one to you geographically, and that node goes down, you can simply redirect the application to another node in another part of the world.

The two bigger things you should shy away from: first is thinking that having multiple writable nodes is going to improve your write scaling. If you've changed data on one node, all that data also has to change on all the other nodes. It's not distributing the work, because each node in this cluster is a full copy of all the data you have. And one thing you'll find very quickly, if you weren't aware, is that if your workload is changing the same data on different nodes, it will be immediately apparent in how well the cluster performs.

How that looks is that each of these nodes talks to every single other node; the popular term is a mesh network, I believe. So the more nodes you have, the more communication is happening between them: with n nodes, each node maintains a link to the other n - 1, so nine nodes already means 9 × 8 = 72 directed replication connections.
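Going back to the geographically distributed data example for a moment, here is the small sketch promised above: one logical table whose rows are split by region using stock declarative partitioning (PostgreSQL 11 or later; the table and region values are made up for illustration):

    -- Applications read and write "customers"; each region's rows land
    -- in their own partition underneath.
    CREATE TABLE customers (
        customer_id bigint NOT NULL,
        region      text   NOT NULL,
        name        text,
        PRIMARY KEY (customer_id, region)
    ) PARTITION BY LIST (region);

    CREATE TABLE customers_east PARTITION OF customers FOR VALUES IN ('east');
    CREATE TABLE customers_west PARTITION OF customers FOR VALUES IN ('west');

Whether or not the rows are physically separated like this, the point for multi-master is the same: east-coast traffic should touch east-coast rows, so the two regions rarely write the same data.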
These requirements are the ones we had for the solution we wanted to put together, BDR; not all multi-master solutions necessarily have these specific goals in what they're trying to meet. We want to make each node accessible, because if you're going to have this cluster distributed around the world, you want your application to respond as best it can, so it needs to be able to talk to the node that's closest to it. We want to make sure that if you make a change on one node, all those changes do get propagated back and forth between all the other nodes in the cluster. And for those of you who know a little more about the internals of how replication works in Postgres: we know we can't use physical replication, because that will only give you exact copies of one node. Sorry, I can't think of a clearer way to put this: physical replication literally says "write these bits to these blocks." So the other way I'll try to put it is that when you make an update on one system, you need to logically make those updates on all the other systems. You can't apply the exact binary updates to all the other systems because, as David was saying, how those bits are physically laid out on the disks is not going to be the same if different changes are happening on each node at the same time.

One of the other requirements, true from the original release of BDR on 9.4, was that this still had to be Postgres under the covers; it can't be some Postgres fork. In the long term we wanted this solution to be highly available, meaning that if one of those nodes goes down, the rest of the cluster needs to be operational to some degree, fully operational ideally. And, similar to the bullet above, we didn't want to have some different on-disk format. I think it's a little oversimplified to say that you need to be able to swap BDR out if you didn't want to use it, but it couldn't be something that was, again, some kind of fork of what Postgres is or what it's doing on disk. One of the other goals, not immediate but over time, was that the work that goes into developing this BDR solution would eventually find its way back into the core Postgres code.

We try to illustrate some of this going on; it's something that has been happening over time. I don't know what the lines-of-code number is for BDR, but starting with 9.2, and the inception of BDR at 2ndQuadrant, one, two, three, four, about six years ago, there have been a number of things that have come into core from the work that's gone into BDR. To name a few of the big ones: background workers was one, and then the logical decoding. And even with 10, it's not that pglogical was directly committed into core, but I think a few of those concepts were eventually implemented there. So more to come, hopefully; I don't know what went into 11, but we'll see what goes into 12 and so forth.
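To make the logical decoding point above concrete, here's a minimal sketch (not from the talk) against stock Postgres, using the contrib test_decoding output plugin; it assumes wal_level = logical and an existing table t:

    -- Create a logical replication slot that decodes WAL into
    -- row-level change events.
    SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding');

    INSERT INTO t VALUES (1);

    -- The change comes back as a logical event ("INSERT: ..."), not as
    -- "write these bits to these blocks":
    SELECT * FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);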
Another graphic, to illustrate a little bit more of what the current iteration of BDR looks like: the blue nodes are the actively accessed nodes in this multi-master cluster, and there's also this concept of, I think we call them shadow masters. Conceptually, these gray nodes are replicas of the individual nodes. You can picture each of these nodes being in a different data center, for example; that's one way to conceptualize this. It's for greater availability for a particular node: if one of these nodes goes out, hopefully not the entire data center, one of these gray nodes can quickly take the place of that down system and continue operating.

Audience: So those are analogous to a warm standby, where you're not directing any queries to them until you switch one in?
Mark: Yeah, although the reality is that these are more than that; in reality these are hot standbys, up and ready to go, but the idea is to treat them like a warm standby.

So the BDR solution is actually fairly simple to describe in a nutshell, but there's really more to most multi-master solutions than just being able to write to more than one physical system, and there's still a lot to be aware of. If you're thinking about a multi-master solution, you need to put a little bit of effort into figuring out whether it really is right for you. So I'll go through some of the issues that come up and give you an idea of the effort involved: even if a multi-master solution is right for you, there may still be some effort you need to put into your application to make sure you're really using it properly.

How many folks are familiar with the CAP theorem? The idea that you can have two of three things, those three things being consistency, availability, and partition tolerance. I think, if I recall correctly, this theorem was really meant to be applied to network topologies, so in a distributed system it doesn't apply directly; there's a little bit of a... yes, thank you, that's exactly the word I was looking for. So how many have heard of the PACELC theorem? It's an extension of the CAP theorem that is meant to apply to distributed systems. It's a great acronym because it's terribly hard to remember how the letters come out, although it is kind of fun to explain sometimes. The "PAC" is, I think, a play on CAP; the way I remember it is by the story it's supposed to tell: in a distributed system, where partition tolerance isn't one of the things you can give up, you have to choose between availability or consistency; else, to stick to the letters, you need to choose between latency or consistency. So in a distributed system, in a distributed database in particular, you need to choose in your design or your implementation between latency and consistency. In the earlier versions of BDR it was latency that we chose to be more important than consistency, but with the latest version of BDR you are able to pick whether you prefer latency or consistency between the nodes.

So, with respect to latency, there are things you do have to remember: the time it takes to propagate data from one node to the other, especially when you want your cluster to be distributed worldwide. Just think of the amount of time it takes for data to transfer between the eastern seaboard and the west coast, across the Pacific, or over to Europe, whichever direction it's going to go; you do have to remember that it's going to take some time for one node to see the data from another. The other thing to remember is how many nodes you have, the same idea I was mentioning earlier about write scaling: the more nodes you have, the more data needs to be written, and depending on the path the data takes, it may take a little bit of time to get around.
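BDR's latency-versus-consistency setting is its own, but core Postgres exposes the same trade-off for ordinary replication through synchronous_commit; a hedged sketch of the two ends of the dial, assuming a standby is listed in synchronous_standby_names:

    -- Favor consistency: COMMIT doesn't return until the synchronous
    -- standby has applied the change and made it visible to queries.
    ALTER SYSTEM SET synchronous_commit = 'remote_apply';
    SELECT pg_reload_conf();

    -- Favor latency: COMMIT returns once the change is durable locally,
    -- and replicas catch up when they can.
    ALTER SYSTEM SET synchronous_commit = 'local';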
Audience: So if someone writes data in New York, does the commit wait for it to be written across the whole cluster?
Mark: If I'm understanding your question: someone accesses data in New York, it gets written there, and then New York sends the changes over to San Francisco and out to Europe. It's not that the commit waits for it to be written across the whole cluster; if you decided that you wanted consistency more than latency, then the changes would show up as they can be applied. But if you were making the system synchronous, then I think you're right that it would only ever be as fast as the slowest link in the whole system.
Audience: Is there a concept of chained propagation, like in BitTorrent, where the first node writes to a second node and then both of them distribute it out? In your example, does New York write to San Francisco, then to Europe, and then to all the other nodes sequentially? Or, once it sends it to San Francisco, do both New York and San Francisco know the rest of the nodes in the cluster and start communicating with them?
Mark: That one node in New York writes to all of them. It's not writing to San Francisco and then San Francisco sending the same updates to the next node; it's the one node that's pushing out to all the others.
Audience: And in the case of a database connection, like you were mentioning before, when a node goes down, do all the applications or clients connected to that particular node have to figure out what other node is available and connect to it, or is there some kind of broker?
Mark: Right, the responsibility is on the application to decide what to do if the node it's talking to is no longer responsive.
Audience: And what's the granularity of replication, every single transaction or every single write? I'm assuming the whole transaction is done; you're not rolling back those actions across nodes.
Mark: Right. I actually always kind of forget this myself, but it's no different from what logical replication is doing now. I do always get a little mixed up about when it starts to replicate the data, depending on the replication setup and so forth, but it is the same consideration you'd have in a normal single-master, primary/standby setup with logical replication.
Audience: In current logical replication, the replication doesn't begin until the primary commits the transaction, because the decoding plugin won't even see the data until then.
Mark: Exactly.
Audience: And when it does begin, that doesn't mean it's committed on the destination yet, of course; it'll start playing it over, and if the link breaks in the middle, for example, it never happened on the destination machine.

So there are a few things to remember when a node goes down. How many folks have had the joys of dealing with replication slots? BDR is using replication slots, so if one node goes down, then as changes happen on all the other nodes, the transaction data is saved until that node comes up and starts consuming those transactions again. At the same time, you can rest assured that when that node comes up, it shouldn't be missing any of the transactions that have been building up on the other nodes, as long as nothing else has happened to any of the other nodes in the cluster.
Audience: And the node acknowledges?
Mark: Yeah, there's communication going back and forth that each node keeps track of. One of those nodes is building up the transaction information to send to the other system, and it's waiting for the other system to acknowledge, "I've consumed up to this point in the transaction log."
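The talk doesn't show it, but that retained transaction data is visible from SQL on any node; a minimal monitoring sketch against the stock pg_replication_slots view (PostgreSQL 10+ function names):

    -- WAL this node is holding for each peer. An inactive slot whose
    -- retained_wal keeps growing is a peer that has gone away.
    SELECT slot_name, active,
           pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
               AS retained_wal
    FROM pg_replication_slots;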
Audience: How does it handle order? Does order need to be serialized, or is it just last write wins?
Mark: We'll get to that in a little bit.

So this is what I was hinting at: while BDR is tolerant of nodes going out, there are limits to that, and it comes down to disk space, however much space you have wherever those transaction logs are building up. Once you run out of that space, it's not that the whole cluster goes down at once, but that one node won't be able to handle any more changes and won't be able to take any more in; the node will shut down with the typical out-of-space handling on a Postgres node. So then, on all the other nodes: you've already lost one node, and the next node is going to go down when it runs out of space saving up data for that first one. It could cause a chain reaction at some point; I don't mean an instantaneous chain reaction, but all the nodes are going to start backing up more and more transaction logs as more nodes go down and stop consuming them.

You can go to any of the other nodes still in the cluster to tell the cluster that the dead node needs to be removed, and as you work through that process of removing the node, the replication slots get dropped, and at that point the space is reclaimed. There's no better way of dealing with this at the moment: if one node has gone down unrecoverably, and you know it's not going to be recoverable, that really is your only solution at that point.

One other thing that's probably worth noting, and we just kind of talked through this: the first node goes down for whatever reason, and any changes on all the other nodes are saved in the transaction log for that one node, with the expectation that it's going to come back up. If that first node never comes back up, the transaction log is going to build up until space runs out on all the other nodes. The transaction logs start filling the disk on all the other nodes, so the next node that goes down is going to be the one that runs out of space first, and so on; as more nodes go down, the problem compounds, because all the nodes were saving transaction logs for the one node that went down, and when the second node goes down, the remaining systems are now saving transaction logs for two systems, and so on. If one node goes down, I think "snowball" is probably a good way to describe it.

Audience: Did you guys look at partitioning schemes for that, so the pending data could be moved somewhere else to store it?
Mark: If I'm understanding it correctly: the nodes in the cluster don't share a transaction log. There is no single, universal transaction log between the nodes; each node has its own distinct transaction log, so there's no sharing across the cluster.
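In stock Postgres terms, removing the dead node's slot is what frees that space. A hedged sketch; in a real BDR cluster you would go through BDR's own node-removal procedure rather than dropping slots by hand, and the 'bdr' name prefix below is an assumption to verify against your actual slot names:

    -- On each surviving node, only once you've decided the dead node
    -- will never return; this discards the WAL retained for it.
    SELECT pg_drop_replication_slot(slot_name)
    FROM pg_replication_slots
    WHERE active = false           -- nothing is consuming the slot
      AND slot_name LIKE 'bdr%';   -- assumed naming; check before dropping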
So, to the original question about how you deal with this: there's nothing built in for you. It's something you will hopefully never run into, but make sure you have some monitoring set up to keep an eye on disk space, for example, and on the replication lag between the nodes. Having a proactive DBA will hopefully keep the entire cluster from going down, because accidents do happen for one reason or another.

A couple of notes on security. One thing to remember about this cluster is that it still isn't a single image; it isn't a single database that you're accessing. These really are physically separate databases that are just talking to each other. So have clear policies on how you're going to grant access, and on who should have access to these systems, because you can set up privileges differently on each of them. The thing you want to stay away from is restricting someone's access on one node and forgetting to do it on the other nodes; things like that, making sure you're applying your security policies appropriately on each node.
Audience: Have people been using this while staying compliant?
Mark: I'm not sure to what extent people take advantage of that, or hopefully use it in a good way, but I can certainly understand the concern. Yeah: the system that's supposed to be replicating data between the nodes unfortunately doesn't do the same for user roles.

Another caveat is that serializable might not work across a cluster the way you'd think it would on a single system. Do folks here have a need for serializable now? The thing to remember is that if you're running at the serializable isolation level, it will be honored on the node that you're on, but it won't appear that way in how locking is granted on the other nodes in the cluster. That's an artifact, if you will, of keeping the latency down on each of the systems, or across the cluster. The way I would think about it is that locks aren't replicated. Whatever locks you would normally expect to get granted for a transaction, with an update, delete, or truncate (well, okay, sorry, DDL is not a good example), you will get on the node you're working on; but just because you're truncating a table on one node doesn't mean that lock is granted for that table on all the other nodes. People are still able to do whatever updates, inserts, and deletes on the other nodes, and they'll get the locks they'd expect to be granted locally. So you can see a partial write, maybe.
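A hypothetical timeline (not from the talk) of what "locks aren't replicated" means; the accounts table is made up:

    -- On node A:
    BEGIN;
    TRUNCATE accounts;   -- ACCESS EXCLUSIVE lock, but only on node A
    COMMIT;

    -- Meanwhile on node B, this is NOT blocked, because node A's lock
    -- was never taken here:
    INSERT INTO accounts VALUES (42, 100.00);

    -- Once both changes replicate, whether the new row survives depends
    -- on the order in which each node applies the other's change.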
Audience: So a subscriber node, something that's receiving transactions, where local operations are running in serializable mode, could in theory get a serialization failure because of the replicated transactions?
Mark: I think the answer is yes; I'd be surprised if it couldn't.
Audience: That makes perfect sense, because the two sides aren't sharing the serializable lock information.
Mark: That's right. Although in that scenario you're mostly just going to get write conflicts between nodes, and you're already in a situation where you're not expecting a lot of write conflicts.
Audience: If you turn on serializable mode on the publishing system, does something have to carry serializable mode down to the subscription system along with the transactions? If you imagine suddenly collapsing the gap, so everything was running on one machine, you could have a situation where you would receive a serialization failure; wouldn't you in this scenario too?
Audience: Say the US and UK nodes are both trying to alter the same record.
Mark: Well, that's not what serializable would prevent; that's just row-level locking. Serializable is predicate-level locking, where, for example, a SELECT can come into conflict with an UPDATE. It's probably a rabbit hole we shouldn't go down, but to summarize: the incoming replication traffic is always applied at read committed. Even changes that were generated in serializable mode will not be applied serializably.
Audience: Serializable snapshot isolation is also a fun implementation in its own right. And you can probably describe crossing writes: you say "UPDATE this table SET everything to false," the other node says "UPDATE SET everything to true"; they cross, and you're not getting a conflict on individual records, but you're getting a wacky merged result.
Mark: Yeah. And with that we can go into the conflict questions; I saved those for last because I figured they'd be the most fun part.

Kind of leading into that, one thing to remember is when you have more than two nodes. I think it's easier to visualize with two: you make changes on one node, they go to the other; you make changes on the other node, they come back; they're just going back and forth, and the order is, to some degree, easier to visualize in that scenario, because changes happen on one node first before they happen on the other. Now, when we add that third node, there is no guarantee anymore: with your first two nodes, A and B, you don't know whether C is going to see the changes from A first or from B first, and the order just continues to become more unpredictable the more nodes you add.

So now for the fun part. This is an asynchronous system; between nodes, you just don't know, you can't always predict, what order things are going to happen in. One of the fun scenarios happens when you're using sequences. To oversimplify one scenario: you do an insert on node A, and the next sequence value is 10; you do an insert on node B at roughly the same time, and it also gives you 10 as the next sequence value. The typical conflict resolution, the normal one, is that the last update wins; well, "the last change wins" is probably a better way to put it. But now you have two new rows with the same primary key.
Audience: How do you resolve that? "Last change wins" doesn't really resolve it; you've just lost data in that case, if you give the system credit for throwing one of those inserts away just so the nodes stop arguing with each other.
Mark: Right. So the way to deal with that particular case is to use what, in the latest version of BDR, we call a timeshard sequence.
A timeshard sequence is basically a combination of the unique identifier for that node and a timestamp, so there's almost always only a very small chance of having a conflict; by generating a unique ID that way, you don't have to worry about those kinds of conflicts.
Audience: So is that the key type you'd typically use in place of a sequence? What's the actual data type?
Mark: That's a good question, actually. I know it's in the documentation; what I don't remember is whether it's some kind of integer type or whether it's a UUID-ish type.
Audience: There's an extension that generates UUIDs for you, isn't there?
Mark: Right, so that's also another alternative: just have a function generate a unique ID for you instead of using the timeshard sequence.
Audience: Instagram also has a function that they use that's effectively the same thing; it has echoes of a shard ID in it. The advantage of it over a UUID is that the values are time-orderable with a single sort.
Mark: Right, and that was one of the things we started recommending to folks, because of that first version of BDR. How many folks have tried BDR, or are using it now? In that first release of BDR there was what we called a global sequence, which was basically managing one counter across the whole cluster, and that turned out not to be the best solution for managing a sequence across a cluster. So then what we started recommending was, just like you were saying: still use a counter, but increment it in a way that keeps it unique for that particular node. That worked, but things still weren't totally solved: when you're adding another node, what if you didn't partition the sequence with enough headroom ("buffering" isn't really the best word)? Then you'd have to do it all over again. And even if you weren't in that situation to begin with, doing it on the fly sounds like it might work, but mistakes can happen; getting that partitioning of the sequence in place on a live system can sometimes cause some headaches. Ultimately, being able to rely on the system to set that up for you, without worrying about making mistakes, is the way we wanted to go.
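As a concrete version of that recommendation, here's a hedged sketch of the per-node increment scheme (the names are made up, gen_random_uuid() comes from the pgcrypto extension, and BDR's own timeshard type is separate from both):

    -- Partitioned counter: node N starts at N, and the stride of 16
    -- keeps values unique across up to 16 nodes.
    -- On node 1:
    CREATE SEQUENCE order_id_seq START 1 INCREMENT BY 16;
    -- On node 2:
    CREATE SEQUENCE order_id_seq START 2 INCREMENT BY 16;

    -- Or sidestep counters entirely with UUID keys:
    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    CREATE TABLE orders (
        id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        ordered_at timestamptz NOT NULL DEFAULT now()
    );

The stride is exactly the headroom the talk warns about: a seventeenth node would force renumbering the sequences on a live cluster.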
So, some other things to hopefully alleviate some worrying: BDR will log it when conflicts happen, so you don't have to wonder whether something happened, whether some data had to give way, or how a conflict was resolved. Being able to look through the logs and see what has happened can give you some peace of mind, to a degree. But as you might imagine, the more nodes you have, the more likely it is that conflicts will happen, and you want to monitor the logs to some degree to keep an idea of how many conflicts are occurring. Resolving conflicts isn't necessarily a trivial amount of work: you have changes on one system, and you already know that it's writing those to all the other nodes; then, when you have conflicting changes, once the system figures out what the resolution is, it has to resolve that on all the nodes, on top of the original writes to all the nodes. So for performance reasons, even though you may feel confident that the system is resolving any conflicts for you, you don't want to just let it happen because it's being taken care of automatically. Being a little proactive in trying to mitigate the number of times the system needs to resolve conflicts will be good for a healthy cluster.

Audience: And is it true that, as in logical replication, a conflict in this case means a constraint violation?
Mark: I wouldn't necessarily say that. I think the constraint violations are the tougher ones to deal with, but it could be as simple as, I'm thinking of the banking situation, you update the balance on one node and you update the balance on the other node.
Audience: What do you mean? How would it know there's an inconsistency? The updates can hit two different masters at the same time, and then, as they replicate to each other, can they overwrite each other?
Mark: Right. The simplest case is saying "update the balance to be a thousand." Actually the banking example is probably not a good one here, because it doesn't quite make sense; maybe age is better. I'm setting your age to 18 and someone else is setting your age to 20; which is correct?
Audience: I guess the question is how it would even detect that situation. Let's say it's a single-row update running in a transaction. One site does it, the other site does it; they get pushed across. One receives the other's and goes, "oh, okay, whatever," and applies the copy, and the other one does the same. So in this case is it in effect up to the application to sort out?
Mark: BDR will pick the one with the most recent change, whichever one happened to be later.
Audience: "Later" across nodes? Based on, literally, the wall clock?
Mark: Yes, literally the wall clock. And if that happens to be the same, it will then go by node ID: effectively it picks one node over the other. And then it will log that as a conflict; if it sees two timestamps for two transactions at the same time, that's a conflict.
Audience: So how confident are we that the conflict resolution is deterministic across all nodes, for all scenarios?
Mark: I think we're pretty confident in that, because, well, we all know that clocks can only be so much in sync; but each node does have a unique identifier, so knowing that the nodes have unique identifiers, it doesn't just pick whichever one. I actually forget whether it picks the one that sorts first or sorts last at that point, but it will pick one of the nodes depending on how their identifiers order.
Audience: So does it keep some rolling hash or something like that, saying "I'm seeing this come through, and I can compare the transaction against the other"?
Mark: If I'm understanding the question, that's the transaction log; it's part of the apply process of the transaction logs.

So, going back to my oversimplified example of updating that one field: there's this concept that the literature calls a conflict-free replicated data type. Depending on the system it is a real type, but in Postgres, or in BDR right now, it's really more of a concept. This conflict-free replicated data type can be implemented as a kind of, what do I call it, log-style bookkeeping. I think the bank example is actually a little bit better in this case: you have whatever the balance of your checking account is, say a thousand dollars, and you're depositing a hundred. You can imagine the update happening one of two ways: either the update goes in with what the new value is supposed to be, "UPDATE ... SET balance = 1100," or you write the update as "just add a hundred to whatever the current value is." Without a data type that does that under the covers for you, the idea is to have another table that stores the changes as rows in the table. Then it doesn't matter whether someone was depositing money and someone was subtracting at the same time, or two deposits happened at the same time: each individual change is its own row, and the balance is the sum of them.
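A minimal sketch of that log-style bookkeeping in plain SQL (the table is made up for illustration):

    -- Instead of: UPDATE accounts SET balance = 1100 WHERE id = 1;
    -- record each change as its own row:
    CREATE TABLE account_ledger (
        account_id bigint      NOT NULL,
        delta      numeric     NOT NULL,  -- +100.00 deposit, -50.00 withdrawal
        created_at timestamptz NOT NULL DEFAULT now()
    );

    INSERT INTO account_ledger (account_id, delta) VALUES (1, 100.00);

    -- The balance is the sum of the deltas. Addition is commutative, so
    -- the order in which nodes apply each other's rows no longer matters:
    SELECT sum(delta) AS balance FROM account_ledger WHERE account_id = 1;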
Having another table hold a row for each individual change, and being able to sum it all up, makes the changes, as David put it, commutative (I need a little bit of math terminology review), and having a way to sum up all the changes without worrying about the order they happened in is one way of getting around some of these conflicts. Those are just the update-type conflicts, though.
Audience: This vaguely reminds me of what a time-series database does. Very creative of you; you created a construct they would just call a different thing. You effectively log values over time, and then you can come back and ask for a value, like "give me the last six weeks of this value," and the latest value for a key is continually kept up to date.
Mark: Yeah, and for some reason I think Redis does this too.
Audience: That's different from my understanding of what a CRDT is, though. My understanding is: say you have a document and there have been edits to it, so you have versions of the document. You delete record three, and the CRDT records in a log "minus 3"; then you add something, and it records "plus this." The way it keeps the addresses within the document is such that you can delete from it and add to it; there are some trade-offs to that, but basically the notion is that if you have the log of all the alterations of the document, you can reconstruct the document anywhere, and those operations can also happen out of order.
Mark: Right, assuming different trade-offs, that's my understanding of CRDTs too. That's why the time-series discussion lost me for a moment as to how it related. So I think this is what would be considered operation-based; the distinction is a little fuzzy to me, but maybe that's the example here.

Are we getting close on time? Yeah? Okay. So, as much fun as it is to talk about conflicts, what I wanted to leave you with, for those folks who have played with BDR: the first version of BDR was the one that we open-sourced back in 2014. Since then we have kind of closed it off, because (sorry, words are trouble for me right now) for the amount of money 2ndQuadrant has invested in the development, we need to try to recoup some of that by selling support to customers.
We're now up on the third iteration, BDR 3. There is no date yet for when we might open-source that, but it'll be somewhere down the road. BDR 3 is built upon pglogical, and the current version of pglogical will be released sometime in early 2019. We'll leave it at that for now. [Applause]
Info
Channel: San Francisco Bay Area PostgreSQL Users Group
Views: 526
Rating: 5 out of 5
Id: sOZ4UodQFL4
Length: 62min 0sec (3720 seconds)
Published: Fri May 22 2020