CAP Theorem - From the First Principles

Captions
So what we'll do today is go through the Wikipedia page of the CAP theorem to understand some key aspects of it. We'll keep it as practical as possible, and I'll point out examples at every single step, just like we did with the SQL versus NoSQL space. Apart from this, we'll also go through the blog that Google published back in 2017 when they introduced the Spanner database, to understand what they do, how they do it, and make sense of it. Those are the two key agendas for today.

Let's start with the Wikipedia page of the CAP theorem. In theoretical computer science, the CAP theorem, also named Brewer's theorem after computer scientist Eric Brewer, states that any distributed data store can provide only two of the following three guarantees. Surprisingly, the blog about Google Cloud Spanner and the CAP theorem that we'll also be covering today is by Eric Brewer himself, the person who first stated the theorem, so we are actually talking about his own contribution. "Two of the following three guarantees" typically implies a trilemma: where a dilemma is "either A or B," a trilemma means that out of three options you can only pick a subset, and it's a difficult choice to make, which holds true for the CAP theorem.

Before we dig into the theorem, we need to understand what C, A, and P are. There are some very interesting insights here, and some of what we discussed in the SQL/NoSQL session holds true here as well.

C stands for consistency. Wikipedia says: every read receives the most recent write or an error. I have been saying this for a very long time: consistency in ACID is very different from consistency in the CAP theorem. Consistency in the CAP theorem means every read receives the most recent write or an error, while consistency in ACID means your database goes from one consistent state to another consistent state, using constraints and so on. So the first thing to understand is that CAP's consistency means: whatever value was last written is exactly what I get back when I read.

Second is availability: every request receives a non-error response, without the guarantee that it contains the most recent write. It takes time to absorb this. The definition of availability excludes the "most recent write" criterion that consistency had: the system returns some non-error response, but it does not have to be the most recent write. That's typically what availability means: "whatever I have, I'll send it to you; I'm not sure if it's the most recent, but I'll send what I have."

The third is partition tolerance: the system continues to operate despite an arbitrary number of messages being dropped by the network between nodes. A partition means a breakage, a software glitch, or anything that creates a virtual split in your network, so that one subset of nodes belongs to one sub-network while another subset belongs to another, and they are unable to talk to each other: you end up with completely disjoint sub-networks. Partition tolerance means you are tolerant to the network being partitioned in this way.

To recap: C is about reads receiving the most recent write (different from ACID's C, which is about a consistent database with constraints); A is about receiving any non-error response, with no claim that it reflects the most recent write; and P is partition tolerance.

Now that we have established the definitions, let's get to the interesting part. It's very common to hear that the CAP theorem says you can pick two out of these three, but look at what is actually written; it's something very beautiful: "When a network partition failure happens, it must be decided whether to do one of the following." It does not say "two out of three." It says that in case of a network partition failure, a logical split where two subsets of the network are unable to talk to each other, you have to pick one of two options.

First: cancel the operation, and thus decrease availability, but ensure consistency. You lose availability but ensure that no matter what, your data remains consistent, where consistency means you will always receive the most recent write. So in case of a network partition you are still ensuring, quote-unquote, strong consistency, but you are taking a downtime: because you are trading off availability, you are effectively taking a downtime to keep things consistent.

Second: proceed with the operation, and thus provide availability, but at the risk of inconsistency. It is possible that you will not receive the most recent write; that's exactly what the definition of availability allowed: a non-error response. Your system stays available, it returns something whenever you make a call, but at the risk that it is not the most recent write. Think about it: your network broke into two halves. Say your read requests go to one half while the writes go to the other half. When you read, you are definitely not getting the most recent write. It's that simple.

So either you take the complete downtime, giving up availability, to ensure that whenever a read returns success it is always the most recent write; or you proceed with the operation and keep availability, with reads going to one half of the partition while the other half accepts writes, running the risk that your system is available but definitely not returning the most recent write. That's what it boils down to.

Okay, so what else? "Thus, if there is a network partition, one has to choose between consistency or availability. Note that consistency as defined in the CAP theorem is quite different [from ACID]. No distributed system is safe from network failures; network partitioning generally has to be tolerated." This statement holds true, no doubt, but think about when it holds: when there is a wide network. If you are running on a very small, local network, the chances of a network partition happening are very limited. At internet scale, where one database sits on one end of the world and another sits on the other end, a partition is very much possible; but in a small data center it is highly unlikely. This is one very important insight that a lot of modern databases have adopted: if you go with very high-quality hardware, you can almost eliminate the risk of your network getting partitioned, which means that if partitions don't happen, your system can be both available and consistent. You would have heard of CA systems.

So, in the presence of a partition, one is left with two options: consistency or availability. When choosing consistency over availability, the system will return an error or a timeout if particular information cannot be guaranteed to be up to date due to the network partition.
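To make the two options concrete, here is a minimal sketch of that decision inside a single replica: a CP-style system refuses to answer during a partition rather than risk staleness, while an AP-style system answers with whatever it has. All names here are illustrative, not taken from any real database.

```python
# Minimal sketch of the CP-vs-AP choice during a network partition.
# All names are illustrative, not from any real database.

class Replica:
    def __init__(self, mode):
        self.mode = mode          # "CP" or "AP"
        self.value = "v1"         # last value this replica has seen
        self.partitioned = False  # can we reach the other replicas?

    def read(self):
        if self.partitioned and self.mode == "CP":
            # Consistency over availability: error out rather than
            # return a value that may not be the most recent write.
            raise TimeoutError("cannot confirm this is the most recent write")
        # Availability over consistency: return a non-error response,
        # with no guarantee it reflects the most recent write elsewhere.
        return self.value

cp, ap = Replica("CP"), Replica("AP")
cp.partitioned = ap.partitioned = True  # the network splits

print(ap.read())  # "v1" -- a response, possibly stale
try:
    cp.read()
except TimeoutError as e:
    print("CP replica:", e)
```

The whole theorem is in that one `if`: during a partition, the replica either errors (losing availability) or answers possibly stale (losing consistency); there is no third branch.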
When choosing availability over consistency, which means uptime, the system will always process the query and try to return the most recent available version of the information, even if it cannot guarantee it is up to date due to the network partition. So either you take the downtime, or you do this.

Now read this; it's one line that literally decimates a lot of the videos out there: "In the absence of a partition, both availability and consistency can be satisfied." This is a very solid statement, because a lot of people think of CAP as "pick any two out of three." In most framings that's how it's taught, but if you think practically, that's not the case: you can build a system that is highly available and highly consistent when you can assume your network cannot partition. And this is where Google Spanner comes in. If you search "does Google Spanner break the CAP theorem," you'll find the famous claim that Spanner solves CAP, that it guarantees all three: consistency, availability, and partition tolerance. So what exactly are they doing?

A few things first. This blog is also written by Eric Brewer, VP of Infrastructure and a Google Fellow, the person who brought the CAP theorem to life. It was published on February 15, 2017, long back, when they introduced Spanner, but it holds up, and it's very beautifully written. If you do a Google search, a Bing search, or ask ChatGPT which database "breaks" the CAP theorem, you'll always see Spanner come up as the one that gives you all three. Let's see what they actually mean by that.

"Building systems that manage globally distributed data, provide data consistency, and are also highly available is really hard. The beauty of the cloud is that someone else can build that for you. The CAP theorem leads to three kinds of systems, CA, CP, and AP, based on what letter you leave out." Perfectly fine. Now this is where things start to become interesting: "For distributed systems over a wide area, it is generally viewed that partitions are inevitable, although not necessarily common." "Over a wide area" is one of the most important phrases in this blog. If you are not operating over a wide area, if you're not building a globally distributed database going across continents, if you're operating in a local data center, the chance of a network partition is almost zero, and you can assume it would almost never happen; you can build a system that is both consistent and available. Note that they also say partitions are "not necessarily common."

"If you believe that partitions are inevitable, any distributed system must be prepared to forfeit either consistency or availability", which is exactly what the CAP theorem states: in case of a partition, you go either AP or CP, "which is not a choice anyone wants to make." So what happened behind the scenes is that they got rid of partitions altogether. They said: let's assume we will never, ever have a partition; if we can guarantee that by making our hardware really sturdy, really solid, can we build a distributed data store that is both consistent and available? That is exactly the foundation of Spanner.

"In fact, the original point of the CAP theorem was to get designers to take this tradeoff seriously. But there are two important caveats: first, you only need to forfeit consistency or availability during an actual partition, and even then there are many mitigations. Second, the actual theorem is about 100% availability; a more interesting discussion is about the tradeoffs involved in achieving realistically high availability." You can see how difficult it is to build a 100% available system: every system has downtime, software or hardware. Which means you have a lot of leeway: although you are governed by the CAP theorem, given that you obviously won't have 100% availability anyway, and given that you won't have partitions if you're operating in a small data center, you can quite easily ensure very high consistency. And that's what happens.

Let me take a small detour here and not talk about distributed databases for a moment. Take a simple, standalone, single-node MySQL instance (I think this is also mentioned further down in the blog). It is a single instance, so a partition is never going to happen: a single node that gets cut off is simply cut off. P is effectively out of the picture, so the database can be highly available and consistent; but because it is a single node, if it goes down, everything goes down, so it will never have 100% availability. What you do get is very high consistency, and it satisfies both senses of the word: when you write and then read, you always get the most recent write, because a single node serves your requests; and you also get ACID consistency, your data moving from one consistent state to another.

So now let's talk about Spanner. "Today Google is releasing Cloud Spanner for use by GCP customers. Spanner is Google's highly available, global SQL database. It manages replicated data at great scale, both in terms of size of data and volume of transactions." By the way, the load Spanner handled is reportedly about 25x what DynamoDB handled; Google recently published a blog about it, read it if you're interested. And they do that while almost, quote-unquote, breaking CAP.

Now here is the interesting part: "It assigns globally consistent, real-time timestamps to every datum written to it, and clients can do globally consistent reads across the entire database without locking." This one statement covers so many things; those are tough words. This is where Google's proprietary hardware comes into play: the mechanism is called TrueTime, and you can read about how they assign these timestamps. It relies on very specialized hardware, and because of it they are able to, quote-unquote, break the CAP theorem, and I will always say quote-unquote.
Now, a few things. "In terms of CAP, Spanner claims to be both consistent and highly available despite operating over a wide area." This is the statement behind the "did we break the CAP theorem?" question, because Spanner makes that claim while operating a globally distributed database. So has it actually broken the CAP theorem? Not quite, and the blog says so itself: "The short answer is no, technically, but yes in effect, and its users can and do assume CA." Theoretically no, because you cannot break a theoretical limit; you can always construct a condition, a network partition, under which you must choose between C and A, so you have not truly got all three. "However, no system provides 100% availability, so the pragmatic question is whether or not Spanner delivers availability that is so high that most users don't worry about its outages." The CAP framework has made it easy for database designers, especially distributed-systems designers, to think this way: given that no system can provide 100% availability, can your system deliver availability so high that users stop caring about outages? "For example, given that there are many sources of outages for an application, if Spanner is an insignificant contributor to its downtime, then users are correct to not worry about it." That's how they frame it. Fine; let's go inside Spanner.

"There are several factors, but the most important one is that Spanner runs on Google's private network." This is where the specialized hardware comes from. "Unlike most wide-area networks, and especially the public internet, Google controls the entire network and thus can ensure redundancy of hardware and paths", the actual physical hardware, "and can also control upgrades and operations in general." What they are stating is: we will make our hardware so specialized, so good, that a partition is nearly impossible. Look at the tradeoff they chose: 100% availability is not possible, so they lowered that bar slightly, and in exchange they made the hardware, and the network paths it covers, so hard to fail that you can effectively say "my system almost cannot ever have a partition, and almost cannot have availability issues," and then you can easily ensure consistency. That's what they are playing with; it's all about proprietary hardware. "Fibers will still be cut, equipment will still fail, but the overall system remains quite robust." Failures will happen, but they have fallbacks for everything at the hardware level, and that is where the quote-unquote breaking of the CAP theorem comes in.

How did they get there? "It also took years of operational improvements to get to this point. For much of the last decade, Google has improved its redundancy, its fault containment, and, above all, its processes for evolution. We found that the network contributed less than 10% of Spanner's already rare outages." Note what that statement says: network partitions will happen, but on their proprietary hardware the network is the cause of less than 10% of all outages, and those outages are already rare; it is not that it is down 10% of the time. This was 2017 and a lot has changed since, but the key point is that the network contributed less than 10% of the reasons for outages. They make the hardware so good that you almost never have availability pain points and almost never have partitions. It's about varying the degrees of availability and partition risk: lower those a bit, and you can push consistency up. That's how they "almost break" CAP.

"Building systems that can manage data that spans the globe, provide data consistency, and are also highly available is possible; it's just really hard." When they say it's possible, it's all about making the hardware more reliable, and that's what we see throughout this blog post: they improved availability, they reduced partitions, they improved the hardware so much that a network partition would essentially never happen. "The beauty of the cloud is that someone else can build it for you, and you can focus on innovation core to your service and application."

There's also a white paper attached, which we won't go through here, but it's something I read a long time back and I'd recommend it; the Spanner white paper is not really difficult to read. To be honest, if you are interested in databases and distributed data stores, once you've read two or three such papers or systems you can almost derive the rest automatically, because the core ideas remain the same. One thing from the "next steps" section: the white paper covers Spanner's consistency and availability in depth, including new data, and it also looks at the role played by Google's TrueTime, which provides globally synchronized clocks, the thing we started with, the real-time timestamp on every datum. The paper covers that; read it if you're interested. The magic is in the hardware.

And notice who is challenging the CAP theorem: the person who formalized it, Eric Brewer, VP of Infrastructure and Google Fellow. Who better to challenge it? He knows it inside out, and through this blog post it's very evident what he is doing: make your hardware so good, so available, and so fault-tolerant to network outages that you almost automatically get all three; not 100%, but to a degree your end users cannot perceive. This whole session came from a topic someone asked me to cover, because there are a lot of gimmicky videos around, and I thought we should dive deep to understand the CAP theorem and how systems get the guarantees they get.

One more thing; I'm back on the Wikipedia page now. "Database systems designed with traditional ACID guarantees in mind, such as RDBMS, choose consistency over availability." How? It's just one data node: if it goes down, you give up availability, but what you get is high consistency. "Whereas systems designed around the BASE philosophy, common in the NoSQL movement, for example, choose availability over consistency." That's a blanket statement and doesn't always hold, but you can see the tradeoffs that databases, or any distributed system for that matter, are taking.
There are a few more things mentioned there, but read this line: "In 2012, Brewer clarified some of his positions, including why the often-used 'two out of three' concept can be somewhat misleading, because system designers only need to sacrifice consistency or availability in the presence of partitions." We have all been taught that the CAP theorem means you can pick any two out of three: consistency and availability, consistency and partition tolerance, or availability and partition tolerance. But that's not what it stands for. It stands for this: in case of a network partition, you have to choose between consistency and availability. This entire discussion should give you a clear notion of what the CAP theorem actually is and what it isn't. These two documents are what I found really helpful when I was first trying to understand CAP: they cover what CAP is, how it affects your design, and why "we are breaking CAP" never means actually breaking it; it's about what is being traded off, making the hardware better and better to reduce the chances of partitions and downtime. Beyond that, feel free to read the other resources; there are a bunch of them.

Okay, let's take some questions. (By the way, thanks, people; it wasn't required, but it means a ton.) Feel free to drop questions in the chat; apologies that I couldn't spend much time there earlier.

Abhi posted: "Are we referring to each partition being stored in a different region? Is each partition a part of the data, or the whole?" Here's the thing a lot of people assume: you don't have to be cross-region. Understand where your data resides: the data is effectively partitioned, and it can very well all be present in a single local data center; you don't need it distributed across regions. Obviously there are databases that do go global: Cosmos DB gives you a globally consistent view of data, and Spanner does too, though in Spanner's case it's built on proprietary hardware (not so for Cosmos DB). So don't assume your data is spread across regions; that's why the blog keeps using the phrase, quote-unquote, "wide area." And the definition of "wide" is relative: if your underlying hardware, the network cables connecting your machines, is very old, dusty, or rusty, then even your local data center can behave like a wide-area network, because there's a real chance of links going down and network packets being dropped. That's what it boils down to.

"What subject is this? I'm a diploma student." I'm talking about distributed systems.

On comparing the stats: Spanner belongs to the NewSQL genre of databases; DynamoDB handles a lot of public load, Amazon Prime Day and public AWS, unlike Google Spanner. To be clear, Upendra, I'm not saying Spanner is 20x better than DynamoDB; I'm saying it handled more traffic, and I'm just quoting a published blog that states as much. I'm in no way saying one is better or worse; each is in its own class. Spanner deals with one kind of load and offers one kind of operations; DynamoDB deals with another. You very rightly pointed out it's an apples-to-oranges comparison, 100% true. One company stated facts, the other company stated facts; it's corporate war. We're engineers; we should just appreciate the scale they're able to handle.

Ripple asks: "I have concerns about partitions: they are not time-bound. How does an offline-first distributed system work if outages are weeks long? Is CAP only for single-master cloud systems?" You've obviously worked on offline-first databases, so you know this well. When we talk about partitioning, it's about how your network splits, and obviously you would not want a partition to last very long. But in an offline-first distributed system you have lots of small, local databases, and writes happen locally. If outages last a very long time, there is no central authority to converge the state, which means you will have lots of conflicts, which means there has to be a way to do conflict resolution. It all falls under: how do you resolve conflicts when updates happen in different places, and how do you converge them to the same state? This is where CRDTs come in; I did a podcast with you on exactly this. Otherwise, if it's application-specific, you can define your own conflict-resolution strategy; or, if a database offers a specific kind of data-access pattern, the database can take care of conflict resolution on your behalf, depending on how it wants to do it. You obviously know CRDTs, but for others, CRDTs are something worth exploring.
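To show why CRDTs handle the weeks-long-outage case, here is a minimal G-counter, one of the simplest CRDTs: each node increments only its own slot, and merging takes the element-wise maximum, so replicas that diverged while offline converge to the same state once they sync, in any order. A sketch for intuition, not a production implementation.

```python
# Minimal G-counter (grow-only counter), one of the simplest CRDTs.
# Each node increments its own slot; merge takes the element-wise max,
# which is commutative, associative, and idempotent -- so replicas that
# diverged during a long partition converge regardless of sync order.

class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> count

    def increment(self):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + 1

    def merge(self, other):
        # Take the max per node; repeating or reordering merges is harmless.
        for node, n in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), n)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()   # writes accepted locally on node a, offline
b.increment()                  # writes accepted locally on node b, offline
a.merge(b); b.merge(a)         # the partition heals and both sides sync
print(a.value(), b.value())    # 3 3 -- both replicas converge
```

The key property is that merge needs no central authority and no coordination during the outage, which is exactly what an offline-first system with week-long partitions requires.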
that you can explore: Conflict-Free Replicated Data Types are what can help you achieve this. Otherwise, build your own: depending on the data pattern and the query pattern, you can figure out what your conflict resolution strategy should be, and that is how you would go about it. By the way, thanks for that question.

Any other questions? Oh, there are a bunch of messages, nice. On the assumptions of the CAP theorem and whether it can be universal: genuine question. Rohan correctly pointed out that CRDTs should converge.

Jay asked: "If the network is partitioned and the write request never reached, i.e. the write request isn't processed, how come it is not consistent, since the write was not even processed?" Exactly: the write did happen, but in the case of a network partition it happened on only one part. Imagine you have a distributed data store where every piece of data is present on five shards, and say there are 50 shards in total. When your write happens, it goes to those five shards synchronously, but it does not wait for all five; it waits for the majority, and the majority is three. Now assume there is a network partition such that two nodes are part of partition A and three nodes are part of partition B, and the write went to partition B. The write was issued to all five; it waited for three to respond, three responded, so it reports the write as successful. But when your read goes to the other partition, it sees the old copy, because the write never happened on those shards; because of the network partition, your data could not converge, the state could not be replicated. The write was still successful, it got the majority, because it happened on one half while the first
half was unaware of the writes. That is where network partitioning affects it.

"In AP, do we ensure at least eventual consistency, or no consistency?" In AP, if you are giving up on consistency: see, no system is completely available, so you either take a complete outage to protect your consistency, or you take a partial outage and return old results. That is what it falls down to. It depends on what guarantees your database or data store wants to provide, and you can choose either.

"Does Spanner's high consistency mean strong or eventual consistency?" It means strong consistency.

"As an application dev, do I need to write code for partition failures with Spanner, or treat everything as committed and consistent?" As a developer, you don't have to worry about network partitioning; that is what the Spanner documentation states. It says: just bluntly rely on us for all three. So as someone who is using Spanner, don't worry about anything, just use it.

Abhilash asked: "Won't latency take a hit when we pick CA? If I care about latency, how does Spanner tackle this?" That is exactly what would happen: obviously latencies will take a hit, and you can't do much there. Because it is a distributed data store, you are bound to have somewhat higher network latency; it is not a single machine where reads and writes happen locally and everything is lightning fast. Because you are doing a lot of distributed transactions, latency is going to be there, but it won't be much. In the case of partitioning, the other side is not even aware, so it would not even know: if there is a majority of nodes for a particular data shard, that suffices, and the other part does not even get the write. So it's
not that it would wait forever. If you are pointing out that it would try to reach the other node but be unable to, then a timeout happens, and the configuration of your SLA decides that. Let's say, hypothetically, Spanner gives you an SLA of 5 milliseconds. That means it would try to write to the available IP addresses of the shards where it needs to write. Three of them are part of the same network, so the writes happen there successfully; one is across the network partition, so the write is issued but never lands. Why? Because the packet cannot reach there. So after the timeout it breaks off. It is not that you are perpetually waiting for the write to complete on the other side of the network partition; to maintain your SLA, you time out your write, and that is how you would build it. Typically, in every distributed system, whenever you make a network call, you assign some timeout to it.

Yeah, Martin Kleppmann's blog on the CAP theorem is great; I was planning to cover it, but it was so overwhelming to write about. Thanks folks for all the great questions. By the way, this is all I wanted to cover. Feel free to shoot your questions in the comment section; in the description I've attached a form, so submit a topic that you want me to cover. It is easy for me to cover what you folks want; this session itself came out of that. So do submit what you want me to cover; more than happy to go through it, more than happy to do deep dives like this and talk about tough topics from first principles. Thanks folks for tuning in, see you folks next Sunday, same time. Have a great Sunday, bye-bye.
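To make the offline-first discussion above concrete, here is the textbook G-Counter, the simplest CRDT: each replica increments only its own slot, and merging takes the element-wise maximum, so replicas converge no matter the order in which they sync. This is a minimal sketch, not the API of any particular CRDT library.

```python
# G-Counter CRDT: a grow-only counter. Each replica owns one slot in
# the vector; merge is element-wise max, which is commutative,
# associative, and idempotent, so all replicas converge.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        self.counts[self.id] += 1   # only ever touch our own slot

    def merge(self, other):
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)

# Two replicas take writes independently while "offline"...
a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()    # two local writes on replica A
b.increment()                   # one local write on replica B

# ...then exchange state in any order and converge to the same value.
a.merge(b); b.merge(a)
print(a.value(), b.value())     # both read 3
```

The same max-merge idea generalizes: more elaborate CRDTs (sets, maps, text) differ only in what the merge function is, as long as it stays commutative, associative, and idempotent.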
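The quorum-write scenario from the Q&A (a write acknowledged by 3 of 5 replicas while the 2-node minority partition keeps serving the old copy) can be sketched as a toy simulation. The `Replica` class and the partition sets here are illustrative only, not any real database's machinery.

```python
# Toy quorum simulation: a write succeeds once a majority of the five
# replicas holding a key acknowledge it. Replicas cut off by a network
# partition never see the write and keep serving stale data.

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}              # key -> (version, value)

def quorum_write(replicas, reachable, key, version, value):
    """Write to every reachable replica; succeed on a majority of acks."""
    acks = 0
    for r in replicas:
        if r in reachable:          # partitioned replicas are unreachable
            r.data[key] = (version, value)
            acks += 1
    return acks > len(replicas) // 2    # majority of 5 is 3

replicas = [Replica(f"r{i}") for i in range(5)]
partition_a = replicas[:2]          # 2 nodes: minority side
partition_b = set(replicas[2:])     # 3 nodes: majority side

# Seed an old value everywhere, then partition and write v2 to side B.
for r in replicas:
    r.data["x"] = (1, "old")
ok = quorum_write(replicas, partition_b, "x", 2, "new")

print(ok)                           # True: 3 acks meet the majority
print(partition_a[0].data["x"])     # minority read still sees (1, 'old')
```

This is exactly the situation described above: the write is reported successful because the majority side converged, while reads routed to the minority partition return the old version until the partition heals and the state is replicated.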
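The point about never waiting forever on the partitioned side, i.e. bounding every cross-network write with a timeout, can be sketched like this. The delays and the timeout budget are made-up numbers (the talk's hypothetical SLA was 5 ms; it is scaled up here so the sketch isn't timing-sensitive), and `send_write` is a stand-in for a real RPC.

```python
# Bounded writes: issue the write to every replica concurrently, but
# give up on any replica that does not respond within the SLA budget,
# instead of waiting perpetually on a partitioned node.
import concurrent.futures as cf
import time

def send_write(delay):
    """Pretend RPC to one replica; the sleep stands in for network latency."""
    time.sleep(delay)
    return "ack"

TIMEOUT = 0.2                          # hypothetical per-write SLA budget
delays = [0.001, 0.001, 0.001, 1.0]    # last replica is across the partition

acks = 0
with cf.ThreadPoolExecutor() as pool:
    futures = [pool.submit(send_write, d) for d in delays]
    for f in futures:
        try:
            f.result(timeout=TIMEOUT)  # bounded wait, not a perpetual one
            acks += 1
        except cf.TimeoutError:
            pass                       # partitioned replica: count as failed

print(acks)  # the 3 fast replicas ack; the partitioned one times out
```

With a majority quorum of 3, this write would still be reported successful: the timeout converts "the packet can never reach there" into a bounded failure, which is what keeps the SLA intact.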
Info
Channel: Arpit Bhayani
Views: 23,239
Keywords: Arpit Bhayani, Computer Science, Software Engineering, System Design, Interview Preparation, Handling Scale, Asli Engineering, Architecture, Real-world System Design, CAP Theorem, Google Spanner Internals, CAP Theorem Simplified, Explain CAP Theorem, Did Spanner Break CAP Theorem, Google TrueTime
Id: --YbYCfMnxc
Length: 42min 41sec (2561 seconds)
Published: Sun Oct 15 2023