Threads and Connections | The Backend Engineering Show

Captions
In this episode of the Backend Engineering Show I'd like to talk about threading — multi-threaded applications — specifically within the context of networking and connection management, and to be even more specific than that, TCP connection management. In backend applications it's critical that you have a socket you listen on, whether this is a web server, an SSH server, or a custom protocol that you built — gRPC, any other protocol. The challenge becomes: how do you accept connections from clients, and how much can a single box manage of all these connections from all these clients? That's what I want to talk about in this episode. Let's jump into it.

Welcome to the Backend Engineering Show with your host, Hussein Nasser. This is our laid-back series where we sit down and discuss interesting topics specific to backend engineering. It's a podcast, so you can listen to it on your favorite podcast player; I usually don't add any graphics at all — it's supposed to be just a talking-head video. If you like this kind of content, consider subscribing to this channel, and check us out on Spotify and Apple Podcasts. I do have other content on this channel if this is not your cup of tea — I have crash courses, tutorials, hands-on stuff using software. With that out of the way, let's get into it.

In the very early days of computing, you had a single CPU on your host machine, and when you spun up a process, that process executed certain tasks. Let's say it accepts a connection, and that connection carries some sort of request — say an HTTP request. Once the process determines where the request starts and where it ends, the parsed request is handed to the application, and the application starts processing it, whatever that means. If it's a GET /api, that might mean a request to some database somewhere else, so we establish another connection to that database and send the request — the SQL command, or the key-value request to get a value. Regardless of what the processing is, some of it will be localized within that instance, so it will consume CPU on that host. Some of it will not be CPU-bound but IO-bound, whether that's a network call or a disk call — hey, I'm going to the disk. That's why it's very important to understand the nature of your backend: does it cost CPU or does it cost IO? That's an episode by itself, because you can scale differently based on that.

But let's assume it's a CPU-intensive app, where you're doing processing on the machine itself — even after sending a request to the database and getting a response, you're doing localized processing. Even if you don't know it, you're probably using a library that does that kind of processing, especially serialization and deserialization, which are costly, and the encryption and decryption of TLS. All of this stuff is happening without us knowing. I try as much as possible — at least this is for myself — to erase the ambiguity and vagueness of anything I use by understanding what every single thing I use actually does. It's not everyone's cup of tea, I understand, but I like to understand everything I use. That's just me; it keeps your eyes open.
In the old days, when you had a single core and a single process, that core would be occupied by your process. Your host might have multiple processes, and they'd be time-sharing that one CPU — all right, let's stop there, I'm done, take over the CPU; next process; process three, you can take over — with the operating system scheduling all of this. Fast-forward a few years, a decade maybe, and we were able to make CPUs more powerful — a single core was powerful. Move a little bit forward, and now we have the ability to put multiple cores in a processor. So you have one processor, but that processor has multiple cores: dual core — technically, think of it as two CPUs — then four cores, eight cores, and so on.

With that in mind, we no longer have contention between different applications, because my single-process app can get a core and the other host processes can use other cores. That's pretty neat — I no longer share one core among all the processing. But developers thought about it and said: ha, that sounds like a great idea — I'm greedy. My app is a single process, but what if my app actually consisted of multiple processes, or multiple threads? A process versus a thread — it's almost splitting hairs, especially in Linux, where I think a process effectively is a thread; they can share the same memory, so to speak. So what people did was say: all right, let's just spin up multiple worker threads, keep one main thread, and let them do the work in parallel. Why? Because now I don't have access to just one core — my multiple threads can utilize multiple cores at the same time.

I'm starting to remember — even around 2006, 2007, multi-threading was the jam. Everybody was talking about multi-threading: oh yeah, you have to get into multi-threading. Maybe it started earlier than that, but 2005 was the start of my career, and that's when I started hearing a lot of people talk about it. So a lot of people moved to multi-threading because of the performance benefit they might get: if a single process needs X amount of CPU and I can parallelize that work, let's spin up multiple threads, divide the work, and let them execute their tasks in parallel — concurrently, if you will. That was a revelation. Now we're using multiple cores, so the app is faster.

But just like anything in human evolution, nothing comes without its own problems. Almost every solution we create as engineers comes with its own downsides — I can't think of anything we created, software-engineering-wise, that didn't. So let's follow the case — correct me if I'm wrong — what's the problem with multi-threading? The benefits are obvious; the problems, I can think of two. First: the management of the threads and of resource access. We mentioned that when you spin up a process, you're allocated a certain amount of memory — it's called the heap — where you can dump your stuff. And we never had this problem with a single process, because a single process is a single process:
when a single process wants to read a variable, it can just go ahead and read it; when it wants to write the variable, nobody else is writing to it. But with multi-threading, all these threads share the same memory — it's shared memory within that one process. (You can also have shared memory between processes — I'm pretty sure you can; I think Postgres has that concept, and it's an operating system feature where you get a dedicated shared memory segment.) The moment you have memory shared between threads, those threads start competing for resources, because no two threads should access the same variable at the same time. You might say: why can't they? Sure, they can — let them — but you'll get undesired results. This is a whole thing I talk about in my database course when it comes to ACID — atomicity, consistency, isolation, durability — because a database is a concurrent system after all: two transactions try to update the same row, so what does that mean, what do we do?

The simplest thing to do is acquire a mutex — a lock; I think they're effectively the same thing — where, if a thread wants to write something, it acquires a mutex on it. It locks it: hey, I'm about to write to this variable, nobody can write to it, or even read it, at all. So if another thread wants to do something with it, it's blocked. The management of this stuff is absolutely challenging. A lot of people liked multi-threading in the beginning, but the more they got into it, the more complex their apps became — things you didn't have to worry about before, now you have to worry about, at the cost of additional CPU, and you find yourself serializing things. If your threads are completely isolated, you won the jackpot. But guess what — most of the time they'll need to access the same variable, either to read it, write it, or increment it. Even increment is a hard problem to solve: how do you increment something? You have to serialize it — and when I say serialize, I mean you have to lock it, so the other thread can't touch it at the same time.

Let's take an example: increment the variable foo. If you have two threads that each do an increment, both of them might read the value — both read zero — both increment it, and then both store one. That's not correct, because two increments in that case should take you from 0 to 1 to 2; instead, you got 1.
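To make that lost-update concrete, here's a minimal sketch in C with POSIX threads (my illustration, not code from the episode): two threads each increment a shared counter a million times. With the mutex, the read-modify-write is serialized and you always end up with 2,000,000; remove the lock and you'll usually get less, because increments get lost exactly as described above.

```c
#include <pthread.h>
#include <stdio.h>

long foo = 0;                                     // the shared variable
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   // serialize the read-modify-write
        foo++;                       // without the lock, two threads can read the
        pthread_mutex_unlock(&lock); // same value and both store the same result
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("foo = %ld\n", foo);      // 2000000 with the mutex; often less without it
    return 0;
}
```

Compile with `gcc -pthread`; commenting out the two mutex calls is an easy way to see the race for yourself.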
So that's just a simple example of where things can go wrong. Okay — so we talked about multi-threading, and one of its problems is the management of shared state. The second problem I can think of is isolation, in a bad way: every thread runs in isolation, so we don't know the workload of each thread. We don't know if this thread is overloaded compared to that thread, which might not be. As a result you might not have even load balancing between these threads, and to fix that you have to introduce a manager, a coordinator — more complexity. But it is what it is.

So why am I talking about multi-threading? We all know what multi-threading is, but I thought it was critical to cover, because we're going to link it back to socket management and connection management. Say you have a web application — and Node.js is a bad example here, it's single-threaded, so let's take it out of the equation — say you built your own app in C or Go, you have a single thread, and you say: hey, I want to listen on port 80; it's a web app, HTTP. When you listen on port 80, you're telling the operating system: on this particular IP address, I'm listening on port 80. And you can specify which IP address. You might say: what does that mean? Don't I have only one IP address? Nope — you have many IP addresses on your machine. You have the loopback; that's an IP address. You might have an Ethernet interface that has an IP address, a Wi-Fi interface with another, maybe a Docker bridge interface, a virtual NIC, another Ethernet port. All of these network interface cards — NICs — can have their own IP addresses, each connected to its own gateway, with a completely different IP and a different subnet.

So when you listen, you can listen on a specific interface — or on all of them, and sadly that's the default. I didn't understand this before; I only learned it in the past year or so: listening is a bigger deal than it looks, and I'm really worried that the default, when you don't specify a host — even in Node.js, in most apps — is to listen on all interfaces. Why? I guess they did it for simplicity, but just like anything in engineering, the moment you simplify the developer experience by making the code easier, you're hiding things — you're introducing an abstraction that hides the complexity of these interfaces. And this is a perfect example. I know I'm going all over the place, but I think it's all related: if you listen on port 80 with the default, without specifying a host, it will listen on a pseudo IP address, 0.0.0.0, which means listen on all interfaces. And to nitpick, that's all IPv4 interfaces; for IPv6 the equivalent wildcard is the all-zeros address, ::. So this is all interfaces — why does that matter? What if you're building an admin API, and this admin API should only be accessed from within the machine itself, or on a specific interface? If that host happens to have a public IP address, and you wrote your application such that it listens on all IP addresses by default, then you've just exposed your admin API to the public.
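Here's roughly what that hygiene looks like at the socket API level — a minimal sketch of mine, not from the episode. The difference between the risky default and binding explicitly to the loopback is one line:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   // a TCP socket

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(80);                  // ports below 1024 need privileges

    // The risky default most frameworks give you: 0.0.0.0, every interface,
    // including a public one if the host has it:
    // addr.sin_addr.s_addr = htonl(INADDR_ANY);

    // The hygienic choice for an admin API: loopback only.
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    listen(fd, 1000);   // 1000 is the backlog hint; more on that below
    return 0;           // (error handling omitted for brevity)
}
```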
That's why all these leaks happen — the Elasticsearch leaks, the MongoDB leaks, the Postgres leaks — because when Postgres or MongoDB listens by default, it listens on all IP addresses. I think the default should be changed: the default should be, hey, you tell me which interface to listen on. I understand that's not convenient for programming, but at some point we should stop simplifying everything, because eventually you're going to get bitten. You simplify the API — and that's true for everything we do in software engineering. Look at all the countless libraries competing to make code shorter: oh, my code is only five lines of code, mine is three lines, mine is one line — in one line of code you can do all this stuff. These things really scare me, because does the developer who's going to use this have any clue what's going on behind that one line of code? That's really creepy. Hey, if you know what's going on, all power to you; but if you don't, and it's just hey, one line of code and voilà, I built Twitter — that's a whole thing by itself. Okay, I know — back to the point.

So, listening on a port — we talked about all these IPv4 interfaces. Now we have a listener, and it's a single-threaded listener. When you listen, the operating system allocates what's called the backlog for you — a queue, if you will. Again, this is just TCP; let's not go into UDP right now — HTTP/1 and HTTP/2 run on TCP, so let's just assume TCP for now. When you listen, the operating system allocates a queue for it, and you can specify the length of that queue — I think it's around a thousand by default — and that queue lives in kernel memory. You're here in user space: you listened, your application is running, and you've told the operating system, hey, I'm listening on port 80. Say I'm practicing hygiene here and I listen only on the loopback, 127.0.0.1, because I don't really need to listen on anything else. The OS will then create two queues for us: something called the SYN queue, and something called the accept queue.

What are these? Well, we talked about how TCP works: there's SYN, SYN-ACK, then ACK — the three-way handshake. Every time a client wants to connect to your server on that specific IP address and that specific port, 80, it needs to send a SYN — a TCP segment carried in an IP packet. The machine receives it through the network interface controller — or, as some people like to call it, the network interface card, same thing. I think the card doesn't even unwrap anything; it just takes the frame — hey, is this frame destined for this machine? Yup — and ships it to the OS. The OS takes it: oh, it's a SYN; is someone listening on port 80 on this IP address? Yup. Okay, let me add this to the SYN queue. And it adds it to the SYN queue. The app doesn't know about it yet;
right now it's just sitting in the SYN queue. Then the OS kicks in: all right, time to start finishing the handshake. It takes that SYN — someone is trying to connect to me; at this point it's not a full-fledged connection yet, just a request to connect, if you will — and says: okay, then I need to send a SYN-ACK. It sends the SYN-ACK back to the client and moves on, because it still needs to receive the final ACK. Meanwhile, lots of other SYNs — connection requests — are coming in, and they're added to the queue.

That, by the way, is how SYN flooding can happen. Because you're blindly adding SYN packets to this queue, the queue can easily get flooded. How? Easy: a client sends SYN after SYN and never ACKs. All of a sudden you're flooded and nobody else can connect — every SYN that's received is immediately SYN-ACKed, sits there on a timer waiting for its final ACK, and fills up the backlog we talked about. You can increase the backlog, or decrease it, to tune this. SYN flooding has largely been addressed with SYN cookies, but we don't want to go there right now; that's a story for another day.

So let's say a legitimate client sends the final ACK back, effectively completing the handshake. When the operating system receives that final ACK, it maps it back to an entry in the queue — oh, you're from this guy — because the SYN carries a source port, a source IP, a destination port, and a destination IP, and that four-tuple maps to the queue entry. That effectively completes the connection, and the moment the connection is complete, it's popped from the SYN queue and moved to the other queue we mentioned: the accept queue, where full-fledged connections live. Hey, I guarantee this client is good; it finished the handshake with us. Mind you, we haven't sent any data yet — we're just connecting; we didn't even establish TLS. I'm not talking about TLS here at all; it's port 80. The next thing would be to send an actual HTTP request.

So now the connection sits in the accept queue. What does that mean? It means the operating system did its job; the rest is up to the application — which is moi. Remember: listening doesn't mean you have connections. You, as the backend application, have to actively accept connections. You technically have to keep asking the operating system: do I have a connection? Do you have a connection? Do I have a connection? That's how it works today, and you do this by calling something called accept. You might say, I never did this with Node.js — well, Node.js does it for you behind the scenes: there's an infinite loop that just accepts. And where does this infinite loop live? In your one thread — remember, we said it's a single-threaded app. We have one listener, and it has a loop that's accepting all the connections.
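That hidden loop looks something like this — a bare-bones sketch of mine, not Node's actual source (libuv's real loop is non-blocking and far more elaborate), and `handle_connection` is a hypothetical placeholder:

```c
#include <sys/socket.h>
#include <unistd.h>

// hypothetical: reads the request off the fd and writes a response
void handle_connection(int fd);

// listen_fd is the socket we bound and listened on earlier
void accept_loop(int listen_fd) {
    for (;;) {
        // accept() pops one completed connection off the accept queue;
        // if the queue is empty, this call blocks until a client arrives.
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd < 0)
            continue;                  // interrupted or transient error; try again
        handle_connection(conn_fd);    // in a single-threaded server, nothing else
        close(conn_fd);                // can be accepted while this runs
    }
}
```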
The accept function call goes to the operating system: hey, I want to accept a connection. Sure — you have one right here in the accept queue; take it. And "take it" really means it gets popped from the accept queue, and a file descriptor — a unique integer value — is returned to the thread that called accept. Whoever called it gets that file descriptor, and that file descriptor represents your connection: one client, one connection, one user connected to you. Then you exchange information using that file descriptor — the thread can write to it and read from it. Reading and writing is its own story: there are asynchronous reads, there are synchronous blocking reads, and there's this whole new thing Linux built called io_uring, which is a fabulous design for asynchronous reads and writes for pretty much everything — files, network calls, you name it. But let's not get into a lot of detail here; let's keep this focused.

So, what's the problem? I have a single thread — which is Node.js: contrary to popular belief, Node.js is a single-threaded app. Yes, it has multi-threading capabilities, but they have nothing to do with networking; networking is still a single-threaded experience in Node.js. The only time Node.js uses multi-threading — and it's well documented — is for DNS lookups, in specific libraries, and I suppose for asynchronous file system reads. I talked about this in my Node.js threading video — just search "node.js threading Hussein" and you should find it. But the network is all on that one thread. So that one thread has a loop that accepts connections, and the same thread processes my requests. That's actually pretty cool.

So the thread accepts a connection and gets a connection file descriptor. What if another user comes in — another connection request? Well, it just accepts again: if the thread is free, it accepts the connection, and now I have another file descriptor. Now it's your responsibility to add it to an array, so to speak. With an HTTP library you don't do that by hand: there's an event — I think it's called 'connection' in the HTTP library itself — that delivers you an actual connection object, even fancier than a raw descriptor, with methods like write and read. This is how WebSockets work too, identically. So you build, essentially, an array of connections in your thread, in your process, and you can talk to any of them, and every connection object has events associated with it. What's happening underneath is that your app is constantly asking: did I get a read here? Did I get a read here? Did I get a read here? All of that is managed for you by the Node.js HTTP library: oh, something just came in on connection number 1; oh, something just came in on connection number 103; and so on.
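That "constantly asking" is, at the OS level, a readiness API such as poll, select, or epoll (Node's libuv uses epoll on Linux). Here's a toy single-threaded server in that style — my sketch, not libuv: one thread accepts connections, keeps every descriptor in one array, and multiplexes reads across all of them. It just echoes bytes back where a real server would parse HTTP.

```c
#include <netinet/in.h>
#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_FDS 1024

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family = AF_INET;
    a.sin_port = htons(8080);
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   // loopback only, as discussed
    bind(lfd, (struct sockaddr *)&a, sizeof a);
    listen(lfd, 128);

    struct pollfd fds[MAX_FDS];
    fds[0].fd = lfd;                               // slot 0 watches the listener
    fds[0].events = POLLIN;
    int nfds = 1;

    for (;;) {
        poll(fds, nfds, -1);                       // sleep until any fd is ready
        if ((fds[0].revents & POLLIN) && nfds < MAX_FDS) {
            int cfd = accept(lfd, NULL, NULL);     // new connection: add its fd
            fds[nfds].fd = cfd;                    // to the "connection array"
            fds[nfds].events = POLLIN;
            nfds++;
        }
        for (int i = 1; i < nfds; i++) {           // did I get a read here?
            if (!(fds[i].revents & POLLIN)) continue;
            char buf[4096];
            ssize_t n = read(fds[i].fd, buf, sizeof buf);
            if (n <= 0) {                          // client went away: drop the fd
                close(fds[i].fd);
                fds[i--] = fds[--nfds];
                continue;
            }
            write(fds[i].fd, buf, n);              // echo; a real server parses HTTP here
        }
    }
}
```

Notice the crucial property: one slow `write` or one expensive computation inside this loop stalls every other connection — which is exactly the bottleneck discussed next.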
So we have one thread — what's the problem with this? It easily becomes the bottleneck. If one of those connections sends you an HTTP request, and for that request you're doing a blocking call — computing a hash, say, or something similarly expensive — and let's assume no threading is involved (for certain crypto operations Node.js will use threading if you enable it, but assume there's none): if you're doing that expensive compute, say a long while loop, then while that loop runs, you're done, basically. I keep using Node.js as the example because it's a very popular backend, but if you build your own C application, you have to do all of this yourself. So now you're blocked, and that single thread quickly becomes the bottleneck: the listening thread cannot do other work. Technically it can, of course, if you know your limits — but the moment you do work in the same thread as the listener, new connections cannot be accepted, or they'll be delayed until that thread has time to breathe: finally, I'm done with this task — now I'll go accept a connection, oh, I'll go execute a read here, oh, the user asked me to write something. It's just busy doing stuff, and you'll hit blocking at some point.

So what do we do? One approach is what memcached does — we did an architecture crash course on memcached on this channel. What memcached does is this: it has one listener thread, but that listener thread only accepts connections. The moment it accepts a connection, it hands the connection's file descriptor off: hey, thread, take it, that's yours now, I'm moving on. Now that thread has the file descriptor and does its thing — if a read comes in on that connection, it's that thread's responsibility; if there's something to write, that thread writes it. The main listener thread isn't involved anymore: my job is done, I accepted the connection, I handed it to you. So the connection array, if you will, doesn't live in the listener main thread — it lives in the worker threads. Another connection comes in, hand it off again. And there's a limit to the threads, because things would go crazy after a while.

Hey guys, Hussein from post-editing here. Reading through the memcached docs, I noticed the above is slightly incorrect, so I want to clarify: the default number of threads in memcached is four. You can raise that, but they don't recommend it. Every connection that comes in is given to one of the available threads, up to the maximum number of threads allowed — so if the default is four, those four share all the new connections. It's not one thread per connection; it's one thread handling multiple connections. Otherwise, as per the docs — I'll share them below — it would be a disaster: a thousand connections would mean a thousand threads. So, one thread, multiple connections — just a slight clarification there, to be objective. Back to the video.

Yeah, so that's one way: the compute is done in the worker threads.
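Here's what that hand-off pattern looks like in its simplest form — one thread per connection, as first described. This is a sketch of mine; real memcached, per the correction above, multiplexes many connections over a small fixed pool of worker threads instead.

```c
#include <netinet/in.h>
#include <pthread.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

// each worker owns exactly one connection fd handed to it by the listener
void *conn_worker(void *arg) {
    int fd = (int)(intptr_t)arg;
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)   // reads are this thread's job now
        write(fd, buf, n);                        // echo; real work would go here
    close(fd);
    return NULL;
}

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family = AF_INET;
    a.sin_port = htons(8080);
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(lfd, (struct sockaddr *)&a, sizeof a);
    listen(lfd, 128);

    for (;;) {                                    // the listener thread only accepts
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0) continue;
        pthread_t t;
        pthread_create(&t, NULL, conn_worker, (void *)(intptr_t)cfd);
        pthread_detach(t);                        // hand it off and move on
    }
}
```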
That's my point with multi-threading — that's powerful. Now I accept the connection on the listener thread, but the connections are worked on each in its own thread. A read is the responsibility of the thread that owns the connection — it keeps polling: is there a read? is there a read? — or does a blocking read, or an io_uring read, whatever you use. The threads do that job now. So that's one model.

Let's recap. We talked about one way: have one thread do everything — accept the connections and do the work. Doesn't scale well. Another way — memcached: have one thread accept all the connections, but hand those connections off to worker threads and let the threads do the work. What's the problem with that design? The problem is that not all connections are equal. What does that mean? A client connected to my application might be greedier than other clients: one client might send very heavy requests, another might send lightweight requests, another might just flood me with requests that are tiny. They are not equal, which means you can end up with one thread that's badly overloaded while other threads have connections but are relaxed — just chilling, sitting there doing nothing or doing minimal work. You've spent memory spinning up all these threads, but those threads aren't doing much. Why is this the case? Because of the problem with multi-threading we talked about at the start: there is no shared knowledge between these threads — it just doesn't exist. So you end up with unfairness — and this world we live in is very unfair, my friends: one thread might do 80% of the work while the other threads are sitting by the water cooler, drinking and chatting and having fun.

So here's a third model. Let there be one listener thread, and let the app be multi-threaded, but here's how we're going to do it: that listener thread is responsible for accepting the connections, and we keep the connection array in the main thread. Isn't that just the first model? Nope — wait a second. Since we're trying to solve this load-balancing problem, what if we do this: the listener accepts the connections, holds all the file descriptors, and also reads from all the connections — but it does not process. It reads the requests: oh, you want GET /, this one's a GET /api, this one's a GET /whatever — so it has visibility into the requests. And now that it can see the requests, it distributes requests to threads, not connections: hey, I have a request, I think this one's going to be expensive — go, thread. Here's another request — you there. The threads have no clue about connections here; you just send them requests: process this, process this, process this. Now we've split the work by request rather than by connection. That's a beautiful design — I like it a lot.
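The plumbing for "send requests to threads" is typically a shared work queue: the listener pushes parsed requests onto it and workers pop them off. A minimal sketch, assuming a fixed-size ring buffer and a made-up request_t type — the dummy main just feeds it fake requests so it runs standalone:

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define QSIZE 256

typedef struct { int conn_fd; int req_id; } request_t;  // hypothetical parsed request

static request_t q[QSIZE];
static int head, tail, count;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

// the listener calls this after reading a full request off a connection
static void enqueue(request_t r) {
    pthread_mutex_lock(&qlock);
    q[tail] = r; tail = (tail + 1) % QSIZE; count++;
    pthread_cond_signal(&not_empty);          // wake one idle worker
    pthread_mutex_unlock(&qlock);
}

static void *worker(void *arg) {
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (count == 0)                    // idle by the water cooler
            pthread_cond_wait(&not_empty, &qlock);
        request_t r = q[head]; head = (head + 1) % QSIZE; count--;
        pthread_mutex_unlock(&qlock);
        printf("worker %ld processing request %d\n", (long)(intptr_t)arg, r.req_id);
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (long i = 0; i < 4; i++)              // a small fixed pool of workers
        pthread_create(&t[i], NULL, worker, (void *)(intptr_t)i);
    for (int i = 0; i < 20; i++)              // stand-in for the listener's read loop
        enqueue((request_t){ .conn_fd = -1, .req_id = i });
    sleep(1);                                 // let the workers drain the queue
    return 0;
}
```

The queue itself is shared state, so it needs the very mutex discipline from earlier — that's the price of coordination the episode mentions next.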
And now we've distributed the load, because if there's a thread doing a lot of work, the main thread knows about it — it knows it's busy because it's talking to it. You can argue that this talking is itself part of the problem — there's exchange between threads, and you pay a price; nothing is free. But it talks to them and sends requests: hey, you're busy; hey, here's a thread that's not doing anything — get back to work, here's some work, no more sitting next to the water cooler. So load balancing is solved. That's an interesting solution; I like it a lot. I forgot which app uses that design, though.

Here's another one — a fourth model. We keep talking about one listener thread; why do we have only one listener? Why don't we have multiple processes or threads listening on the same port? Ha — we can't. Have you seen this error before? You listen on port 80, then try to listen on port 80 again, and the other app gets: hey, that port is already in use. That was by design — you cannot have two processes listening on the same port. But if you know what you're doing, you can flip a switch and let the operating system know: it's cool, OS, I own both of these puppies. You can spin up two threads listening on the same port by turning on a socket option called SO_REUSEPORT — reuse port. Now you have multiple listeners on the same port — you can have ten threads listening on the same port, all of them calling accept, all of them looping. Now the throughput of accepting connections is way higher; you don't have a single thread accepting connections. Because if a flood of clients connects at the same time, you're going to have trouble accepting connections — we talked about that: the accept queue might fill up while a lone thread isn't fast enough to drain it. With this, you scatter the load: all of the threads are listening at the same time, all of them are accepting connections — parallel connection acceptance — and whatever connection each one accepts is its loot: you take care of it, it's yours, you process it, you do whatever you want with it.

Proxies like Envoy support this; HAProxy supports it; even NGINX supports it. You use this when you have a very busy backend — an API gateway, a load balancer, a layer 4 reverse proxy (or layer 7, doesn't matter): a gateway like that is going to have a ton of connections, so you need to accept connections as fast as possible. You could still hand them off to other threads — that earlier model — but then you're going to have a thread explosion.
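A sketch of how that switch gets flipped (mine, and Linux-specific — SO_REUSEPORT landed in kernel 3.9; real proxies wrap this in far more machinery): each worker process creates its own socket, sets SO_REUSEPORT before bind, and the kernel spreads incoming connections across all the listeners.

```c
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    for (int i = 0; i < 4; i++)        // four workers, one per core, say
        if (fork() == 0) {             // child: its own listener on the same port
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int on = 1;
            // without this, the second bind() to :8080 fails with EADDRINUSE
            setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof on);

            struct sockaddr_in a;
            memset(&a, 0, sizeof a);
            a.sin_family = AF_INET;
            a.sin_port = htons(8080);
            a.sin_addr.s_addr = htonl(INADDR_ANY);
            bind(fd, (struct sockaddr *)&a, sizeof a);
            listen(fd, 1000);

            for (;;) {                 // every worker runs its own accept loop;
                int cfd = accept(fd, NULL, NULL);          // the kernel picks
                printf("worker %d got a connection\n", i); // which listener
                close(cfd);                                // gets each one
            }
        }
    for (;;) pause();                  // parent just keeps the children alive
}
```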
And that brings us to a fifth model, which I like: back to basics, back to the original model — a single, beautiful thread that listens and does the work. As I said, you're not using the power of your multiple cores — or am I? What if I don't even want to listen on the same port? A single-threaded app, so simple: that's my job, that's my app. My app becomes so elegant, because it's a single thread; it doesn't have this mumbo jumbo of threads and connections and loops and coordination. None of that — a single thread. You might say: Hussein, single core? You're not going to take advantage of your 16-core AWS instance here. Well, let me tell you — I have this beautiful thing called Docker. I put my app in a container and spin up a hundred containers of my application, all of them on different ports, and let them do the work. In this case, can I have two containers listening on the same port? I wish; if that's possible, it would be really good — let the operating system handle it. I suppose you can; I've never tried it, but that would be really cool. But even if not, I can just add an iptables rule that says: hey, if someone connects to port 443 or 80, load-balance them across these guys — and you have processes running on ports 81, 82, 83, just as an example. Now your app didn't change, but you've taken advantage of your single machine's cores while keeping the application a simple single-threaded app, literally just multiplied. I like this design; I like it a lot.

You might say: one app might receive more load than another. Then you might add another piece of logic on top, like a layer 4 proxy that controls the distribution — maybe you can do that, though of course it becomes kind of a single point of failure; keep it simple, make it a NAT-level layer 4 proxy, I don't know. I just like that fifth model — it seems so elegant and simple. Of course, nothing is free; I'm pretty sure it has its own problems. But simplicity — going back to basics, having my app be simple — is a game changer, given that, of course, you write your app in a stateless-ish way.

Yeah — I just threw an Arabic word in there. When I'm tired, specifically after a long day like today, my English juice depletes and I start throwing in Arabic words, falling back to my native language. I work all day, and by the time it's 6 p.m. I can't talk English anymore — this is just me, and it's an indication that I have to end this video.

All right guys, I hope you enjoyed this video. I like this stuff; I like it a lot, and I'm learning a lot. If you enjoyed this kind of content, consider becoming a member of this channel to support the show, check us out on Spotify and Apple Podcasts if you prefer to listen, and check out my courses — they're in the same realm: network.husseinnasser.com for a discount coupon. Learn the fundamentals of network engineering, because anything that comes on top can be derived from its basic first principles. Hope you enjoyed this episode — I'm going to see you in the next one. You guys stay awesome. Goodbye.
Info
Channel: Hussein Nasser
Views: 63,278
Keywords: hussein nasser, backend engineering, multi threaded application
Id: CZw57SIwgiE
Length: 49min 29sec (2969 seconds)
Published: Thu Sep 01 2022