Episode 3: From PyTorch to PyTorch Lightning

Captions
Okay, so for the next section we're going to convert this into Lightning, into PyTorch Lightning. At a high level, PyTorch Lightning is not really going to abstract anything; it's just going to organize the code and let you remove a lot of the boilerplate. And I can't wait to see how you can improve this stuff. Yeah, of course. Okay, so now we're going to convert this to PyTorch Lightning, and it's going to be literally mostly the same code; in fact, for the most part we're just going to lose code. First I need to install Lightning, and that's just pip install. Cool, so that will install Lightning. Okay, it's installed, and then I'm going to import all of the standard stuff you used in the last video as well. We need five ingredients. First, you need your model; you need to figure out what that is. Then you need your optimizer: what are we using to optimize? Then you need your data. Then you need your training loop; this is the magic, it's where you spend 90 percent of your time. At least with Lightning; otherwise it's not enlightening, and you spend 90 percent of your time doing other things. And then you have your validation loop; model, optimizer... I believe that's it. And if you do have a validation loop, the validation magic. Cool, so you need to find these elements in the system. To start, you begin with this class called the LightningModule. Let's call ours ImageClassifier, because we can make it general beyond this one classifier. In torch you had an nn.Module, and in fact you already have a ResNet here; you have this guy here, so let's go ahead and copy all that stuff. (My vim is stuck... okay, great.) We can do this two ways. I can literally copy this code in here, in which case this is just a ResNet; that's one way to do it. The second way is to make this an ImageClassifier that itself takes in a ResNet, so here we'd just say self.resnet and use the ResNet from up here. In Lightning it's a pl.LightningModule, but it's the same thing; you can use it exactly the same way as an nn.Module, so all you have to do is change this. Hold on, what is pl? Good point: pl is pytorch_lightning, so import pytorch_lightning as pl. So now this is a LightningModule, and you'll notice that if you init this, for the most part nothing happens; it's just a model. You can use it as a model; it's literally the same as an nn.Module. I can change it back just to illustrate the point that it's exactly the same thing, and you get the same functionality. The difference in Lightning, though, is that there are added methods you can implement; we talked about these ingredients, the model, the optimizer, the data, and so on.
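As a rough sketch of this starting point (following the 2020-era Lightning API shown in the episode; the ResNet body below is a stand-in single linear layer, since the real model comes from the previous episode):

```python
# !pip install pytorch-lightning   # run once, e.g. in a Colab cell

import torch
from torch import nn
import pytorch_lightning as pl


class ResNet(nn.Module):
    # stand-in for the model built in the previous episode
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.l1(x.view(x.size(0), -1))


class ImageClassifier(pl.LightningModule):
    # a LightningModule behaves exactly like an nn.Module until you
    # implement its added methods (training_step, etc.)
    def __init__(self):
        super().__init__()
        self.resnet = ResNet()

    def forward(self, x):
        return self.resnet(x)
```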
So let's first figure out what the optimizer is. You had an optimizer down here somewhere; where is that guy? Here, this one, and you have this parameters thing, so let's bring both of those in. Lightning has a method in the LightningModule called configure_optimizers, and in it you just tell Lightning what optimizer you want to use; in this case we want to use this one. Now you'll notice something weird: you're defining this within the model, so the parameters aren't really there. Instead, you get the parameters from self, because you're inside the model; you just delete that and you're done. It's the same thing, but you're within the module. In this case we're only using one optimizer, but if you were making a GAN or something you could have many of these. Next time, right? Yeah, exactly, and each one of them would get its own training loop. (Hold on, I sprayed things with my s key... sure, no worries. Okay, we're good.) So now we have the optimizer; we'll leave the data for last and implement the training loop first. This is called training_step: you get the batch, and then you get the batch index. Is that already a reserved method? This one? Yes, it is; it's part of the LightningModule. Again, a LightningModule is just an nn.Module, but we add methods to it, and this is one of those added methods. The training step is the magic; this is where 99 percent of the research happens. In this whole mess of training-loop stuff, you really only care about these four or five lines here. If you were to take your project and try to start a new project from it, you'd pretty much have to redo all of this, and the only thing that would really change is this part. So that's your training step, and we're just going to literally copy-paste it in here. Cool, it's the same thing; we don't really have to change anything, it's all basically the same, except the model is self now, so we just use self. Then you have your loss; you have to define your loss, and in our case the loss was this cross-entropy. You can do it two ways; we can do something like loss equals the same thing, and then self.loss, and we're good. The other difference is that in Lightning you don't need to do any of this .cuda() stuff, so you can get rid of all of it; Lightning will just put things on the correct devices for you, so you never have to worry about that. And I'm going to solve our logits problem by just calling it logits. Cool; again, this is literally the same code you had.
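A sketch of those two methods, assuming SGD since the exact optimizer settings come from the previous episode and aren't restated here:

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class ImageClassifier(pl.LightningModule):
    # ... __init__ / forward as above ...

    def configure_optimizers(self):
        # the parameters now come from self, since we're inside the module
        return torch.optim.SGD(self.parameters(), lr=1e-2)

    def training_step(self, batch, batch_idx):
        # the body of the old training loop; no .cuda() calls needed,
        # Lightning puts the model and the batch on the right device
        x, y = batch
        logits = self(x)
        loss = F.cross_entropy(logits, y)
        return loss
```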
You still need data, though, and Lightning has multiple ways of doing this, depending mostly on your use case. If you want to couple the model with the data for whatever reason, you can use a method called train_dataloader. If you do it that way, you end up coupling this thing with your data, so it really depends on your particular use case. In a lot of cases the reason you might not want to do this is that sometimes, if we're not doing MNIST but ImageNet or some other thing, you need to know the number of classes, and that's going to come from your data; so a lot of times it's useful to know what that is, and sometimes you just want to define it in here. Okay, so this is the same thing you had; I just pasted it, and then we have these splits. I'm going to leave this as it is for now and just return the train loader, because we're not actually going to use this val dataloader yet; I'll comment it out for now, and we'll keep this stuff here. We're not going to use a validation set at the moment, so we'll just use the training one. And that should be it... I still need to return this loss. The way to think about it is that this little block here is what goes on inside that training loop; all of this stuff we copy-pasted from here. Now you need to send this loss back to the system to say: here, optimize this. And Lightning will automatically do all of that for you: it will automatically set your model into training mode, so that if you're using batch norm and dropout it works correctly; it will automatically put things on the correct devices; it will automatically do the backward pass, the optimizer step, appending items, tracking all that stuff. It removes all of this boilerplate, so you never have to wonder, did I remember to put my model into train mode or not? That's pretty nice, because it's automated; like in the previous video, you don't need to worry that you overwrote the wrong thing, or tracked the wrong accuracy, or whether your model is in eval mode or not. There are just so many details. (For validation there's also the bit about disabling gradients, but I'll talk about that when we get to the validation loop.) So now we've defined the LightningModule here, and it's literally the same piece of code; I think there's some alignment stuff, so let me fix that. Okay, so we have all of this: optimizers here, logits, great, and then you have your model, your ResNet. Hold on, what is the training step returning? Oh yeah, that's right: you need to return this loss, so you can literally just do that, and it will return the loss. You don't want to forget to detach stuff, right? Lightning will detach it automatically: you return it with the graph still attached, and Lightning automatically detaches it. Alternatively, you can return a dictionary with a key called loss, which is equivalent; loss is a reserved word. Yeah, correct. So these two are equivalent; I'm going to use the plain syntax for now, so I won't use the dictionary yet, I'll just return the scalar itself. You can leave the other one commented out; it's okay. Great, sweet. So I return that, I init my model, and hopefully I have no bugs. Great.
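A sketch of the coupled-data version; the batch size of 32 is an assumption, since the exact value comes from the previous episode:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import pytorch_lightning as pl


class ImageClassifier(pl.LightningModule):
    # ... methods from above ...

    def train_dataloader(self):
        # couples the data to the model; fine for an MNIST demo
        train_ds = datasets.MNIST('data', train=True, download=True,
                                  transform=transforms.ToTensor())
        return DataLoader(train_ds, batch_size=32)
```

Returning the bare loss tensor from training_step, as above, is equivalent in this API to returning the dictionary {'loss': loss}.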
Now I need to train this, and here's where Lightning helps a lot. The way to think about Lightning is that this module is what you care about in research; if you're building anything with AI or machine learning, this is what you care about, what is actually happening here. You don't necessarily care about how the thing gets trained; you don't care whether it's running on GPUs or TPUs or how all of that happens. That's all engineering. But sometimes I actually want control over, let's say, when I zero my gradients. That's a good point; say you want to override something like the backward step. You can go to the documentation and see which hooks PyTorch Lightning makes available: just Google it, pick the first link, and in the docs you can click through or use the search. The easiest thing is to search for what you want, so you can search for backward, since you want to override backward, and you'll find all of these hooks; okay, cool, I can override this method called backward. The beauty of Lightning is that you don't have to deal with most of the engineering, really any of it, but in case you want to, it's there; if you're doing more advanced research you can always find it. Of course. I'm going to remove this, though; we're not doing anything custom. Let's do things in order. Okay, great, so we're here; now this is where all the engineering effort comes in. This module is all your code, and you can have bugs in here, and that's okay; but in a deep learning project you usually have a ton of code, and the last thing you want to worry about is whether you overwrote the right losses, whether you called step before backward, whether you appended, whether you got the order right. You don't want to mess around with that stuff; you want it to just work and be tested. What you do want to mess around with is your idea. (And calling zero_grad right after backward... it's very nice that that's handled.) Exactly: you just say, I don't want to worry about any of that, I only want to worry about my idea. Maybe you have a quick idea you want to try, you're working on a research project or building something, and this lets you try it very fast; and when you're ready to go, it's still just a PyTorch module, so you can run it. The Trainer is the key here: the Trainer does all of that for you. You just create the Trainer and call fit on it: you pass the model in, call fit, and it starts training. (Okay, wait, configure_optimizers... right, I forgot to pass self. Look at that, basic engineering.)
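Kicking off training is then a sketch like this; note that hooks such as backward can indeed be overridden on the LightningModule, but their exact signatures vary across Lightning releases, so the docs for your version are the reference:

```python
# a minimal sketch of the training entry point
model = ImageClassifier()
trainer = pl.Trainer()  # the Trainer owns device placement, backward, stepping
trainer.fit(model)
```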
Now, instead of returning just that single scalar, I want to show how to log and how to show things in the progress bar as well, so we're going to use the dictionary syntax; these are equivalent, so I'll go ahead and use the dictionary because I want to show how to log and how to track progress and different things like that. So I'm going to return this instead: you put the loss under the loss keyword, then you init your model and you fit it, it starts training, and you see this nice progress bar. Now, Colab is going to have issues, so I'm going to pause this for a second: that progress bar updates too fast for Colab to keep up, and it will freeze the UI. So we're going to use a Trainer flag that slows the bar down; the training is the same, but the progress bar will only update, in this case, every 20 batches, and the reason is that I don't want to crash Colab. I'll start this, and now you'll see it's a little choppier, but at least it won't crash Colab. These metrics are estimates: you see my loss here, coming straight from that training loop, and anything you pass there will be shown. This will keep going, though, because the Trainer defaults to training for up to a thousand epochs. Obviously, if you need fewer epochs (we used five before), how do you do that? Good point: we can just set max_epochs, say max_epochs=5, for five epochs, and this will train for a bit. And we're on a GPU instance, I believe? We are, okay, great; let's turn on GPUs. When we had to adapt this for GPUs before, we had to do all this CUDA stuff: I don't even know where the model is anymore... yes, we had to call model.cuda(), for each example we had to call .cuda(), and here we had to make sure things were detached and on the right CPU to compute the accuracy. Let's not do all of that. In Lightning we just set this little argument called gpus to 1, and that turns on GPU training; you see it's available and we're using it. So you didn't have to change your code, and this is training... let's see... oh, the progress bar, what's going on... there we go, great. It took a bit to start, but here we go; it's a lot faster, and you're training on the GPU. Again, this model and this dataset aren't big enough to show a meaningful difference in speed, but if you had a huge dataset you would see a pretty significant one. Even so, I think it's already much faster, so yeah, it's using that GPU; and the beauty is that you didn't have to do anything to your code, you just trained. That was beautiful. Now, if you want to save this model, do you have to move it to the CPU and so on? No; here I'll show you that every single thing we just did is available under this folder that Lightning created, and you can change that to whatever you want. And you have all these versions: we ran this multiple times, and every run created a new version of the experiment, tracking everything from the hyperparameters to the losses and everything we logged. (We didn't actually log anything, so nothing showed up, but I'll show you how you could use this.) So now we have weights in here: if I look at the latest version, there's this checkpoints folder inside. Oh, you actually saved the model! Yeah, it already saved this checkpoint here. Wow.
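The Trainer flags used here, as they looked in this 2020-era API (later releases renamed some of them, e.g. GPU selection moved to accelerator/devices):

```python
trainer = pl.Trainer(
    max_epochs=5,                  # the default would run up to 1000 epochs
    gpus=1,                        # GPU training with no .cuda() in your code
    progress_bar_refresh_rate=20,  # redraw every 20 batches so Colab keeps up
)
trainer.fit(model)
# checkpoints and logs land under ./lightning_logs/version_<n>/
```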
Yeah, so now we have this checkpoint; it's saved automatically for you, and it's saving the best checkpoint, so you don't have to deal with any of that. And now we want to plot things, because before you were printing stuff to see progress and then saving it in this copy-paste thing here. It's saving the best checkpoint with respect to which loss? We only had one. Exactly: in this case it's only using the training loss, which is not great, because we're not cross-validating or doing validation; it will literally just save the minimum training loss, which in reality you wouldn't do. But let's do the logging first. You see this progress bar here? It just has the loss on it, and you wanted to calculate accuracy as well. We have a metrics package where all of this stuff lives: pytorch_lightning.metrics.functional, and from there you import accuracy; I believe this is a brand-new thing. Great, so now we have this accuracy function, and I'll show that accuracy as well. We go back to our training step, and to calculate the accuracy I just call accuracy, passing in my logits (I guess logits is here, yep) and then my y. Done; that's my accuracy, and it's better than all of this stuff here. That's my one-liner; I'm proud. It's a great one-liner, for sure. What's cool about the metrics package is that Lightning lets you train on more than one GPU: if I changed this number and had a big GPU machine, I could train on eight automatically, and the metrics will be calculated across all the GPUs as well, so you don't have to deal with any of that; it's just done for you automatically. (I've never used more than one GPU; I don't know how. Well, now you will.) Okay, so we calculate our accuracy, and we want to show it in the progress bar, so I'll show you how to do that first. I'm going to call this pbar, and it's just a dictionary. What do you want to call the metric? Train accuracy, sure. (What's this pbar? Progress bar. I see, okay.) So we'll call it the training accuracy, and the key to show it under is called progress_bar; I believe there's an underscore. And this is also a reserved word? Yeah, exactly. In this return statement you have three main reserved words: one is log, which I'll show next; then there's loss specifically; and if you want to show stuff in the progress bar, you have this progress_bar key. So we'll use that: I put the pbar dict under it, delete the old one, and the accuracy will show up in the progress bar. If we did everything right, that should just work. Great: now we have this accuracy set up and it's printing to my progress bar, so I don't have to do anything, and I can see the accuracy as it happens.
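A sketch of the updated training step; the import path is the old pytorch_lightning.metrics one from the video (that package later became torchmetrics), and 'loss' and 'progress_bar' are the reserved dictionary keys being discussed:

```python
import torch.nn.functional as F
import pytorch_lightning as pl
from pytorch_lightning.metrics.functional import accuracy  # 0.9-era path


class ImageClassifier(pl.LightningModule):
    # ... other methods as above ...

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = F.cross_entropy(logits, y)
        acc = accuracy(logits, y)  # the one-liner replacing manual argmax math
        pbar = {'train_acc': acc}
        # 'loss' and 'progress_bar' are reserved keys in this return dict
        return {'loss': loss, 'progress_bar': pbar}
```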
Lightning's metrics package, which is built by some of our core engineers and core developers, implements most of the key metrics: F1, ROC, that kind of stuff. And I think the beauty of Lightning is that it's a community project, with contributions from people all over the world; most of them are PhD students, researchers, or future scientists, and we have people from Facebook, from NVIDIA, from all these different companies contributing different things. It's really helpful to have all of their input and work feed into Lightning. You can think of it as a joint research project: instead of everyone in the world coding accuracy for themselves, you code it once for the whole world and everyone can use it, and the core team validates that it's correct and implemented well. But if you're wrong, then everyone's wrong! Yeah, exactly; if we're off, then everyone's off. It's a big responsibility, but we have very thorough tests to make sure things aren't wrong, and for accuracy and metrics like these we usually have people benchmark against something like NumPy or scikit-learn, so we know we're not off. Okay, so this model is training, and I guess we're getting about 96 percent accuracy in five epochs, which is pretty cool; that will definitely overfit, though, so let's get into the validation loop. We use validation to save the best weights, to stop training, and to track whether our model is overfitting or not. In the validation loop you wrote, notice that we had to do this .eval() stuff, then print a bunch of things, and then disable gradients; all of that is automated in Lightning, so it's literally the same code. The only real difference is that in the validation loop you're not learning anything: the point of validation is to save things, to plot things, or to do something where you evaluate on held-out data. So Lightning gives you two methods for this. You have this training_step, which did all the stuff you cared about; we're just going to add the validation loop, which is validation_step. It's the same concept as the training step, except the batch comes from the validation dataset. Since it's the same computation, we can literally call the training step instead: self.training_step, passing in the batch and the batch index; we're not changing anything about it. That gives us this little dictionary of results back, and I'm not actually going to do anything with it; I'm just going to return that dictionary of results. Perfect.
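The delegation is literally one line; a sketch:

```python
import pytorch_lightning as pl


class ImageClassifier(pl.LightningModule):
    # ... training methods as above ...

    def validation_step(self, batch, batch_idx):
        # same computation as the training step, fed validation batches;
        # Lightning wraps this in eval mode and torch.no_grad() for us
        return self.training_step(batch, batch_idx)
```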
Now, here's the difference: in the validation loop we generally don't want to plot metrics for every batch, because the batches are independent and that doesn't really make sense; what makes sense is to plot or show metrics for the whole validation set. When you want to know your accuracy, you don't want it for one batch, you want it for the full validation set. So what this is saying is: for every single batch in the validation loop, this part right here, I want to know the loss, or whatever you care about; in this case I care about the accuracy and the loss. So just return it: Lightning will cache these for all the batches, and when the validation epoch is done, you get all those outputs back in a hook called validation_epoch_end. This is your chance to take all those outputs; I'll call the argument val_step_outputs. I guess this is also a hook, as you said before? Yeah, exactly, these are hooks. What you get here is a list: you're going to get many of these result dictionaries, one for batch one, batch two, batch three, and so forth, and you iterate through them. Let's say I want to show my validation loss; I'm going to calculate it now, and it's going to be an average validation loss, because it's over the whole validation epoch. Like the other code, right? Yeah, exactly. So we loop through this: x for x in outputs, where x is this results dictionary here. And what do we want to pull out? The loss, sure, let's pull out the loss. That gives you an array of losses, and I want to take a mean or something like that, so first I need to put it back into a tensor; I build a tensor and then take its mean, and now I have my average validation loss. So the outputs went from the validation step into this epoch-end loop: validation_step feeds validation_epoch_end. And at the end of the epoch we also want to put stuff back into the system, so we can plot it or show it or whatever you want; so we use another dictionary here and return something. What you return here mostly doesn't matter; I could return, I don't know, 4; if you don't use the reserved keywords, things just won't be plotted. But there are a few reserved words. You asked how we were saving checkpoints: before, we were picking the minimum of the training loss, but there are these early-stopping and model-checkpoint callbacks, which I'll explain in a bit, and they're both looking for this keyword, val_loss. And if you're not returning that word, if it's missing, does it fall back to the other one, the loss? If it's missing, it won't do anything for early stopping; it will still save checkpoints based on the loss, as you're saying, but you won't get early-stopping behavior. Oh, so there is early stopping automatically integrated? Yes, exactly; I'll show that in a minute. But let's say I want to track my validation loss as my metric, so I pass in this average val loss. And also the accuracy, right? Yeah, so why don't we do that as well. Remember, we need to access the progress bar dictionary, that's this guy here, and then from it the training accuracy, train_acc. Great, so that's the average val accuracy. In this case, actually, I feel uncomfortable that we access the train_acc to compute the val_acc. Well, we named it that; if we didn't want that, we could have done it differently.
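A sketch of that aggregation, using torch.stack where the video rebuilds a tensor from the list (same result for scalar entries):

```python
import torch
import pytorch_lightning as pl


class ImageClassifier(pl.LightningModule):
    # ... other methods as above ...

    def validation_epoch_end(self, val_step_outputs):
        # val_step_outputs is a list with one dict per validation batch
        avg_val_loss = torch.stack(
            [x['loss'] for x in val_step_outputs]).mean()
        avg_val_acc = torch.stack(
            [x['progress_bar']['train_acc'] for x in val_step_outputs]).mean()
        # 'val_loss' is the key the checkpoint and early-stopping callbacks watch
        return {'val_loss': avg_val_loss,
                'progress_bar': {'avg_val_acc': avg_val_acc}}
```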
In the validation step I could have set results['progress_bar'] with a new keyword called val_acc for the same quantity; because we reuse the same loop, we pulled it from the training step and just kept the name train_acc. So then we'll delete these guys here. Yeah, thank you. Cool, so now we get rid of that dead code; okay, we'll delete it. So now we have this train_acc, which is here... okay, great, let's press on. So again, the only keyword that anyone cares about for early stopping or checkpointing is this one; you can change that word if you want, and I'll show you later how, but right now I want to show this on the progress bar. So let's make another pbar dict; we'll call the entry avg_val_acc and pass in this average val accuracy. I get this pbar here, and then, using the same reserved word, progress_bar, I send that information to the progress bar. And that's it: now I have a validation loop. There are so many details in deep learning, and we don't want you to get stuck in those details; we want you to focus on the research and the science. So let's write this val_dataloader now. In this particular case the model is still wedded to MNIST, but we'll keep it that way for a minute. In this train_dataloader I'm going to copy-paste the same code, because it's the same stuff; we're doing the same splitting work, but here I'm going to use the other split instead, so I need to enable this; now I'm using the training splits. MNIST in particular has two official splits in torchvision: the training split and the test split. We want to save the test set to see how our model actually generalizes, so for validation we take the training split and split it into train and validation. But right now we're making two different random splits, right? Yeah, exactly; actually, an easier thing is to just assign the splits to attributes like self.train, which will be a lot easier. Cool, so let's do that: I copy this guy here, return that val dataloader, and save those splits as attributes (see the sketch below).
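A sketch of that intermediate version; the 55,000/5,000 sizes are the conventional MNIST split (the video doesn't read out exact numbers), and the attributes are named train_ds/val_ds here rather than the video's self.train, since train() is already a method on every nn.Module. The video moves this splitting into setup() right after, which is also the safer place for it:

```python
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
import pytorch_lightning as pl


class ImageClassifier(pl.LightningModule):
    # ... other methods as above ...

    def train_dataloader(self):
        dataset = datasets.MNIST('data', train=True, download=True,
                                 transform=transforms.ToTensor())
        # one split, stored on self, so both loaders agree on it
        self.train_ds, self.val_ds = random_split(dataset, [55000, 5000])
        return DataLoader(self.train_ds, batch_size=32)

    def val_dataloader(self):
        return DataLoader(self.val_ds, batch_size=32)
```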
One thing to consider is that, because Lightning lets you use multiple GPUs, you now have to start caring a little about how you download the data. I'll give you an example: say you set this to, I don't know, eight GPUs, and you want to run on 32 nodes for some reason; this is a giant network now, and I'm going to train it on 32 times eight GPUs. Hold on, hold on: what are the nodes? Each node is a machine in a cluster, and each node has some number of GPUs on it. But how do you talk between machines? That's the bit Lightning handles: you just spin up the job and it will talk between machines. If you submit this job to a cluster, for example with SLURM, then as long as you specify in your SLURM script that you need 32 nodes and each node has eight GPUs, Lightning will know how to do that automatically. What's going to happen is this: if all those nodes share the same file system under the hood, then you want to download the data only once. Why would you download the data from, what is this, 256 GPUs? You'd have 256 downloads, and you don't want that; you want to download only once. Now, if I were doing something small, like one or two GPUs, I would leave this as it is and move on; but if you know you're going to do this distributed stuff, then you want to use this method called setup. Let me do it here: we're going to use prepare_data first, and then I'll show you setup. prepare_data says: hey, do whatever you want to happen only once. When we call this method, it's not going to run on every GPU; it runs on a single GPU. What's the difference between prepare_data and __init__? Isn't __init__ going to run once at the beginning? Well, __init__ happens here, before you put the model into the Trainer, whereas the dataloader stuff is called at the right times: you define the dataloader functions here, but the dataloaders aren't instantiated until you actually need them. Lightning does lazy loading there: it's not until you get to the validation loop that your validation dataloader is actually called, and not until you get to the training loop that the train dataloader gets called. So that stuff happens later. If you want to download your data, what I suggest is putting the download into prepare_data; you put that there, and you don't actually need any of the rest, you just download the data here. Then there's this self.val, self.train splitting stuff, and that's actually easier in this method called setup. Again, this is only needed if you really want to do multi-GPU stuff; if you don't care about any of that, don't worry about it, just define your train dataloader and that's it. But in the world where you are doing multi-GPU, note that this MNIST implementation from torchvision won't download twice: if you've downloaded it once, it will just load the dataset. So I don't need the download here, and we can do the splitting stuff in setup: this is where we do the train and val splitting, and in this particular case I just save those splits as attributes, so my setup is pretty trivial. And to your question, if you had transforms, things like randomly flipping an image or normalizing an image, you could put those in here and do that work here too. Cool, so we do the splitting in setup. And again, setup runs on every single GPU, so it's okay to assign state there, something like self.something; that's okay in setup, but it's not okay in prepare_data, because prepare_data only runs on one GPU. When we're doing multi-GPU training, what you're actually doing is setting up a copy of the model per GPU; that's what happens.
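A sketch of the final arrangement: downloads in prepare_data, state assignment in setup, and the loaders reduced to one line each:

```python
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
import pytorch_lightning as pl


class ImageClassifier(pl.LightningModule):
    # ... other methods as above ...

    def prepare_data(self):
        # runs once (not per GPU): downloads only, no state assignment
        datasets.MNIST('data', train=True, download=True)

    def setup(self, stage=None):
        # runs on every process/GPU: safe to assign state here;
        # transforms (flips, normalization, ...) could also be built here
        dataset = datasets.MNIST('data', train=True,
                                 transform=transforms.ToTensor())
        self.train_ds, self.val_ds = random_split(dataset, [55000, 5000])

    def train_dataloader(self):
        return DataLoader(self.train_ds, batch_size=32)

    def val_dataloader(self):
        return DataLoader(self.val_ds, batch_size=32)
```

Scaling up is then purely a Trainer change, something like pl.Trainer(gpus=8, num_nodes=32) under SLURM, with no edits to the module itself.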
So we have this self.train, self.val thing here, and now we can just pull those out: I don't need this anymore, so I delete it and pull my dataloader from those attributes, and the same for the validation one. So this is pretty much the same setup, we just split in a different way, but it will let you scale up to multiple GPUs, and multiple TPU cores as well, in a very safe way. There are obviously some nuances: prepare_data is called from a single GPU, which makes sense when you have multiple nodes that all use the same underlying file system. If that's true, this will work; but if your nodes each have their own file system, say five different machines with their own individual disks, Lightning will detect that automatically and download the data individually on each machine, so you have it there and you don't have to deal with that either. Okay, so we just went over converting our PyTorch code into PyTorch Lightning, and we saw so many benefits. In the next episodes we'll dive into more advanced topics, so stay tuned.
Info
Channel: Lightning AI
Views: 38,701
Id: DbESHcCoWbM
Length: 37min 12sec (2232 seconds)
Published: Fri Aug 07 2020