YOLOv5: How to Train a Custom YOLOv5 Object Detector | Official YOLOv5

Video Statistics and Information

Captions
Let's learn how to train a custom YOLOv5 object detector. I will go over a Google Colab notebook so you understand the entire process step by step, and we will learn the basic ideas of transfer learning and fine-tuning for object detection. There are not one but five YOLOv5 models. YOLOv5n, where n stands for nano, is the smallest in the family and is meant for edge, mobile, and IoT solutions. YOLOv5s, where s stands for small, has about 7.2 million parameters and is ideal for running inference on the CPU. YOLOv5m, where m stands for medium, has 21.2 million parameters; it provides a good balance between speed and accuracy and is therefore very well suited for many datasets and applications. YOLOv5l, where l stands for large, has 46.5 million parameters; it is ideal for datasets where we need to detect smaller objects. YOLOv5x, where x stands for extra large, is the largest among the five models, with 86.7 million parameters; it has the highest accuracy but is also the slowest. In this video we will first use YOLOv5s to train a model on a custom dataset, which we will download from Roboflow, the official dataset provider for YOLOv5. We will then repeat the same experiment with the YOLOv5m model, and finally we will freeze some layers of YOLOv5m and train only the remaining layers. You'll understand why we are doing all this as we proceed through the video. It's going to be a very exciting video and I promise you will learn a lot, so let's get started with the notebook that will teach you how to train a custom YOLOv5 detector on your own dataset.

We have taken a few shortcuts: we have kept the dataset size really small so that it is easy for you to try training the network. In the real world you would use a much larger dataset, but this notebook has all the ingredients you need; you just have to swap in your own, much bigger dataset. We have also created another video that could be very useful if you are gathering your own data; it is linked in the description of this video, and it explains a lot about various datasets, so please check it out if you are planning to create your own dataset for YOLOv5.

Now let's dive into the code. Here we are doing some standard imports: matplotlib, which is a plotting library; cv2, which is OpenCV, a computer vision library; NumPy, which forms the basis of a lot of scientific computing and is needed for linear algebra and other matrix operations; requests, simply to download some data from the internet; and random, the random number generator. Before we do anything else we have to seed the random number generator so that we can reproduce the same results over and over again. If you don't set a seed, your results will not be reproducible and each time you run this notebook you will see a different result. We always choose the seed to be 42; you can choose any number, but if you specifically want to know why we chose 42, look up the answer to the ultimate question of life, the universe, and everything. So we fix the seed to 42 and assign it to NumPy's random number generator as well. Once you have started this Colab session, if you do ls you will simply see sample_data; this is basically a blank Colab instance. We then set a couple of variables: TRAIN to True and the number of epochs to 25.
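The setup just described might look roughly like the cell below. This is a minimal sketch; the exact variable names (SEED, TRAIN, EPOCHS) are assumptions rather than the notebook's verbatim code.

```python
import random

import cv2
import matplotlib.pyplot as plt
import numpy as np
import requests

SEED = 42                # the answer to life, the universe, and everything
random.seed(SEED)        # seed Python's built-in RNG
np.random.seed(SEED)     # seed NumPy's RNG so sampling is reproducible

TRAIN = True             # whether to run training in this session
EPOCHS = 25              # kept small for the demo; use far more in practice
```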
When we set the number of epochs to 25, we are telling the system that we want all the training data to pass through the network 25 times. The training data is actually passed in smaller batches of, say, 16 or 32 images at a time: 16 images are randomly selected and passed through the network, the gradients are calculated and the weights are updated a little, then another 16 are chosen, and so on. This process is repeated until all the images in the training set have passed through the network once, and when that has happened we call it one epoch. We are saying: keep running this training for 25 epochs. In practice you would use a much larger number of epochs, say 100 or 200, to see how the training progresses; we have chosen 25 here purely for demo purposes, but in real life use a much larger number. There is also a way to keep training until the metrics stop improving, but we have chosen a fixed number of epochs here.

Next we download a dataset from Roboflow: the Vehicles-OpenImages dataset. If you click on this link you will arrive on a page which tells you that there are 627 images and 1194 annotations, and that there are five classes: car, bus, motorcycle, truck, and ambulance. Of course this dataset is not perfect; there is a class imbalance problem, but we are just using it for demo purposes. If you actually wanted to build an object detector for these classes, 627 images are not enough and you would need a lot more, but it is a good dataset to try things out on. If you want to know what to be careful about in downloaded datasets, or in datasets you have prepared yourself, please check out the video about common pitfalls and mistakes people make when preparing datasets, linked in the description section. It is absolutely worth watching; in fact, I can easily say that when training a custom object detector, or any AI model, the most important thing is the data. You have to clean up your data; a good data scientist is a good data janitor.

Just to prove this point: as you can see here, there is code which downloads this dataset from Roboflow, unzips the file, and then removes the zip file. That should be all you need, but you will notice there is some additional code here. Why does this code exist? Download the dataset and tell us in the comment section why this extra code is needed; you will also appreciate my earlier point that you should always download and inspect your dataset manually rather than jumping in and using it without analyzing carefully what it contains. If you answer correctly within the first two weeks you will get a chance to win a copy of Deep Learning with Python by François Chollet, the definitive book for learning Keras and TensorFlow; we will randomly select a winner from everyone who answers correctly in the comments within the first two weeks.
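The download-and-unzip step just described might look roughly like this. The URL below is a placeholder, not the actual Roboflow export link, and the extra mystery code the video asks about is deliberately not reproduced here.

```python
import os
import zipfile

import requests

# Placeholder URL -- the notebook uses a Roboflow download link for the
# Vehicles-OpenImages dataset; substitute your own export link here.
DATASET_URL = "https://example.com/vehicles-openimages.zip"
ZIP_NAME = "dataset.zip"

if not os.path.exists("train"):              # skip if already extracted
    with requests.get(DATASET_URL, stream=True) as resp:
        resp.raise_for_status()
        with open(ZIP_NAME, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    with zipfile.ZipFile(ZIP_NAME) as zf:    # unzip the archive
        zf.extractall(".")
    os.remove(ZIP_NAME)                      # remove the zip to save space
```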
In this section you just see the files being extracted and unzipped. Once you have the dataset, you will see that it has the structure of a standard YOLOv5 dataset: it consists of a YAML file which contains all the information about the data — where the training set is located, where the validation and test sets are located, how many classes there are, and what the class labels are. Let me just go over this YAML file so you understand the structure. It says that the training images are located in one directory and the validation images in another; there are five classes (nc stands for number of classes); and the names of these classes are listed. That is what the YAML file does. The location of the annotation files is obtained automatically by replacing "images" with "labels" in these paths: there is a labels directory, and inside it a text file containing the annotations, i.e. the bounding boxes and classes. We have gone over this data format in detail in another video, linked in the description section, so you can get a full picture of what a YOLOv5 dataset looks like.

Next we write some functions for visualizing our data. We know that the class names are ambulance, bus, car, motorcycle, and truck, and we define five colors so that the bounding boxes of each class are drawn in a different color. We have a function which takes a YOLO bounding box and converts it to a regular bounding box. There are various bounding box representations: YOLO gives the coordinates of the center of the bounding box plus its width and height, and here we convert that format to the top-left corner and bottom-right corner of the box; that is why we compute xmin, ymin (the top-left corner) and xmax, ymax (the bottom-right corner). Then we have a utility to plot the bounding boxes. These boxes are in YOLO format, so the very first thing we do is pass them through this yolo2bbox function, which converts them to the xmin, ymin, xmax, ymax format we just discussed (see the sketch at the end of this section). YOLO bounding boxes are also normalized by the image width and height, which means that to get actual pixel coordinates you have to multiply the x coordinates by the width of the image and the y coordinates by its height. So we get the actual xmin and xmax by multiplying by the width, and ymin and ymax by multiplying by the height; the width of the box is then xmax minus xmin and its height is ymax minus ymin. The rest of the code is pretty simple: we create rectangles using OpenCV and display them, and we also display the class label as text inside a small solid box. You can go over this code; it is straightforward, standard OpenCV. If you don't know how rectangle and putText work in OpenCV, please check the documentation. Finally, we have a utility function, plot, which takes the path to one of the sets (training, validation, or test), the path to the corresponding labels, and the number of samples we want to display; it automatically takes the data, randomly samples however many images you ask for, and displays them. You can see its usage here: it takes the image path train/images/*, the labels path train/labels/*, and four samples, so it randomly selects four images from the training set and displays them with their bounding boxes.
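Here is a minimal sketch of that yolo2bbox conversion plus the denormalization step; the function body is reconstructed from the description above rather than copied from the notebook.

```python
def yolo2bbox(bboxes):
    """Convert a YOLO box (x_center, y_center, w, h), all normalized to [0, 1],
    into normalized corner coordinates (xmin, ymin, xmax, ymax)."""
    xmin = bboxes[0] - bboxes[2] / 2
    ymin = bboxes[1] - bboxes[3] / 2
    xmax = bboxes[0] + bboxes[2] / 2
    ymax = bboxes[1] + bboxes[3] / 2
    return xmin, ymin, xmax, ymax

# Denormalize to pixel coordinates for, say, a 640x480 image:
# multiply x by the image width and y by the image height.
img_w, img_h = 640, 480
xmin, ymin, xmax, ymax = yolo2bbox((0.5, 0.5, 0.25, 0.5))
xmin, xmax = int(xmin * img_w), int(xmax * img_w)
ymin, ymax = int(ymin * img_h), int(ymax * img_h)
print(xmin, ymin, xmax, ymax)        # -> 240 120 400 360
print(xmax - xmin, ymax - ymin)      # box width and height in pixels: 160 240
```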
Now, if a box does not have enough space you may not see the label text, but looking at the color you still know which class it is; every class has a different color, and where possible the class label is also displayed. If you want a cleaner visualization, it can be useful to remove the class label altogether, since the color of the bounding box is already enough to tell you the class: just remove the putText part of this function and it will display only the bounding boxes.

Next we need some helper functions for logging our results, which means that while training is progressing we want to write results into a directory so that we can see how the training is going. If you look at this function, set_res_dir, it sets up a directory for storing the results. We store all our results inside runs/train, in subdirectories named results_1, results_2, results_3, and so on. First we count how many result directories already exist, i.e. how many trainings have completed, and then we set the new results directory to that count plus one. This way the count keeps incrementing: you can keep training with different parameters, a new results directory is created each time, nothing is overwritten, and you can always go back and check the results of earlier runs. That is what this piece of code is doing.

Next we set up TensorBoard. The function monitor_tensorboard loads TensorBoard. While training is running we want to visualize how it is progressing, whether the accuracy is going up or not; if it is not, we should stop and restart with different hyperparameters. To do this in real time we need something like TensorBoard. There are other options; another utility people use is Weights & Biases, which is a very useful tool but requires a login, so for this purpose we are going to use TensorBoard. TensorBoard is very popular and is the default for TensorFlow and Keras applications, but it is also available for PyTorch, so we can use it right here. Once TensorBoard is loaded we simply point it at the log directory, runs/train, where we are storing all the results; if there are multiple runs there, TensorBoard takes the graphs from all of them and plots them in different colors, and we will see in a moment how that works.

Next we clone the YOLOv5 repository. The company that released YOLOv5 is called Ultralytics, and this is their GitHub repo, so it is a very standard step: we git clone it, which downloads all the files, and then we go into the yolov5 directory; you can see what the directory structure looks like. Then we simply install the dependencies with pip install -r requirements.txt, which is pretty standard and installs everything required by YOLOv5. After the installation we run monitor_tensorboard.
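A hedged sketch of this bookkeeping and setup follows. The helper name set_res_dir and its body are reconstructed from the description above (treat them as assumptions), while the cell magics and shell commands in the comments are the standard Colab/IPython way of doing what was just described.

```python
import os

def set_res_dir():
    # Count existing runs under runs/train and name the next one results_<N+1>,
    # so every training run gets a fresh directory and nothing is overwritten.
    run_root = "runs/train"
    count = len(os.listdir(run_root)) if os.path.exists(run_root) else 0
    res_dir = f"results_{count + 1}"
    print(f"Results of this run will go to {run_root}/{res_dir}")
    return res_dir

# In the Colab notebook, TensorBoard and the YOLOv5 repository are set up with
# cell magics and shell commands along these lines:
#   %load_ext tensorboard
#   %tensorboard --logdir runs/train
#   !git clone https://github.com/ultralytics/yolov5.git
#   %cd yolov5
#   !pip install -r requirements.txt
```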
As you can remember from the code, monitor_tensorboard loads TensorBoard and then looks inside the runs/train directory to see whether there are any runs; because there are none yet, it says that no data could be loaded. All right, finally we are ready to train our own model. Before we do that we set the results directory; as we know, this auto-increments, so calling set_res_dir gives us a fresh results directory. To train a new model we run the training script train.py, which is provided by YOLOv5, and it takes the following arguments. First we need to provide the path to the dataset file, data.yaml, and the weights of the pre-trained model we are going to start from. YOLOv5s is a small model that is a great choice when you want to run the model on a CPU and your problem is not very complex, so we are going to use that model and the pre-trained weights associated with it as our starting point. These pre-trained weights were generated by training YOLOv5 on a completely different, publicly available, very large dataset, so we learn the features from that dataset and get those learnings for free. Using a pre-trained model speeds up convergence: we have to train for far fewer epochs to get the same results than if we started from scratch, so in most applications it is a very good idea to start from pre-trained weights rather than from scratch. So we are starting with YOLOv5s. Next we specify the size of our images: --img 640 says that if the images are not 640 by 640, resize them to 640 by 640; this is the size we have decided to train at, though you could also use 320, for example. The number of epochs is 25, as we set earlier; in real-world applications you would let it run for 100, 500, however many epochs it takes until you are not getting any more benefit from the training process, but here we keep it at 25 so that these experiments are quick and you can run the notebook without having to wait for days. Next is the batch size, which we have set to 16.
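Putting the arguments discussed so far together (plus the results directory name, covered next), the full training cell might look like this. The train.py flags are standard YOLOv5; the path to data.yaml and the RES_DIR variable are assumptions about this notebook's layout.

```python
RES_DIR = set_res_dir()   # e.g. "results_1", from the helper sketched earlier

# Colab/IPython substitutes {RES_DIR} into the shell command below.
!python train.py --data ../data.yaml --weights yolov5s.pt --img 640 --epochs 25 --batch 16 --name {RES_DIR}
```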
For many object detection problems a batch size of 16 or 32 is a very reasonable choice. As I mentioned before, the batch size is the number of images we pick from the training set at a time to calculate gradients and update the weights of our model. Finally, we tell it where to store the results: we pass --name and set it to our results directory. That's it; we let it train and see what happens. Here you can see that it is going to store the results in results_1, which was created automatically for us by the directory code from earlier.

We can also see that the training script outputs a bunch of things. It says training has started with these options: the weights are for YOLOv5s, as I mentioned before; the data.yaml file is here; and the hyperparameters used for training are read from this file. If you want to change any of the hyperparameters you can edit that file, but usually you will not need to. There can be instances where the learning rate is too high, in which case the model will not train (we will see in a bit how to tell when a model is not training), and then you may want to lower the learning rate, given by lr; a sketch of how you might do that follows below. The output also simply lists the options we chose: epochs 25, batch size 16, image size 640, and a whole bunch of other settings you do not have to worry about right now. It then tells you a bit about the processor being used, a Tesla V100, and confirms that the repository, i.e. the code we are using, is up to date. It also reports the hyperparameters and the file they were picked up from; the important ones are the learning rate, momentum, and weight decay (we will probably do another video sometime about these hyperparameters), but most of the YOLOv5 defaults should work very well for you. It also mentions Weights & Biases: Weights & Biases is a visualization tool you can use, but you need to go to the Weights & Biases website and register for an account. Fortunately, even if you have not registered, YOLOv5 still lets you use TensorBoard, which is built in; you don't have to install anything, it just works. It does mention that TensorBoard has started at this location, and it is also available in our notebook: since we already started TensorBoard, if you refresh it, it starts showing some results. We will come back to these graphs in a moment.

Now let's look at the rest of the output from train.py. It says it is overriding model.yaml: initially that file had the number of classes set to 80, because the model was trained on a dataset with 80 classes, and it has been modified to our five classes, which is great. Then you see the structure, the architecture, of the YOLOv5s model; it has a bunch of convolutional layers and so on. Don't worry about it; the details of the architecture are not important for training the model right now, but it does give you a summary.
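Before moving on: earlier I noted that if the loss refuses to come down you may want to lower the learning rate. A hedged sketch of one way to do that is below; the default hyperparameter file path depends on the YOLOv5 version (data/hyps/hyp.scratch-low.yaml is an assumption), while lr0 and the --hyp flag are standard.

```python
import shutil

import yaml

# Copy the default hyperparameter file and lower the initial learning rate lr0.
shutil.copy("data/hyps/hyp.scratch-low.yaml", "hyp_custom.yaml")  # assumed path
with open("hyp_custom.yaml") as f:
    hyp = yaml.safe_load(f)
hyp["lr0"] = hyp["lr0"] / 10          # e.g. 0.01 -> 0.001
with open("hyp_custom.yaml", "w") as f:
    yaml.safe_dump(hyp, f)

# Then point training at the modified file:
#   !python train.py --data ../data.yaml --weights yolov5s.pt --hyp hyp_custom.yaml ...
```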
Coming back to the model summary: you can see that it has 270 layers and about 7 million parameters. There are other things we are not going to go over in this notebook; for example, we are using the SGD optimizer, and there are other optimizers such as Adam. We cover some of these concepts in our OpenCV deep learning courses, Deep Learning with PyTorch and Deep Learning with TensorFlow and Keras, but for the purposes of this notebook you can just take it that these things are used internally by YOLOv5 to train the network, and the good news is that most of the defaults it has chosen work very well; you don't need to touch or change them, and you will still get a reasonably performing model.

Now let's look at the training process. The training has started and you can see it going over the first epoch and then the second, and some of these numbers are changing. Three of them are losses; losses mean errors, and there are three kinds. There is an objectness loss (whether something is an object or not), a localization or box loss (the location of the box could be wrong), and a classification loss (the location might be correct but the object inside the box is classified wrongly). We need to get not only the location right but also the object class, so there are these different kinds of losses or errors, and we want all three of these numbers to come down. Over the epochs they should: it was 0.1 here, came down to 0.07, then 0.074, 0.073, and so on. As long as they keep decreasing, that is a great sign. You can also see that an epoch is made up of multiple mini-batches; one of these columns tells us how many labels are in a particular mini-batch, and it keeps changing because we randomly select 16 images each time and those 16 images may contain different numbers of labels. The image size stays pretty much the same. The log also outputs the number of images and labels used, along with precision and recall; both of these numbers should go up, since we want high precision and high recall. mAP is basically a measure of quality, so it should also go up. There are two different kinds of mAP estimates here; we are not going to go over them, but you can just watch mAP@0.5, which should go up as the training progresses.

We can also check the training progress visually if we go back to our TensorBoard. First of all, there are three kinds of losses, as I mentioned: the box loss is going down, the classification loss is going down, and the objectness loss is also going down. That's great; we want those losses to go down, and we want the precision and recall values to go up. Similarly, we have the curve called mAP_0.5, which is mAP with an IoU threshold of 0.5; the exact definition is not important here (we will cover it in a separate video), but for the purpose of this video you can think of it as a measure of accuracy, so the mean average precision should keep increasing. There is also another mAP defined over multiple IoU thresholds; again the specifics are not important. All we need to know for the purposes of training right now is that both of these curves should go up as training proceeds over the epochs.
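Besides TensorBoard, recent YOLOv5 versions also write a results.csv inside the run directory, so you can plot the same curves yourself. A hedged sketch is below; the run path is an assumption, and the column names vary slightly between releases and are often padded with spaces.

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("runs/train/results_1/results.csv")
df.columns = [c.strip() for c in df.columns]        # tidy padded column names

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(df["epoch"], df["train/box_loss"], label="train box loss")
ax1.plot(df["epoch"], df["val/box_loss"], label="val box loss")
ax1.set_xlabel("epoch")
ax1.legend()
ax2.plot(df["epoch"], df["metrics/mAP_0.5"], label="mAP@0.5")
ax2.set_xlabel("epoch")
ax2.legend()
plt.show()
```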
The x axis in these plots is the number of epochs, and you can see that the curves have still not saturated, so the model could do much better; we should have let it run for 100 epochs or so, until the metrics start saturating, which is when we know we are not getting much value out of additional training. It had clearly not reached that kind of saturation here, but as I said, this video is for demonstration purposes only; in the real world you would use a much higher number of epochs. The other thing to keep in mind is that we want the losses to come down. One way to know that the training is not working is that the loss will not come down, and one common way to fix this is to reduce the learning rate: if the learning rate is too high, training sometimes does not converge and the loss does not come down nicely. The training loss needs to come down, but the more important thing is to look at the validation loss. As we know, the validation set is data the model was not trained on, so we want to see how it does on that set. If you open this drop-down it exposes another set of graphs, and you can see that the validation loss is also coming down; it is sometimes not as smooth, but it is coming down nicely, and that is what we want: the training loss coming down and the validation loss following it. Otherwise, when a model has overfit, which means it has essentially memorized the answers from the training set, you will see that the training loss has come down but the model does not work well on the validation set; it has just overfit on the training set, it has not really learned to generalize, and it does not do well when given a new set of images. That is not good and we have to fix it, because ultimately what matters is how well the model performs on unseen data. It does need to perform well, even exceptionally well, on the training data, because that is what we are training on and the losses should come down substantially, but the real test is whether it works on the validation set. A very good blog post about this is on our website, learnopencv.com; look for the bias-variance trade-off in machine learning and you will find a good article about these trade-offs, what overfitting is, what underfitting is, and so on.

OK, now let's come back to the screen where the model has completed training. It says it completed the 25 epochs in 0.096 hours, and it creates a summary of the results; you can see the directory where the results are stored. The first thing to note is that it saves two different models for you: the first is last.pt, the weights at the last epoch, and the second is best.pt, the best weights seen during training. The last model is not always the best one, because the last model can actually be worse than an earlier one; that is why the best model is kept around as well. The summary is also very educational to look at: we know that the number of images in the validation set was 125 and the number of labels was 227, and it reports the precision, recall, and mAP values, both for all classes together and for every class separately. For example, out of the 227 labels, cars were the dominant class, and we already know this dataset is imbalanced.
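If you want to regenerate this per-class table after training, YOLOv5 ships a val.py evaluation script; a hedged example is below. The script and its flags are standard YOLOv5, while the weight and data paths are assumptions about this notebook's layout.

```python
# Re-run evaluation of the best checkpoint on the validation set defined in data.yaml.
!python val.py --weights runs/train/results_1/weights/best.pt --data ../data.yaml --img 640
```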
Looking at the values for precision and recall for every class separately, you will know which class is not performing very well. For example, you can see here that the precision and recall values for the truck class are lower than for the rest of the classes; part of the reason is that the number of truck examples is low, so what we need to do here is get more truck examples if we want to improve the results. So we have seen that the model does reasonably well on the validation set, but just to get a visual sense of how it has done we can look at the results directly. This function, show_valid_results, takes the results directory and shows some of the results: YOLOv5 automatically stores these validation predictions as *_pred.jpg files, and all we are doing here is taking those files and displaying them, so let's run this.

OK, so now we have trained the model and looked at the results on the validation set. How do we do inference, i.e. how do we take this model and actually run it on a new image we are interested in? Here is the code for inference. It creates a new inference directory, and the real work is done by detect.py, the script that comes with YOLOv5 for detection. We need to provide it the weights (we have just trained a new model, and the weights are stored as best.pt, so we use those as our model), the source, i.e. the path to the image or images, and the inference directory in which all the results will be stored. We'll just run the code and see how it goes. Before we show the results: there is a piece of code here which creates a random array of 50 elements and uses it to pick 50 different pictures at random from a directory and display them; all these pictures were written out by the inference code, and this piece is simply displaying them. Now let's use it, and you will see the results coming up: this has been detected as a bus, this as a car, so the results look pretty reasonable. This one has been detected as an ambulance; maybe it is an ambulance, I cannot be 100 percent sure, but you can see that most of the results look reasonable. Sometimes, I think, this motorcycle has been missed, but if you train longer and with a larger amount of data you will easily be able to nail all these cases; this is just an example to show how to use YOLOv5. Finally, we call the inference function we wrote before to perform object detection on unseen data, which is stored in the directory test/images. We forgot to actually run this code block earlier, so if we run it now it produces the results for us: it takes the best weights we found, runs on this completely unseen data, and gives reasonable results.
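The inference call described above might look like the single Colab cell below. detect.py and its flags are standard YOLOv5; the weight path, source directory, and run name are assumptions carried over from this notebook's layout.

```python
# Run the freshly trained detector on the unseen test images and save annotated
# copies under runs/detect/<name>.
!python detect.py --weights runs/train/results_1/weights/best.pt --source ../test/images --name inference_1
```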
Next we repeat the same experiment with the medium model, YOLOv5m, as mentioned at the start, and one thing you notice right away is the GPU memory: this model requires more of it. The larger the number of parameters, i.e. the number of weights, the bigger the GPU you need, and that can be another reason to use a smaller model: you simply may not have a big enough GPU to fit everything in memory. These days 8 GB is pretty common and you can easily get a 12 GB GPU, but just in case you don't have one, you can use a smaller model.

So our training has concluded for the medium-sized model; now let's see how it compares to the small model. The easiest way to check is to go back to this TensorBoard and refresh the screen. As we know, results_1 was the small model, and results_2, which is in blue, is the new medium-sized model. The number you need to watch is mAP_0.5, and you can see that it is better for the blue model; it is doing substantially better for the same number of epochs. You can see the same kind of trend in the training loss: the blue curve should be lower, and it goes below the orange line in all these cases. That is expected; we are using a bigger model, so it is able to learn very quickly and learn better than a smaller model, so these are expected results. We can look at the validation set too: not only is the training loss smaller, the validation loss is also better in both cases. You can sometimes find erratic behavior in these curves, which is why it is very important to train for a long time and pick the best model you get. So all of this looks great.

Next we are going to freeze some layers of the medium model. The medium model is pretty big, with about 21 million parameters, and we have only a small amount of data, so we don't need to change all the weights in this model. We could freeze most of the weights and train, i.e. fine-tune, only some of the layers, and that may help us train faster because we are not updating all the weights. Let's see how to do that. YOLOv5 has made it very simple to freeze some layers and keep training the others. The command for training the medium-sized model remains exactly what we had seen before, but we can now freeze the first 15 layers using an extra option; we selectively say that we want to freeze these layers and train only the last few. This is a very common thing to do: the training iterations get faster, and hopefully we can even get better results, because with a very small training set we should not be using it to change all the weights that were learned on a much larger dataset; when the dataset is small we may actually be better off changing only the last few layers. This is a very common technique, and it allows us to retain much of the learning from the much bigger dataset that was used to produce the current weights, and to fine-tune only the last few layers. So let's run this and see what results we get. You can see here that it freezes all the weights and biases of the first 15 layers, and the other thing you will notice is that, because these weights are frozen, it doesn't need to use as much memory, so you can now use a bigger model even if you have a smaller GPU: here it is using only about four gigabytes of GPU memory.
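The frozen-layer training call might look like this. The --freeze argument is a standard train.py option; the data path, weights file, and RES_DIR variable are assumptions carried over from the earlier cells.

```python
RES_DIR = set_res_dir()   # e.g. "results_3" -- a fresh results directory for this run

# Start from the medium model's pretrained weights and freeze its first 15 layers,
# so only the remaining layers are fine-tuned on our small dataset.
!python train.py --data ../data.yaml --weights yolov5m.pt --img 640 --epochs 25 --batch 16 --freeze 15 --name {RES_DIR}
```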
That's great, because you are able to use a much more powerful model without needing that much memory for training. So our training of the medium model with some of the layers frozen has concluded; let's go and check out the results in the TensorBoard we set up before. Here you can see results 1, 2, and 3 for the three training runs we did; we just refresh TensorBoard to get the latest results. results_1 is in orange, results_2 in blue, and results_3 in red. Let's first look at the mAP, which, as I said, is a measure of accuracy: this run actually does very well compared to the blue curve, so the orange is the worst, then we have the blue, and then the red. Here we see a little bit of switching between the curves, but if we had trained for a longer period of time this one would likely have come out ahead. The same thing happens here: the red and blue curves are almost the same, but it is possible that, for this small amount of data, training for a longer period would let results_3, where we froze part of the model, give better results. The thing to note is that by freezing the weights we could use a bigger model and get all of its benefits, i.e. the accuracy, while training on a smaller GPU, and it also speeds up the training time quite substantially, so all of this is looking great. You can see that the training loss is actually not great in this case; the frozen model is not doing great on the training set, which is fine, because we have to check the validation loss, i.e. how well we are doing on the validation set, and you can see that it is actually performing much better there. One other thing to note is that we can also check the final results on the validation set. The mAP we got by freezing the layers, the final value over all classes, is 0.624, and if we go back and check our previous, fully fine-tuned medium model, its mAP was 0.656, so in this case freezing came out slightly behind. It is not always clear whether you should freeze layers or not; as a general rule of thumb, if the dataset is small, freezing makes a lot of sense, but you still have to experiment and see which results are better, and to do a valid experiment you have to make sure you train for a much longer period of time. So the mAP value for the medium model was 0.656, and finally, if you look at the mAP value for the very first experiment we did, it was much lower: 0.566. That brings us to the end of this video; I hope it was useful for learning how to train your own model on a custom dataset. That's all, folks. This is Satya Mallick signing off, your guide to the fascinating world of computer vision and AI. Thank you.
Info
Channel: LearnOpenCV
Views: 131,387
Keywords: YOLO, YOLOv5, Training Custom Dataset, Is YOLOv5 better than YOLOv3?, What is YOLOv5 object detection?, What architecture does YOLOv5 use?, Which Yolo version is best?, YOLOv5 Documentation, yolov5 paper, yolov5 github, yolov5 architecture, yolov5 tutorial, yolov5 tensorflow, yolov5 python, custom object detection training, How to Fine-Tune YOLOv5 on Multiple GPUs, How do I make my YOLOv5 faster?, How do you improve YOLOv5 accuracy?, How do you fine tune a model?
Id: Ciy1J97dbY0
Length: 48min 9sec (2889 seconds)
Published: Tue Jul 05 2022