Docker For Data Scientists

Video Statistics and Information

Captions
Hello everyone and welcome to my new video. In this video I'm going to show you how you can use Docker to train machine learning and deep learning models, and how you can use Docker to expose an endpoint with Flask in order to serve the model you have trained. This is not an in-depth introduction to Docker; I'm just going to tell you a little bit about it, and I think that's all you need as a data scientist. Docker is huge, so if you want to go deeper, take a look at some other tutorials. Here I'm going to focus only on training machine learning and deep learning models and deploying them using Docker. So let's get started.

The first thing we need is data, something to train on, just for the purpose of this video; I'm not going into the details of model building here, it's all about Docker. I have already made a video on sentiment classification using a BERT model, so you can watch that one to learn how the model is trained and what the dataset looks like. The dataset I'm using is the same: the IMDB dataset of 50,000 movie reviews. It's a very simple dataset for what we are going to do: it has a review and a sentiment associated with that review. So let's download it. The downloaded file is called IMDB_Dataset.csv; let me rename it to train.csv. We have one column, review, and one column, sentiment, and we want to train a machine learning model on this data.

Since I have already done the BERT training, I'm going to reuse some code from there. In that project we created everything: the config file, the dataset, the engine, the model, the training script and the app. It's highly recommended that you go and watch that video and then come back to this one if you want to learn how the model is trained, but if you don't, that's perfectly fine; you don't need it for this tutorial anyway. So I copy over train.py, model.py, engine.py, dataset.py, config.py and app.py, and I hope everything runs as it is; I'm also going to make a small change in the training file, but let's do that later.

Now that all the files are here, let me give you a quick walkthrough, because you should know what they are about. train.py reads the training CSV, fills the missing values, does a bit of preprocessing, and splits the data into training and validation sets. Then it builds the bert-base-uncased model and sets up the optimizer and the scheduler.
It also wraps the model in DataParallel mode, which I'm going to remove just for simplicity, and it prints an accuracy score. One change I'm making here: use only 3,000 rows, so we can quickly check that everything works. In model.py we have defined the model; everything there was done previously, so take a look at the BERT sentiment training video if you want more details. engine.py is still empty, so let's fill it in; it shouldn't have been empty. dataset.py generates the ids, mask, token type ids and targets required for training the BERT model. In config.py we change a few things: the training file is now called train.csv, and since we don't keep a local copy of bert-base-uncased, we will simply download it inside the Docker container. That means every time we train the model it downloads bert-base-uncased again, which is fine for this video; I'll show you a better option later. Then we have app.py, which is used for deploying the model; I'm removing the memory cache and predict-from-cache parts from it because we don't need them, and everything else stays the same.

Now we are going to see how to train this using Docker, but before that, let's train it normally on my local system. In my terminal, with Python 3.7.6, I run python train.py and see if it trains. It starts by downloading bert-base-uncased; locally this happens only once and takes a few seconds, depending on the size of the model (a large model takes longer). Then it starts training, and it seems to be working, so I'm going to stop it. Let me also try training it on a different GPU: I have two GPUs, a Titan RTX and an RTX 2080 Ti, and if I train on the 2080 Ti my screen recording doesn't get stuck. This time it does not download the model, because it is already cached, and training works fine.

We can also train on CPU, although with a BERT model that is going to take forever. At first it refuses to train on CPU because I have the CUDA device hard-coded, so I move the device into the config as config.DEVICE; engine.py already takes a device parameter and the device is not defined anywhere else, so that's the only place to change. Now I can set the device to "cpu", and I also set CUDA_VISIBLE_DEVICES to an empty string; when you do that, the Python program assumes you don't have any CUDA devices on your machine (a small sketch of this follows below).
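For reference, this is roughly what forcing CPU training (or picking a specific card) looks like from the shell; config.DEVICE is just how I have my config set up, your naming may differ:

# hide all GPUs from PyTorch so nothing can accidentally run on CUDA
CUDA_VISIBLE_DEVICES="" python train.py

# or, with GPUs visible, pick only the second card (index 1)
CUDA_VISIBLE_DEVICES=1 python train.py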
Training this way is very slow, but it is training, and that's all we need for now.

Now, to dockerize your application, any kind of application, you first need to install Docker, and you can do that on any machine: Ubuntu, Windows or macOS. It's very easy; go to Google and search for "docker install". For example, "Install Docker Engine on Ubuntu" shows you which platforms are supported, so just find the Docker Engine install for whatever platform you're using: Linux, macOS, Windows 10. Then you can dockerize any application. Dockerizing means creating a container for your application, so that anybody can run your code anywhere and train your models exactly the way you trained them. You often hear from developers, "it works on my machine", but then it doesn't work on anybody else's machine; Docker was made so that it works everywhere.

The first thing we do is create a file called Dockerfile, which is just an empty text file for now. The first instruction we need is FROM: it takes a Docker image that has already been built, from Docker Hub. Go to Docker Hub and search for ubuntu; you get a result for ubuntu and, inside it, the different tags: ubuntu:latest, ubuntu:xenial and so on. Let's use ubuntu:18.04, so the first line is FROM ubuntu:18.04. That says that inside your Docker container you are using Ubuntu 18.04. That's all for now.

What else do we need? We need another file called requirements.txt, which lists all the requirements of your machine learning project. What are we using? transformers, torch, tqdm, flask, pandas to read the file, numpy and scikit-learn; we are not using joblib, so I remove it. All of these go into requirements.txt. It's a little time consuming in the beginning, but it makes things easier later. I also want to pin the versions I have been using, which is easy to check: go to the terminal, run pip freeze, and grep for the library. That gives me scikit-learn==0.23.1, and similarly I pin the versions of all the other libraries: flask, numpy, pandas, torch, tqdm, transformers and scikit-learn, all used only for this specific project. Once you have requirements.txt, pip install -r requirements.txt installs everything from it (a sketch of the file is below).
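This is roughly what the version check and the pinned requirements.txt look like; only the scikit-learn version is read out in the video, the other version numbers here are placeholders you would fill in from your own pip freeze output:

# find the locally installed version of a library
pip freeze | grep scikit-learn     # -> scikit-learn==0.23.1

# requirements.txt (versions are placeholders except scikit-learn)
flask==1.1.2
numpy==1.18.5
pandas==1.0.5
scikit-learn==0.23.1
torch==1.5.1
tqdm==4.47.0
transformers==3.0.2

# install everything it lists
pip install -r requirements.txt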
I already have everything installed on my machine, so I don't need to run that locally, but we do need requirements.txt for the Dockerfile, and that's why we created it. Ubuntu 18.04 uses Python 3.6 if I'm not wrong, but we can also check that.

So far the Dockerfile only has FROM ubuntu:18.04. The second instruction you must know is RUN: it runs any command, just as you would run it on Ubuntu. For example, I could do apt install libjpeg if I were handling image data, or install something else. I do want htop, so I can see how resources are being used inside the container, so I add a RUN line that installs it. Maybe that's all I need for now; I will probably need more things later, but let's say for now I don't.

So my Dockerfile consists of these two instructions, and now let's try to build an image out of it with the docker build command. It goes like this: docker build, then you specify the Dockerfile with -f Dockerfile, then the target image name with -t, let's say docker_tutorial, and then the build context, which is a dot here because the Dockerfile is in the current directory. Running it gives "unable to locate package htop", so we need to update the package lists first: RUN apt update && apt install htop. I also missed -y, so I add that, run the build again, and now it goes through.

Now that the image is built, we can go inside the container. For that you need the docker run command: docker run -ti, then the name of the image, docker_tutorial in our case, and then the command to run, which will be /bin/bash. So docker run -ti docker_tutorial /bin/bash, and now we are inside the container. We are the root user, and htop is there because we installed it; you can see that nothing is running inside the container. It's not my local machine anymore, but it is using the resources of my local machine. We started this tutorial to train something inside this container, so let's see which Python version it has: it has none, because we didn't install any.

One thing you must remember is that Docker works in layers: every instruction is a layer, and that's why you should not have a lot of separate instructions. So instead of adding a new RUN line, I extend the existing one: RUN apt-get update && apt install -y htop python-dev. We rebuild with docker build -f Dockerfile -t docker_tutorial . and it starts installing Python 2.7; I should have used python3-dev instead, so I change that and build the image once more (the Dockerfile and the build and run commands at this point are sketched below).
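Putting the pieces described so far together, the Dockerfile and the build/run commands look roughly like this; docker_tutorial is just the image name chosen in the video, written here with an underscore:

# Dockerfile, first version
FROM ubuntu:18.04
RUN apt-get update && apt install -y htop python3-dev

# build the image from the Dockerfile in the current directory
docker build -f Dockerfile -t docker_tutorial .

# open an interactive shell inside a container made from the image
docker run -ti docker_tutorial /bin/bash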
The build finishes and successfully tags docker_tutorial:latest, the latest version of this image. Now we run docker run -ti docker_tutorial /bin/bash again, and this time it does have Python, except the command is python3, and it gives us Python 3.6.9, which is good enough for us.

We can also install conda inside this Docker container and create any environment we want, so let's try that. To install conda you need to learn a few more things. First of all, there are different installers for Miniconda (we are going to use Miniconda instead of Anaconda), and there is a Linux installer, so I copy its link and go back to the Dockerfile. How do we get this file? We can RUN a command that calls wget with the location of the file. But that's not all; there are a few more steps to install Miniconda. We create a new folder with mkdir; we are always working as root, so it's /root/.conda, and Miniconda will be installed there. Then we run sh with the installer file name and -b, where -b is the silent install, so it doesn't offer any path modifications to your bashrc file (if you don't know about bashrc, now you have to google bashrc). Then rm -f the installer, because we have already installed it and don't need the file anymore. One more thing to remember: you have to create a conda environment, so conda create -y -n ml python=3.7, where -y answers yes automatically, -n names the environment, I call it ml, and Python is 3.7, not 2.7.

Let's see if this works and build the image again. First error: wget is not found, because you have to install it with apt, so I add it to the apt install line. Then it complains again; it is using the cache, which it shouldn't, and it turns out I simply hadn't saved the file. Once saved, it seems to be working: it installs some things, and we wait a little bit; it doesn't take much time. When it finishes, it gives an error: conda not found. That's because you have to add conda's install location to the PATH environment variable, and for that you have to learn two new Dockerfile instructions, ENV and ARG (the Miniconda section of the Dockerfile is sketched below).
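Here is a sketch of the Miniconda part of the Dockerfile as described; the exact installer file name changes over time, so treat it as a placeholder, /root/miniconda3 is where the silent install puts Miniconda by default, and note that wget has been added to the apt install line, which is the fix for the "wget not found" error:

# make conda visible both during the build (ARG) and at run time (ENV)
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"

RUN apt-get update && apt install -y htop python3-dev wget

# download and silently install Miniconda, then remove the installer
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && mkdir /root/.conda \
    && sh Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh

# create the environment we will train in
RUN conda create -y -n ml python=3.7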
Let's look at what ENV and ARG mean. The Docker documentation is quite extensive, so everything you need to learn is there. To make new software easy to run, you can use ENV to update the PATH environment variable for the software your container installs: you install something new and you update PATH, either the way the docs show or the way I have done it, which is the same thing. Now let's search for ARG. After a little googling we find that ARG is only available during the build of a Docker image, not after the image is created and containers are started from it, and that you can use ARG values to set ENV values to work around that; ENV values are available to containers, but also to RUN-style commands during the build, starting from the line where they are introduced. That's the difference between ARG and ENV, and we will be using both.

Now we build the image again and see what happens. We have probably made a mistake somewhere: /root/miniconda3/bin goes into PATH for both ENV and ARG, and that part looks fine, so let me save it again and try the same thing with --no-cache, which forces it to rebuild everything. Now apt is not found, so something is wrong, and it turns out to be a major blunder on my part: when changing PATH I had clobbered the existing value. After fixing that it works: it installs everything, and at the end, as you can see, it creates the conda environment that was failing before, a Python 3.7.7 environment. There we go, the image is built.

Now I do /bin/bash again to go inside the container, and I have the conda command. If I run python I get Python 3.8.3, which comes from the Miniconda base, but if I do conda activate ml it gives me an error. What you can do instead is source activate ml, and that takes me into the ml environment; it's so cool, and here I have Python 3.7. Now I can do everything I have been doing on my local machine.

The next step is another RUN instruction: source activate ml && pip install -r requirements.txt, which will install everything for me. But before that we need to do something more: copy our code. To copy the code from the local machine into the image we use the COPY instruction: COPY . /src/ copies everything we have locally into a folder called /src (I hope I don't need to create that folder first, and I hope it works). Then I add cd src to the RUN line, so it becomes cd src && source activate ml && pip install -r requirements.txt, which should install all the requirements after activating the ml environment. Let's try; I can remove --no-cache because I don't really need it, so some of the steps are cached and it runs very fast.

One more thing is missing: you cannot use source directly like this, because RUN uses /bin/sh by default, so you have to wrap the command in /bin/bash -c "...". The first attempt returns a non-zero status, and that's because I typed -v instead of -c. With that fixed it works: it copies the code, including requirements.txt, and installs everything inside requirements.txt, so now we wait for it to finish. When it's done, we go back into the container with the /bin/bash command, and we must have a /src folder; we do, and it has everything in it, even the data file (it's not usually advisable to put the data file inside the Docker container, but we have everything here for now). The copy and install lines are sketched below.
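A sketch of the copy-and-install part of the Dockerfile, wrapped in bash so that source works; the /src folder name is just what the video uses:

# copy the project (code, requirements.txt, and in this case even the data) into /src
COPY . /src/

# install the requirements inside the ml environment
RUN /bin/bash -c "cd src && source activate ml && pip install -r requirements.txt"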
Now, inside the container, I run source activate ml, so I'm in the machine learning environment, and I can just do python train.py and see what happens. It downloads bert-base-uncased like before and then starts training; training has started and would finish after about an hour, because we are training on CPU. So you have created a Docker container and you are able to train your model in it.

You can also do it without entering the container. One thing you need to remember about containers is that once you exit, all files are gone; they are not saved. So instead of opening a shell, you can do /bin/bash -c again and put the whole thing in there: source activate ml && cd src && python train.py. It runs the same commands you ran after going into the container, so this single docker run command is all you need to train your machine learning model. But there is one problem: it's not going to save the model. The downloading part is fine, but once you exit the container, everything is gone. So let me stop this, go back, and see what else we can do.

I go one level up and create a new folder called docker_data. Then, back in the docker_tutorial folder, I change the config: the model will be saved in /root/docker_data, and let's say the training file lives in the same folder, so I move it there with mv train.csv ../docker_data/. Now the training file is gone from the project folder; it doesn't exist there anymore. Since we changed the code, we have to build the image again. I'm also reducing the max length to 64 because it's faster; everything else is the same, it still trains on CPU and still downloads the model. Instead of downloading, you could also change the BERT path to /root/docker_data and keep your bert-base-uncased model there, but we haven't done that, so let it download; it doesn't take much time, and while it's not great for the servers, it's okay for this tutorial. We rebuild the image, so everything runs again, including installing the requirements, which takes a while. Okay, it's done; we have rebuilt the image.

Now we run the previous command again, but we add something new: a -v argument. On one side I specify the local path, something like /home/<user>/workspace/docker_data, and on the other side the path it should be mounted on inside the container, /root/docker_data. My first try with a relative path, ./docker_data, fails with "invalid mount"; Docker says the path must be absolute, so I use the full local path (the final command is sketched below).
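The one-shot training command with the data folder mounted looks roughly like this; the local path is a placeholder for wherever your docker_data folder actually lives:

# mount the local docker_data folder into the container and train in one shot
docker run -v /home/<user>/workspace/docker_data:/root/docker_data \
    -ti docker_tutorial \
    /bin/bash -c "cd src && source activate ml && python train.py"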
So -v mounts a volume from your local machine into your Docker container. Now when I go inside the container, I have a folder called /root/docker_data, and when I look inside it, I have train.csv. If I modify train.csv here, it is also modified on my local machine, and everything I do inside this container shows up outside: if I do touch new_file.txt, I have a new file inside the container, and the same new_file.txt appears on my local machine, created from inside the container. You can actually also mount your code folder into the Docker container and do all the development inside, without having to rebuild the image again and again.

Now we can just run the previous command. Let me clear the terminal. Our previous command was the one where we activate the ml environment and then train the model, and it says /root/docker_data/train.csv does not exist; we made a mistake and didn't mount the volume. With the mount added, the command is docker run with -v, then /bin/bash -c "cd src && source activate ml && python train.py", and now it trains the model. It will still download everything, because we didn't pre-download the pretrained model; you can pre-download all the pretrained models you want to use, put them inside this docker_data folder and mount it, and you can also mount multiple volumes the same way. Everything else remains the same: it starts training and it will also save the model into this folder, so my model ends up in the docker_data folder that I have locally. It's going a little faster now because we reduced the max length of the sentences we train on.

Let's stop it, because it's still training everything on CPU, and change the device to cuda. Before that, let's go inside the container, run bash, and run nvidia-smi to see what happens: we don't have anything, and that's a major problem, because we do have GPUs on the local system, but the container doesn't see them, and it also doesn't have CUDA or cuDNN available. To fix this we need to install the NVIDIA Docker runtime. Google "nvidia docker runtime" and you will come across the nvidia-docker GitHub repository; go there. Unfortunately it is not available for Windows, so I'm sorry, Windows users; but you don't have to use Docker to train your models anyway. You can train your model anywhere you want, check the binary and the weights file, and then use Docker to actually deploy your model. If you're on Ubuntu, you can install the NVIDIA runtime using the commands in that repository, which also means you have to install the GPU drivers on your own, because this is only the runtime. You only need to do these steps once, and then it's always available.
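Once the runtime is installed, a quick way to check that containers can see your GPUs is to run nvidia-smi inside one; the nvidia/cuda image tag below is just an example, the flag that matters is --gpus:

# verify the NVIDIA runtime: the GPUs on the host should be listed
docker run --rm --gpus all nvidia/cuda:10.2-base nvidia-smi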
On my local machine we have NVIDIA GPUs, so I take the docker run command and add --gpus 1. I already have the NVIDIA runtime installed, and now the container detects a GPU. Maybe I just want to use the second GPU: if I set CUDA_VISIBLE_DEVICES=1 and run nvidia-smi I'm still getting the Titan RTX, so it won't work like that with only one GPU exposed. Instead I can try --gpus 2, and then I have both GPUs inside the container and can specify which one I want to use and just use that one. You can see there is no process running, but some memory is in use, because that's being used by my local machine. You can also do --gpus all, and then it uses all your GPUs; I only have two on this machine.

Now I go back (I've already changed the device to cuda) and I train using the same command, just adding --gpus 1 to use one of my two GPUs. It uses the Titan RTX, so my screen recording is going to stutter, but you can see that training has started, and my model path is /root/docker_data/model.bin, so it's going to save the model there. I'll wait for one epoch to see whether it saves the model or not.

Maybe I can reduce the number of samples so it's much faster, but that would normally mean rebuilding the image, so let's try to avoid that. I change train.py to use only 600 rows, because that's faster, and then I change the command a bit: instead of cd src it will cd into /root/code, activate the environment there and run training, and I add one more mount, -v, from the absolute path of this docker_tutorial folder I'm working in (I have to type a lot) to /root/code. So I haven't changed anything inside the image; it still has the old code, but I'm mounting the current code folder into /root/code, going there, activating the environment and running python train.py. Let's see if this works, and it does. This is one of the tricks you can use: you create the image once, you don't need to rebuild it again and again, you just make changes in the code and those changes are reflected instantly inside the Docker container. This is how you can use it for development purposes (the full command is sketched below). Now we wait a few minutes for it to finish one epoch, so that I can show you how to deploy.
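Putting the GPU flag and both mounts together, the development-style training command looks roughly like this; again the local paths are placeholders for wherever docker_data and the docker_tutorial code actually live on your machine:

# mount data and code, give the container one GPU, and train with the live code
docker run --gpus 1 \
    -v /home/<user>/workspace/docker_data:/root/docker_data \
    -v /home/<user>/workspace/docker_tutorial:/root/code \
    -ti docker_tutorial \
    /bin/bash -c "cd /root/code && source activate ml && python train.py"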
Okay, it's complaining about something; is it apex? Let me check. No, this is a genuine error that I didn't get before, and it's happening because the container doesn't have CUDA and cuDNN installed. You need CUDA and cuDNN inside your Docker container in order to train a deep learning model on GPU, so let's go back to Docker Hub and search for nvidia/cuda. Among the tags there is a CUDA 10.2 runtime image for Ubuntu 18.04, with cuDNN 8 and also with cuDNN 7, so let's use the 10.2-cudnn7-runtime-ubuntu18.04 one and copy its name. Now we have to change the base image: first let me comment out FROM ubuntu:18.04 (you can use # to comment inside a Dockerfile), then use this nvidia/cuda image as the base image, and then obviously we need to rebuild. So I exit the container and run docker build again, because we changed something in the Dockerfile, and it installs all the packages again, which takes a few minutes.

That's done, and now we can run our training command again, except instead of train.py we run app.py. It's going to download the pretrained model again; obviously you don't need that, you can save it locally and mount it using the -v parameter. I still have --gpus 1, and if you go back to the code, the device in the config file is cuda, so everything is in place and you don't need to care about anything else. After it finishes downloading, it starts the Flask API on host 0.0.0.0. Let me copy the address and open it in a new tab: it doesn't open, because that port only exists inside the Docker container. To bring it outside, we just need to add one more parameter to this command (these commands might look huge, but they are not really): the argument is -p, and with it you say something like "I want port 7000 on my local machine to map to port 9999", which is the port defined in app.py. So it loads everything, starts on port 9999 as before, but I access it on port 7000, and when I go there I get "Not Found", which means my API is actually working: it's no longer "unable to connect", it just tells me that this particular URL doesn't exist. So instead of opening it in a browser, let's access it from another terminal. We send a request to port 7000, and it's the same sentiment API we have already seen in previous videos: I'm getting a response, and I don't care right now whether it's positive or negative.

Then, what you can do is change your device to cpu, remove the volume mount, and hand the model and the code inside this Docker container to someone, and they can just run it on their own. You can also upload your Docker image to Docker Hub, so they don't have to build it again and again; it's always going to be there with all your code. And as far as these long commands are concerned, if they seem huge, just save them in a bash script or create a Makefile out of them; that's very useful: make build to build your container, make run to run it, make train, these kinds of things, and it makes your life much easier (the serving command is sketched below).
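For reference, the base-image change and the serving command look roughly like this; port 9999 is the Flask port used by app.py in the video, 7000 is an arbitrary local port, the local path is a placeholder, and the curl route and parameter at the end are only an assumed illustration, since app.py's exact endpoint isn't shown here:

# Dockerfile: swap the base image for one that ships CUDA and cuDNN
# FROM ubuntu:18.04
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04

# serve the model: map local port 7000 to the Flask port 9999 inside the container
docker run --gpus 1 -p 7000:9999 \
    -v /home/<user>/workspace/docker_data:/root/docker_data \
    -ti docker_tutorial \
    /bin/bash -c "cd src && source activate ml && python app.py"

# hit the endpoint from another terminal (route and parameter are assumptions)
curl "http://localhost:7000/predict?sentence=this+movie+was+great"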
So this is it for today's video. I have shown you how you can use Docker to train your model, and as a data scientist you don't need to learn a lot of Docker, but it's always good to learn something new. You can use ubuntu:18.04, you can use Alpine Linux, you can use the Python slim images; there are so many variations available, just go to Docker Hub and look. If you liked the video, click the like button, subscribe to my channel and share it with your friends. Tell me in the comments how it was and what you want to learn next. Thanks a lot, bye.
Info
Channel: Abhishek Thakur
Views: 58,570
Keywords: machine learning, deep learning, artificial intelligence, kaggle, abhishek thakur, docker, docker data science, train model on docker, nvidia docker runtime, how to use docker to train a model, docker flask, serve deep learning model using docker and flask, docker api, machine learning docker, how to train a machine learning model using docker, data scientist docker, docker for data scientists, using docker to train a machine learning model, using docker for machine learning
Id: 0qG_0CPQhpg
Length: 57min 9sec (3429 seconds)
Published: Fri Aug 21 2020