Lecture: Principal Component Analysis (PCA)

Video Statistics and Information

Captions
Last time, in the first lecture on the singular value decomposition (we have three lectures on it), I showed how to construct it: you can take any matrix and decompose it into three matrices, and you can think of those matrices as taking a data set and rotating it, stretching it, and rotating it again. Moreover, it is a theorem that a singular value decomposition exists for any matrix. In this second lecture I want to start focusing on what the singular value decomposition is good for, and to build toward the third lecture, where we will do what are called eigenfaces: principal components for face recognition, a method for recognizing faces. We'll talk about that next time; today I want to give you an intuitive feel for what this method does in allowing you to analyze data.

I've titled this lecture principal component analysis, often called PCA, a term you'll hear dropped in conversation once in a while. With slight modifications it is also known as the proper orthogonal decomposition, the Karhunen-Loève expansion, the Hotelling transform, and empirical orthogonal functions. These are all different names for what is essentially the same thing, and that thing is going to be essentially just the singular value decomposition.

Let me motivate what we're doing today with a physical example, one you already know the solution to. Consider the following setup: there is a rod sticking out, a spring is suspended from the rod, and the spring is attached to a mass. What I'm going to do is pull the mass down and let it go. The spring has some spring constant k, and if you just release the mass it oscillates up and down; of course I could also swing it, but for now let's constrain it so that I release it exactly vertically and it just oscillates. To measure what's going on, I might record some function f(t) that gives the displacement of the mass as it oscillates.

Now, what do we know about this system? It is governed by force equals mass times acceleration, F = ma, Newton's very important law. Newton also gave us calculus: acceleration is the derivative of velocity, velocity is the derivative of position, so acceleration is really the second derivative of the position, which in this case is f. What forces act on the mass? Gravity, mg, acting in the negative direction, and the spring's restoring force.
Gravity sets the equilibrium position; if I displace the mass downward, the spring pulls it back up, and if I push it up, the compressed spring pushes it back down. Using what's called Hooke's law, the restoring force is proportional to the displacement, so the governing equation is a simple second-order differential equation you could solve, and we know the solution. By the way, as soon as I draw this picture you already know how to choose a coordinate system: clearly the z direction is up and the x-y plane is horizontal, and the solution is something that oscillates in time in the z direction. In any beginning physics class you work with these mass-spring systems, so there is your setup and there is your solution. In this case it's so simple you know the answer: release the mass and it oscillates. Of course in reality there is damping, so eventually it would stop, but let's consider the ideal system: no damping, no friction, no noise, released exactly vertically, so it just oscillates in the z direction. It is a one-degree-of-freedom system. That's the basic, well-known setup.
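For concreteness, here is a minimal MATLAB sketch of that ideal motion; the mass, spring constant, and release amplitude are made-up illustrative values, not numbers from the lecture, and the displacement is measured about the equilibrium so gravity drops out.

    % Ideal spring-mass oscillation about equilibrium: m*z'' = -k*z,
    % so z(t) = A*cos(sqrt(k/m)*t) when released from rest at amplitude A.
    m = 1.0;  k = 4.0;  A = 0.5;          % illustrative values only
    t = linspace(0, 20, 500);             % time stamps (snapshots)
    z = A*cos(sqrt(k/m)*t);               % the one degree of freedom
    plot(t, z), xlabel('t'), ylabel('z(t)')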
Now let's take a very different viewpoint of this problem: a data-driven approach. What I mean by that is, suppose I didn't know F = ma; in fact, suppose cameras had been invented before F = ma. Here is the question to ask: from data alone, could I have inferred what this system is doing? In other words, I'm going to place probes in the system, watch it evolve, and try to figure out, first of all, how complex the system is. So I put measurements on it, in particular cameras: camera one here, camera two over there, camera three somewhere else. I set the system going at a given time and film it. My objective is to figure out, just from the data itself, whether I can say something about the underlying system. Remember, I already know the solution, but in many cases you don't know the physics underneath; all you have is a system you can take measurements on, and you're trying to figure out what is going on inside it that produces those measurements. We're going to do that here, knowing that in fact we already know the governing equations.

So what do we do? We take snapshots at a sequence of times. From camera one, I see the weight going up and down in the camera's field of view, and in the frame of reference of that camera there is a certain x direction and a y direction. Every time I take a snapshot I record the position of the mass in the x and y directions of that camera's frame, and I collect those values: the x location and the y location in the reference frame of camera one. I do the same thing for camera two and camera three. So I have three sets of recordings: the pair (x_a, y_a), the position of the mass in the frame of camera one; (x_b, y_b), the position in the frame of camera two; and (x_c, y_c), the position in the frame of camera three. That is all my data. Notice that at this point there are no privileged x-y-z coordinates: of course I can look at the apparatus and say z is the up direction, but that's making use of the fact that I know the answer. Suppose you just have the measurements, and you're trying to figure out what is in there in the first place, what it is doing, and what the dimensionality of the dynamics is. We already know it's one-dimensional; the question is whether we can reconstruct that directly from the data.

So I'm going to build a data matrix X: in the first row I put x_a, in the second row y_a, then x_b, y_b, x_c, y_c. Six rows, all my data arranged in one matrix. Now we have to address two fundamental issues with this data: noise and what's called redundancy. The cameras are not perfect, so there is some noise in your pixels and in your images; not only that, maybe the person holding the camera is shaking it a little, which introduces more noise. That means you are no longer looking at just the data but at the data with noise on top of it, and if there is too much noise you won't be able to make sense of the data at all; hopefully there is a way to control that. The bigger thing I want to focus on is redundancy. Camera one, camera two, and camera three each record an (x, y) pair, and I can ask: are these measurements independent of each other? When a camera films the mass, does its x coordinate behave independently of its y coordinate? Right away you know that's not true: the mass is on a trajectory, and the x and y you see are intimately tied together. Moreover, if I record from here and from over there, those recordings give me basically the same information from different angles; they are related, especially when the motion is just this thing going up and down. In fact, if I release it straight up and down, the system has exactly one degree of freedom, yet I have six sets of data.
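To make that concrete, here is a hedged sketch of what such a data matrix might look like for synthetic measurements: each hypothetical camera sees the same one-dimensional motion projected onto its own image axes, plus a little noise. The camera orientations and noise level are invented purely for illustration.

    t  = linspace(0, 20, 500);  z = 0.5*cos(2*t);   % the hidden one-degree-of-freedom motion
    th = [0.3 1.1 2.0];                             % made-up camera orientations (radians)
    d  = [cos(th); sin(th)];                        % how camera j's (x, y) axes see the motion
    X  = d(:)*z + 0.02*randn(6, numel(t));          % rows: x_a, y_a, x_b, y_b, x_c, y_c

Every row is just a scaled copy of the same underlying signal z plus noise, which is exactly the redundancy being described.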
All I really needed was one set of data to get everything. If I knew how to line up a single camera, so that the z direction of the motion is one of the camera's axes and the camera is held level in the x-y plane, then all I would need is the z-direction dynamics and that's it: one degree of freedom. The trouble is that with real data you normally don't know how to take that perfect picture ahead of time. What principal component analysis is going to tell you is: you don't need all these cameras; one camera at the right angle would give you the whole thing. That's what we're aiming for, but we're not there yet; we need the methodology to get there.

So let me introduce some concepts: variance and covariance. These are statistical ideas. There are two key statistical properties people usually talk about: the mean and the variance. If you think of a Gaussian distribution of values, a bell curve, the mean is the center, and the variance tells you how fat the curve is, in other words how spread out the distribution of values is.

How do we compute these things? Consider two vectors, a = (a_1, a_2, ..., a_n) and b, and think of a and b as two collections of data. What I would really like to understand is whether there is a relationship between a and b: are these two data sets statistically independent, or are they related, so that looking at a tells me something about b, or vice versa? That is what variance and covariance give you: covariance tells you how the two are related, while variance tells you something simpler. For the moment, assume the means of a and b are zero. Then the variance of a is defined as sigma_a^2 = (1/(n-1)) a a^T, where n is the total number of points and a a^T is the inner product of the row vector a with its transpose, which essentially measures the length of the vector. If that inner product is big, there is a lot of variance; if it is small, there is very little. The same formula gives the variance of b. Here is an example of high versus low variance. Suppose my signal is a sine wave; its mean is zero, and the variance is related to how big the excursions are. A sine wave with big oscillations has a large variance; a small sine wave has a small variance. Remember, the variance tells me how fat my distribution of values is.
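A quick numerical check of that formula, with arbitrarily chosen amplitudes; the signals are sampled over whole periods so their means are essentially zero, as the formula assumes.

    n = 1000;  t = linspace(0, 10*pi, n);
    a = 5*sin(t);                        % big excursions
    b = 0.1*sin(t);                      % small excursions
    var_a = a*a'/(n-1)                   % large variance
    var_b = b*b'/(n-1)                   % nearly zero variance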
If I put those values on a bell curve, the big-amplitude signal has values that are very spread out, while for the small one the variance is almost zero; there is hardly any oscillation at all. So that's the intuitive concept of variance: it tells you how large the changes are in the data in a, or in the data in b, and it will play a key role in the singular value decomposition.

The more important quantity I want to focus on is the covariance, which measures the statistical relationship between the data in a and the data in b. It is again an inner product, this time between the two vectors: sigma_ab^2 = (1/(n-1)) a b^T, and it gives you a score. Here is what's interesting about that score. Suppose a and b are both mean zero and have length one. Then this is simply a dot product, and it tells you how much of a and b point in the same direction. If a and b point in exactly the same direction, in other words a equals b, the score is 1, the maximum covariance you can get. If a and b point in really different directions, and in particular if the inner product of a with b is zero, then a and b essentially have nothing to do with each other: the data in a is orthogonal to the data in b. You can picture a pointing one way and b pointing another; the dot product tells you how much of b lies along a, the projection of one onto the other. So when the covariance is zero, a and b are thought of as statistically independent; when a and b point in the same direction, one sitting on top of the other, they are the same vector and completely statistically dependent. More generally you get something in between: a and b are neither orthogonal nor on top of each other, they have some overlap, and the measure of how much statistical dependence there is between them is given by this covariance.
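A small sketch of that score for aligned versus orthogonal signals; again the signals are invented for illustration and sampled over whole periods so the zero-mean assumption holds.

    n = 1000;  t = linspace(0, 10*pi, n);
    a = sin(t);                          % row vectors, mean approximately zero
    b = sin(t);                          % same direction as a
    c = cos(t);                          % orthogonal to a over whole periods
    cov_ab = a*b'/(n-1)                  % large: a and b are redundant copies of each other
    cov_ac = a*c'/(n-1)                  % nearly zero: a and c are statistically independent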
Now how does this pertain to what we've been talking about? Come back to the experiment. What I really want to understand, because I already know it to be true, is that when three different cameras film the same simple up-and-down motion, what I see from camera one, camera two, and camera three is not statistically independent: the recordings have covariance. What happens here is directly related to what happens there; all I've done is film the same thing in three different frames of reference, so a lot of the data is redundant because so much of it is correlated. That is the idea of redundancy, and in systems where I have a lot of data, I would like to remove the redundant information. I know this is a one-degree-of-freedom system, but I've recorded six degrees of freedom of information; I'd like to reduce six back down to one in some self-consistent fashion. In other words, five of those degrees of freedom are redundant and I have to remove them.

Here is the formal way to start thinking about it. The data vectors a and b have variance scores, which tell you how vigorously the data is changing in each of those directions, and a covariance score, which tells you how the two sets of data are correlated with each other. But here we don't have just two vectors of data; we have six, and in general you might have many. So we construct what's called the covariance matrix,

    C_X = (1/(n-1)) X X^T.

This generalizes from vectors to matrices: it computes the covariance among all pairs of data vectors, so the result is a matrix. The 1/(n-1) factor provides what's called an unbiased estimator; it's just a scaling factor. The important part is the data matrix times its transpose. The entry in the first row and first column pairs the first vector with itself, which gives its variance; the entry in the first row and second column gives the covariance between the first data set and the second data set, and so on for every pair.

What does this look like for our data? I have x_a, y_a, x_b, y_b, x_c, y_c, six sets of data, and I want to calculate how each one correlates with itself and with every other one. With six measurements, C_X is a 6-by-6 matrix. The first row is

    sigma^2_{x_a x_a},  sigma^2_{x_a y_a},  sigma^2_{x_a x_b},  sigma^2_{x_a y_b},  sigma^2_{x_a x_c},  sigma^2_{x_a y_c}.

So in that first row I have one variance score, the variance of x_a, the inner product of x_a with itself, which tells me how much the data is changing in the x_a direction; and then the inner product of x_a with y_a, a score for how x_a and y_a are related, then how x_a and x_b are related, how x_a and y_b are related, and so forth.
The second row is the y_a row: sigma^2_{y_a x_a}, sigma^2_{y_a y_a}, sigma^2_{y_a x_b}, and so forth. The third row is the x_b row: sigma^2_{x_b x_a}, sigma^2_{x_b y_a}, then on the diagonal sigma^2_{x_b x_b}, and so on down the matrix. That is the structure of the 6-by-6 matrix.

I want to point out a couple of things. Look at the diagonal terms: they are the variance of x_a, the variance of y_a, the variance of x_b, and so forth. So on the diagonal of the covariance matrix you have the variance measures. On the off-diagonals you have all the covariance scores: sigma^2_{x_a y_a} tells you how x_a and y_a are correlated, then x_a with x_b, x_a with y_b, and so forth. Notice also that sigma^2_{x_a y_a} is the same as sigma^2_{y_a x_a}, so the matrix is symmetric: wherever you are on one side, the entry across the diagonal is the same. Other ways to say symmetric are self-adjoint or Hermitian; for our purposes they mean the same thing.

So the off-diagonal terms give you the covariance scores between all pairs, and this matrix is telling us something fundamental about our measurements: how each vector is changing, and how the vectors are related to each other. Let's go one step further. What does it mean to have small off-diagonal elements? Take sigma^2_{x_a y_b}, the covariance score between x_a and y_b. If it is very small, those two vectors are essentially orthogonal; they don't have much to do with each other, so they are essentially statistically independent. If it is big, or even moderate, then x_a and y_b have a large inner product, the two vectors share a lot, they are statistically dependent, and another way to say that is there is a lot of redundancy. So this is how you start identifying redundancy: the larger off-diagonal elements point to measurements that are largely redundant.

Now the diagonals, the variances. We're going to make an assumption here: a big variance score means that vector is changing a lot, there is a lot happening in that direction, whereas a small variance means not much is going on. The assumption is that the directions with big diagonal entries are the ones that matter, where the interesting dynamics is happening, and the directions with small ones aren't doing much. It's just an assumption, but we're going to use it.
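Here is a sketch of that 6-by-6 covariance matrix for the synthetic camera data built earlier (regenerated here so the snippet runs on its own; the angles and noise level are still invented). Note that MATLAB's built-in cov expects observations in rows and variables in columns, so the data matrix is transposed to get the same 6-by-6 result.

    t  = linspace(0, 20, 500);  z = 0.5*cos(2*t);
    th = [0.3 1.1 2.0];  d = [cos(th); sin(th)];
    X  = d(:)*z + 0.02*randn(6, numel(t));        % rows: x_a, y_a, x_b, y_b, x_c, y_c
    n  = size(X, 2);
    Xm = X - repmat(mean(X, 2), 1, n);            % subtract the mean of each row
    CX = Xm*Xm'/(n-1)                             % variances on the diagonal, covariances off it
    CX_builtin = cov(X.')                         % same matrix from the built-in function

Most of the off-diagonal entries come out far from zero, which is exactly the statement that these six measurements are highly redundant.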
This matrix already tells us a lot. By the way, if you want to compute it numerically, MATLAB has a built-in cov function: hand it your data (with observations in the rows, so you may need to transpose the data matrix you built) and it returns exactly this matrix, with the variances along the diagonal and the covariances off the diagonal.

Now, what we're interested in, once we have this matrix, is removing redundancy. All these off-diagonal covariances saying "that's big, that's big, that's big" mean the data sets are strongly related to each other. What I really want to do is what's called diagonalizing the system: I would like to change the basis I'm working in so that the covariance matrix becomes diagonal. I want to make this matrix DIAGONAL; that is the big goal, so big I'd write it on the board in capital letters, like I'm shouting it. What does it mean for the covariance matrix to be diagonal? If all the off-diagonal terms are zero, then in this new frame of reference the correlation between every pair of different measurements is zero, which means they are completely statistically independent: there is no redundancy. If I can put the covariance in diagonal form, with numbers on the diagonal and zeros everywhere else, I've removed all the redundancy from the data. What's left are the diagonal terms, and if I order the diagonal so the biggest entries come first and the smallest last, the biggest diagonal entries tell me where the strongest dynamics is, and the ones that are much smaller correspond to directions where not much is happening.

By the way, does that look familiar? A diagonal matrix ordered from biggest to smallest is starting to look like our SVD. In the data-management sense, all this is doing is saying: I have a bunch of measurements from a bunch of sensors and I don't know what they represent; there may be noise I'm not sure how to handle, but what I really want to find out is how big the underlying system is, how much of the data I collected is independent and how much is redundant. The way to check is to move to a frame of reference that makes the covariance matrix diagonal. Making it diagonal removes all redundancy, and then the diagonal entries tell you which directions, which variances, actually matter; anything near zero you can say isn't doing much and throw it away. After that full removal of redundancy, the only things left over are the things that matter, the things with large variance. That is our goal: diagonalization. It is a critical thing to do when you look at dynamics or any kind of system like this. You are already familiar with diagonalization from eigenvalues and eigenvectors, and I'm going to show you the connection to those, and then the connection to the SVD.
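As a reminder of what diagonalization by eigenvectors looks like, here is a tiny sketch on an arbitrary symmetric matrix (the numbers are made up): its eigenvectors are orthogonal, and in the eigenvector basis the matrix becomes diagonal.

    A = [4 1 0; 1 3 1; 0 1 2];        % any symmetric matrix (made-up numbers)
    [S, Lambda] = eig(A);             % columns of S are orthonormal eigenvectors
    S'*S                              % identity: S is orthogonal, so inv(S) = S'
    S'*A*S                            % diagonal matrix of eigenvalues (equals Lambda)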
So let's take a look at our data matrix X and start thinking about this diagonalization idea; X is the data matrix, and it is critical for everything we want to do. I'm going to compute the correlation structure, something that looks like X X^T; of course there is the scaling factor 1/(n-1), but the point is that this product tells me what's going on: the diagonals are the variances of each individual measurement, and the off-diagonals are the covariances, the correlations among pairs of measurements.

Now, you've already covered eigenvalues and eigenvectors, and this matrix X X^T, as I've said, is symmetric, or self-adjoint, which means it has real eigenvalues. For a symmetric matrix like that I can always do what's called an eigenvalue decomposition,

    X X^T = S Lambda S^{-1},

where S is the matrix whose columns are the eigenvectors of X X^T. Because the matrix is symmetric, the eigenvalues are real and the eigenvectors are orthogonal to each other, and from those orthogonal eigenvectors I build S. Since the eigenvectors are orthogonal, S is a unitary transformation, so its inverse is its conjugate transpose; in fact, since everything here is real, S^{-1} = S^T. Lambda is a diagonal matrix with the eigenvalues of X X^T on it. This decomposition is always guaranteed to hold for a real symmetric matrix, which is exactly what X X^T is, and it gives you these unitary transformations and a diagonal matrix of eigenvalues.

Here is what I'm going to do with it. When I took the data, I just picked some arbitrary coordinate system. What I'd really like to do is figure out the basis, the frame of reference, I should have used in the first place, so that instead of this covariance matrix with all those off-diagonal terms and a bunch of redundant measurements, I get a diagonal one. A simple way to do that is to define a new set of measurements, related to the old set by the matrix S:

    Y = S^T X,

where X is my original set of measurements; Y is just those measurements transformed into a new basis, a new frame of reference. Now let's calculate the covariance of Y:

    C_Y = (1/(n-1)) Y Y^T.

I'm just using the covariance formula we had before, but notice what I get: this is S^T X times (S^T X)^T.
When the transpose comes in, (S^T X)^T = X^T S, so

    C_Y = (1/(n-1)) S^T X X^T S.

But look, X X^T is exactly the matrix we just decomposed, so I can replace it: S^T X X^T S becomes S^T (S Lambda S^{-1}) S, and since S^{-1} is really S^T, the S^T S pairs on either side collapse to the identity. The whole thing comes down to

    C_Y = (1/(n-1)) Lambda,

a diagonal matrix. In other words, if you work in the basis Y, the covariance matrix is diagonal. What I did, effectively, is figure out the right way to look at the problem. What does this correspond to for the three cameras? When you do this for what is really a single-degree-of-freedom motion, there is only one nonzero variance score: the matrix has one number in the corner and zeros everywhere else. That means there is one direction that matters, the z direction, and that variance tells you how large the amplitude fluctuations of the mass are. I diagonalized, and it told me it is a single-degree-of-freedom system. That is the diagonalization process: take the data from the frame of reference you happened to measure in, and transform it so that all the redundancies have been removed.

That's how you do it with eigenvalues and eigenvectors; you can do the same thing with the singular value decomposition, which is another way to get there. This time write

    Y = U* X,

where U is the matrix from the singular value decomposition of the data matrix: I can take any data matrix X and decompose it, guaranteed, as X = U Sigma V*, and I'm taking the U out of that. Again, all this does is take my data and rotate it into a new frame of reference, but here is the nice thing about this particular frame. What is the covariance in it? It is

    C_Y = (1/(n-1)) Y Y^T,

and if you work it all out (it's right there in the notes), you get

    C_Y = (1/(n-1)) Sigma^2,

the middle matrix of your SVD, squared. So the columns of U are your principal component basis: take a data matrix, do the decomposition, and U itself transforms you into a frame of reference where the covariance matrix is diagonal. And notice the connection: Sigma^2 plays the same role as the eigenvalue matrix, or said another way, each eigenvalue lambda_i of X X^T is equal to sigma_i^2. That is the connection between singular values and eigenvalues.

We're going to continue with this in the third lecture, but here is the key idea. When I took the data from that spring-mass system, I just randomly picked spots, frames of reference, to take measurements in; these weren't the right places to measure. If I look at a spring-mass system, all I have to do is take a measurement along the z direction, but my cameras didn't know that ahead of time.
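Here is a sketch checking both routes on the synthetic camera data from before (regenerated so it runs on its own, with invented angles and noise): the eigenvector basis of X X^T and the U from the SVD of X each give an essentially diagonal covariance, one variance dominates because the motion has a single degree of freedom, and the eigenvalues of X X^T match the squared singular values.

    t  = linspace(0, 20, 500);  z = 0.5*cos(2*t);
    th = [0.3 1.1 2.0];  d = [cos(th); sin(th)];
    X  = d(:)*z + 0.02*randn(6, numel(t));
    n  = size(X, 2);
    X  = X - repmat(mean(X, 2), 1, n);             % remove the mean of each row

    % Route 1: eigen-decomposition of X*X'
    [S, Lambda] = eig(X*X');
    CY1 = (S'*X)*(S'*X)'/(n-1);                    % essentially diagonal

    % Route 2: SVD of the data matrix itself
    [U, Sig, V] = svd(X, 'econ');
    CY2 = (U'*X)*(U'*X)'/(n-1);                    % diagonal, with entries sigma_i^2/(n-1)

    % Connection lambda_i = sigma_i^2, and one dominant value = one degree of freedom
    [sort(diag(Lambda), 'descend'), diag(Sig).^2]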
What this tells you is: take your data, whatever it is, stack it up into a matrix, and do an SVD of it. The U matrix that comes out is a rotation, a transformation: transform your data by it, and in this new frame of reference everything is diagonalized. You've removed all the redundancy from the data, and you've even lined it up along the direction of the motion. That's what this machinery gives you, and we'll use it to great effect in the next lecture to do face recognition, the most important lecture of your life. The SVD is awesomeness, and we're going to hit at least one awesome application of it in the third lecture, but you should never forget this. Okay, that's it.
Info
Channel: AMATH 301
Views: 146,553
Id: a9jdQGybYmE
Length: 51min 13sec (3073 seconds)
Published: Fri Feb 19 2016