ME 565 Lecture 27: SVD Part 1

Video Statistics and Information

Captions
Okay, welcome back everyone. This is the last week of class. The second midterm / homework 6 should have been posted yesterday; you all have access to it and it's due at 4 o'clock on Friday. So this is the last week of homeworks, exams, and class, and this week we cover what is probably my favorite topic: data analysis and the singular value decomposition. This is just a tiny peek into the wide world of data analysis, but I want to tell you about some of my favorite methods and some of the really powerful things you can do with very, very simple mathematical tools.

Today we're going to talk about the singular value decomposition, the SVD. The SVD is probably the most important algorithm of the last hundred years, or maybe the second most important after the fast Fourier transform, and it is revolutionizing how we deal with large data sets. It is the basis for how Google determines which pages to rank first, how Amazon determines which products it thinks you're going to buy, how Netflix builds a profile of your movie tastes, and the list goes on and on. Especially in Seattle, San Francisco, and New York City, the singular value decomposition is becoming the most important algorithm for making money after grad school. So the SVD is awesome: it's a very simple matrix decomposition, and a very useful one. How many of you have heard of the SVD before in some other context? Okay, good, so some of you have heard of it.

With the singular value decomposition we're going to be able to do things like build facial recognition software; that's something we'll do on Friday. We can start to look at the genetic markers that are characteristic of healthy versus sick patients: one of the examples we'll do, probably tomorrow or Wednesday, is look at the genes for ovarian cancer and try to determine which genes are or are not responsible for it. The singular value decomposition turns out to be very useful for that. We can also do more pragmatic, simpler things, like figure out what factors affect housing prices, or what mixture of ingredients determines the heat production of a cement mixture, and so on. The singular value decomposition allows you to take training data from past experience, build a model, and extrapolate what other situations would look like. This is the basis for machine learning, which a lot of you are going to be bringing into your research in the coming years, even if you don't know it yet.

The singular value decomposition is particularly useful when you have big data. Everybody has a different definition of what big data means; my definition is relatively all-encompassing: big data is data that's bigger than what I can plot in 3D. So five-dimensional data is bigger than 3D, but often big data is something like the results of a huge fluid simulation that Boeing might run, or the results of a huge questionnaire. The post office, for decades now, has been giving a gigantic psychological evaluation to every person who applies for and enters the postal service, so they have millions of employees answering hundreds of questions.
If you write down all of the multiple-choice answers in a big matrix, you have a gigantic data matrix. So questionnaires, fluid data, experiments, seismic measurements across planet Earth, the positions of all the stars, how every senator has voted: these are examples of big data, and the singular value decomposition is a way of extracting the most important features. We want to know what the most important features in the data set are. Sometimes these are called dominant features, and what it means for a feature to be dominant or most important is that it comes up again and again; these are features in your data that you see over and over, so they're strongly correlated.

So let's do some examples and start getting a feeling for what the singular value decomposition actually looks like. Say I have a data matrix X. Typically I'm going to write it as a bunch of column vectors x1, x2, x3, ..., xm, so X is an n by m matrix. Please interrupt with questions at any time if I start getting confusing or unclear, because I deal with the SVD every day and it might be easy for me to assume something that I know and you don't.

Each column of this data matrix X represents some big measurement. It could be the temperature at all of the weather stations on planet Earth on day one; the next column would be the temperature at all of the weather stations on day two, then day three, and so on up to day m. So each column is a high-dimensional measurement vector. Another example: say I have a fluid simulation of flow past a cylinder, or a volcano, or something. I could take the fluid velocity field at all million grid points and stack it into a really tall, million-by-one vector, and the columns would be the fluid velocity at time one, time two, time three, up to time m. So each column is a very high-dimensional measurement. The dimension of the measurement is how many pieces of information I need for that measurement, and I have m examples of these measurements. The m examples are often a sequence of measurements in time, and if they're a sequence in time we call them snapshots. That makes sense, right? Imagine you're taking data with your camera, and at every moment in time you take a snapshot of data.

I'm just going to keep giving you examples of data matrices that have come up in my research or my collaborators' research. Another example: you might take the number of people infected with polio in every single district of Nigeria, maybe four or five hundred districts, so every number in a column is how many cases of polio that district had that month, and each snapshot is that tally for month 1, month 2, month 3, and so on. What we want to do is find things like correlations between these snapshots: are there places in Nigeria that are just more prone to outbreaks of polio? We want to be able to find dominant features in this data set in a way that we can actually understand what we're looking at.
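For concreteness, here is a minimal MATLAB sketch of how one such snapshot matrix might be assembled; the grid sizes and the random stand-in "field" are illustrative assumptions, not from the lecture:

    nx = 100; ny = 50;            % grid for a made-up 2-D field (e.g., a velocity component)
    m  = 30;                      % number of snapshots in time
    X  = zeros(nx*ny, m);         % data matrix: one tall column per snapshot
    for k = 1:m
        field = rand(nx, ny);     % placeholder for the measured field at time k
        X(:, k) = field(:);       % stack the whole field into a single column vector
    end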
The singular value decomposition is a way of rewriting our matrix X to do exactly that. We take our big data matrix X and write it as the product of three new matrices: X = U Σ V*, where the star means complex conjugate transpose. Technically the singular value decomposition exists for complex data matrices X; if the columns of X came from a Schrödinger equation simulation, for example, the entries might be complex valued, in which case V* would not just be the transpose but the complex conjugate transpose. We are only going to deal with real-valued data in this class, so for us the star just means the matrix transpose. Super simple.

So we're taking our matrix X and rewriting it as the product of three new matrices U, Σ, and V, and it turns out these new matrices are much simpler and better organized than the original matrix, and we can use them for tons of things. I'm going to tell you the properties of U, Σ, and V, tell you why they're important, and then give you an example. U is an n by n square matrix. Σ is an n by m diagonal matrix; by diagonal I just mean it has numbers on the diagonal and everything off the diagonal is equal to zero. And V is an m by m square matrix.

In the examples we were just looking at, where a column is the number of sick people in each district of Nigeria, or the fluid velocity at a million grid points, or the readings at all of the weather stations on planet Earth, the length n of a column is often much larger than the number of snapshots m. So for many examples n is a lot bigger than m: disease modeling, weather, fluid modeling, and so on. There are also examples where m is a lot bigger than n. Any ideas what those would be? Yes, the survey is a great example. What's another good example? Sometimes people I know put electrodes in brains; they might measure only 10 different electrode signals, but they measure them at 10 kilohertz, so they have 10 measurements at a bajillion times. Another example is the stock market: there aren't that many stocks, maybe hundreds or thousands, but I have the stock ticker at every second for the last 50 years, so I have many more snapshots than stocks. The stock market is one of the most interesting data sets to analyze with these methods, which is why New York City is on the list of places benefiting tremendously from the SVD. So in a lot of the cases I work with, U is a gigantic square matrix, V is a smaller square matrix, and Σ is a diagonal matrix that's the same size as X.
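As a quick sanity check on those shapes, here is a small MATLAB sketch on a random stand-in matrix; the dimensions are chosen arbitrarily for illustration:

    n = 1000;  m = 20;              % tall-skinny case: many more measurements than snapshots
    X = randn(n, m);                % stand-in data matrix
    [U, S, V] = svd(X);             % full SVD
    size(U)                         % n by n
    size(S)                         % n by m, zero everywhere except the diagonal
    size(V)                         % m by m
    [Ue, Se, Ve] = svd(X, 'econ');  % economy-size SVD keeps only the first m columns of U
    size(Ue)                        % n by m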
The last thing I'll tell you about U and V is that they are extremely special matrices called unitary matrices. This probably deserves red: U and V are unitary. How many of you have heard of unitary before? Unitary matrices are some of the most special matrices on planet Earth. They have the great property that U U* = U* U = I, the n by n identity, and the same with V: V V* = V* V = the m by m identity. So if I want the inverse of U or the inverse of V, all I have to do is take the complex conjugate transpose. Why is that a useful property? Right, the inverse is just the transpose, so it's easy. What's a more practical, day-to-day reason? It's easy to compute: if I wanted to invert a generic n by n matrix, how expensive is that? Order n cubed. If I want to invert a unitary matrix, I just multiply by its transpose, which is order n squared. So it's way, way easier to invert unitary matrices. They also have beautiful geometric properties: they preserve distances. If I take two vectors x and y and map them through a unitary matrix, their lengths and the angle between them are preserved, and only unitary matrices have that property. So if I map any pair of vectors through U or through V, their lengths and the angles between them are preserved. Super simple and super powerful.

The other thing I should tell you is that the singular value decomposition exists and is unique for all matrices X, including all complex n by m matrices. I'm not going to prove this; if you want the proof, it's in books and easy to find. But the SVD definitely exists and is unique for every possible matrix: for any matrix X you can name, I can give you a U, a Σ, and a V such that the product equals your matrix and these properties hold.

Is everyone with me so far? We have a big data matrix X, it holds lots of information I care about, and I want to extract the patterns in this data that are most useful for informing policy or design, or making money, or selling products, or whatever you want to do. We want to find the most important features of the data.

There's one more piece, probably one of the most important. This Σ matrix (I'll draw it for n larger than m) is a diagonal matrix with what are called singular values on the diagonal and zeros everywhere else. The reason they're called singular values is purely historical: a hundred years ago people studied singular integrals and found you could derive the singular value decomposition from them. That's not at all how we derive it now, but they're still called singular values. They have the really important property that σ1 ≥ σ2 ≥ σ3 ≥ ... ≥ σm ≥ 0. They're all non-negative, and they're in descending order from biggest to smallest.
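A quick numerical check of those claims, as a MATLAB sketch on a random stand-in matrix:

    [U, S, V] = svd(randn(8, 5));      % SVD of an arbitrary 8 by 5 matrix
    norm(U'*U - eye(8))                % ~1e-15: U is unitary, so U'*U is the identity
    norm(V'*V - eye(5))                % ~1e-15: V is unitary as well
    sig = diag(S);                     % the singular values
    all(sig >= 0)                      % non-negative ...
    all(diff(sig) <= 0)                % ... and sorted from biggest to smallest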
And that ordering lets me do something really interesting, so I'm going to write this out explicitly. Our data matrix is

X = [u1 u2 ... un] Σ [v1 v2 ... vm]*,

where the ui are the columns of my U matrix, Σ has σ1, σ2, ..., σm on the diagonal with zeros everywhere else, and the vi are the columns of V. That's just what the singular value decomposition looks like. But the fact that the Σ matrix in the middle is purely diagonal, with zeros everywhere else, means that what this is really equal to is

X = σ1 u1 v1* + σ2 u2 v2* + σ3 u3 v3* + ...

Maybe I should be explicit about why that's true. Take V* times Σ: V* is transposed, so it really looks like the rows v1*, v2*, ..., vm*. What the σ1 in the first diagonal slot does is pull out the first row of V* and the first column of U, with everything else multiplied by zeros. The second singular value only multiplies the second row of V* and the second column of U, and so on. So this breaks my matrix X down into a sum of m rank-one matrices; each term is a matrix, so X is a sum of m rank-one matrices. Yes, the star means complex conjugate transpose, so ui is a column vector and vi* is a row vector.

That leads to my next question: what does it mean to take a column vector times a row vector? We know what we get if we take a row vector [1 2 3] times a column vector [4; 5; 6]: it's just 1 times 4 plus 2 times 5 plus 3 times 6. But what if I do it the other way, a column vector times a row vector? The first one is called the inner product; this one is called the outer product. Think of the column as u1 and the row as v1*; if v1 is real-valued, then v1* just takes a column and turns it into a row. Here's how I compute this outer product. First of all, it's going to be a 3 by 3 matrix, and the inner dimensions match (they're both one). I take 4 times the whole row vector to get 4, 8, 12; then 5 times the whole row vector to get 5, 10, 15; then 6 times the whole row vector to get 6, 12, 18. So u1 v1* is a matrix, and it's a rank-one matrix. What do I mean by rank one? All of its columns are linearly dependent on the first one, and all of its rows are linearly dependent on the first one. The columns of this matrix only span one direction, and the rows only span one direction; they don't span all of 3-space, they only span one. So each term is a single rank-one matrix.

Writing X as U times a diagonal Σ times V* therefore lets me write my matrix X as a series of rank-one matrices: a matrix plus a matrix plus a matrix, and so on. And since the σ's are in order of importance, each of these matrices is in order of importance: the first matrix is the most dominant contribution to the data in X, the second matrix, σ2 u2 v2*, is the second most dominant contribution, and then the third, and the fourth, and the fifth. Does that make sense so far?
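Here is that outer product, and the dyadic sum it builds, as a short MATLAB sketch; the 3-vectors are the example from above and the small random matrix is a stand-in:

    u = [4; 5; 6];  v = [1; 2; 3];
    u * v'                          % outer product: [4 8 12; 5 10 15; 6 12 18]
    rank(u * v')                    % 1: every column is a multiple of the first

    X = randn(6, 4);                % small stand-in data matrix
    [U, S, V] = svd(X);
    Xsum = zeros(size(X));
    for i = 1:size(X, 2)            % add up the m rank-one matrices sigma_i * u_i * v_i'
        Xsum = Xsum + S(i,i) * U(:,i) * V(:,i)';
    end
    norm(X - Xsum)                  % ~1e-15: the sum of rank-one terms rebuilds X exactly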
You have a question? Right, this Σ matrix: I'll show you where it comes from, but for now we take for granted that it exists and that we can write it with ordered σ1 through σm. And yes, in this case it looks like the zeros, instead of being below, would be to the right; exactly. You can have tall skinny matrices or short fat matrices: for a tall skinny matrix the zeros in Σ are below the diagonal block, and for a short fat matrix they're to the right. But either way you get the same sum of rank-one matrices. This is called a dyadic sum, if you're interested; it sounds druidic, but it's dyadic, because u1 and v1 are dyads. And because U and V are unitary, the columns of U are orthogonal to each other: u1 is orthogonal to u2 and orthogonal to u3, because otherwise I wouldn't be able to get that identity. Same thing with the columns of V; they're all orthogonal. So these rank-one matrices also span orthogonal directions, and as I add them up I build more and more rank into my matrix X.

Okay, we're ready for the final piece, and then I'll show you a MATLAB example. The actual final piece, I hope, is that you can use this to approximate a matrix: the SVD is useful for matrix approximation. What do I mean by that? I've written my matrix X as a sum of a bunch of rank-one matrices, and I can always just truncate that sum after the first few. I could say X is approximately [u1 u2 u3] times diag(σ1, σ2, σ3) times [v1 v2 v3]*; that is, X is approximately equal to just the first three of those matrices in my sum. Because I know σ1 ≥ σ2 ≥ σ3, all of the stuff I'm throwing away is small in some sense: σ4 is smaller than σ3, σ2, and σ1, so the terms I throw away are all smaller than the terms I keep.

The reason this is so important is that at whatever point I decide to truncate, I'm guaranteed to have the best possible approximation to my matrix at that level of truncation. If I keep only the first three modes and the first three singular values, these are guaranteed to be the best three possible features in my data; they capture the most information, and there is no other set of three u's, σ's, and v's that better approximates the matrix X. If I wanted to keep five pieces of information, I would also keep u4, u5, v4, v5 and σ4, σ5; to keep nine, I'd keep nine. I can keep as many as I like and throw away as many as I like, and I'm guaranteed that's the best approximation possible with that much information.
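A minimal MATLAB sketch of this truncation on a stand-in matrix; it also checks the optimality claim in one common sense, namely that the 2-norm error of the rank-r approximation equals the first neglected singular value:

    X = randn(200, 50);                         % stand-in data matrix
    [U, S, V] = svd(X);
    r = 3;                                      % keep only the first three modes
    Xr = U(:, 1:r) * S(1:r, 1:r) * V(:, 1:r)';  % rank-r approximation of X
    norm(X - Xr)                                % approximation error in the 2-norm ...
    S(r+1, r+1)                                 % ... equals sigma_{r+1}, the first discarded singular value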
So each of these columns is hierarchically ordered in importance: u1 is more important than u2, which is more important than u3, which is more important than u4, and similarly v1 is more important than v2, and so on, in terms of how much they explain your data X. The great thing about these features is that they have the same shape as my measurements: the u's are exactly the same shape as my x's. For example, if X were a bunch of columns of ovarian cancer data, like all of your gene expressions, u1 would be the most important combination of genes expressed in ovarian cancer, and u2 would be the second most important linear combination of genes. Similarly for the disease data, where a column of X was the disease counts in each of 500 districts of Nigeria: u1 is the most important mixture, or ratio, of cases across those districts that explains the data I've collected, and u2 is the second most important. Same with stock markets and everything else.

And if my data was collected in time, so x1 was day 1, x2 was day 2, and xm was day m, then the rows of V* correspond to time histories: these are the dominant time patterns. For example, if I take stock market data and run it through the singular value decomposition, there are two dominant modes over the last ten years. Any guesses what they are? What happened in the last ten years? There was a huge crash in the stock market in late 2008; I was sitting in a BMW dealership, getting ready to test drive a car I couldn't afford at all, when I saw in the news that the stock market had crashed. The first dominant mode in the stock market, even with the crash, is growth, and the second dominant mode is a huge crash and recovery. Those are my V modes, because they are the dominant time progressions of my u's, or at least of u1 and u2. So these are extremely interpretable quantities: the columns of U look like my measurements, they're the most dominant linear combinations of measurements that show up in my data, and the columns of V (rows of V*) are the dominant time histories of those measurements, the patterns I see over and over again. In the disease data, for example, you see a dominant one-year cycle, because there is a yearly cycle in how people spread diseases. Questions before we go on to some MATLAB and some real examples?

Question from the audience: how do you actually compute these things, the σ's? Good. First, I'll just say that I recently wrote a book chapter on the singular value decomposition, and I'm going to post sections of it on the website; if any of you want all of it you can email me, and everything I'm deriving here is fleshed out in a lot more detail there. The basic idea is that the singular value decomposition is measuring correlation between the rows and the columns of X. The U matrix is ordered, and each column of U is a dominant correlation among columns of X, and it measures that in the following way. If I want to measure correlation among the columns of X, I take X* X (star is just a fancy transpose here). What is the size of X* X? I take this tall skinny matrix that's n by m and knock it on its side, so that's m by n, and multiply it by the tall skinny n by m matrix, so X* X is an m by m matrix, and it's a matrix of inner products: the first column is x1* x1, x2* x1, x3* x1, and so on, the second column is x1* x2, x2* x2, and so on, out to column m. I literally take every column of X and inner-product it with every other column of X and build out this big matrix of inner products. If some of these columns are really correlated with each other, what will their inner products look like? They'll be very large. Columns that are correlated give large inner products, and columns that are uncorrelated give small inner products.

So the idea (let me make sure I get the math right) is that

X* X V = V Σa²   and   X X* U = U Σb²,

where I'm calling these Σa and Σb because they're slightly different from the real Σ we wrote down. These are eigenvalue problems: X* X is a big matrix, X X* is another, possibly much bigger, matrix, maybe a million by a million. V holds the eigenvectors of X* X and Σa² is the diagonal matrix of its eigenvalues; similarly, U holds the eigenvectors of X X* and Σb² holds its eigenvalues. Σa² is an m by m matrix with eigenvalues on the diagonal; you take those m eigenvalues, take their square roots, and those are my σ's. So the answer is basically: I take X* X, do a big eigenvalue decomposition, the eigenvectors give me V, and the square roots of the diagonal eigenvalues are my singular values. The X X* matrix might have a million eigenvalues, but you can prove that most of them have to be zero, unless I had a million snapshots, in which case I could have up to a million nonzero eigenvalues.
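Here is a small MATLAB sketch of that conceptual route on a random stand-in matrix, comparing the eigenvalue route with MATLAB's built-in svd; the eigenvector columns can differ in sign, so only the singular values are compared:

    X = randn(100, 10);                  % stand-in tall-skinny data matrix
    C = X' * X;                          % m by m matrix of inner products between columns
    [Vc, D] = eig(C);                    % eigenvectors and eigenvalues of X'*X
    [d, idx] = sort(diag(D), 'descend'); % order eigenvalues from biggest to smallest
    Vc = Vc(:, idx);                     % reorder the eigenvectors to match
    sig_eig = sqrt(max(d, 0));           % singular values are square roots of the eigenvalues
    [~, S, ~] = svd(X, 'econ');
    norm(sig_eig - diag(S))              % ~1e-13: the two routes agree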
So that's how you could actually compute it by hand; there are much, much better ways to compute it on a computer, but this is the conceptual way to get it: take these huge correlation matrices, do an eigen decomposition, and get U, V, and Σ. Yes, question? Σa and Σb are just different in size: Σa is m by m and Σb is n by n, and they have exactly the same nonzero eigenvalues. These are all things you can prove and demonstrate if you want. There's lots more I could tell you about the singular value decomposition; for example, I would never actually compute it this way, and I'll show you better ways on Wednesday. But now I want to give you an example.

In this example, a fairly simple one, I'm just going to take a matrix, and a really easy example of a matrix is an image: an image is a matrix. So X is going to be an image of my dog; his name is Mort. The image has something like a thousand by a thousand pixels, and what we're going to see is that lots of its column vectors are correlated, because Mort has correlated features: if you look at one column, it looks a lot like the next column, and lots of the rows are also correlated. So I can apply the singular value decomposition to approximate Mort with a much lower-rank matrix, and do image compression this way. I can say the image is approximately Ur Σr Vr*, where I keep just the first r columns instead of the first thousand; maybe I keep the first 50 columns and the first 50 singular values and get a big compression, something like a factor of twenty. Any questions while we code this up?
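What follows is a MATLAB sketch of the script described next; the filename 'dog.jpg', the use of rgb2gray (which needs the Image Processing Toolbox), and the plotting details are my assumptions rather than a verbatim copy of the lecture code:

    A = imread('dog.jpg');                            % hypothetical filename for the image of Mort
    X = double(rgb2gray(A));                          % grayscale numerical values: the data matrix
    [U, S, V] = svd(X);                               % the SVD is one line in MATLAB

    subplot(2,2,1), imagesc(X), colormap gray, axis off, title('original')
    plotInd = 2;
    for r = [5 20 100]
        Xapprox = U(:,1:r) * S(1:r,1:r) * V(:,1:r)';  % rank-r approximation of the image
        subplot(2,2,plotInd), imagesc(Xapprox), colormap gray, axis off
        title(sprintf('r = %d', r))
        plotInd = plotInd + 1;
    end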
Okay, this is Mort, and we all know how to read in images now: we load the dog into a matrix A, convert that matrix A into grayscale numerical values, and that's our data matrix X. Then we do the SVD; in MATLAB the SVD is beautifully simple, it's one line: [U, S, V] = svd(X), where S is just Σ (I don't like typing Sigma). I plot my original dog, and then for r = 5, 20, and 100 I create an approximate Mort with the first r columns of U, the first r singular values, and the first r columns of V*. So these are rank-5, rank-20, and rank-100 matrix approximations to Mort. It takes a little while to do the SVD because Mort is a large file.

And this is what we get. The original dog in the upper left is pretty crisp and clear. If I keep only five modes, which is 0.58 percent of the original data storage, so about a 200 times compression, it's a really crummy dog; you can almost convince yourself there's a nose, eyes, and ears, but that's a stretch. When I keep 2.3 percent, it looks like a grainy dog: you can see some of his features, you definitely know it's a dog, but you can't tell everything about him. But when I keep more, like a hundred modes, you get a lot more resolution in the image; it's not perfect, but you see a lot of the features, flecks of snow and things like that, and you get whiskers with a hundred modes. So even complicated matrices can be approximated by much lower-rank matrices using the singular value decomposition. I wouldn't really recommend the SVD for image compression; I'm just showing this as an example that illustrates low-rank matrix approximation. For real image compression we would use the fast Fourier transform and do FFT image compression, which is way better. Questions about this figure?

Yes: this is completely based on correlations in the columns and correlations in the rows, and you can convince yourself that makes sense. Say I take a column right through the dog's midsection, where it goes white, brown, white, brown, some stuff, white. If I look one column to the right or one column to the left, those will have big inner products with it, because the neighboring columns also go white, then brown, then white; they basically look the same. So there's a lot of correlation in neighboring columns and neighboring rows, and if you look at the five most dominant correlations you actually see those patterns popping up: white, brown, white, not quite as brown. That is one feature, and it's creating that entire back midsection, and you also have a fairly simple white, dark, white, dark feature. In the rank-5 image there are really just five column patterns and five row patterns, and you're taking linear combinations of those to build up Mort; that's why it looks kind of crummy.

Another thing you can do is plot the singular values, so let me just do that. I plot, on a semilogy axis (log scale in the y-direction), the diagonal entries of S: these are my singular values. What I find is that the first singular value is really large, about 10^5, a hundred thousand, and the next one is a lot smaller, about 10^4, so the first singular value is ten times bigger than the second; after that they get smaller and smaller. They are ordered from biggest to smallest, and eventually they become much, much smaller than the first one. Oftentimes you'll also want to plot the cumulative sum: take the cumulative sum of the diagonal entries divided by their total sum, and you get the plot on the right.
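As a sketch, the two diagnostic plots just described could look like this in MATLAB, continuing from the svd of the dog image above; the titles and plot styling are my additions:

    sig = diag(S);                                    % the singular values of the image
    subplot(1,2,1)
    semilogy(sig, 'k-o'), title('singular values')    % log scale in y: sigma_1 dwarfs the rest
    subplot(1,2,2)
    plot(cumsum(sig)/sum(sig), 'k-o')
    title('cumulative energy')                        % fraction of the information captured by the first k modes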
That plot shows, for example, that the first column of U and the first column of V account for 28 percent of the information in the dog: twenty-eight percent of the raw information content of that image is in the first column of U and the first row of V*. With two columns of U and two rows of V* I get 34 percent, with three about 38, and five gets me 42.46 percent of the total information. Forty-two percent is not enough, which is why the rank-5 image looks crappy, but if I go up to a hundred modes I'm getting about seventy-five percent of the energy in the dog, and it just keeps getting better and better: the more modes, the more columns I include, the better my approximation of Mort gets, until I reach one hundred percent Mort. Any questions about this? Does it make sense what we're doing?

Now, this was a single image, where we looked at correlations between its rows and columns. We could also put in many images that have been reshaped into extraordinarily large column vectors, and that's something we'll do. I could take a picture of every single one of your faces, create a big column vector with a million pixels for each of you, make each of those a column of X, do this same analysis, and find the dominant facial patterns shared between all of you, the dominant facial correlations across everyone in the class. Or I could take a hundred pictures of just one of you with different expressions, happy, sad, angry, and find your individual dominant facial features. That's what we're going to do on Friday; it's called eigenfaces. It's an eigen-based method, an eigen decomposition, so those are eigenfaces, and that's the basis for Facebook facial recognition and all the scary stuff that robots are going to do in the future. It's pretty cool stuff.

So I told you a lot about the singular value decomposition today. You can read more about it; I'm going to post the first section of the book chapter tonight. On Wednesday we're going to talk more about the correlations and some real practical examples like ovarian cancer, and on Friday we're going to look at eigenfaces and build our own facial recognition software. All right, thank you.
Info
Channel: Steve Brunton
Views: 38,082
Rating: 4.9667773 out of 5
Keywords: Singular value decomposition, SVD, Engineering mathematics, Data science, Dimensionality reduction, Compression, Linear algebra
Id: yA66KsFqUAE
Length: 50min 12sec (3012 seconds)
Published: Sat Mar 05 2016