CppCon 2015: John Farrier “Demystifying Floating Point”

Video Statistics and Information

Captions
My name is John Farrier. I work for Booz Allen Hamilton; I do modeling and simulation for a living. I am a generalist: I know very little about many things, and floating point is one of them. So I wanted to come today and talk a little bit about some things I've learned about floating point and prepare everybody else for learning more. This is going to be the basics of what floating point is, the vocabulary you need to continue to understand more about it, a few tools so you can walk away with something usable, and basically preparation for learning more as it pertains to whatever you do on a day-to-day basis. It's a pretty full-length talk, so if you could, I would appreciate keeping your questions until the end.

If you are concerned about floating point, you're in good company. I like the quote that nothing brings fear to my heart more than a floating point number, and perhaps after the first few examples you will begin to understand why. But I think it's also important, as we go through this, to keep everything in perspective. As a modeling and simulation person I like this quote as well: the modeling error is usually several orders of magnitude greater than the floating point error, and people who nonchalantly model the real world and then sneer that floating point is just an approximation have things well out of perspective. So keep that in mind as we go through this: it is not the end of the world just because math doesn't actually work the way you think it might.

There are a lot of floating-point fallacies, at least floating around my office. Every time something seems not quite right I hear "it's just floating point error, don't worry about it." There's all sorts of other myth and lore going on about floating point, and hopefully after this we'll be able to talk about these things more intelligently and guide our discussions in a productive way to minimize the errors that actually do exist.

If you were to Google this, like I did, all roads lead to Stack Overflow, and every Stack Overflow post will tell you to read "What Every Computer Scientist Should Know About Floating-Point Arithmetic." That document is 93 pages; it's not exactly a quick answer to whatever question you innocently were trying to ask. The first sentence of the document is that floating-point arithmetic is considered an esoteric subject by many people — so now please continue reading for another 93 pages on something esoteric when you just had a simple question. Thank you, Stack Overflow. Please do go read that document after my presentation.

Let's first talk about the anatomy of a float. There was a time, evidently — I wasn't around for it — prior to 1985 in computing when hardware vendors implemented things in a very hardware-vendor-specific way. We don't have that problem anymore today, right? So along came a standard, or two, or three. Standards were created so that math could be implemented portably between machines, and there was much rejoicing. The standards say that some significant mathematical identities should be maintained, and that there are about 50 functions that should work basically the same on all hardware. The standard has evidently been a little weak on exactly how to do a lot of those things, so there can still be some anomalies from system to system, but in general we now have portable math. The goal is to ensure that every floating-point number representation is unique — which seems obvious, but that requirement puts a special case around zero.
The standard also ensures that every floating-point number has an opposite — it doesn't do you any good to only do positive math — and it specifies the algorithms; a few are listed here, but there are about 50 in the standard. Internally, the floating-point representation is an approximation using scientific notation. Easy enough, so let's talk about how that actually works. You have negative one to the sign, then you have some base — in our world it's base two — raised to an exponent, and then one-point-mantissa, the mantissa just being the part that follows the dot. If you were to look at the bit representation of that and print it out with a bitset, you would get something like this; I added the brackets to parse it for you, but that's what you're looking at if you use a bitset on a 32-bit float. So 32 bits gives you one sign bit, eight exponent bits, and 23 mantissa bits. 64 bits doesn't exactly double things like you might have anticipated: you actually get a lot more mantissa bits, which ends up being important, because remember the exponent is only used to scale your number.

There are of course special floats. Maybe divide by zero — I don't know if anyone's ever run into that when doing math; I haven't, but I've read about it. There's not-a-number: some special situations get you into NaN conditions. You get signed infinity — positive infinity for overflow protection with very large numbers, negative infinity at the other end — and we also have positive zero and negative zero. I'm not going to dwell on these corner cases in this talk, but it's worth noting they're all there; we've probably all run into them, and you can look up the kinds of situations that produce special floats.

So that's the floating-point layout, and we are of course now all experts on how floating point works. Let's start with a simple example — possibly it's late enough in the afternoon that we can do this math in our heads. I've got some obscurely named variables: 0.1, 0.2, and 0.3. I'm going to sum them up and do some checks. All of this code is on GitHub; I've got a lot more examples under the CppCon project, and you're welcome to download it. I tested it on one whole compiler, once, so I'm sure it will work fine everywhere — but it gets me data for this briefing. So let's do this in our heads real quick: the sum should certainly be 0.3. If we do this in floating point, neither 0.1 nor 0.2 nor 0.3 is the number we typed in. If you actually print them out, they're close, but they're not the exact numbers, and we haven't started doing any really interesting math yet.

You may have noticed that the example was all floats — floats are 32-bit — so being intelligent people you say, yes, yes, but I'm just going to use a double, that will fix it, it will be much better. Well, no: you just ended up with a different problem, and now 0.3 does not equal 0.3 anymore, which may not be what you were anticipating when you just increased your precision. So there are other things at work here.
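As a concrete illustration of the examples above, here's a minimal sketch (not the speaker's actual GitHub code) that prints the stored values of 0.1f, 0.2f, and 0.3f, shows the double-precision comparison going wrong, and dumps the raw bits of a float with std::bitset as described earlier:

```cpp
#include <bitset>
#include <cstdint>
#include <cstring>
#include <iostream>

// Print the raw bits of a 32-bit float: [1 sign][8 exponent][23 mantissa].
void printBits(float f) {
    std::uint32_t u = 0;
    std::memcpy(&u, &f, sizeof u);           // safe way to reinterpret the bytes
    std::cout << std::bitset<32>(u) << '\n';
}

int main() {
    std::cout.precision(20);
    std::cout << 0.1f << '\n';               // 0.100000001490116119...
    std::cout << 0.2f << '\n';               // 0.200000002980232238...
    std::cout << 0.3f << '\n';               // 0.300000011920928955...

    std::cout << std::boolalpha;
    std::cout << (0.1 + 0.2 == 0.3) << '\n'; // false on typical IEEE 754 doubles
    std::cout << (0.3f == 0.3)      << '\n'; // false: float and double approximate 0.3 differently

    printBits(0.1f);                         // sign 0, exponent 01111011, mantissa 10011001100110011001101
}
```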
Maybe we can understand this by looking at the storage of 1.0. Let's just start there: how do we put 1.0 in a float? We have a sign bit, an exponent, and a mantissa, so negative one to the zero, times two to the zero, times 1.0 — okay, it looks like we need to store all zeros. But that is not what 1.0 looks like in memory; it actually looks like this, which isn't quite what we said starting off for the exponent. Why is that? Well, the exponent is of course shift-127 encoded. I'm sure everybody knew that going in; I put it on the slide just to clarify. It's shift-127 encoded, which is slightly different from what you may have anticipated — but that creates our special case for zero, which we talked about up front: all zeros is zero. If we want to actually represent something else, we shift-127 encode the exponent so that zero stays a special case and every floating-point number stays unique. Okay, great, so now we know how to represent 1.0.

Let's toggle that last bit and go to the next number. All we did was take the zero on the end and change it to a one, and we get a different number. What is the significance of this number? This is epsilon. Epsilon is defined as the difference between 1.0 and the next available floating-point number. It's useful for "almost equal" conditions: as you saw from the 0.3-does-not-equal-0.3 example, directly comparing floats can be dangerous, and epsilon can help. Check out std::numeric_limits — it defines epsilon and a host of the other things we'll be talking about, so you can find out what these values are for your specific system without going through this exercise.

Significant digits are basically a measure of precision, and you have to keep in mind, when you're figuring out your IEEE floats, that you've got all this precision in the mantissa, but to compare numbers you've got to have exponents that are about the same, so the number of significant digits you effectively get can change with the exponent. This table is interesting: for the singles and doubles we'll mostly be using, you have six to nine and 15 to 17 significant decimal digits of precision. So even if you can print out lots and lots of decimal places, some of that is going to be noise.

Which brings us to how we store very, very small things. For a 32-bit float you have a minimum base-10 exponent of about negative 36. Say I want to represent a number that's smaller, so I go one smaller, 10 to the negative 37 — how do we represent that? All of a sudden our exponent field has gone to zero, and this is a special case: this is called a denormalized number. If you're reading about these and you come across "denormalized" or "denormalization," this is a way to start to gradually fail: you are running out of bits to work with at this point, and it helps prevent underflow. It's important to understand that, as a result of how these numbers are implemented, their distribution is not uniform. Most of the precision lies between zero and one, positive and negative — you've got a lot of precision right there — and the further out you go, the more the precision falls away.
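Here's a small sketch of how you can query these values on your own system — epsilon, the next float after 1.0, and the smallest normalized and denormalized values — using std::numeric_limits and std::nextafter:

```cpp
#include <cmath>
#include <iostream>
#include <limits>

int main() {
    std::cout.precision(20);

    // Epsilon: the gap between 1.0 and the next representable float.
    std::cout << std::numeric_limits<float>::epsilon() << '\n';        // ~1.19e-07
    std::cout << std::nextafter(1.0f, 2.0f) - 1.0f << '\n';            // the same gap, computed directly

    // The smallest normalized float, and the smallest denormalized (subnormal) float.
    std::cout << std::numeric_limits<float>::min() << '\n';            // ~1.18e-38
    std::cout << std::numeric_limits<float>::denorm_min() << '\n';     // ~1.4e-45
}
```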
There's std::nextafter, which will tell you what the next representable number is after the one you have — which is kind of interesting, because the spacing keeps changing as you move along. On the MSDN site there was an exercise on this that's fairly interesting; go and look at it. I believe they implemented a six-bit float and then plotted the distance between each representable number, and you get this interesting distribution. It's a different distribution than what we're dealing with for IEEE floats, but it should have the same shape — at least I thought it would — so I wrote a little program to walk the numbers. I counted all the representable numbers from 0.0 to 0.1, and there were quite a few; going from 0.1 to 0.2 there are far fewer; and then every time I double the distance I get that same count again. I only ran this out to 3.2, but you can see how it scales. To me, this says it is very useful to keep your math normalized to the range negative one to one: you've got a good amount of precision in there, and that's going to help your math out. When you start dealing with values that deviate well away from negative one to one, you are losing precision — make sure that's acceptable for the kind of math you're doing.

Which brings us to errors in floats. (I thought this was an interesting float with an interesting error on it — you never know what you're going to get when you're looking for images for your briefing.) So let's look at the storage of pi. Pi is a good number, right? If you look at the value of pi represented as a float, it's slightly different from the regular value of pi, and there's a delta there. What does that delta mean? That is units in the last place — ULP. There are two very popular definitions out on the web, the Harrison and the Goldberg definitions, and six to eight other definitions floating around; I'm not sure there is a definitive one, but the one I liked, because I understood it the best, was the gap between the two floating-point numbers nearest to x, even if x is one of them. So here you're looking at nine units in the last place, that being the difference. The standard itself requires that the basic math operations are correctly rounded to within 0.5 units in the last place, which is a very good guarantee. More complex functions may only be rounded to within 0.5 to 1 units in the last place, so depending on the kinds of math you're doing and the functions you're using, you're going to get different guarantees on precision.

Another way to measure error — which I like, especially for unit testing and regression testing — is relative error: the difference between the real number and the approximated number, divided by the real number. I went ahead and wrote that out, because it's hard to say but much easier to see. If we look at the relative error for this example it's about 2.8 × 10⁻⁸, which I think helps put the error in perspective — where it actually lies, how far out it is — and then you can decide whether that's good enough for the type of application you have.
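A quick sketch of that relative-error measurement, treating the double value of pi as the "real" number and the float as the approximation (the exact figure depends on your platform, but it should come out around 2.8 × 10⁻⁸):

```cpp
#include <cmath>
#include <iostream>

int main() {
    const double real   = 3.14159265358979323846;   // pi, to double precision
    const float  approx = 3.14159265358979323846f;  // the same literal, stored as a float

    const double relativeError = std::fabs(real - approx) / real;

    std::cout.precision(10);
    std::cout << relativeError << '\n';              // roughly 2.8e-08
}
```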
Then there's the one everybody always talks about, which is rounding error. Everything is rounding error; forget all the other things we talked about — in my experience everybody just calls everything rounding error. So what is rounding, exactly? We have a fundamental limit: we've only got so many bits to fit an infinite set of numbers into, so of course we're going to have to round at some point. You can't have all of that precision all the time; the universe is only so big. On the CPU the math is done exactly and then rounded before it's given back to you.

There are several rounding modes that can be used. Sometimes these can be controlled by compiler options; sometimes you do not have control over them. There tend to be right ways and wrong ways to do this rounding. Sometimes it makes sense to round to nearest. There's round toward zero — think of the number line: you're always moving toward zero, whether you're positive or negative. There's also rounding toward positive infinity and rounding toward negative infinity. And there's a term out there called "exactly rounded," which you're welcome to Google; it's quite interesting as well.

Well, what about the halfway case, the 0.5 case? You could round up, you could round down, or you could round to even. In school I was always taught that 0.5 rounds up to the next number, but that really wouldn't work out well here: now you're putting a bias into your rounding, and your numbers will tend to grow, versus something that might neutralize that. So the technique used is called round to even: you round to the next even digit, so that statistically, hopefully, half the time you're rounding up and half the time you're rounding down, and over time that cancels out and keeps your total error from growing.

How is that done? There are at least three more bits after that last mantissa bit we talked about: the guard bit, the round bit, and the sticky bit. These bits are only used while calculations are being done; they're not in memory, they're not something you're going to observe. They help control the precision and help decide how to round. How? If the guard bit is zero, round down — it doesn't matter what the other two bits are. If the guard bit is one, you round up if the least significant bit of the mantissa is one (this is the round-to-even part), round down if it's zero, and in all the other cases you round up. And, if anybody is interested, when you're doing the math and bits are shifted out of scope, if any of them is a one as it passes through, the sticky bit becomes one and stays one for the rest of that calculation, to help with this rounding as digits fall off.
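The runtime rounding mode is something you can actually poke at from C++ via <cfenv>. A minimal sketch follows; note that strictly speaking you are supposed to announce floating-point environment access with `#pragma STDC FENV_ACCESS ON`, which not every compiler honors:

```cpp
#include <cfenv>
#include <cmath>
#include <cstdio>

int main() {
    const double x = 2.5;  // a halfway case

    std::fesetround(FE_TONEAREST);   // default mode: ties round to even
    std::printf("to nearest:  %.1f\n", std::nearbyint(x));   // 2
    std::fesetround(FE_DOWNWARD);    // toward negative infinity
    std::printf("downward:    %.1f\n", std::nearbyint(x));   // 2
    std::fesetround(FE_UPWARD);      // toward positive infinity
    std::printf("upward:      %.1f\n", std::nearbyint(x));   // 3
    std::fesetround(FE_TOWARDZERO);  // toward zero
    std::printf("toward zero: %.1f\n", std::nearbyint(x));   // 2

    std::fesetround(FE_TONEAREST);   // restore the default before doing anything else
}
```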
Another source of error is significance error. This is a fairly classic example that I have stolen from a lot of other people's work. There's a classical formula for computing the area of a triangle — an arbitrary triangle, not a right triangle. If you've got a class and you want to implement this, you find the equation in a book and you type it in; it isn't hard to type in, and you're fairly happy with the answer you get. You will usually get an answer out of it, but it is not generally correct for certain kinds of triangles — namely triangles where two of the sides are extremely close to the same length and the third side is much smaller than the other two. There's a book — I have it in the references, and the title escapes me because I'm presenting in front of a bunch of people; it's a blue book on floating-point arithmetic... no, that one's out of print; there's one you can still buy... I don't remember; it's in my notes, it's on the GitHub; I'm sorry I brought it up. Anyway, Kahan showed there's a way to rearrange this algorithm so you get a much better answer, and the idea in general is that when you're doing arithmetic you want to keep big things with big things and little things with little things for as long as you can — keep exponents of similar magnitude together when you're doing your arithmetic.

So I went ahead and wrote this up. There's my triangle, with a, b, and c the lengths of the sides. The application I wrote — it's online if you want to grab it — was all done with Google Test, so we can check some things, which is handy. You can trust that I implemented the two formulas correctly; I'm sure I did, I rarely make mistakes. So let's do the math. First we do it as written here, with single-precision floating-point arithmetic: one answer from typing the textbook equation in, another from the numerical-analysis way of writing the algorithm — and those numbers are drastically different. You can probably see that from where you're sitting, but I printed out the delta anyway; there's clearly a problem. So let's toggle this over and do it with double precision. Now both give numbers that are at least in the ballpark, but you can see that Kahan's algorithm is much closer to the real result than the classical formula when you care about precision.

These things can be important; they can bite you. Imagine — go back a slide — you wrote this test and said, well, okay, 14.5, sure, that seems like a reasonable number; nobody in here jumped up and screamed "that's not 14.5." You would look at the results of your program, it seems reasonable, great, move on with life. But the double precision was very important in this case. The real answer here is: use stable algorithms. Sometimes that is very hard. This honestly hasn't been on my radar long enough, and if you can screw up computing the area of a triangle, well, the stuff we do on a daily basis is generally much more complicated than that — where else are we going wrong? Parentheses can help: if you know you can keep big numbers with big numbers and little numbers with little numbers, go ahead and group things that way yourself. Ultimately, though, you have to analyze the algorithms you're using if you really care about the precision of the answers you're getting. The compiler won't fight you on this: it will not rearrange your math unless it can prove the rearrangement yields the same result (read the slide — my tongue doesn't work right now). It won't screw it up if you're explicit. Look back at that Kahan algorithm example; there are a lot of good proofs online about it too, if you really want to get into it.
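Here's a sketch of the two formulas being compared — the classical (Heron's) formula and Kahan's rearrangement — run in both single and double precision. The needle-like triangle used here is my own made-up example, not the one from the slides; with it, the classical formula in float comes out noticeably too large while Kahan's version stays accurate. Keep the parentheses exactly as written, and don't build this with something like -ffast-math, which licenses the compiler to reassociate:

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>

// Classical (Heron's) formula, typed straight out of the book.
template <typename T>
T heronArea(T a, T b, T c) {
    T s = (a + b + c) / T(2);
    return std::sqrt(s * (s - a) * (s - b) * (s - c));
}

// Kahan's rearrangement: sort so that a >= b >= c and keep the parentheses exactly as written.
template <typename T>
T kahanArea(T a, T b, T c) {
    if (a < b) std::swap(a, b);
    if (a < c) std::swap(a, c);
    if (b < c) std::swap(b, c);
    return std::sqrt((a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))) / T(4);
}

int main() {
    // A hypothetical "needle" triangle: two unit sides and a third side just shy of 2.
    const float a = std::nextafter(2.0f, 0.0f);
    const float b = 1.0f, c = 1.0f;

    std::cout.precision(10);
    std::cout << "float  classical: " << heronArea(a, b, c) << '\n';     // noticeably too large
    std::cout << "float  Kahan:     " << kahanArea(a, b, c) << '\n';
    std::cout << "double classical: " << heronArea<double>(a, b, c) << '\n';
    std::cout << "double Kahan:     " << kahanArea<double>(a, b, c) << '\n';
}
```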
How did this all come onto my radar? Let me do a practical example. In modeling and simulation we're always messing with time; that's the fundamental thing. We run simulations that progress over time, and time is used in lots of the calculations. Sure, this is domain-specific — maybe you don't care much about Doppler shift and pulse compression — but for our simulations we care a lot about those things, and they use time. Time is very important; you don't want to screw time up. And how could you screw time up? It's probably the most trivial of all the calculations your simulation is going to do. And of course we don't test anything — that takes a lot of time, what good ever came of that, testing never finds bugs anyway, right?

So let's talk about two ways of implementing simulation time that you'll find out in the wild. The first is the naive first attempt that everybody writes. You take the total number of frames you want to run, or some end condition, and the length of your frame — here we're running at 100 Hertz, so 0.01 — and a little variable to track simulation time. In the loop, on every pass, we do some work and then increment simulation time by the frame length. Easy enough, and it works for arbitrary time steps: sometimes you're not guaranteed to always have 100 Hertz and you need to adjust from frame to frame, so this gives you that ability; it's a good general-purpose simulation counter. The other way — the delta-time approach — says: I'm just going to track my total frames at a constant frame length (which of course should have been constexpr; someone will yell at me for that), and my simulation time is the total number of frames times my frame length. A little counter will give me a good simulation time.

So let's run the simulations and just look at the time. On the left, at the top, is what 1.2 seconds should have been represented as, and here's what our simulation time really was, with the delta below it. At 12 seconds our time was a little bit off; at 120 seconds it's more off; and by 600 seconds we're 0.27 seconds off, which is really significant if we think we're running a 100-Hertz simulation. The delta-time approach is much better: there's a little bit of noise up front, but then we get really consistent results. When I first saw this — and the accumulating case on the left really was a simulation we'd been using for a long time — you start looking and you go, this is not doing at all what I thought it was doing; we've got to do something better. That's what turned me on to looking more into floats and how to use them properly, so we can get rid of these silly errors in our simulations and get on with the real work.

So you can imagine my surprise when someone came in and said, I've got a really, really accurate simulation — it's so accurate it uses sub-microsecond precision, sub-microsecond pulse-to-pulse modeling of waveforms, it's fantastic, there's so much precision in our simulation. In the back of my mind I'm thinking: precision — you keep using that word. So let's do the same kind of example with microsecond precision. Pick an arbitrary frame rate: our frame delta will be 1.22345 microseconds, which comes out to roughly 810,000 Hertz — pretty quick. When our simulation time should be 0.012345 seconds, the clock they had was actually at 0.01481 seconds; run out to 12.345 seconds and the clock is now behind. It started out ahead and moved to be behind, because they're accumulating time in a floating-point value. Fine — I don't need that many more decimal places of precision. We've already talked about this, right? Just use floats, it's faster, I'm told.
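A sketch of the two counters described above, using made-up frame counts rather than the numbers from the slides. Accumulating a float frame length drifts visibly by 600 simulated seconds, while multiplying the frame count by the frame length stays close:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    const float frameLength = 0.01f;           // 100 Hz frames, as in the example above
    const std::int64_t totalFrames = 60000;    // 600 simulated seconds

    // Naive accumulator: add the frame length on every frame.
    float accumulatedTime = 0.0f;
    for (std::int64_t frame = 0; frame < totalFrames; ++frame) {
        accumulatedTime += frameLength;
    }

    // Delta-time approach: simulation time is frame count times frame length.
    const float countedTime = static_cast<float>(totalFrames) * frameLength;

    std::cout.precision(10);
    std::cout << "accumulated: " << accumulatedTime << '\n';  // drifts noticeably away from 600
    std::cout << "counted:     " << countedTime << '\n';      // stays within a fraction of a millisecond
}
```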
So should you just not use floats? Well, you can always use integers. Integers are very fast; they tend to have a trade-off in precision and range. You can still do input and output with floats, so you're only exposing floats to your user — if you've got input files or reports you're writing, do the conversions there. I'm told financial applications tend to do that sort of thing, which makes sense. The other route is the software route: use a math library. It might be slower, but it might give you a lot more control, and there are math libraries out there to help with this — you're not alone if you really care about high-precision math. There are some good libraries, and there are things in Boost, which you may already be using, that help with this as well. One of our solutions in our kernel has simply been to keep simulation time tracked as an int64 or uint64, which tends to give us plenty of range — we don't tend to project out more than five hundred thousand years or so.

On to mathematical identities. We learned these in grade school; they're simple, and you probably use them all the time to rearrange expressions, group terms, whatever. They don't necessarily apply here. The distributive rule does not apply. The associative rule does not apply: change the location of the parentheses and you can get a different answer. You cannot interchange division and multiplication — you can try, but you'll get different answers. There's a really nice paper on the Microsoft website — part paper, part tutorial — with a great explanation of the assumptions the compiler may or may not be making. It's long and it only talks about the Microsoft compiler, but I think it gives you insight into how other compilers think about these things as well.

So let's look at another example. Take a radian, multiply it by 10⁻¹⁰, and then divide it by 10⁻¹⁰. We know what number we should get out — that's our control — and how different could they be? I was kind of surprised: it's quite a bit different. Mathematically you know they're identical, but practically you do get different results.

Going through all this, there are floating-point exceptions you can use, and I like these. It's happened more than once that I'm at the end of a calculation, I have a NaN sitting in front of me, and I don't know where the hell it came from, because there were five million computations before it. Exceptions can make your life better here: they can be triggered on underflow, overflow, divide by zero, and invalid computations. You can turn these on, get a nice breakpoint, and it becomes much easier to debug those things. You can also start to tune the floating-point environment and learn more about it with this little header down here. I haven't dug into that much — I don't want to mess with more things than I have to.
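The standard piece of that "little header" is <cfenv>, which at least lets you test the exception flags after the fact. A minimal sketch is below; getting an actual breakpoint or trap requires platform-specific calls (for example feenableexcept on glibc or _controlfp_s on Windows), which aren't shown here, and the FENV_ACCESS pragma caveat from earlier applies:

```cpp
#include <cfenv>
#include <cstdio>

int main() {
    std::feclearexcept(FE_ALL_EXCEPT);

    volatile double zero = 0.0;      // volatile so the compiler can't fold these away
    volatile double huge = 1e308;
    volatile double a = 1.0 / zero;  // raises FE_DIVBYZERO (result is +infinity)
    volatile double b = huge * huge; // raises FE_OVERFLOW

    if (std::fetestexcept(FE_DIVBYZERO)) std::puts("divide-by-zero was raised");
    if (std::fetestexcept(FE_OVERFLOW))  std::puts("overflow was raised");
    if (std::fetestexcept(FE_INVALID))   std::puts("invalid was raised");

    (void)a; (void)b;
}
```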
There are a lot of other kinds of errors we could talk about as well; I'm not going to get into all of them, other than to note — if you weren't aware — that in our C++ domain the type narrowing and widening rules mean floats will be promoted to doubles if doubles are involved, and they won't be demoted until you store the result back into a float. That's something to keep in mind. My suggestion, if you care a lot about the math, is to just keep it in doubles and not fight the types — use the correct types when you can.

Okay, I've got a few other miscellaneous notes, things to take away. One is: please use your compiler's output; it's trying to help. Don't turn off the warnings. A conversion from double to float — it's just letting you know we might be losing something; something's going to fall off the end. Overflow — that might be important if we're going to keep using that number for anything. Truncation, absolutely. And that your numbers might even exceed the range that can be represented. Great warnings; please heed them.

There's also a technique called fused multiply-add — you can look it up on Wikipedia. It's supported by some architectures but not universally, and it gives you one rounding instead of two in this kind of situation. There's a GCC compiler option to turn it on and a pragma in Visual Studio. It's not something I have used, but I wanted to make everyone aware of it. SSE can help as well; if you're performance conscious you're probably already aware of SSE and using it. Unfortunately C, C++, and Fortran don't have native support for it, but there is compiler-specific support worth looking at. I'm not aware of what the rules are for SSE math relative to the IEEE floating-point standard, though, so that's something that would require investigation if you go that way.

There are a lot of floating-point tricks out there. This is an approximation of a famous trick from the Quake source code; I'd encourage you to look it up, even while you're sitting here, and read the comments — it's much more entertaining than this talk. Because of the storage format, floats lend themselves to some silly tricks, which I don't fully understand, but I trust that they tend to work.

Finally, testing. You must test; you must design for numerical stability. We saw that with the algorithms: if changing the location of parentheses in a string of additions impacts the result of the math you're doing — holy crap, right? There's so much more that you're doing: sines, cosines, inverses, multiplications, square roots, and it's all creating little issues, and it's all going to add up. If you really care about your mathematical result, you're going to look at that. Do meaningful testing: don't let that 14.5 value pass in your test; put some controls in, do something by hand maybe, and figure out what it is you should be looking for. Documentation is important too, even if it's just documenting your assumptions — what you think should be happening — so that if it turns out to be a problem later, there's traceability. There's also a technique worth looking at called backwards error analysis: are the outputs identical for slightly modified inputs, and what are you expecting there? That might help shed some light on whether your algorithms are stable.

So that is what I have. I included an xkcd comic because everybody loves those, and if you don't understand it, I've included the explanation as well, so you can laugh after everybody else. That is the end of my talk; I'd be happy to take some questions.
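Before the Q&A, a quick sketch of the fused multiply-add mentioned above, via std::fma from <cmath>. The inputs are chosen so that the product 1 − 2⁻⁶⁰ can't be stored in a double; whether the plain expression also gets fused depends on your compiler's contraction setting (GCC's -ffp-contract, the Visual Studio fp_contract pragma), which is exactly the knob discussed a moment ago:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double a = 1.0 + std::ldexp(1.0, -30);    // 1 + 2^-30, exactly representable
    const double b = 1.0 - std::ldexp(1.0, -30);    // 1 - 2^-30, exactly representable

    const double separate = a * b - 1.0;            // two roundings: a*b rounds to 1.0 first
    const double fused    = std::fma(a, b, -1.0);   // one rounding: the full product is kept internally

    std::printf("a * b - 1.0     = %g\n", separate); // typically 0 (unless the compiler contracted it)
    std::printf("fma(a, b, -1.0) = %g\n", fused);    // -2^-60, about -8.67e-19
}
```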
[Q&A] Yeah, I don't have an example on a slide, but there are examples out there you can find fairly easily. If you've got your epsilon value — I believe there are C macros, FLT_EPSILON and DBL_EPSILON, and the C++ epsilon is defined in numeric_limits — what you would do is subtract the two numbers, take the absolute value of that, and see whether it's less than epsilon. That would be one way to use epsilon in your application to compare two floats for equality. Does that make sense? So you would subtract a from b, take the absolute value, and see if it's less than epsilon — so 3 and 4 would not compare equal. Yes, that's true: epsilon does not scale that way. Epsilon is really the gap between one and the next number. The technique you describe — divide the two numbers and compare that to epsilon — is basically the relative-error approach we talked about.

On the question about slide 49: right, those two properties — I didn't hear the last bit of that. Actually, you know, I don't know. What did I say up front? That's way back — the storage of one, wasn't it? I don't know; these are the identities I found, there aren't too many, and no, this isn't the same as x plus y plus z.

The next question is: if multiplication and division cannot be interchanged, is one better than the other? My understanding is that multiplication tends to be better than division — practically speaking, that's what I have seen — but I don't know that it's a hard and fast rule. For me the default is multiplication. The comment from the audience is that multiplication is faster than division, so you may need to ask whether you're optimizing for precision or for speed, because those may not have the same answer. Yes, I did mention that on a slide — Hacker's Delight, that's got a lot of interesting float tricks in it, among other things. And the comment that x plus x, and the divide-by-zero case, is okay — you were taking really good notes; I should have shown this to you before I stood up here in front of all the YouTubes. Okay, excellent: so the fp_contract pragma that I showed had the parameter backwards, yes, and that link I had — that's the comment about FMA only being supported on certain hardware, and there's a list of supported hardware at the link I provided. Good comment. The comment there was that denormal numbers tend to make code slower — two orders of magnitude slower, you've seen. Okay, excellent, very good.

I'm sorry, I couldn't — oh, okay, great: so there are some articles about how to use epsilon properly. I think the Google Test library does use it; we tend not to use it, and instead look at relative error and just measure that, so that's what we're doing inside our code. Oh, fantastic — so it's using units in the last place. Great; then I have no practical examples to offer you for where epsilon is being used.

Next, the comment was that C++ does not officially require the IEEE floating-point standard, and if it were to, what do I think? I don't think my opinion matters — though I do have the mic, that's a good point. I think it goes back to the earlier point: sometimes people want speed and sometimes people want precision, and those two things tend to fight each other. I've had discussions with folks doing work on GPUs who had lots and lots of floats; they didn't care so much about doubles, because floats were good enough for the games they were working on, whereas other people really need the double precision because they're using it for scientific computing.
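To make that epsilon discussion concrete, here's a hypothetical pair of helpers: a naive absolute comparison against epsilon (which, as the questioner pointed out, only makes sense near 1.0 because epsilon doesn't scale) and a relative comparison that scales the tolerance by the operands' magnitude. The 4×-epsilon tolerance is an arbitrary choice for illustration:

```cpp
#include <cmath>
#include <iostream>
#include <limits>

// Absolute comparison against epsilon: only reasonable for values near 1.0,
// since epsilon is the gap between 1.0 and the next representable float.
bool almostEqualAbsolute(float a, float b) {
    return std::fabs(a - b) < std::numeric_limits<float>::epsilon();
}

// Relative comparison: scale the tolerance by the magnitude of the operands.
bool almostEqualRelative(float a, float b,
                         float maxRelative = 4.0f * std::numeric_limits<float>::epsilon()) {
    const float diff    = std::fabs(a - b);
    const float largest = std::fmax(std::fabs(a), std::fabs(b));
    return diff <= largest * maxRelative;
}

int main() {
    const float x = 67329.234f;
    const float y = std::nextafter(x, 1.0e30f);  // the very next representable float above x

    std::cout << std::boolalpha;
    std::cout << almostEqualAbsolute(0.1f + 0.2f, 0.3f) << '\n'; // true
    std::cout << almostEqualAbsolute(x, y) << '\n';  // false: the gap here (~0.0078) dwarfs epsilon
    std::cout << almostEqualRelative(x, y) << '\n';  // true: x and y are only one ulp apart
}
```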
So I don't know how you decide which way to go with that, other than maybe to make it some type of compile-time option — whether you want the fast behavior or the conformant behavior. There are flags for that now; I don't know how conformant they actually make things. I can tell you that in the CMake project I set up for this code, I did have a drop-down for the floating-point option on Visual Studio — I think there are three of them — and you can pick one, and depending on how you've got that option set, these numbers change: some tests pass, some tests fail. And everything these tests are doing is very trivial. So it's interesting that even with this trivial math, changing those compiler flags gives you different numerical results, let alone doing something complicated and important. Anything else? Great — thank you very much.
Info
Channel: CppCon
Views: 29,458
Keywords: John Farrier, CppCon 2015, Computer Science (Field), Bash Films, Conference Video Recording, Event Video Recording, Video Conferencing, Video Services, Floating Point (Measurement System)
Id: k12BJGSc2Nc
Length: 47min 33sec (2853 seconds)
Published: Thu Oct 15 2015