Visualizing Logs Using ElasticSearch, Logstash and Kibana

Captions
Hello everybody, my name is Jeff Sogolov, and you can find me on Twitter at the handle right below. We're going to talk about Elasticsearch, Logstash, and Kibana. I just want to give a brief shout-out to the two people a lot of my material is based on. The ELK stack really does save companies millions of dollars, and I'm going to talk briefly later on about how it actually does that.

So let's talk about logs in general. What is a log? What do you want to do with logs? Do you want to aggregate the logs, analyze the logs, and how do you currently do that now? One of the main selling points of this whole ELK stack, which combines the three technologies, is that it gives you an easy way to aggregate the logs, analyze the logs, maybe do some analytics, maybe do fraud detection, and all kinds of other things that one may want to do.

A log really has to be human-readable, and it's got to be machine-parsable. There are examples of the different logs one may have: the first one might be an IIS log, and there are several other kinds. A log is really a combination of a timestamp and data. One of the main selling points of one of the technologies in the ELK stack, Logstash being the L, is to stop inventing date formats. Every time somebody logs, whether it's .NET or Java technologies, they've got their own way of logging: either the month comes first and the date goes later, or, in Europe and the rest of the world, it's vice versa; do you put seconds or do you not put seconds? It's really a solved problem; there's already a nice ISO standard. You want to log one way and one way only, and regardless of the technologies that you use, you really just want to see logs with one timestamp format.

Here are some pro tips for logging, and there's a link in the last bullet point at the bottom; you can Google for it or go to that URL if you want more information. There's an ISO standard that exists. For Linux you want to put logs in a specific location; for Windows, maybe some company or some team you're on wants you to put them in a specific location. You want to make sure the log is plain text, without all kinds of weird characters and Unicode. You don't want to put in passwords or private information; whether it's HIPAA or something else, there are all kinds of requirements and compliance you've got to go through. And no matter what the technology, a common way is to log by level: info logs, debug logs, warnings, exceptions. You really want to categorize those so that later on you can actually query by the different types.

So a log really needs to be human-readable, and a lot of logs are not. If you've had experience with logs from different systems, you open up a log and it's just a mess; sometimes I just close it because I can't even understand half the stuff that's in there. And then logs really need to be machine-parsable. Yes, if I open up a log, as a human being I can read it, but if the machine can't parse the log, then a lot of the value is lost. So we want to make sure the logs are really machine-parsable; that's the key. I've seen a lot of people use regexes, regular expressions, and one of the big regular expressions out there is the one for an Apache log. This is a real Apache log regex that will parse each line of an Apache log.
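To give a sense of how hairy that gets, here is a sketch of a regex for the Apache combined log format. This is an illustration of the idea, not necessarily the exact regex from the slide:

    ^(\S+) (\S+) (\S+) \[([^\]]+)\] "(\S+) (\S+) (\S+)" (\d{3}) (\S+) "([^"]*)" "([^"]*)"$

The groups are, in order: client host, identity, user, timestamp, method, path, protocol, status code, bytes sent, referrer, and user agent.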
It's pretty large. If you know it off by hand, fantastic; I can't even remember how to do simple dates, I've got to Google for that. So regexes are really complicated.

Let's talk about the evolution of log processing. One of the things you can do, in Unix at least, is open up the log, parse the log, and get values out of it. It's really confusing, it's manual, and it's really difficult to do. And if you want to combine different hosts, because you've got all kinds of servers, maybe a web farm, maybe a cluster of different nodes, RabbitMQ, maybe a distributed database, and you really want to get all the logs, you've got to write some sort of script to actually go inside every host and bring the logs over locally so you can do something with them. So a lot of times you have all these disparate systems that you need to bring into a centralized location.

There's already a commercial tool called Splunk which does data aggregation, and it has all kinds of other great aspects: it can not only aggregate but also analyze the data, and it can do it at very high volume. The problem with Splunk, though, is that only the first gigabyte you use per day is free. So you get hooked on it, you start using it, you think it's fantastic, and it is fantastic, but it gets very, very expensive over time. In fact, one of the quotes I've seen for one of our teams was about five million dollars for what they were trying to do. At that cost, analyzing logs just gets very pricey; for a lot of startups, and even for a lot of companies, it's not something they want to invest in. Splunk provides a lot of value, so you've got to weigh the cost against the value it provides. But we really can do a similar thing with the ELK stack, which is what we're going to cover.

So the next step for a lot of companies is open source: okay, Splunk is expensive, it's great, but let's go open source. One of the great tools we can use is Logstash, which really gives you a way to aggregate all the logs that you have. Logstash has a great community with a lot of people; it's super fast; it's JVM-based, and the language they used is JRuby; it's contained in a single jar; it's pretty easy to extend; and it's pretty easy to integrate in both a Java-based environment and a .NET environment. It's really technology-agnostic, if you will: yes, you need a JVM to run it, but whoever pumps the logs, whoever creates the logs, could be a Ruby application, a Python script, anything. It doesn't matter; Logstash just takes the logs and simply aggregates them. And Jordan Sissel, who's the original developer, is really awesome and responds pretty fast on Twitter if you have any issues; there's a forum as well.

So let's talk about the Logstash architecture. Logstash is one of the three components I talked about originally in the ELK stack, the L for Logstash. You've got different shippers: just imagine you have web servers all over the place, database servers, all kinds of systems that create logs.
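Put together, the pipeline described here looks roughly like this (a sketch of the topology, not a diagram from the talk):

    many hosts                      central indexer              storage              UI
    [apps writing logs]             [Logstash]
    [logstash-forwarder]  ------>   inputs -> filters  ------>   [Elasticsearch] -->  [Kibana]
    watches/ships files             -> outputs                   index + search       dashboards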
On each of those hosts you can install what's called a logstash-forwarder, which is actually written in Go, but you can run it in a Windows environment as well as a Linux environment; it's just an executable, and I'll show an example of it later. So you have all these shippers, and a shipper like the logstash-forwarder has a very small footprint. All it does is look at and watch the files you point it at, aggregate those files, and send them over to an indexer, which can then pump the data into some search or storage place. And there's a web interface called Kibana that can help you analyze the logs.

So let's talk about the lifecycle of a log. Usually you record a log; it goes to, say, a rolling file; you transmit the log somewhere; you store it; and most of the time you just delete it. It either rolls over or maybe you have a TTL on it; it kind of becomes useless at some point. In reality, for most people, you record logs and most of the time you don't look at them, or you just delete them right off the bat. You don't know what to do with them, and you can't really go back in history: well, it got rolled over, so how do I go back? I can't. So what do I do? It becomes really problematic.

Logstash has this notion of inputs and outputs, and in between it's got filters, so you can do more advanced stuff to the data you're getting. Let's talk about some of the inputs. Imagine any input. It could be a file, so maybe you're monitoring or watching a file. Maybe you're sending packets through some TCP port, or UDP. Maybe you're using RabbitMQ: you're publishing messages to RabbitMQ, and you really want to get every message that goes through your messaging system into some other store, maybe an auditing system, whatever else you want to do; you can listen to those messages and pump the data somewhere else. It could be ZeroMQ, it could be Amazon's queuing system, it could be Redis, it could be Twitter, it could be WebSockets. It's got dozens and dozens of different inputs you can use.

What you can also do, once the data comes in from any one of those inputs, or from more than one of them, is mutate the data. You can do something to the data, you can enrich it: maybe you calculate metrics, maybe you want to do a translation, maybe when you see an IP address you want to look up where it came from. It's got different filters you can apply to the data before you output it someplace.

The outputs are also a very large set; again, you can use them out of the box or create your own. You can listen on one TCP port but forward to another TCP port; you can send email; you can publish to a different RabbitMQ exchange; maybe you want to use ZeroMQ; maybe you want to republish the data to Amazon, or to Graphite, Nagios, and all kinds of other systems. We're going to concentrate on Elasticsearch: we basically take the information from, say, a file or a UDP port, and we pump that data into Elasticsearch. Logstash really has a lot of inputs, outputs, and filters, and I'm shocked sometimes to see people say "what if I want to do X, Y, and Z" and they can't find it.

So what does the Logstash configuration file look like? It really has inputs, filters, and outputs. An input could be what I mentioned before: it could be TCP, it could be RabbitMQ, it could be ZeroMQ, or it could be a file.
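As a minimal sketch of that three-section layout, in the Logstash 1.x config syntax of the era (the path and port here are placeholders, not values from the talk):

    input {
      file { path => "/var/log/apache2/access.log" }
      tcp  { port => 5000 }
    }
    filter {
      # transform or enrich each event here before it goes out
    }
    output {
      elasticsearch { host => "localhost" }
      stdout { }
    }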
The filter is whatever you want to do to each line that comes in, and the output is where you want to send the logs once you've translated them, once you've done something to them.

So let's talk about a particular filter that is used, at least by me, 80% of the time, and that's called grok. Grok is just something Logstash named; it's a label for a regular expression. I could take a specific line in the log that looks a very particular way, create a regular expression or use an existing one, and give it a label. I can create a USERNAME label, say: anytime Logstash sees something that looks this particular way, that's a username; every time it sees a month written a particular way, just call it a month; and so on. If you look down below, an Apache log pattern is really a combination of other labels. Logstash comes with a directory that has lots and lots of predefined labels you can use out of the box, and if there isn't one already created for you, you just create a regular expression and create a label off of it, and then you can use it in your Logstash configuration file. So in essence, grok turns a complex regular expression into a repeatable pattern: I attach a label to a regular expression, and then I just use the label anywhere I want.

So in the Logstash config file, a filter, again, is just what you want to do to the data before you output it. I could basically say: all right, I'm just looking for this pattern, and COMBINEDAPACHELOG is a label that's already created, based on the very large regular expression you've seen in the past. If you have a log line that looks like the one above, we basically want to turn it into something like what's below: JSON data that you can query on. We extract the client address, the username, the timestamp, whether it was an HTTP POST or a GET, what the path was, and so on. We're extracting data from the log and creating fields off of it, so we can nicely query the data later. And we can turn any date into a really nice, ISO 8601-compliant timestamp. So no matter how you log, no matter what technology you use, Logstash will automatically say: all right, I'm going to grab this data from all these places and turn it into a consistent timestamp that I can use while analyzing the data.

If you remember our Apache log from earlier, let's define some inputs and some filters. We're basically saying: in the input section, we're going to monitor a specific path, a specific directory; we're going to apply a grok filter, and the pattern is COMBINEDAPACHELOG, which already exists out of the box. Then in the outputs, you've got to decide where you want to output the data. You can have fifty outputs, you can have two outputs, you can have just one. You can output the data into statsd, or to Elasticsearch, or both. You can visualize the statsd data using, for example, a tool called Graphite; and we'll talk about a tool called Kibana, which communicates with Elasticsearch.
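A sketch of what that Apache setup could look like (the path is a placeholder; COMBINEDAPACHELOG ships in Logstash's pattern directory):

    input {
      file { path => "/var/log/apache2/access.log" type => "apache" }
    }
    filter {
      grok { match => [ "message", "%{COMBINEDAPACHELOG}" ] }
    }
    output {
      elasticsearch { host => "localhost" }
    }

Each raw line then comes out the other side as a structured event, along these lines (values invented for illustration):

    { "clientip": "192.168.1.10", "auth": "bob", "verb": "GET",
      "request": "/index.html", "response": "200",
      "@timestamp": "2014-05-16T12:34:56.000Z" }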
Elasticsearch receives your data and indexes it based on the fields you've extracted, and then you can visualize your data. Just as Splunk has a visualization tool, Kibana is the visualization tool that sits on top of Elasticsearch, and you can do all kinds of great visualizations: you can do maps, you can create pie charts, all kinds of stuff, and you can create your own dashboards. We'll actually cover how to create a dashboard ourselves.

One of the interesting things you can do with Logstash is connect to a Twitter feed and analyze the data from Twitter. We can, for example, have an input that says: okay, it's Twitter, here are my credentials, look for a keyword called "bieber", and please output the data to Elasticsearch. We'll give it a type called "twitter", and then we can actually analyze the data: we can see who's tweeting about Bieber, we can graph the data, we can even see what clients are being used, and so on. So we can really get some interesting results.

If you already have a central server, for example rsyslog or syslog-ng, you can actually create an input that points to that log, you can filter using an existing pattern called SYSLOGBASE, and you can output it to Elasticsearch. This is what the Logstash conf looks like for that, and again, I'll show in the demo exactly how I've set up the Logstash config file and how to run it in both Windows and Linux environments; this is just an example.

One of the other interesting things you can do is collect logs from your network appliances or switches. You can actually see how the messages flow, maybe from your switch to your database and back to another switch, so you really get a nice visual of how messages flow within your system, which is really, really powerful. A lot of times people just do this at the application level and forget the networking aspect. When you're debugging, the application is just one aspect; there's always networking, and if you're able to debug from the network perspective as well, that will help you tremendously in your debugging efforts.

In Linux you can pipe one command into another command; you can do kind of the same thing with Logstash. You can say: I've got one instance of Logstash that's looking at a specific file, or multiple files, and I'm going to output to another Logstash instance, which will do something else. You can really pipe messages from one system to another. Maybe you want to output from one Logstash instance into Elasticsearch, and then also to another Logstash instance, which will do something different on a completely different machine, so you can distribute your load that way as well (see the sketch at the end of this section).

The Logstash architecture is really powerful. Linux, Windows, network switches: anything that supports TCP or UDP, plus the other inputs that I showed, can feed it, and you can do powerful stuff. We can, for example, pump the data into a RabbitMQ cluster: you can publish data into RabbitMQ and have a Logstash instance that connects to the RabbitMQ exchanges and queues, gets those messages, and pumps the data back into another RabbitMQ cluster, again maybe for load distribution, or for parallelizing tasks, whatever you may want to do.
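A minimal sketch of the Logstash-to-Logstash piping mentioned above, with a purely illustrative hostname:

    # instance A: tails files and ships them to another Logstash over TCP
    input  { file { path => "/var/log/app/*.log" } }
    output { tcp { host => "indexer.example.com" port => 5000 mode => "client" } }

    # instance B, on a different machine: receives, then indexes
    input  { tcp { port => 5000 mode => "server" } }
    output { elasticsearch { host => "localhost" } }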
With Elasticsearch, which is a distributed system that you install separately, you can pump the data into any output, and one of those outputs is RabbitMQ, or Elasticsearch itself. So you can do different variations; that's what I'm trying to get at. You can pump the data from Logstash into RabbitMQ and then have Elasticsearch with a plugin that takes the data from RabbitMQ, or you can go from Logstash directly to Elasticsearch.

Elasticsearch is a distributed system. It really indexes your data; it's based on Lucene; it has a RESTful API; you can also connect via Thrift; and you can put Kibana on top of it to actually analyze the data. And you don't have to use Elasticsearch just for searching logs: you can use it for full-text search, maybe a recommendation engine, fraud detection, all kinds of other things. It's based on Lucene, it indexes the data, and then you can do all kinds of cool things, including a very cool feature called the percolator, which basically monitors data as it comes in and can notify you when something changes, when something exceeds a specific SLA, or when a specific keyword you're looking for shows up. Very powerful. It's a separate install from Logstash and a separate install from Kibana. Elasticsearch is a JVM-based system; Logstash is also JVM; and Kibana is actually an AngularJS app that can run on any web server and just needs a connection to Elasticsearch. We're going to use nginx for the demo.

If you'd like further reading, this is good information; there is also a Logstash Puppet module, which I might talk about, but that's a separate discussion. And with that, we're done with slides; let's do a live demo.

This demo is really based on a reusable way to recreate everything I want to talk about, so you can push a button and it will create a whole environment for you, install everything for you, and you can really play with it. One of the ways I'm doing this is Vagrant. Vagrant really helps me create a VM; out of the box it supports VirtualBox, but I'm going to use Parallels. So I'm going to execute a vagrant command which will create a new Ubuntu virtual image for me in Parallels, and it will install a few technologies that I'll briefly touch upon. One is Docker. Docker gives you sort of a separate VM within my VM; it's not really a VM as such, it's called a container, and I could create dozens and dozens of containers. Think about why you'd want to do that: if you're using VMs, how many VMs can you really create on your machine? Whereas with Docker's Linux containers, I can create dozens and dozens of different containers, and I can create a cluster of different Elasticsearch nodes and so on. So it's really powerful. Docker itself is not the point of this demo; I just want to make sure people understand the technologies that are involved.

The first piece is just a Vagrant Ruby script that kicks off the virtual machine. It creates an Ubuntu 13.10 image, it's going to create a shared public network, and it's going to map my Mac workspace to the Ubuntu workspace. It will automatically pull an image; you can see this is the image, and if you go to index.docker.io you can find it under my name. It basically takes a full image that I have and runs all of these commands: it will install Java 7, it will download and install Elasticsearch, it will download Kibana, it will install nginx and serve Kibana from it, and it will install Logstash and run it.
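So it kind of does all these things. As a sketch, that image boils down to a Dockerfile along these lines; the versions, URLs, and the start script are illustrative, not copied from the talk:

    FROM ubuntu:13.10
    # Java 7: both Elasticsearch and Logstash are JVM-based
    RUN apt-get update && apt-get install -y openjdk-7-jre-headless nginx wget
    # Elasticsearch for indexing and search
    RUN wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.tar.gz \
        && tar xzf elasticsearch-1.1.1.tar.gz
    # Kibana is a static AngularJS app; serve it from nginx
    RUN wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz \
        && tar xzf kibana-3.0.1.tar.gz -C /usr/share/nginx/html
    # Logstash ships as a single archive with the jar inside
    RUN wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.0.tar.gz \
        && tar xzf logstash-1.4.0.tar.gz
    EXPOSE 80 9200 9300 9999
    # hypothetical wrapper script that starts all three services
    CMD ["/bin/bash", "/start-elk.sh"]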
The reason it's called a trusted image is that I put all the commands here for Docker to see; that's why it's trusted. If it's not a trusted image, it's just a binary image that you download. So you can manually download this Docker image, everything will be pre-installed, and you can run it. I'm exposing different ports for different things, but I'll cover that in more detail.

Switching back to Vagrant, just to summarize real quick: it will run a single Docker container within my Ubuntu VM, and that Docker container will contain the whole ELK stack. You can see I'm exposing all these different ports so I can get access to that Docker container within my VM. You can see I ran `vagrant up` right here; this basically runs the Vagrant script you just saw, which executes Docker, which executes that script, which downloads all the different pieces like Logstash, Elasticsearch, and Kibana and sets it all up. It creates a Docker image; this is my IP, and you can SSH into it. This ran for about fifteen or twenty minutes, just because there's a lot of stuff to download. The nice thing about Docker is that you can take my image, or anybody else's image, and add stuff to it without re-downloading; it's kind of like git for a Linux VM, so you can build on top of it. So it ran, it installed Docker, set everything up, and now I can SSH into that Docker container. Right here you can see I SSH'd a few times, forgot the password, and finally got into it.

If we take a look at my Logstash configuration file, forget about all the other stuff and just concentrate on this aspect: I'm listening on port 9999, and I'm going to tag everything as "adp", which is the company I work for right now. So anytime somebody sends a request to port 9999, I'm going to tag it as adp and push it down to Elasticsearch, which is right here; I'm just going to output it to Elasticsearch. (A sketch of this config appears at the end of this section.)

So I'm going to use JMeter. I just ran JMeter; this is my Docker IP, this is the port that I'm going to send requests to, and I'm going to send about a hundred different requests. So I run this, and then if I go take a look at what's happening: I just hit my Docker IP, and 9200 is the port that Elasticsearch is running on there. You can see that I have one node, and it automatically names the nodes. If I take a look at the data, you can see it created a bunch of these requests, tagged adp: I sent those requests through JMeter, over UDP on port 9999, and I can now see the logs in Elasticsearch. Not the most interesting demo; this is just our baseline.

So let's do a little more complicated demo. I want to pump more advanced data into Elasticsearch, and I want to do more advanced things to my logging. I'm going to run a simple Java program that creates all kinds of different logs; I'll show you what it creates. When I run it, it's going to start pumping data. It's just going to log the data; it's not actually connecting to anything, it's just logging the data locally. We'll let it run for a couple of minutes.
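Here is a minimal sketch of the baseline setup just described; the tag and port match the talk, while the type name and Elasticsearch host are placeholders:

    input {
      udp { port => 9999 type => "jmeter" }
    }
    filter {
      mutate { add_tag => [ "adp" ] }
    }
    output {
      elasticsearch { host => "localhost" }
    }

And a quick way to eyeball what landed, using Elasticsearch's REST API:

    curl "http://<docker-ip>:9200/_search?q=tags:adp&pretty"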
While it's running, Logstash watches what it writes; I'll talk about this later on, so let's just keep it going for now. (We'll also do a .NET example from a Windows machine later.) This particular config simply looks at this location, takes the data, and outputs the data here, and I'll cover the different filters I have and what they do to the data.

So, while it's running: I want to be able to ask questions of my log data and get answers. Let me give you an example of what I just logged. Here's everything that was just logged; I stopped the process because it generates too much data. A line could look something like this, or something like that, and so on. I've got four different types of logs. The first one says some application, an e-commerce application, is searching a category called "portable" (you can see "portable" right here), and it's coming from a specific IP address. The second line is somebody researching the "mobile" category, maybe the brand name Samsung, coming from a specific IP address. The third one is researching "portable", looking for an Apple with an 11-inch screen and a 128-gigabyte disk, from a specific IP address. And the fourth type of log that was generated: they visited the iPhone 5c, the 16-gigabyte model, priced at $599.

Having all this information, what can I do with it? It's just a log right now, but I want to be able to ask these questions: What is the research-versus-sales ratio, purely from my logs? What criteria are used most? What's the most-viewed product? What was the best-selling product model? Who are my best customers? What is the proportion of men to women? And where do we stand in relation to our sales targets? Just from logging this information, I want to get answers to those questions. And you can tell it created a bunch of those four kinds of logs; it just logged a lot of information.

Now, if I go back to Elasticsearch: okay, it created a bunch of logs, fantastic. And you notice it created all these fields that you see in dark blue. Out of those logs I extracted a timestamp, a brand, a category, a city name, an email address, all kinds of information, and the way I extracted it is with filters. If we go back to my Logstash conf file, you can see I have different filters. I have a custom grok filter that uses existing labels; I'm looking for specific timestamps; I'm splitting certain things that contain commas into separate fields; if I see something that looks like this, I'm going to label it a sale, and if I see something like a search request, I'm just going to tag it as a search. I'm converting, I'm splitting: for example, I have pipes separating different categories, and I'm splitting those up. I'm taking an IP address that I'm logging and, using an open-source database, converting that IP address into an approximate location of where it came from. And I'm outputting all of that information to Elasticsearch.

You don't have to worry about what these filters specifically do, because you'll create your own; it really depends on the data that you have. My data was in a specific format, so, as you can see, I have pipes here, and I'm splitting on the pipe for the different options; I have a comma here, because I really want to separate brand from options; and I'm separating category from brand, the IP address, and so on.
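As a sketch of that kind of filter chain; the patterns, field names, and delimiters here are illustrative, since the talk doesn't show the exact config at this point:

    filter {
      # label the two kinds of events so they can be charted against each other
      grok { match => [ "message", "%{IP:clientip} .*SearchRequest.*" ] add_tag => [ "search" ] }
      grok { match => [ "message", "%{IP:clientip} .*Sale.*" ] add_tag => [ "sale" ] }
      # pipe-delimited options become an array field
      mutate { split => [ "options", "|" ] }
      # approximate geography from the logged IP, via the open-source GeoIP database
      geoip { source => "clientip" }
    }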
And you can see I have a search-request pattern: every time I see a search request, I tag it as a search.

So if I open up Kibana now, which is running on nginx, it gives me a default interface; this is not something that I created, it just comes by default. If I look at the sample dashboard they ship, I can see the same things you saw when you hit Elasticsearch directly: the same fields. I can query those, I can filter those fields by different values; I can do it by category, I can filter by all these fields that were just created.

But I actually have a different dashboard prepared for this data, so I'm going to load it, change the time window, and quickly zoom in on the specific times. This dashboard is much more visually impressive. It shows search versus sale right here; men versus women, with counts; it shows the different countries the IP addresses originated from; it can even show whether I hit my goals. I can do sale versus search, and I can actually click on this and drill into just sales; I can do women versus men; I can get my top options, my top customers right here, my top products, my top IPs; and I can filter down to specific products. So this dashboard really gives me answers to all my questions, just from the logs that you saw. Just to recap real quick: from these logs, I can answer these questions right here by creating a dashboard like this.

The dashboards are really easy to create. You create rows: for example, this is a row that contains two columns, this is another row, and every panel you see sits in a row. I can modify each row and add whatever I want. If I go and click on "configure row", I can see that I have four columns in that particular row; this is just my title; and the type is the kind of chart I can create. So I can create a map, depending on what the data is; out of all these options I can create different charts; and that's how this dashboard was created. Really powerful stuff; at least for me, this is what sold it, because a lot of times the visualization is the huge selling point. I can filter by different time ranges: the last seven days, or, since I just ran this, the last hour, and this is the information I got. And if I filter by something, say I click on the sample "iPad Nano" product, it's going to filter to that product and show me the women-to-men ratio and sale-versus-search for that particular product.

So let's switch over to the Windows and .NET side. I have a simple console app that uses Serilog; Serilog is great if you've heard of semantic logging. Right now I'm using a log4net sink, and that sink is right here; log4net is set up to point to a specific directory with a file, as a rolling file appender. As soon as I execute this, it's going to cause some exceptions to happen, some successes, a fatal, an error, just different types of logs. Then what I want to do is forward what I'm logging into Elasticsearch for it to index, so I can actually view it through the Kibana interface. The logstash-forwarder is really built in Go, and in order to install it you just clone the repository and build it; for Windows, that actually creates a logstash-forwarder.exe, and we're going to use that exe.
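A sketch of that build step, assuming a Go toolchain is installed; the repository URL is illustrative, since the project has moved around over the years:

    git clone https://github.com/elasticsearch/logstash-forwarder.git
    cd logstash-forwarder
    go build    # on Windows this produces logstash-forwarder.exe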
You can run it as a Windows service. It needs a config file, basically something that looks like this, where I'm saying: send all my logs to this location, this IP over this port; it needs a certificate; monitor this location, which is where my console app is going to output; and I'm just going to tag it as "syslog", though you can tag it as whatever you want. (A sketch of this file appears at the end of this section.)

So let's run the logstash-forwarder. I'm going to pass in the configuration file you've just seen; it will connect to that particular IP address over this port. I'm just going to execute the app and let it run; it's going to do a bunch of stuff, and you can see that while it's running, the logstash-forwarder sees that the file is being appended to and just forwards all the messages over to Elasticsearch. So if we switch over to Elasticsearch, I'm going to go to the browser, and you can see all the logs that I just submitted. I called it "syslog"; that's the highlighted syslog tag, and you can put anything you want there. You see all those messages being forwarded to Elasticsearch. So this is really good stuff: we have published messages from our Linux system, from our Java application, and we've forwarded messages from a .NET client, all to the same cluster. So now you have an aggregated view of your logs, and we've seen how you can view those logs via Kibana.

Just to give a quick overview of Logstash based on what we've seen: if you go to the latest Logstash documentation, here are all the different inputs it allows you to get data from, here's what you can do with the data and how you can filter it, and here's where you can output the data. You can obviously output it to a specific location, to Elasticsearch, which is what we just did, or via a file; you get the idea. Really powerful stuff.

The last thing I wanted to mention: so far we've only had one node. If I go to the Elasticsearch overview (by the way, this is just a plugin I installed for Elasticsearch that visualizes the actual Elasticsearch cluster), right now it's actually just running one node, and you can see Elasticsearch automatically gave it a name. We can go and create a separate Docker container that just has Elasticsearch. What I have now is a single container that runs Elasticsearch, Logstash, and Kibana, so there's a lot of stuff running in one node. I could start up another container from the same image as my first container, but that would defeat the purpose; I just want to start another container that has Elasticsearch without Kibana and everything else, because I obviously already have those in my other container. So you can see that I could run a two-node Elasticsearch cluster just by having two separate Docker containers, which are much more lightweight than VMs. To do that, you go to index.docker.io and search for "elasticsearch"; it's got a bunch of containers that people have already built.
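Here is the logstash-forwarder config sketch promised above; the IP, port, certificate path, and log paths are all placeholders:

    {
      "network": {
        "servers": [ "192.168.1.20:5043" ],
        "ssl ca": "C:\\certs\\logstash-forwarder.crt"
      },
      "files": [
        {
          "paths": [ "C:\\logs\\consoleapp\\*.log" ],
          "fields": { "type": "syslog" }
        }
      ]
    }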
The one of mine that I showed you before is this one, and as I said, it contains a lot of different installs, which we don't want to duplicate. Right now I just want to add a second container that just has Elasticsearch, nothing else, so let's pick dockerfile/elasticsearch. On the left, I'm SSH'd into the actual Docker container; on the right is my VM that's running the Docker containers. In my VM on the right, I'm just going to say `docker run` with the image called dockerfile/elasticsearch, and it's going to start downloading that particular image. So it downloads that Docker container, and you can see at the bottom that it started. I didn't run it in the background, but you can do that too, just by passing `docker run` the container name and `-d`, and it will run in the background. So if I refresh my cluster view, you can see, by the name on the right here, that it gave it a name, and now a second node has shown up. So I can add ten more containers and I'll have a cluster of ten Elasticsearch nodes, automatically discovering each other. It's pretty easy. I hope you liked the presentation, and have fun!
Info
Channel: Jeff Sogolov
Views: 526,794
Keywords: ElasticSearch, Logstash, Kibana, Docker
Id: Kqs7UcCJquM
Length: 48min 18sec (2898 seconds)
Published: Fri May 16 2014