Istio Service Mesh Explained

Captions
So you've always wanted to learn about Istio and get into service mesh technology. Learning a service mesh for the first time can be super daunting: there's a lot of terminology and things to know about and learn. In this video we're going to take a look at Istio and break it down into simple terms. We're going to deploy my video catalog microservice architecture, add a mesh to it, and see what we can do with the service mesh. We have a lot of features to cover, so without further ado, let's go.

The first question to ask is: why a service mesh? A service mesh helps us solve specific challenges around service-to-service communication. If you're totally new to service meshes, pause right here and check out my link down below to my introduction guide to service mesh, where I cover all the challenges and reasons you could be looking at a service mesh as a solution.

We're going to take a look at getting Istio installed, the useful things we can do, observability and metrics, how to detect network failures and faults, automatic retry of requests between microservices, traffic splits, canary deployments, and TLS between microservices.

But the first thing we're going to need is a Kubernetes cluster for our microservice architecture. In our architecture we have a video HTML website, videos-web, exposed via an ingress controller on servicemesh.demo/home/. When it loads up, it makes a call to the playlist API, also exposed by an ingress controller, over servicemesh.demo/api/playlists. The playlist API retrieves the playlists from a Redis database. For each video in a playlist it calls the videos API, which is a private service in the cluster with its own Redis database. After the playlist API has retrieved all the videos from the videos API, it sends the JSON data back to videos-web, which renders it in the browser.

If you take a look at my GitHub repo, I have a kubernetes folder, under that a servicemesh folder, and under that an istio folder with a readme, and in there are all the steps I'm going to be showing you today. Be sure to check out the link down below to the source code so you can follow along.

If you head over to the Istio site, go to Docs and then Setup, Istio has a platform setup guide that helps us prepare many types of environments for Istio: different Kubernetes clusters on different cloud providers, including kind. In this demo we're going to run a local kind Kubernetes cluster. kind is a great tool for running Kubernetes clusters locally inside of Docker containers.

To create a Kubernetes cluster, we say kind create cluster, call it istio, and run Kubernetes 1.19.1. I copy this, paste it into the terminal, and that gives us a Kubernetes cluster within a Docker container that we can use and throw away when we're done. Now that our cluster has been created, I can say kubectl get nodes, and we can see we have a single-node Kubernetes cluster up and running, ready to go.
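For reference, a minimal sketch of those two commands; pinning the node image to kindest/node:v1.19.1 is my assumption for getting Kubernetes 1.19.1:

    # Create a throwaway single-node cluster inside Docker
    kind create cluster --name istio --image kindest/node:v1.19.1

    # Verify the cluster is reachable
    kubectl get nodes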
Now that we have a Kubernetes cluster, the next thing to do is deploy our microservice video catalog. We'll deploy an ingress controller, then proceed to deploy the playlist API, playlist database, videos API, videos web, and videos database. To deploy the ingress controller, I create a new namespace called ingress-nginx and then apply the ingress-nginx YAML files to my cluster. Then, to deploy my applications, I say kubectl apply and apply all of the microservice YAML files. I paste that into the terminal, and that deploys the entire architecture to our cluster. If we wait a couple of seconds, we can make sure our applications are running by saying kubectl get pods, and we should see all of our applications up and running. We also want to make sure the ingress controller deployed correctly, so I say kubectl get pods in the ingress-nginx namespace, and we can see two NGINX ingress controllers up and running.

To test out our microservice architecture, we're going to need a fake DNS name, so I'll create one called servicemesh.demo. That's very simple: I just edit my hosts file and add the following entry. Then, to access that DNS name, we need to port-forward to the ingress controller, so I say kubectl port-forward in the ingress-nginx namespace to the ingress controller deployment on port 80. I copy that, paste it into the terminal, and if I go to the browser on servicemesh.demo/home/, I can see our application videos-web running in the browser, pulling out the full list of playlists as well as the videos inside each playlist.
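A rough sketch of those steps; the manifest paths and the ingress deployment name are assumptions based on the repo layout described above:

    # Deploy the ingress controller and the video catalog microservices
    kubectl create ns ingress-nginx
    kubectl apply -n ingress-nginx -f kubernetes/ingress/        # path assumed
    kubectl apply -f kubernetes/servicemesh/applications/        # path assumed

    # Fake DNS: add this line to /etc/hosts (or the Windows hosts file)
    # 127.0.0.1 servicemesh.demo

    # Reach the ingress controller locally (deployment name assumed;
    # binding local port 80 may need elevated privileges)
    kubectl -n ingress-nginx port-forward deploy/nginx-ingress-controller 80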
The explanation of the whole architecture is under kubernetes/servicemesh in an introduction readme, where I explain the full architecture in detail, with an architecture diagram and instructions on how to build and run these applications locally in Docker containers.

So now that we have a Kubernetes cluster and some real-life microservices with network traffic, let's take a look at what it takes to get Istio installed. To install Istio and download all the dependencies, I open up a new terminal and run a small container to do all the work in. I say docker run -it, mount my home directory into /root (because that's where the kubeconfig for my kind cluster lives), mount the working directory (this entire GitHub repo) into a folder called /work, set that as the working directory, give it host-level networking, and run a small Alpine container. I paste that into the terminal, and now I have a small dev environment in a container to install all the dependencies in.

Firstly, I install curl and nano; I'll use curl to download kubectl, and nano is my lightweight text editor. I use curl to download the latest version of kubectl, give it execution rights, and move it to /usr/local/bin. Then I can say kubectl help, and we can see kubectl is now installed. I also set my default kube editor to nano, which allows me to edit Kubernetes objects. To test whether this container can access my Kubernetes cluster, I say kubectl get nodes, and we can see our single-node istio kind cluster is accessible and up and running.

The Istio documentation is a great place to start: they have a getting-started page that shows you how to download the Istio command-line utility and install Istio to your cluster. Istio has a downloadIstio URL that you can run using curl, passing in the Istio version that you want. If we take that command and run it in the terminal, it downloads Istio and extracts a tar file into your local directory. If we take a look at the root of my GitHub repo, there's an istio folder that's been created, and in there is a bin folder with the istioctl command-line utility, plus a ton of manifests, samples, add-ons, demos, tools, and a bunch of other cool stuff you can explore. What we're going to need is the Istio CLI, so I grab the istioctl binary, move it to /usr/local/bin, and give it execution rights with chmod. Now I can say istioctl, and we can see the command line is installed within our little container. Then I move this istio folder from our root to a temp directory, just so we can refer to it later.

Istio also has a pre-flight check command: istioctl x precheck. This queries our cluster and does a pre-flight check before we install Istio, to make sure it's compatible with our Kubernetes cluster. If we run it, we can see it does a bunch of things: it double-checks that it can talk to the Kubernetes API, checks that it's compatible with this version of Kubernetes, notes that Istio will be installed in the istio-system namespace, does a bunch of Kubernetes setup checks and sidecar injector checks, and then gives us a message saying our pre-check has passed.

Now, before we install Istio, it's very important to understand that Istio has a concept of configuration profiles. This is very well documented under the configuration profiles page in the documentation. There are six deployable profiles: default, demo, minimal, remote, empty, and preview. These are pretty self-explanatory and well documented, and the docs show a little matrix with all the components installed by each profile. The default profile is the one recommended for production deployment, so that's the profile I'm going to be taking a look at today. I've put a link in my document to the configuration profile documentation, so be sure to check that out. We can see the list of profiles by running istioctl profile list.

Then we can install Istio by running istioctl install and setting a profile; I set my profile to default, which installs Istio into the istio-system namespace. If I run that, it takes a couple of seconds to complete, and once it's done it prints a few messages: Istio core installed, istiod installed, ingress gateways installed, add-ons installed, installation complete. I can confirm this by running kubectl get pods in the istio-system namespace, where we should see everything up and running, and I can also run istioctl proxy-status to see the proxy status of the cluster.
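Roughly, those installation steps look like this; the Istio version is my assumption (the video is from late 2020, so a 1.7.x release), and any recent release follows the same pattern:

    # Download Istio and put istioctl on the PATH
    curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.7.3 sh -
    mv istio-1.7.3/bin/istioctl /usr/local/bin/istioctl
    chmod +x /usr/local/bin/istioctl
    mv istio-1.7.3 /tmp/          # keep the samples and add-ons for later

    # Pre-flight check, then install the default profile
    istioctl x precheck
    istioctl profile list
    istioctl install --set profile=default

    # Verify
    kubectl -n istio-system get pods
    istioctl proxy-status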
Now let's go over the basic architecture of Istio. If we say kubectl get pods in the istio-system namespace, we can see a pod called istiod, and this is the pod that makes up the control plane of the Istio service mesh; we can see it in the diagram over here. istiod is responsible for injecting sidecar proxies into the services that are part of the mesh. The applications that opt in become part of the service mesh, and all these meshed applications form what's called the data plane. Within istiod there's the Pilot component, which is responsible for traffic management and for injecting and managing the lifecycle of the Envoy sidecar proxies that get injected into our service pods. The Citadel component of istiod is basically the certificate authority, which helps achieve mutual TLS between services that are part of the mesh. The Galley component in istiod is basically what translates Kubernetes YAML into a format that Istio understands, which also helps Istio run outside of Kubernetes.

Now that we have an Istio service mesh running in our cluster, it's very important to note that none of our services and applications are part of the mesh yet. Istio provides the capability for services to opt in to the mesh, and there are two ways to do that. The first way is to simply label a namespace, which means all pods running in that namespace become part of the mesh, so you can mesh an entire namespace at once. The second approach is to use istioctl to grab the deployment YAML and inject the Envoy sidecar proxy with istioctl. This is a more manual approach, but it allows you to opt in only specific deployments, which is useful if you want a staged approach where you install Istio, apply it only to certain microservices until the technology matures within your team, and then proceed to mesh the entire namespace.

To have all the pods in the default namespace join the service mesh, I can apply the istio-injection=enabled label to the namespace; once that label is applied, Istio will inject the Envoy sidecar proxy into any new pod that gets created. So I label the default namespace with istio-injection=enabled, and if we run kubectl get pods, we see nothing happens: the label only takes effect for new pods. To have Istio inject the sidecar proxies into each pod, I can say kubectl delete pods --all, which forces Kubernetes to create new pods, and when it does, Istio injects the sidecar proxy into each one of these microservices. If we do kubectl get pods, we can see there are now two containers within each pod: our application container plus the sidecar proxy. After a couple of seconds, kubectl get pods shows all of our services are now meshed, so our entire namespace is basically meshed using this label concept.
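A minimal sketch of the namespace opt-in described above:

    # Mesh everything in the default namespace going forward
    kubectl label namespace default istio-injection=enabled

    # Existing pods are unaffected; recreate them so the sidecar gets injected
    kubectl delete pods --all
    kubectl get pods    # each pod should now show 2/2 containers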
If you want to manually mesh deployments, you'll need to use istioctl, so let's use the manual injection method on our ingress controller. Kubernetes has a way of getting YAML out of the cluster: I can say kubectl get deploy in the ingress-nginx namespace for the NGINX ingress controller deployment and output it as YAML, and kubectl spits out all the YAML for that deployment to the terminal. That allows us to pass the YAML to the istioctl command line, so I pipe it into istioctl kube-inject. When I run that, it also spits out YAML to the terminal, but it's important to note, if we take a look at the YAML it produces, that it still contains our initial NGINX ingress controller deployment, but it also injects a sidecar proxy. To inject that sidecar proxy into our deployment, we have to pass the resulting YAML to kubectl apply. So, to summarize: we get the deployment, output the original deployment as YAML, use istioctl kube-inject to inject the sidecar proxy YAML into our original YAML, and then apply the output back to the cluster. If I paste that, our deployment is reconfigured, and after a couple of seconds we have our NGINX ingress controller pods here, both now meshed. Because we've created new pods, I have to jump back to the terminal where I had the port-forward command, hit ctrl-c, and port-forward again. And that's as simple as that: all our microservices, including our ingress controllers, are now part of the service mesh.

Now, to simulate some traffic to our application, instead of just refreshing the browser page, let's write a small script that runs in a loop and continuously makes web requests to our microservice architecture. I open up a new terminal window and run a small while loop: while true, make a request to servicemesh.demo/home/, then, to simulate the call to the playlist API, curl servicemesh.demo/api/playlists, and then sleep for one second. This runs continuously, making a request to the browser page as well as the playlist API every second. I copy this, paste it into the terminal, and leave it running in the background.
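Sketches of both steps: the manual injection pipeline (the deployment name is an assumption) and the traffic-generation loop, written here as a bash loop:

    # Manually inject the sidecar into just the ingress controller deployment
    kubectl -n ingress-nginx get deploy nginx-ingress-controller -o yaml \
      | istioctl kube-inject -f - \
      | kubectl -n ingress-nginx apply -f -

    # Generate steady traffic against the site and the playlist API
    while true; do
      curl -s -o /dev/null http://servicemesh.demo/home/
      curl -s -o /dev/null http://servicemesh.demo/api/playlists
      sleep 1
    done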
Now, the cool thing about having all our services in a service mesh is that the proxies running as sidecars in our applications expose rich telemetry about all the network traffic between these services, and we can visualize this telemetry in two ways: Istio comes pre-shipped with a Grafana add-on as well as a Kiali add-on. These are dashboards we can deploy that allow us to look into the network. Let's take a look at the add-ons. If I do an ls on the temp folder we used earlier, we can see the whole Istio directory that we downloaded; inside it is a samples directory, inside that an addons directory, and in there we find grafana.yaml, jaeger, kiali, and prometheus. These are the Istio add-ons we can install.

To get some telemetry out of the Envoy proxies, we can run kubectl apply on the prometheus YAML. This deploys Prometheus into the istio-system namespace, and it starts scraping metrics out of each of the proxies running in our service mesh. To get a dashboard, we can deploy the Grafana add-on: kubectl apply the grafana YAML in the istio-system namespace, and that creates a Grafana deployment there. If I do kubectl get pods in the istio-system namespace, we can see we've deployed Grafana as well as Prometheus. Now that we have Grafana installed, we can access it with the kubectl port-forward command: kubectl port-forward to the Grafana service in the istio-system namespace on port 3000.
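Something like the following, with the extracted Istio directory assumed to be under /tmp as above:

    # Telemetry: Prometheus scrapes the Envoy sidecars, Grafana visualizes it
    kubectl -n istio-system apply -f /tmp/istio-1.7.3/samples/addons/prometheus.yaml
    kubectl -n istio-system apply -f /tmp/istio-1.7.3/samples/addons/grafana.yaml

    # Open Grafana on http://localhost:3000
    kubectl -n istio-system port-forward svc/grafana 3000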
If I run that, I can head over to the browser on localhost:3000, and this is the Grafana landing page. The first dashboard we want to look at is the Istio mesh dashboard, which has a summary overview of all the namespaces and services that are meshed in the cluster. On the left we have services, and in the middle we have workloads, which are two separate dashboards. We can see videos-web, which takes the traffic for the browser page, as well as the videos API and the playlist API, along with the number of requests per second, the latency percentiles, and the success rate. This is very useful for setting up alerts and having an overview of the networking in your cluster.

The services dashboard on the left is more about the Kubernetes services. If we go into something like the playlist API, we can see the number of requests per second, the server success rate, and the duration latency percentiles, and if we scroll down we can see the service workload: things like duration by source, request sizes, and response sizes.

In the middle we have the workloads dashboard, which is the interesting one. If I go into the playlist API, we again see incoming request volumes, success rates, and request durations, plus TCP client traffic, which is probably the requests to our database. And this is where it gets interesting: we can track inbound workloads, so we can see all the traffic coming into the playlist API from the NGINX ingress controller. We can see where the traffic is coming from, the success rates, the incoming request duration by source, the request size by source, and the response size by source, which gives us a good overview of the inbound workload. If we go to the outbound services, we can see the outbound calls: we know the playlist API makes internal calls to the videos API, and we can see the success rate, the request durations and sizes, and the outbound TCP connectivity, which is the calls to our playlist database.

Back on the landing page, we can drill down into the videos API and see the same things there: success rates, request volumes, and durations. For the inbound calls, remember the inbound traffic is from the playlist API, and we can also see that all requests have mutual TLS between them, so we see secure traffic here: all inbound requests come from the playlist API, with response sizes and request sizes by source. For the outbound calls, we only see database TCP connectivity to the videos database. And we can see the same thing for videos-web, our web server serving the landing page. Because we're doing a sleep in our loop, we have roughly just under one request per second hitting videos-web, and we can see the success rate, the request duration, and the inbound workloads. Since all requests come through the NGINX ingress controller, the inbound workload is all NGINX ingress controller traffic, with duration by source, request size by source, and response sizes, and there is no outbound connectivity here.

This is what's called black-box metrics, and it's the biggest benefit of having a service mesh: we get all of this telemetry for free, without making any code changes. We get all this network observability regardless of programming language; all we have to do is add our service to the service mesh.

Another similar dashboard that's really useful is the Kiali dashboard, and it's in the add-ons directory as well. If I list the samples/addons folder, we see grafana.yaml is there, and we also have kiali.yaml, so we can deploy it by saying kubectl apply on that YAML file. If I do that, it deploys Kiali into the istio-system namespace. The first time you run this you'll get a CRD error; that's because Kiali generates some CRDs. Give it a couple of seconds and run the command again, and this time it passes without any errors and the CRDs are created. If I do kubectl get pods in the istio-system namespace, we can see the Kiali operator is being created. Similar to Grafana, we can port-forward: kubectl port-forward in the istio-system namespace to the Kiali service on port 20001. If I run that, it exposes the Kiali endpoint on localhost:20001.
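A sketch of the Kiali steps, again assuming the /tmp extraction path and the standard service name:

    # Deploy Kiali; if the first run fails with a CRD error, wait a moment and re-run
    kubectl -n istio-system apply -f /tmp/istio-1.7.3/samples/addons/kiali.yaml

    # Open Kiali on http://localhost:20001
    kubectl -n istio-system port-forward svc/kiali 20001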
Now, Kiali is very similar to Grafana in that it shows us workloads as well as services, with similar metrics to what we saw before. If we click on workloads, we can see all our workloads that are part of the mesh, and it also allows us to drill in. I can go into the playlist API and go through the metrics at the top: if we click on traffic, we can see the inbound traffic coming from the NGINX ingress controller and the outbound traffic going to the videos API and the playlist database. We can look at the inbound metrics, which are very similar to what we saw in Grafana, and the outbound metrics are also very similar. But the cool feature Kiali has is its graph section, which shows you a graph of the traffic, so you can actually visualize how traffic flows in your cluster. We can see all the traffic coming from the NGINX ingress controller, passing through, and hitting videos-web, which is the one loading in the browser. We can also see subsequent requests from the browser coming through the NGINX ingress controller and hitting the playlist API, the playlist API making calls to the playlist database, and the playlist API making calls to the videos API, which in turn calls its own database. So we get a nice graph of the network here as well.

The other thing Kiali has that Grafana doesn't is the ability to manage Istio configuration. As you start building out the Istio mesh and taking advantage of things like virtual services, you can come in here, filter by namespace, and look at the different Istio types: things like sidecars, service entries, gateways, destination rules, and virtual services. In here we'll be able to see things like traffic splits, retry rules, timeout rules, and canary deployment rules. To define these types of rules, we have to create some of these Istio objects ourselves, with the main Istio object being the virtual service.

Now let's say the team responsible for the videos API commits and deploys some buggy code to the Kubernetes cluster; there may be a lot of teams doing multiple deployments throughout the day. Let's see how we can use these observability tools to locate, track, and solve the problem. I've written some buggy code in the videos API that I can turn on using an environment variable. To simulate this, I say kubectl edit deploy on the videos API, go down to the environment variable section, set the environment variable called flaky to true, and save. This simulates someone checking in buggy code and deploying it to our cluster, and if we say kubectl get pods, we can see the change has been deployed.

Now we most likely have alerts firing to our development teams, and we can see there's a large failure happening in our cluster: the videos API as well as the playlist API are being impacted. Before we make assumptions about where the failures are, let's use the Istio dashboards to find the problem. The most logical place to look is the playlist API, since we know that's where all the traffic comes in. If we go to the playlist API workload, we can see the incoming success rate has dropped, and the request duration has spiked up by quite a lot. If we take a look at the inbound workloads, our NGINX ingress controller's requests per second has dropped a little as well, so the ingress controller is impacted by this change: it's receiving errors, starting right here, and there's been a huge spike in request duration, so requests are taking much longer to come back. We can also see that the outbound calls to the videos API are failing, with a huge spike in latency on those outbound requests. What's interesting about the outbound services is that if we look at the response codes, we're getting 503s from the videos API, and if we look up the 503 error code, it means service unavailable. So we're getting service-unavailable errors for network traffic going to the videos API. For interest's sake, we can jump into the videos API as well: its incoming success rate has dropped dramatically, and if we look at the inbound workloads and the response codes, we see 503s being sent to the playlist API. So there's an obvious issue with the videos API here, and if I check the logs to confirm, I can run kubectl logs on the videos API, grab the last 50 lines, and see a lot of exceptions: the server is panicking with the flaky error. So we can see how useful these observability dashboards are for locating network problems.

Now, while developers work on a fix, which in reality can take a bit of time, and since the errors seem to be intermittent (we do still have 200s happening), a virtual service may help us implement some form of auto-retry logic to mitigate the error until the fix is deployed. A virtual service is an Istio configuration that allows us to affect traffic routing; we use a virtual service to define what we want Istio to do with the traffic: things like automated retries, canary deployments, timeout settings, traffic splits, and more. To begin with virtual services, if we take a look at my kubernetes/servicemesh/istio folder, I have a retries folder with a videos-api.yaml file, and this is what the virtual service looks like. It's kind: VirtualService; I give it a name and tell it which host name to target. In this case I want all requests to the videos API, and I want to implement auto-retry on that traffic, so I specify the host as videos-api, since that's the service name the playlist API is configured to send requests to. I then say http, define a route destination whose destination host is videos-api, and then say I want to implement retries, where you can pass in the number of attempts you want retried and a timeout for every retry that occurs. But it's very important to understand the effects of auto-retry: too many retries add extra requests to the network, which can strain it, so it's important to dial in this value so that you do just enough retries to make the errors go away without putting too much strain on the network.
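For reference, a sketch of what that videos-api.yaml virtual service could look like; the attempt count, per-try timeout, and retry policy are illustrative assumptions, not the exact values from the repo:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: videos-api
    spec:
      hosts:
      - videos-api            # the service name the playlist API calls
      http:
      - route:
        - destination:
            host: videos-api
        retries:
          attempts: 10        # illustrative: enough to mask intermittent failures
          perTryTimeout: 2s   # illustrative timeout per retry
          retryOn: 5xx        # assumed policy: retry on server errors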
To deploy that virtual service, I just say kubectl apply on the istio/retries folder, and Istio grabs it and immediately starts auto-retrying that traffic. We should see the requests from the playlist API to the videos API start getting a higher success rate, and we can see that number going up; the playlist API has now fully recovered, and all requests to the playlist API are successful. That means if we go to our video catalog and refresh the page, there are no longer any errors, and if we go into the playlist API, the 503 errors have stopped and NGINX is receiving 200 status codes. So that's how virtual service auto-retry functionality can help us mitigate those errors. Now let's say a fix has come in: I can say kubectl edit on the videos API, set flaky back to false, and then safely remove that virtual service, as it's no longer needed.

Now let's say we want to implement a traffic split; maybe we want to send five or ten percent of traffic to a new web API or a new web interface. This might be useful for API testing. For traffic splits I have a separate virtual service, in the kubernetes/servicemesh/istio/traffic-splits folder. Let's say a development team has created a v2 of our videos web and wants to send 50 percent of traffic to the new v2. What I've done is create a new deployment, videos-web-v2, and a new service, so I say kubectl apply and deploy v2 of our videos web. This videos-web-v2 has a new header that's in development, which maybe the videos web team wants to test. I can do kubectl get pods, and we can see videos-web-v2 running side by side with videos-web v1. If we take a look in the traffic-splits folder, I have a videosweb.yaml with a virtual service showing how to split traffic for the videos web. Since all traffic comes in over servicemesh.demo as the host, we use that as our host, and we route to two destinations, videos web v1 and videos web v2, where we can specify a weight: we can say 50 percent of traffic goes to the one and 50 percent to the other. Now, this is probably more useful for API-type testing, because usually we don't want customers flipping between user interfaces.
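A sketch of that videosweb.yaml, with the service names assumed from the deployment described above:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: videos-web
    spec:
      hosts:
      - servicemesh.demo        # all traffic enters on this host
      http:
      - route:
        - destination:
            host: videos-web    # v1 service
          weight: 50
        - destination:
            host: videos-web-v2 # v2 service
          weight: 50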
I go ahead and apply that virtual service by saying kubectl apply on the traffic-splits folder, and if I go back to servicemesh.demo/home/ and refresh, we can see I'm hitting the new v2 service. If I refresh again, I'm back on v1; refresh again, and I'm on v2. So we're bouncing 50/50 between videos web v1 and v2: useful for API testing, maybe not so useful for UI testing.

Now, traffic splits have their uses, but in a microservice architecture many teams utilize feature toggles to turn certain features on and off, and we may not want traffic bouncing 50/50 between services. Usually what development teams like to do is set a cookie for a small percentage of customers, and that cookie value allows us to send only that portion of traffic to our new v2 videos web. By setting specific cookie values we can turn certain features on and off and send a portion of customers to different areas of our system. This is called a canary deployment.

For canary deployments, I go to the kubernetes/servicemesh/istio folder, where I have a canary folder with a virtual service named videos-web-canary. Similar to our traffic split, we target all traffic where the host is servicemesh.demo, and then under the http section I have two match patterns. The first match pattern looks for URLs where the prefix is slash, so basically all traffic coming into our videos web, and it specifically matches a cookie with a regex statement looking for version=v2 in the cookie header; all traffic matching this pattern is routed to videos web v2. To send traffic to v1, we add another match pattern that simply matches the URL prefix slash, without the cookie regex, and that traffic we send to the destination videos web, which is v1.
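A sketch of that canary virtual service; the exact cookie regex is an assumption along the lines described above:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: videos-web-canary
    spec:
      hosts:
      - servicemesh.demo
      http:
      - match:                # customers carrying the version=v2 cookie
        - uri:
            prefix: /
          headers:
            cookie:
              regex: ^(.*?;)?(version=v2)(;.*)?$   # assumed regex
        route:
        - destination:
            host: videos-web-v2
      - match:                # everyone else stays on v1
        - uri:
            prefix: /
        route:
        - destination:
            host: videos-web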
Before I apply that, I'm going to delete the virtual service we created for the traffic split, as it may interfere, and then I deploy this virtual service by saying kubectl apply -f on the istio/canary folder, applying the videos web YAML there. To try out our canary deployment, I access servicemesh.demo/home/ in the browser, and if I hit that page we can see we're on v1. What I want to do is press F12 to open the developer tools, head over to Application, expand Cookies, and add a cookie version=v2. Once I've set that cookie and open the page in a new browser window, we're now stuck on the v2 of our web interface, and all our traffic is going to this canary deployment of v2.

So that's a ton of information on the basics of Istio that will hopefully help you with your Istio journey. Be sure to check out the source code link down below and take it for a spin, and let me know in the comments down below about your service mesh experience. Remember to like and subscribe, hit the bell, and check out the community page in the link down below. If you want to support the channel even further, hit the join button down below and become a member. And as always, thanks for watching, and until next time: peace.

Info
Channel: That DevOps Guy
Views: 22,458
Keywords: devops, infrastructure, as, code, azure, aks, kubernetes, k8s, cloud, training, course, cloudnative, az, github, development, deployment, containers, docker, messagequeues, messagebroker, message, aws, amazon, web, services, google, gcp, istio, linkerd, openservicemesh, servicemesh, service, mesh, networks, traffic, tutorial, guides
Id: KUHzxTCe5Uc
Length: 33min 14sec (1994 seconds)
Published: Sun Nov 08 2020