Testing Strategies for Continuous Delivery

Captions
This is actually the first official meetup of the Test Fanatic group. For some context, it's the merger of two other meetup groups from last year, so this will be our first one. A few quick things: those of you who were members of the other groups will get another Calabash workshop — and I apologize to those who came in October; we lost the instructor the day before. The nice part is I should have a book done around the same time with everything I got out of his head, and I'll be double-checking it with Xamarin, who now own Calabash. If you don't know their mobile automation: Calabash is a Ruby (and now C#) Cucumber implementation for mobile test automation. They're actually here in the city, and they'll probably come out sometime this summer to talk about some of the new things they've implemented before their next big event.

I also got confirmation this morning that our next meetup will be in April — partly because I'm not going to be here the rest of this month, so I have no way to coordinate March. Ghost Inspector just got purchased by Runscope; a friend of mine runs Runscope, and he, or someone from there, will come and talk about Runscope, which is API testing and monitoring, and about the recent acquisition of Ghost Inspector, which adds web-based testing into their suite of tools. Someone from that team will come out in April to chat about what they do overall and the testing tools they have available. I'll actually be trying out Ghost Inspector next month myself to give a full write-up of my thoughts, and I used their Runscope API tool at my previous job to write full-blown test suites — it's not too bad.

All right, so we're all here for "Testing Strategies for Continuous Delivery." It's actually an extension of a talk I did almost ten hours ago at Developer Week, so I'll post the PowerPoint on our meetup group once I can remember the name of the app — someone asked me to try out their app to record the entire presentation I did this morning on building the CD pipeline, what we did to build out our continuous delivery pipeline from scratch. This one goes into more detail on the testing portion, which I think I covered there in about five minutes, so we'll expand on that.

I always like to give a little context to understand why we do our designs the way we do. Years ago we all went into this waterfall mode. In fact, when I joined Macy's about three years ago, we had just left that model of software delivery, where we'd spend months in development, then go into testing, and then finally a release cycle, and we'd moved to something a little more like this: we created iterations, sprints, in two-week units, where each of these blocks represents roughly a two-week interval. Our first attempt three years ago was: we'd spend two weeks developing the feature — we'd have dev stories, and then we'd have QA stories which would execute in the second cycle — then two weeks testing the hell out of it, with developers sitting there fixing every defect that came in. And at the very end — it's not drawn right, but it's a four-week cycle — we did a full-blown regression and merged the code from all the different projects. Back then we had 40 different projects working on the same app. So we kind of went to this model.
That was a challenge — it's in the other talk — but we had some problems with that model back then, so we made a lot of changes. Today every development project has its own environment; I think that was our biggest change from three years ago. Back then we had one environment for Macy's and one environment for Bloomingdale's, so whenever 40 teams checked in code, we didn't know who broke what. So we made a lot of changes there.

However, without really paying attention, we drifted into a model that at least my team is currently moving away from: because of the way we developed, developers got a little lazy about writing the unit tests, and even the component tests they were responsible for, and relied on their QA teams to write a lot of end-to-end tests. As nice as that is, it's actually a challenge when you're trying to do CD. I can tell you that trying to get regression libraries of 10,000-plus tests, like some of the teams have, done in a day — it can't be done. We'll have failures left and right, and it'll take us three or four days to actually finish a full testing cycle. The way we developed caused us to have this problem.

Digging in a little more, I found other issues. Even with this agreed cycle, we still kind of went into this mode: develop for two weeks, manually test for two weeks, hand it over to a different team to go off and write the automated test cases, and then actually commit those. For a lot of groups, because we delivered at the end of the commit tests, nobody outside really cared how it got delivered, but internally this eight-week process was kind of a pain for all the different teams involved.

So what did we do? We made a lot of changes. Let me walk through the pipeline a little. As I mentioned earlier, we added infrastructure and the ability to do a lot of development and testing in parallel. I just picked features A, B, and C here, but with our current infrastructure and toolset we can run about a hundred and fifty different development projects at the same time for the same application. We made some changes starting with the build stage, which is actually build-and-run — a Maven clean build running everything is what we do here, where we run all the unit tests — then our deployments, and then we run a series of build acceptance tests. That was the current strategy from QA: these tests must pass before we go any further, and if everything looks good, we sign off and build and deploy. I think most people are familiar with this kind of flow.

Even with this flow, we needed additional changes to the teams. We started trying to embed more experienced automation engineers to start writing tests faster, so testing became manual and automated at the same time; then we'd go into a full regression, and then a smaller regression for any fixes found afterwards. But our deployment time from start to finish for a feature was still eight weeks. It wasn't bad, it wasn't good — everyone just knew that if you wanted something, you were going to get it eight weeks later. And that wasn't enough as we started to get more competitive with other companies — you know, Walmart down there. Is anyone here from Walmart Labs? I figured I should check before I say anything. No? I mean, I have a lot of colleagues out there, and I think they're going through the same challenge of how to get stuff out faster.
So we started looking at the problems about six or seven months ago. I have a smaller development team which, from the outside world, doesn't have QA — so if anyone chats with Steve or me, just realize I don't actually have a QA team, I just have a team, which is hard for a lot of people who want to apply to my particular team. When I joined this group eight months ago, we took on a small set of features, and we realized our very first problem was way back here: on feature C, I check in code, it passes all the unit tests we wrote, we go off and deploy, and then we fail. It used to drive us crazy, because when we looked at those BAT failures, they had nothing to do with anything we'd checked in — it was some issue with an environment or a test, and a lot of times, I'll be honest, we found it was the way the tests were written. We wrote a feature that had nothing to do with that test, and we just could not get past this stage until either we talked the QA teams into fixing those tests, or we'd go in and (a) fix the tests ourselves or (b) hop into Jenkins and disable the job, because we were 100% confident the failures had nothing to do with the code we checked in. There were times we stayed until midnight because we couldn't get past this stage — at 11 o'clock I'd make a call to my boss and say, look, I'm disabling this job; do you need some kind of email where I verify this has nothing to do with the code we checked in?

That became a real problem. Why are we developing code when the CI/CD pipeline is supposed to protect us by actually knowing what to run? I think after the fourth or fifth time staying up till midnight to get past this, and the third time calling to get authorization to disable the jobs — by the third time we just stopped calling and disabled it — we realized we had more than a minor problem.

So we changed our approach. We ended up rewriting the job so that, for commits from my team, it runs a different set of tests depending on which group or which set of tests was written. For those in the Java world, you'll notice we actually call a unique set of tests that matches what we care about. That was one of the changes we made, because we were getting frustrated left and right: feature A is done, but we can't get past the gate because features B and C broke something. We actually spent some time rewriting our Maven job, and then went back into the test repo and rewrote a lot of the annotations so that only tests related to a particular story are allowed to run. For our Ruby tests that run against existing applications, we just ran by tags and made sure we tagged the feature files with the story we wanted. If there are any questions, feel free to ask — but that was basically the first thing we did, because we were getting a little frustrated that the tests we were running had nothing to do with our code changes.
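To give a rough idea of the shape of that change — the real annotations, group names, and story keys are different, but with JUnit 4 categories it looks something like this: mark a story's tests with a category class, and have the Jenkins job ask Surefire for just that group.

```java
package com.example.stories;

import org.junit.Assert;
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Hypothetical marker interface -- one per story; our real names differ.
interface Story1234 {}

// Tag the tests that belong to this story with its category.
@Category(Story1234.class)
public class CheckoutPriceTest {

    @Test
    public void priceIncludesTax() {
        Assert.assertEquals(1.08, applyTax(1.00), 0.0001);
    }

    // Stand-in for the production code under test.
    private double applyTax(double price) {
        return price * 1.08;
    }
}
```

The Jenkins job then runs roughly "mvn clean test -Dgroups=com.example.stories.Story1234" (Surefire's groups property understands JUnit categories), and on the Ruby side the equivalent is just "cucumber --tags @STORY-1234" against feature files tagged with the story.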
We did that for a while, and what we discovered is that it's actually not too bad: now that the Jenkins job is working through and picking up the right tests to run, we're passing. But we also realized we should probably be very careful with that command, so we ended up changing how our team works. For the last three months my team has actually changed; we had a certain set of rules — features had to be done in five days, that was my goal. So we did a lot of things. First we went through a story walkthrough, looked at what we were building, and said, break that down smaller — and we were breaking things so small that they were literally done in a day; I think a database change became its own feature.

Then we made a small change to the automated test job: in that Jenkins job, not only does it run that command, but if the command returns "no tests found" we stop the whole thing and send off an email saying, hey, you don't have any tests, I'm not even going to bother building. So we now scan ahead for a lot of the stories, and it actually changed my team's behavior: get the tests done first, in true TDD fashion, before anything else, and then go off and start checking in the code.

The one big behavior change we made was telling everyone that the tests we write the first time aren't going to be perfect. I think that's how our other teams got to the point of writing the automated tests after the development code — they didn't want to write tests that were bad. Here we went: here's a mock-up of what it's going to look like; write a bunch of tests, and the Selenium commands will get close. I think we were about 85% accurate in writing those automated tests up front, just because, one, we knew the application, and two, we were lucky to have a product manager who put enough detail in the screenshots and mock-ups that we could go off and write tests without ever writing any of the code. That was probably the biggest change to the team in that respect.

Once we got that, we spent a lot of time cleaning up the functional tests. From the story walkthrough to the review build it now takes us about three to four days, which is a drastic difference from two weeks of development and two weeks of testing. Then we spend a day cleaning up our functional tests — a lot of cleaning up element IDs, but more importantly (and this tells me what the next slide is), we actually started to remove a lot of the functional tests when we compared them to the unit tests. For one of the features we developed, we went from 15 acceptance tests down to 6 or 7, because we found out eight of those tests did exactly the same thing as the unit tests. And then we realized, okay, take that out — that's 15 minutes of runtime when everything is already covered at the unit-test level.

So now, at least for one or two project teams, we've changed to this model, and we have our new goal in mind: we're trying to get the word out to our teams to switch how we do our testing. It's kind of scary for QA to go, wait, you have no end-to-end tests, there are no UI tests — but what we're actually finding is that as we go through this review process and push tests down, our QA teams now have the ability to go off and write real, meaningful tests that go from one side of the product all the way to the other, in a much more critical fashion than we could before.
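Going back to that "no tests found" guard — the real thing is a step in the Jenkins job, but the logic is roughly this sketch: after the targeted run, check whether Surefire actually produced any reports, and bail out (and notify) if it didn't. The path and the notification here are placeholders.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Rough sketch of the "no tests found" guard in our Jenkins job.
public class NoTestsGuard {

    public static void main(String[] args) throws IOException {
        Path reports = Paths.get("target", "surefire-reports");
        boolean ranAnything = false;

        if (Files.isDirectory(reports)) {
            try (Stream<Path> files = Files.list(reports)) {
                ranAnything = files.anyMatch(p -> p.toString().endsWith(".xml"));
            }
        }

        if (!ranAnything) {
            // The real job emails the team here instead of just printing.
            System.err.println("No tests matched this story -- not bothering to build.");
            System.exit(1); // non-zero exit fails the Jenkins build
        }
    }
}
```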
I'm part of the merchant systems, so not on the website side. To give you an idea: getting a product onto the website is actually a lot of work — I think there are at least five main systems involved in guaranteeing the product goes up on the site. One of the most annoying tests I used to see all the time was: log in to the application, do a search, and see if it returns a result — sixty-something tests did that. We found out we could get rid of those, and now we actually have teams focus on: okay, I have a product, I'm going to do something to that product, and then make sure it updates on the site. Now that we have more focus in that area, we can start delivering our internal systems to be a little more stable than they have been in the past, because we can focus on other areas.

Our next biggest thing, once we started moving our tests down to the unit level, was that we actually went off and measured our unit test coverage. We used to do that in the past, and it was just a maintenance effort — I think at some point we stopped doing it, because again we ended up relying on the functional tests — so we made a conscious effort to bring it back. This is just a snapshot of one of the projects I'm on, where we spent a lot of time getting close to 100% test coverage. One of the things we learned after going through this exercise is that we're still going to ship bugs — it has nothing to do with that yellow line, it's just something we learned over time: we only test the code we actually have. If we don't code for edge cases, for null pointers, anything like that, our code coverage still doesn't tell us anything.

In this particular case, it turned out we wrote the wrong tests — or rather, we had the wrong functionality in this particular app; it had nothing to do with what we intended, and the whole time we'd left this code in there until we actually did coverage. These couple of lines — it turned out that no matter what we did, we were never going to hit them, other than by calling them directly. They should have gone with another product or another service, but for the longest time we left that code in there going, I think we need it — and it turned out we never really used it. That was one of the things we learned from code coverage: it gave us a little insight to go, huh, we didn't actually do that.

All right, so let me walk through what we're actually doing today. The way our current world works: we check in to a story branch or a feature branch, and we go off and build a dynamic environment — so if I have feature A, I'll have environment A. We build it, and right now we still run those tests that could potentially fail for reasons that have nothing to do with us; the problem is we can't change that, because at this point there are 150 different teams and not everyone has tests to go through, so we still have to run those tests. And then — "loop" is just a code word we gave it — this step actually goes into VersionOne, which we use to find out who actually committed that code. If it came from my team, it goes down a different path.
Because we have a separate set of commits, we go off and extend the environment a little longer, because we usually have a longer set of tests, and then we execute what we're calling right now the smart job, because it goes off and picks the test cases based on what it reads in VersionOne and JIRA — okay, these are the tests I need to run for this feature branch — and then it notifies us. At some point up here, somebody has to actually dive in and do a code review; for us that's mostly anyone who didn't develop the feature or the tests — anyone who didn't commit anything into that feature branch goes off and does the review. Then we merge that code to master and do a couple of deployments. The current process is on the left, because we didn't want to break our existing pipeline for most of the features that go into the regular QA cycle we do daily. For the slightly smarter path on our side, we go and start running the targeted regression, deploying to a dynamic environment — we don't check in that many features a day — and we execute what we call smart prime (you can't see it here): that's basically going back to JIRA and VersionOne, finding all the components, and hoping we've made the effort to tag all our tests properly, so that we know which tests to run based on a component or the test class. Then in our case we execute the BAT tests again, just to make sure we didn't screw anything up, and then we go off to the mini regression.

So, targeted regression — that's an interesting one that we're still spending time rewriting today. Right now, when we go into the git commit, we take a look at JIRA, find the name of the story or the story identifier — we have some of this, and Alex will actually execute a bunch of API calls to JIRA, but we need to rewrite it now against VersionOne — and try to get the component names and pass them down into our testing framework. Earlier you saw the Maven command and the Cucumber command for the story branches; this is basically replacing those annotations and tags with the appropriate component or list of components. At least for Cucumber we can do multiple components; I don't think we have multiples working yet for the JUnit framework.

Where we're heading now, or very soon, is a much smaller version of what you saw earlier. The previous diagram took about two and a half hours to run through; this one is taking us roughly an hour to an hour and fifteen minutes right now, and I think part of the problem is that the targeted regression job is still not picking up the right tests, so we're running extra tests.
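The lookup behind that smart job is small — roughly this shape, though the URL, fields, story key, and tag handling below are made up for illustration; today it talks to JIRA and is being rewritten against VersionOne:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Sketch of the "smart" lookup: given a story key, ask the issue tracker for
// its components and turn them into tags for the targeted test run.
public class ComponentLookup {

    public static void main(String[] args) throws Exception {
        String storyKey = "STORY-1234"; // would come from the git commit / branch name
        String body = fetch("https://jira.example.com/rest/api/2/issue/" + storyKey
                + "?fields=components");

        // A real version would use a JSON library; a crude regex keeps the sketch short.
        List<String> components = new ArrayList<>();
        Matcher m = Pattern.compile("\"name\"\\s*:\\s*\"([^\"]+)\"").matcher(body);
        while (m.find()) {
            components.add(m.group(1));
        }

        // Hand the component list to the test run as Cucumber tags
        // (comma-separated tags are ORed in the Cucumber version we use).
        System.out.println("cucumber --tags " + components.stream()
                .map(c -> "@" + c)
                .collect(Collectors.joining(",")));
    }

    private static String fetch(String url) throws Exception {
        HttpClient client = HttpClient.newHttpClient(); // auth omitted in this sketch
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```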
So after we'd made all these changes to the framework and to Jenkins, we basically had one minor problem left. Let me pick one here — I'll pick the last test run that passed and pull up its test results. We needed something bigger: being able to aggregate all the test results and correlate them back and forth. For example, knowing the failures here, then looking at the previous job, and identifying whether those two sets of test failures are the same between the two builds or different. Right now we do that manually: go into Jenkins, click, say, oh, these are the three test cases that failed; click the previous build — oh, those are still the same three, so hopefully someone's still working on it — or it's a completely different set of test cases. And this is just our Jenkins with the right tab open; imagine you had to go look across all your applications at the history of all the different test failures, back and forth. That was actually our biggest challenge, even after making all these changes.

Luckily — although I don't have a copy working, where did it go... yes, luckily — we partnered with XebiaLabs, not to build out a new test reporting dashboard, but we took their existing one, and they've actually started working on integrating it into our CI/CD pipeline. Unfortunately I couldn't get the real one up and running, but I have a local copy, and I also have Brian back there from XebiaLabs in case I get any of this wrong, because this is actually a different version than the one we've been using — for one, there's a login page now; the first version we had didn't have a login page. Luckily it's the same password the other tools we use have.

So this is basically what will become our new dashboard once we hook Jenkins and everything else into it. One of our biggest challenges is that for one application we might have 40 different jobs representing the different components being tested, and we'd have to read all of those to figure out which ones actually broke. There are a lot of features here — where's the detail view... Brian, did we move the different reports out of here? Okay, I'll find something. So one of the biggest things we have — I don't have the plug-in installed on this version, that's what it is, no worries — what we have now, or will have soon, is that we can see our basic failures and passes, and the plugin I don't have installed here adds a little graph, basically a bar chart, that tells us which tests passed last time but failed this time, and which failed last time but pass today. That was probably one of the biggest enhancements: clear visibility into what happened between every single build. And imagine this is actually 20 or 30 different Jenkins jobs executing 30 sets of tests and sending it all into one dashboard. Currently we have to go read every single Jenkins job — or rather the testing teams read every single job and kind of aggregate it themselves; it's a mystery to me how they roll it all up.

We can also track all the different versions and commits, so we now have a nice build view where I can tell which ones went in, and then I can quickly look and see: okay, of the failures — the admin ones, say — did we even make changes to the admin code, such that there should be any deltas? Ideally, I think the vision for this is that "newly passed" and "newly failed" should be zero if we didn't touch that code; those components shouldn't even show up. So if I only touch the integration service, then all these other components — if we get our code and our tests right — shouldn't have any graphs on there at all. That's kind of the goal.
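The comparison itself is just set math — this is only a rough illustration of the newly-failing / newly-passing split we want across builds, not how the XebiaLabs dashboard is actually implemented:

```java
import java.util.Set;
import java.util.TreeSet;

// Illustration only: given the failed-test names from two consecutive builds,
// split out what newly failed and what newly passed. The dashboard does this
// (and much more) across all the Jenkins jobs for an application.
public class BuildDiff {

    public static void main(String[] args) {
        Set<String> failedLastBuild = Set.of("LoginTest", "SearchTest", "PriceTest");
        Set<String> failedThisBuild = Set.of("SearchTest", "PriceTest", "InventoryTest");

        Set<String> newlyFailing = new TreeSet<>(failedThisBuild);
        newlyFailing.removeAll(failedLastBuild);   // failing now, passed before

        Set<String> newlyPassing = new TreeSet<>(failedLastBuild);
        newlyPassing.removeAll(failedThisBuild);   // failed before, passing now

        System.out.println("Newly failing: " + newlyFailing);
        System.out.println("Newly passing: " + newlyPassing);
    }
}
```

When both of those sets are empty for a component we didn't touch, that's the "zero graphs" goal I just described.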
Diving down even more, we can actually drill into the individual test case and look at the details. We have that in Jenkins with the JUnit tests if we scroll down, but what I'm finding is that at a high level I can make quick glances across the board for the entire application: if I know that this commit, this feature that came in, only touches these portions of the application, I can quickly scan through these pie charts and visually do a quick inspection to make a decision.

Let's drill down even more — it's not doing it; okay, sorry Brian, this is as far as this version I have goes, I can't dive down any further. All right, so here I can look at the history of the individual test cases. You'll notice down here we're trying to have a flakiness graph — it shows the execution of a particular test over time; I'll be honest, we haven't played with that one too much. And then we have duration and overall failure results. I think that might actually be the last slide. So, all in all, that's the testing strategy we're doing on the merchant systems we're working on for 2015. I'll leave it open to questions — it also looks like I'm running out of power. Yeah, way in the back — my team's test suite? Sure, no problem.

The question is the percentage of time we spend maintaining tests. For my team that's probably no more than 10 to 15 minutes a day. We wrote the tests, and we run them through an older version of XL TestView, so we can quickly look and get rid of tests that have been flaky or are perceived as flaky. I don't see anyone spending more than 10 or 15 minutes a day, unless we have a real doozy of a failure — worst case, a day — but on average, I asked someone recently and we hadn't even looked at the tests in a long time, because we have this report that says either everything's good, or everything fails quickly and we know what it's tied to. I can find out what everyone else's test-maintenance time is, but I know for my team it's not that long — it's actually a very small portion of the day.

[When do you write the unit tests?] It kind of depends, back and forth — my boss is here laughing because we both recently wrote the actual code before the unit tests — it depends on crunch time. We're trying to be diligent about it, but in our workflow we have to have the unit tests before we even get lead approval, so whenever we actually write them, we're not too strict; it's enforced in the approval process. We're hoping at some point to have an automated gate that says "your unit test coverage is actually worse than last time, based on the percentage of code checked in and the tests." For right now, because of that lead-approval step, my leads at least are pretty diligent about making sure the unit tests are in.
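That gate doesn't exist yet — if we build it, the logic would be something like this sketch: compare the coverage number for this build against the previous one and block the approval when it drops. The numbers here are hard-coded placeholders; the real ones would come out of the coverage reports.

```java
// Sketch of the coverage gate we'd like to add to the lead-approval step.
// Not built yet; the two numbers would be read from the previous and current
// builds' coverage reports rather than hard-coded.
public class CoverageGate {

    public static void main(String[] args) {
        double previousLineCoverage = 0.93; // last build
        double currentLineCoverage  = 0.91; // this feature branch

        if (currentLineCoverage < previousLineCoverage) {
            System.err.printf("Coverage dropped from %.1f%% to %.1f%% -- blocking approval%n",
                    previousLineCoverage * 100, currentLineCoverage * 100);
            System.exit(1); // fail the approval stage
        }
        System.out.println("Coverage OK");
    }
}
```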
[Audience question] Unfortunately it hasn't really changed our workflow for new test cases, because these are new features — from a technical standpoint we don't know everything up front. The one gain we get out of it is that the very first commit might have taken, say, 20 minutes to run the test suite, but by the time I go into lead approval, the total runtime of the functional tests — like the suite that went from 15 tests to 7 — went from about 25 minutes down to, I think, 6 minutes. So there's a real difference when we get rid of redundant tests at that level, because opening up a browser, waiting for the browser to communicate, all those little things add up. Especially for that first feature, we found out almost everything could be done lower down: we actually redid everything in DBUnit, since a lot of it was really just calling a database query to duplicate something — create another row entry. So we got rid of a lot of tests that opened up the browser, clicked duplicate, and went through the whole process.

[Audience question] Ah, interesting. So, besides the code coverage runs, we're currently building on the fact that XL TestView actually lets us put in a bit of an algorithm — we can put some logic into it. I think our first version right now says ninety percent passing and we make some judgment off that, but as we build this thing out we'll start writing additional logic that says these particular sets of tests must pass. One of the things that's nice about my team is that we also deal with production issues, so when we get a production issue, we go through the same pipeline and build tests for it, and we can mark those as super critical — never have that happen again, or, you know, this guy over here who's not paying attention to me is going to have my neck. So we're currently writing a lot of Python scripts — well, actually, we're learning Python — so we can write scripts against XL TestView and start doing a little logic on the test results.
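Our real scripts are Python against the dashboard's data, but the first cut of the logic is roughly this shape — a pass-rate threshold, plus any test flagged as critical (usually one written off the back of a production issue) must pass outright. The names and the 90% number here are just illustrative:

```java
import java.util.List;
import java.util.Set;

// Rough shape of the result-gating logic (ours lives in Python scripts that
// query the dashboard; names and thresholds here are illustrative).
public class ResultGate {

    record TestResult(String name, boolean passed) {}

    public static void main(String[] args) {
        List<TestResult> results = List.of(
                new TestResult("SearchTest", true),
                new TestResult("DuplicateProductTest", true),
                new TestResult("PriceFeedTest", false));

        // Tests written from production issues are marked "super critical".
        Set<String> critical = Set.of("PriceFeedTest");

        long passed = results.stream().filter(TestResult::passed).count();
        double passRate = (double) passed / results.size();

        boolean criticalFailed = results.stream()
                .anyMatch(r -> critical.contains(r.name()) && !r.passed());

        boolean ok = passRate >= 0.90 && !criticalFailed;
        System.out.println("Pass rate " + passRate + ", gate " + (ok ? "passes" : "fails"));
    }
}
```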
[Audience question] I don't think it's the number of bugs, it's the stupid bugs that make it to production. We'll never release — well, I hope no one's counting on us to release bug-free code; there are always going to be things we don't know about. But something as stupid as "I can't log in," or a null pointer you didn't handle, or entering zero for a price — we'll get dinged for that. Something way out there that we just didn't see? I think we're okay, because with our new CI/CD pipelines, if we can get to the point where we can deploy again the next day, I don't think people are going to get too mad at us if we miss something.

[Is the deployment a manual push-button today?] No, today it's not. Right now it's actually triggered by a commit: when I check in a piece of code, there's no button I push — the one manual action is me checking in a piece of code. I do actually have a red button I'm programming right now, so that if everything goes green I can press it and have it deploy. Unfortunately, everyone wants the giant red button, and right now it's a tiny little button — I think I bought the one from geek tools and it's only about this big. So our solution is that I'm going off and writing an iPad app that just puts up a red circle, although I'm toying with the idea of a fingerprint rather than just pushing the button. We could print our own button, but it's probably cheaper to buy me an iPad. Any other questions?

[Do you keep your environments consistent?] Yes, we do. One of the challenges — my previous talk this morning covers how we started to get a little more control over our development environments — to overcome that, we're playing right now with Docker, containerizing our apps, so that development, the testing teams, and so on all have exactly the same pieces. Right now it's slightly different when I build something locally on my laptop, when I build something in CI, and when we hand it over to release engineering — a slightly different build in terms of environment configuration, network setup, lots of slight differences. I know for one feature our configuration files were completely different, and we never knew they had a different set of configuration files. So we're working our way toward getting better at that. (No, it's fine — I know you're from a testing company.) I think QA, in terms of size, is half of production right now; my CI environments, where I actually do a lot of my work, are probably one fifth or one sixth. But ideally we're trying to get to the point with Docker where we don't really care about the size difference — the piece of code and the configuration travel together. That's what we're trying to shoot for this year.

[Do you do performance testing in CI?] We actually do — we just started doing performance testing within CI. This last year we spent a lot of time testing two different sets of code without knowing which one was better, so we had Jenkins spin up two environments to compare the two pieces of code and see which one was actually the better performance improvement. This year we're going to focus on getting more of that performance testing running daily in CI. Right now it's unfortunately just my project team, since I also have a performance engineer who can write those scripts using JMeter — Jenkins has the performance plug-in, and XL TestView, which I haven't gotten to test yet, has a JMeter enhancement as well. So when I come back from vacation we'll start implementing that inside the pipeline on a regular cadence — probably not per check-in right now, because of infrastructure, but at some point per check-in.

Just to clarify: when we were splitting up those builds, they were the same functionality. We had a problem with a save that took 10 seconds, and two of us were arguing — if we do it this way it'll be faster, no, this way will be faster — so to prove it, we went off and wrote Jenkins jobs to build two separate environments with two separate pieces of code and do comparisons between the two. I know the site's down so I can't show it, but we could actually see, comparison-wise, which of the two was faster, and it got kind of competitive between the folks — "oh, mine was faster than yours." In terms of the individual pieces, that's kind of what we're doing now with individual features: running a smaller set of tests against them, and then at some point, when we hand it over to release engineering, they run the full regression. Any other questions?

[What do you mean by a full regression?] Okay, so for me a full regression is running the set of tests that actually adds value for the code changes I've made. Before I can do that, I first have to own a baseline that I know is close to 100 percent good — a reliable regression library — and then I'm just making incremental changes. A lot of other teams treat full regression as "run every test you have." I'm kind of against that approach: a couple of years ago I instrumented the build to measure what they were actually testing, and even though they ran every single test they had, they covered 45% of the app. So right now my team is trying to find a balance between being fully tested and not just saying "100% regression" — trying to separate those by finding data that says: here's the test, here's the code change, here's the code coverage based on that change, and with high probability these are the tests we have to run to validate it.
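We don't have that selection working yet; the idea, in sketch form, is to keep a per-test record of what code it exercises and intersect that with the classes touched by the change. The map and names below are made up — a real version would build them from per-test coverage data:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Sketch of coverage-based test selection: run the tests whose recorded
// coverage overlaps the classes touched by a change.
public class TestPicker {

    public static void main(String[] args) {
        Map<String, Set<String>> coveredBy = Map.of(
                "PriceFeedTest",        Set.of("PriceService", "FeedParser"),
                "DuplicateProductTest", Set.of("ProductService", "ProductDao"),
                "SearchTest",           Set.of("SearchService"));

        // Classes touched by the commit (would come from the diff).
        List<String> changed = List.of("ProductDao");

        Set<String> testsToRun = new TreeSet<>();
        coveredBy.forEach((test, classes) -> {
            if (changed.stream().anyMatch(classes::contains)) {
                testsToRun.add(test);
            }
        });

        System.out.println("Targeted regression: " + testsToRun); // [DuplicateProductTest]
    }
}
```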
It's a mixture, though — as I mentioned earlier, code coverage only covers what I've got coded; if I didn't code for edge cases or anything like that, I still have to test that somehow, manually. Beyond that, the metric I'm shooting for this year is, as I said earlier, how many bugs we actually have in production. What I'll try to propose is fewer patches and higher reliability — that at least my project team deploys without fault. I can post more as we go through that discovery this year of what's of value; I'll keep posting it in the meetup group as well.

Okay, well, thanks everyone. Feel free to come ask the questions you didn't want to ask in front of the group. Thanks for coming to the first Test Fanatic SF meetup, and I look forward to seeing you in April when Ghost Inspector is here. All right, thank you.
Info
Channel: InfoQ
Views: 25,575
Rating: 4.69 out of 5
Keywords: Melvin Laguren, Test Fanatic, Meetup, San Francisco, tutorial, new practices, best practices, testing code, code cycles, deliver software, better, strategies, learn to test, code, continuous delivery, software testing, Test Strategy, Tricks, Tips, Help, Learn
Id: DgQWSaCQ82U
Length: 40min 47sec (2447 seconds)
Published: Wed Mar 04 2015