Going Reactive: Building Better Microservices - Rob Harrop

Captions
this is a talk about reactive. All the reactive content apart from this presentation is tomorrow, so I'm going to give everybody a really quick intro to the motivation for reactive: why should you care? We're not going to talk about performance or any of those technical, low-level issues of reactive; I'm going to talk a lot more about the spiritual, the conceptual, the real reasons why you're going to see long-term value from going reactive. The talk is in two halves: I'm going to talk a little bit about why I like reactive and what attracted me to it, and then I built something for work and I'm going to talk you through it. Full warning: Stéphane is outside right now changing the APIs, so it's out of date already, but you'll get the picture. Previously I was co-founder of SpringSource, and I've brought my little Rod Johnson bobblehead along; if you haven't got one of these, you can win one if you take a lot of photographs with speakers today. Now I'm doing machine learning at Skipjaq: we use machine learning to do performance optimization of applications, and we like reactive for two reasons. We like to build things with reactive; it helps us build microservices in a way we like, and it helps us think about how our systems and our microservices piece themselves together. But it's also nice for performance: you can build performant systems with it, and you can build tunable systems with reactive. You can get me on Twitter, I'm on GitLab and GitHub, the code for this talk is on my GitLab (hotspot-mon), and I occasionally write a little bit about maths and modelling and performance. As I said, I want to talk about principles: what does reactive mean, why should you care, what are the key things you're going to have to learn and the key things you're going to gain, and then we'll see some code for as long as we've got. This was originally a fifty-to-sixty-minute presentation, and that's the slot I pitched to Peter,
and then he gave me 30 minutes, so we're just going to see how it goes; it's going to be exciting. Let's start at the beginning: why reactive? The thing that attracted me the most to reactive is that when you think about systems, about really high-quality architectural components, things that compose, things you can reason about, good architecture is fractal. The way you talk about your systems and how systems communicate is very similar to the way you should talk about how the subsystems in those systems communicate, and how your classes, your objects and your modules communicate. The whole thing is fractal. I'm a bit of a conceptual architecture geek slash snob, so this really appeals to me: it's the kind of thing you can model mathematically, the kind of thing you can reason about, but reactive is probably the first time I've seen this translated well into code. In my mind you can trace this back to the 70s, when Tony Hoare came out with CSP: you could reason about systems with CSP, you can reason about subsystems with CSP, you can reason about classes with CSP. Now you can build systems that way; you can construct whole systems and whole services and piece them together using similar concepts. This is where we get reactive systems: we start to think about everything using the same terms, the same abstractions and the same concepts, and just as we see objects being pieced together and composed, we also see services being pieced together and composed. Did anybody go to Mark's talk about Spring Cloud Data Flow before? There's a few of you. He was talking about composing microservices together as streams, and reactive talks about how we might compose objects inside microservices together with streams, and obviously we can do the same thing for whole systems as well, not just microservices. So this notion of a stream, this notion of reactive components that we can piece together and
transform, is where you start to see this fractal nature arise. So, this wasn't working before, but we'll give it a go... cool. You can see here, this is my hotspot-mon. I had a problem: I wanted to monitor the internal state of all the JVMs on a machine, across a whole fleet of machines. In particular I was interested in finding out the state of the JIT compiler. This is not something you access via JMX; there are no standard APIs for this. You have to get into the internals, and HotSpot exposes an API called jvmstat which is meant for internal usage; there's a whole bunch of Sun tools, now Oracle tools, that use this API to give you serviceability and so forth. It's a very useful API, but it's horrendous to code against; it's like the worst API. You'll see some examples when we look at how it was wrapped in reactive. What I was interested in was modelling this notion that I had a series of JVMs; let's think of those as the publishers of data, the sources of data. hotspot-mon wants to consume all that data and then push it out to various different sinks: in my case InfluxDB, because I'm monitoring this time-series data, but maybe also into something like Elasticsearch for searching and so forth. The idea here is that in real time I'm watching the JIT compiler in systems, figuring out when the system is warm and ready, when we're ready to test, and from the top to the bottom you can think about this as a stream of data, a stream propagating from the publishers, through hotspot-mon, which is a subscriber but is also a publisher, down to systems downstream. So that's systems as reactive: thinking about how a whole system might look reactive. But inside that system, inside hotspot-mon, there's a whole bunch of extra components. If you think about it, we need to figure out how to monitor the JVMs; there's a whole bunch of timing issues about when do we sample, what do we sample,
and how do we keep the state of the things we're monitoring up to date? When somebody expresses an interest in a new metric on a new JVM, how do we see that so we can monitor it? We can model all of this as streams; all of this is Reactive Streams. Thinking from the top again: I've got my JVMs in the outside world, and I've got a little reactive component or subsystem that translates jvmstat into reactive. I'm consuming that data in my sampler, but I'm also consuming a clock signal, and I can model my clock as a reactive stream as well; you're going to see some examples of that. And I'm pushing this data out to InfluxDB, but also to my repository, and then down onto the web. Juergen talked about this before, and I guess it's been hinted at in some of the keynotes as well, that Spring 5 is going natively reactive: you're going to be building reactive web components, you're going to be reactive sort of end to end, right from your repository all the way out to the web; it's going to be a stream of data that you can transform and compose. So it helps to think about how you might break your services down into their subsystems. I think it's naive to assume that a microservice is going to be the lowest-level building block; you don't want to end up where every microservice is like two classes, so fine-grained that the Maven POM is more code than the Java code. That's taking it into the realms of craziness. Each microservice will have subsystems; it will have individual parts that could be used independently but aren't necessarily packaged independently. Good architectural practices and good design, coupled with reactive, will allow you to maintain these discrete areas of function. But it's not just the subsystems themselves that are reactive; we can even think about objects and classes as being reactive. If you think about this in the hotspot-mon example, I have a JVM object, like one object representing
every JVM, and then for everything that I'm sampling, for every metric, there is a stream of data for that metric coming from that JVM. So maybe I'm interested in knowing the total number of compiles; at first it's maybe 60, then 83, then 100, and the subscribers are consuming that data. I've got hundreds of these metrics (there's something like 250 available to me) and I'm watching them all. But these are individual objects that are also streams, composed into subsystems that are also streams, composed into services that are also streams, composed into systems that are also streams. It's just fractal; the whole architecture is just streams and composition on streams. It's very powerful. And metric sampling is not just a really simple problem; it's really the canonical problem that translates into reactive, in my opinion. It's like logging was for AOP back in the day: whenever we used to do an AOP example, it was log at the entry of a method, or do transaction demarcation. Metric sampling is kind of the reactive logging. But there are so many more things you can model as reactive. Here's one that's interesting: we're monitoring a system, and JVMs come and go. Your application components start, we want to test them, they go away, more JVMs start. So we can treat that stream of JVMs arriving as a reactive stream as well, and in fact, when we look in the code, we can do the opposite too: we can treat JVMs dying as a stream. When a JVM terminates, we get a signal emitted on a stream to say that JVM is gone. You can pretty much model everything as these streams, and when you start to model everything as streams and you look at the wealth of operators, which right now is being extended by Stéphane out there in the hallway, it makes sense, because you can compose them in all these very powerful ways. Now, that's all well and good, but it's a bit hand-wavy; there's this architectural purity here, and I can sit here
all day and talk to you about self-similarity and fractals and so forth, which is very interesting, but when you see it in action you're going to see how it pans out, and it helps to understand why this works, what the principles are in the reactive world, in the reactive mindset, that make this work. The Reactive Manifesto talks about four things: resilience, responsiveness, elasticity, and asynchronous, message-driven components. You'll see different people have different stances. If you talk to the guys from Lightbend, which was previously Typesafe, the Akka and Scala guys, they'll tell you that microservices should always be message-driven, that they should be asynchronous first, and any kind of synchronous processing is anathema to them. That's a valid viewpoint; I think it's maybe a little bit too religious on one side. Sometimes asynchronicity is great, but sometimes a synchronous call is all you need, especially if you've got some low-volume service where you're not performance-critical, maybe not even production-critical; the overhead of introducing asynchronicity may be too much. Likewise resilience: it's become the mantra that you must always build resilient services, that everything must be resilient, but that fundamentally treats every service the same. If you were Google and you were building the Borg clustering system, the bit that actually does the workload placement, I would say it's far more important for that to be resilient than the bit that does the thumbnailing for Google Hangouts pictures; if that breaks, no one really cares. So not every system is created equal, but reactive provides us with a way to make things resilient, or to get some resilience for free. What's most interesting, I think, and this is the reason why I'm really excited to see all of this coming to Spring, is the responsiveness piece. That is probably the most
important part, certainly for my business: we're in the business of performance optimization, and we have a lot of customers who care a lot about responsiveness. When we started our business we had this thesis that everyone would want to save money by making their applications run faster and then having fewer instances running in Amazon, but actually most customers want to run faster just to run faster, just to make their system more responsive for their customers. So being able to build applications that are more responsive straight out of the box is really important, and key to that is being able to build concurrent applications very safely; that is a big enabling part of reactive. So, for resilience: who's heard of back pressure? OK, cool, very few of you, and that's not surprising. The idea of back pressure is to fix the common problem in asynchronous systems where the publisher is publishing things and the consumer is consuming things, but nothing links them: nothing tells the publisher how much data the consumer wants, or worse, nothing tells the publisher how much data the consumer can even handle. Back pressure is a principled way in which the consumer can control how much data is coming from the producer, can signal "please stop". The idea is that you don't run out of memory, you don't start thrashing because the GC is constantly pressuring you, you're not just completely swamped with data. You want a really nice way of saying: I'm ready for more data; now please stop; I'm ready; please stop. You can kind of see this here: I've got this total-compiles stream, a stream of data about the JIT, and I can sample it as quickly as I want, so I could sample it every two milliseconds, and when you're hitting a system hard with a lot of load the JIT compiler is basically working as fast as it can, so it will change on a regular basis. We're trying to publish this downstream to InfluxDB, and
eventually it's going over HTTP. So there's some very fast component where we're just doing sampling of memory (the way these counters work is they're just a memory-mapped file, so reading from them is very quick), whereas publishing over HTTP to a time-series database, possibly on another network somewhere, is a lot slower. So naturally there will be some back pressure here, and if I were to do this naively and just do the usual synchronous call and keep hammering that HTTP client, I'd either just start to block, and at some point the queues would build up and it would die, or I'd do the very traditional Java asynchronous thing of having a blocking queue, putting things in that queue and having some publisher pulling off it; but the rate of publish will be way faster than the rate of consume, and that queue will just grow and grow and grow and then explode. So the nature of reactive systems is to have bounded queues everywhere and to have back pressure from the consumer to the publisher: I request "I'm ready for two more items please", they get published, and when I want more I request two more items, and so on. Coupled with this is responsiveness, and it goes hand in hand with this idea that I'm going to hand off to a queue to continue processing downstream, and all these things are happening asynchronously: the publisher can keep publishing to the queue, and the subscriber can keep reading from the queue, independently of each other, running on different threads, on different thread pools. One of the nice things about reactive, and certainly Project Reactor, is that it makes the scheduling of tasks on different thread pools and the handover nice at the API level, but also very explicit in the construction: when you construct your streams and you construct the wiring of your streams, you're very explicit, and you have full control.
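As a rough sketch of that request-driven protocol (not the talk's actual code), here's a bounded consumer written against the JDK's java.util.concurrent.Flow API, which mirrors the Reactive Streams interfaces: the subscriber starts with a demand of two items and signals readiness for one more each time it finishes one, so the producer can never swamp it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {

    // Collect `count` items while never holding demand for more than two at once.
    static List<Integer> collect(int count) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(2);            // initial demand: two items, no more
                }
                @Override public void onNext(Integer item) {
                    received.add(item);      // "process" the item...
                    subscription.request(1); // ...then signal readiness for one more
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });

            // submit() blocks when the publisher's bounded buffer is full,
            // so a fast producer is naturally slowed to the consumer's pace.
            for (int i = 0; i < count; i++) publisher.submit(i);
        } // close() triggers onComplete once the buffer has drained
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect(10)); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```

The bounded buffer inside SubmissionPublisher plays the role of the "bounded queues everywhere" he describes: demand flows upstream, data flows downstream, and nothing grows without limit.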
You're explicit about which thread pool you subscribe on, which thread pool you publish on, what the queue size is between those things, and how the back pressure works; you have full control over that. It's in that sense that all of the streams and all of the APIs are message-driven: you're handing over to a queue and you're expecting to be resumed at some point in the future, and this is readily apparent in the APIs, which we're going to look at now. You're probably going to hear quite a lot tomorrow about Project Reactor, which is the Pivotal project for reactive, but there's also RxJava, another reactive Java project, and there's Rx.NET, Rx.Ruby, RxJS; there are tons of projects that share these same concepts, and that in itself is quite powerful. Imagine that on the server side, using Java, you can model with these reactive streams concepts, you can publish that data over the web using reactive streams again in Spring 5, and then on the client side you can use RxJS: you can turn that data back into a reactive stream on the JavaScript side and basically carry on from there with the same concepts. So it really does extend from end to end in your stack. In an effort to standardize this in the Java world, the community (Project Reactor, RxJava, the Akka guys) have created Reactive Streams, a set of very simple, standardized APIs that capture the essence of what reactive is. We start with the Publisher, and the idea is that a Publisher will publish data to a Subscriber. The only method you get on here is subscribe: every time somebody wants to subscribe to your Publisher, they pass in a Subscriber and away you go, you start publishing to them. There's no return value; none of these methods have a return value, and the idea is that you're always expecting a continuation later on. There's a whole host of APIs about what thread the subscribe happens on, what thread any publications of data happen on; you have complete control of all of that. On the other
side of that is the Subscriber. The Subscriber is how you actually get the data, and to preempt any questions afterwards (people might ask, what about CompletableFuture, what about DeferredResult, all those kinds of things): the really important part of why this is such a powerful API is that it pulls out all the most important callbacks. You get a start callback, onSubscribe; you get onNext; and you get onError and onComplete. Where APIs I've seen tend to go wrong is that they conflate those last two. onComplete is success: it's done, it's fine, thank you, you can carry on. onError is: it's gone wrong. You need all of these callbacks, and you need to handle them all appropriately. You might need to close whole hosts of resources if there's an error; maybe you're holding open connections to an external system, or you're monitoring JVMs and you want to kill that monitoring if it dies; you need to propagate all of those errors back up the chain, and there's a host of operators you can use to do this, so you can manage your errors as nicely as you can manage your success data. Another nice part of the reactive API is this notion of a Subscription. If we go back, you can see that when we get called with onSubscribe, we get passed a Subscription, and holding on to that is really important, because by default you will not receive any data until you call request. This is how back pressure works: the initial setup is that you're applying infinite back pressure, you have no demand, and until you request some data you'll get no data. It's up to you to call request, process some data in onNext, and then do it again, and again. Now, there's a whole host of convenience features in the API that avoid the need to do this manually, but the essence is there: by default your subscriber will not get data, so you ask for it. And it's down to you to
cancel, down to you to say "please stop now". If you think about it, there may well be a lot of stuff happening behind the scenes: in the case of hotspot-mon, when someone subscribes to a metric I start a clock signal and I'm sampling that value every 50 milliseconds or whatever. If you cancel and I don't get told about it, that clock is still running, it's still sampling that value every 50 milliseconds; the data might not go anywhere, but that code is still running. So the ability to cancel is really important. Now, there's a fourth and final interface in the Reactive Streams API, Processor, and I'll admit when I first saw it I thought, what is the point? It's something that's both a Publisher and a Subscriber. Sure, that's very common; and in fact it's so common that it's really important. You might be better off calling it an operator: when you're piecing these streams together, what you're often doing is chaining a series of operators on streams; you're subscribing, doing some transformation, and publishing the transformed data, and when you call these operators in a long chain, what you're doing is chaining these processors together. So this ends up being probably the most important abstraction, because it's the one that gives you all the power; it's where all the compositional power comes from. You can think about Reactive Streams as defining the programming model, and then Project Reactor and RxJava and other things define implementations of the model. In Project Reactor we have two classes, Flux and Mono. Now, these names are crazy; they don't mean anything, except I'm pretty confident that mono is a disease and so is flux, so I'm not sure where the names come from, but that's what I always think of when I see them. It's really simple, though: a Mono is zero or one, you'll get one thing or nothing from that publisher, and a Flux is zero to many.
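The four interfaces he's walking through are tiny; the entire Reactive Streams contract fits on a slide. Here they are re-declared locally for illustration (the real ones live in org.reactivestreams, and the JDK ships the same shape as java.util.concurrent.Flow):

```java
// The entire Reactive Streams contract: four interfaces, seven methods,
// no return values anywhere; everything is a callback.
interface Publisher<T> {
    void subscribe(Subscriber<? super T> subscriber);
}

interface Subscriber<T> {
    void onSubscribe(Subscription subscription); // start: hold on to this
    void onNext(T item);                         // zero or more data events
    void onError(Throwable throwable);           // terminal: failure
    void onComplete();                           // terminal: success
}

interface Subscription {
    void request(long n); // back pressure: signal demand for n more items
    void cancel();        // stop everything, release upstream resources
}

// An operator: subscribes to one stream, publishes a transformed one.
interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}
```

Note how onError and onComplete are kept separate terminal signals, and how request and cancel on Subscription are the whole back-pressure mechanism he just described.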
With a Flux you may get nothing, or you may get an unbounded number of values, and Reactor manages all the back pressure and all the asynchronous processing; you don't have to worry about queue bounding, threading, handoff, anything like that. All of that sits inside Project Reactor. Somebody at the end of Juergen's talk asked me: where does the Disruptor, the ring buffer, which many of you may have heard of or used, sit in this world? It's an implementation detail; don't worry about it. It's just a really efficient way of doing that handoff. If you have this constant handing off between a publisher on one thread and a subscriber on another thread, and you're piecing these together, it becomes very important that it happens quickly, otherwise this is a model that's no longer performant, and a lot of work has gone into this. Martin Thompson, of LMAX fame, reliably tells me that the fastest this can possibly happen is about 5 nanoseconds, so obviously everyone's striving to get the Java handoff between one thread and another as close to 5 nanoseconds as possible. That sounds ridiculous, but if you look at the numbers for a traditional Java blocking queue, it's very slow compared to the Disruptor's ring buffer. So it's an implementation detail you shouldn't worry about, but it's an important implementation detail. Now, in practice, let's take a look at the code. This is what I built, and as I said before, the point is to get internal HotSpot metrics and push them out to the outside world, and on demand we want to be able to do two things. I want to be able to start a JVM with a particular command line, and then parse that command line and use it to say what metrics to sample; I also want to be able to connect to a running JVM and, at runtime, tweak what it is that I'm sampling about that JVM. So hotspot-mon has a web API where you can post and query the existing statements of interest, and it also has a way to monitor
JVMs as they come up and start sampling them immediately. I'm going to look into this in IDEA. What I wanted to do with this was not only solve the problem; I really wanted to try to push to the limits this idea of using streams to capture all the abstractions, because I thought, if I can capture all the main abstractions this way, then when it comes time to compose them all together it will all work really smoothly. And it did, and that's why I was so excited to give this talk. The first thing I built was this clock. The idea is that you're sampling a metric every 50 milliseconds, so you want something that emits the time at every period: I've got a Flux of Long that emits every 50 milliseconds or whatever. And just a little hack here: there's a protected method that takes a supplier of long, so if you want to change what the time value is, you can do that. One of the things I put in the abstract for this talk was that testing is really easy with reactive; it's really powerful, and there's a really nice testing DSL. We can see it here: I've got a dummy value of 123, and I'm going to create a tick publisher that will just emit that value every tick. It's not going to emit the real time; it'll emit 123, 123, 123, and it's going to do that every 50 milliseconds. Then, using the TestSubscriber class from Project Reactor, I can subscribe to that publisher, and I can await, and for every one of the arguments in that await it will basically wait for the next value and then verify it; all it's doing here is saying expect at least three values: 123, 123, 123. But then I want to check that when I'm really using System.currentTimeMillis it actually works, that I'm getting the right thing, so I can pass in an assertion here rather than an actual value, and this assertion is just a function that takes the value and returns a boolean.
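His clock, as I understand it, looks roughly like the following. This is a plain-JDK sketch, not the talk's code (the real thing returns a Reactor Flux of Long): the time source is a pluggable LongSupplier, which is exactly the seam that makes the "emit 123 every tick" test possible, and the blocking collect helper plays the role TestSubscriber's await plays in the talk.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.LongConsumer;
import java.util.function.LongSupplier;

// Emits the current time to each subscriber once per period. The time
// source is injectable, which is what makes testing trivial.
class TickClock {
    private final long periodMillis;
    private final LongSupplier timeSource;

    TickClock(long periodMillis, LongSupplier timeSource) {
        this.periodMillis = periodMillis;
        this.timeSource = timeSource;
    }

    /** Push a tick to onTick every period; the returned handle cancels the clock. */
    AutoCloseable subscribe(LongConsumer onTick) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> onTick.accept(timeSource.getAsLong()),
                0, periodMillis, TimeUnit.MILLISECONDS);
        return scheduler::shutdownNow;
    }

    /** Test helper: block until n ticks have arrived, then cancel the clock. */
    static List<Long> firstTicks(int n, long periodMillis, LongSupplier timeSource)
            throws Exception {
        TickClock clock = new TickClock(periodMillis, timeSource);
        List<Long> seen = new CopyOnWriteArrayList<>();
        CountDownLatch latch = new CountDownLatch(n);
        AutoCloseable handle = clock.subscribe(t -> { seen.add(t); latch.countDown(); });
        latch.await();   // block here: JUnit isn't reactive, so the test must wait
        handle.close();
        return new ArrayList<>(seen).subList(0, n);
    }

    public static void main(String[] args) throws Exception {
        // A clock pinned to a dummy time source emits that value every tick.
        System.out.println(firstTicks(3, 50, () -> 123L)); // [123, 123, 123]
    }
}
```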
I'm just asserting that when you see the value, it's a timestamp from no later than the current time; there was no other meaningful assertion I could make there. It's Project Reactor that's handling the threading here: it's waiting for those three values, and if after some timeout period (I think it's three seconds by default) you haven't received them, it will fail the test and say, sorry, this isn't working. So you've got a really easy way to block and pull all that asynchronicity down, and it has to block in your test because JUnit isn't reactive; there's no way to tell JUnit "this is going to happen at some point in the future", so you kind of have to block. I'm also interested in figuring out when JVMs arrive; as you saw in the slides before, there's a steady stream of JVMs arriving, and that's what the Flux of VMs is modelling. Then each VM has a set of interesting Monos and Fluxes. It's got a Mono of Integer for termination: when that VM terminates, it emits. The reason for this is that the jvmstat API simply doesn't really work for termination: if you're monitoring a remote VM you'll sometimes get a termination event; if you're monitoring a local VM you never get one; the VM can go away and you'll happily sit there reading the last sampled values again and again and again until you restart your computer. So we had to build that ourselves, and we extracted that behaviour behind a Mono. Likewise, I want to be able to find out all the available metrics, so I've got a Flux of those. But the most important one is sample: I want to sample this metric name, at this interval width, in this time unit, and what I get out of that is a stream of 2-tuples, where the first value is the timestamp and the second is the actual sampled value, and this will just continue to be emitted until I cancel it.
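A hedged sketch of what that per-VM sample stream could look like in plain Java. Everything here, including the Sample record and the method names, is invented for illustration (the real hotspot-mon API returns a Reactor Flux of tuples): a sampler emits timestamped readings on a schedule, and the VM-termination signal, modelled as a CompletableFuture standing in for the termination Mono, stops the sampling the moment it fires.

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.function.LongSupplier;

// Hypothetical names for illustration; the real hotspot-mon API differs.
record Sample(long timestamp, long value) {}

class VmSampler {

    /**
     * Emit a timestamped Sample every intervalMillis until `terminated`
     * completes; `terminated` stands in for the VM's termination Mono.
     */
    static List<Sample> sample(LongSupplier metric, long intervalMillis,
                               CompletableFuture<Integer> terminated) throws Exception {
        List<Sample> out = new CopyOnWriteArrayList<>();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        ScheduledFuture<?> ticking = scheduler.scheduleAtFixedRate(
                () -> out.add(new Sample(System.currentTimeMillis(), metric.getAsLong())),
                0, intervalMillis, TimeUnit.MILLISECONDS);

        // Propagate the VM-termination signal into the sample stream:
        terminated.whenComplete((exitCode, err) -> {
            ticking.cancel(false);  // stop sampling the moment the VM dies
            scheduler.shutdown();
        });
        terminated.get();           // demo only: block until the VM "terminates"
        scheduler.awaitTermination(1, TimeUnit.SECONDS);
        return List.copyOf(out);
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<Integer> termination = new CompletableFuture<>();
        // Fire the termination signal after ~60 ms; sampling then stops.
        CompletableFuture.delayedExecutor(60, TimeUnit.MILLISECONDS)
                .execute(() -> termination.complete(0));
        List<Sample> samples = sample(() -> 42L, 10, termination);
        System.out.println(!samples.isEmpty() && samples.get(0).value() == 42L); // true
    }
}
```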
We can see some details in the implementation of this. The most important thing to see is this: this is how you bridge from a traditional asynchronous publishing API, things like an AWT ActionListener or any API where you pass in a listener and get called back, into reactive. You can do it this way: you create a Flux using this callback, and it gives you an emitter, and using that emitter you can emit events whenever you see them. So inside here I'm creating a jvmstat listener (the details of that API are not so important, other than to recognize it's kind of messy), and every time jvmstat calls me, I emit events on that emitter. That's how I'm emitting onNext, onNext, onNext. If I see some errors, I'm going to call the emitter's fail, and that's how I'm propagating the onError we saw back on the Subscriber before; if for some reason I can't actually add my listener, I'm going to fail. And then the most important thing is to make sure I clean up after myself: the emitter has a callback, something that will happen when the subscriber cancels, and here all I'm doing is removing the listener from the host. If you don't do that, obviously, you can create hundreds and hundreds of these streams and cancel them, but all the listeners from before are still left behind, so eventually you just run out of memory and explode. It's really important that you clean up after yourself. So that's bridging into the outside world. What's also really nice is thinking about how to piece these things together, so I'm going to just show this; it's one of the things I liked most. I wanted to have the ability to sample, but obviously when the JVM terminates I want to stop that stream; if I didn't stop it, it would just keep emitting values even though the JVM had disappeared. So I start with a publisher that's ticking at the sample interval, and I map that on
to a tuple with the actual sampled value; you can see I'm piecing these operators together. Then I do this takeUntil, passing in that termination Mono from before, and this will basically carry on until that Mono emits a value; the moment it emits, the Flux terminates as well. So we're able to propagate the VM-termination signal back to all the sample streams, really simply: in one place we have the logic for how termination is encapsulated, and how it's tested, but we're able to reuse it everywhere else. So that's really quite simple. This I wanted to show for a particular reason: it's watching new JVMs come in and emitting statements of interest (there's a particular command-line string you can apply, and it tells you what metrics to sample). Here you can create Fluxes from Java 8 streams: although the operators look the same, like filter and map, those are operators on a Java 8 Stream, and that stream itself is completely adapted into a Flux. So if you've got an API that's already emitting Java 8 streams and you want to adapt it, it's very easy; you just create the Flux with fromStream. The Flux is so much better than the stream, though, because you have this ability to peek, and this is a nice little design point: I can peek on every next and call my logger. These are rare events; statements of interest change maybe five or six times during the life of one of these monitoring runs, so doing an info log here is really useful, to make sure that we saw that statement of interest. I don't have to write another subscriber and subscribe to that stream; I can literally just tap into it, peek on what's happening, and do this kind of imperative action. The functional purist in me hates this, but I think it's really useful. And then, right at the back end, there's probably the most impressive code.
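The listener-bridging pattern described a moment ago (Reactor's create-with-emitter, plus the crucial cleanup-on-cancel callback) can be sketched with stdlib pieces. The VmHost listener API below is invented for illustration, as a stand-in for a messy callback API like jvmstat's; the point is the shape: each callback becomes an emission, and closing the stream removes the listener so nothing leaks.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.SubmissionPublisher;
import java.util.function.Consumer;

// A stand-in for a callback-style API: you add a listener, it calls you back.
class VmHost {
    private final List<Consumer<String>> listeners = new CopyOnWriteArrayList<>();
    void addListener(Consumer<String> l) { listeners.add(l); }
    void removeListener(Consumer<String> l) { listeners.remove(l); }
    void vmStarted(String vmId) { listeners.forEach(l -> l.accept(vmId)); }
    int listenerCount() { return listeners.size(); }
}

class VmEvents {
    /** Bridge the callback API into a publisher, cleaning up on close. */
    @SuppressWarnings("unchecked")
    static SubmissionPublisher<String> vmArrivals(VmHost host) {
        Consumer<String>[] holder = new Consumer[1];
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>() {
            @Override public void close() {
                host.removeListener(holder[0]); // cleanup: don't leak listeners
                super.close();
            }
        };
        holder[0] = publisher::submit;          // each callback becomes an emission
        host.addListener(holder[0]);
        return publisher;
    }

    public static void main(String[] args) {
        VmHost host = new VmHost();
        List<String> seen = new CopyOnWriteArrayList<>();
        SubmissionPublisher<String> events = vmArrivals(host);
        CompletableFuture<Void> done = events.consume(seen::add);
        host.vmStarted("vm-1");
        host.vmStarted("vm-2");
        events.close();          // removes the listener, then completes the stream
        done.join();
        System.out.println(seen + " listeners=" + host.listenerCount());
        // [vm-1, vm-2] listeners=0
    }
}
```

Without the overridden close, every cancelled stream would leave a listener behind on the host, which is exactly the slow leak he warns about.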
Then, right at the back end, comes probably the most impressive code: piecing this together and sending the data out to InfluxDB, the time-series database. This is where you really start to see how all of the operators can stack up. The sampler is basically pulling together all of the samples. You saw earlier that we had one method to sample one metric, another method to sample another metric, and so on; the sampler basically creates one stream that has all those samples interleaved, which is really, really nice. Here what I want to do is send these out to Influx. The first thing I do is turn each sample into an Influx point, which is the domain object InfluxDB has. Then I buffer them; I'm basically asking for a batch size here of, say, 100, or every three seconds, and Reactor handles all of that for me. It either gives me 100, or it waits three seconds and gives me, say, 50, or whatever has arrived, so I'm getting these batches to push out. Then I turn that into a batch, and I say what to do about back pressure here. This is where it can get slow: publishing can be quite slow, and these batches can arrive really quickly. This is just sample data, so losing some isn't important. I don't want to stop, I want to carry on, but I'm happy to drop them, so I just say onBackpressureDrop, and if I've got warn logging enabled I'll get a warning in my log to say I'm dropping some batches. I'm also tracing sometimes, when I want to see publication. Then I'm controlling where publish and subscribe happen: I'm using a specific scheduler to publish out to Influx, so I've got basically one thread that just handles the publication out to Influx. InfluxDB is a great time-series database, but it can't handle a huge influx of data at the moment; it's still quite new, so you can't publish hundreds of thousands of values, and we use that thread to limit how much data we send into it.
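A sketch of that batching pipeline, assuming Reactor 3. Here `Flux.range` stands in for the interleaved sample stream, the `map` stands in for converting a sample to an Influx point, and the subscriber's println stands in for the actual InfluxDB write:

```java
import java.time.Duration;
import java.util.concurrent.CountDownLatch;

import reactor.core.publisher.Flux;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class InfluxBatching {

    public static void main(String[] args) throws InterruptedException {
        // One dedicated thread for all writes, so we never flood the database.
        Scheduler influxScheduler = Schedulers.newSingle("influx-writer");
        CountDownLatch done = new CountDownLatch(3);

        Flux.range(1, 250)
            .map(n -> "point-" + n)                       // stand-in for toInfluxPoint(sample)
            .bufferTimeout(100, Duration.ofSeconds(3))    // a batch of 100, or whatever arrived in 3s
            .onBackpressureDrop(batch ->                  // samples are droppable: warn, don't stop
                System.out.println("WARN dropping batch of " + batch.size()))
            .publishOn(influxScheduler)                   // writes happen on the single writer thread
            .subscribe(batch -> {
                System.out.println("writing " + batch.size() + " points on "
                        + Thread.currentThread().getName());
                done.countDown();
            });

        done.await();                                     // 250 points -> batches of 100, 100, 50
        influxScheduler.dispose();
    }
}
```

The interesting design choice is that `bufferTimeout` expresses the "100 points or 3 seconds, whichever comes first" policy declaratively, and `onBackpressureDrop` makes the loss policy for sample data explicit rather than accidental.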
Then, just out of interest, I created my own subscriber here with all of the things done explicitly. In onSubscribe I store the subscription and request the first batch, and in onNext I write each batch out and then request another one. If I didn't put that request in onNext, I'd publish one batch and stop; and if I didn't put the request in onSubscribe, I'd never publish anything, because I'd never have expressed any demand. Lots of the subscribers you see in lots of the APIs handle this for you by expressing infinite demand: you can pass in a standard Java Consumer from the Java API, you get infinite demand, and you get called back. But when you need finer-grained control, you can implement Subscriber yourself. All this code is available on my GitLab. If I'd had the full session time I would have gone into more detail, but Peter screwed me; it's kind of his fault.

So, to summarise: think of reactive, most importantly, as a way to think about problems. It's this fractal model; you can apply the concepts and the ideas all the way down, from the top of your system right down to how your individual objects collaborate. Publishers are your outputs: they're the outputs from systems, and they're the outputs from objects. Subscribers are your consumers. You get really powerful isolation: async isolation and capacity management are just completely built in. Queue management is built in, threading is built in, back pressure is built in; this is all something you get for free in the model. You've got Reactive Streams to serve as your interchange API, but for those of you who are worried about portability, I will say that in practice it doesn't make a lot of sense to worry right now. Reactor is where all the action is, so don't worry too much; as you saw in my code, coupling myself to Flux and Mono is, in most cases, fine. If you don't do that, what you tend to find yourself doing is exposing a Publisher, and the first thing anyone does is turn it into a Flux to do something meaningful with it, because all the operators exist on Flux and not on Publisher.
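Going back to the hand-rolled subscriber from a moment ago: with Reactor 3's `BaseSubscriber` helper, that explicit one-batch-at-a-time demand might look roughly like this sketch (the writes are simulated; the real code writes each batch to InfluxDB):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.reactivestreams.Subscription;
import reactor.core.publisher.BaseSubscriber;
import reactor.core.publisher.Flux;

public class OneAtATimeSubscriber {

    /** Requests exactly one batch, writes it, then requests the next. */
    static class BatchWriter extends BaseSubscriber<List<Integer>> {
        final List<List<Integer>> written = new CopyOnWriteArrayList<>();

        @Override protected void hookOnSubscribe(Subscription s) {
            request(1);            // without this, no demand is ever expressed: nothing is published
        }

        @Override protected void hookOnNext(List<Integer> batch) {
            written.add(batch);    // stand-in for the actual InfluxDB write
            request(1);            // without this, we'd get the first batch and then stop
        }
    }

    public static void main(String[] args) {
        BatchWriter writer = new BatchWriter();
        Flux.range(1, 10).buffer(4).subscribe(writer);   // batches: [1..4], [5..8], [9, 10]
        System.out.println(writer.written);
    }
}
```

`BaseSubscriber` implements the raw Reactive Streams `Subscriber` contract for you and exposes `hookOnSubscribe`/`hookOnNext` plus a protected `request(long)`, which is exactly the control you give up when you subscribe with a plain `Consumer` and get unbounded demand.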
It's Project Reactor where all the sophisticated implementation is. I think I'm five minutes over; sorry, Steve. There's time for questions if you want some; otherwise, thank you very much.
Info
Channel: SpringDeveloper
Views: 9,453
Rating: 4.8481011 out of 5
Keywords: Web Development (Interest), spring, pivotal, Web Application (Industry) Web Application Framework (Software Genre), Java (Programming Language), Spring Framework, Software Developer (Project Role), Java (Software), Weblogic, IBM WebSphere Application Server (Software), IBM WebSphere (Software), WildFly (Software), JBoss (Venture Funded Company), skipjaq, reactive, microservices
Id: wJfyyGUkKME
Length: 36min 26sec (2186 seconds)
Published: Tue Dec 06 2016