Experiences from 7 years of Azure Cosmos DB

Captions
Welcome to Cosmos DB Conf and my talk, seven years of experiences from Cosmos DB. My name is Sakari Nahi, I'm the CEO of this company Zure that was founded 10 years back, and we purely work on Azure. I'm also a Microsoft Regional Director and an Azure MVP. There's also this meetup user group in Finland, and we do pretty mean stuff on YouTube on Azure, so if you're not familiar with us, you can join us, see our channel, and see if there's something you'd like to learn about. Zure is a company that looks for challenges and likes to grow by solving them. Sometimes the first time we don't succeed, but the next time we learn, and then we get better, and it seems to fit some experts' mindspace to work like this.

It was August 2014 when DocumentDB was released to public preview, and with some developers we were working on this project and we kind of got excited. We felt it would be a good fit for the project and we started working on it. Half a year later, in March 2015, we were able to deploy the application with DocumentDB, and we were pretty happy; it was working very well. We put all of the entities in the application into a single container. In those times the pricing was based, if I remember correctly, on the number of containers, so we were like, okay, let's make this cost efficient and put everything in one. It's easy, it's NoSQL, it's schemaless, why not? The next month came, and DocumentDB became generally available, and what happened was that the pricing changed from container-count-based into request-unit-based. That was a bit of a bummer. I remember the moment when the pricing on that thing jumped from, you know, 100 euros to 4,000 euros or something a month, and the client wasn't too happy. So we kind of had to refactor the solution, and the fastest way we knew, we ported everything into Azure SQL, and that wasn't too auspicious. That was the beginning of my experiences
with DocumentDB, today Cosmos DB. Since then I've been working with Cosmos DB for the last seven years, and nowadays what I mostly do is assess and optimize solutions that for some reason need more performance, or need to cut down on costs, or something like that. So today we will go through some of the most common things I see when I visit teams and customers: what's wrong, or what could be optimized, in their Cosmos DB deployment or environment.

I've never done this, but I thought I'd begin with the summary. If you are very, very busy, like we might be today when it's, you know, the COVID time, so there's a Teams meeting starting every second, you can just watch this summary, and if you don't want to see anything else, then you're kind of done. These are the 14 things that I'll walk through quite speedily. This talk is not aimed at total beginners, so if you are a beginner at Cosmos DB and you don't have the basics down, you might have to stop every now and then and check out the Microsoft docs, but I'm pretty sure you can still follow and get the gist of it. If nobody's interested in any of the parts, then I think I'm done, but if you'd be happy to know something more, let's get on.

Now, the first one isn't really a tip on your environment; it's something I've been feeling for the last couple of years, and it relates to the naming of the Cosmos SQL API. If you look at the creation picture on the right, there's the API: it's got Core, and in parentheses there's SQL. I think the product group is trying to fade out the SQL naming, but it's kind of impossible, because the SQL language is there, it supports JOINs, and there are these quite nifty capabilities that can be used very effectively. But at the same time it really can hinder development. If you are part of a team that's never done Cosmos DB, or you have but your mates haven't, please, the first thing you have to do when you
are starting out with Cosmos DB: tell your friends that it is not SQL. Actually, if you can, just tell your buddies to go through some of the basics of Cosmos DB. The Microsoft docs are very good, and it will take a good developer an hour or a couple of hours of reading through the materials on these things I've listed, which I think give a good base for understanding Cosmos DB: the provisioning of throughput, whether it's database-level or container-specific, and whether you provision it manually or with autoscale; the partitioning strategies, which are key, you need to decide those at the beginning of the project, so it makes good sense to learn something about partition keys and the way the physical partitions act behind the scenes, because you have no control over those; then the harm caused by cross-partition queries; the cost of the SQL API, when you should use it and when you shouldn't; when you should read directly, which is surprisingly often; co-location of entities, which is something SQL people would never do, they don't co-locate entities in a single table, tables have a schema; and then indexing, somehow often forgotten. So with a team, go through this list and think about those, have a bit of a brainstorm, a little bit of a chat, and I guarantee that people will start realizing that, okay, this is not your grandma's SQL we are working on. Sorry for having a sip.

So, diagnostics. Again, not a performance optimization tip directly, but something that's very weird. I've been looking at some environments that are pretty nicely sized, like hundreds of millions of items, getting tens of millions of items in a burst daily, and equal amounts of queries, in peak usage, in unpredictable usage, and then I go check, hey, how could this be optimized, and the diagnostics are off. They are always off. So if your environment has Cosmos DB, go check it out. I bet you a dinner: 80 percent chance
there are no diagnostics on. If the diagnostics are on, groovy. If they are not on and you are using the SQL API, which a lot of people use, turn on DataPlaneRequests. If you're using Cassandra or Gremlin, there are other request-level diagnostics you can turn on. PartitionKeyRUConsumption might also be interesting, but I think it currently only works with the SQL API. As you know, Cosmos DB goes forward at a rapid clip and the product group is always producing something new, so that might work with the other APIs at some point, but for now it's the SQL API only for sure. Go check out the link on the diagnostic settings basics and turn those on if you feel they're necessary.

The next bit is again something that feels like a controversial thought. If you've been developing for 20 years like I have, it's not a thought you had earlier: you were developing an app, some kind of a web app on top of SQL, and you weren't like, okay, let's hope for rate limiting, let's use the SQL database until it can't respond anymore. That was a picture of horror, that was a nightmare. With Cosmos DB, however, you should kind of aim for rate limiting. That should be the first thing: hey, let's optimize the costs so that it rate limits a little bit. You should be getting some 429s. If you don't get 429s, whether you are using manually provisioned throughput or autoscale, it might mean that you are paying too much. It doesn't necessarily mean that; maybe you're just that optimized, maybe the bottleneck is, you know, down the line, but it's very good to check, because no 429s can be a sign of overpaying. In some applications 429s cannot be accepted, but in many cases, say you have a business application, like 95 percent of applications are, and there's a person using the website, and they don't get the website rendered in 0.8 seconds, they get it in 0.9 seconds because of a 100 millisecond Cosmos DB retry.
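The retry-after pattern the native SDKs apply on a 429 can be sketched like this. This is a simplified illustration, not the real SDK: `Throttled`, `FakeContainer`, and `read_with_retries` are all made up for the example; with the actual SDKs this handling is built in and the retry count and delay are configurable on the client.

```python
import time

class Throttled(Exception):
    """Stand-in for a 429 response carrying the server's retry-after hint."""
    def __init__(self, retry_after_ms):
        self.retry_after_ms = retry_after_ms

class FakeContainer:
    """Simulates a container that throttles the first two calls."""
    def __init__(self):
        self.calls = 0
    def read_item(self, item_id):
        self.calls += 1
        if self.calls <= 2:
            raise Throttled(retry_after_ms=100)  # server says: retry in 100 ms
        return {"id": item_id}

def read_with_retries(container, item_id, max_retries=9):
    for attempt in range(max_retries + 1):
        try:
            return container.read_item(item_id)
        except Throttled as t:
            if attempt == max_retries:
                raise
            time.sleep(t.retry_after_ms / 1000)  # honor the server's hint

item = read_with_retries(FakeContainer(), "device-42")
print(item)  # {'id': 'device-42'}
```

The point is the shape of the loop: the 429 tells you when to come back, so the client sleeps exactly that long instead of guessing.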
That often isn't an issue, and the rate limiting can really help. When there's a 429, you get the information on when you can retry, and most often you don't have to worry about that, because the native SDKs — .NET, Java, Node.js, Python and so forth — catch the response, check it, and then retry, and you can configure that approach. So aim for rate limiting, and explicitly decide not to aim for it only if your application is critical enough.

The next one is again a bit surprising. Indexing is very visible in Cosmos DB; there's a lot of talk about indexing in the docs. But in many cases when something needs to be optimized, the indexes are untouched. It might be, hey, could you optimize the writing, we don't get the items in fast enough and it's causing bottlenecks in the process down the line, and then it turns out the container indexes all properties. So optimize indexing: check out your indexes and remove properties the application does not query with, so the writing is a lot faster. If you do bulk inserting or importing during the night, for example from 2am to 3am, it might make sense to consider setting the indexing mode to none for that time. In some cases you want the data fully indexed the moment it's in there, but in other cases you just want the data there; maybe you want to let other consumers or databases fetch the data by IDs, so that the items exist in Cosmos DB, and then you can index them leisurely if you are not getting complex queries at that time. And of course, if your ORDER BY clauses are slow, use composite indexes; that's going to help.

Provisioned and scheduled throughput. This is actually one of the highest cost savings available, or that's how it feels to me.
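An indexing policy along those lines might look like this — the property paths here are made-up examples; the idea is to exclude everything by default, include only the paths you query on, and add a composite index for a slow ORDER BY:

```json
{
  "indexingMode": "consistent",
  "includedPaths": [
    { "path": "/customerId/?" },
    { "path": "/status/?" }
  ],
  "excludedPaths": [
    { "path": "/*" }
  ],
  "compositeIndexes": [
    [
      { "path": "/status", "order": "ascending" },
      { "path": "/createdAt", "order": "descending" }
    ]
  ]
}
```

Writes then only pay for maintaining the two included paths plus the composite index, instead of every property in the document.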
Here you see the picture on the right (I guess, from your viewpoint; I don't know if this is mirrored or not). You have a 600K request units per second maximum usage for half a day, 10 hours, 8 hours or something, and then it's lowered to 120K. That's quite a hefty lowering of the throughput, and it's done by scheduling the throughput with Azure Functions. So look at that: if you have a predictable workload, it means you can schedule it, and then just set up the Azure Function to do it in a serverless manner. You don't have to pay a lot for that, and it can really save you a lot of dough. Predictable throughput and scheduled manual provisioning of throughput: that's the cheapest way to do Cosmos DB, in a sense.

Now, if you cannot predict the throughput, then autoscale is a great option, and it's really a bit like magic. It's very exciting and it feels very elegant, and for a developer it feels very nifty because it works so fast. There's a need in your database, you have set it at 4,000 request units, so it goes between 400 and 4,000, and it very quickly scales to what you need. But it does scale a little bit greedily, or at least that's how I feel about it: if there's even a sliver of a chance that somebody, a consumer or a query, might need a bit of request units, then autoscale will jump very fast, very high, and that of course means there are no 429s when you run autoscale; it scales between 10 percent and 100 percent of the maximum. What's noticeable with autoscale, otherwise it sounds like a dream, is that it is 50 percent more expensive than manual scaling. So it turns to a loss after about 15 hours of maximum usage, and the rest of the day or night should be spent at the minimum, at the 10 percent level, or otherwise it's going to be the same price as or more expensive than manual provisioning of throughput.
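That roughly 15-hour breakeven can be sanity-checked with a bit of arithmetic. This is a simplified billing model of my own (prices normalized so manual provisioning costs 1 unit per RU/s-hour; autoscale assumed to bill 1.5x on the highest throughput reached each hour and to idle at the 10 percent floor), not the exact Azure price sheet:

```python
def manual_daily_cost(max_rus):
    # Manually provisioned throughput bills the full amount around the clock.
    return 24 * max_rus * 1.0

def autoscale_daily_cost(max_rus, busy_hours):
    # Autoscale bills 1.5x per RU/s-hour on the highest throughput reached in
    # each hour; assume busy hours hit the max, the rest sit at the 10% floor.
    rate = 1.5
    idle_hours = 24 - busy_hours
    return rate * (busy_hours * max_rus + idle_hours * 0.1 * max_rus)

# Find the busy-hour count per day where autoscale stops being cheaper.
breakeven = next(h for h in range(25)
                 if autoscale_daily_cost(10_000, h) >= manual_daily_cost(10_000))
print(breakeven)  # 16 -> autoscale stays cheaper up to ~15 busy hours a day
```

Under these assumptions, autoscale wins whenever the database runs flat out for 15 hours a day or less, which matches the rule of thumb in the talk.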
So think about that. If you have an unpredictable workload for, say, a workday in your time zone, like eight hours, or from six to six, twelve hours, and then your database really settles down for the other twelve hours, there's not really any usage, some late workers maybe, then autoscale is very nifty, because it works as intended, it's cost effective, it gives great performance, and there are no worries. What's also cool is that you can schedule autoscale: you can schedule a higher autoscale range and then a lower autoscale range later. This might run into trouble with physical partitions and how they are spread out, and we will speak about physical partition growth later, but I urge you to look at this feature; it can be pretty handy in some situations, as long as you get the boundaries right.

Large items. One kilobyte items are optimal; it's very good to somehow internalize that if you go large, if you have a lot larger items in Cosmos DB, you're going to run into issues that are very bothersome. A 100 kilobyte item results in 10 times the request unit cost, but of course if you did 100 reads of one kilobyte items, that would be even more costly, so in some cases it does make sense to use a little bit bigger items. The max item size is two megabytes, so Cosmos DB isn't for really large files, and if your files go anywhere near that, I would consider using something like Azure Storage as the storage for those documents and just referencing them from Cosmos DB. This is good to take into account; we are going to talk about storing entities later, and I will give you some hints on this.

Reading with the SQL API. Here you have a document that I generated with some web app somewhere: a pretty okay business document, not a lot of properties, but some. If you go to
the Azure portal and you do a fetch on this kind of document with a SQL clause, you get these kinds of results. You see, okay, this is 700 bytes, it's less than one kilobyte; that's good to understand, because you could add some more data into a document like that and it would still be the most optimized document size for Cosmos DB. But the request charge shows you the cost of that query, and it's 2.9 request units. As you may know, the documentation says the request unit cost is defined by a one kilobyte item being read from Cosmos DB: that should be one request unit. And this is 2.9, so what's going on here?

If we look at this from the code perspective, the box at the top is reading directly: there we have the ID and partition key, and in the C# snippet we read with the ReadItemAsync function, we give it the ID and the partition key, and we get the document back. At the bottom we have reading with SQL, where we do the same kind of SQL sentence we did in the previous slide and get the same document back. The end result, with the request unit costs below: the direct read cost one request unit, and the SQL API read cost 2.9. So if you are able to optimize your documents towards one kilobyte, you know you can do direct reads at one request unit, but if you go through the SQL API, that's going to be thrice the cost. That's quite a heavy difference. Now, if instead you have one megabyte documents in Cosmos DB, and you spend, I don't know, 100 or 1,000 request units to read that one megabyte document with a direct read, then the additional cost for the SQL API engine to spin up, which is the difference between 1 and 2.9, is not as meaningful. So the SQL API can be used in a more relaxed manner when your items are bigger, but if you really want to optimize your Cosmos DB, you use small items, and then you have to be careful when you spin up the SQL API engine; the spin-up cost is around 1.9 request units, which I think is pretty interesting.
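The point that the overhead matters less for big items is easy to see numerically. The 1.9 RU spin-up figure comes from the example above; treating it as a flat per-query overhead is my simplification:

```python
def sql_query_overhead_pct(direct_read_rus, engine_overhead_rus=1.9):
    """Relative extra cost of a point query via the SQL API versus a
    direct read, assuming a flat engine spin-up overhead."""
    return 100 * engine_overhead_rus / direct_read_rus

# ~1 KB item: direct read costs 1 RU, so the query is roughly 190% dearer.
print(sql_query_overhead_pct(1))
# A large item costing 100 RUs to read: the overhead is nearly negligible.
print(sql_query_overhead_pct(100))
```

With one-kilobyte items the SQL API nearly triples the cost of a point read; with hundred-RU reads the same overhead rounds to noise, which is why small-item workloads are the ones that should prefer direct reads.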
All right, about physical partition growth; this is what I referred to earlier. We have a single physical partition here, and it contains a number of logical partitions, and the logical partitions of course are by the partition key. Now, you've set up 10,000 request units per second of throughput for the container, and I'm not sure, maybe at exactly 10,000, which is the maximum a physical partition can handle, Cosmos DB spins up two physical partitions so there's some headroom, or not, I'm not sure about that one. But let's assume there's one physical partition: it can give data back at the rate of 10,000 request units per second, and it can contain 50 gigabytes of data. As an aside, a single logical partition can contain 20 gigabytes of data; whenever you design your partition keys, never go near 20 gigabytes. So this is the kind of thinking: there are physical partitions behind those logical partitions, and you have no way of controlling how many physical partitions you get. That's why it's very important to understand, and be able to envision, the number of physical partitions, depending on how much data you have and how many request units you utilize.

A little more complex example: your application above now gets a huge load of users. It's Black Friday and you work in retail, and you peak at 300,000 request units per second for that Friday, so that Cosmos DB can really handle the load. And it can handle the load brilliantly: it's just going to scale out the physical partitions, the logical partitions are distributed among them, hopefully you got the partitioning strategy correct so there are no hot paths, and then you get 10,000 request units per second out of every one of those physical partitions. So that's
kind of like perfect scaling, at least in this picture, because earlier you had one physical partition, now you have 30, but the throughput from every physical partition is as good as from the first single one. Now let's assume the busy season has passed, and you're like, all right, 300,000 request units per second is mighty expensive, let's lower the throughput and get some money back. You lower it to 30,000 request units per second, and what happens is that since you have 30 physical partitions, they all get a 1,000 request units per second cap. This might come as a surprise. Earlier, during the peak, there were items in a physical partition being fetched with a throughput of 10,000 request units per second, maybe some very costly queries, maybe a lot of data coming out, or maybe just that many requests going in. Suddenly you have the same physical partition with less overall load, because the busy season has passed, but if there's a hot path, you get throttled and rate limited at 1,000 request units per second. And that has bitten some teams in the bum. To my knowledge there's no way for us developers to merge physical partitions back; in this scenario I would like to bring the number of physical partitions down from 30 to 3, but that's not possible yet, though I'm pretty sure it's going to be. This is of course not something you run into on the first day of using Cosmos DB, but if you are planning your queries around 10,000 request units per second, which is the maximum for a single physical partition, and you scale out and then scale down, you'll see this happen. It's nasty in the sense that it's usually noticed when the application is already in production; this is not something people anticipate when they do the design.
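The arithmetic behind that surprise is simple: provisioned throughput is split evenly across physical partitions, and the partition count never shrinks. A sketch of the model described above (the 10K RU/s per-partition ceiling is the figure from the talk):

```python
def per_partition_rus(total_rus, physical_partitions):
    # Provisioned throughput is divided evenly across physical partitions.
    return total_rus / physical_partitions

def partitions_for(total_rus, max_rus_per_partition=10_000):
    # Rough model: Cosmos DB splits so no partition exceeds ~10K RU/s.
    return max(1, -(-total_rus // max_rus_per_partition))  # ceiling division

peak = partitions_for(300_000)             # Black Friday: 30 physical partitions
print(per_partition_rus(300_000, peak))    # 10000.0 RU/s each at the peak

# Scale back down to 30K RU/s: the 30 partitions remain, so any hot
# partition is now capped far below what it served during the peak.
print(per_partition_rus(30_000, peak))     # 1000.0 RU/s cap per partition
```

The container-level number went down by a factor of ten, but so did the ceiling of every single hot path, and that second effect is the one that surprises teams.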
They're like, okay, this works; then there are new users and they scale up, and okay, this still works, it's amazing; and then they scale down and start wondering how come the same queries are now slower than even before the scale-up. This can be the reason, so be aware.

The next one is general stuff on storing entities. First, about co-locating entities. That's something that goes very much against SQL thinking. Many developers, especially the people who've been developing for 15 or 20 years, didn't start with NoSQL databases; they started with SQL and ACID and normalization and relationships. And now suddenly you have these SQL-like capabilities in Cosmos DB, and then you should maybe think about co-locating entities in the same container, as if putting different types in the same table. If you have an application and you need to fetch a lot of data into a view or into a function in the application, consider co-locating those entities in the same container. That will optimize and minimize the query scope, and it can map very well onto the feature-slice kind of approach you might have in your back-end architecture as well.

The other way to approach this is to think about whole documents. If your documents aren't getting too large, even with some additional information, we've seen it beneficial to store, duplicate data and all, a document that has some of its relational data embedded in it as well. Then, when you have a view, you get the data, and you get all of it. Of course there's a bit of bother about synchronization; some parts of the data might grow stale, and you have to think about that. But sometimes you don't have to worry about creating this snapshot in time, and you can just store the whole thing, cache-like, as a whole document in Cosmos DB.
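Co-location in practice often means a type discriminator on otherwise different entities sharing one container and one partition key value. These example documents and field names (`pk`, `type`) are made up for illustration:

```json
[
  { "id": "p-1",  "pk": "project-77", "type": "project", "name": "Rollout" },
  { "id": "t-10", "pk": "project-77", "type": "task",    "title": "Install sensors" },
  { "id": "t-11", "pk": "project-77", "type": "task",    "title": "Verify readings" }
]
```

A single in-partition query such as `SELECT * FROM c WHERE c.pk = "project-77"` then fetches the project and all its tasks for a view in one go, with no cross-partition fan-out.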
And then the last one, which I've been very happy with and we've been very successful with: using Azure Cognitive Search with Cosmos DB. I think those two go together; they're like very good buddies. Think of Azure Cognitive Search at, say, 100 or 200 euros or dollars a month. Your client or your boss is like, hey, we need this feature and that feature, and by the way, wouldn't a Google-like search be cool, and stuff like that starts cropping up. You've been doing well with Cosmos DB, so you're like, okay, let's put Azure Cognitive Search here as well for the searching capabilities, and then you index your items there and you find the items through it. When you want more detailed information, information that's not part of the search query or the parameters you can search on, the details are in Cosmos DB. What you do is find the ID from the search, and then you fetch the data from Cosmos DB with that direct read we saw earlier. This can result in a situation where your Cosmos DB is actually minimum sized, like 400 request units per second, where you have a lot of items and you can still make 400 point reads a second into it. That's almost one and a half million reads in an hour; that's a lot of users, a lot of queries. You use the search engine up front and then just get the documents by ID. Often when I speak with teams, it's either "it's performant but it costs too much" or "it's not performant at the price we can pay", but then there are these kinds of capabilities for really getting the data out at a cost of nearly nothing, at 400 requests per second, and I think that's pretty amazing. But that is of course something you have to plan from the beginning.

On partition keys. I think I said already that the logical partition max size is 20
gigabytes, so if you think that some subset of your data for a certain topic might get close to 10 or 20 gigs, don't choose a partition key that would gather that amount of data; your partition should never get close to 20 gigs, and I think there would be some performance issues before that. So, if you are struggling with the partition key design, and that's something you need to set, not in stone but close, at the beginning of the project, because as the project grows it's going to be harder and harder to change the partitioning strategy, it's good to know that there are a few strategies that usually do not work. One is index numbering or random GUIDs. Those work fine with relational SQL databases, but if your IDs are like one, two, three, four, or just random GUIDs, and then you want to get data by a surname or a city, or "get me the tasks in project X", what you are actually doing is a query that reaches into all of the logical partitions, and therefore into all physical partitions underneath them. That's a so-called cross-partition query, and it is very expensive from the request unit point of view. So you kind of have to forget index numbering and random GUIDs as partition keys from the get-go. The other thing that doesn't really work: say your application returns the people in a city. You have Shanghai, then you have Helsinki, and then an even smaller town like Kankaanpää, where I was born in Finland. A query for people in Shanghai returns a very different amount of people per partition, and you put a disproportionate strain on the Cosmos DB architecture; that's not an efficient way of partitioning your data. Strategies that might work, depending on your application: if you have users, and you do queries for users and want data about users and so forth, user ID sounds good.
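As a back-of-the-envelope check against the 20 GB logical partition limit mentioned above, you can estimate partition sizes for a candidate key from expected item counts. The numbers and the half-limit "safe" threshold here are invented for illustration:

```python
LOGICAL_PARTITION_LIMIT_GB = 20

def partition_size_gb(items_per_key, avg_item_kb):
    # Rough size of one logical partition for a candidate partition key.
    return items_per_key * avg_item_kb / (1024 * 1024)

def key_is_safe(items_per_key, avg_item_kb, headroom=0.5):
    # Stay well clear of the limit; here "safe" means under half of 20 GB.
    return partition_size_gb(items_per_key, avg_item_kb) < headroom * LOGICAL_PARTITION_LIMIT_GB

# "city" as the key, with 6 million 2 KB documents for the biggest city:
print(partition_size_gb(6_000_000, 2))   # ~11.4 GB -> too close to the limit
print(key_is_safe(6_000_000, 2))
# "user id" as the key, a couple thousand documents per user: tiny and even.
print(key_is_safe(2_000, 2))
```

The same sanity check also surfaces the skew problem: if one key value (Shanghai) holds orders of magnitude more data than another (Kankaanpää), the key fails even before the size limit does.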
That's especially true if you make roughly the same amount of queries per user, or the users have roughly the same amount of data, close enough anyway. Or tenant ID, if the tenants are equal-ish. Or something around an application feature: you have rooms, or reports, or some classification, an ID for things that are equal from an external point of view when you look at the amount of data and the amount of queries they get. Those kinds of strategies are good to think about when you start figuring out which partition keys to go for. So: avoid cross-partition queries at all costs, and avoid the situation where you know that, hey, ninety percent of our queries will be headed to partition key "Shanghai", because that means you are just creating a throughput bottleneck on the single physical partition underneath, like we saw earlier when we spoke about physical partition growth. Be careful with this.

.NET bulk support. If you've been using Cosmos DB for a while, you might be aware of the BulkExecutor library. This .NET bulk support is something that came a year, a year and a half back, and a year ago there was an update that made it even more performant. It is basically an implementation that parallelizes the inserting of items for you: it sends data in batches of 100 items or two megabytes, whichever comes up first, and if a batch doesn't fill up, it waits for 100 milliseconds and then sends it. This is something that really makes uploading or inserting data faster, so if you have to do stuff like that during the silent hours of the night, look it up.

On consistency levels: if you are not very familiar with these, or not really sure that your application is going to have to play with them, then session consistency might be the right call.
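The batching rule described for the bulk support above, 100 items or 2 MB, whichever fills first, can be sketched as a pure function. This is a simplified model of my own, not the actual SDK internals; the real implementation also flushes partial batches on a 100 ms timer and dispatches batches in parallel:

```python
MAX_ITEMS = 100
MAX_BYTES = 2 * 1024 * 1024  # 2 MB

def into_batches(items):
    """Group (item, size_in_bytes) pairs into dispatch batches."""
    batches, current, current_bytes = [], [], 0
    for item, size in items:
        # Flush when the next item would overflow either limit.
        if current and (len(current) == MAX_ITEMS or current_bytes + size > MAX_BYTES):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(item)
        current_bytes += size
    if current:
        batches.append(current)  # in the SDK this partial batch waits ~100 ms
    return batches

# 250 one-kilobyte items -> full batches of 100, 100, then a partial 50.
docs = [(f"doc-{i}", 1024) for i in range(250)]
print([len(b) for b in into_batches(docs)])  # [100, 100, 50]
```

This also shows why a site-keyed container slows the bulk insert down: if only four or five items per window share a partition, every batch ships nearly empty.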
It's often very good. What is interesting to note is that if you feel you have to go to the stronger consistency levels, like bounded staleness or strong, they will cost you double the request units, because the same operations are done against two replicas. That's nice to know before you even start: you are doubling your cost, so do you really, really need this? In some cases you do. For the basics there's very good Microsoft documentation on the consistency levels, so go check it out; even the Azure portal's consistency level selector has a nice demo with notes.

Now let's talk about something called the two-container strategy. It's a pretty common scenario: there's incoming data from a lot of somethings, in this case I'm saying devices, and these devices are situated in sites all around the world. There's a lot of data coming in from the devices, and you want to insert items in batches, because there's a lot of incoming data, and maybe it comes in bursts during the night or the early hours, when the devices send their daily load, for example. It's not exactly streaming, but it might be something like that. And then you have a bunch of consumers, maybe applications, even other databases, that might be interested in the raw device data, but some consumers are interested in the device data by site. If a site has a hundred devices, maybe it's a smart building and you are reading the temperature or the air conditioning data of the rooms. So do you optimize writes or reads? That's kind of a business question: do you want your consumers to be fast, or do you need the inserting of data to be fast? And often, of course, the answer is that both need to be fast. Now, if you start optimizing
for writes, that makes sense, because you get the data in before you do the reading. Very fast writing of this data would mean doing the bulk insert in such a way that the partition key is any hash, just a generated something. We spoke earlier about GUIDs being bad, and now I'm saying they can be used to write data into Cosmos DB fast: the bulk insert sends 100 items or two megabytes, or waits 100 milliseconds if the batch only has two or three items in it, so you get the 100 items and send them immediately, and that one batch goes to one logical partition with some hash, any hash. The bulk inserter just batches the items and sends them into the container, and that is very fast. However, reading that container would be excruciatingly slow, because if you query by site, Cosmos DB has to check all of the logical partitions, and therefore all physical partitions, for the devices whose site property matches whatever parameter was given. If you instead optimize for reading, you of course make site the partition key, so all the device data for a site is in the same location, and when you read by site, bam, it comes out fast. However, when you are inserting data, you get data from all the devices, and the bulk inserter tries to select the ones that go to one site, and it gets like four or five items instead of a hundred, and sends four or five items in a batch. The parallelization is not at 100 items; it's at a level of four to five percent of that. So you can see the issue: one or the other is non-optimized. The two-container strategy can fix that.

This is a real-life production curve of the situation I just described. In this situation, the
This is a real-life production curve of the situation I just described. The customer's system was receiving data from before 5 a.m. until before 6 a.m., there was reading from the container at the same time, and the writes were peaking at, what is it, nine million request units consumed within a few minutes. That is a lot of request units and a lot of required throughput. And if the read load were peaking at the same time (here it is not, but if it were), it would be very hard to estimate the number of RUs required while data is being inserted. It's very hard to control that kind of setup, where one container is the answer for everything.

In the two-container strategy, the key is that there are two containers, one for each operation. Writing goes against a container partitioned by a generated hash, because that's fast, as we discussed, and reading goes against a container partitioned by site, because that's fast for the reads. The process is: the fast bulk insert is done under any generated hash by the bulk inserter, and then a change feed mechanism triggers an Azure Function that takes the items from the write container and stores them in the read container, under the site partition key. This means there is a fast write and a fast read, and in between a bit of complexity with the change feed and the Azure Function that orchestrates and organizes the whole thing.

The benefit is that the write container and read container throughputs can be optimized separately. If the read container doesn't need a lot of throughput during the night, scale it down; if the write container only needs a lot of throughput from 4 a.m. to 6 a.m., you can schedule it down otherwise. This way you can handle the usage peaks of both containers separately.
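The moving parts of the two-container strategy described above can be sketched with in-memory stand-ins. Everything here is a simulation: plain dictionaries stand in for the two containers, and an ordinary function stands in for the change-feed-triggered Azure Function, so this shows the re-keying idea, not real SDK usage.

```python
import uuid
from collections import defaultdict

# Stand-ins for the two containers: partition key -> list of items.
write_container = defaultdict(list)  # partitioned by a generated hash (fast writes)
read_container = defaultdict(list)   # partitioned by site (fast reads)

def bulk_insert(batch):
    """Fast write path: the whole batch lands under one generated hash."""
    write_container[uuid.uuid4().hex].extend(batch)
    return batch

def change_feed_handler(changed_items):
    """Stand-in for the Azure Function the change feed triggers: copy the
    changed documents into the read container, re-keyed by site."""
    for item in changed_items:
        read_container[item["site"]].append(item)

def query_by_site(site):
    """Single-partition read: touches exactly one logical partition."""
    return read_container[site]

batch = [{"device": i, "site": f"site-{i % 4}"} for i in range(100)]
change_feed_handler(bulk_insert(batch))  # in Azure the trigger fires automatically
print(len(query_by_site("site-0")))      # 25
```

In a real solution the handler would be an Azure Function bound to the write container's change feed, upserting into the read container; the simulation only illustrates why both the insert and the site query stay single-partition fast.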
There's one other point, and it comes from this diagram; these again are real production diagrams, actual telemetry. Above we have the single-container approach and below the two-container approach, and as you can see, in the model below the write operations finish at something like 5:25 a.m., and that is the point when all the data is in Cosmos DB. In the single-container approach above, all the data hasn't been written by then, or we don't know whether it has all been written to the database. With the two containers below, the data is written earlier, and this means, from an enterprise architecture viewpoint, that the other consumers, other databases like Synapse for example, if you are using Synapse Link with Cosmos DB and have some kind of analytical store inside Cosmos DB, can start enjoying all of those device items at something like 5:30. That is not possible with the single-container approach. So the two-container approach is sometimes a helpful pattern.

To summarize, I've categorized these tips into easy optimizations and slightly harder optimizations. The easy ones: tell your friends the SQL API is not SQL. Turn diagnostics on if you feel like there's something iffy going on. Aim for rate limiting; don't be afraid of rate limiting, expect rate limiting, be happy about rate limiting. Optimize your indexing; don't just forget it, it's so easy to optimize, so look at it at the beginning of development and start playing with it. Schedule your throughput if you can expect and predict the usage, and if you cannot, if it's unpredictable, use autoscale. Large items are an anti-pattern: the smaller your items, the more you can optimize your Cosmos DB. And if you can read items directly instead of querying through the SQL API, do that, because spinning up the SQL query engine can cost something like 200 percent more, which is surprising, but that's with very small documents.

The slightly harder optimizations are harder because they require more code changes in your application or solution.
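The "schedule your throughput" tip above can be sketched as a pair of plain functions, one per container. The hours and RU/s figures are invented for illustration; in a real solution you would get this effect with autoscale or with a timer-triggered job that calls the SDK's throughput-update operations.

```python
def write_container_rus(hour: int) -> int:
    """Provisioned RU/s for the write container: high only around the
    nightly device burst (here assumed 4-6 a.m.), minimal otherwise."""
    return 50_000 if 4 <= hour < 6 else 1_000

def read_container_rus(hour: int) -> int:
    """Provisioned RU/s for the read container: scaled up for business
    hours, scaled down overnight while the writes are peaking."""
    return 10_000 if 8 <= hour < 18 else 1_000

# The two schedules never peak together, which is the point of splitting them.
print(write_container_rus(5), read_container_rus(5))    # 50000 1000
print(write_container_rus(12), read_container_rus(12))  # 1000 10000
```

With a single container you would have to provision for the sum of both peaks all the time; with two containers each schedule only covers its own peak.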
Or they require changes to how the documents are inserted, or how the bulk insert is done, things like that. Physical partition growth is a hard one, because if you let the number of your physical partitions grow very high, it's very hard to merge them back down. On storing entities: if you approach the storing of entities with a SQL mindset, then co-location, storing whole documents where some of the relational data is duplicated, might be hard to envision; and sometimes using Azure Cognitive Search makes sense too. On partition keys: if the strategy isn't sound from the beginning, making changes during the project is very annoying; it is possible, it's being done, but it just takes a lot of work. Bulk support changes how you insert the data. On consistency levels: if you're not familiar with them, don't play with them, or play with them but be careful, because they might cause surprising issues; your customers or the end users of the application might come back to you and say, hey, what's going on, I think I'm seeing different data than my mate over there. So be careful with the consistency levels. And the two-container strategy can help you create a more granular architecture inside Cosmos DB, thereby giving you more control and more capability to define throughput per operation.

And now I'm done. Thank you everybody, it was a pleasure to be here, and see you somewhere physically hopefully very soon. Bye!
Info
Channel: Azure Cosmos DB
Views: 339
Id: uuWnu43Eujo
Length: 46min 1sec (2761 seconds)
Published: Tue Apr 20 2021