Realm: Creating Sophisticated GraphQL APIs in Minutes

Captions
Hi, folks. My name is Drew DiPalma, and today we're going to be talking about how to create sophisticated GraphQL APIs quickly and simply. First, we're going to go over a little bit of GraphQL: why it emerged and what it is at a high level. Then we'll dive into the details of how it works alongside MongoDB Atlas within MongoDB Realm and run through a quick example, hopefully giving you the tools and showing you how everything fits together so you can start building your own GraphQL APIs on top of MongoDB.

Now, we're actually going to start with REST. If you've built an application before — a web app, a mobile app, or more of a backend application — you've probably interacted with REST APIs, and you're probably pretty familiar with how it all fits together. Let's take a little retail case. Imagine a simple API endpoint which returns a user's orders: we provide a user ID and we get a list of orders in return. Now, this is fine. It's obviously very simple, but it's easy to set up and easy to get started, and this is really the model for how API endpoints work.

Part of the difficulty comes when you start to grow and expand and add complexity to your application. Take that original endpoint where we provide an ID and get some orders back, and say we want to start adding more details to the orders. There are a couple of ways we could go about that. We could add details to the existing API endpoint; that might be difficult, because it could either break other applications or add data that isn't getting used by them. We could version our API; then, eventually, we'll probably want to get some of those older applications onto the new API version. We could create a new endpoint, a "recent orders detailed" endpoint; now we're going to have to document that and make sure everybody knows the endpoint exists and when to use "recent orders" versus "recent orders detailed." We could add a detailed flag onto the recent orders endpoint, which still carries that documentation burden. Also, you might notice that both the "recent orders detailed" endpoint and the detailed flag don't really tell you which orders you'll get details about. Do you get details for all the orders? Maybe you only want them for the most recent order. Maybe you actually want a separate endpoint, "order detailed by ID," where you're just passing a single ID to get that order with the corresponding details.

I think what really comes across here is that there's no one way to deal with this and build out APIs. Honestly, what tends to happen is that as you add complexity to your application, changes start to impact other apps. It's tough to manage all the applications that interact with a single API. You have to maintain versions, and you have to document and detail out the complexity that's encompassed within the REST API. Oftentimes, what this ends up with is applications that aren't using endpoints efficiently — maybe getting too much data, or having to work with multiple REST APIs together and deal with all of the intricacies and nuances that come along with that.
Now, there have been some great tools that help get at these problems — tools that make it easier to create, manage, and test APIs, or to automatically generate documentation — but these don't really get at the core of the issue. I think this illustrates a lot of what GraphQL is trying to solve with its approach.

At a high level, what is GraphQL? It's an API query language, and a lot of it is based around the concept of a single schema: a complete, understandable description of the data provided by your API. At its core, it takes this single schema and pairs it with a query language that's more declarative in nature, letting you query for exactly the data that you want and only receive that data in return from your API endpoint. Overall, this lets you combine lots of different data — either from a single database or across many different databases — bind it up into a single schema, and then give your applications access to that schema, letting them understand how all the data looks and giving them the ability to ask for just the pieces of data they need.

In practice, you might have a single GraphQL endpoint for all of your e-commerce data — your users, your orders, the details of the orders — and if you were just dealing with the orders themselves, your GraphQL query might look something like this: you would ask for your orders objects where the ID is equal to a set of IDs, then choose the fields you want — in this case, the ID field and the details field — and you could query on other fields here as well. You can add and subtract the fields that you want returned from that API endpoint, and as a result, the GraphQL API is just going to give you the exact fields that you need.

The result is that it's simpler to make precise requests. The benefit is that you're only bringing back the data your application needs, which makes it easier to build performant applications and easier to write clean application code. You also know a little more predictably when your data model or your schema changes, and one nice thing about GraphQL is the focus on ensuring that updates to your schema are additive in nature. You have a schema model that evolves over time: you simply add new data as you want to take advantage of it, with the assurance that your original data stays in place. This gets around a lot of the versioning, combining of multiple APIs, and dependency questions that you might run into with REST.

How does this relate back to MongoDB? Earlier this year, we released a GraphQL service in MongoDB Stitch. One thing you might notice here is that this slide says MongoDB Realm: we've renamed MongoDB Stitch to MongoDB Realm, really as an indication that we've merged with Realm for all of their mobile application building, tooling, and bi-directional sync. Those are new features you might see if you're familiar with Stitch. This talk is just going to focus on the GraphQL aspect of things, which hasn't changed since we released it in Stitch earlier this year; it's really just that the name has been updated. Our goal with this GraphQL service is to make it simple to set up, manage, and continue to build, extend, and customize a GraphQL API on top of MongoDB data.
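Going back to that orders example for a moment, here is a minimal sketch of the kind of query being described. The type name (orders), the _id_in filter operator, and the field names are illustrative assumptions rather than the exact schema from the talk:

```javascript
// Illustrative only: ask for orders whose _id is in a given set, and pick
// exactly the fields the application needs. Adding or removing fields here
// changes precisely what the API returns -- nothing more, nothing less.
const RECENT_ORDERS = `
  query RecentOrders($ids: [ObjectId!]) {
    orders(query: { _id_in: $ids }) {
      _id
      details
    }
  }
`;
```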
Now, I think one of the most important things here is that we've built this out to be serverless — a completely hosted, serverless API, with no need to think about creating your own web service or scaling up and down. With that come some key features. As we talked about, GraphQL is really based on the concept of a single schema, and we make it simple to generate your schema: not only to create that initial schema, but also to validate it by scanning your data. In addition to the schema and the endpoint, we generate a set of MongoDB query operators, giving you a lot more flexibility to query your data. You can also define your own custom resolvers and types with Realm functions, giving you the ability to tap into these serverless functions when you want more complex interactions with your data. Finally, we look at some of the places where the GraphQL spec isn't as prescriptive — specifically integrating authentication and data access rules — so we're not just providing an endpoint, but really giving you, end to end, what you need to build out your application.

Without further ado, let's start building. What I've built for this project is a simple stock tracking application which shows how all of these features of GraphQL work together, and hopefully also shows you why GraphQL is a better way to build applications. I'll give you a quick tour of the app, and then we'll go through each of the steps to build this out in your own MongoDB cloud environment. Flipping over to the app, you can see that this is a simple stock application. I can search for stocks and start pulling from a GraphQL API in real time. We've got integrated authentication up here. When you log in, you get a little more data, especially if you're a premium member. If you downgrade your account, you don't get to save stocks and you don't get to see that premium information — the extra information about stocks — and if you upgrade, you get that information back again. This is all built on top of MongoDB Realm, with a single GraphQL API pulling the information together. It's actually pulling the latest stock price, and this updates in real time. We'll show you over the next couple of minutes how to build this app yourself.

Let's get started. We're going to break this down into four steps. First, we'll kick things off by generating a quick GraphQL API on top of some data that we have. Then we'll look at extending the model by customizing the schema and adding more sophisticated logic with custom resolvers. Next, we'll add authentication and rules, showing you how you can start to productionize your application. Finally, we'll leave you with some tips on how to iterate and grow over time.

Starting with data. One thing I want to make sure comes across is that when we say starting with data, we don't mean that you have to have a full production database with everything decided already. You can start with all sorts of data: an existing database, the Atlas sample data, a single document that you load that represents what you think your data is going to look like, a file loaded with mongoimport (text, CSV, or JSON), or a public dataset or API. The important thing is that you find some data to get started with, because it's going to serve as the basis for your schema.
The way this works in practice is that you'll start with some data in your MongoDB Atlas instance. We'll create a Realm application that's going to serve as the basis for your app, and then we'll start defining some rules for our data. One important thing to note here is that by default, all access to data is off in MongoDB Realm, so you'll have to create some rules before you can start developing and working with your data. Then we'll show you how to generate and validate a schema, and how to add relationships so you can link collections together. This is a pretty key concept in GraphQL, giving you the ability to link different types together and to traverse data from one type to another. Finally, we'll show you how that schema automatically gets translated to GraphQL. Once you have your GraphQL API set up, we'll show you how to review your schema and how to test using GraphiQL.

Let's get started. We'll move over to our Atlas instance. You can see that I've got a cluster up and running with some data stored in it. There are really just three collections we're going to be working with. First is some basic data about stock tickers. Second is more in-depth data about the companies and stocks; this is where we start to bring in some of that premium data that we showed. The third collection is the user data — custom data about the users who signed up for the application. This is going to give us the ability to save stocks, to mark users as premium or non-premium, and to do some interesting things with that which we'll show you later.

Starting with a Realm application: if you don't have one, you can go over to the Realm tab in MongoDB Cloud and create a new application. I've already got one set up. The first thing we'll do here is start working with our data, so I'm going to go over to rules. You can see that I've already got some rules set up. If you don't, you can just link to a collection — select your database, select the collection name — and we give you a bunch of default templates to get you started with rules a little more quickly. I've already got mine set up, which is great.

The next thing you'll want to do is go over to the schema tab, and if you don't have a JSON schema already, generate your schema. What we'll do here is go through and scan your data, picking out the types in the collection and giving you a JSON schema. Here we're just going to scan 1,000 records; it runs really quickly and gives us our schema. If we want to validate that schema, we allow that too: it goes through and scans your records, looking for any documents that don't meet your schema. We do have logs that will let you know when a document doesn't match your schema, but it's great to get ahead of that and check beforehand. As you can see here, it scanned 1,000 documents and all of them are in compliance with our JSON schema — I think that's a great indicator that we can keep going.

The last thing I want to talk about on this page is relationships. If we want to create a relationship from our basic record to our premium record — essentially allowing you to pull in some additional data and join those two records together — we can do that. Here, we just use Add Relationship and select a field: the ticker in our basic record links to the premium record in the stock data, via the ticker field over here.
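For reference, once the app configuration is exported, a relationship like the one just described ends up as a small piece of JSON on the source collection. A rough sketch — the database and collection names here (stockdata, basic, premium) are assumptions based on the demo, not the exact config from the talk:

```javascript
// Hypothetical relationship config for the "basic" collection: its ticker
// field points at the document in the "premium" collection whose ticker matches.
const basicRelationships = {
  ticker: {
    ref: "#/relationship/mongodb-atlas/stockdata/premium",
    foreign_key: "ticker",
    is_list: false, // a single linked document, not an array of them
  },
};
```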
Just like that, you can link those two collections together. Now, once you've set this up, we've actually already generated a GraphQL API for you, and there are a couple of things you can do to start to understand what that API looks like and how to start using it. If you're interested in the schema itself, we show you all of the schema that we generate — all the types — and you'll see that not only do we have these basic record types, we also generate a lot of the MongoDB query operators I mentioned before, which give you more sophisticated ways to query your GraphQL API right off the bat. For example, you can query for documents where the ticker exists, or where the ID is in an array of different IDs, and you get many more operators beyond that. There's a lot there to get you started.

Now, if you want to start querying your API before you actually set up your frontend, we also embed GraphiQL into our UI. This is a great tool to kick the tires, get started, and really understand and tune your data model. You have your documentation over here, which lets you go through and look at the API that you've created, and we also have a way to query directly in the UI. I've got a few queries set up here in my query history. First, maybe I just want to query a basic record, and you can see here I'm actually using the ticker to link to the premium data and get the CEO as well — that shows you how the relationship works. As we talked about, the nice thing about GraphQL is you can choose which fields you want and customize your query to work for you. If I want to match, let's say, specifically the Apple ticker, I can just query right there on the basic record, and the information is returned. That's a simple way to get started building. Let's continue to move on.

Now, what we've shown up to this point is pretty simple; let's get a little more involved. The first thing I want to talk about is customizing your schema. One of the reasons I like this is that you have the ability to add validation logic and a little more sophistication at the schema level, and it's all declarative — it fits into the simple JSON schema syntax. What it really does is save you from writing more API definition or adding more logic to your resolvers. There are a couple of things you can do that I'll walk through here. The first is renaming fields: you can use the title property in JSON schema to rename any field. You can add required fields — again, just using the standard required property to indicate which fields are required versus optional — and you can even use JSON schema syntax to validate. Here, for the different exchanges that can be returned, we put an enum listing the exchange options we have, and for market cap, we set a minimum. You can use any JSON schema validation syntax here, and it's going to be validated on both reads and writes; you'll get error messages in the logs if documents don't validate correctly. This all carries over into that automatic validation I showed you before. That's one way to extend, but it doesn't really give you an extension of the API itself. To do that, you can obviously add more JSON schema, add more data, generate more schema, or add more collections.
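Roughly what those customizations look like in the JSON schema itself. The field names and allowed values below are illustrative assumptions, but the keywords (title, required, enum, minimum) are the ones described above:

```javascript
const basicRecordSchema = {
  title: "BasicRecord",               // rename the generated GraphQL type
  bsonType: "object",
  required: ["_id", "ticker"],        // required vs. optional fields
  properties: {
    _id:       { bsonType: "objectId" },
    ticker:    { bsonType: "string" },
    exchange:  { bsonType: "string", enum: ["NYSE", "NASDAQ"] }, // restrict allowed values
    marketCap: { bsonType: "long", minimum: 0 },                 // validated on reads and writes
  },
};
```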
We also have the concept of custom resolvers. This really gets to that second point we talked about a little earlier about why GraphQL is great: it lets you bind together lots of different data sources and APIs under the same schema, and that's really why we provide custom resolvers here. To get started, there are just a couple of quick steps. You decide whether the custom resolver you're creating is going to be a query or a mutation. You point it to a Realm function, which is going to actually define the logic. Once you do that, you define the input and payload types — again with that same JSON schema syntax — and that defines what gets translated into the actual GraphQL API and how your types look and feel.

A couple of quick notes on building with Realm functions. For the most part, it's just standard JavaScript, but there are a few things that are a little nuanced here. One, you have simple access to MongoDB Atlas: there's a built-in concept of context that gives you direct access to Atlas and an easy ability to query, but it also gives you information about the request — say, what the origin is, what IP it's from, where it's executing — and information about the user: who the user is and what data is linked to their user profile. It also gives you access to other functions and services, so you can call one function from another, things like that. You have access to global variables too, for example if you wanted to reference an API key or something like that. Similar to GraphiQL for working with the GraphQL API, we also have a live development console here, which lets you execute your function and switch the user you're executing as, so you can get a sense of how the function behaves as users with different permissions interact with it.

Let's see how that all works together. Here we've got GraphiQL again. We have this new record-with-price custom resolver that I defined, and if I execute it, it's going to do a couple of things: it's going to give me some of that premium information from the premium collection, as well as the latest price — a real-time updated price pulled from an API endpoint. I'll show you how to do that in a sec. The first thing you want to do is define that custom resolver. You can see here we've got record-with-price set up as a query type and linked to a function, and all we've done is define the input type — it takes a ticker, which tells you which stock you want to get with the price — and the payload type, which is very similar to the payload for a premium record but has this latest price built in. Really, that's all you have to do.

When you want to actually define the logic, you go over to functions. On the function page, you can see we've got a couple of key functions here; we're going to look at the stock-with-price function, which puts together all the information served from that custom resolver. The first thing we do is look at the user who made the request. We get an API token that helps us interact with this price API that we have. We check to see if the user is a premium user, which contextualizes the information we return: if they're not a premium user, they won't actually get the data from the premium collection.
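Putting those steps together, here is a rough sketch of what a resolver function along these lines might look like. The collection, value, and function names (stockdata, basic, premium, priceApiKey, getLatestPrice) and the isPremium field are assumptions for illustration, not the exact code from the demo:

```javascript
exports = async function stockWithPrice({ ticker }) {
  // Built-in context gives direct access to Atlas, the calling user, values, and other functions.
  const db = context.services.get("mongodb-atlas").db("stockdata");

  // Custom data linked to the user tells us whether they're a premium member (field name assumed).
  const isPremium = context.user.custom_data && context.user.custom_data.isPremium === true;

  // Global values/secrets, e.g. an API key for the external price API.
  const apiKey = context.values.get("priceApiKey");

  // Call another Realm function to fetch the latest price in real time.
  const latestPrice = await context.functions.execute("getLatestPrice", ticker, apiKey);

  const basic = await db.collection("basic").findOne({ ticker });
  // Only fetch premium data for premium users (the collection rules enforce this anyway).
  const premium = isPremium ? await db.collection("premium").findOne({ ticker }) : {};

  return { ...basic, ...premium, latestPrice };
};
```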
If they are a premium user, then they will — and that's all decided with a rule, so we don't even have to think about it in the function logic. We then get the latest price from another function, go to the MongoDB instance to get both the basic and the premium information, and then bind everything together and return it. It really is that simple. Now you know how to define a custom resolver, and you've seen a little bit of how we use one to pull up-to-date information for these different stocks.

Now, let's start thinking about how we add authentication, how we add rules, and how we start integrating this into our frontend. Starting with authentication: one of the nice things about MongoDB Realm is that there's a whole bunch of built-in authentication providers — everything from email/password and anonymous to social providers like Google and Facebook — but MongoDB Realm also lets you bring your own auth. If you're using something like Cognito, Auth0, or AAD, you can integrate by passing the token to MongoDB Realm. Not only can you set up authentication, you can also link custom data for your users, and this is really what's going to let us take that GraphQL API and start integrating it into our application. We'll have the auth, so we know which user is making the request, and we're also going to use custom data to point to a specific collection in our MongoDB Atlas instance that holds some user data. This lets us make better decisions about rules and also brings context about the user into the frontend of the application.

Specifically on the rules end of things, there are a couple of key things to call out. We showed some really simple examples of rules as we were getting started, but now let's talk about how we take those rules and get them ready for production. With rules in MongoDB Realm, you can define read and write access at the field level for your data. You also have role-based access, so you can have different permissions from one role to another or across different collections. You've got a nice way to design and start building out your rules in the UI, but you can also really tweak and customize them with code. We have expressions that can look at the document either pre-operation or post-operation if it's a write; we can bring in information about the request — what the arguments are, what IP it comes from — and we can bring in user data. You even have access to functions and services, or to MongoDB comparison operators, so you can build a really sophisticated rule. You have this nice UI to click and build things out, but there's also a fully advanced mode where you can edit individual expressions or the rules as a whole. This is a good time to point out that everything we're showing in the UI breaks down into a set of JSON and JavaScript files; while we do have this nice UI to build on, if you ever want to edit locally or integrate with a repo such as GitHub, we give you those hooks as well. We're not going to dive into that too much in this presentation, but it's good to call out at this point.

Now that we've talked about auth and rules, I want to talk a little about how you actually start bringing this into the code of your application. You'll be using two things to do that.
The first is the Realm SDKs, which provide access to authentication and user data and really let you maintain that connection to MongoDB Realm. The second is the Apollo Client, which gives us access to GraphQL, so we'll be able to write our GraphQL queries and work against that GraphQL endpoint we created in our MongoDB Realm application. The reason we use Apollo: for one, our purpose with GraphQL is really to integrate with the tools the community uses, and Apollo is a really popular one. There are also some nice things built in — not only do you have all the hooks to integrate with your GraphQL queries, you also get built-in local caching, which helps because you often end up reusing the same queries as you render data in your application, and you want those cached so you're not over-fetching or over-querying.

Let's see how this all fits together, going back to our application and taking a look at our users and our rules. On the user side of things, you can see we've got a nice console here with some user data. We have two providers set up. One lets users log in anonymously — this is how we have that public, non-logged-in view. The second is email and password authentication. You can see that we set this up pretty quickly — automatically confirming users, no password reset set up yet — but it gives people the ability to log in and get an identity. We're then able to enrich that identity with custom user data. You can see we've got custom user data enabled, pointing to our MongoDB Atlas cluster, our user database, and our users collection, and we're saying that the ID of the user in MongoDB Realm is equal to the ID field in that collection. What happens is that when a user logs in or makes a request, we pull that data, so you're able to get metadata about the user. In this case, we're using it to track the stocks the user has saved. If I log in on my computer and then check on my mobile phone or another computer, whenever I log in, that data comes in with the authentication without having to make another request — definitely more efficient in that regard — and we're also keeping the premium field there.

This takes us nicely into our rules, so let's go over and take a look at some of the production rules we've got here. For the basic data, we have read access for everyone. You'll see that we added some field names; you don't really need to do this if you've checked "all additional fields," which just gives you access to the whole document. We have this public access role, and it's always going to be chosen because we've given it an expression that's trivially true: whenever someone queries this collection, they match that public access role. Next, we've got our premium records, where we want to add a little more on top of that. What we're saying is that when someone is a premium user, they're going to have access to the whole document, and the way we do that is to check the user's custom data and see whether they are a premium user.
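As a sketch, the roles described here boil down to something like the following when expressed as rule configuration. The isPremium field name is an assumption; %%user.custom_data is the expression expansion that reads the caller's linked custom user data:

```javascript
// Premium collection: only callers whose custom data marks them as premium
// match this role; anyone who matches no role gets no permissions at all.
const premiumRoles = [
  { name: "premiumUser", apply_when: { "%%user.custom_data.isPremium": true }, read: true, write: false },
];

// Basic collection: a public-access role whose apply_when is trivially true,
// so every request matches and gets read access.
const basicRoles = [
  { name: "publicAccess", apply_when: {}, read: true, write: false },
];
```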
If that field is true — and this is one of those expressions I was talking about, which give you access to things like the user and their custom data — then the user gets this role. If not, they don't have any permissions. It's good to point out here that if a user makes a request on a collection and they don't match any role, they have no permissions at all.

Finally, we've got our user collection. Right now, we've got read and write on all fields. If we wanted to be a little more sophisticated about when a user is able to upgrade to a premium user, we could add a more sophisticated role there, but what we're doing right now is just making sure that the ID of the user making the request is equal to the ID field. This ensures that a user can only read and write their own profile data: you can't access another user's profile, and you can't write to another user's profile. That's pretty simple. With just this authentication and these rules, we can start building out our application with a little more confidence, knowing that our data is secure.

Let's see how that's done next, going over into our application code. The first thing I want to show is the SDK. We've got our MongoDB Realm SDK here and we're initializing it. Then we're logging in: we have this method for getting an access token which either refreshes the access token, if the user already has one stored in a cache somewhere, or creates an anonymous user if not. Our login button then has to handle converting that anonymous user into a fully fledged username-and-password user. As we move into the application logic itself, there are a couple of things we're doing. We're getting our saved stocks by looking at the user on the MongoDB Realm client and reading them from their custom data, and we're doing the same thing for the premium flag. When we're finding a stock, this is where the Apollo Client comes in: we have this find-stock query, which is a standard GraphQL query that I'll show you in a minute, and Apollo gives us some nice ways to understand whether that data is loading, whether there's an error, or whether there's data coming back. Really, all we're doing is taking this data, rendering it, and wrapping some nice UI around it — but this is a lot of the core of the application logic.

Now, if we look at the GraphQL operations, there are really two that we use. The first gets our record with price, with a lot of fields here. If we wanted to, we could always change or update these fields: if I'm not using industry tags or not using sector, we just take those out of the query and they stop getting fed back to the application. That's one of those nice things where you really have a lot of control over the data that shows up in your application with GraphQL. Next, we've got the ability to update a user, where a user can update their own object: they can set whether they're a premium user and they can save stocks. Again, this is really simple to write. If we wanted a little more precision here, we could definitely add more rules that change what a user could and couldn't set; for now, we haven't really needed to do that.
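A condensed sketch of that client-side wiring, assuming the realm-web and @apollo/client packages. The app ID, the field names, and the shape of the recordWithPrice operation are placeholders and assumptions based on the demo, not exact code from the talk:

```javascript
import * as Realm from "realm-web";
import { ApolloClient, HttpLink, InMemoryCache, gql } from "@apollo/client";

const app = new Realm.App({ id: "<your-realm-app-id>" });

// Reuse an existing session if there is one, otherwise log in anonymously.
// The login button would later swap this for email/password credentials.
async function getAccessToken() {
  if (!app.currentUser) {
    await app.logIn(Realm.Credentials.anonymous());
  } else {
    await app.currentUser.refreshCustomData(); // also refreshes the session token
  }
  return app.currentUser.accessToken;
}

// Apollo talks to the Realm GraphQL endpoint, attaching the Realm access token
// to every request; InMemoryCache provides the local caching mentioned above.
const client = new ApolloClient({
  cache: new InMemoryCache(),
  link: new HttpLink({
    uri: "https://realm.mongodb.com/api/client/v2.0/app/<your-realm-app-id>/graphql",
    fetch: async (uri, options) => {
      options.headers.Authorization = `Bearer ${await getAccessToken()}`;
      return fetch(uri, options);
    },
  }),
});

// The find-stock operation described above (field names are assumptions).
const FIND_STOCK = gql`
  query FindStock($ticker: String!) {
    recordWithPrice(input: { ticker: $ticker }) {
      ticker
      name
      latestPrice
    }
  }
`;
```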
Next, let's look at how we continue to iterate. What are some of the tools MongoDB Realm gives you to keep building out your application — not only to get something out there as a proof of concept, but to run it as a production application you keep building for the long run?

Starting with static hosting: this is just a really nice, easy way to host your content and your website, and it's actually how we're hosting the example stocks app that I showed before and that we've been building throughout the talk. We're using the standard URL, but you can also go in and configure your own — a custom domain, things like that. In addition to that, I want to talk a little about troubleshooting with logs. We'll go through and show this in the UI in a sec, but the nice thing about logs is that they comprehensively show all of your requests: not only the high-level performance stats, but information about the user and the request origin, anything you console.log, and information about the rules. If we go through here, you can see some high-level stuff, and you can also see a little more on the GraphQL query that was executed, as well as rule performance metrics.

What's most impactful here is to just show all of these things live. Looking first at hosting, you can see that, as I was showing in the slides, I just uploaded my bundle to the static content hosting section of MongoDB Realm, and that's hosting the application for me. If I go to logs, you can see all of the different operations I've been making: not only the GraphQL operations, but also the schema generation and schema validation from earlier. Not only does this show the GraphQL query, it shows the rule performance metrics. This is great if you're not sure which rule is being evaluated, or if for some reason you're seeing slightly slower performance than you expected — it's a great place to go and troubleshoot.

When you package all of this together — everything from the GraphQL API to rules to static content hosting — you get the nice demo we showed before: the ability to search for stocks, save and remove stocks, see information, and have the price update in real time. This is what I would call a pretty sophisticated API, pulling real-time information and combining multiple collections, and there are all sorts of ways you can extend it. Think about integrating other APIs, other databases, other types of data; custom resolvers and the GraphQL service really give you the power to do all of those things.

A couple of things I want to wrap up with as we end this talk. Folks are always really interested to know about pricing; I think that's always top of mind. One nice thing is that because this is a serverless system, it's fully consumption-based in its pricing. There's compute — how long your requests take to run — and you get 500 hours free per month. There are Realm requests, at only $2 per million requests, and you get a million of those free per month. And then there's data egress — the data you're sending from your GraphQL API to your application — and you get 10 gigs of that free. One last thing to point out is that this doesn't include any of the static content: static content hosting is free, we give you 100 gigs of egress, and we're a little flexible on that number as well. The pricing is really just for the dynamic content serving data API requests.
Now, to talk a little about where to go next from here: the code for this is all up on GitHub, and I encourage you to check it out — it's great for bootstrapping and starting your own project. I'd definitely encourage you to learn more about the GraphQL service; while we've really gone end to end here, there's a lot more you can do, especially when you think about the ways to extend with functions, custom resolvers, or rules. I'd also encourage you to keep building and keep giving the team feedback — we take feedback on mongodb.com and through Intercom — and we really look forward to seeing everything that you're going to build. Thank you very much. Hopefully you have a chance to check out some of the other MongoDB Realm talks and enjoy the rest of MongoDB.live. Thanks.
Info
Channel: MongoDB
Views: 5,971
Keywords: coding, database, mongodb, mongodb on google cloud, technology, mongodb.live, cloud database, nosql, MongoDB cloud, software, virtual event, database as a service, developer, free cloud database, mongodb on azure, cloud, stem, serverless, tech, engineering, MDB, mongodb atlas, software engineer, dbaas, engineer, conference, code, mongodb on aws, data
Id: eiUYYYiG7F0
Length: 42min 37sec (2557 seconds)
Published: Tue Jun 09 2020