FaunaDB Basics - The Database of your Dreams

Video Statistics and Information

Captions
When you build an app, one of the main pain points you'll face is your database. You often hear developers debate the merits of SQL versus NoSQL: on one hand you have safety, security, and consistency, and on the other you have flexibility, scalability, and productivity. But what if there were a database that could put an end to this debate once and for all? FaunaDB is the cloud database I've always dreamed of. It's entirely serverless, so we don't have to provision anything, and beyond the free tier we only pay for what we actually use. It's as easy to use as a document database, and you can manage your data from its web interface or the command line. It's extremely fast, scales infinitely in the cloud, and is the first database in the world to implement the Calvin protocol for partitioned database systems. Most important, though, is its ability to handle complex data-modeling use cases like the ones you'd find in a relational, graph, or time-series database.

Over the next few minutes, you'll learn everything you need to know to get up and running with FaunaDB. We'll use it to model the data relationships you'd find on Twitter: a user has many tweets, users can follow each other, and a user can retrieve a feed of tweets from all the users they follow. We'll write our code in Node.js, and by the end of the video you'll have an API you can use to connect to your frontend application. If you're new here, like and subscribe, and check out the full write-up on fireship.io to follow along.

The first thing you'll need today is a free Fauna account. Once logged in, you'll see the Fauna dashboard. At this point you won't have a database, so go ahead and click the button to create a new one. This is what's known as a top-level database, and an awesome thing about Fauna is that it supports multi-tenancy. It's beyond the scope of this video, but it's a cool feature you should know about: a database can have an unlimited number of child databases, allowing you to scope data and privileges to a specific organization or team.
In most cases, though, you'll just need one single database for your entire application. Now that we have a database, we'll create a collection. A collection is like an SQL table and works very similarly to other document-oriented databases like MongoDB or Firestore: a collection is kind of like a folder that contains many documents, and you can make queries against that collection to filter out the set of documents you need for your frontend UI. When you create a collection, you give it a name, which is usually plural; in our case it will store all of our user data.

A unique thing about Fauna is that you'll also notice an option for history, which retains all of the changes that happen to a document over time. When you write to the database, Fauna never actually changes a stored document. Instead, it creates a copy of the original with the changes, then archives the original document to the history. That's extremely useful when working with time-series data, and also for time-traveling through any changes to your data. You also have a TTL, or time-to-live, option, which can automatically delete ephemeral data that you no longer need.

Now that we have our users collection, let's go ahead and add a document to it. A document is represented as a plain JavaScript object, and the data you save here does not need to follow a rigid structure. However, when you query multiple documents from a collection, the query is based on the keys in this object, so if we want to match all of the users with a certain name or email address, we'll want to use the same key names in every document. Go ahead and click save, and you should see the document in the collection. You'll notice that a unique ID was automatically assigned to the document. If we expand the document, you can see that in addition to our custom data, we also have a ref property that points to the users collection along with that unique ID. The reference is very important, because eventually we'll use it to join data together from multiple collections.
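As a concrete illustration, a stored user document as shown in the dashboard looks roughly like this (the ID, timestamp, name, and email here are made up for the example):

```
{
  "ref": Ref(Collection("users"), "277574932383662500"),
  "ts": 1602720000000000,
  "data": {
    "name": "bob",
    "email": "bob@example.com"
  }
}
```

The ref is the document's identity, ts is the timestamp of its last write, and everything under data is the custom payload we saved.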
In our case, for example, we could save a reference to this user on a tweet document so we know who a tweet belongs to, which is very similar to a foreign key in an SQL database. At this point, I'm going to add a few more users to the database, and now we need to think about how we want to access this data.

There are multiple native APIs you can use to access data in Fauna. The first one you might be familiar with is GraphQL. That's not the API we'll be using in this video; however, if you're familiar with GraphQL, you can actually upload a schema to Fauna and it will automatically create the collections and indexes required to retrieve your data based on that schema. It's really powerful stuff, but I think to really understand Fauna you should learn its custom query language, called FQL. You can execute FQL right here in the console, or you can install fauna-shell on your local system to execute it from the command line, and a little bit later we'll learn how to use it in Node.js. FQL is a functional language and is very intuitive and flexible, but there is a lot to learn, and I recommend keeping the FQL cheat sheet nearby. We'll look at about ten of the most common functions, but keep in mind that FQL can do a lot more than I present in this video.

One of the first functions you'll want to know is Get, which retrieves a single document based on a reference. Get is just a function that takes a reference as an argument. We make a reference with the Ref function, which takes a collection and a document ID as its arguments. If we then execute the query, we get the document data back as a result, and as an added bonus, if we hover over the information icon, it will tell us exactly how many bytes were transferred, along with a bunch of other useful information.
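Run in the dashboard shell, that whole read is one expression (the document ID is a placeholder; use whatever ID your dashboard assigned):

```
Get(Ref(Collection("users"), "277574932383662500"))
```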
Reading a single document is easy when you have the ID, but in the real world the document ID might not be readily available, and that brings us to our next big topic: indexes. An index provides a way for you to define how to query documents from a collection. For example, we might want to fetch a user based on their email address or username instead of the actual document ID. A collection can have millions of documents, so an index provides a lookup table to quickly retrieve documents based on their actual internal data.

First, we'll give our index a name, which in our case will be users_by_name. From there, we can specify one or more terms; a term is a field on the document itself that can be searched. In our case, every user document has a name, and we want to be able to fetch a user by that name. By default, the index will only return the document reference, but another cool thing about Fauna is that you can tell the database which values from the document you want returned. In other words, you can search for a document by the username and then return the email address or whatever other data you need for the UI. That's an extremely useful little feature, and we'll put it to use later in the video.

Now let's go back to the shell and see how we can read a document with this index. Inside of Get, we'll use the Match function to search across an index. It takes the index as its first argument and the search terms as the second; in this case, we just have the username. Go ahead and execute the query, and you should get that same document back as the result.

Now that we've explored the Fauna dashboard a little bit, we're going to switch gears into an actual Node.js application. Our goal is to build a simple REST API with Express that enables us to read and write to the database with our own custom code by making HTTP requests. You can follow along at this point by opening your IDE to an empty directory. You'll need to have Node.js installed on your system, and I'd also recommend installing the Fauna VS Code extension. It allows you to see all of your collections and indexes right inside VS Code, which is much easier than going back and forth to the dashboard.
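Before we leave the shell behind, here's that index-based read from above in one line (assuming the user document created earlier had the name "bob"; substitute any name that exists in your collection):

```
Get(Match(Index("users_by_name"), "bob"))
```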
From this empty directory, we'll initialize a new npm project by running npm init with the -y flag, which gives us a package.json, and then we can install faunadb and express. From there, I'll create a source directory and add an index.js file to it, which is where we'll write all of our source code. Inside this file we'll first require express, which will serve all of our API endpoints, and to interact with the database we'll require faunadb and initialize the FaunaDB client. The client connects your source code to the actual database in the cloud, and to do that it requires a secret key. So let's head back to the Fauna dashboard, find the security tab, then create a new key to authenticate your server. By default, it gives you the choice between an admin key and a server key; for our use case, we just need a server key. However, I would like to point out that you can implement fine-grained access control right here from the Fauna dashboard, which makes it easy to follow the principle of least privilege, where you only grant access to the bare minimum set of operations that are actually needed on the server. In any case, go ahead and copy the server key and paste it into the FaunaDB client.

The next thing we'll do in this file is start up our Express server by calling app.listen on port 5000. Now, if you open the command line and run node pointed at your source directory, it will start up the server. Keep in mind that you'll need to restart it any time the code changes; if that's too annoying for you, check out nodemon to automatically reload whenever there's a change.

Now, if you remember, earlier I mentioned that FQL is a functional language, and we can import those functions and use them in JavaScript from the faunadb query namespace. As you can see, I've imported ten different functions here, and you don't need to know what they all do at this point. The only important thing to know is that we use these functions to interact with the database.
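A minimal sketch of that setup might look like the following. This is my reconstruction rather than the exact code from the video; in particular, reading the secret from an environment variable (FAUNADB_SECRET here is my naming) is a suggestion, rather than pasting the key into the source file.

```javascript
const express = require('express');
const faunadb = require('faunadb');

// FQL functions live in the driver's query namespace. `Function` is
// renamed to `Fn` to avoid colliding with JavaScript's built-in Function.
const {
  Get, Ref, Collection, Create, Select, Match, Index,
  Paginate, Join, Call, Function: Fn,
} = faunadb.query;

// The server key created in the Fauna dashboard's security tab.
const client = new faunadb.Client({ secret: process.env.FAUNADB_SECRET });

const app = express();
app.use(express.json()); // parse JSON request bodies

app.listen(5000, () => console.log('API running on http://localhost:5000'));
```

Running this requires the faunadb and express packages installed via npm and a valid server key.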
Now let's go ahead and set up our first API endpoint to read an individual tweet. We'll use Express to set up a GET endpoint that points to a tweet by its ID in the URL. For the callback, we'll set up an async function, because any query you make to Fauna returns a promise. For this endpoint, we simply want to read a single document, like we did earlier from the shell in the Fauna dashboard, and then use the document itself as the response. We make a request to Fauna by awaiting client.query, which takes an FQL expression as its argument. And remember, it's a functional language, so what you end up doing here is composing multiple JavaScript functions together; conceptually, it's very similar to component composition in a UI framework like React. We start with the Get function to read a single document, then use the Ref function to make a reference to the document we want. Ref takes two arguments: the collection we want to read from, and the document ID, which we'll get from the ID request parameter in the URL. And that's basically all there is to it: the promise resolves with the actual document data, which we then send back as the response. One thing we're not doing here, though, is catching errors, and that would be a good idea. You can wrap the code in a try/catch block, or you can simply chain the catch method onto the end of the promise.

Now let's go ahead and test it out. At this point, we don't have any tweets in the database, so let's go back to Fauna, create a new collection for tweets, and add a document to it with the text "hello world". Once it's created, go ahead and copy the document ID. Now we need to make a request to the URL. In my case, I'm using an HTTP client known as Insomnia, but you could just use curl from the command line or a VS Code extension to make these requests. In any case, we make a GET request to localhost:5000/tweet followed by the tweet ID. When we send it, we should get the document back, including the document data with the text "hello world".
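The endpoint described above might be sketched like this, assuming an Express `app`, a Fauna `client`, and the FQL functions destructured from `faunadb.query`:

```javascript
// GET /tweet/:id — read a single tweet document by its ID.
app.get('/tweet/:id', async (req, res) => {
  try {
    const doc = await client.query(
      Get(Ref(Collection('tweets'), req.params.id))
    );
    res.send(doc);
  } catch (err) {
    // e.g. a not-found error when the ID does not exist
    res.status(404).send(err.message);
  }
});
```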
Now that we know our API is working, we're going to look at some more complex examples of relational data. In this app, a user can have many tweets, but how do we connect a user document to a tweet document? Well, first we need a way to create a tweet, and to handle that, we'll create a POST endpoint that points to the tweet URL, starting with the same async function setup we used in the previous endpoint. Creating a document is really easy: we just use the Create function, point it at the collection we want, and pass it whatever custom data we want to save there. When we construct the data object, we can use regular JavaScript values, or we can use FQL functions to get data from the database. In this case, we want the user field to be a reference to another user document in Fauna. There are many possible ways to do this, but one way is to use the Select function, which selects a specific value from a document, in this case the reference. At this point, we might assume we only have the username, in which case we can use our users_by_name index to get the corresponding user document reference. The main takeaway here is that you can read one or more documents while performing some other operation, like a write to the database, and that allows you to model relational data in a way that's very similar to an SQL database with foreign keys. It's also worth noting that Fauna is 100% ACID compliant, which means that when you run this database transaction, it will be globally consistent; in other words, all future reads will reflect the value of the write, even when distributed across thousands of users around the world. If we go back to our HTTP client, we can make a POST request to localhost:5000/tweet, and we should get the newly created tweet back as the response. And if you go to the Fauna dashboard, you should see it in the tweets collection there as well.
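A sketch of that write, under the same assumptions as before (an Express `app`, a Fauna `client`, and the destructured FQL functions); the hard-coded username 'bob' stands in for whatever authenticated user a real app would have:

```javascript
// POST /tweet — create a tweet owned by a user, looked up by username.
app.post('/tweet', async (req, res) => {
  try {
    const doc = await client.query(
      Create(Collection('tweets'), {
        data: {
          // Select the document ref from the index lookup and store it
          // on the tweet, much like a foreign key.
          user: Select('ref', Get(Match(Index('users_by_name'), 'bob'))),
          text: req.body.text,
        },
      })
    );
    res.send(doc);
  } catch (err) {
    res.status(500).send(err.message);
  }
});
```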
Now that we have multiple tweets in the database, let's create an endpoint that can query multiple tweets based on a username. Before we can make that query, we'll need an index, so let's go back to the Fauna dashboard and create a new index called tweets_by_user. The field we want to index here is the user reference, to give us all the tweets owned by a user. And because we already have the user reference, we don't really need it returned on every single document; we can tell Fauna to only return the tweet text by setting it as a value. That way, we only return the actual data we need for the UI. Making this query is very similar to reading a single document, but instead of Get we use the Paginate function. The difference is that Get returns the first match, whereas Paginate returns the set of documents that match the query. We then point to the index, and use Select to grab the reference to the user.

Now, there is one thing that's bothering me at this point: we have some code duplication going on in our code base. Notice that we're using the same exact Select code for this query as we did in the previous endpoint, and getting a user is something we'll do often, so it's only going to get worse as we move forward. That brings us to yet another awesome feature in Fauna, called functions. Go ahead and copy the duplicated code, then head over to the Fauna dashboard and find the functions tab. A function allows you to extract some FQL code so it can be used on any server or any platform. We'll give our function the name getUser and apply the server role to it. In the function body, we have Query followed by a Lambda function; if you're not familiar with lambda functions, just think of one as an anonymous arrow function in JavaScript. The value x is what we pass into the function. In the function body, we'll add our duplicated code, and instead of the hard-coded username, we'll add a variable that points to x. Then we'll change the name of x to user, just to make our code a little more readable.
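The resulting function body, as best I can reconstruct it from the description above, looks like this in FQL:

```
Query(
  Lambda(
    "user",
    Select("ref", Get(Match(Index("users_by_name"), Var("user"))))
  )
)
```

The Lambda parameter "user" receives whatever argument the caller passes in, and Var("user") reads it inside the body.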
Now we can go back into our Node app and reference this function. We'll need to import Call and Function, and when you import Function, you'll want to rename it using a colon, because it collides with the built-in Function in JavaScript. Next, find the duplicated lines of code we want to replace. We can now call the function by its name, getUser, and pass in whatever custom argument we want, and it will be executed by Fauna remotely. So if you ever need to update this code, it gets updated throughout your entire code base atomically. The bottom line is that functions provide an awesome way to keep your code concise and maintainable.

What we created is a one-to-many relationship, where a user has many tweets and a tweet belongs to a user. What we're going to look at next is the relationship between users, where a user can follow another user and/or be followed by that user; in other words, a user can have, and belong to, many different relationships. It's your typical social-graph data model. First, we'll go to the Fauna dashboard and create a new collection for relationships. Each document in this collection will contain a follower and a followee, whose values are user document references. You can think of this document as an edge in a graph that connects two user accounts together: a one-way edge where the follower points to the followee. Back in our code, we'll set up a new relationship endpoint that creates the relationship document. It creates a document just like we did for a tweet; the only difference is that we point to the relationships collection, and in the data object, the follower and the followee are just user references, which we can get by calling the getUser function we created previously. For now, we'll just hard-code a couple of usernames in here to show that bob follows fireship. We can now go ahead and make a POST request to the relationship endpoint, and we should get a new document back that has a follower and a followee, each with a user reference.
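Sketched out under the same assumptions as the earlier endpoints, with `Fn` as the renamed import of the driver's Function:

```javascript
// POST /relationship — record that one user follows another.
// The usernames are hard-coded for the demo, as in the video.
app.post('/relationship', async (req, res) => {
  try {
    const doc = await client.query(
      Create(Collection('relationships'), {
        data: {
          follower: Call(Fn('getUser'), 'bob'),
          followee: Call(Fn('getUser'), 'fireship'),
        },
      })
    );
    res.send(doc);
  } catch (err) {
    res.status(500).send(err.message);
  }
});
```

Because getUser runs inside Fauna, updating the function in the dashboard updates every caller at once.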
Now that we can establish a relationship, I want to show you how we can get all the accounts that are followed by a given user, and not just the accounts, but also the tweets owned by those accounts, so we can show the end user a feed of tweets from the people they follow. The first step, of course, is to create a new index, which we'll name followers_by_followee. It takes the followee's user reference, then returns all the accounts that that user is following. From there, we'll go back to our code and create a new GET endpoint for the feed. We'll again use the Paginate function in the query, but this time, instead of reading from a single index, we're going to join two indexes together. The Join function takes two arguments. The first is the initial thing you want to query, which in our case is all the followed users; we get that by pointing to our recently created index with a specific username, like bob, which gives us all of the user references followed by bob. We can then join in all the tweets from those users by passing the tweets_by_user index as the second argument. If we go ahead and make that request, you can see that we now get the text of multiple tweets back. And there's a lot more we could do with joins here: instead of returning the entire index, we could use a Lambda function to filter and sort the results with our own custom logic, but I think we'll save that for a future video.

I'm going to go ahead and wrap things up there. Fauna is a truly awesome database, and the more I use it, the more I like it. If I were starting a new project today, Fauna would definitely be at the top of my list for the tech stack. If you want to see more videos on this topic, let me know in the comments, and consider supporting my work with a GitHub sponsorship or by becoming a pro member at fireship.io. Thanks for watching, and I will see you in the next one.
Info
Channel: Fireship
Views: 151,528
Keywords: webdev, app development, lesson, tutorial, db, database, faunadb, fauna, faunadb tutorial, faunadb basics, sql, nosql, sql vs nosql, relational database
Id: 2CipVwISumA
Length: 16min 35sec (995 seconds)
Published: Thu Oct 15 2020